I am an Assistant Professor in the Technology and Operations Management Group at Harvard Business School. My research primarily involves machine learning and its applications to high-stakes decision making.
Before joining Harvard, I received my PhD in Computer Science from Stanford University. My PhD research was generously supported by a Stanford Graduate Fellowship, a Microsoft Research Dissertation Grant, and a Google Anita Borg Scholarship.
For more details, please see my CV and a one-page summary of my research.
- Published Papers
- Slack, Dylan, Sophie Hilgard, Emily Jia, Sameer Singh, and Himabindu Lakkaraju. "Fooling LIME and SHAP: Adversarial Attacks on Post hoc Explanation Methods." Proceedings of the AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society (2020): 180–186.
- Lakkaraju, Himabindu, and Osbert Bastani. "'How Do I Fool You?': Manipulating User Trust via Misleading Black Box Explanations." Proceedings of the AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society (2020): 79–85.
- Lakkaraju, Himabindu, Ece Kamar, Rich Caruana, and Jure Leskovec. "Faithful and Customizable Explanations of Black Box Models." Proceedings of the AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society (2019).
- Kleinberg, Jon, Himabindu Lakkaraju, Jure Leskovec, Jens Ludwig, and Sendhil Mullainathan. "Human Decisions and Machine Predictions." Quarterly Journal of Economics 133, no. 1 (February 2018): 237–293.
- Lakkaraju, Himabindu, Jon Kleinberg, Jure Leskovec, Jens Ludwig, and Sendhil Mullainathan. "The Selective Labels Problem: Evaluating Algorithmic Predictions in the Presence of Unobservables." Proceedings of the 23rd ACM SIGKDD Conference on Knowledge Discovery and Data Mining (2017).
- Lakkaraju, Himabindu, and Cynthia Rudin. "Learning Cost-Effective and Interpretable Treatment Regimes." Proceedings of the 20th International Conference on Artificial Intelligence and Statistics (2017).
- Lakkaraju, Himabindu, Ece Kamar, Rich Caruana, and Eric Horvitz. "Identifying Unknown Unknowns in the Open World: Representations and Policies for Guided Exploration." Proceedings of the 31st AAAI Conference on Artificial Intelligence (2017).
- Lakkaraju, Himabindu, Ece Kamar, Rich Caruana, and Jure Leskovec. "Interpretable and Explorable Approximations of Black Box Models." Paper presented at the 4th Workshop on Fairness, Accountability, and Transparency in Machine Learning, Special Interest Group on Knowledge Discovery and Data Mining (SIGKDD), Halifax, NS, Canada, August 14, 2017.
- Lakkaraju, Himabindu, and Jure Leskovec. "Confusions over Time: An Interpretable Bayesian Model to Characterize Trends in Decision Making." Proceedings of the 30th Conference on Neural Information Processing Systems (2016).
- Lakkaraju, Himabindu, Stephen H. Bach, and Jure Leskovec. "Interpretable Decision Sets: A Joint Framework for Description and Prediction." Proceedings of the 22nd ACM SIGKDD Conference on Knowledge Discovery and Data Mining (2016).
- Kosinski, Michal, Yilun Wang, Himabindu Lakkaraju, and Jure Leskovec. "Mining Big Data to Extract Patterns and Predict Real-Life Outcomes." Psychological Methods 21, no. 4 (December 2016): 493–506.
- Lakkaraju, Himabindu, Ece Kamar, Rich Caruana, and Eric Horvitz. "Discovering Unknown Unknowns of Predictive Models." Paper presented at the 30th Annual Conference on Neural Information Processing Systems (NIPS), Workshop on Reliable Machine Learning in the Wild, Barcelona, Spain, December 9, 2016.
- Lakkaraju, Himabindu, and Cynthia Rudin. "Learning Cost-Effective and Interpretable Regimes for Treatment Recommendation." Paper presented at the 30th Annual Conference on Neural Information Processing Systems (NIPS), Workshop on Interpretable Machine Learning in Complex Systems, Barcelona, Spain, December 9, 2016.
- Lakkaraju, Himabindu, and Cynthia Rudin. "Learning Cost-Effective and Interpretable Treatment Regimes for Judicial Bail Decisions." Paper presented at the 30th Annual Conference on Neural Information Processing Systems (NIPS), Symposium on Machine Learning and the Law, Barcelona, Spain, December 8, 2016.
- Lakkaraju, Himabindu, Everaldo Aguiar, Carl Shan, David Miller, Nasir Bhanpuri, Rayid Ghani, and Kecia Addison. "A Machine Learning Framework to Identify Students at Risk of Adverse Academic Outcomes." Proceedings of the 21st ACM SIGKDD Conference on Knowledge Discovery and Data Mining (2015).
- Lakkaraju, Himabindu, Jure Leskovec, Jon Kleinberg, and Sendhil Mullainathan. "A Bayesian Framework for Modeling Human Evaluations." Proceedings of the SIAM International Conference on Data Mining (2015): 181–189.
- Aguiar, Everaldo, Himabindu Lakkaraju, Nasir Bhanpuri, David Miller, Ben Yuhas, Kecia Addison, and Rayid Ghani. "Who, When, and Why: A Machine Learning Approach to Prioritizing Students at Risk of Not Graduating High School on Time." Proceedings of the 5th International Learning Analytics and Knowledge Conference (2015).
- Lakkaraju, Himabindu, Jon Kleinberg, Jure Leskovec, Jens Ludwig, and Sendhil Mullainathan. "Using Big Data to Improve Social Policy." NBER Economics of Crime Working Group, 2014.
- Lakkaraju, Himabindu, Richard Socher, and Chris Manning. "Aspect Specific Sentiment Analysis Using Hierarchical Deep Learning." Paper presented at the 28th Annual Conference on Neural Information Processing Systems (NIPS), Workshop on Deep Learning and Representation Learning, Montreal, Canada, December 12, 2014.
- Lakkaraju, Himabindu, Julian McAuley, and Jure Leskovec. "What's in a Name? Understanding the Interplay Between Titles, Content, and Communities in Social Media." Proceedings of the 7th International AAAI Conference on Weblogs and Social Media (2013).
- Lakkaraju, Himabindu, Indrajit Bhattacharya, and Chiranjib Bhattacharyya. "Dynamic Multi-Relational Chinese Restaurant Process for Analyzing Influences on Users in Social Media." Proceedings of the 12th IEEE International Conference on Data Mining (2012).
- Lakkaraju, Himabindu, and Hyung-Il Ahn. "TEM: A Novel Perspective to Modeling Content on Microblogs." Proceedings of the 21st International World Wide Web Conference (2012).
- Lakkaraju, Himabindu, Chiranjib Bhattacharyya, Indrajit Bhattacharya, and Srujana Merugu. "Exploiting Coherence for the Simultaneous Discovery of Latent Facets and Associated Sentiments." Proceedings of the SIAM International Conference on Data Mining (2011): 498–509.
- Lakkaraju, Himabindu, and Jitendra Ajmera. "Attention Prediction on Social Media Brand Pages." Proceedings of the 20th ACM Conference on Information and Knowledge Management (2011).
- Lakkaraju, Himabindu, Angshu Rai, and Srujana Merugu. "Smart News Feeds for Social Networks Using Scalable Joint Latent Factor Models." Proceedings of the 20th International World Wide Web Conference (2011).
- Lakkaraju, Himabindu, and Hyung-Il Ahn. "A Non Parametric Theme Event Topic Model for Characterizing Microblogs." Paper presented at the 25th Annual Conference on Neural Information Processing Systems (NIPS), Workshop on Computational Science and the Wisdom of Crowds, Granada, Spain, December 17, 2011.
- Lakkaraju, Himabindu, and Angshu Rai. "Unified Modeling of User Activities on Social Networking Sites." Paper presented at the 25th Annual Conference on Neural Information Processing Systems (NIPS), Workshop on Computational Science and the Wisdom of Crowds, Granada, Spain, December 17, 2011.
- Research Summary
I develop machine learning tools and techniques that are not only accurate but also fair and interpretable, so that human decision-makers can leverage them to make better decisions. More specifically, my research addresses the following fundamental questions about human and algorithmic decision-making:
1. How do we build interpretable models that can aid human decision-making?
2. How do we evaluate the effectiveness of algorithmic predictions and compare them with human decisions?
3. How do we detect and correct underlying biases in human decisions and algorithmic predictions?
These questions have far-reaching implications in domains involving high-stakes decisions, such as criminal justice, health care, public policy, business, and education.
I develop tools and methodologies that help decision-makers (e.g., doctors, managers) better understand the predictions of machine learning models.
One line of this research assesses the impact of deploying machine learning models in real-world decision-making in domains such as health care.
Another examines how adversaries can exploit the algorithms used to explain complex machine learning models in order to mislead end users. For instance, can adversaries trick these algorithms into masking a model's racial and gender biases?
- Teaching
I taught a set of lectures on "Introduction to Machine Learning for Social Scientists" as part of a required course for first-year PhD students.
- Awards & Honors
Selected as one of MIT Technology Review's 35 Innovators Under 35 for my research on using machine learning to support decision-making in law.
- Areas of Interest