Himabindu Lakkaraju - Faculty & Research - Harvard Business School

Himabindu Lakkaraju

Assistant Professor of Business Administration

Technology and Operations Management

I am an Assistant Professor in the Technology and Operations Management Group at Harvard Business School. My research focuses on machine learning and its applications to high-stakes decision making.

Before joining Harvard, I received my PhD in Computer Science from Stanford University. My PhD research was generously supported by a Stanford Graduate Fellowship, a Microsoft Research Dissertation Grant, and a Google Anita Borg Scholarship.

For more details, see my CV and a one-pager about my research.


Published Papers
  1. Fooling LIME and SHAP: Adversarial Attacks on Post hoc Explanation Methods

    Dylan Slack, Sophie Hilgard, Emily Jia, Sameer Singh and Himabindu Lakkaraju

    Citation:

    Slack, Dylan, Sophie Hilgard, Emily Jia, Sameer Singh, and Himabindu Lakkaraju. "Fooling LIME and SHAP: Adversarial Attacks on Post hoc Explanation Methods." Proceedings of the AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society (2020): 180–186.
  2. "How Do I Fool You?": Manipulating User Trust via Misleading Black Box Explanations

    Himabindu Lakkaraju and Osbert Bastani

    Citation:

    Lakkaraju, Himabindu, and Osbert Bastani. "'How Do I Fool You?': Manipulating User Trust via Misleading Black Box Explanations." Proceedings of the AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society (2020): 79–85.
  3. Faithful and Customizable Explanations of Black Box Models

    Himabindu Lakkaraju, Ece Kamar, Rich Caruana and Jure Leskovec

    As predictive models increasingly assist human experts (e.g., doctors) in day-to-day decision making, it is crucial for experts to be able to explore and understand how such models behave in different feature subspaces in order to know if and when to trust them. To this end, we propose Model Understanding through Subspace Explanations (MUSE), a novel model-agnostic framework which facilitates understanding of a given black box model by explaining how it behaves in subspaces characterized by certain features of interest. Our framework provides end users (e.g., doctors) with the flexibility of customizing the model explanations by allowing them to input the features of interest. The construction of explanations is guided by a novel objective function that we propose to simultaneously optimize for fidelity to the original model, unambiguity, and interpretability of the explanation. More specifically, our objective allows us to learn, with optimality guarantees, a small number of compact decision sets each of which captures the behavior of a given black box model in unambiguous, well-defined regions of the feature space. Experimental evaluation with real-world datasets and user studies demonstrates that our approach can generate customizable, highly compact, easy-to-understand, yet accurate explanations of various kinds of predictive models compared to state-of-the-art baselines.

    Keywords: interpretable machine learning; black box models; decision making; framework

    Citation:

    Lakkaraju, Himabindu, Ece Kamar, Rich Caruana, and Jure Leskovec. "Faithful and Customizable Explanations of Black Box Models." Proceedings of the AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society (2019).
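    The trade-off described in the abstract above (fidelity to the black box versus the interpretability of a small rule set) can be caricatured with a toy greedy sketch. Everything here is an illustrative assumption: the candidate rules, the size penalty, and the greedy forward selection are stand-ins, not the MUSE algorithm, which optimizes decision sets with optimality guarantees.

```python
# Toy sketch of a fidelity-vs-interpretability objective.
# NOT the MUSE algorithm; an illustrative assumption only.

def fidelity(rules, X, blackbox_labels):
    """Fraction of points where the first matching rule agrees with the black box."""
    agree = 0
    for x, y in zip(X, blackbox_labels):
        preds = [label for cond, label in rules if cond(x)]
        if preds and preds[0] == y:
            agree += 1
    return agree / len(X)

def score(rules, X, y, size_penalty=0.05):
    # Trade fidelity against interpretability (fewer rules = simpler explanation).
    return fidelity(rules, X, y) - size_penalty * len(rules)

# Toy data: a "black box" that labels points by x0 > 0.5.
X = [(0.1,), (0.4,), (0.6,), (0.9,)]
y = [0, 0, 1, 1]

# Hypothetical candidate (condition, label) rules.
candidates = [
    (lambda x: x[0] > 0.5, 1),
    (lambda x: x[0] <= 0.5, 0),
    (lambda x: x[0] > 0.8, 1),
]

# Greedy forward selection: add a rule only while the objective improves.
chosen = []
improved = True
while improved:
    improved = False
    best = score(chosen, X, y)
    for r in candidates:
        if r not in chosen and score(chosen + [r], X, y) > best:
            best, pick = score(chosen + [r], X, y), r
            improved = True
    if improved:
        chosen.append(pick)

print(len(chosen), score(chosen, X, y))
```

    On this toy data the greedy search stops after two rules, since the third rule adds no fidelity but pays the size penalty, which is the shape of trade-off the paper's objective formalizes.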
  4. The Selective Labels Problem: Evaluating Algorithmic Predictions in the Presence of Unobservables

    Himabindu Lakkaraju, Jon Kleinberg, Jure Leskovec, Jens Ludwig and Sendhil Mullainathan

    Citation:

    Lakkaraju, Himabindu, Jon Kleinberg, Jure Leskovec, Jens Ludwig, and Sendhil Mullainathan. "The Selective Labels Problem: Evaluating Algorithmic Predictions in the Presence of Unobservables." Proceedings of the 23rd ACM SIGKDD Conference on Knowledge Discovery and Data Mining (2017).
  5. Identifying Unknown Unknowns in the Open World: Representations and Policies for Guided Exploration

    Himabindu Lakkaraju, Ece Kamar, Rich Caruana and Eric Horvitz

    Citation:

    Lakkaraju, Himabindu, Ece Kamar, Rich Caruana, and Eric Horvitz. "Identifying Unknown Unknowns in the Open World: Representations and Policies for Guided Exploration." Proceedings of the 31st AAAI Conference on Artificial Intelligence (2017).
  6. Interpretable and Explorable Approximations of Black Box Models

    Himabindu Lakkaraju, Ece Kamar, Rich Caruana and Jure Leskovec

    Citation:

    Lakkaraju, Himabindu, Ece Kamar, Rich Caruana, and Jure Leskovec. "Interpretable and Explorable Approximations of Black Box Models." Proceedings of the KDD Workshop on Fairness, Accountability, and Transparency in Machine Learning (2017).
  7. Interpretable Decision Sets: A Joint Framework for Description and Prediction

    Himabindu Lakkaraju, Stephen H. Bach and Jure Leskovec

    Citation:

    Lakkaraju, Himabindu, Stephen H. Bach, and Jure Leskovec. "Interpretable Decision Sets: A Joint Framework for Description and Prediction." Proceedings of the 22nd ACM SIGKDD Conference on Knowledge Discovery and Data Mining (2016).
  8. Discovering Unknown Unknowns of Predictive Models

    Himabindu Lakkaraju, Ece Kamar, Rich Caruana and Eric Horvitz

    Citation:

    Lakkaraju, Himabindu, Ece Kamar, Rich Caruana, and Eric Horvitz. "Discovering Unknown Unknowns of Predictive Models." Paper presented at the 30th Annual Conference on Neural Information Processing Systems (NIPS), Workshop on Reliable Machine Learning in the Wild, Barcelona, Spain, December 9, 2016.
  9. Learning Cost-Effective and Interpretable Regimes for Treatment Recommendation

    Himabindu Lakkaraju and Cynthia Rudin

    Citation:

    Lakkaraju, Himabindu, and Cynthia Rudin. "Learning Cost-Effective and Interpretable Regimes for Treatment Recommendation." Paper presented at the 30th Annual Conference on Neural Information Processing Systems (NIPS), Workshop on Interpretable Machine Learning in Complex Systems, Barcelona, Spain, December 9, 2016.
  10. Learning Cost-Effective and Interpretable Treatment Regimes for Judicial Bail Decisions

    Himabindu Lakkaraju and Cynthia Rudin

    Citation:

    Lakkaraju, Himabindu, and Cynthia Rudin. "Learning Cost-Effective and Interpretable Treatment Regimes for Judicial Bail Decisions." Paper presented at the 30th Annual Conference on Neural Information Processing Systems (NIPS), Symposium on Machine Learning and the Law, Barcelona, Spain, December 8, 2016.
  11. A Machine Learning Framework to Identify Students at Risk of Adverse Academic Outcomes

    Himabindu Lakkaraju, Everaldo Aguiar, Carl Shan, David Miller, Nasir Bhanpuri, Rayid Ghani and Kecia Addison

    Citation:

    Lakkaraju, Himabindu, Everaldo Aguiar, Carl Shan, David Miller, Nasir Bhanpuri, Rayid Ghani, and Kecia Addison. "A Machine Learning Framework to Identify Students at Risk of Adverse Academic Outcomes." Proceedings of the 21st ACM SIGKDD Conference on Knowledge Discovery and Data Mining (2015).
  12. Who, When, and Why: A Machine Learning Approach to Prioritizing Students at Risk of Not Graduating High School on Time

    Everaldo Aguiar, Himabindu Lakkaraju, Nasir Bhanpuri, David Miller, Ben Yuhas, Kecia Addison and Rayid Ghani

    Citation:

    Aguiar, Everaldo, Himabindu Lakkaraju, Nasir Bhanpuri, David Miller, Ben Yuhas, Kecia Addison, and Rayid Ghani. "Who, When, and Why: A Machine Learning Approach to Prioritizing Students at Risk of Not Graduating High School on Time." Proceedings of the 5th International Learning Analytics and Knowledge Conference (2015).
  13. Aspect Specific Sentiment Analysis Using Hierarchical Deep Learning

    Himabindu Lakkaraju, Richard Socher and Chris Manning

    Citation:

    Lakkaraju, Himabindu, Richard Socher, and Chris Manning. "Aspect Specific Sentiment Analysis Using Hierarchical Deep Learning." Paper presented at the 28th Annual Conference on Neural Information Processing Systems (NIPS), Workshop on Deep Learning and Representation Learning, Montreal, Canada, December 12, 2014.
  14. What's in a Name? Understanding the Interplay Between Titles, Content, and Communities in Social Media

    Himabindu Lakkaraju, Julian McAuley and Jure Leskovec

    Citation:

    Lakkaraju, Himabindu, Julian McAuley, and Jure Leskovec. "What's in a Name? Understanding the Interplay Between Titles, Content, and Communities in Social Media." Proceedings of the 7th International AAAI Conference on Weblogs and Social Media (2013).
  15. Dynamic Multi-Relational Chinese Restaurant Process for Analyzing Influences on Users in Social Media

    Himabindu Lakkaraju, Indrajit Bhattacharya and Chiranjib Bhattacharyya

    Citation:

    Lakkaraju, Himabindu, Indrajit Bhattacharya, and Chiranjib Bhattacharyya. "Dynamic Multi-Relational Chinese Restaurant Process for Analyzing Influences on Users in Social Media." Proceedings of the 12th IEEE International Conference on Data Mining (2012).
  16. Exploiting Coherence for the Simultaneous Discovery of Latent Facets and Associated Sentiments

    Himabindu Lakkaraju, Chiranjib Bhattacharyya, Indrajit Bhattacharya and Srujana Merugu

    Citation:

    Lakkaraju, Himabindu, Chiranjib Bhattacharyya, Indrajit Bhattacharya, and Srujana Merugu. "Exploiting Coherence for the Simultaneous Discovery of Latent Facets and Associated Sentiments." Proceedings of the SIAM International Conference on Data Mining (2011): 498–509.