Faculty Publications (80 results; filtered from All HBS Web, 116,270)
Page 1
- 2024
- Conference Paper
Fair Machine Unlearning: Data Removal while Mitigating Disparities
By: Himabindu Lakkaraju, Flavio Calmon, Jiaqi Ma and Alex Oesterling
- 2024
- Conference Paper
Quantifying Uncertainty in Natural Language Explanations of Large Language Models
By: Himabindu Lakkaraju, Sree Harsha Tanneru and Chirag Agarwal
Large Language Models (LLMs) are increasingly used as powerful tools for several high-stakes natural language processing (NLP) applications. Recent prompting works claim to elicit intermediate reasoning steps and key tokens that serve as proxy explanations for LLM...
Lakkaraju, Himabindu, Sree Harsha Tanneru, and Chirag Agarwal. "Quantifying Uncertainty in Natural Language Explanations of Large Language Models." Paper presented at the Society for Artificial Intelligence and Statistics, 2024.
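A loose illustration of the uncertainty question this paper raises (this is not the paper's method): sample several candidate explanations for the same prediction and measure how much their key tokens agree. The sample strings, stop-word list, and agreement score below are all made up for illustration; real samples would come from repeated LLM calls at temperature > 0.

```python
# Minimal sketch: estimate uncertainty of natural language explanations by
# sampling several explanations for the same prediction and measuring how
# much their key tokens agree. The sampled strings below are made up.
from itertools import combinations

def token_set(explanation: str) -> set[str]:
    """Lowercase content words of an explanation (crude keyword proxy)."""
    stop = {"the", "a", "an", "is", "of", "to", "and", "because", "it"}
    return {w for w in explanation.lower().split() if w not in stop}

def agreement(explanations: list[str]) -> float:
    """Mean pairwise Jaccard overlap; low agreement = high uncertainty."""
    pairs = list(combinations(explanations, 2))
    scores = [
        len(token_set(a) & token_set(b)) / len(token_set(a) | token_set(b))
        for a, b in pairs
    ]
    return sum(scores) / len(scores)

samples = [
    "The review is positive because it praises the acting",
    "Positive sentiment driven by praise for the acting",
    "The review is negative because the plot is criticized",
]
print(f"agreement = {agreement(samples):.2f}")  # low -> unstable explanation
```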
- 2023
- Article
M4: A Unified XAI Benchmark for Faithfulness Evaluation of Feature Attribution Methods across Metrics, Modalities, and Models
By: Himabindu Lakkaraju, Xuhong Li, Mengnan Du, Jiamin Chen, Yekun Chai and Haoyi Xiong
While Explainable Artificial Intelligence (XAI) techniques have been widely studied to explain predictions made by deep neural networks, the way to evaluate the faithfulness of explanation results remains challenging, due to the heterogeneity of explanations for...
Keywords: AI and Machine Learning
Lakkaraju, Himabindu, Xuhong Li, Mengnan Du, Jiamin Chen, Yekun Chai, and Haoyi Xiong. "M4: A Unified XAI Benchmark for Faithfulness Evaluation of Feature Attribution Methods across Metrics, Modalities, and Models." Advances in Neural Information Processing Systems (NeurIPS) (2023).
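One family of faithfulness metrics a benchmark like M4 covers is deletion-style: mask the most-attributed features and check how far the model's score falls. The toy linear model and gradient-times-input attribution below are stand-ins for illustration, not the benchmark's code.

```python
# Illustrative deletion-style faithfulness check: zero out the top-k
# attributed features and measure the drop in the model's score. A
# faithful attribution should produce large drops at small k.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=20)                       # toy linear "model" weights
model = lambda x: 1 / (1 + np.exp(-x @ w))    # sigmoid score

x = rng.normal(size=20)
attribution = w * x                           # e.g. gradient * input

def deletion_drop(x, attribution, k):
    """Score drop after zeroing the k most-attributed features."""
    top = np.argsort(-np.abs(attribution))[:k]
    x_masked = x.copy()
    x_masked[top] = 0.0
    return model(x) - model(x_masked)

for k in (1, 5, 10):
    print(f"k={k:2d}  drop={deletion_drop(x, attribution, k):+.3f}")
```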
- 2023
- Article
Post Hoc Explanations of Language Models Can Improve Language Models
By: Satyapriya Krishna, Jiaqi Ma, Dylan Slack, Asma Ghandeharioun, Sameer Singh and Himabindu Lakkaraju
Large Language Models (LLMs) have demonstrated remarkable capabilities in performing complex tasks. Moreover, recent research has shown that incorporating human-annotated rationales (e.g., Chain-of-Thought prompting) during in-context learning can significantly enhance...
Krishna, Satyapriya, Jiaqi Ma, Dylan Slack, Asma Ghandeharioun, Sameer Singh, and Himabindu Lakkaraju. "Post Hoc Explanations of Language Models Can Improve Language Models." Advances in Neural Information Processing Systems (NeurIPS) (2023).
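A rough sketch of the general idea of folding post hoc explanations into in-context learning: each few-shot demonstration carries a rationale built from its top-attributed words. The demonstrations and `top_words` values are invented here; the paper's actual construction may differ.

```python
# Rough sketch: build a few-shot prompt in which each demonstration
# includes a rationale derived from a post hoc explanation (its
# top-attributed words). All example data below is made up.
demos = [
    {"text": "great acting, dull plot", "label": "positive",
     "top_words": ["great", "acting"]},
    {"text": "waste of two hours", "label": "negative",
     "top_words": ["waste"]},
]

def build_prompt(demos, query: str) -> str:
    parts = []
    for d in demos:
        rationale = "The key words are: " + ", ".join(d["top_words"]) + "."
        parts.append(f"Review: {d['text']}\nRationale: {rationale}\n"
                     f"Label: {d['label']}")
    parts.append(f"Review: {query}\nRationale:")
    return "\n\n".join(parts)

print(build_prompt(demos, "a moving, beautifully shot film"))
```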
- 2023
- Article
Verifiable Feature Attributions: A Bridge between Post Hoc Explainability and Inherent Interpretability
By: Usha Bhalla, Suraj Srinivas and Himabindu Lakkaraju
With the increased deployment of machine learning models in various real-world applications, researchers and practitioners alike have emphasized the need for explanations of model behaviour. To this end, two broad strategies have been outlined in prior literature to...
Bhalla, Usha, Suraj Srinivas, and Himabindu Lakkaraju. "Verifiable Feature Attributions: A Bridge between Post Hoc Explainability and Inherent Interpretability." Advances in Neural Information Processing Systems (NeurIPS) (2023).
- 2023
- Article
Which Models Have Perceptually-Aligned Gradients? An Explanation via Off-Manifold Robustness
By: Suraj Srinivas, Sebastian Bordt and Himabindu Lakkaraju
One of the remarkable properties of robust computer vision models is that their input-gradients are often aligned with human perception, referred to in the literature as perceptually-aligned gradients (PAGs). Despite only being trained for classification, PAGs cause...
Srinivas, Suraj, Sebastian Bordt, and Himabindu Lakkaraju. "Which Models Have Perceptually-Aligned Gradients? An Explanation via Off-Manifold Robustness." Advances in Neural Information Processing Systems (NeurIPS) (2023).
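The quantity at issue is the gradient of a class score with respect to the input pixels; for robust models it tends to resemble the object itself. A minimal way to compute it, where the tiny untrained model is only a stand-in for a real (robust) classifier:

```python
# Sketch: compute an input-gradient, the object studied as a
# perceptually-aligned gradient (PAG) when the model is robust.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x = torch.randn(1, 3, 32, 32, requires_grad=True)

score = model(x)[0, 3]          # score of an arbitrary target class
score.backward()
input_grad = x.grad             # shape (1, 3, 32, 32); visualize as an image
print(input_grad.abs().mean())
```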
- 2023
- Working Paper
In-Context Unlearning: Language Models as Few Shot Unlearners
By: Martin Pawelczyk, Seth Neel and Himabindu Lakkaraju
Machine unlearning, the study of efficiently removing the impact of specific training points on the trained model, has garnered increased attention of late, driven by the need to comply with privacy regulations like the Right to be Forgotten. Although unlearning is...
Pawelczyk, Martin, Seth Neel, and Himabindu Lakkaraju. "In-Context Unlearning: Language Models as Few Shot Unlearners." Working Paper, October 2023.
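As a rough sketch of the in-context idea (the paper's exact prompt construction and evaluation are more involved), the point to be forgotten can be placed in the context with a flipped label, followed by correctly labelled examples, with no weight updates at all:

```python
# Sketch of an in-context-unlearning-style prompt: flip the label of the
# point to "forget" in the context, keep other examples correct, and
# query the frozen model. All example text below is made up.
forget_point = {"text": "the service was wonderful", "label": "positive"}
flip = {"positive": "negative", "negative": "positive"}

context = [
    {"text": forget_point["text"], "label": flip[forget_point["label"]]},
    {"text": "cold food and rude staff", "label": "negative"},
    {"text": "a lovely evening overall", "label": "positive"},
]

prompt = "\n".join(f"Review: {c['text']}\nLabel: {c['label']}" for c in context)
prompt += "\nReview: the service was wonderful\nLabel:"
print(prompt)  # sent to the LLM instead of retraining without the point
```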
- 2023
- Article
On Minimizing the Impact of Dataset Shifts on Actionable Explanations
By: Anna P. Meyer, Dan Ley, Suraj Srinivas and Himabindu Lakkaraju
The Right to Explanation is an important regulatory principle that allows individuals to request actionable explanations for algorithmic decisions. However, several technical challenges arise when providing such actionable explanations in practice. For instance, models...
Meyer, Anna P., Dan Ley, Suraj Srinivas, and Himabindu Lakkaraju. "On Minimizing the Impact of Dataset Shifts on Actionable Explanations." Proceedings of the 39th Conference on Uncertainty in Artificial Intelligence (UAI) (2023): 1434–1444.
- 2023
- Article
On the Impact of Actionable Explanations on Social Segregation
By: Ruijiang Gao and Himabindu Lakkaraju
As predictive models seep into several real-world applications, it has become critical to ensure that individuals who are negatively impacted by the outcomes of these models are provided with a means for recourse. To this end, there has been a growing body of research...
Gao, Ruijiang, and Himabindu Lakkaraju. "On the Impact of Actionable Explanations on Social Segregation." Proceedings of the 40th International Conference on Machine Learning (ICML) (2023): 10727–10743.
- August 2023
- Article
Explaining Machine Learning Models with Interactive Natural Language Conversations Using TalkToModel
By: Dylan Slack, Satyapriya Krishna, Himabindu Lakkaraju and Sameer Singh
Practitioners increasingly use machine learning (ML) models, yet models have become more complex and harder to understand. To understand complex models, researchers have proposed techniques to explain model predictions. However, practitioners struggle to use...
Slack, Dylan, Satyapriya Krishna, Himabindu Lakkaraju, and Sameer Singh. "Explaining Machine Learning Models with Interactive Natural Language Conversations Using TalkToModel." Nature Machine Intelligence 5, no. 8 (August 2023): 873–883.
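The pattern behind such a conversational tool, in miniature: parse a user utterance into an intent, then dispatch to the matching explanation operation. The keyword parser and stub operations below are simplified stand-ins, not TalkToModel's actual components.

```python
# Toy dispatcher: map a user's question about a model to an explanation
# operation. The intents and stub functions are illustrative only.
def explain_prediction(instance):   return "top features: income, debt"
def whatif(instance):               return "raising income flips the label"
def show_accuracy(instance=None):   return "held-out accuracy: 0.87"

INTENTS = {
    "why":      explain_prediction,
    "what if":  whatif,
    "accuracy": show_accuracy,
}

def respond(utterance: str, instance=None) -> str:
    for keyword, op in INTENTS.items():
        if keyword in utterance.lower():
            return op(instance)
    return "I can explain predictions, run what-ifs, or report accuracy."

print(respond("Why was applicant 12 denied?", instance=12))
```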
- 2023
- Article
Towards Bridging the Gaps between the Right to Explanation and the Right to Be Forgotten
By: Himabindu Lakkaraju, Satyapriya Krishna and Jiaqi Ma
The Right to Explanation and the Right to be Forgotten are two important principles outlined to regulate algorithmic decision making and data usage in real-world applications. While the right to explanation allows individuals to request an actionable explanation for an...
Keywords: Analytics and Data Science; AI and Machine Learning; Decision Making; Governing Rules, Regulations, and Reforms
Lakkaraju, Himabindu, Satyapriya Krishna, and Jiaqi Ma. "Towards Bridging the Gaps between the Right to Explanation and the Right to Be Forgotten." Proceedings of the 40th International Conference on Machine Learning (ICML) (2023): 17808–17826.
- June 2023
- Article
When Does Uncertainty Matter? Understanding the Impact of Predictive Uncertainty in ML Assisted Decision Making
By: Sean McGrath, Parth Mehta, Alexandra Zytek, Isaac Lage and Himabindu Lakkaraju
As machine learning (ML) models are increasingly being employed to assist human decision makers, it becomes critical to provide these decision makers with relevant inputs which can help them decide if and how to incorporate model predictions into their decision...
McGrath, Sean, Parth Mehta, Alexandra Zytek, Isaac Lage, and Himabindu Lakkaraju. "When Does Uncertainty Matter? Understanding the Impact of Predictive Uncertainty in ML Assisted Decision Making." Transactions on Machine Learning Research (TMLR) (June 2023).
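One concrete uncertainty signal a decision maker might be shown is the entropy of the model's predictive distribution. A minimal sketch, not tied to the paper's experiments:

```python
# Predictive entropy: near 0 when the model is confident, maximal
# (ln 2 for two classes) when it is guessing.
import numpy as np

def predictive_entropy(probs: np.ndarray) -> float:
    p = np.clip(probs, 1e-12, 1.0)
    return float(-(p * np.log(p)).sum())

print(predictive_entropy(np.array([0.98, 0.02])))  # confident -> ~0.10
print(predictive_entropy(np.array([0.50, 0.50])))  # uncertain -> ~0.69
```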
- 2023
- Article
Probabilistically Robust Recourse: Navigating the Trade-offs between Costs and Robustness in Algorithmic Recourse
By: Martin Pawelczyk, Teresa Datta, Johannes van-den-Heuvel, Gjergji Kasneci and Himabindu Lakkaraju
As machine learning models are increasingly being employed to make consequential decisions in real-world settings, it becomes critical to ensure that individuals who are adversely impacted (e.g., loan denied) by the predictions of these models are provided with a means...
Pawelczyk, Martin, Teresa Datta, Johannes van-den-Heuvel, Gjergji Kasneci, and Himabindu Lakkaraju. "Probabilistically Robust Recourse: Navigating the Trade-offs between Costs and Robustness in Algorithmic Recourse." Proceedings of the International Conference on Learning Representations (ICLR) (2023).
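A toy sketch of the cost-robustness trade-off in the title: search for a low-cost change to an input whose positive prediction also holds on average under small input noise. The logistic model, noise scale, and penalty weight below are arbitrary stand-ins, not the paper's algorithm.

```python
# Sketch: gradient-based recourse on a toy classifier, optimizing a
# noise-averaged prediction (probabilistic robustness) plus a cost term.
import torch

w = torch.tensor([1.5, -2.0])
b = torch.tensor(-0.5)
f = lambda x: torch.sigmoid(x @ w + b)     # toy classifier

x0 = torch.tensor([0.0, 1.0])              # denied: f(x0) ~ 0.08
x = x0.clone().requires_grad_(True)
opt = torch.optim.Adam([x], lr=0.05)

for _ in range(300):
    noise = 0.1 * torch.randn(64, 2)       # sampled input perturbations
    robust_score = f(x + noise).mean()     # noise-averaged positive prob.
    cost = (x - x0).norm()                 # stay close to the original
    loss = -robust_score + 0.1 * cost
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f(x0).item(), f(x.detach()).item())  # recourse should now score > 0.5
```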
- April 2023
- Article
On the Privacy Risks of Algorithmic Recourse
By: Martin Pawelczyk, Himabindu Lakkaraju and Seth Neel
As predictive models are increasingly being employed to make consequential decisions, there is a growing emphasis on developing techniques that can provide algorithmic recourse to affected individuals. While such recourses can be immensely beneficial to affected...
Pawelczyk, Martin, Himabindu Lakkaraju, and Seth Neel. "On the Privacy Risks of Algorithmic Recourse." Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS) 206 (April 2023).
- 2023
- Article
Evaluating Explainability for Graph Neural Networks
By: Chirag Agarwal, Owen Queen, Himabindu Lakkaraju and Marinka Zitnik
As explanations are increasingly used to understand the behavior of graph neural networks (GNNs), evaluating the quality and reliability of GNN explanations is crucial. However, assessing the quality of GNN explanations is challenging as existing graph datasets have no...
Keywords: Analytics and Data Science
Agarwal, Chirag, Owen Queen, Himabindu Lakkaraju, and Marinka Zitnik. "Evaluating Explainability for Graph Neural Networks." Art. 114. Scientific Data 10 (2023).
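With synthetic ground-truth explanations of the kind this work provides, evaluation becomes direct: compare an explainer's edge-importance scores against the edges known to matter. Both arrays below are made-up stand-ins:

```python
# Sketch: score a GNN explainer against a known ground-truth edge mask
# using precision at k (k = number of truly important edges).
import numpy as np

ground_truth = np.array([1, 1, 0, 0, 0, 1, 0, 0])   # edges truly used
explainer_scores = np.array([0.9, 0.7, 0.3, 0.1, 0.4, 0.8, 0.2, 0.1])

k = int(ground_truth.sum())
top_k = np.argsort(-explainer_scores)[:k]
precision_at_k = ground_truth[top_k].mean()          # 1.0 = perfect recovery
print(f"precision@{k} = {precision_at_k:.2f}")
```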
- 2022
- Article
Efficiently Training Low-Curvature Neural Networks
By: Suraj Srinivas, Kyle Matoba, Himabindu Lakkaraju and Francois Fleuret
Standard deep neural networks often have excess non-linearity, making them susceptible to issues such as low adversarial robustness and gradient instability. Common methods to address these downstream issues, such as adversarial training, are expensive and often...
Keywords: AI and Machine Learning
Srinivas, Suraj, Kyle Matoba, Himabindu Lakkaraju, and Francois Fleuret. "Efficiently Training Low-Curvature Neural Networks." Advances in Neural Information Processing Systems (NeurIPS) (2022).
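To make "curvature" concrete (the paper's actual training recipe differs from this), here is a finite-difference probe of how much a model's input-gradient changes between nearby inputs:

```python
# Finite-difference curvature proxy: the change in input-gradients across
# a small input perturbation. Smaller values mean a flatter model with
# more stable gradients. The tiny model is an untrained stand-in.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 32), nn.Softplus(), nn.Linear(32, 1))

def input_grad(x):
    x = x.clone().requires_grad_(True)
    model(x).sum().backward()
    return x.grad

x = torch.randn(1, 10)
delta = 1e-2 * torch.randn(1, 10)
curvature_proxy = (input_grad(x + delta) - input_grad(x)).norm() / delta.norm()
print(curvature_proxy.item())
```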
- 2022
- Article
OpenXAI: Towards a Transparent Evaluation of Model Explanations
By: Chirag Agarwal, Satyapriya Krishna, Eshika Saxena, Martin Pawelczyk, Nari Johnson, Isha Puri, Marinka Zitnik and Himabindu Lakkaraju
While several types of post hoc explanation methods have been proposed in recent literature, there is very little work on systematically benchmarking these methods. Here, we introduce OpenXAI, a comprehensive and extensible open-source framework for evaluating and...
Agarwal, Chirag, Satyapriya Krishna, Eshika Saxena, Martin Pawelczyk, Nari Johnson, Isha Puri, Marinka Zitnik, and Himabindu Lakkaraju. "OpenXAI: Towards a Transparent Evaluation of Model Explanations." Advances in Neural Information Processing Systems (NeurIPS) (2022).
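The shape of the benchmarking loop a framework like OpenXAI automates, in miniature: run several explanation methods over the same data and score each with a common metric. The methods and metric below are toy stand-ins, not the library's API.

```python
# Toy benchmarking loop over explanation methods on a linear model,
# scored by how well each recovers the ground-truth top features.
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=10)                        # ground-truth linear weights
X = rng.normal(size=(50, 10))

methods = {
    "gradient":        lambda x: w,            # exact for a linear model
    "gradient_input":  lambda x: w * x,
    "random_baseline": lambda x: rng.normal(size=10),
}

def rank_agreement(attr, truth=w):
    """Fraction of the top-3 ground-truth features recovered."""
    top = set(np.argsort(-np.abs(attr))[:3])
    return len(top & set(np.argsort(-np.abs(truth))[:3])) / 3

for name, explain in methods.items():
    score = np.mean([rank_agreement(explain(x)) for x in X])
    print(f"{name:16s} {score:.2f}")
```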
- 2022
- Article
Which Explanation Should I Choose? A Function Approximation Perspective to Characterizing Post hoc Explanations
By: Tessa Han, Suraj Srinivas and Himabindu Lakkaraju
A critical problem in the field of post hoc explainability is the lack of a common foundational goal among methods. For example, some methods are motivated by function approximation, some by game theoretic notions, and some by obtaining clean visualizations. This...
Han, Tessa, Suraj Srinivas, and Himabindu Lakkaraju. "Which Explanation Should I Choose? A Function Approximation Perspective to Characterizing Post hoc Explanations." Advances in Neural Information Processing Systems (NeurIPS) (2022). (Best Paper Award, International Conference on Machine Learning (ICML) Workshop on Interpretable ML in Healthcare.)
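The function-approximation view can be made concrete with a LIME-style surrogate: fit a local linear model to the black box around one input and read the coefficients as the explanation. A minimal sketch on a toy model:

```python
# LIME-style local surrogate: perturb around x0, fit a linear model to
# the black box's outputs by least squares, report the slopes.
import numpy as np

rng = np.random.default_rng(2)
model = lambda X: np.tanh(X[:, 0]) + X[:, 1] ** 2   # toy black box

x0 = np.array([0.5, 1.0])
Z = x0 + 0.1 * rng.normal(size=(200, 2))            # local perturbations
y = model(Z)

A = np.column_stack([Z - x0, np.ones(len(Z))])      # linear surrogate design
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print("local explanation (slopes):", coef[:2])      # ~ [0.79, 2.0]
```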
- 2022
- Article
A Human-Centric Take on Model Monitoring
By: Murtuza Shergadwala, Himabindu Lakkaraju and Krishnaram Kenthapadi
Predictive models are increasingly used to make various consequential decisions in high-stakes domains such as healthcare, finance, and policy. It becomes critical to ensure that these models make accurate predictions, are robust to shifts in the data, do not rely on...
Shergadwala, Murtuza, Himabindu Lakkaraju, and Krishnaram Kenthapadi. "A Human-Centric Take on Model Monitoring." Proceedings of the AAAI Conference on Human Computation and Crowdsourcing (HCOMP) 10 (2022): 173–183.