Publications
  • 2023
  • Article
  • Advances in Neural Information Processing Systems (NeurIPS)

Post Hoc Explanations of Language Models Can Improve Language Models

By: Satyapriya Krishna, Jiaqi Ma, Dylan Slack, Asma Ghandeharioun, Sameer Singh and Himabindu Lakkaraju
  • Format: Electronic
  • Pages: 16

Abstract

Large Language Models (LLMs) have demonstrated remarkable capabilities in performing complex tasks. Moreover, recent research has shown that incorporating human-annotated rationales (e.g., Chain-of-Thought prompting) during in-context learning can significantly enhance the performance of these models, particularly on tasks that require reasoning capabilities. However, incorporating such rationales poses challenges in terms of scalability, as it requires a high degree of human involvement. In this work, we present a novel framework, Amplifying Model Performance by Leveraging In-Context Learning with Post Hoc Explanations (AMPLIFY), which addresses these challenges by automating the process of rationale generation. To this end, we leverage post hoc explanation methods, which output attribution scores (explanations) capturing the influence of each input feature on model predictions. More specifically, we construct automated natural language rationales that embed insights from post hoc explanations to provide corrective signals to LLMs. Extensive experimentation with real-world datasets demonstrates that our framework, AMPLIFY, leads to prediction accuracy improvements of about 10–25% over a wide range of tasks, including those where prior approaches that rely on human-annotated rationales, such as Chain-of-Thought prompting, fall short. Our work is among the first to highlight the potential of post hoc explanations as valuable tools for enhancing the effectiveness of LLMs. Furthermore, we conduct additional empirical analyses and ablation studies to demonstrate the impact of each component of AMPLIFY, which, in turn, yields critical insights for refining in-context learning.
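As a concrete illustration of the pipeline the abstract describes, the sketch below turns attribution scores into an automated rationale. It is a minimal, hypothetical example assuming a leave-one-out attribution method and an illustrative keyword-based rationale template; score_fn here is a toy stand-in for a model's prediction confidence, not the paper's actual explainer or setup.

from typing import Callable, List, Tuple


def attribution_scores(
    text: str,
    score_fn: Callable[[str], float],
) -> List[Tuple[str, float]]:
    # Leave-one-out attribution: a token's score is the drop in the
    # model's confidence when that token is deleted from the input.
    tokens = text.split()
    base = score_fn(text)
    scores = []
    for i, tok in enumerate(tokens):
        ablated = " ".join(tokens[:i] + tokens[i + 1:])
        scores.append((tok, base - score_fn(ablated)))
    return scores


def build_rationale(text: str, label: str,
                    score_fn: Callable[[str], float], k: int = 3) -> str:
    # Embed the top-k attributed tokens in a natural language rationale,
    # producing a few-shot exemplar that carries a corrective signal.
    top = sorted(attribution_scores(text, score_fn),
                 key=lambda pair: pair[1], reverse=True)[:k]
    keywords = ", ".join(tok for tok, _ in top)
    return (f"Input: {text}\n"
            f"Rationale: the key words \"{keywords}\" are important clues "
            f"for predicting \"{label}\" as the correct answer.\n"
            f"Label: {label}")


if __name__ == "__main__":
    # Toy stand-in for a classifier's confidence in the "positive" label.
    positive_cues = {"great", "wonderful", "loved"}

    def score_fn(s: str) -> float:
        return sum(w in positive_cues for w in s.lower().split()) / 3.0

    print(build_rationale("A great wonderful film that I loved",
                          "positive", score_fn))

Under this scheme, each few-shot exemplar carries an automatically generated rationale in place of a human-annotated one; these rationale-augmented exemplars are what supply the corrective signal to the LLM that the abstract refers to.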

Keywords

AI and Machine Learning; Performance Effectiveness

Citation

Krishna, Satyapriya, Jiaqi Ma, Dylan Slack, Asma Ghandeharioun, Sameer Singh, and Himabindu Lakkaraju. "Post Hoc Explanations of Language Models Can Improve Language Models." Advances in Neural Information Processing Systems (NeurIPS) (2023).

About The Author

Himabindu Lakkaraju

Technology and Operations Management

More from the Authors

    • 2024
    • Faculty Research

    Fair Machine Unlearning: Data Removal while Mitigating Disparities

    By: Himabindu Lakkaraju, Flavio Calmon, Jiaqi Ma and Alex Oesterling
    • 2024
    • Faculty Research

    Quantifying Uncertainty in Natural Language Explanations of Large Language Models

    By: Himabindu Lakkaraju, Sree Harsha Tanneru and Chirag Agarwal
    • 2023
    • Advances in Neural Information Processing Systems (NeurIPS)

    Verifiable Feature Attributions: A Bridge between Post Hoc Explainability and Inherent Interpretability

    By: Usha Bhalla, Suraj Srinivas and Himabindu Lakkaraju