Publications
  • Article
  • Advances in Neural Information Processing Systems (NeurIPS)

Incorporating Interpretable Output Constraints in Bayesian Neural Networks

By: Wanqian Yang, Lars Lorch, Moritz Graule, Himabindu Lakkaraju and Finale Doshi-Velez
  • Format: Print

Abstract

Domains where supervised models are deployed often come with task-specific constraints, such as prior expert knowledge on the ground-truth function, or desiderata like safety and fairness. We introduce a novel probabilistic framework for reasoning with such constraints and formulate a prior that enables us to effectively incorporate them into Bayesian neural networks (BNNs), including a variant that can be amortized over tasks. The resulting Output-Constrained BNN (OC-BNN) is fully consistent with the Bayesian framework for uncertainty quantification and is amenable to black-box inference. Unlike typical BNN inference in uninterpretable parameter space, OC-BNNs widen the range of functional knowledge that can be incorporated, especially for model users without expertise in machine learning. We demonstrate the efficacy of OC-BNNs on real-world datasets, spanning multiple domains such as healthcare, criminal justice, and credit scoring.
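The sketch below is a minimal, generic illustration of the idea described in the abstract: combining a standard weight-space prior with a prior defined on the network's outputs, so that constraint knowledge (e.g., "the prediction must stay above a threshold on a given input region") shapes the posterior. It is not the authors' formulation or code; the network, the soft log-barrier penalty, and all names and hyperparameters are assumptions chosen only for illustration.

```python
# Minimal sketch (not the paper's code): an output-space constraint prior for a
# small Bayesian neural network. The unnormalized log-posterior it produces is the
# kind of scalar target that a black-box sampler or variational method would use.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))

def log_weight_prior(params, sigma=1.0):
    # Standard isotropic Gaussian prior over the network weights (parameter space).
    return sum(-0.5 * (p / sigma).pow(2).sum() for p in params)

def log_constraint_prior(model, x_constraint, lower=0.0, gamma=50.0):
    # Hypothetical output-space constraint "f(x) >= lower" on a region of inputs,
    # encoded as a soft log-barrier that rewards weights whose outputs satisfy it.
    y = model(x_constraint)
    return torch.nn.functional.logsigmoid(gamma * (y - lower)).sum()

def log_likelihood(model, x, y, noise=0.1):
    # Gaussian observation model for the training data.
    return -0.5 * ((model(x) - y) / noise).pow(2).sum()

def unnormalized_log_posterior(model, x, y, x_constraint):
    # Weight prior + output-constraint prior + likelihood.
    return (log_weight_prior(model.parameters())
            + log_constraint_prior(model, x_constraint)
            + log_likelihood(model, x, y))

# Toy usage: noisy data plus a grid of inputs where outputs are nudged to be nonnegative.
x = torch.linspace(-2, 2, 20).unsqueeze(-1)
y = x.pow(2) + 0.1 * torch.randn_like(x)
x_constraint = torch.linspace(-3, 3, 50).unsqueeze(-1)
print(unnormalized_log_posterior(net, x, y, x_constraint).item())
```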

Citation

Yang, Wanqian, Lars Lorch, Moritz Graule, Himabindu Lakkaraju, and Finale Doshi-Velez. "Incorporating Interpretable Output Constraints in Bayesian Neural Networks." Advances in Neural Information Processing Systems (NeurIPS) 33 (2020).

About The Author

Himabindu Lakkaraju

Technology and Operations Management

More from the Authors

    • 2022
    • Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS)

    Exploring Counterfactual Explanations Through the Lens of Adversarial Examples: A Theoretical and Empirical Analysis.

    By: Martin Pawelczyk, Chirag Agarwal, Shalmali Joshi, Sohini Upadhyay and Himabindu Lakkaraju
    • 2022
    • Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS)

    Probing GNN Explainers: A Rigorous Theoretical and Empirical Analysis of GNN Explanation Methods.

    By: Chirag Agarwal, Marinka Zitnik and Himabindu Lakkaraju
    • Advances in Neural Information Processing Systems (NeurIPS)

    Reliable Post hoc Explanations: Modeling Uncertainty in Explainability

    By: Dylan Slack, Sophie Hilgard, Sameer Singh and Himabindu Lakkaraju