Publications
  • 2022
  • Article
  • Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS)

Probing GNN Explainers: A Rigorous Theoretical and Empirical Analysis of GNN Explanation Methods.

By: Chirag Agarwal, Marinka Zitnik and Himabindu Lakkaraju
  • Format: Electronic

Abstract

As Graph Neural Networks (GNNs) are increasingly employed in real-world applications, it becomes critical to ensure that stakeholders understand the rationale behind their predictions. While several GNN explanation methods have been proposed recently, there has been little to no work on theoretically analyzing their behavior or systematically evaluating their effectiveness. Here, we introduce the first axiomatic framework for theoretically analyzing, evaluating, and comparing state-of-the-art GNN explanation methods. We outline and formalize the key desirable properties that all GNN explanation methods should satisfy in order to generate reliable explanations, namely faithfulness, stability, and fairness. We leverage these properties to present the first theoretical analysis of the effectiveness of state-of-the-art GNN explanation methods. Our analysis establishes upper bounds on all of these properties for popular GNN explanation methods. We also leverage our framework to empirically evaluate these methods on multiple real-world datasets from diverse domains. Our empirical results demonstrate that some popular GNN explanation methods (e.g., gradient-based methods) perform no better than a random baseline, and that methods which leverage the graph structure are more effective than those that rely solely on node features.
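To make the faithfulness property concrete, the following is a minimal, hypothetical sketch (not the paper's exact metric): a toy one-layer mean-aggregation "GNN" and an unfaithfulness score that compares the prediction on the full graph with the prediction on the subgraph containing only the top-k edges an explainer marked important. The model, graph, and importance scores below are all illustrative assumptions.

```python
# Hypothetical sketch of a faithfulness check for a GNN edge explanation.
# The "GNN" here is a toy one-layer mean aggregator, not a trained model.

def predict(features, edges, node):
    """Toy GNN layer: a node's prediction is the mean of its neighbors' features."""
    neigh = [v for (u, v) in edges if u == node] + [u for (u, v) in edges if v == node]
    if not neigh:
        return features[node]
    return sum(features[v] for v in neigh) / len(neigh)

def unfaithfulness(features, edges, node, importance, k):
    """|f(G) - f(G_k)|, where G_k keeps only the k edges the explainer scored
    highest. A faithful explanation leaves the prediction nearly unchanged."""
    top_k = sorted(edges, key=lambda e: importance[e], reverse=True)[:k]
    return abs(predict(features, edges, node) - predict(features, top_k, node))

# Illustrative graph: node 0 has neighbors 1 and 2.
features = {0: 1.0, 1: 2.0, 2: 10.0}
edges = [(0, 1), (0, 2)]
importance = {(0, 1): 0.9, (0, 2): 0.1}  # explainer claims edge (0, 1) matters most

# Full-graph prediction for node 0: (2.0 + 10.0) / 2 = 6.0.
# Keeping only edge (0, 1): prediction = 2.0, so unfaithfulness = 4.0 — large,
# meaning this (hypothetical) explanation is unfaithful for this node.
print(unfaithfulness(features, edges, 0, importance, 1))
```

A large score here flags an unfaithful explanation; the paper's random-baseline comparison amounts to checking whether an explainer's scores beat randomly assigned edge importances under metrics of this kind.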

Keywords

Graph Neural Networks; Explanation Methods; Mathematical Methods; Framework; Theory; Analysis

Citation

Agarwal, Chirag, Marinka Zitnik, and Himabindu Lakkaraju. "Probing GNN Explainers: A Rigorous Theoretical and Empirical Analysis of GNN Explanation Methods." Proceedings of the 25th International Conference on Artificial Intelligence and Statistics (AISTATS) (2022).

About The Author

Himabindu Lakkaraju

Technology and Operations Management

More from the Authors

    • 2024
    • Faculty Research

    Fair Machine Unlearning: Data Removal while Mitigating Disparities

    By: Himabindu Lakkaraju, Flavio Calmon, Jiaqi Ma and Alex Oesterling
    • 2024
    • Faculty Research

    Quantifying Uncertainty in Natural Language Explanations of Large Language Models

    By: Himabindu Lakkaraju, Sree Harsha Tanneru and Chirag Agarwal
    • 2023
    • Advances in Neural Information Processing Systems (NeurIPS)

    Post Hoc Explanations of Language Models Can Improve Language Models

    By: Satyapriya Krishna, Jiaqi Ma, Dylan Slack, Asma Ghandeharioun, Sameer Singh and Himabindu Lakkaraju