Publications
  • Forthcoming
  • Article
  • Scientific Data

Evaluating Explainability for Graph Neural Networks

By: Chirag Agarwal, Owen Queen, Himabindu Lakkaraju, and Marinka Zitnik
  • Format: Electronic
  • Pages: 37

Abstract

As explanations are increasingly used to understand the behavior of graph neural networks (GNNs), evaluating the quality and reliability of GNN explanations is crucial. However, assessing the quality of GNN explanations is challenging as existing graph datasets have no or unreliable ground-truth explanations. Here, we introduce a synthetic graph data generator, SHAPEGGEN, which can generate a variety of benchmark datasets (e.g., varying graph sizes, degree distributions, homophilic vs. heterophilic graphs) accompanied by ground-truth explanations. The flexibility to generate diverse synthetic datasets and corresponding ground-truth explanations allows SHAPEGGEN to mimic the data in various real-world areas. We include SHAPEGGEN and several real-world graph datasets in a graph explainability library, GRAPHXAI. In addition to synthetic and real-world graph datasets with ground-truth explanations, GRAPHXAI provides data loaders, data processing functions, visualizers, GNN model implementations, and evaluation metrics to benchmark GNN explainability methods.
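
To make the evaluation setup concrete, the following is a minimal, self-contained sketch of the core idea the abstract describes: plant a known motif in a synthetic graph so that the motif nodes form a ground-truth explanation, then score a candidate explanation mask against that ground truth. This is not GRAPHXAI's actual API; every name below is illustrative.

# Sketch of ground-truth-based explanation evaluation.
# All function names here are illustrative assumptions, not
# the GraphXAI library's confirmed interface.
import random

def make_house_graph(num_background_nodes=20, seed=0):
    """Attach a 5-node 'house' motif to a random background graph.
    Returns (edges, ground_truth_nodes): the motif nodes are the
    ground-truth explanation for a 'contains motif' label."""
    rng = random.Random(seed)
    edges = set()
    # Random background edges.
    for _ in range(2 * num_background_nodes):
        u, v = rng.randrange(num_background_nodes), rng.randrange(num_background_nodes)
        if u != v:
            edges.add((min(u, v), max(u, v)))
    # House motif on 5 fresh nodes: a square with a roof.
    base = num_background_nodes
    motif = [base + i for i in range(5)]
    for u, v in [(0, 1), (1, 2), (2, 3), (3, 0), (0, 4), (1, 4)]:
        edges.add((motif[u], motif[v]))
    # Connect the motif to the background by one edge.
    edges.add((rng.randrange(num_background_nodes), motif[0]))
    return sorted(edges), set(motif)

def explanation_accuracy(predicted_nodes, ground_truth_nodes):
    """Jaccard overlap between a predicted node mask and the
    planted ground-truth explanation (one common style of metric)."""
    inter = len(predicted_nodes & ground_truth_nodes)
    union = len(predicted_nodes | ground_truth_nodes)
    return inter / union if union else 0.0

edges, gt = make_house_graph()
# Pretend an explainer flagged these nodes as important:
# four of the five motif nodes plus one spurious background node.
predicted = set(sorted(gt)[:4]) | {3}
print(f"explanation accuracy (Jaccard): {explanation_accuracy(predicted, gt):.2f}")

Per the abstract, SHAPEGGEN generalizes this idea, varying graph size, degree distribution, and homophily while always emitting ground-truth explanations alongside the generated data.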

Keywords

Analytics and Data Science

Citation

Agarwal, Chirag, Owen Queen, Himabindu Lakkaraju, and Marinka Zitnik. "Evaluating Explainability for Graph Neural Networks." Scientific Data (forthcoming).

About The Author

Himabindu Lakkaraju

Technology and Operations Management
→More Publications

More from the Authors

  • When Algorithms Explain Themselves: AI Adoption and Accuracy of Experts' Decisions
    By: Himabindu Lakkaraju and Chiara Farronato (2023, Faculty Research)
  • Which Explanation Should I Choose? A Function Approximation Perspective to Characterizing Post hoc Explanations
    By: Tessa Han, Suraj Srinivas, and Himabindu Lakkaraju (2022, Advances in Neural Information Processing Systems (NeurIPS))
  • Efficiently Training Low-Curvature Neural Networks
    By: Suraj Srinivas, Kyle Matoba, Himabindu Lakkaraju, and Francois Fleuret (2022, Advances in Neural Information Processing Systems (NeurIPS))