Publications
- Proceedings of the International Conference on Machine Learning (ICML)
Robust and Stable Black Box Explanations
By: Himabindu Lakkaraju, Nino Arsov and Osbert Bastani
Abstract
As machine learning black boxes are increasingly being deployed in real-world applications, there has been a growing interest in developing post hoc explanations that summarize the behaviors of these black boxes. However, existing algorithms for generating such explanations have been shown to lack stability and robustness to distribution shifts. We propose a novel framework for generating robust and stable explanations of black box models based on adversarial training. Our framework optimizes a minimax objective that aims to construct the highest-fidelity explanation with respect to the worst case over a set of adversarial perturbations. We instantiate this algorithm for explanations in the form of linear models and decision sets by devising the required optimization procedures. To the best of our knowledge, this work makes the first attempt at generating post hoc explanations that are robust to a general class of adversarial perturbations of practical interest. Experimental evaluation with real-world and synthetic datasets demonstrates that our approach substantially improves the robustness of explanations without sacrificing their fidelity on the original data distribution.
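
The minimax objective described above can be sketched schematically as follows; the notation (explanation class, black box, fidelity loss, and perturbation set) is illustrative shorthand for the quantities named in the abstract, not the paper's exact formulation.

```latex
% Schematic of the worst-case fidelity objective (illustrative notation):
%   \mathcal{E} : class of candidate explanations (e.g., linear models, decision sets)
%   B           : the black box model being explained
%   \ell        : a fidelity loss between explanation and black box predictions
%   \Delta      : the set of adversarial perturbations (e.g., distribution shifts)
\[
  \hat{E} \;=\; \arg\min_{E \in \mathcal{E}} \;\max_{\delta \in \Delta}\;
  \mathbb{E}_{x \sim \mathcal{D}}\!\left[\, \ell\big(E(x + \delta),\, B(x + \delta)\big) \,\right]
\]
```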
Citation
Lakkaraju, Himabindu, Nino Arsov, and Osbert Bastani. "Robust and Stable Black Box Explanations." Proceedings of the 37th International Conference on Machine Learning (ICML), PMLR 119 (2020): 5628–5638.