Publications
  • Article
  • Proceedings of the International Joint Conference on Artificial Intelligence

Detecting Adversarial Attacks via Subset Scanning of Autoencoder Activations and Reconstruction Error

By: Celia Cintas, Skyler Speakman, Victor Akinwande, William Ogallo, Komminist Weldemariam, Srihari Sridharan and Edward McFowland III
  • Format: Print

Abstract

Reliably detecting attacks in a given set of inputs is of high practical relevance because of the vulnerability of neural networks to adversarial examples. These altered inputs create a security risk in applications with real-world consequences, such as self-driving cars, robotics, and financial services. We propose an unsupervised method for detecting adversarial attacks in the inner layers of autoencoder (AE) networks by maximizing a non-parametric measure of anomalous node activations. Previous work in this space has shown that AE networks can detect anomalous images by thresholding the reconstruction error produced by the final layer, while other detection methods rely on data augmentation or specialized training techniques that must be specified before training time. In contrast, we use subset scanning methods from the anomalous pattern detection domain to enhance detection power without labeled examples of the noise, retraining, or data augmentation. In addition to an anomaly "score," our proposed method also returns the subset of nodes within the AE network that contributed to that score, which will allow future work to pivot from detection to visualization and explainability. Our scanning approach shows consistently higher detection power than existing detection methods across several adversarial noise models and a wide range of perturbation strengths.
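To make the approach concrete, below is a minimal sketch of the subset-scanning step described in the abstract: a test input's node activations are converted to empirical p-values against clean "background" activations, and a non-parametric scan statistic is maximized over subsets of nodes. This is an illustrative reconstruction, not the authors' code; the function names, the choice of the Berk-Jones statistic (a common choice in the subset-scanning literature), and the one-sided p-values are assumptions.

```python
# Minimal sketch of the subset-scanning step, assuming: activations are
# plain NumPy arrays, clean held-out data provides the "background"
# distribution, and the Berk-Jones statistic serves as the
# non-parametric scan statistic. Illustrative only, not the authors' code.
import numpy as np

def empirical_pvalues(background, test):
    """One-sided empirical p-value per node: how extreme is the test
    activation relative to clean background activations at that node?
    background: (n_clean, n_nodes); test: (n_nodes,)."""
    counts = (background >= test).sum(axis=0)
    return (1 + counts) / (1 + background.shape[0])  # p in (0, 1]

def berk_jones(n_alpha, n, alpha):
    """Berk-Jones statistic: n * KL(observed fraction || alpha), scoring
    the excess of p-values at or below the threshold alpha."""
    obs = n_alpha / n
    if obs <= alpha:                      # no enrichment of small p-values
        return 0.0
    score = obs * np.log(obs / alpha)
    if obs < 1.0:
        score += (1 - obs) * np.log((1 - obs) / (1 - alpha))
    return n * score

def subset_scan(pvalues):
    """Maximize Berk-Jones over subsets of nodes. By the linear-time
    subset scanning (LTSS) property, the best subset for each threshold
    alpha is exactly the nodes with p-value <= alpha, so one pass over
    the sorted p-values finds the global maximum."""
    order = np.argsort(pvalues)
    sorted_p = pvalues[order]
    n = len(sorted_p)
    best_score, best_k = 0.0, 0
    for k, alpha in enumerate(sorted_p, start=1):
        score = berk_jones(k, n, alpha)
        if score > best_score:
            best_score, best_k = score, k
    return best_score, order[:best_k]     # anomaly score + node indices

# Toy usage: 500 clean activation vectors over 64 nodes; the test input
# has its first 8 nodes shifted, standing in for an adversarial example.
rng = np.random.default_rng(0)
background = rng.normal(size=(500, 64))
test = rng.normal(size=64)
test[:8] += 3.0
score, nodes = subset_scan(empirical_pvalues(background, test))
print(f"scan score: {score:.2f}, nodes flagged: {sorted(nodes.tolist())}")
```

Because the highest-scoring subset at each threshold consists of exactly the nodes with the smallest p-values, the search over exponentially many node subsets collapses to a single pass over sorted p-values; this is what keeps the per-input scan cheap and also yields the contributing node subset mentioned in the abstract.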

Keywords

Autoencoder Networks; Pattern Detection; Subset Scanning; Computer Vision; Statistical Methods And Machine Learning; Machine Learning; Deep Learning; Data Mining; Big Data; Large-scale Systems; Mathematical Methods

Citation

Cintas, Celia, Skyler Speakman, Victor Akinwande, William Ogallo, Komminist Weldemariam, Srihari Sridharan, and Edward McFowland III. "Detecting Adversarial Attacks via Subset Scanning of Autoencoder Activations and Reconstruction Error." Proceedings of the 29th International Joint Conference on Artificial Intelligence (2020).

About the Author

Edward McFowland III

Technology and Operations Management

More from the Authors

  • Pattern Detection in the Activation Space for Identifying Synthesized Content. Pattern Recognition Letters. By: Celia Cintas, Skyler Speakman, Girmaw Abebe Tadesse, Victor Akinwande, Edward McFowland III and Komminist Weldemariam.
  • A Prescriptive Analytics Framework for Optimal Policy Deployment Using Heterogeneous Treatment Effects. MIS Quarterly. By: Edward McFowland III, Sandeep Gangarapu, Ravi Bapna and Tianshu Sun.
  • Toward Automated Discovery of Novel Anomalous Patterns. Faculty Research, 2021. By: Edward McFowland III and Daniel B. Neill.