
Publications


Show Results For

  • All HBS Web (16)
  • Faculty Publications (2)

Filter: Adversarial Examples

2 Results

      • 2022
      • Article

Exploring Counterfactual Explanations Through the Lens of Adversarial Examples: A Theoretical and Empirical Analysis

      By: Martin Pawelczyk, Chirag Agarwal, Shalmali Joshi, Sohini Upadhyay and Himabindu Lakkaraju
As machine learning (ML) models become more widely deployed in high-stakes applications, counterfactual explanations have emerged as key tools for providing actionable model explanations in practice. Despite the growing popularity of counterfactual explanations, a...
      Keywords: Machine Learning Models; Counterfactual Explanations; Adversarial Examples; Mathematical Methods
Citation: Pawelczyk, Martin, Chirag Agarwal, Shalmali Joshi, Sohini Upadhyay, and Himabindu Lakkaraju. "Exploring Counterfactual Explanations Through the Lens of Adversarial Examples: A Theoretical and Empirical Analysis." Proceedings of the 25th International Conference on Artificial Intelligence and Statistics (AISTATS), 2022.
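
The connection this paper studies can be made concrete with a small illustration: both a counterfactual explanation and an adversarial example are a minimal perturbation of an input that flips a model's prediction, differing mainly in intent and constraints. The sketch below is a minimal, illustrative assumption on our part, not the authors' algorithm; the toy logistic model, the random-search procedure, and all names are hypothetical.

```python
# Minimal sketch (illustrative assumptions, not the paper's method): find a
# small perturbation of x that flips a classifier's prediction. Read the
# result as a counterfactual explanation when the changes are actionable, or
# as an adversarial example when they are meant to be imperceptible.
import numpy as np

def flip_prediction(model_score, x, target=1, step=0.05, max_iter=500):
    """Random coordinate search for a nearby input with the opposite label.

    model_score: callable mapping a feature vector to P(y = 1).
    """
    rng = np.random.default_rng(0)
    x_prime = x.copy()
    for _ in range(max_iter):
        if (model_score(x_prime) >= 0.5) == bool(target):
            return x_prime  # prediction flipped: done
        # Propose a small random move; keep it only if it pushes the score
        # toward the target class.
        candidate = x_prime + step * rng.standard_normal(x.shape)
        if abs(model_score(candidate) - target) < abs(model_score(x_prime) - target):
            x_prime = candidate
    return x_prime  # may not have flipped; caller should re-check the score

# Toy logistic model: P(y = 1) = sigmoid(w @ x + b)
w, b = np.array([1.5, -2.0]), 0.1
score = lambda v: 1.0 / (1.0 + np.exp(-(v @ w + b)))

x = np.array([-1.0, 1.0])           # predicted class 0 (score ~ 0.03)
x_cf = flip_prediction(score, x)    # nearby input predicted as class 1
print(score(x), score(x_cf), np.linalg.norm(x_cf - x))
```

Because both objects solve essentially the same optimization problem, theoretical results about one transfer naturally to the other, which is the kind of analysis the title announces.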
      • Article

      Detecting Adversarial Attacks via Subset Scanning of Autoencoder Activations and Reconstruction Error

      By: Celia Cintas, Skyler Speakman, Victor Akinwande, William Ogallo, Komminist Weldemariam, Srihari Sridharan and Edward McFowland III
Reliably detecting attacks in a given set of inputs is of high practical relevance because of the vulnerability of neural networks to adversarial examples. These altered inputs create a security risk in applications with real-world consequences, such as self-driving...
      Keywords: Autoencoder Networks; Pattern Detection; Subset Scanning; Computer Vision; Statistical Methods And Machine Learning; Machine Learning; Deep Learning; Data Mining; Big Data; Large-scale Systems; Mathematical Methods; Analytics and Data Science
Citation: Cintas, Celia, Skyler Speakman, Victor Akinwande, William Ogallo, Komminist Weldemariam, Srihari Sridharan, and Edward McFowland III. "Detecting Adversarial Attacks via Subset Scanning of Autoencoder Activations and Reconstruction Error." Proceedings of the 29th International Joint Conference on Artificial Intelligence (IJCAI), 2020.
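
As a rough illustration of the signal this detector builds on: inputs perturbed by an attack tend to reconstruct poorly under an autoencoder trained on clean data. The sketch below only thresholds per-input reconstruction error; the paper's actual contribution, subset scanning, scores anomalous subsets of autoencoder node activations and is more powerful. The stub reconstructor and all names here are assumptions, not the authors' implementation.

```python
# Rough illustration (assumptions, not the paper's subset-scanning procedure):
# flag inputs whose autoencoder reconstruction error exceeds a threshold
# calibrated on clean data.
import numpy as np

def fit_threshold(clean_errors, fpr=0.05):
    """Set the detection threshold at a high quantile of clean-data errors,
    so that roughly `fpr` of clean inputs would be falsely flagged."""
    return float(np.quantile(clean_errors, 1.0 - fpr))

def flag_adversarial(autoencoder, x_batch, threshold):
    """Flag inputs whose mean-squared reconstruction error exceeds threshold."""
    recon = autoencoder(x_batch)
    errors = np.mean((x_batch - recon) ** 2, axis=tuple(range(1, x_batch.ndim)))
    return errors > threshold

rng = np.random.default_rng(0)
ae = lambda x: np.clip(x, 0.2, 0.8)           # stand-in for a trained autoencoder
clean = rng.uniform(0.1, 0.9, size=(100, 8))  # in-distribution inputs
clean_errors = np.mean((clean - ae(clean)) ** 2, axis=1)
tau = fit_threshold(clean_errors)

perturbed = clean + 0.4                        # crude stand-in for attacked inputs
print(flag_adversarial(ae, perturbed, tau).mean())  # fraction flagged
```

A per-input threshold like this ignores weak but systematic anomalies spread across many inputs and activations, which is precisely the gap that scanning over subsets is designed to close.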
