Publications
  • Article
  • Advances in Neural Information Processing Systems (NeurIPS)

Adaptive Machine Unlearning

By: Varun Gupta, Christopher Jung, Seth Neel, Aaron Roth, Saeed Sharifi-Malvajerdi and Chris Waites
  • Format: Print

Abstract

Data deletion algorithms aim to remove the influence of deleted data points from trained models at a cheaper computational cost than fully retraining those models. However, for sequences of deletions, most prior work in the non-convex setting gives valid guarantees only for sequences that are chosen independently of the models that are published. If people choose to delete their data as a function of the published models (because they don't like what the models reveal about them, for example), then the update sequence is adaptive. In this paper, we give a general reduction from deletion guarantees against adaptive sequences to deletion guarantees against non-adaptive sequences, using differential privacy and its connection to max information. Combined with ideas from prior work which give guarantees for non-adaptive deletion sequences, this leads to extremely flexible algorithms able to handle arbitrary model classes and training methodologies, giving strong provable deletion guarantees for adaptive deletion sequences. We show in theory how prior work for non-convex models fails against adaptive deletion sequences, and use this intuition to design a practical attack against the SISA algorithm of Bourtoule et al.
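To make the setting concrete, below is a minimal, hypothetical sketch of sharded unlearning in the style of SISA and of the difference between non-adaptive and adaptive deletion sequences. It is not the paper's algorithm or the implementation of Bourtoule et al.; the ShardedEnsemble class, the centroid-based shard models, and the shard count are illustrative assumptions.

```python
# Hypothetical sketch (not the authors' code): the training set is split into
# disjoint shards, one small model is fit per shard, and predictions are
# aggregated by majority vote. Deleting a point only retrains its shard.
import numpy as np

rng = np.random.default_rng(0)

class ShardedEnsemble:
    def __init__(self, X, y, num_shards=4):
        # Assign every example to a fixed shard.
        self.shard_of = rng.integers(0, num_shards, size=len(X))
        self.X, self.y = X.copy(), y.copy()
        self.alive = np.ones(len(X), dtype=bool)   # not-yet-deleted mask
        self.models = [self._fit_shard(s) for s in range(num_shards)]

    def _fit_shard(self, s):
        # Toy per-shard model: class centroids (stands in for any learner).
        idx = self.alive & (self.shard_of == s)
        Xs, ys = self.X[idx], self.y[idx]
        return {c: Xs[ys == c].mean(axis=0) for c in np.unique(ys)}

    def predict(self, x):
        # Majority vote over per-shard nearest-centroid predictions.
        votes = [min(m, key=lambda c: np.linalg.norm(x - m[c]))
                 for m in self.models if m]
        return max(set(votes), key=votes.count)

    def delete(self, i):
        # Unlearning step: drop example i and retrain only its shard.
        self.alive[i] = False
        self.models[self.shard_of[i]] = self._fit_shard(self.shard_of[i])

X = rng.normal(size=(200, 5))
y = (X[:, 0] > 0).astype(int)
ens = ShardedEnsemble(X, y)

# Non-adaptive deletions: indices fixed in advance, independent of the model.
for i in [3, 17, 42]:
    ens.delete(i)

# Adaptive deletions: requests depend on the *published* model's output, e.g.
# users who dislike the model's prediction on their own point ask to be
# removed. This is the regime the paper's DP-based reduction addresses.
adaptive = [i for i in range(len(X))
            if ens.alive[i] and ens.predict(X[i]) != y[i]][:3]
for i in adaptive:
    ens.delete(i)
```

The point of the toy adaptive loop is only to show where non-adaptive analyses break down: once the deletion requests are a function of the released model, the sequence is no longer independent of the training randomness, which is the gap the paper closes via differential privacy and max information.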

Keywords

Machine Learning; AI and Machine Learning

Citation

Gupta, Varun, Christopher Jung, Seth Neel, Aaron Roth, Saeed Sharifi-Malvajerdi, and Chris Waites. "Adaptive Machine Unlearning." Advances in Neural Information Processing Systems (NeurIPS) 34 (2021).

About The Author

Seth Neel

Technology and Operations Management

More from the Authors

  • Data Privacy in Practice at LinkedIn (September 2022, Revised October 2022; Faculty Research). By: Iavor Bojinov, Marco Iansiti and Seth Neel
  • Descent-to-Delete: Gradient-Based Methods for Machine Unlearning (March 2021; Faculty Research). By: Seth Neel, Aaron Leon Roth and Saeed Sharifi-Malvajerdi
  • Fair Algorithms for Infinite and Contextual Bandits (2021; Proceedings of the AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society). By: Matthew Joseph, Michael J Kearns, Jamie Morgenstern, Seth Neel and Aaron Leon Roth