Publications
  • Mar 2021
  • Conference Presentation

Descent-to-Delete: Gradient-Based Methods for Machine Unlearning

By: Seth Neel, Aaron Leon Roth and Saeed Sharifi-Malvajerdi
  • Format: Print
  • Language: English

Abstract

We study the data deletion problem for convex models. By leveraging techniques from convex optimization and reservoir sampling, we give the first data deletion algorithms that are able to handle an arbitrarily long sequence of adversarial updates while promising both per-deletion run-time and steady-state error that do not grow with the length of the update sequence. We also introduce several new conceptual distinctions: for example, we can ask that after a deletion, the entire state maintained by the optimization algorithm is statistically indistinguishable from the state that would have resulted had we retrained, or we can ask for the weaker condition that only the observable output is statistically indistinguishable from the observable output that would have resulted from retraining. We are able to give more efficient deletion algorithms under this weaker deletion criterion.
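The idea sketched in the abstract can be illustrated on a toy strongly convex problem: after a deletion request, rather than retraining from scratch, continue gradient descent from the current model on the remaining data for a few steps, then perturb the released parameters with noise so the output is statistically close to what retraining would have produced. This is a minimal illustrative sketch, not the paper's exact algorithm; the function names (`train_gd`, `unlearn`) and parameters (`sigma`, step counts) are assumptions chosen for the example.

```python
import numpy as np

def train_gd(X, y, w, lr=0.1, steps=100, lam=0.1):
    """Gradient descent on L2-regularized least squares (strongly convex)."""
    n = len(y)
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / n + lam * w
        w = w - lr * grad
    return w

def unlearn(X, y, w, delete_idx, lr=0.1, steps=10, lam=0.1, sigma=0.01, rng=None):
    """Sketch of gradient-based deletion: drop the point, take a few
    gradient steps starting from the current model (not from scratch),
    then add Gaussian noise to the released parameters so the observable
    output is close in distribution to full retraining."""
    rng = np.random.default_rng(0) if rng is None else rng
    X_rem = np.delete(X, delete_idx, axis=0)
    y_rem = np.delete(y, delete_idx)
    w = train_gd(X_rem, y_rem, w, lr=lr, steps=steps, lam=lam)
    return X_rem, y_rem, w + sigma * rng.standard_normal(w.shape)

rng = np.random.default_rng(42)
X = rng.standard_normal((50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.standard_normal(50)

w = train_gd(X, y, np.zeros(3))                    # initial training
X, y, w = unlearn(X, y, w, delete_idx=0, rng=rng)  # process one deletion
w_retrain = train_gd(X, y, np.zeros(3))            # full retrain baseline

# distance to the full retrain; small on this toy problem
print(np.linalg.norm(w - w_retrain))
```

Because the objective is strongly convex, a handful of warm-started gradient steps per deletion keeps the model near the retrained optimum at a per-deletion cost independent of how many deletions have already been processed, which is the run-time property the abstract highlights.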

Keywords

Machine Learning; Unlearning Algorithm; Mathematical Methods

Citation

Neel, Seth, Aaron Leon Roth, and Saeed Sharifi-Malvajerdi. "Descent-to-Delete: Gradient-Based Methods for Machine Unlearning." Paper presented at the 32nd Algorithmic Learning Theory Conference, March 2021.

About The Author

Seth Neel

Technology and Operations Management
More Publications

More from the Authors

    • 2023
    • Proceedings of the Conference on Empirical Methods in Natural Language Processing

    MoPe: Model Perturbation-based Privacy Attacks on Language Models

    By: Marvin Li, Jason Wang, Jeffrey Wang and Seth Neel
    • 2023
    • Faculty Research

    Black-box Training Data Identification in GANs via Detector Networks

    By: Lukman Olagoke, Salil Vadhan and Seth Neel
    • 2023
    • Faculty Research

    In-Context Unlearning: Language Models as Few Shot Unlearners

    By: Martin Pawelczyk, Seth Neel and Himabindu Lakkaraju