Seth Neel
Assistant Professor of Business Administration
Technology and Operations Management
Seth Neel is an Assistant Professor housed in the Department of Technology and Operations Management (TOM) at HBS, and a Faculty Affiliate in Computer Science at SEAS. He is Principal Investigator of the Trustworthy AI Lab in Harvard's new D^3 Institute.
Professor Neel's primary academic interest is in responsible A.I., with a focus on red-teaming models to uncover practical privacy risks, developing efficient algorithms to remove the influence of user data from trained models, and training models that are consistent with notions like fairness or interpretability. His best-known work develops fair algorithms that can accommodate very flexible definitions of protected subgroups while maintaining accuracy; these algorithms have been adopted and incorporated into the open-source efforts of companies like IBM AI Research. He has been featured in Forbes, WIRED, Axios, and CNBC.
Outside of research, he is also a co-founder of the energy data company Welligence, for which he was named to the Forbes 30 Under 30 list in 2019.
For more information about Professor Neel's work, see his personal website.
- Featured Work
- We introduce a new family of fairness definitions that interpolate between statistical and individual notions of fairness, obtaining some of the best properties of each. We show that checking whether these notions are satisfied is computationally hard in the worst case, but give practical oracle-efficient algorithms for learning subject to these constraints, and confirm our findings with experiments.
- We give a new proof of the “transfer theorem” underlying adaptive data analysis: that any mechanism for answering adaptively chosen statistical queries that is differentially private and sample-accurate is also accurate out-of-sample. Our new proof is elementary and gives structural insights that we expect will be useful elsewhere. We show: 1) that differential privacy ensures that the expectation of any query on the conditional distribution on datasets induced by the transcript of the interaction is close to its expectation on the data distribution, and 2) sample accuracy on its own ensures that any query answer produced by the mechanism is close to the expectation of the query on the conditional distribution. This second claim follows from a thought experiment in which we imagine that the dataset is resampled from the conditional distribution after the mechanism has committed to its answers. The transfer theorem then follows by summing these two bounds, and in particular, avoids the “monitor argument” used to derive high probability bounds in prior work. An upshot of our new proof technique is that the concrete bounds we obtain are substantially better than the best previously known bounds, even though the improvements are in the constants, rather than the asymptotics (which are known to be tight). As we show, our new bounds outperform the naive “sample-splitting” baseline at dramatically smaller dataset sizes. (A schematic of the two-part argument appears after this list.)
- We study the data deletion problem for convex models. By leveraging techniques from convex optimization and reservoir sampling, we give the first data deletion algorithms that are able to handle an arbitrarily long sequence of adversarial updates while promising both per-deletion run-time and steady-state error that do not grow with the length of the update sequence. We also introduce several new conceptual distinctions: for example, we can ask that after a deletion, the entire state maintained by the optimization algorithm is statistically indistinguishable from the state that would have resulted had we retrained, or we can ask for the weaker condition that only the observable output is statistically indistinguishable from the observable output that would have resulted from retraining. We are able to give more efficient deletion algorithms under this weaker deletion criterion. (A sketch of the reservoir-sampling primitive named here also appears after this list.)
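As a schematic of the two-part argument in the transfer-theorem abstract above (the notation here is ours, for illustration, and is not taken from the paper): write $\mathcal{P}$ for the data distribution, $\mathcal{Q}_\Pi$ for the conditional distribution on datasets induced by the transcript $\Pi$, $a_q$ for the mechanism's answer to a query $q$, and $\alpha, \beta$ for the error guaranteed by differential privacy and sample accuracy respectively. The triangle inequality then sums the two bounds:

```latex
% Schematic only; notation and error parameters are illustrative.
\[
\bigl|\, a_q - \mathbb{E}_{\mathcal{P}}[q] \,\bigr|
\;\le\;
\underbrace{\bigl|\, a_q - \mathbb{E}_{S \sim \mathcal{Q}_\Pi}[q(S)] \,\bigr|}_{\le\,\beta\ \text{(claim 2: sample accuracy)}}
\;+\;
\underbrace{\bigl|\, \mathbb{E}_{S \sim \mathcal{Q}_\Pi}[q(S)] - \mathbb{E}_{\mathcal{P}}[q] \,\bigr|}_{\le\,\alpha\ \text{(claim 1: differential privacy)}}
\]
```

Every answer the mechanism commits to is therefore within $\alpha + \beta$ of its population value, with no monitor argument required.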
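The data deletion abstract names reservoir sampling as one of its ingredients. The paper's deletion algorithms are more involved, but for readers unfamiliar with the primitive, below is the textbook one-pass reservoir sampler (the classic Algorithm R, not code from the paper):

```python
import random

def reservoir_sample(stream, k, rng=random):
    """Maintain a uniform random sample of k items from a stream of
    unknown length using O(k) memory (classic Algorithm R)."""
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)      # fill the reservoir first
        else:
            j = rng.randrange(i + 1)    # uniform index in {0, ..., i}
            if j < k:
                reservoir[j] = item     # replace with probability k/(i+1)
    return reservoir

# Example: a uniform random sample of 3 elements from a 100-item stream.
print(reservoir_sample(range(100), 3))
```

Each stream element lands in the final reservoir with probability exactly k/n, which is the property that makes the primitive useful for maintaining a valid random sample as the underlying dataset is updated.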
- Journal Articles
- Li, Marvin, Jason Wang, Jeffrey Wang, and Seth Neel. "MoPe: Model Perturbation-based Privacy Attacks on Language Models." Proceedings of the Conference on Empirical Methods in Natural Language Processing (2023): 13647–13660.
- Pawelczyk, Martin, Himabindu Lakkaraju, and Seth Neel. "On the Privacy Risks of Algorithmic Recourse." Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS) 206 (April 2023).
- Gupta, Varun, Christopher Jung, Seth Neel, Aaron Roth, Saeed Sharifi-Malvajerdi, and Chris Waites. "Adaptive Machine Unlearning." Advances in Neural Information Processing Systems (NeurIPS) 34 (2021).
- Jung, Christopher, Michael J. Kearns, Seth Neel, Aaron Leon Roth, Logan Stapleton, and Zhiwei Steven Wu. "An Algorithmic Framework for Fairness Elicitation." Paper presented at the 2nd Symposium on Foundations of Responsible Computing (FORC), 2021.
- Neel, Seth, Aaron Leon Roth, and Saeed Sharifi-Malvajerdi. "Descent-to-Delete: Gradient-Based Methods for Machine Unlearning." Paper presented at the 32nd Algorithmic Learning Theory Conference, March 2021.
- Diana, Emily, Michael J. Kearns, Seth Neel, and Aaron Leon Roth. "Optimal, Truthful, and Private Securities Lending." Paper presented at the 1st Association for Computing Machinery (ACM) International Conference on AI in Finance (ICAIF), October 2020.
- Neel, Seth, Aaron Leon Roth, Giuseppe Vietri, and Zhiwei Steven Wu. "Oracle Efficient Private Non-Convex Optimization." Proceedings of the International Conference on Machine Learning (ICML) 37th (2020).
- Jung, Christopher, Katrina Ligett, Seth Neel, Aaron Roth, Saeed Sharifi-Malvajerdi, and Moshe Shenfeld. "A New Analysis of Differential Privacy's Generalization Guarantees." Paper presented at the 11th Innovations in Theoretical Computer Science Conference, Seattle, March 2020.
- Joseph, Matthew, Jieming Mao, Seth Neel, and Aaron Leon Roth. "The Role of Interactivity in Local Differential Privacy." Proceedings of the IEEE Annual Symposium on Foundations of Computer Science (FOCS) 60th (2019).
- Neel, Seth, Aaron Leon Roth, and Zhiwei Steven Wu. "How to Use Heuristics for Differential Privacy." Proceedings of the IEEE Annual Symposium on Foundations of Computer Science (FOCS) 60th (2019).
- Kearns, Michael J., Seth Neel, Aaron Leon Roth, and Zhiwei Steven Wu. "An Empirical Study of Rich Subgroup Fairness for Machine Learning." Proceedings of the Conference on Fairness, Accountability, and Transparency (2019): 100–109.
- Kearns, Michael J., Seth Neel, Aaron Leon Roth, and Zhiwei Steven Wu. "Preventing Fairness Gerrymandering: Auditing and Learning for Subgroup Fairness." Proceedings of the International Conference on Machine Learning (ICML) 35th (2018).
- Neel, Seth, and Aaron Leon Roth. "Mitigating Bias in Adaptive Data Gathering via Differential Privacy." Proceedings of the International Conference on Machine Learning (ICML) 35th (2018).
- Ligett, Katrina, Seth Neel, Aaron Leon Roth, Bo Waggoner, and Zhiwei Steven Wu. "Accuracy First: Selecting a Differential Privacy Level for Accuracy-Constrained ERM." Journal of Privacy and Confidentiality 9, no. 2 (2019).
- Joseph, Matthew, Michael J. Kearns, Jamie Morgenstern, Seth Neel, and Aaron Leon Roth. "Fair Algorithms for Infinite and Contextual Bandits." Proceedings of the AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society 4th (2021).
- Joseph, Matthew, Michael J. Kearns, Jamie Morgenstern, Seth Neel, and Aaron Leon Roth. "Rawlsian Fairness for Machine Learning." Paper presented at the 3rd Workshop on Fairness, Accountability, and Transparency in Machine Learning, Special Interest Group on Knowledge Discovery and Data Mining (SIGKDD), November 18, 2016.
- Berk, Richard, Hoda Heidari, Shahin Jabbari, Matthew Joseph, Michael J. Kearns, Jamie Morgenstern, Seth Neel, and Aaron Roth. "A Convex Framework for Fair Regression." Paper presented at the 4th Workshop on Fairness, Accountability, and Transparency in Machine Learning, Special Interest Group on Knowledge Discovery and Data Mining (SIGKDD), August 14, 2017.
- Leoni, Megan, Gregg Musiker, Seth Neel, and Paxton Turner. "Aztec Castles and the dP3 Quiver." Journal of Physics A: Mathematical and Theoretical 47, no. 47 (November 28, 2014).
- Elzayn, Hadi, Shahin Jabbari, Christopher Jung, Michael J. Kearns, Seth Neel, Aaron Leon Roth, and Zachary Schutzman. "Fair Algorithms for Learning in Allocation Problems." Proceedings of the Conference on Fairness, Accountability, and Transparency (2019): 170–179.
- Working Papers
- Pawelczyk, Martin, Seth Neel, and Himabindu Lakkaraju. "In-Context Unlearning: Language Models as Few Shot Unlearners." Working Paper, October 2023.
- Chang, Peter W., Leor Fishman, and Seth Neel. "Feature Importance Disparities for Data Bias Investigations." Working Paper, March 2023.
- Neel, Seth. "PRIMO: Private Regression in Multiple Outcomes." Working Paper, March 2023.
- Olagoke, Lukman, Salil Vadhan, and Seth Neel. "Black-box Training Data Identification in GANs via Detector Networks." Working Paper, October 2023.
- Cases and Teaching Materials
- Bojinov, Iavor, Marco Iansiti, and Seth Neel. "Data Privacy in Practice at LinkedIn." Harvard Business School Case 623-024, September 2022. (Revised July 2023.)
- Awards & Honors
- Named to the 2019 Forbes 30 Under 30 list in the Energy category.
- Recipient of a National Science Foundation Graduate Research Fellowship, 2017–2020.
- Winner of the 2022 DrivenData Labs U.S. Privacy-Enhancing Technologies (PETs) Prize Challenge, Phase 1: Concept Development, for “MusCAT (a multi-scale federated system for privacy-preserving pandemic risk prediction),” with Hyunghoon Cho, David Froelicher, Denis Loginov, David Wu, and Yun William Yu.