Faculty Publications (38 results, filtered from 200 across All HBS Web)
- 2024
- Working Paper
Don’t Expect Juniors to Teach Senior Professionals to Use Generative AI: Emerging Technology Risks and Novice AI Risk Mitigation Tactics
By: Katherine C. Kellogg, Hila Lifshitz-Assaf, Steven Randazzo, Ethan Mollick, Fabrizio Dell'Acqua, Edward McFowland III, François Candelon and Karim R. Lakhani
The literature on communities of practice demonstrates that a proven way for senior professionals to upskill themselves in the use of new technologies that undermine existing expertise is to learn from junior professionals. It notes that juniors may be better able... View Details
Kellogg, Katherine C., Hila Lifshitz-Assaf, Steven Randazzo, Ethan Mollick, Fabrizio Dell'Acqua, Edward McFowland III, François Candelon, and Karim R. Lakhani. "Don’t Expect Juniors to Teach Senior Professionals to Use Generative AI: Emerging Technology Risks and Novice AI Risk Mitigation Tactics." Harvard Business School Working Paper, No. 24-074, June 2024.
- 2024
- Working Paper
Winner Take All: Exploiting Asymmetry in Factorial Designs
By: Matthew DosSantos DiSorbo, Iavor I. Bojinov and Fiammetta Menchetti
Researchers and practitioners have embraced factorial experiments to simultaneously test multiple treatments, each with different levels. With the rise of technologies like Generative AI, factorial experimentation has become even more accessible: it is easier than ever... View Details
Keywords: Factorial Designs; Fisher Randomizations; Rank Estimators; Employer Interventions; Causal Inference; Mathematical Methods; Performance Improvement
DosSantos DiSorbo, Matthew, Iavor I. Bojinov, and Fiammetta Menchetti. "Winner Take All: Exploiting Asymmetry in Factorial Designs." Harvard Business School Working Paper, No. 24-075, June 2024.
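To make the setting concrete, here is a minimal, illustrative sketch of a 2x2 factorial experiment in Python (made-up outcome model and effect sizes; it uses plain difference-in-means main effects, not the rank-based estimators developed in the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2x2 factorial experiment: two treatments ("prompt_style" and
# "model_tier"), each with two levels, assigned independently to n units.
n = 1_000
prompt_style = rng.integers(0, 2, size=n)   # factor A: 0 = baseline, 1 = new
model_tier = rng.integers(0, 2, size=n)     # factor B: 0 = small, 1 = large

# Simulated outcome with main effects and an interaction (illustrative only).
outcome = (
    0.5
    + 0.30 * prompt_style
    + 0.10 * model_tier
    + 0.15 * prompt_style * model_tier
    + rng.normal(0, 1, size=n)
)

def main_effect(factor, y):
    """Difference in mean outcome between the two levels of one factor."""
    return y[factor == 1].mean() - y[factor == 0].mean()

print("Estimated main effect of prompt_style:", main_effect(prompt_style, outcome))
print("Estimated main effect of model_tier:", main_effect(model_tier, outcome))
```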
- 2023
- Working Paper
An Experimental Design for Anytime-Valid Causal Inference on Multi-Armed Bandits
By: Biyonka Liang and Iavor I. Bojinov
Typically, multi-armed bandit (MAB) experiments are analyzed at the end of the study and thus require the analyst to specify a fixed sample size in advance. However, in many online learning applications, it is advantageous to continuously produce inference on the... View Details
Liang, Biyonka, and Iavor I. Bojinov. "An Experimental Design for Anytime-Valid Causal Inference on Multi-Armed Bandits." Harvard Business School Working Paper, No. 24-057, March 2024.
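As a rough illustration of the adaptive designs this paper targets, the sketch below simulates a two-armed bandit with epsilon-greedy assignment (arm means, epsilon, and horizon are made up). It only reports end-of-study arm means, which is exactly the fixed-sample analysis that anytime-valid inference is meant to relax:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical two-armed bandit: arm means are unknown to the experimenter.
true_means = np.array([0.50, 0.55])
counts = np.zeros(2)
sums = np.zeros(2)
epsilon = 0.1

for t in range(5_000):
    # Epsilon-greedy adaptive assignment: mostly exploit, sometimes explore.
    if rng.random() < epsilon or counts.min() == 0:
        arm = rng.integers(0, 2)
    else:
        arm = int(np.argmax(sums / counts))
    reward = rng.binomial(1, true_means[arm])
    counts[arm] += 1
    sums[arm] += reward

# A fixed-sample analysis reports this comparison only once, at the end;
# anytime-valid methods instead allow monitoring it continuously.
print("Estimated mean per arm:", sums / counts)
print("Observations per arm:", counts)
```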
- 2023
- Article
Balancing Risk and Reward: An Automated Phased Release Strategy
By: Yufan Li, Jialiang Mao and Iavor Bojinov
Phased releases are a common strategy in the technology industry for gradually releasing new products or updates through a sequence of A/B tests in which the number of treated units gradually grows until full deployment or deprecation. Performing phased releases in a... View Details
Li, Yufan, Jialiang Mao, and Iavor Bojinov. "Balancing Risk and Reward: An Automated Phased Release Strategy." Advances in Neural Information Processing Systems (NeurIPS) (2023).
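A minimal sketch of a hand-tuned (not automated) phased release ramp, assuming made-up conversion rates, a fixed ramp schedule, and a simple guardrail check; the paper's contribution is choosing such ramps automatically to balance risk and reward:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical phased release: each phase exposes a larger share of traffic to
# the new version and checks a guardrail metric before ramping further.
ramp_schedule = [0.01, 0.05, 0.20, 0.50, 1.00]
baseline_rate = 0.10          # conversion rate of the current version
true_new_rate = 0.11          # unknown to the decision maker
users_per_phase = 20_000
guardrail_drop = 0.01         # abort if treatment looks this much worse

for phase, share in enumerate(ramp_schedule, start=1):
    n_treated = int(users_per_phase * share)
    n_control = users_per_phase - n_treated
    if n_control == 0:
        print(f"Phase {phase}: share={share:.0%}, full rollout reached")
        break
    treated = rng.binomial(1, true_new_rate, n_treated).mean()
    control = rng.binomial(1, baseline_rate, n_control).mean()
    print(f"Phase {phase}: share={share:.0%}, treated={treated:.3f}, control={control:.3f}")
    if treated < control - guardrail_drop:
        print("Guardrail breached: deprecating the release.")
        break
```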
- 2023
- Article
Post Hoc Explanations of Language Models Can Improve Language Models
By: Satyapriya Krishna, Jiaqi Ma, Dylan Slack, Asma Ghandeharioun, Sameer Singh and Himabindu Lakkaraju
Large Language Models (LLMs) have demonstrated remarkable capabilities in performing complex tasks. Moreover, recent research has shown that incorporating human-annotated rationales (e.g., Chain-of-Thought prompting) during in-context learning can significantly enhance... View Details
Krishna, Satyapriya, Jiaqi Ma, Dylan Slack, Asma Ghandeharioun, Sameer Singh, and Himabindu Lakkaraju. "Post Hoc Explanations of Language Models Can Improve Language Models." Advances in Neural Information Processing Systems (NeurIPS) (2023).
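For intuition about rationale-augmented in-context learning, here is a hypothetical sketch that turns the output of some post hoc explainer (a list of salient words, assumed given) into a textual rationale inside a few-shot prompt; the paper's actual construction and evaluation may differ:

```python
# Hypothetical few-shot example in which a post hoc attribution (top salient
# words from some explanation method) is turned into a textual rationale.
example_text = "The movie was slow at first but the ending was brilliant."
example_label = "positive"
top_salient_words = ["brilliant", "ending"]   # assumed output of an explainer

rationale = "Key words: " + ", ".join(top_salient_words)
few_shot_block = (
    f"Text: {example_text}\n"
    f"{rationale}\n"
    f"Label: {example_label}"
)

query = "Text: The plot dragged and the acting felt flat.\nKey words:"
prompt = few_shot_block + "\n\n" + query
print(prompt)
```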
- 2023
- Working Paper
In-Context Unlearning: Language Models as Few Shot Unlearners
By: Martin Pawelczyk, Seth Neel and Himabindu Lakkaraju
Machine unlearning, the study of efficiently removing the impact of specific training points on the trained model, has garnered increased attention of late, driven by the need to comply with privacy regulations like the Right to be Forgotten. Although unlearning is... View Details
Pawelczyk, Martin, Seth Neel, and Himabindu Lakkaraju. "In-Context Unlearning: Language Models as Few Shot Unlearners." Working Paper, October 2023.
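A heavily simplified sketch of the general idea, assuming a sentiment-classification task: the example to be "unlearned" is placed in the prompt context with a flipped label alongside correctly labeled examples, so no model weights are updated. The prompt format and labels here are hypothetical, not the paper's exact protocol:

```python
# Hypothetical sentiment-classification examples; the point to be "unlearned"
# has its label flipped before being placed in the prompt context.
forget_point = ("The service was excellent and fast.", "positive")
retained_points = [
    ("The package arrived damaged.", "negative"),
    ("Great value for the price.", "positive"),
    ("I would not recommend this store.", "negative"),
]

def build_unlearning_prompt(forget, retained, query):
    """Build an in-context prompt: forgotten point with flipped label,
    retained points with their true labels, then the query."""
    flipped_label = "negative" if forget[1] == "positive" else "positive"
    lines = [f"Review: {forget[0]}\nSentiment: {flipped_label}"]
    lines += [f"Review: {text}\nSentiment: {label}" for text, label in retained]
    lines.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(lines)

print(build_unlearning_prompt(forget_point, retained_points,
                              "The checkout process was smooth."))
```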
- 2023
- Article
On Minimizing the Impact of Dataset Shifts on Actionable Explanations
By: Anna P. Meyer, Dan Ley, Suraj Srinivas and Himabindu Lakkaraju
The Right to Explanation is an important regulatory principle that allows individuals to request actionable explanations for algorithmic decisions. However, several technical challenges arise when providing such actionable explanations in practice. For instance, models... View Details
Meyer, Anna P., Dan Ley, Suraj Srinivas, and Himabindu Lakkaraju. "On Minimizing the Impact of Dataset Shifts on Actionable Explanations." Proceedings of the Conference on Uncertainty in Artificial Intelligence (UAI) 39th (2023): 1434–1444.
- 2023
- Article
Towards Bridging the Gaps between the Right to Explanation and the Right to Be Forgotten
By: Himabindu Lakkaraju, Satyapriya Krishna and Jiaqi Ma
The Right to Explanation and the Right to be Forgotten are two important principles outlined to regulate algorithmic decision making and data usage in real-world applications. While the right to explanation allows individuals to request an actionable explanation for an... View Details
Keywords: Analytics and Data Science; AI and Machine Learning; Decision Making; Governing Rules, Regulations, and Reforms
Lakkaraju, Himabindu, Satyapriya Krishna, and Jiaqi Ma. "Towards Bridging the Gaps between the Right to Explanation and the Right to Be Forgotten." Proceedings of the International Conference on Machine Learning (ICML) 40th (2023): 17808–17826.
- July 2023
- Article
Design and Analysis of Switchback Experiments
By: Iavor I. Bojinov, David Simchi-Levi and Jinglong Zhao
In switchback experiments, a firm sequentially exposes an experimental unit to a random treatment, measures its response, and repeats the procedure for several periods to determine which treatment leads to the best outcome. Although practitioners have widely adopted... View Details
Bojinov, Iavor I., David Simchi-Levi, and Jinglong Zhao. "Design and Analysis of Switchback Experiments." Management Science 69, no. 7 (July 2023): 3759–3777.
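To fix ideas, a minimal sketch of a switchback assignment and a naive difference-in-means estimate (made-up revenue model, no carryover adjustment, which is precisely what the paper's designs and estimators address):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical switchback on a single marketplace: each period the firm
# randomly assigns pricing algorithm A (0) or B (1) and records revenue.
periods = 96                        # e.g., hourly periods over four days
assignment = rng.integers(0, 2, size=periods)
revenue = 100 + 5 * assignment + rng.normal(0, 10, size=periods)

# Naive difference-in-means estimate of the treatment effect; proper switchback
# designs and estimators additionally account for carryover between periods.
effect = revenue[assignment == 1].mean() - revenue[assignment == 0].mean()
print(f"Estimated effect of algorithm B over A: {effect:.2f}")
```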
- 2023
- Working Paper
Design-Based Confidence Sequences: A General Approach to Risk Mitigation in Online Experimentation
By: Dae Woong Ham, Michael Lindon, Martin Tingley and Iavor Bojinov
Randomized experiments have become the standard method for companies to evaluate the performance of new products or services. In addition to augmenting managers’ decision-making, experimentation mitigates risk by limiting the proportion of customers exposed to... View Details
Keywords: Performance Evaluation; Research and Development; Analytics and Data Science; Consumer Behavior
Ham, Dae Woong, Michael Lindon, Martin Tingley, and Iavor Bojinov. "Design-Based Confidence Sequences: A General Approach to Risk Mitigation in Online Experimentation." Harvard Business School Working Paper, No. 23-070, May 2023.
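The Monte Carlo sketch below does not implement the paper's design-based confidence sequences; it only illustrates the problem they solve, namely that continuously peeking at a fixed-level z-test under a true null inflates the false-positive rate well above the nominal 5%:

```python
import numpy as np

rng = np.random.default_rng(4)

# Under a true null (no effect), repeatedly checking a fixed-level z-test
# rejects far more than 5% of the time; anytime-valid confidence sequences are
# constructed so that peeking at every interim look keeps the error below alpha.
n_experiments, max_n, z_crit = 2_000, 2_000, 1.96
false_positives = 0
for _ in range(n_experiments):
    x = rng.normal(0, 1, size=max_n)        # null: mean is exactly 0
    csum = np.cumsum(x)
    n = np.arange(1, max_n + 1)
    z = csum / np.sqrt(n)                   # running z-statistic
    if np.any(np.abs(z[99:]) > z_crit):     # peek at every n >= 100
        false_positives += 1

print("False positive rate with continuous peeking:",
      false_positives / n_experiments)
```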
- 2023
- Article
Probabilistically Robust Recourse: Navigating the Trade-offs between Costs and Robustness in Algorithmic Recourse
By: Martin Pawelczyk, Teresa Datta, Johannes van-den-Heuvel, Gjergji Kasneci and Himabindu Lakkaraju
As machine learning models are increasingly being employed to make consequential decisions in real-world settings, it becomes critical to ensure that individuals who are adversely impacted (e.g., loan denied) by the predictions of these models are provided with a means... View Details
Pawelczyk, Martin, Teresa Datta, Johannes van-den-Heuvel, Gjergji Kasneci, and Himabindu Lakkaraju. "Probabilistically Robust Recourse: Navigating the Trade-offs between Costs and Robustness in Algorithmic Recourse." Proceedings of the International Conference on Learning Representations (ICLR) (2023).
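For context, a toy sketch of algorithmic recourse on a hypothetical linear credit-scoring model: it finds the smallest change along the weight vector that flips a denial into an approval. The paper's probabilistically robust formulation adds feasibility and robustness considerations that this sketch omits:

```python
import numpy as np

# Hypothetical linear credit-scoring model: score = w . x + b, approve if >= 0.
w = np.array([0.8, -0.5, 0.3])     # weights for income, debt, tenure (scaled)
b = -0.2
x = np.array([0.2, 0.9, 0.1])      # applicant currently denied

def recourse_along_weights(x, w, b, margin=0.05):
    """Smallest L2 change, taken along the weight vector, that moves the score
    just past the decision boundary. Real recourse methods add feasibility and
    robustness constraints on top of this."""
    score = w @ x + b
    step = (margin - score) / (w @ w)
    return x + step * w

x_new = recourse_along_weights(x, w, b)
print("Original score:", w @ x + b)
print("Recourse score:", w @ x_new + b)
print("Suggested feature changes:", x_new - x)
```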
- 2023
- Working Paper
Nailing Prediction: Experimental Evidence on the Value of Tools in Predictive Model Development
By: Daniel Yue, Paul Hamilton and Iavor Bojinov
Predictive model development is understudied despite its centrality in modern artificial intelligence and machine learning business applications. Although prior discussions highlight advances in methods (along the dimensions of data, computing power, and algorithms)... View Details
Keywords: Analytics and Data Science
Yue, Daniel, Paul Hamilton, and Iavor Bojinov. "Nailing Prediction: Experimental Evidence on the Value of Tools in Predictive Model Development." Harvard Business School Working Paper, No. 23-029, December 2022. (Revised April 2023.)
- 2022
- Article
Data Poisoning Attacks on Off-Policy Evaluation Methods
By: Elita Lobo, Harvineet Singh, Marek Petrik, Cynthia Rudin and Himabindu Lakkaraju
Off-policy Evaluation (OPE) methods are a crucial tool for evaluating policies in high-stakes domains such as healthcare, where exploration is often infeasible, unethical, or expensive. However, the extent to which such methods can be trusted under adversarial threats... View Details
Lobo, Elita, Harvineet Singh, Marek Petrik, Cynthia Rudin, and Himabindu Lakkaraju. "Data Poisoning Attacks on Off-Policy Evaluation Methods." Proceedings of the Conference on Uncertainty in Artificial Intelligence (UAI) 38th (2022): 1264–1274.
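As background, a minimal sketch of the standard inverse propensity scoring (IPS) off-policy estimator on simulated logged bandit data (made-up policies and reward rates); the attacks studied in the paper perturb such logged data to distort estimates like this one:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical logged bandit data from a behavior policy: context-free,
# two actions, binary rewards.
n = 10_000
behavior_probs = np.array([0.7, 0.3])      # logging probability of each action
actions = rng.choice(2, size=n, p=behavior_probs)
true_reward_means = np.array([0.4, 0.6])
rewards = rng.binomial(1, true_reward_means[actions])

# Target policy we want to evaluate offline: always play action 1.
target_probs = np.array([0.0, 1.0])

# Inverse propensity scoring (IPS) estimate of the target policy's value.
weights = target_probs[actions] / behavior_probs[actions]
ips_value = np.mean(weights * rewards)
print("IPS estimate of target policy value:", ips_value)   # close to 0.6
```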
- 2022
- Article
Towards Robust Off-Policy Evaluation via Human Inputs
By: Harvineet Singh, Shalmali Joshi, Finale Doshi-Velez and Himabindu Lakkaraju
Off-policy Evaluation (OPE) methods are crucial tools for evaluating policies in high-stakes domains such as healthcare, where direct deployment is often infeasible, unethical, or expensive. When deployment environments are expected to undergo changes (that is, dataset... View Details
Singh, Harvineet, Shalmali Joshi, Finale Doshi-Velez, and Himabindu Lakkaraju. "Towards Robust Off-Policy Evaluation via Human Inputs." Proceedings of the AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society (2022): 686–699.
- 2022
- Conference Presentation
Towards the Unification and Robustness of Post hoc Explanation Methods
By: Sushant Agarwal, Shahin Jabbari, Chirag Agarwal, Sohini Upadhyay, Steven Wu and Himabindu Lakkaraju
As machine learning black boxes are increasingly being deployed in critical domains such as healthcare and criminal justice, there has been a growing emphasis on developing techniques for explaining these black boxes in a post hoc manner. In this work, we analyze two... View Details
Keywords: AI and Machine Learning
Agarwal, Sushant, Shahin Jabbari, Chirag Agarwal, Sohini Upadhyay, Steven Wu, and Himabindu Lakkaraju. "Towards the Unification and Robustness of Post hoc Explanation Methods." Paper presented at the 3rd Symposium on Foundations of Responsible Computing (FORC), 2022.
- 2021
- Article
Reliable Post hoc Explanations: Modeling Uncertainty in Explainability
By: Dylan Slack, Sophie Hilgard, Sameer Singh and Himabindu Lakkaraju
As black box explanations are increasingly being employed to establish model credibility in high stakes settings, it is important to ensure that these explanations are accurate and reliable. However, prior work demonstrates that explanations generated by... View Details
Keywords: Black Box Explanations; Bayesian Modeling; Decision Making; Risk and Uncertainty; Information Technology
Slack, Dylan, Sophie Hilgard, Sameer Singh, and Himabindu Lakkaraju. "Reliable Post hoc Explanations: Modeling Uncertainty in Explainability." Advances in Neural Information Processing Systems (NeurIPS) 34 (2021).
- 2021
- Article
Towards the Unification and Robustness of Perturbation and Gradient Based Explanations
By: Sushant Agarwal, Shahin Jabbari, Chirag Agarwal, Sohini Upadhyay, Steven Wu and Himabindu Lakkaraju
As machine learning black boxes are increasingly being deployed in critical domains such as healthcare and criminal justice, there has been a growing emphasis on developing techniques for explaining these black boxes in a post hoc manner. In this work, we analyze two... View Details
Keywords: Machine Learning; Black Box Explanations; Decision Making; Forecasting and Prediction; Information Technology
Agarwal, Sushant, Shahin Jabbari, Chirag Agarwal, Sohini Upadhyay, Steven Wu, and Himabindu Lakkaraju. "Towards the Unification and Robustness of Perturbation and Gradient Based Explanations." Proceedings of the International Conference on Machine Learning (ICML) 38th (2021).
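For a concrete example of a perturbation-based post hoc explanation (LIME-style, not the paper's formal framework), the sketch below fits a locally weighted linear surrogate to a made-up black-box model around one instance:

```python
import numpy as np

rng = np.random.default_rng(6)

def black_box(X):
    """Stand-in for an opaque model: a nonlinear scoring function."""
    return 1 / (1 + np.exp(-(2 * X[:, 0] - X[:, 1] + 0.5 * X[:, 0] * X[:, 1])))

# Perturbation-based local explanation: sample points around the instance,
# query the black box, and fit a distance-weighted linear surrogate.
x0 = np.array([0.5, -0.2])
perturbations = x0 + rng.normal(0, 0.1, size=(500, 2))
predictions = black_box(perturbations)
kernel = np.exp(-np.sum((perturbations - x0) ** 2, axis=1) / 0.02)

# Weighted least squares: the coefficients act as local feature importances.
A = np.hstack([perturbations, np.ones((500, 1))])
W = np.diag(kernel)
coef = np.linalg.solve(A.T @ W @ A, A.T @ W @ predictions)
print("Local feature importances:", coef[:2])
```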
- 2021
- Working Paper
Population Interference in Panel Experiments
By: Iavor I. Bojinov, Kevin Wu Han and Guillaume Basse
The phenomenon of population interference, where a treatment assigned to one experimental unit affects another experimental unit's outcome, has received considerable attention in standard randomized experiments. The complications produced by population interference in... View Details
Bojinov, Iavor I., Kevin Wu Han, and Guillaume Basse. "Population Interference in Panel Experiments." Harvard Business School Working Paper, No. 21-100, March 2021.
- 2020
- Article
Beyond Individualized Recourse: Interpretable and Interactive Summaries of Actionable Recourses
By: Kaivalya Rawal and Himabindu Lakkaraju
As predictive models are increasingly being deployed in high-stakes decision-making, there has been a lot of interest in developing algorithms which can provide recourses to affected individuals. While developing such tools is important, it is even more critical to... View Details
Rawal, Kaivalya, and Himabindu Lakkaraju. "Beyond Individualized Recourse: Interpretable and Interactive Summaries of Actionable Recourses." Advances in Neural Information Processing Systems (NeurIPS) 33 (2020).
- 2020
- Working Paper
Design and Analysis of Switchback Experiments
By: Iavor I. Bojinov, David Simchi-Levi and Jinglong Zhao
In switchback experiments, a firm sequentially exposes an experimental unit to a random treatment, measures its response, and repeats the procedure for several periods to determine which treatment leads to the best outcome. Although practitioners have widely adopted... View Details
Bojinov, Iavor I., David Simchi-Levi, and Jinglong Zhao. "Design and Analysis of Switchback Experiments." Harvard Business School Working Paper, No. 21-034, September 2020.