Assistant Professor of Business Administration
Michael Luca is an assistant professor of business administration in the Negotiation, Organizations, and Markets Unit. He teaches the Negotiations course in the MBA elective curriculum.
Professor Luca applies econometric methods to field data in order to study the impact of information in market settings. He investigates the types and features of information disclosure that are most effective, the way in which information disclosure is produced and designed, and how these phenomena affect market structure. In his research, Professor Luca considers rankings, expert reviews, online consumer reviews, and quality disclosure laws.
His current work focuses on crowdsourced reviews, analyzing a variety of companies including Yelp, Amazon, and Airbnb. His findings have been covered in media outlets such as The Wall Street Journal, The New York Times, The Washington Post, The Huffington Post, Chicago Tribune, Harvard Business Review, PC World Magazine, and Salon.
Professor Luca received his Ph.D. in economics from Boston University and a bachelor’s degree in economics and mathematics from SUNY Albany. Before beginning his doctoral studies, he worked as a health-care actuary for major insurers.
Salience in Quality Disclosure: Evidence from the U.S. News College Rankings
How do rankings affect demand? This paper investigates the impact of college rankings, and the visibility of those rankings, on students' application decisions. Using natural experiments from U.S. News and World Report College Rankings, we present two main findings. First, we identify a causal impact of rankings on application decisions. When explicit rankings of colleges are published in U.S. News, a one-rank improvement leads to a 1-percentage-point increase in the number of applications to that college. Second, we show that the response to the information represented in rankings depends on the way in which that information is presented. Rankings have no effect on application decisions when colleges are listed alphabetically, even when readers are provided data on college quality and the methodology used to calculate rankings. This finding provides evidence that the salience of information is a central determinant of a firm's demand function, even for purchases as large as college attendance.
Keywords: Rank and Position; Demand and Consumers
Strategic Disclosure: The Case of Business School Rankings
Using a novel data set, we present three findings about the rankings that business schools choose to display on their websites. First, the data strongly rejects patterns predicted by classic models of voluntary disclosure. In contrast with the traditional unraveling hypothesis, top schools are least likely to display their rankings. Second, schools that do poorly in the U.S. News rankings are more likely to disclose their Princeton Review certification, suggesting that schools treat different certifications as substitutes. Third, conditional on displaying a ranking, the majority of schools coarsen information to make it seem more favorable.
Keywords: Voluntary Disclosure
Evolution of Land Distribution in West Bengal 1967-2004: Role of Land Reform and Demographic Changes
This paper examines the indirect effect of land reform and demographic changes on land inequality operating through induced household divisions and land market transactions. We develop an intra-household model of joint production in which divisions, out-migration, or land purchases arise to avoid the inefficient free-riding induced by demographic growth. Land reform affects divisions and land transactions owing to induced effects on farm profitability and the anticipation of future reforms. These predictions are successfully tested in data from a West Bengal household survey spanning 1967-2004, where we find that the quantitative effect of the land reforms was dwarfed by that of demographic changes.
Optimal Aggregation of Consumer Ratings: An Application to Yelp.com
Consumer review websites such as Yelp.com leverage the wisdom of the crowd, with each product being reviewed many times (some with more than 1,000 reviews). Because of this, the way in which information is aggregated is a central decision faced by consumer review websites. Given a set of reviews, what is the optimal way to construct an average rating? We offer a structural approach to answering this question, allowing for (1) reviewers to vary in stringency (some reviewers tend to leave worse reviews on average) and accuracy (some reviewers are more erratic than others), (2) reviewers to be influenced by existing reviews, and (3) product quality to change over time. We apply this approach to reviews from Yelp.com to derive optimal ratings for each restaurant (in contrast with the arithmetic average displayed by Yelp). Because we have the history of reviews for each restaurant and many reviews left by each reviewer, we are able to identify these factors using variation in ratings within and across reviewers and restaurants. Using our estimated parameters, we construct optimal ratings for all restaurants on Yelp and compare them to the arithmetic averages displayed by Yelp. As of the end of our sample, a conservative finding is that roughly 25%–27% of restaurants are more than 0.15 stars away from the optimal rating, and 8%–10% of restaurants are more than 0.25 stars away from the optimal rating. This suggests that large gains could be made by implementing optimal ratings. Much of the gains come from our method responding more quickly to changes in a restaurant's quality. Our algorithm can be flexibly applied to many different review settings.
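The aggregation idea above can be illustrated with a minimal sketch. Everything here is hypothetical and greatly simplified relative to the paper's structural model: reviewer bias and noise variance are taken as given inputs rather than estimated, and dynamics (herding, quality change over time) are ignored. The sketch demeans each rating by its reviewer's estimated stringency and weights it inversely by that reviewer's noise variance, in contrast with a simple arithmetic average.

```python
import numpy as np

def optimal_rating(ratings, reviewer_bias, reviewer_var):
    """Precision-weighted average of bias-corrected ratings.

    ratings       : observed star ratings for one restaurant
    reviewer_bias : each reviewer's estimated stringency (mean deviation
                    from consensus across all of that reviewer's reviews)
    reviewer_var  : each reviewer's estimated noise variance (accuracy)

    All three inputs are hypothetical; the paper estimates these
    parameters structurally, which this sketch does not attempt.
    """
    ratings = np.asarray(ratings, dtype=float)
    bias = np.asarray(reviewer_bias, dtype=float)
    var = np.asarray(reviewer_var, dtype=float)
    corrected = ratings - bias      # remove each reviewer's stringency
    weights = 1.0 / var             # more precise reviewers count more
    return float(np.sum(weights * corrected) / np.sum(weights))

# Example: three reviews of one restaurant. Reviewer 2 is harsh
# (bias -1.0) and noisy (variance 2.0), so their 3-star review is
# adjusted upward and down-weighted.
stars = [4.0, 3.0, 5.0]
bias = [0.0, -1.0, 0.5]
var = [1.0, 2.0, 1.0]
print(round(optimal_rating(stars, bias, var), 2))  # 4.2 vs. a 4.0 arithmetic mean
```

The adjusted rating (4.2) differs from the arithmetic mean (4.0) in exactly the direction the abstract describes: correcting for who wrote the reviews, not just what they said.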
What Makes a Critic Tick? Connected Authors and the Determinants of Book Reviews
This paper investigates the determinants of expert reviews in the book industry. Reviews are determined not only by the quality of the product, but also by the incentives of the media outlet providing the review. For example, a media outlet may have the incentive to provide favorable coverage to certain authors or to slant reviews toward the horizontal preferences of certain readers. Empirically, we find that an author's connection to the media outlet is related to the outcome of the review decision. When a book's author also writes for a media outlet, that outlet is 25% more likely to review the book relative to other media outlets, and the resulting ratings are roughly 5% higher. Prima facie, it is unclear whether media outlets favor their own authors because these are the authors their readers prefer or simply because they are trying to collude. We provide a test to distinguish between these two potential mechanisms and present evidence that the effect reflects tastes rather than collusion: the effect of connections is present both for authors who began writing for the media outlet before the book's release and for those who began afterward. We then investigate other determinants of expert reviews. Relative to consumer reviews, we find that professional critics are less favorable to first-time authors and more favorable to authors who have garnered other attention in the press (as measured by the number of media mentions outside of the review) and who have won book prizes.
Keywords: Experience and Expertise
Reviews, Reputation, and Revenue: The Case of Yelp.com
Do online consumer reviews affect restaurant demand? I investigate this question using a novel dataset combining reviews from the website Yelp.com and restaurant data from the Washington State Department of Revenue. Because Yelp prominently displays a restaurant's rounded average rating, I can identify the causal impact of Yelp ratings on demand with a regression discontinuity framework that exploits Yelp's rounding thresholds. I present three findings about the impact of consumer reviews on the restaurant industry: (1) a one-star increase in Yelp rating leads to a 5% to 9% increase in revenue, (2) this effect is driven by independent restaurants; ratings do not affect restaurants with chain affiliation, and (3) chain restaurants have declined in market share as Yelp penetration has increased. This suggests that online consumer reviews substitute for more traditional forms of reputation. I then test whether consumers use these reviews in a way that is consistent with standard learning models. I present two additional findings: (4) consumers do not use all available information and are more responsive to quality changes that are more visible and (5) consumers respond more strongly when a rating contains more information. Consumer response to a restaurant's average rating is affected by the number of reviews and whether the reviewers are certified as "elite" by Yelp, but is unaffected by the size of the reviewer's Yelp friends network.
Keywords: Social and Collaborative Networks; Food and Beverage Industry; Washington (state, US)
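The regression-discontinuity strategy in the Yelp paper above can be illustrated with simulated data. Everything below is hypothetical (simulated ratings and revenues, an illustrative 3.75-star rounding threshold, an assumed 0.07 jump), not the paper's data, thresholds, or code. The identifying idea: revenue varies smoothly with a restaurant's underlying quality, but the displayed star rating jumps discretely at a rounding threshold, so comparing restaurants just above and just below the threshold isolates the effect of the displayed rating.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 5,000 restaurants with raw average ratings near an
# illustrative threshold at 3.75, above which the displayed rating
# rounds up from 3.5 to 4 stars.
n = 5000
raw = rng.uniform(3.5, 4.0, n)
displayed_high = (raw >= 3.75).astype(float)
# Simulated log revenue: smooth in underlying quality (raw), plus an
# assumed discrete jump of 0.07 for the higher displayed rating.
log_revenue = 0.3 * raw + 0.07 * displayed_high + rng.normal(0, 0.05, n)

def rd_jump(x, y, cutoff, bw):
    """Local-linear sharp RD: fit a line on each side of the cutoff
    within the bandwidth and return the difference of intercepts."""
    left = (x >= cutoff - bw) & (x < cutoff)
    right = (x >= cutoff) & (x < cutoff + bw)
    slope_l, intercept_l = np.polyfit(x[left] - cutoff, y[left], 1)
    slope_r, intercept_r = np.polyfit(x[right] - cutoff, y[right], 1)
    return intercept_r - intercept_l

effect = rd_jump(raw, log_revenue, cutoff=3.75, bw=0.1)
print(f"estimated jump at threshold: {effect:.3f}")  # should be near the simulated 0.07
```

Fitting separate local lines on each side, rather than simply differencing means near the cutoff, removes the bias from the smooth quality trend within the bandwidth, which is the standard reason RD estimators use local-linear fits at a boundary.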