31 Mar 2020

Digital Initiative Summit 2020 Tackles Disinformation

Top row (L to R): David Homa, Renée DiResta
Bottom row (L to R): James Mickens, Ezra Englebardt

by Shona Simkin

Misinformation, false claims, confusing and contradictory information, viral rumors, fake news. It’s difficult enough to wade through all of the mixed messaging we see online in our standard overloaded day-to-day lives, but toss in an event like an election—or, say, a global pandemic—and veracity, credibility, and authenticity become even more vital. We seek this affirmation and security from many sources, from the more practical—that our Chicken McNuggets aren’t “pink slime”—to those at a higher level—that the news we read is from a trustworthy source. Those extremes, and the many layers in between, rely on a scaffolding of regulations, ethics, and standards that are becoming increasingly easy to manipulate in our digital environment.

Against this shifting backdrop, the Harvard Business School Digital Initiative has been investigating brands, trust, and disinformation, culminating in Summit 2020: Brands and the Disinformation Reality on March 23. The coronavirus-necessitated online-only event attracted over 1,300 participants. Moderated by David Homa, Digital Initiative director, it featured a panel of experts:

- Ezra Englebardt, expert in brand strategy, planning, and execution;
- Renée DiResta, trust and technical research manager at the Stanford Internet Observatory at Stanford University;
- and James Mickens, Gordon McKay Professor of Computer Science at the Harvard John A. Paulson School of Engineering and Applied Sciences.



When asked why this topic, why now, Homa reflected: "Technology and people are colliding more profoundly than ever before—we are experiencing the intersection of the most advanced technologies and algorithms with the very essence of human emotions and actions, all magnified and accelerated in a global digital frenzy. The long history of pursuing brand loyalty has led to a very personal engagement between brands and people. That engagement is now online, and at a scale and speed previously unseen. Organizations must navigate this new melding of scale, speed, money, power, and emotions with the broader struggles of information dissemination and veracity. It’s a very complicated and disruptive space in which the way brands react will impact other online disinformation spaces."

How can organizations combat rampant disinformation and take control of their brand and products? One solution, which is neither easy nor a magic bullet, says Englebardt, is to invest in building a strong brand that scams or disinformation can’t penetrate. Patagonia, for example, bolstered its environmentalist brand and mission with the 2011 Black Friday ad, “Don’t Buy This Jacket.” On the country’s most profitable sales day, Patagonia asked consumers to consider the environmental costs of clothing production—to not buy what they don’t need, to pass on their long-lasting gear, and to return items for repair or recycling. None of that means that Patagonia doesn’t have critics or isn’t the subject of rumors, but it does mean that unsubstantiated concerns and rumors just don’t stick.

Good storytelling is another tactic. To address the “pink slime” controversy, McDonald’s released a video of its entire Chicken McNugget manufacturing process, comparing the ground, seasoned chicken breast with the viral “pink slime.” The video received more than 5 million hits, beating the hoax at its own game. “Facts beat myths. We’d all love for that to be true. The truth is it’s really stories and content that beat myths. You have to outperform the lies and the rumors and the disinformation. You have to have better stories and better content that’s more interesting,” concluded Englebardt.

All industries are vulnerable to the effects of bad information, says DiResta. To best react, defend, and create interventions, corporations, policy-makers, platform owners, and users need to understand the basic taxonomy. The spectrum runs from misinformation (bad information altruistically shared with friends and loved ones, currently prevalent with COVID-19) to disinformation (a deliberate and highly coordinated deception) to mal-information (attacks intended to harass or harm), as well as propaganda, fake news, and hoaxes.

When considering the actors in these scenarios and campaigns, DiResta says that it’s less about a specific attribution and more about how factions—persistent communities that are organized, hyper-motivated, and united by passions and beliefs—engage with each other and talk about your brand. “We think a lot about the model of internet communications as narratives hopping from a series of factions to other factions, or fights happening between factions,” said DiResta. “When there’s sufficient attention and energy there, that’s where you’ll see bizarre and interesting trends pop up.”

How do we address this proliferation of bad information from a technology perspective? Who should be responsible? Mickens is heading a project that could be a solution: Verifiable Digital Provenance. Current options for validating information, says Mickens, are problematic. Fact checkers are difficult to scale and are often accused of bias; algorithms are easy to scale, but also easy to fool; users can be trained, but are given scant, opaque information. Verifiable Digital Provenance bundles existing technology and security systems with additional information from the story’s production pipeline (images, text, code, Photoshop edits) into a tamper-resistant edit history for the user.

“What we want to do with this technology is ultimately allow the end users to make what fundamentally is a personal decision about the trustworthiness of each object. You want to expose the ways in which a particular piece of news is created and then say, ‘User, you have to decide,’” said Mickens.
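The core idea behind a tamper-resistant edit history can be illustrated with a hash chain. The sketch below is a simplified illustration of that general technique, not the actual design of the Verifiable Digital Provenance project: each edit record includes a hash of the previous record, so altering any earlier step invalidates every hash that follows it.

```python
import hashlib
import json

def record_edit(history, action, payload):
    """Append an edit record whose hash covers the previous record's hash,
    forming a tamper-evident chain (simplified illustration only)."""
    prev_hash = history[-1]["hash"] if history else "0" * 64
    record = {"action": action, "payload": payload, "prev_hash": prev_hash}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    history.append(record)
    return history

def verify_chain(history):
    """Recompute every hash in order; any change to an earlier record
    breaks the link to all later records."""
    prev_hash = "0" * 64
    for record in history:
        body = {k: v for k, v in record.items() if k != "hash"}
        if body["prev_hash"] != prev_hash:
            return False
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if expected != record["hash"]:
            return False
        prev_hash = record["hash"]
    return True

history = []
record_edit(history, "create", "original photo")
record_edit(history, "crop", "removed background")
print(verify_chain(history))           # the untouched chain verifies

history[0]["payload"] = "doctored photo"   # tamper with an earlier record
print(verify_chain(history))           # verification now fails
```

A real provenance system would add cryptographic signatures and trusted hardware on top of a structure like this, but the principle is the same: the end user gets an edit history that cannot be quietly rewritten.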
