How Technologists Can Help Counter Misinformation and Other Social Harms

By Brendan Nyhan and Patrick Ball


Since “fake news” rose to prominence during and after the 2016 election, the United States and countries around the world have struggled to determine how to most effectively address political misinformation. Though critics’ worst fears about the influence of online misinformation are likely overstated, false or misleading political claims are being amplified by platforms like Facebook, Twitter, and YouTube, deceiving voters and distorting public debate on an unprecedented scale.

As a result, these firms have faced increasing scrutiny from Congress and the media. However, calls for reform are increasingly coming not just from politics and journalism but from employees of the platforms themselves, the skilled computer engineers and managers who are Big Tech’s most scarce – and thus most valuable – resource. For that reason, technologists could be a critical ally in the fight against online misinformation.

Demands for greater social responsibility in technology may seem futile to journalists and activists, who frequently lament the platforms’ lack of responsiveness. Firms like Facebook and Google lack viable competitors and wield enormous political clout, leaving them seemingly invulnerable to both market competition and antitrust action.

But critics have greater leverage than it might seem. Most obviously, they wield the threat of damaging media coverage and greater regulatory scrutiny, which have helped motivate the changes that these companies have (grudgingly) made since 2016.

However, growing awareness of the harms the platforms create, such as online misinformation, has fueled a cultural backlash that threatens their ability to retain (and attract) talent. As a result, employees within the companies have greater leverage to challenge problematic policies and products than it might seem, especially when their concerns are amplified by the media, NGOs, and public officials.

The framework proposed in Albert O. Hirschman’s classic Exit, Voice, and Loyalty can help illustrate how platform employees could help to change firms from the inside. In situations where members have concerns about the practices of an organization, Hirschman describes how they might use voice – i.e., protesting or demanding change – rather than leaving, especially when exit is costly.

For employees at companies like Google and Facebook, leaving would mean giving up lucrative stock options and seeking employment at smaller firms with fewer resources and riskier prospects. As a result, many have strong incentives to stay and fight for change. Correspondingly, technology firms want to prevent the departure of top engineering and managerial talent in an intensely competitive labor market.

Facebook seems especially vulnerable to this sort of pressure. It faces significant internal dissent after more than two years of intense media coverage of the harms that its products have created (among other challenges). Staff morale reportedly plunged last year – an internal survey found that only 53% of employees thought the company was making the world better, down from 72% in 2017. As a result, more Facebook employees are inquiring about jobs with former colleagues at other firms. The company’s reputation has also taken a hit with computer science students and other young talent in the industry. However, actually walking away (or passing up a lucrative job offer) is more difficult. The founders of WhatsApp, which Facebook acquired, gave up more than $1 billion in stock options to exit the company, but their decision is likely to prove an exception.

If employees lobby for Facebook or other platforms to reduce the social harms they have created, they could help encourage positive changes. Though steps that have been taken since 2016 have had some success – the prominence of dubious content on Facebook appears to have declined, for instance – more can be done. One key challenge is to get the platforms to take concerns about misinformation and other low-quality content seriously enough that they will pay real costs to reduce its prevalence – not just adding more moderators, but scaling back algorithmic approaches to content recommendation (such as auto-playing recommended videos on YouTube) that are highly profitable but vulnerable to abuse.

Two recent examples illustrate the leverage that workers can exert over technology companies when they speak up for important social values. At Google, a massive walkout protesting sexual harassment at the company prompted its leaders to end a policy that required employees alleging sexual harassment or assault to resolve their claims using private arbitration (though other organizer demands were not met). Facebook preemptively made the same policy change after the walkout.

Google employees have also put substantial pressure on the company to stop work on a censored version of its search engine for the Chinese market known as “Project Dragonfly.” An internal petition objecting to the plan obtained approximately 1,400 signatures, saying it “raise[d] urgent moral and ethical issues” and that Google employees “do not have the information required to make ethically-informed decisions about our work, our projects, and our employment.”

More than 700 employees later signed a public letter calling for Google to cancel the project. The letter specifically highlighted the role of values in their employment decisions: “Many of us accepted employment at Google with the company’s values in mind, including its previous position on Chinese censorship and surveillance, and an understanding that Google was a company willing to place its values above its profits.” Members of an internal privacy team at Google later helped stop the use of sensitive Chinese user data in the project, which reportedly “effectively ended” its development.

To be sure, these sorts of internal tech company revolts are not always successful. Last year, for instance, Google declined to renew a Department of Defense contract to use artificial intelligence to analyze drone footage and withdrew from the bidding for a DoD cloud computing contract after opposition from its employees, but Microsoft decided to pursue the cloud computing contract despite facing internal dissent of its own.

Still, the potential for socially valuable dissent is clear. Dissidents and whistleblowers inside technology firms can alert civil society about worrisome or ill-conceived policies and demand change in products or services that create social harm. In this way, they may be able to help prevent or mitigate potential harms more quickly and effectively than regulation. Platform employees know what’s happening inside their companies long before the public or regulators, have the expertise to understand how the technologies that their companies are deploying actually work, and can speak credibly to external audiences who might be suspicious of ideological critics of technology firms.

None of these steps is a silver bullet; we face inherent limits in how much any effort to stop online misinformation can accomplish. As human beings, we are psychologically vulnerable to false beliefs that seem to confirm our point of view; it is exceptionally difficult to accurately identify misinformation at the scale on which the platforms operate; and increased legal prohibitions against false claims could silence legitimate speech and would require us to delegate vast powers over political debate to corporations.

For precisely these reasons, encouraging the tech giants to do more to limit the spread of misinformation may be the least bad solution to the problem that now confronts us. But the effectiveness of these efforts and their consistency with our values are difficult to monitor from outside these massive and powerful firms.

In this sense, the employees of Facebook, Google, and other platforms have been entrusted with a great responsibility; we must encourage them to act as advocates for socially responsible computing and protect and reward those who come forward when those principles are violated.

Brendan Nyhan is a professor of public policy at the Ford School of Public Policy at the University of Michigan. Nyhan is also a contributor to The Upshot at The New York Times; a co-founder of Bright Line Watch, a watchdog group that monitors the status of American democracy; and a 2018 Andrew Carnegie Fellow. He has received funding from Facebook to support research on online misinformation in India; Facebook has no control over the research’s content or publication.

Patrick Ball has spent more than 25 years conducting quantitative analysis for truth commissions, NGOs, international criminal tribunals, and U.N. missions. He has also provided expert testimony in trials of former leaders such as Slobodan Milošević. Ball founded the Human Rights Data Analysis Group in 1991, where he currently serves as Director of Research. He received the Karl E. Peace Award for Outstanding Statistical Contributions for the Betterment of Society from the American Statistical Association in 2018.

[Image Credit: Joe Flood / Flickr]