Students write out the articles of the Universal Declaration of Human Rights on the steps at the Colchester Campus | Flickr | https://www.flickr.com/photos/universityofessex/11322287594/

Responding to disinformation should not trample on human rights

By Amie Stepanovich

From France to Australia and back to the United States, there is a growing popular consensus that the spread of misinformation and disinformation online, whether through platforms or apps, has significant real-world impact. To address the problem, we need to leave behind grand “solutions” that would interfere with human rights and focus instead on discrete shifts in law and policy that tackle the underlying causes rather than the symptoms.

The impact of (mis/dis)information

Debates still rage about the real impact of misinformation and disinformation on the results of the Brexit vote in the United Kingdom or the most recent U.S. presidential election. But the stories don’t end there. The role of false reporting in the ongoing conflicts in Ukraine has been well-documented, and violent and deadly attacks have been attributed to the spread of false information in India, Indonesia, Sri Lanka, and beyond.

Unfortunately, in response, governments have advanced a seemingly endless parade of misguided and impractical proposals, including outright bans on “fake news” or other limits on what speech can be posted online, in some cases to be determined at the discretion of a private company. Other plans include increasing legal liability for entities that host online speech, preferential treatment for certain so-labeled “legitimate” news sites, or even localized blocking of apps, sites, or internet services. Some countries have started, or are ramping up, the practice of arresting journalists for spreading “fake news.”

The consequences for human rights

All of these proposals are disastrous for human rights online. Pursuing them would create overbroad restrictions on online speech while chilling journalists and others from sharing any messages or information that those in power do not like. For example, labeling certain news organizations as “legitimate” in order to give them preference will result in others being considered “illegitimate.” That’s a threat to independent reporters and researchers, particularly in countries dominated by state-run media.

Making matters worse, some proposals would effectively deputize private companies with authority to determine what is “good” or “bad” speech, with the appearance of a civic, and sometimes legal, responsibility to make substantive determinations on content. Those in marginalized populations, who have a history of having their speech censored by public and private entities alike, could see their lack of political capital translate to automated and more deeply entrenched patterns of discrimination and silencing.

Not only are these government “solutions” alike in the threat they pose to human rights, they also uniformly lack depth and nuance. In a complicated and intricate policy area, these responses are both reactionary and untailored, and therefore not fit-for-purpose, especially in the long term.

We’ve been here before

Misinformation, “fake news,” yellow journalism — call it what you want, but it’s been around for quite some time, well predating the internet. The most (in)famous example of fake news influencing real-world events may be the series of “War of the Worlds” radio broadcasts aired between the 1930s and 1960s. While there is disagreement about the level of panic these broadcasts caused for listeners at home, we do know they resulted in military deployment in at least two cases, including in Quito, Ecuador, in 1949. When people realized the story was not true, the public reacted so strongly that it led to the violent destruction of the Quito radio station.

But misinformation is also really hard to spot. It often gets confused with satire or parody. It may also get mixed in with “clickbait” headlines that are drafted with the effect, if not the purpose, of mischaracterizing the news being presented or, even more troubling, failing to provide full context for public understanding.

The viral spread of certain types of content on the internet is only the latest twist in an old story. Once content is out in the public sphere, it can take on a life of its own, and that can influence people in unexpected ways. The “Pizzagate” conspiracy story is a good example. Reporting by BuzzFeed reveals how the story spread and ultimately resulted in a man “investigating” the facts firsthand, including by taking a gun into a family-run restaurant.

How platforms are responding

Under pressure from governments, platforms like Facebook and Google are making technical changes to step up their efforts to fight misinformation. Notably, Facebook’s WhatsApp is limiting the number of people you can forward a message to, which reportedly has “significantly reduced forwarded messages around the world.” Adding friction to the process of sharing information via WhatsApp may help more people stop and think about what they share, while admittedly also limiting the capacity to pass along action alerts and other information for political organizing or advocacy campaigns.
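To make the friction mechanism concrete, here is a minimal sketch of how a client-side forward limit might work. The five-chat cap matches the limit WhatsApp reportedly rolled out; the names, structure, and error handling are hypothetical, not WhatsApp's actual code.

```python
# A minimal sketch of a client-side forward limit, loosely modeled on
# WhatsApp's reported behavior. The five-chat cap matches the reported
# limit; all names and structure here are hypothetical.

FORWARD_LIMIT = 5  # maximum number of chats one message may be forwarded to


class Message:
    def __init__(self, text, forward_count=0):
        self.text = text
        self.forward_count = forward_count  # how many hops this copy has taken


def forward(message, target_chats):
    """Deliver a forwarded copy to each chat, refusing oversized fan-out."""
    if len(target_chats) > FORWARD_LIMIT:
        raise ValueError(f"can only forward to {FORWARD_LIMIT} chats at a time")
    for chat in target_chats:  # each chat is modeled as a plain list
        chat.append(Message(message.text, message.forward_count + 1))


# Forwarding to six chats at once now fails, adding friction to mass sharing.
chats = [[] for _ in range(6)]
try:
    forward(Message("breaking news!"), chats)
except ValueError as err:
    print(err)
```

The point of the design is not to make mass sharing impossible, only slower and more deliberate, which is why it preserves the rest of the service intact.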

It’s worth pointing out that this attempted solution preserves the basic features of the service and does not entail degrading WhatsApp’s end-to-end encryption to enable government access and monitoring of private communications. Systems that use strong encryption provide critical security to people around the world, including human rights defenders. Exploring solutions that do not sacrifice vital and necessary security features will likely become even more important given potential developments, such as Facebook’s plan to integrate WhatsApp with Facebook Messenger and Instagram. (The wisdom of this move and the impact on human rights depends on its implementation; done badly, it could give Facebook an increasingly detailed portrait of our interactions across its products.)

Twitter is also testing technical approaches, such as suspending millions of “unwanted” accounts in 2018, including automated accounts, or “bots.” In theory, killing bots should make it harder for misinformation to propagate across the platform, but the approach may have limited impact. According to a study published late last year, “relatively few accounts are responsible for a large share of the traffic that carries misinformation,” with just 6 percent of Twitter accounts identified as bots responsible for 31 percent of “low-credibility” content. That means that bad actors may need to keep only a small number of accounts alive to reach a large audience.
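A back-of-the-envelope calculation, using only the figures quoted above, shows just how concentrated that activity is (the percentages come from the study; the arithmetic below is mine):

```python
# Rough arithmetic from the study's figures quoted above: accounts
# identified as bots were 6 percent of accounts but produced 31 percent
# of the low-credibility content.
bot_account_share = 0.06
bot_content_share = 0.31

# Output per bot account vs. output per non-bot account (relative units).
per_bot = bot_content_share / bot_account_share                # ~5.2
per_human = (1 - bot_content_share) / (1 - bot_account_share)  # ~0.73

print(f"an average bot posts ~{per_bot / per_human:.0f}x the low-credibility "
      "content of an average human account")  # ~7x
```

With that kind of per-account leverage, suspending bots in bulk still leaves a small, replaceable core of accounts doing most of the damage.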

Last month, Google’s YouTube, which may be one of the biggest sources of misinformation, announced tweaks to its algorithms designed to make videos that “[come] close to” violating its content rules harder to find. By focusing on the content and not the poster, this approach differs meaningfully from previous changes, including a move that promoted content from “trusted news providers” based on perceived quality. However well-meaning, that kind of change could end up reducing the reach of independent journalistic voices and shifting even more attention to content produced by already entrenched and powerful media empires. YouTube says the move allows it to “prioritize vetted outlets over obscure, conspiracy-pushing users,” but more research is necessary to gauge the impact on small media entities, unaffiliated journalists, human rights defenders, and others.
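The distinction between demoting by content and promoting by source can be made concrete with a toy ranking function. YouTube has not published its ranking systems, so everything below (signal names, thresholds, weights) is an assumption invented for illustration.

```python
# Toy ranking sketch contrasting the two approaches described above.
# YouTube's real systems are unpublished; these signals, thresholds, and
# weights are invented for illustration only.

def content_based_score(video):
    """Demote by content: penalize videos a classifier flags as borderline,
    no matter who uploaded them."""
    score = video["engagement"]
    if video["borderline_probability"] > 0.8:
        score *= 0.1  # much harder to surface in recommendations
    return score


def source_based_score(video, trusted_channels):
    """Promote by source: boost anything from a pre-vetted outlet, which
    structurally favors incumbents over independent voices."""
    score = video["engagement"]
    if video["channel"] in trusted_channels:
        score *= 2.0
    return score


video = {"engagement": 100.0, "borderline_probability": 0.9, "channel": "indie"}
print(content_based_score(video))                  # 10.0 -> demoted on content
print(source_based_score(video, {"BigNewsCorp"}))  # 100.0 -> no boost for "indie"
```

The source-based variant never looks at what the video says, only at who posted it, which is exactly why it tends to entrench established outlets.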

And then there are clearly retrograde and harmful ideas. Facebook, in what appears to be a response to allegations of bias in its content moderation practices, says that it will start ranking the trustworthiness of news sources, as determined by user surveys. It’s hard to see how this program would not be “gamed” and fall victim to the same bias and ambiguity that allow misinformation to spread widely in the first place.
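A toy example shows why survey-based scores are so easy to game: if the score is just an average over self-selected votes, a coordinated group can move it at will. All numbers below are invented for illustration.

```python
# Toy illustration of why survey-based trust scores invite "gaming":
# a coordinated group of respondents can drag a source's average rating
# down (or up), because the score is just a mean over self-selected votes.

def trust_score(ratings):
    return sum(ratings) / len(ratings)

organic = [4, 5, 3, 4, 5, 4]  # ratings from ordinary users (1-5 scale)
brigade = [1] * 20            # a coordinated mass of 1-star votes

print(trust_score(organic))            # ~4.17 -> looks trustworthy
print(trust_score(organic + brigade))  # ~1.73 -> now looks untrustworthy
```

Real survey designs can weight or sample respondents to resist this, but the underlying incentive to brigade remains.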

How to do better: Forget silver bullets. Explore approaches built on human rights principles

To prevent entrenching inequality and discrimination and promoting censorship, laws and policies must reflect and promote international human rights law. This is not easy, perhaps especially on the topic of misinformation. Both government and industry should tread carefully in this area and pursue nuanced, incremental improvements over bright shiny catch-all solutions. As a start, stakeholders must avoid deputizing platforms as the arbiters of speech; support the adoption of comprehensive data protection laws and the open internet; and engage in an ongoing multi-stakeholder conversation that is based on mutual responsibility instead of blame-shifting.

Free expression is a human right, but today companies are under heavy pressure from government regulators to remove certain categories of content outside of the rule of law and devoid of any individualized determination of legality. It is critical that governments walk back these approaches, and abandon threats (or worse, implementation) of overbroad laws that would force these determinations. Privatized enforcement against misinformation means that private actors will respond far too vigorously, and the result will be vast swathes of improperly censored speech.

Stakeholders that care about misinformation should also support the passage of robust and comprehensive data protection regulations. Misinformation campaigns, such as those launched by foreign governments or other bad actors, run on our personal data and thrive when fact-checking is difficult or expensive. As noted by one of the most comprehensive studies of misinformation to date, by the Oxford Internet Institute:

“[s]ocial media are particularly effective at directly reaching large numbers of people, while simultaneously microtargeting individuals with personalized messages. Indeed, this effective impression management—and fine-grained control over who receives which messages—is what makes social media platforms so attractive to advertisers, but also to political and foreign operatives. Where government control over Internet content has traditionally relied on blunt instruments to block or filter the free flow of information, powerful political actors are now turning to computational propaganda to shape public discourse and nudge public opinion.”
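The “fine-grained control over who receives which messages” the study describes is, mechanically, just filtering on personal data, which is why data protection rules bear so directly on it. A minimal sketch, with entirely hypothetical user records and field names:

```python
# Minimal sketch of the microtargeting the quote describes: selecting a
# narrow audience by combining personal-data attributes. These records and
# field names are hypothetical; real ad platforms expose far richer criteria.

users = [
    {"id": 1, "age": 58, "region": "FL", "interests": {"veterans", "fishing"}},
    {"id": 2, "age": 24, "region": "WI", "interests": {"gaming"}},
    {"id": 3, "age": 61, "region": "FL", "interests": {"veterans"}},
]

def target(users, min_age, region, interest):
    """Return only the users matching every criterion."""
    return [
        u for u in users
        if u["age"] >= min_age and u["region"] == region and interest in u["interests"]
    ]

audience = target(users, min_age=55, region="FL", interest="veterans")
print([u["id"] for u in audience])  # [1, 3] -- the tailored message goes only here
```

The less personal data is available to feed filters like these, the blunter, and the more visible, a propaganda campaign has to be.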

Turning the tide here won’t be easy. The internet as we know it today has developed under the (false) assumption that massive amounts of data collection and analysis, applied to heavily (if often improperly) targeted behavioral advertising, are necessary to drive profit. We have supported policies globally that have replaced the open internet with “walled gardens” of content and services that are hard to escape. This has created an online environment where start-up companies launch products and services without a real business model other than data capture. Empowering users with a set of rights in and to their data and to internet access, and placing affirmative obligations on businesses to respect those rights, could help disrupt the marketplace for our data that malicious actors leverage when they carry out misinformation campaigns online.

Finally, in any crisis, it is easy to scapegoat others. But an effective approach to a problem as complicated as stopping the spread of misinformation online will involve stakeholders working together, not shifting the blame to other parties. Everyone, from the government, to the tech sector, to civil society, must look inward and reflect on their piece of the misinformation machine and find a way to shift gears. They must engage in an ongoing process to learn from and correct the inevitable mistakes.

There is no quick fix to the societal problems that give rise to and feed on misinformation, and belief in silver bullets is likely to result in oppressive laws and corporate policies. Instead we should focus on advancing specific, incremental proposals, based in human rights principles, that could help address the root causes of the problem, working across sectors and stakeholder groups. Only in this way will we retain our human rights.

Amie Stepanovich is the U.S. Policy Manager for Access Now, where she manages and develops the organization’s U.S. policy and leads global projects at the intersection of human rights and government surveillance, including advocating for evidence-based solutions to disinformation.

Image Credit: University of Essex | Flickr
