How Building Data Protection Regimes Can Counter Disinformation
By Daniel Arnaudo
Online disinformation and computational propaganda can have major effects, particularly in volatile political environments where public opinion can be shifted to a narrative pushed by a group with access to personal data from target populations.
The power of online systems to sway elections or referenda is the lesson of many recent political campaigns. That reality has underscored the importance of strong institutions, laws capable of keeping pace with ongoing technological and political change, and better corporate governance.
These systems are now integral to our identities, our politics, and modern political campaigns. The technology companies that operate social media networks and enable our connections, from application and software makers to hardware manufacturers and telecoms, now control vast amounts of our data and must handle it with great care.
Nations and multinational corporations are dealing with this new online reality in different ways. They are responding with different forms of legislation, public programs, and even coordinated social media campaigns that are harmful to human rights principles, such as freedom of expression, privacy, and independent media.
People encountering disinformation tend to think first of its offensive aspects: bots, trolls, algorithmic manipulation, and other forms of what is known as “computational propaganda.” But another component must be considered to understand the problem: the personal data that online campaigns use to target users.
We can find concrete solutions to these techniques by addressing the information in the databases on the back end of these systems, including by protecting personal, sensitive information and the metadata about our activities, networks, and colleagues.
Data has also been pilfered and used to target voters with incredible precision by private sector companies, such as Cambridge Analytica (CA). The British firm is just one of the most famous examples of a bad actor: there are many more, some of which are the same group in different forms, such as CA’s parent company SCL Group.
CA diverted data directly from a research program Facebook had authorized on its services: a personality test that created a psychographic profile of each user, alongside data about their friends. The academic researcher who ran the program subsequently sold that data to the company, which then supplied analysis to campaigns in the 2016 U.S. election, the Brexit referendum, and other political contests.
Facebook has more heavily restricted this kind of research since then, with good reason. It is more carefully vetting these kinds of programs and imposing stronger penalties when agreements are breached. But this has also weakened Facebook’s ability to collaborate with the academic and broader research community on ways of understanding its social network. The pattern extends to Twitter, LinkedIn, and other companies that do not allow access to data about user activity, political advertising, and online disinformation campaigns.
The world’s largest social network recently closed off open access to data about advertising, making it much more difficult to identify cheaters and understand how elections are fought and monitored online. This is now true not only for the United States, but also in global elections, from Brazil, Colombia, and Mexico over the last year, to others like Indonesia, Nigeria, South Africa, Ukraine, and the European Union in 2019.
Reputable research and development organizations such as Mozilla, ProPublica, and others that were cut off should have access to this kind of anonymized information to help understand the effects of these campaigns.
Companies are also sharing data with each other in third-party arrangements. For example, Facebook has shared data with Spotify and Netflix, and Google shares Android data with app makers and social networks, all in exchange for benefits, often without strong protections or policies in place. These interlinkages will only grow more important as the ecosystems become more robust.
We are already years removed from the era of single-user, fixed desktop machines, having moved to networked, portable computers that now fit in your pocket. The application ecosystems that Google and Apple have developed created new opportunities for data collection that consumers do not recognize or understand.
Companies should create better systems to monitor these problems and report them to relevant authorities as they occur. They need to adopt greater transparency in their processes and networks, including how they govern the data that we entrust them with.
Companies should have to abide by strong data protection regulations and responsibilities, and governments need recourse to stronger penalties when they do not comply. These organizations are now trusted with our political processes, identities, and social networks. They must be held responsible for these important components of our online lives and societies.
The European Union’s General Data Protection Regulation (GDPR) provides a new model for the management and control of user data throughout its member states and for those who travel through them. It creates penalties for those who abuse data and requires companies, governments, and other organizations to reconsider how they handle personal information. This has the potential to change the way campaigns handle personal information and how analytics companies provide them with this data.
The GDPR provides a clear framework for companies to collect voter data and safeguard it while avoiding the use of data that has not been explicitly provided. This framework is important not only because of the system it puts in place to ensure user data is taken care of, but also because it institutes strong penalties if campaigns misuse this information or do not adequately safeguard it. This is an important shift, because it puts the onus on companies to operate ethically amid the incredible changes in how campaigns are run in the digital age.
One of the major issues that governments, parties, and societies around the world are struggling with is a shift away from traditional campaigns using print, radio and television media to campaigns that are increasingly organized, fought, and won online.
Developing stronger controls around these transfers and negotiations will be crucial to defining future realities online that protect privacy, freedom of expression, independent media, and other core democratic principles. GDPR and laws such as the Brazilian General Data Protection Law or South Korea’s Personal Information Protection Act, and a host of other projects springing up in different forms around the world show different ways forward.
India, the world’s largest democracy, has attempted to grapple with the problem by developing a data protection authority that would help to regulate companies and government agencies that hold large amounts of user data.
An independent authority of this kind is a crucial element of any system, providing a means of oversight and far more effective enforcement, particularly in cases where data is used to further political aims. Keeping such an authority open, neutral, and independent helps ensure these objectives.
All of these regulations, particularly in large countries such as Brazil, India and supranational bodies like the EU, are slowly changing the policies and norms around personal data and leaving the systems of relatively weak data regulation behind. The potential for these new regulations to change political culture is also significant, particularly when they are carefully tailored to target issues such as political advertising and abuse of voter data.
Beyond the GDPR, the EU is also working to implement a Code of Practice on Disinformation for tech companies, social networks, and advertisers. The code includes provisions on political advertising and the targeting of users, ensuring transparency around these practices and their governance. The signatories also commit to giving researchers access to data on their systems to help better understand these problems. This is another approach that tries to establish and reaffirm norms of conduct online, one with the potential to change attitudes but which stops short of outright regulation. As Facebook’s shutdown of research around political advertising shows, however, such codes also have limits.
Regimes that abuse consumer data illustrate the risks of unrestrained systems of data collection and use. Besides organizations like Cambridge Analytica, which consulted with almost complete impunity in countries with little experience or regulation in these technical areas, governments holding large stores of user data also threaten political dialogue, increasingly menacing democracies internationally. Russia demonstrated this danger in its meddling in the 2016 U.S. election and the Brexit referendum.
Without strong regulations to force companies to keep track of what’s happening on their platforms and prevent abuse of data, democratic governments are leaving the door wide open to these kinds of abuse.
In the United States, the Federal Communications Commission has recently weakened consumer protections by allowing third-party data sharing, and Congress has not managed to enact new data protection initiatives. Meanwhile, major breakdowns in how these companies moderate content and govern themselves have been revealed repeatedly.
Given such weak systems of oversight, enforcement, and control, there are many ways for companies to improve. Because the U.S. is home to most of the world’s most influential social networking and technology companies, its laws have great influence on how those companies are governed and operate globally. These enterprises can be run more ethically, with more open access to information.
As of today, a patchwork of regulations and organizations has not been up to the task of oversight and enforcement. Laws will have to be rewritten, particularly with respect to elections and electoral campaigns, parties, and other connected organizations in the private sector: how they are run, how they care for data, and what responsibilities they bear in managing it in political contexts.
Governments also need systems of research and verification. Data must be treated as sacrosanct, safeguarded much as nuclear, chemical, or biological materials are. We must recognize this information, and how it is used, as a core component of how our societies and political systems are organized. The governance and policies of these networks and data stores should also be transparent, with strong oversight and enforcement provisions.
Authoritarian countries such as China, Russia, Iran and their networks globally have weaponized systems of social control, while corrupt corporate practices have demonstrated the dangers of unlawful access to personal information, alongside the creeping interrelated issues of surveillance and censorship.
If democratic governments are to succeed in confronting authoritarians in the online space, they will have to re-emphasize the importance of the democratic values at the heart of political systems in the 21st century, partly by ensuring better management and protection of our personal information. They need to support policies that fortify democratic institutions and principles online; building stronger data protection regimes is critical to this project and, ultimately, to a more open, interconnected, and free online public sphere.
Ultimately, building security, privacy, accountability, and transparency into the governance of online data will help lay the foundation of democratic government in the future.
Daniel Arnaudo is a senior program manager for governance at the National Democratic Institute, covering the intersection of democracy and technology, with a special responsibility for developing projects to counter and track disinformation worldwide.