The Future of Disinformation – and how to stop it
By Justin Hendrix, Executive Director, NYC Media Lab
The world has become increasingly aware of the threat of misinformation and disinformation delivered through vast, largely unregulated social media networks and consumed primarily on addictive mobile devices.
The ambitions and means available to bad actors are shaped by the technology and media ecosystem of the day. Even though we have yet to contend with many of the problems and faults that have emerged in today’s ecosystem, we must look ahead to imagine what threats we may face in the future, so that we can better shape public policy, educate citizens, and inform the design of new technologies and media.
In 2019, with the last year of the decade ahead of us, it is useful to cast our minds forward to 2030. We are about to enter a fifth decade of digital media transformation. The first decade was marked by the arrival of the personal computer. The second, by the advent of the World Wide Web and the Web browser. And in the third and fourth, the rise of mobile devices profoundly changed how we engage with media, information and one another. In the decade ahead, we will enter yet another new period, when a variety of technology vectors will combine to change the landscape yet again.
There are two major forces driving the evolution of media today. Those same forces will determine the future of disinformation.
The first force is advances in the data sciences, including machine learning, artificial intelligence, and related technologies such as natural language processing, computer vision, and more. We aren’t just teaching computers how to read, how to see, and how to talk. We are now teaching them to combine and recombine information in novel ways, to recognize and employ emotion, and to generate content of all types. We are feeding machines vast amounts of personal data to target and manipulate the reaction or behavior of the intended consumer. At the same time, new methods are making it possible to generate photorealistic media without relying entirely on the real world as the source for data. This creates incredible opportunities, and some pretty daunting challenges.
The second major force in media today is the evolution of interface technologies. We are all reliant on – even addicted to – the devices we use to engage with most media. We carry smartphones in our pockets, staring at them intermittently, sometimes even hundreds of times every day. But a new generation of interfaces is bringing information into our environment in new ways. Voice interfaces, brain and neural interfaces, virtual and augmented reality devices and more will change the ways we interact with computers, with content, and with one another.
Here come the AI impostors
Advances in data science and interface technologies will be major forces in a variety of ways. For instance, in the next few years we will see the emergence of a broad variety of artificially intelligent characters with which we will more frequently interact. I recently hosted a working group of experts on this subject, convening people working on artificial intelligence, interaction design, and related fields. We asked them to predict when they believe we will regularly see AI characters in our lives. The majority believed that this would take place in the next three to five years. These characters may be animated, taking any form, or they may be hyper-realistic, passing themselves off as human. They will engage with us in all aspects of our lives, from entertainment to education to healthcare to commerce. For many, these characters will serve as the first point of contact with the Internet, and thus with news, information and politics.
There are already glimpses of what this world might look like: thousands of chatbots and other automated systems engage – and sometimes frustrate – us today. But consider projects like “Baby X,” out of the Laboratory for Animate Technologies at the Auckland Bioengineering Institute. Baby X is a digital creature imbued with emotional characteristics and realistic expressions that responds to conversational stimuli.
Or consider Miquela Sousa, otherwise known as “Lil Miquela,” who is billed as a Brazilian model and musician but is in fact the fictional creation of an agency. She isn’t real, but she is a legitimate celebrity, with more than a million followers on Instagram and other social networks. It’s not hard to see the trend line. Eventually, we’ll regularly interact with a variety of digital impostors, composites of characteristics that may be designed to our individual preferences, with synthesized personalities.
We will all have the ability to easily create puppets out of any person or object or voice, using deep learning methods.
For instance, methods presented by Stanford researchers at the computer graphics conference SIGGRAPH this year show just how advanced the puppeteering of characters from video inputs already is.
Generative adversarial networks make it possible to create puppets from any video that are incredibly difficult to distinguish from authentic footage. Some people have put this to use to re-animate world leaders to say what they wish; others have used the technology to create pornographic “deep fakes,” as the results of these methods are now termed.
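To see why such forgeries become so hard to detect, it helps to look at the adversarial training loop itself. The sketch below is a deliberately toy illustration in Python, not any production deepfake system: a one-dimensional “generator” learns to imitate a Gaussian distribution of “real” data by fooling a logistic “discriminator.” Every name, parameter, and learning rate here is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data: samples the generator must learn to imitate.
real_mean = 4.0
def sample_real(n):
    return rng.normal(real_mean, 1.0, size=n)

# Generator: a linear map from noise, G(z) = a*z + b.
# Discriminator: a logistic classifier, D(x) = sigmoid(w*x + c).
a, b = 1.0, 0.0          # generator parameters
w, c = 0.1, 0.0          # discriminator parameters
lr, batch, steps = 0.05, 64, 2000

start_gap = abs(np.mean(a * rng.normal(size=1000) + b) - real_mean)

for _ in range(steps):
    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    x_real = sample_real(batch)
    x_fake = a * rng.normal(size=batch) + b
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    grad_w = np.mean(-(1 - d_real) * x_real) + np.mean(d_fake * x_fake)
    grad_c = np.mean(-(1 - d_real)) + np.mean(d_fake)
    w -= lr * grad_w
    c -= lr * grad_c

    # Generator step: push D(fake) toward 1, i.e. fool the discriminator.
    z = rng.normal(size=batch)
    x_fake = a * z + b
    d_fake = sigmoid(w * x_fake + c)
    grad_a = np.mean(-(1 - d_fake) * w * z)
    grad_b = np.mean(-(1 - d_fake) * w)
    a -= lr * grad_a
    b -= lr * grad_b

end_gap = abs(np.mean(a * rng.normal(size=1000) + b) - real_mean)
print(f"gap between fake and real means: {start_gap:.2f} -> {end_gap:.2f}")
```

The point of the toy is the dynamic, not the scale: because the generator is graded by an ever-improving adversary rather than a fixed rule, its output is pushed toward whatever the discriminator cannot tell apart from reality – which is exactly what makes the resulting forgeries resistant to detection.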
More mundane applications of the technology are slowly creeping into consumer apps. Try Mug Life, which lets you turn any photo into an animated 3D model.
Security experts fear the blurring of the line between what is real and what is synthesized, and the spread of hoaxes and lies that will follow. But these technologies will also create new means of expression, new tools to create art, and new ways to engage people.
We will have to find a balance.
Fiddling with reality
We will see augmented reality applications, in particular, employ these technologies to allow us to explore digital layers permeating the world around us, enabling new products and services and means of communication. Our memories and ideas will be situated in physical space, and every location will contain volumes of information. This will prompt some pretty strange behavior as we learn to navigate the new layers of the world around us. Think Pokémon Go on steroids.
It will also open up new avenues for bad actors who seek to jam reality. All of this will take place in a context of near limitless bandwidth and throughput on 5G wireless networks, which will enable new, much more immersive media products and services than the ones that command our attention today.
Combine all of these changes with the ambitions of the technology companies to advance interface technologies. Beyond pursuing AI, Microsoft, Facebook, Apple, Samsung, and others are investing in the next generation of display technologies. One goal is ultimately to install a perfect display into a contact lens that can beam digital information directly onto our retina. Samsung has filed a patent for such a device. While such an advance is likely not possible before the end of the decade ahead, it’s worth knowing that it is the goal.
Ultimately, advances in physics should make such miniature displays possible. And certainly, various types of neural and brain computer interfaces will be commercialized and deployed well before the end of the decade for consumer applications.
Shifting tech-tonic plates
If we cast our minds forward a decade, we can imagine a different media technology landscape altogether, based upon what is already happening today.
For those concerned with how to ensure an informed democratic citizenry capable of coming to consensus on the major challenges that will face us, there are major questions.
How will the economics of the major technology platforms evolve? How will that define the emerging media ecosystem? Presently, it looks set to concentrate an enormous amount of power in the hands of just a few companies.
How will governments respond to that concentration of power? What new regulations will they impose? In the US and Europe, there are rumblings of regulatory interventions to address various externalities of the technology platforms and the economics they impose on the news media, for instance, as well as to address growing concerns over privacy.
With several high-profile breaches and controversies behind us, we should question whether we want an Internet premised on perpetual surveillance. What does that mean for society? What are the right types of liability for these companies? And are we really just building tools for people to be manipulated in ever more discreet and imperceptible ways?
A grim future?
Far from the early utopian visions of Silicon Valley gurus like the late John Perry Barlow, there are many reasons to fear the implications of where we might be a decade from now. In the United States, we’ve recently summoned our technology leaders to testify in front of Congress. We want answers from them on a number of crises and questions – from election interference to privacy to other malign effects of their products.
Nothing illustrates these problems better than the Russian effort to interfere in US domestic politics over the last few years. From the covert effort to influence the 2016 Presidential election to America’s upcoming midterm elections, Russian operatives have used open social media platforms to stoke divisions and frame issues with the goal of polarizing public opinion.
Or look at the lynchings that followed disinformation on WhatsApp in India. Rumors spread on the messaging app that strangers were coming to harm a community’s children; when the next stranger happened to drive through, they were attacked. At least 20 people have been lynched or beaten to death in such attacks in the past two years.
Or consider the growing body of evidence that heavy use of digital media may have negative effects on mental and physical health. A number of studies point to the dangers, particularly to children and teens, of social media use and smartphone addiction. Depression, distraction, and other toxic side effects are only now being investigated. By the end of the next decade, we will have a broader understanding of the effects of digital media on our health.
What we can do
So where does that leave us, as we look to the decade ahead? Is the future destined to be grim? Can we course correct? People involved in advancing media, technology and policy must put away the breathless enthusiasm for new technology that dominated the past two decades of venture-backed growth in these sectors. Instead, it is a time for us to experiment together, with some new constraints in place, based upon a broader set of concerns for what societal outcomes we seek to produce. We must engage critically with what is possible, creating new technologies and new policies with the interests of democracy in mind. We must consider the implications of what we build, and make that consideration part of how we build.
It’s not possible to imagine one set of legal or regulatory solutions that will prepare us for the future of disinformation. New technologies will always emerge dramatically faster than policymakers can respond. Bad actors will always find new ways to exploit vulnerabilities in any system.
We need a set of principles on what to prioritize for the years ahead. Here are four key areas where I believe we need to invest to prepare for the decade ahead:
Diversity. A key way to avoid bad outcomes and identify faults is to advance the diversity of the teams working on developing media, technology and policy. The goal is to create cultural, social, and design awareness to avoid blind spots that bad actors may exploit. Disinformation and other technology externalities often affect vulnerable populations the most – look at the evidence that Russian disinformation disproportionately targeted black Americans in the run-up to the 2016 election. We need to listen to and involve affected communities in building safeguards against the threat of disinformation going forward.
New thinking on free speech. Law professor Tim Wu’s famous essay, “Is the First Amendment Obsolete?”, remains an excellent summary of the challenge of addressing threats to public discourse while preserving free speech. The First Amendment is both one of America’s signature inventions and one of the greatest stumbling blocks to solving the problem of modern disinformation. We will need to advance our thinking on this subject and a host of related ones – such as what liability the platforms should bear for hosting disinformation – in order to preserve democracy. New initiatives such as Columbia University’s Knight First Amendment Institute will foster this important debate.
Privacy and data protection. It is crucial that we think about privacy and broader data protections in the context of what the data we collect COULD be used for in future applications. The United States needs to consider the kind of broad data protections that the European Union has adopted. And it’s time for a nationwide dialogue on the responsible regulation of technology platforms. Some of the worst potential abuses of the technological advances I detail above become possible only in combination with massive amounts of personal data.
Red teams and research. Media and technology platforms must create independent groups to challenge their own organizations to improve effectiveness by assuming an adversarial role or point of view. The goal should be to find the dangers before the bad guys do, and build solutions. Every new technology product must be considered through the lens of what will happen if someone with nefarious intentions takes advantage of it. This should not stop innovation – but rather provide new constraints. And certainly, to understand the threats of tomorrow, we’ll need the data. We must prize the work of independent researchers – and give them reliable access to data – to study these emergent phenomena and help inform society and policymakers.
Nations face profound challenges in the decade ahead, from climate change to inequality, corruption to threats abroad.
Advances in emerging media technologies carry the promise that we can enable citizens to engage with information, with one another, with machines, with government, with art, with entertainment and with the whole of human experience in new ways that will allow us to overcome our challenges. But the same advances create new threat vectors, which we must prepare for now.
We must recognize not only the opportunity but also the seriousness of what lies ahead and we must solve these challenges together. It is our responsibility.
Justin Hendrix is the executive director of the NYC Media Lab, a public-private partnership between New York City’s industry and its universities, and the founding executive director of RLab, New York’s home for VR, AR and spatial computing. You can read more of his writing on disinformation and misinformation at JustSecurity. The opinions here are his own.
[Image Credit: intueri / Shutterstock]