Sunlight

How Transparency Can Help Defuse Disinformation From Botnets, Sockpuppets, and Online Trolls

By Yochai Benkler


Editor’s Note: In 2017, when Harvard Law School Professor Yochai Benkler analyzed how the Honest Ads Act would affect election advertising disclosure, he also noted that it would not address botnets, sockpuppets, or lies from paid social promoters. We’ve syndicated his article below under a Creative Commons license.

In Part 1 of this post, I focused on how the Honest Ads Act bill would be a valuable step forward in normalizing the status of political ads online. Under this bill, online political ads would be treated more consistently with the influence they now have on the political process—like television or radio, rather than like a minor part of the equation. But while the platforms, in particular Facebook and Google, play a large role, they are certainly not the only ones selling behavioral marketing online. And, of course, advertising is not the only source of misleading and manipulative political communications.

What About Botnets, Sockpuppets, and Paid Social Promoters?

A major class of concern in the discussions since the election has been the rise of botnets and sockpuppets—essentially automated and semi-automated accounts used by human beings to simulate authentic social mobilization and concern. While much of the debate focuses on “bots” and combines anxiety about automation with anxiety about coordinated propaganda and manipulation, the basic problem is that social networks are susceptible to coordinated efforts, whether carried out by paid agents of a government, as in China or Russia, or by more-or-less sophisticated automated accounts that are rapidly improving in their ability to mimic authentic human accounts. The field of bot and coordinated-campaign detection is still in its infancy, and the rapid evolution of bots makes approaches that were best of breed even two or three years ago outdated. Nonetheless, the flow of studies able to identify at least the simpler bots makes it reasonable to assume that the capability to manage such coordinated campaigns is real and could move elections and public debate: campaigns aimed at simulating public engagement and attention, and at drawing other, real citizens to follow the astroturfing networks in agenda setting, framing of issues, and the levels of credibility assigned to various narratives. A related strategy involves marketing firms paying influencers—highly connected individuals in their specific networks—to communicate within their networks.

So, if regulation stopped at “paid advertising” as traditionally defined, the solution would be significant, but partial even with regard to paid advertising. Historically, when a broadcast station or editor was a bottleneck that needed to be paid to publish anything on the platform, defining “paid” as “paid to the publisher” would have made sense. Here, however, a major pathway to communicating on a platform whose human users are provided the service for free is to hire outside marketing firms that specialize in using that free access to provide a paid service to the person seeking political influence. Search engine optimizers who try to manipulate Google search results to come out on top, and behavioral marketing firms that use coordinated accounts, whether automated or not, to simulate social engagement, are all firms that offer paid services to engage in political communication. The difficulty posed by such campaigns is that they will not appear on the platforms as paid advertising, because those who run them simulate authentic accounts on the networks. The marketers—whether they are a Russian information operations center or a behavioral marketing firm—engage with the network through multiple accounts, as though they were authentic users, and control and operate those accounts from outside the platform.

The Honest Ads Act definition of “qualified Internet or digital communication” is “any communication which is placed or promoted for a fee on an online platform.” This definition is certainly broad enough to encompass the products sold by third-party paid providers whose product is to use the free affordances of the online network to produce the effect of a political communication, and to do so for a fee. As a practical matter, such a definition would reduce the effectiveness of viral political marketing that uses botnets or sockpuppets to simulate authentic grassroots engagement, because each bot, sockpuppet, or paid influencer would have to carry a disclaimer stating that it is paid and identifying the source of payment. Given, however, that the whole purpose of such coordinated campaigns is to create the false impression that the views expressed are expressed authentically in the target Facebook or Twitter community, the burden on expression is no greater than the burden on any political advertiser who would have preferred to communicate without being clearly labeled as political advertising. The party seeking to communicate is still permitted to communicate, to exactly the same people (unless the false accounts violate the platform’s terms of service, but it is not a legitimate complaint for the marketers to argue that the campaign disclosure rule makes it harder for them to violate the terms of service of the platforms they use). The disclaimer requirement would merely remove the misleading representation that the communication comes from a person not paid to express such views.

While the general language of the definition of a qualified Internet communication is broad enough to include paid bot and sockpuppet campaigns, and the disclaimer provisions described in Part I of this post seem to apply as well, the present text of the bill seems to exclude such campaigns from the provision that requires online platforms to maintain a public database of advertisements. The definition of “qualified political advertisement,” to which the database requirement applies, includes “any advertisement (including search engine marketing, display advertisements, video advertisements, native advertisements, and sponsorships).” It would be preferable to include “coordinated social network campaigns” explicitly among the list of examples of “advertisement.” It is possible, and certainly appropriate, for courts to read “native advertisements” to include a sockpuppet or bot pushing a headline or meme that supports a candidate or campaign. But there is a risk that courts would not. Furthermore, the provision requires platforms only to keep a record of “any request to purchase on such online platform a qualified political advertisement,” and advertisers are only required to provide the information necessary for the online platform to comply with its obligations. It would be preferable to clarify that advertisers owe an independent duty to disclose to the platform all the information the platform needs to include paid coordinated campaigns in the database, even if the request for the advertisement and the payment are not made to the platform.

As with the more general disclaimer requirements applied to explicit advertising, clarifying that the disclosure and disclaimer requirements apply to coordinated campaigns will not address every instance of media manipulation. A covert foreign information operation will not comply with laws intended to exclude it. But just as the disclosure and database requirements for advertisements would limit the effectiveness of efforts by would-be propagandists (campaigns, activists, or foreign governments) to leverage the best data and marketing techniques that Google and Facebook have to offer, so too would an interpretation of the bill that extends to commercial marketing firms providing synthetic social-behavioral marketing through paid sockpuppets, botnets, or human influencers. This will not address all propaganda, but it will certainly bring some of the most effective manipulation tactics into the sunlight.

Intentional or Reckless Falsehoods

At the end of the day, however, the Honest Ads bill deals with what it can, not with what it cannot. A large concern in communications leading up to the election was simply the overwhelming presence of intentional lies or reckless falsehoods to which the public was exposed. In many cases these were propagated by hyper-partisan online media that played a central role in the public discourse, primarily on the right wing. One of the most tweeted stories on InfoWars during the campaign season ran in the primaries, under the headline “Jeb Bush close Nazi ties exposed.” The source repeatedly cited in the campaign to smear Hillary Clinton as somehow connected to pedophilia was an interview on Breitbart with Blackwater founder Erik Prince, who claimed that he had inside information that FBI director James Comey had reopened the Clinton email investigation a week before the election because the emails retrieved off Anthony Weiner’s machine included email evidence that Hillary Clinton “went to this sex island with a convicted pedophile….” And, by December of 2016, 46% of Trump voters responding to a YouGov poll “gave at least some credence to the Pizzagate rumors.”

Nothing under present election law or the proposed bill would come near to touching this kind of intentional lying. While in commercial settings the FTC can police misleading advertising, in the political context courts have been more reluctant to uphold prohibitions on false advertising. Some states have tried to maintain laws prohibiting false statements in the context of a political campaign. The Sixth Circuit struck down an Ohio law in 2016, after it had already once been litigated up and down the federal system in Susan B. Anthony List v. Driehaus. Here, the court of appeals recognized that keeping false advertising out of political discourse is a compelling state interest, but doing so in a way that was sufficiently narrowly tailored, given heightened First Amendment concerns about core political speech, was far from easy. In outlining the failings of the law before it, the opinion did suggest that a more narrowly tailored law might survive. Such a law would seem to require some mechanism for screening out frivolous claims of falsehood and would impose a materiality requirement—that is, that the falsehood be material. The court further suggested that the process for determining falsehood had to be extremely expeditious, to fit the time cycle of an election campaign, and could apply only to the speaker, not the intermediary. In truth, however, the designation of political false advertising as core political speech subject to strict scrutiny, and the courts’ reluctance to open up a litigation avenue in the midst of a political campaign, suggest that it would be extremely difficult to persuade any court to accept any such process as consistent with the First Amendment. The Susan B. Anthony List court cited a number of other courts that reached a similar conclusion. Similarly, the approach of the recent German law seeking to address the problem, which requires online media to remove hate speech within twenty-four hours, would not survive scrutiny in the United States.

The more expensive and tortuous route, one that will work in the long run only by undermining the business model of outlets that draw their audience by disseminating defamatory falsehoods about opposing candidates, would be defamation law. Given the high barrier that public figures must clear under New York Times v. Sullivan, and given how long cases can last, it would not be a strategy that could work within an election cycle. And, given the high burdens, it would be a risky strategy for the defamed candidate. But as Hulk Hogan’s suit, which bankrupted Gawker Media, showed, defamation law offers one path to make the business of false and defamatory news less attractive. Under normal circumstances, such a path should give pause to anyone properly concerned with robust political speech. Certainly, defamation has been used in many countries as a way of silencing the government’s critics, and the strict limits under the New York Times v. Sullivan line of cases make this path appropriately difficult. But the level of bile and sheer disinformation that characterized the 2016 election is such that raising the cost of reckless or intentional defamatory falsehood as a business model may, at least, be a reasonable path to moderating the most extreme instances of falsehood. Whether such an approach is worth the candle depends on one’s empirical answer to the question of how much of the defamation comes from fly-by-night fake news outlets, which would be effectively judgment proof, and how much from a core number of commercial sites that have made it their business model to sell false information and peddle conspiracy theories.

Yochai Benkler is the Berkman Professor of Entrepreneurial Legal Studies at Harvard Law School, and faculty co-director of the Berkman Klein Center for Internet and Society at Harvard University.

[Photo Credit: Aah Yeah/ Flickr]
