In July, Trump brought right-wing media producers to the White House to laud their creation and promotion of conspiratorial and false content. "The crap you think of is unbelievable," Trump said. Afterward, some attendees began attacking reporters whose coverage is critical of the administration, personalizing Trump's war on the press. (In October, one attendee recycled a video mash-up he made last year, depicting a fake Trump killing his critics, including reporters, for a GOP forum at a Trump-owned Florida hotel.)
The rampaging president video drew coverage and was seen as another sign of our times. But less transparent forms of disinformation also appeared to be resurfacing in 2019, including harder-to-trace tools that amplify narratives.
In the second 2020 Democratic presidential candidate debate, Rep. Tulsi Gabbard, D-HI, went after California Sen. Kamala Harris. Social media lit up with posts about the attack and Google searches about Gabbard. Ian Sams, Harris' spokesman, responded with a charge that raised a bigger issue.
Sams tweeted that Russian bots had magnified the online interest in Gabbard. Bots are automated programs that behave like people online; their goal is to generate views and, with them, purported concern or even outrage. Sams' tweet marked the first time a presidential campaign had publicly blamed bots for an attack. Social media, especially Twitter, is known for bot activity that amplifies fake and conspiratorial posts. By some estimates, 15 percent of shares on Twitter are automated by bots -- in effect, faked.
Sams' tweet came after speculation from a new kind of source that has become a standard feature of 2020 election coverage: an "analytics company" that said it saw "bot-like characteristics," as the Wall Street Journal put it. Its analysts said they had seen similar spikes during the spring. What happened next was telling.
Harris' staff and the Journal may have been correct that something was artificially magnifying online traffic to wound her campaign. But when tech-beat reporters tried to trace the bots, the evidence trail did not confirm the allegation, and the accusation backfired on her campaign.
That inconclusive finding highlights a larger point about online disinformation in 2020. Attacks in cyberspace may not be entirely traceable, eluding even the best new tools. The resulting murkiness serves one goal of propagandists: planting doubts and conspiracies that eclipse clarity and facts, leaving voters confused.
Sometimes, those doubts can resurface unexpectedly. In mid-October, Hillary Clinton said during a podcast that pro-Trump forces were "grooming" Gabbard to run as a third-party candidate, including "a bunch of [web]sites and bots and ways of supporting her." (In 2016, a third-party candidate hurt Clinton's campaign. Jill Stein, the Green Party candidate, received more votes than the margin separating Trump and Clinton in the closest swing states of Michigan and Wisconsin. That was not the case in Pennsylvania.) Gabbard rejected Clinton's assertion that she was poised to be a 2020 spoiler, saying she was running only as a Democrat. Trump, predictably, used their spat to smear all Democrats.
But bot activity is real whether it can be traced overseas or not. In October, Facebook announced that it had taken down four foreign-based campaigns behind disinformation on Facebook and Instagram. One of the targets of the disinformation campaigns was Black Lives Matter, which told CNN that it had found "tens of thousands of robotic accounts trying to sway the conversation" about the group and racial justice issues.
Three days after Facebook's announcement, Black Lives Matter posted instructions for activists to defend "against disinformation going into 2020." The instructions ask activists to "report suspicious sites, stories, ads, social accounts, and posts," so the group's consultants can trace what's going on, rather than relying on Facebook.
Dirty campaigning is nothing new. Deceptive political ads have long been used to dupe impressionable voters. But online propaganda differs from door flyers, mailers, and campaign ads on radio and TV. Online advertising does not aim at wide general audiences; instead, it targets individuals who are grouped by their values and priorities. The platforms know these personal traits because they spy on users to create profiles that advertisers tap. Thus, online platforms invite personal narrow-casting, which can also be delivered to recipients anonymously.
The major online platforms created their advertising engines to prosper. But government agencies that rely on information about populations -- such as intelligence agencies, military units, and police departments -- quickly grasped the power of social media data, user profiling and micro-targeting. More recently, political consultants also have touted data-driven behavioral modification tactics as must-have campaign tools.
Thus, in 2016, these features enabled Trump's presidential campaign to produce and deliver 5.9 million customized Facebook ads targeting 2.5 million people. This was the principal technique used by his campaign to find voters in swing states, Brad Parscale, his 2016 digital strategist and 2020 campaign manager, has repeatedly said. In contrast, Clinton's campaign had 66,000 ads targeting 8 million people.
Television advertising never offered such specificity. TV ads are created for much wider audiences and thus are far more innocuous. As Emma L. Briant, the British academic and propaganda expert who unmasked the behavioral modification methods deployed on online platforms, noted, these systems can identify traumatized people and target them with messages intended to provoke fragile psyches.
"What they have learned from their [psychologically driven online] campaigns is that if you target certain kinds of people with fear-based messaging -- and they know who to go for -- that will be most effective," she said, speaking of past and present Trump campaigns, pro-Brexit forces and others.