
Deepfakes and astroturfing: When simulations manipulate opinions

[Image: Library Month webcard, with the tagline "Read between the lines"]

Oct 19, 2022

Want to spice up your next Zoom meeting? Just as that lawyer appeared in virtual court as a cat, you can now use a webcam app called Xpression Camera to show up to your morning meeting as Mozart, the Mona Lisa, or Elon Musk. Such video tomfoolery can induce a chuckle, but more sophisticated versions of the same technology, using neural networks and artificial intelligence, can also be used to manipulate opinions.

“Deepfakes,” or video impersonations of famous people, have proliferated on TikTok (“deeptomcruise” is one lighthearted example) but have also surfaced in attempts to deceive. In March, this manipulated video of Ukrainian president Volodymyr Zelensky supposedly “surrendering” to Russia circulated widely. The technology behind the most sophisticated deepfakes is designed to create ever more deceptive simulations, relying on machine learning and complex algorithms to refine the mimicry over time.

A key development in deepfake evolution has been the use of generative adversarial networks (GANs): two artificial neural networks set against each other, one generating the fake and the other trying to detect whether it is fake, with the feedback between them used to fine-tune the results. The more insidious uses of deepfakes have included not only attempts to wield political influence but also, and more commonly, the defamation and harassment of women. (In addition to including a good explanation of the technology behind deepfakes, an article by the Canadian Global Affairs Institute notes that a 2019 survey identified 15,000 deepfake videos online, with over 95% targeting women with pornographic content.)
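To make that generator-versus-discriminator feedback loop concrete, here is a minimal toy sketch in Python using PyTorch. It is not how deepfake tools are actually built: the "data" is just a one-dimensional Gaussian, and the network sizes and learning rates are illustrative assumptions, but the adversarial training loop is the same idea in miniature.

```python
# Toy GAN sketch: the generator learns to mimic a simple 1-D Gaussian,
# and the discriminator learns to tell real samples from fakes.
import torch
import torch.nn as nn

torch.manual_seed(0)

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    # "Real" data: samples drawn from a Gaussian centred at 4.0.
    real = torch.randn(64, 1) * 0.5 + 4.0
    fake = generator(torch.randn(64, 8))

    # 1) Train the discriminator to label real samples 1 and fakes 0.
    d_opt.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator to fool the discriminator into outputting 1 on fakes.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()

# After training, the generated samples should cluster near 4.0.
print("mean of generated samples:", generator(torch.randn(1000, 8)).mean().item())
```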

Even if they aren’t overtly offensive, part of the problem with deepfake videos is their sheer novelty, which can promote their spread. One strain of neuroscience research examines how dopamine, a neurotransmitter associated with reward, increases in the presence of novel experiences (see this article for a summary). Even if the deepfake itself is known to be false, there are powerful inducements in the brain to share it -- both the initial dopamine kick from the video’s novelty and the subsequent anticipation of more dopamine hits as the item is ‘liked’ on social media.

Deepfakes are expected to grow more dangerous as propaganda tools as the simulations improve, but for now they tend to exhibit telltale signs of falsity: unnatural-looking eyelid and mouth movements, an overly smooth face, and odd hair, especially facial hair. (Humans, at least so far, tend to be good at spotting things that look off in simulations of other humans -- see this definition of the uncanny valley.) You can also take a screenshot of the video and do a Google reverse-image search to see whether an original version of the video exists, which you can then compare against the doctored one. (See this article for more tips on how to detect a deepfake.)
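If you would rather grab that screenshot programmatically, here is a minimal Python sketch using OpenCV. The file name suspicious.mp4 and the frame number are hypothetical placeholders; you would still upload the saved image to a reverse-image search yourself.

```python
# Minimal sketch: save a single frame from a video file so it can be
# uploaded to a reverse-image search. Assumes OpenCV (pip install opencv-python)
# and that "suspicious.mp4" is a placeholder for the video you want to check.
import cv2

capture = cv2.VideoCapture("suspicious.mp4")
capture.set(cv2.CAP_PROP_POS_FRAMES, 100)  # jump to an arbitrary frame
ok, frame = capture.read()
if ok:
    cv2.imwrite("frame_for_reverse_search.png", frame)  # save the frame as an image
capture.release()
```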

Automated Twitter accounts pose another challenge in sorting out who is human and who isn’t: such automated accounts, or bots, can also be deceptive amplifiers of misinformation. Astroturfing, a term coined in the 1980s to describe faked correspondence in letter-writing campaigns, has come to be applied more broadly to any attempt on social media to simulate grassroots support. On Twitter, bots can do benign things, like posting a notice when a New York Times headline has been edited. But they can also be used to rapidly retweet misinformation, act as misinformation “super-spreaders,” and simulate a broad groundswell of support, or dissent, that may not actually exist. Furthermore, research has shown that humans tend to share dodgy information from bots as often as they share it from verified human-owned accounts. (In other words, in the split second it takes to retweet a Twitter post, it can be hard to tell whether the tweeter is human or not.)

Covid-19- and vaccine-related misinformation on Twitter has made detecting and analyzing bot activity an important aspect of medical research. Twitter has stated that about 5% of its account base consists of undeclared bots, a figure disputed both by Twitter’s former head of cybersecurity and by the company’s prospective purchaser, Elon Musk. This article in the Journal of Medical Internet Research identified about 4% of roughly 200,000 Twitter accounts tweeting about Covid-19 in the early stage of the pandemic as clandestine bots, which were also found to have the most negative content in their tweets: “undeclared bots were generally focused on criticizing political measures, interpersonal blame between senators or governors, and criticism directed at governments or political leaders in relation to the mismanagement of the health crisis.” Political hot-button issues also tend to attract Twitter bot activity. Researchers analyzing 10 million Brexit-related tweets in the month before the 2019 UK general election found that up to 10 percent of the accounts involved were bots, and that bot activity increased remarkably immediately after a nationally televised debate between Jeremy Corbyn and Boris Johnson.

How do you tell if a bot is a bot? The Botometer can be used to analyze a Twitter account’s tweeting history and patterns to determine whether it is acting bot-like. This other tool, by data scientist @conspirator0, can be used without a Twitter account to quickly check a graph of recent tweets -- if the account has been posting 24 hours a day, for example, it is probably software, not a person. Other, more sophisticated bot investigations have adopted such things as the tools of DNA analysis to delve into the randomness and relative similarity among pools of accounts and suss out bot behaviour.
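As a rough illustration of the “posting 24 hours a day” heuristic, here is a small, self-contained Python sketch. The timestamps are invented sample data rather than real tweets; in practice they would come from an account’s recent tweet history.

```python
# Minimal sketch of the "posting around the clock" heuristic.
# The timestamps below are invented sample data, not real tweets.
from collections import Counter
from datetime import datetime

sample_timestamps = [
    "2022-10-01 02:14", "2022-10-01 03:40", "2022-10-01 05:02",
    "2022-10-01 09:31", "2022-10-01 13:07", "2022-10-01 17:55",
    "2022-10-01 21:20", "2022-10-02 00:48", "2022-10-02 04:11",
]

hours = [datetime.strptime(t, "%Y-%m-%d %H:%M").hour for t in sample_timestamps]
by_hour = Counter(hours)

# Print a crude text histogram of activity by hour of day.
for hour in range(24):
    print(f"{hour:02d}:00  {'#' * by_hour.get(hour, 0)}")

# Heuristic: a human account usually shows a multi-hour sleep gap;
# tweets spread across all 24 hours are a red flag for automation.
quiet_hours = sum(1 for hour in range(24) if by_hour.get(hour, 0) == 0)
print("Quiet hours with no tweets:", quiet_hours)
```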

Of course, not every peddler of dodgy information on Twitter is non-human. One similarity with deepfake videos is the use of the same generative adversarial network technique to create fake profile pictures, used both for wholly automated Twitter accounts and by humans masking their identity online. The website https://thispersondoesnotexist.com/ uses this technique; refreshing the page to show a new non-existent person can be quite unsettling. Such fake profile photos can also fool people: in this study of the perception of synthetic faces, the authors note that “…the human ability to accurately distinguish between synthetic and real faces is not better than chance.” That said, @conspirator0 notes that there are still telltale signs of falsity in such images: the eyes are typically centred in the same location in each generated image, and hair and teeth tend to have odd characteristics.
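One simple way to see the “eyes in the same location” tell for yourself is to save a handful of images from thispersondoesnotexist.com into a folder and average them pixel by pixel. The sketch below assumes Pillow and NumPy are installed and that fake_faces/ is a hypothetical folder of saved .jpg images; in the averaged result the eye region stays sharp while everything else blurs out.

```python
# Minimal sketch: average several GAN-generated face images pixel by pixel.
# Assumes Pillow and NumPy are installed and that fake_faces/ is a hypothetical
# folder of images saved from a site like thispersondoesnotexist.com.
from pathlib import Path

import numpy as np
from PIL import Image

SIZE = (256, 256)  # resize everything to a common shape before averaging

images = [
    np.asarray(Image.open(p).convert("RGB").resize(SIZE), dtype=np.float64)
    for p in sorted(Path("fake_faces").glob("*.jpg"))
]

if images:
    mean_image = np.mean(images, axis=0).astype(np.uint8)
    # The eyes stay sharp in the average because GAN face generators
    # place them in nearly the same spot in every image.
    Image.fromarray(mean_image).save("average_face.png")
```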

For now, deepfake videos and automated Twitter accounts with a malicious intent to spread false information are not impossible to spot. However, as such simulations are expected to improve in the future, it’s good to start developing skeptical instincts in order to build mental defences against computer-generated propaganda. Keep the software in its place.