Fake TikTok Accounts Push Election Lies for Profit and Chaos
WSJ Tech News Briefing · August 09, 2024 · 00:13:29


A Wall Street Journal investigation has found a flood of anti-Trump videos on TikTok, generated with the help of artificial intelligence and traced back to hidden overseas networks. WSJ reporters Georgia Wells and John West join host Zoe Thomas to discuss the lies and hyperbole these accounts push and the possible reasons for them.

Sign up for the WSJ's free Technology newsletter.

Learn more about your ad choices. Visit megaphone.fm/adchoices


[00:00:00] It's 4am and you're sucking baby snot through a tube because she's congested. If you love her that much, love her enough to make sure she's buckled in the right car seat. Find out more at nhtsa.gov slash the right seat.

[00:00:11] Brought to you by the National Highway Traffic Safety Administration and the Ad Council. Welcome to Tech News Briefing. It's Friday, August 9th. I'm Zoe Thomas for The Wall Street Journal. Intelligence officials have warned that the 2024 presidential contest could face an unprecedented flood of fake news from foreign actors.

[00:00:34] A WSJ analysis of videos on TikTok has found it's already happening. We'll tell you what kind of content is being served up, how it's spreading and what we know about who is behind it. We're joined today by WSJ reporters Georgia Wells and John West.

[00:00:53] Georgia, fake stories have thrived online since the earliest days of the internet. What's different about these stories? What's happening now is artificial intelligence is making it easier for bad actors to slice and dice and create fake stories much faster at huge scale and for very, very cheap.

[00:01:11] Can you describe a few of these videos for us? One of the more shocking videos claims that Marjorie Taylor Greene is pregnant with Donald Trump's baby. Other videos said Melania Trump was seeking a divorce from the former President Trump.

[00:01:27] Another video claims Trump had assaulted a judge and been thrown in jail. And to be clear, all of these videos are making false claims. Yes, the videos are pushing a massive pile of lies. Sometimes mixed in there are factual statements. John, how difficult are these videos to make?

[00:01:47] Some of the people we spoke to said that using off-the-shelf automation tools popularized in the last few years as well as using AI to voice your scripts and even sometimes write the scripts, it could be very easy to produce these videos en masse.

[00:01:59] Georgia, do we know why these videos were focused on Trump? It's really hard to figure out what the true motives of these players are, but Trump is a very viral character and also critical to national security.

[00:02:13] So it's an easy way for bad actors to spread chaos and undermine institutions. And some of the accounts that you found, they weren't always focusing on Trump, right? Yeah, so in addition to Trump himself as a major figure that was a target of attacks,

[00:02:29] we also had Clarence Thomas and Marjorie Taylor Greene. Now I will say some of the bots actually seem to have flip-flopped. And at some point, either in the past or more recently, were either praising Trump or were attacking President Biden. So it's interesting.

[00:02:45] The overwhelming majority of what we saw, though, was about Trump, and more than that, was against Trump. Did you receive any comment from former President Donald Trump? A spokesman for Trump did not respond to a request for comment. How does TikTok's algorithm help these posts spread?

[00:03:00] TikTok's algorithm is very fine-tuned to serve users more videos based on what they've engaged in in the past. But what this means is if people who are just watching a ton of political content

[00:03:13] or content about Trump come across it and they rewatch it or they kind of show indications they're interested, it can be really easy for these sorts of accounts to just rack up view counts really, really quickly because they get locked into this engagement machine.

[00:03:26] And then TikTok continues to serve their videos to users who watch them and other users like them. So they don't have to take the time to build up a massive following the way people traditionally

[00:03:38] had to on YouTube and Instagram if they wanted to attract huge followings. That's changing on those platforms, but TikTok really pioneered this engagement-based strategy for serving content. How did TikTok respond to those accounts?

[00:03:54] TikTok had already removed close to half of the 91 accounts before we reached out to them over the course of our reporting. But then as we continued reporting, more accounts similar to them kept on popping up and then TikTok did delete some of those, but it just was notable

[00:04:10] that it was this whack-a-mole game where they kept on reappearing. Let's talk about the WSJ's methodology for this analysis. John, tell us how this analysis was conducted. To start, we created 12 bots and we had them set up in a variety of swing states as well as kind

[00:04:28] of in a control group in suburban DC and Maryland. So we had Pennsylvania, Arizona, Michigan, and Georgia. And we have these little bots just hanging out, these little small computers, they're called Raspberry Pis, and they're in reporters' homes in these states. And they are

[00:04:42] scrolling TikTok many hours a day. And they picked up... basically the way TikTok's algorithm works is that it shows you more of what you have dwelled on. So we've had two kinds of bots. One kind of

[00:04:53] bot just rewatched political content over and over and over again. And another kind of bot only rewatched content from mainstream news outlets, whether it was political or not. Both kinds of bots got served this content from these fake accounts.
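The two bot "dwell" policies described here can be sketched in miniature. To be clear, this is a hypothetical illustration, not the Journal's actual tooling: the real bots were Raspberry Pis driving the TikTok app itself, and the `Video` fields, topic labels, and sample feed below are invented stand-ins.

```python
from dataclasses import dataclass

# Illustrative sketch only. Everything here (Video fields, topic labels,
# the sample feed) is an invented stand-in for the Journal's real setup.

@dataclass
class Video:
    account: str
    topic: str              # e.g. "politics", "sports", "food"
    from_news_outlet: bool  # posted by a mainstream news outlet?

def political_policy(video: Video) -> bool:
    """Bot type 1: dwell on (rewatch) political content only."""
    return video.topic == "politics"

def news_policy(video: Video) -> bool:
    """Bot type 2: dwell on mainstream-news content, political or not."""
    return video.from_news_outlet

def run_bot(feed, policy):
    """Scroll the feed; return the videos the bot chose to rewatch,
    i.e. the dwell signals it feeds back to the recommender."""
    return [v for v in feed if policy(v)]

feed = [
    Video("@newsdesk", "politics", True),
    Video("@newsdesk", "sports", True),
    Video("@anon123", "politics", False),
    Video("@cooking", "food", False),
]

print([v.account for v in run_bot(feed, political_policy)])
print([v.account for v in run_bot(feed, news_policy)])
```

The key point the sketch captures is that the two policies overlap: a fake account posting political content can reach both kinds of bots, which is what the reporters observed.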

[00:05:06] Apart from being served this content, what other patterns did the bots find? Well, really early on, we noticed that just Trump content in general, our bots were getting a lot more of it. So TikTok was serving more than two times as much Trump content as Biden content

[00:05:20] before Biden dropped out of the race. So that's pretty striking. Another thing that emerged really fast was we would just see these like AI-voiced weird turns of phrase. They called him old Donnie instead of President Trump. They would say very kind of windy, circuitous

[00:05:36] ways of speaking about, oh, stay tuned for the next installment. You'll never believe what happens next. And we noticed that AI voice and then these repeated lies kept coming up again and again and again. And pretty soon we started to dig into those accounts after our bots had

[00:05:50] been served a lot of their content. How much content were these accounts putting out? So in total, we have a little over 10,000 posts from 91 accounts. In March, they posted over 3,000 times collectively. But keep in mind that the accounts were being whack-a-moled by TikTok. So

[00:06:08] at no point were all 91 of these accounts that we found posting at the same time. Do we have a sense of how their videos performed in terms of engagement? We do. In aggregate, they were viewed over 108 million times. They also got 300,000 comments and 2.5 million likes.
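As a rough sanity check on the aggregates just quoted, the totals imply averages like these (all figures are rounded totals, so treat the results as approximations):

```python
# Back-of-the-envelope averages from the aggregate figures quoted above.
posts = 10_000          # "a little over 10,000 posts"
views = 108_000_000     # "viewed over 108 million times"
comments = 300_000
likes = 2_500_000

avg_views_per_post = views / posts
engagement_rate = (likes + comments) / views  # likes + comments per view

print(f"{avg_views_per_post:,.0f} views per post")   # ~10,800
print(f"{engagement_rate:.1%} like+comment rate")    # ~2.6%
```

Roughly 10,800 views per post is a notably high average for accounts with no organic following, which is consistent with the engagement-loop dynamic described earlier.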

[00:06:26] All right. After the break, what's the purpose of these thousands of videos that push political lies and hyperbole? We'll have more on that when we come back. It's 4 a.m. and you're sucking baby snot through a tube because she's congested.

[00:06:46] If you love her that much, love her enough to make sure she's buckled in the right car seat. Find out more at nhtsa.gov slash the right seat. Brought to you by the National Highway Traffic Safety Administration and the Ad Council. We're back now with our reporters Georgia

[00:07:03] Wells and John West. So John, the journal analyzed 91 TikTok accounts. Are these accounts connected? So TikTok says that these are distinct separate networks and they're able to look at their internal data to find that out. Now from what a user sees, they look very connected. And from

[00:07:22] what our analysis shows, there's a lot of complex overlapping ways that these accounts have things in common. So whether it's behavior or the actual content they're posting, there's a lot of overlap in the ways those distinct networks, according to TikTok, seem to be behaving on the platform.

[00:07:38] Where are these accounts based and how did you figure that out? TikTok said the accounts were based in China, Nigeria, Vietnam, and Iran. John had some technical signals he was able to see that confirmed some of those. Yeah, that's right. So we were able

[00:07:54] to see that the language settings for some of the accounts were set to Chinese and those accounts also had other ways of showing it. So one of the accounts posted this errant video intermixed with

[00:08:05] all this Trump content. It's a video of a skyline for a city in China. We also were able to see that Vietnam was a region setting that some of the accounts had set their TikTok apps to and that

[00:08:15] was visible to us. But again, TikTok confirmed for us China and Vietnam and then they added to the list Iran and Nigeria. And what has TikTok said specifically about these accounts and their content? A spokeswoman for TikTok said that TikTok continually takes action against spam accounts

[00:08:33] and she also said that TikTok will continue to improve how they fight deceptive behavior on an ongoing basis. TikTok removed these accounts both before and after we flagged them to them and TikTok said the reason it removed these accounts was because they violated its rules

[00:08:48] against spam and against deceptive behavior. Georgia, how does TikTok moderate this kind of content? So TikTok works with fact checkers to verify certain claims on its platform and in some cases when these fact checkers and TikTok determine that the content breaks its rules, often for being what

[00:09:07] they consider like harmful misinformation, TikTok removes content and accounts. And then in messier cases when accounts post information that can't be verified, TikTok sometimes labels the content and stops recommending that content to users. Are there some numbers about how many accounts

[00:09:24] they've taken down based on this? TikTok said it has removed 24 influence operations from its platform in the first half of this year and in the first quarter of this year TikTok said it removed

[00:09:34] more than 170 million fake or inauthentic accounts from its platform, and that kind of speaks to the arms race between the platforms and bad actors here. Could making money be a motive?

[00:09:46] TikTok has a creator rewards program and this program has a certain like threshold for how accounts can qualify for the program, how many followers accounts need to have and how many views.

[00:09:57] And for accounts that make it into this program, TikTok pays the accounts per view and also based on other metrics, and that can really add up. And so we did some back-of-the-envelope math: if

you assume that all the accounts that we identified had been in this program and had been monetizing at the lower range that TikTok pays creators, they could have made $10,000, which in many developing regions is well above a middle-class income. Did the journal attempt to contact these

[00:10:27] TikTok accounts for comment? Yeah, so I tried to message all of them. I didn't manage to actually get messages through to all of them because they were getting taken down as I was reaching out to

[00:10:37] them, but one did reply. The one that replied suggested to me that it was for sale; it said, if you buy this account, please contact me. I replied to the person and tried to ask how much

[00:10:48] money they were selling the account for, but I didn't hear back from them, and then their account was banned. So that suggested to us that at least that account could have been a part of a for-profit

[00:10:59] operation because in terms of ways accounts make money there's the creator rewards program that I already mentioned but there's also this kind of illicit market of buying and selling accounts

that this account may have been a part of. So is there any way to know what the intention of sharing these fake stories might be? It's really hard to get inside the heads of the people

[00:11:20] with their hands on the keyboards, or of the people directing this. But when we talked to experts, a lot of them said these sorts of operations are really good at spreading chaos and undermining trust in institutions. Some of the accounts flip-flopped, like John mentioned, and that can

[00:11:35] be indicative of for-profit operations. All right, those were WSJ reporters Georgia Wells and John West. And that's it for Tech News Briefing. Today's show was produced by Julie Chang. I'm your host, Zoe Thomas. Jessica Fenton and Michael LaValle wrote our theme music. Our supervising producer is

[00:11:53] Catherine Millsop. Our development producer is Aisha Al-Muslim. Scott Salloway and Chris Zinsley are the deputy editors and Felana Patterson is the Wall Street Journal's head of news audio. We'll be back this afternoon with TNB Tech Minute. Thanks for listening.