WASHINGTON (AP) — Over the past 11 months, someone created thousands of fake, automated Twitter accounts — perhaps hundreds of thousands of them — to offer a stream of praise for Donald Trump.
Besides posting adoring words about the former president, the fake accounts ridiculed Trump’s critics from both parties and attacked Nikki Haley, the former South Carolina governor and U.N. ambassador who is challenging her onetime boss for the 2024 Republican presidential nomination.
When it came to Ron DeSantis, the bots aggressively suggested that the Florida governor couldn’t beat Trump, but would be a great running mate.
As Republican voters size up their candidates for 2024, whoever created the bot network is seeking to put a thumb on the scale, using online manipulation techniques pioneered by the Kremlin to sway the digital platform conversation about candidates while exploiting Twitter’s algorithms to maximize their reach.
Researchers at Cyabra, an Israeli tech company, discovered the vast bot network and shared their findings with The Associated Press. While the identity of those behind the network of fake accounts is unknown, Cyabra’s analysts determined that it was likely created within the U.S.
To identify a bot, researchers look for patterns in an account’s profile, its follower list and the content it posts. While humans typically post on a wide range of topics, with a mix of original and reposted material, bots tend to post repetitively about the same subjects.
This was the case for many bots that Cyabra identified.
“One account will say, ‘Biden is trying to take our guns; Trump was the best,’ and another will say, ‘Jan. 6 was a lie and Trump was innocent,’” said Jules Gross, the Cyabra engineer who first discovered the network. “Those voices are not people. For the sake of democracy I want people to know this is happening.”
Automated fake accounts, commonly known as bots, gained notoriety after Russia used them in an effort to influence the 2016 presidential election. While big tech companies have gotten better at spotting fake accounts, the network identified by Cyabra shows they remain a powerful force in shaping online political discourse.
The pro-Trump network is actually three different networks of Twitter accounts, created in massive batches in April, October and November 2022. Researchers believe hundreds of thousands of accounts could be involved.
All of the accounts include a name and personal photos of the purported account holder, along with other personal details. Some of the accounts posted their own content, often in reply to real users, while others reposted content from real users, helping to amplify it further.
“McConnell… Traitor!” wrote one of the accounts, in response to an article in a conservative magazine about Mitch McConnell, one of several Republican critics of Trump.
One way to gauge the impact of bots is to measure the percentage of posts about a given topic that are generated by accounts that appear to be fake. For typical online debates, the percentage is often in the low single digits. Twitter itself has said that fewer than 5% of its daily active users are fake or spam accounts.
When Cyabra’s researchers examined negative posts about specific Trump critics, however, they found far higher levels of inauthentic activity. Nearly three-fourths of the negative posts about Haley, for example, were traced back to fake accounts.
The network also helped popularize a call for DeSantis to join Trump as his vice presidential running mate — an outcome that would serve Trump well and allow him to avoid a potentially bitter matchup if DeSantis enters the race.
The researchers also found that the same network of accounts shared overwhelmingly positive content about Trump, contributing to a false picture of his support online.
“Our understanding of what is mainstream Republican sentiment for 2024 is being manipulated by the prevalence of bots online,” the Cyabra researchers concluded.
Gross discovered the triple network while analyzing tweets about different national political figures, noticing that many of the accounts posting the content were created on the same day. Most of the accounts remain active, though they have relatively few followers.
A message left with a spokesman for Trump’s campaign was not immediately returned.
Most bots aren’t designed to persuade people, but to amplify certain content so more people see it, according to Samuel Woolley, a professor and misinformation researcher at the University of Texas whose most recent book focuses on automated propaganda.
When a human user sees a hashtag or piece of content from a bot and reposts it, they’re doing the network’s job for it, and also sending a signal to Twitter’s algorithms to boost the spread of the content further.
Bots can also convince people that a candidate or an idea is more or less popular than it really is, he said. More pro-Trump bots, for example, can lead people to overstate his overall popularity.
“Bots absolutely do impact the flow of information,” Woolley said. “They’re built to manufacture the illusion of popularity. Repetition is the core weapon of propaganda and bots are really good at repetition. They’re really good at getting information in front of people’s eyeballs.”
Until recently, most bots could be identified by their awkward writing or by account names containing long strings of random numbers or nonsensical words. But as social media platforms got better at finding these accounts, the bots evolved.
So-called cyborg accounts are one example: a bot that is periodically taken over by a human user, who can respond to other users in human-like fashion, making the account harder to detect.
Artificial intelligence may soon make bots even sneakier. New AI programs can create lifelike profile photos and posts that sound far more authentic. Bots that sound like real people and deploy deepfake video technology could challenge platforms and users in new ways, according to Katie Harbath, a fellow at the Bipartisan Policy Center and a former director of public policy at Facebook.
“The platforms have gotten so much better at combating bots since 2016,” Harbath said. “But the types that we’re starting to see now, with AI, they can create fake people. Fake videos.”
These technological advances likely ensure that bots have a long future in American politics — as digital foot soldiers in online campaigns, and as potential problems for both voters and candidates trying to defend themselves against anonymous online attacks.
“There’s never been more noise online,” said Tyler Brown, a political consultant and former digital director for the Republican National Committee. “How much of it is malicious or even unintentionally unfactual? It’s easy to imagine people being able to manipulate that.”