Last week, as thousands of Central American migrants made their way northward through Mexico, walking a treacherous route toward the US border, talk of "the caravan," as it's become known, took over Twitter. Conservatives, led by President Donald Trump, dominated the conversation, eager to turn the caravan into a voting issue before the midterms. As it turns out, they had some help: propaganda bots on Twitter.
Late last week, about 60 percent of the conversation was driven by likely bots. Over the weekend, even as the conversation about the caravan was overshadowed by more recent tragedies, bots were still driving nearly 40 percent of the caravan conversation on Twitter. That's according to an assessment by Robhat Labs, a startup founded by two UC Berkeley students that builds tools to detect bots online. The team's first product, a Chrome extension called BotCheck.me, allows users to see which accounts in their Twitter timelines are most likely bots. Now it's launching a new tool aimed at news organizations called FactCheck.me, which allows journalists to see how much bot activity there is across an entire topic or hashtag.
Take the deadly shooting at the Tree of Life synagogue in Pittsburgh over the weekend. On Sunday, one day after the shooting, bots were driving 23 percent of the Twitter activity related to the incident, according to FactCheck.me.
"These big crises happen, and there’s a flurry of social media activity, but it's really hard to go back and see what’s being spread and get numbers around bot activity," says Ash Bhat, a Robhat Labs cofounder. So the team built an internal tool. Now they're launching it publicly, in hopes of helping newsrooms measure the true volume of conversation during breaking news events, apart from the bot-driven din.
Identifying bots is an ever-evolving science. To develop their methodology, Bhat and his partner Rohan Phadte compiled a sample set of accounts they were highly confident were political propaganda bots. These accounts exhibited unusual behavior, like tweeting political content every few minutes throughout the day or amassing a huge following almost instantly. Unlike automated accounts that news organizations and other entities sometimes set up to send regularly scheduled tweets, the propaganda bots that Robhat Labs is focused on pose as humans. Bhat and Phadte also built a set of verified accounts to represent standard human behavior. They then built a machine learning model that could compare the two and pick up on the patterns specific to bot accounts. They wound up with a model that they say is about 94 percent accurate in identifying propaganda bots. FactCheck.me does more than just track bot activity, though. It also applies image recognition technology to identify the most popular memes and images about a given topic being circulated by both bots and humans.
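Robhat Labs hasn't published the details of its model, but the general recipe the team describes (a labeled set of likely bots, a labeled set of verified humans, and a supervised classifier trained on behavioral signals) can be sketched in a few lines. The feature names, the labeled_accounts.csv file, and the choice of a random forest below are illustrative assumptions, not details of the actual system.

```python
# A minimal sketch of the kind of classifier described above, not Robhat Labs'
# actual model. The features, the CSV file, and the random forest are all
# assumptions made for illustration.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Each row describes one account: behavioral features plus a label
# (1 = suspected propaganda bot, 0 = verified human account).
df = pd.read_csv("labeled_accounts.csv")

features = [
    "tweets_per_hour",           # e.g. political content posted every few minutes
    "followers_gained_per_day",  # huge followings amassed almost instantly
    "retweet_ratio",             # share of activity that is retweets vs. original posts
    "account_age_days",
]
X, y = df[features], df["is_bot"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

In practice the hard part is the labeling and the feature engineering rather than the model itself; the 94 percent figure refers to Robhat Labs' own evaluation, not anything a sketch like this would reproduce.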
The tool is still in its earliest stages and requires Bhat and his eight-person team to pull the numbers themselves each time they get a request. Newsrooms interested in tracking a given event have to email Robhat Labs with the topic they want to track. Within 24 hours, the company will spit back a report. Reporters will be able to see both the extent of the bot activity on a given topic and the most shared pieces of content pertaining to that topic.
There are limitations to this approach. It's not currently possible to view the percentage of bot activity over a longer period of time. FactCheck.me also doesn't indicate which way the bots are swaying the conversation. Still, it offers more information than newsrooms have previously had at their disposal. Plenty of researchers have studied bot activity on Twitter as a whole, but FactCheck.me allows for narrower analyses of specific topics, almost in real time. Already, Robhat Labs has released reports on the caravan, the shooting in Pittsburgh, and the Senate race in Texas.
Twitter has spent the last year cracking down on bot activity on the platform. Earlier this year, the company banned users from posting identical tweets to multiple accounts at once or retweeting and liking en masse from different accounts. Then, in July, the company purged millions of bot accounts from the platform and booted tens of millions of accounts that it had previously locked for suspicious behavior.
But according to Bhat, the bots have hardly disappeared. They've just evolved. Now, rather than simply sending automated tweets that Twitter might delete, they work to amplify and spread the divisive tweets written by actual humans. "The impact of these bot accounts is still seen and felt on Twitter," Bhat says.
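One simple way to illustrate the amplification pattern Bhat describes is to look at how much of an account's recent activity is retweets of other people's posts rather than original writing. The snippet below is a hypothetical illustration, not Robhat Labs' pipeline; the simplified tweet dictionaries stand in for whatever data a real system would pull from Twitter.

```python
# A hypothetical illustration of the amplification signal described above:
# the share of an account's recent activity that is retweets rather than
# original posts. The "is_retweet" field is an assumption for this sketch,
# not Twitter's API schema.
def retweet_ratio(tweets):
    """Return the fraction of tweets that are retweets of someone else's post."""
    if not tweets:
        return 0.0
    retweets = sum(1 for t in tweets if t.get("is_retweet"))
    return retweets / len(tweets)

# Three recent tweets from one account: two retweets, one original post.
recent_activity = [
    {"is_retweet": True},
    {"is_retweet": True},
    {"is_retweet": False},
]
print(round(retweet_ratio(recent_activity), 2))  # prints 0.67
```

A high retweet ratio proves nothing on its own; it only becomes interesting alongside the other signals described above, like round-the-clock posting and sudden follower growth.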