Wading through the deluge of information and political messaging ahead of an election is hard enough. When you add in fake Twitter accounts that try to manipulate what you see, it gets even more complicated.
But while bots attract a lot of attention and stoke fear, researchers say the actual threat is overblown, and that they’re only one part of a bigger system aimed at manipulating opinion online.
“There’s a lot of attention on bots as the central cause of our media woes,” said Fenwick McKelvey, an assistant professor at Montreal’s Concordia University who researches social media platforms. “Bots are, at best, a small part of it.”
Twitter bots are automated accounts run by software instead of human beings and programmed to automatically tweet, retweet, like and follow other accounts. They’re not always malicious. Some bots are programmed to be helpful, like the account YK Climate Watch, which automatically tweets anomalies in temperature and precipitation in Yellowknife to track the impacts of climate change.
The bots you’ve probably heard the most about are ones that are designed to impersonate people in order to boost certain tweets, accounts or topics. These bots certainly exist, but they may not be as powerful as they sound.
“I don’t think we should dismiss it, but I think it’s a little bit overblown,” said John Gray, the CEO and co-founder of Mentionmapp Analytics, a social media data company.
Gray has researched bot activity on Twitter during the Alberta and Ontario elections, as well as around political hashtags like #cdnpoli and #Trudeaumustgo. He said typically 20 to 30 per cent of accounts participating in these Twitter conversations are “suspicious,” meaning they show signs of bot-like behaviour. One of those signs is tweeting more than 72 times per day on average, seven days a week.
But conclusively identifying a bot is challenging, even for experts, Gray said. And measuring the influence of bots is even more so.
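The tweet-rate heuristic Gray describes could be sketched roughly as follows. This is an illustrative toy check only, not a real detection tool: the function name and fields are hypothetical, and the 72-tweets-per-day threshold is the figure cited in the article.

```python
from datetime import datetime

# Threshold cited by Gray: more than 72 tweets/day on average is one
# sign of bot-like behaviour (hypothetical illustration, not a tool).
SUSPICIOUS_TWEETS_PER_DAY = 72

def shows_bot_like_rate(tweet_count: int, created_at: datetime,
                        now: datetime) -> bool:
    """Flag an account whose average daily tweet rate exceeds the threshold.

    A single heuristic like this cannot conclusively identify a bot;
    it only marks the account for closer human inspection.
    """
    account_age_days = max((now - created_at).days, 1)
    return tweet_count / account_age_days > SUSPICIOUS_TWEETS_PER_DAY
```

As the article notes, even a clear signal like this is only suggestive: prolific human users can exceed the rate, and sophisticated bots can stay below it.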
There are observable behaviours through which bots, or other inauthentic accounts, attempt to manipulate online conversations. Dozens of bots might retweet or reply to a particular tweet, making it appear more popular than it really is. Bots or fake accounts might also tweet under certain hashtags to try to steer the conversation there toward more divisive topics.
In fact, this was documented during the last Canadian federal election, when Twitter accounts suspected of having links to Russia and Iran attempted to inflame debate on issues like pipelines and immigration.
But it’s impossible to measure whether those campaigns have an impact on how real people behave or think, Gray said.
“I don’t believe we can actually talk about impact,” Gray said. “We don’t have the research. We don’t have the data, and I don’t even know if we have the right questions to ask about the impact.”
Bots and #TrudeauMustGo
A recent example of the rising concern over the threat of bots is the debate that followed a trending hashtag: #TrudeauMustGo. In mid-July, the hashtag was trending on Twitter in Canada, and The National Observer reported that some of the accounts tweeting the hashtag showed evidence of automation, such as tweeting at excessively high rates. Some of the accounts identified were later removed by Twitter.
Many Twitter users interpreted the story to mean that bots had caused the hashtag to trend. In response, users who oppose Trudeau began using the hashtag #NotABot, along with #TrudeauMustGo, to challenge the idea that bots played a role in the hashtag’s popularity. A spokesperson from Twitter said the accounts removed weren’t responsible for making the hashtag trend.
“Our initial investigation has not found evidence of bot activity amplifying the #TrudeauMustGo hashtag,” said Michele Austin, head of government and public policy for Twitter Canada. “These were driven by organic, authentic conversation.”
Twitter’s trending hashtags are determined by an algorithm, which considers not only the number of tweets, but also the period of time over which those tweets are sent, and the number of different users tweeting the hashtag, according to the site.
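Twitter has never published the actual algorithm, but the three factors it names — tweet volume, the time window, and the number of distinct users — could be combined in a toy score like the one below. Every name and formula here is a hypothetical illustration of those factors, not Twitter's method.

```python
from collections import defaultdict

def toy_trend_scores(tweets, window_minutes=60):
    """Score hashtags by recent volume and breadth of use.

    `tweets` is a list of (user_id, hashtag, minutes_ago) tuples.
    This is purely illustrative: it combines the three factors the
    article names (volume, recency, distinct users), but Twitter's
    real trending algorithm is not public.
    """
    counts = defaultdict(int)      # recent tweets per hashtag
    users = defaultdict(set)       # distinct users per hashtag
    for user_id, hashtag, minutes_ago in tweets:
        if minutes_ago <= window_minutes:   # only recent activity counts
            counts[hashtag] += 1
            users[hashtag].add(user_id)
    # Reward breadth: a tag tweeted by many distinct users outranks one
    # tweeted the same number of times by a handful of accounts.
    return {tag: counts[tag] * len(users[tag]) for tag in counts}
```

Under a scheme like this, a burst of tweets from a small cluster of automated accounts would score lower than the same volume spread across many genuine users — one reason sheer bot volume doesn't automatically make a topic trend.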
But the specifics of how a trending topic is determined aren’t publicly known, which is part of why it’s difficult to measure the influence of bots, McKelvey said.
“I don’t know how Twitter makes trends. They have an explanation about it, but there’s concerns in general about the transparency,” McKelvey said. “It’s moments like this where you have a lot of uncertainty, which creates room for these competing interpretations.”
Twitter has made efforts to improve transparency, including releasing archives of accounts identified as potential foreign election interference operatives. It also provides access to its API to researchers, and is constantly improving its ability to identify, track, and disable inauthentic accounts that violate its terms of service.
“We have a team dedicated to monitoring inauthentic and spam activity for the Canadian election and it’s something that we will be banning if we see it happen,” Austin said. “We do take these issues very, very seriously. The public conversation is never more important than during an election.”