Algorithms and bots in the service of good and evil
‘Bots’, algorithms that automatically post messages to social media streams, have been deployed to swamp discussions, overwhelm critics or, more subtly, to emulate human opinion and manipulate social media discourse. But can bots play a role in restoring positive, constructive discourse to social media, countering hate speech with messages of tolerance and spreading accurate information to dispel bigotry, malicious rumour or fabrication? And, as ever, just because we can, should we?
The power of bots to do ill is well documented. Supporters of the Mexican state’s favoured election candidate, Enrique Peña Nieto, used bots to mass-attack critics mobilising online against his 2012 candidacy. Indiana University’s John-Paul Verkamp and Minaxi Gupta tracked pro-state spammers as they targeted critical tweets carrying the #marchaAntiEPN and related hashtags on 19-20 May of that year.
Bots on 3,200 fake accounts fired nearly half a million spam tweets, out-tweeting some 28,800 pro-protest tweeters 62% to 38% and seriously obstructing the opposition’s efforts to mobilise protesters. Activist Rossana Reguillo and others received death threats on Twitter for two months, mostly from bot accounts. Mexican political activists accused the spammers of criminalising peaceful protest and segregating dissident opinion.
Writing on “the Algorithmic Manufacturing of Consent and the Hindering of Online Dissidence” in this month’s Institute of Development Studies Bulletin, Emiliano Treré of the University of Querétaro criticised the spammers’ dubious achievement as an artificial “construction of consent,” instead of a space to reinforce democracy through “dialogue, participation and transparency”.
Treré argues that by employing these strategies the candidates rejected the opportunity to add voters’ feedback to their decision-making, and he cites an MIT Technology Review report warning that this kind of ‘large-scale political spamming’ needs to be countered to prevent its spread to other political spaces.
Verkamp and Gupta looked at four more events in which waves of spam tweets were used to spread propaganda or to suppress political expression. In each case the accounts used to send spam were registered in blocks and had automatically generated usernames, a clue that points towards possible defence mechanisms against political spam on social networks.
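That registration pattern suggests a simple heuristic. The sketch below flags accounts whose usernames look machine-generated and that were created in tight bursts; the field names, regular expression and thresholds are illustrative assumptions, not Verkamp and Gupta’s actual method.

```python
import re
from datetime import timedelta

# Names like "maria84192": a word followed by a long run of digits (assumed pattern).
AUTO_NAME = re.compile(r"^[a-z]+\d{3,}$", re.IGNORECASE)

def flag_block_registrations(accounts, window_minutes=10, burst_size=50):
    """Return accounts registered in a tight burst whose usernames look auto-generated.
    Each account is a dict with 'username' and a datetime 'created_at'."""
    accounts = sorted(accounts, key=lambda a: a["created_at"])
    window = timedelta(minutes=window_minutes)
    flagged, start = [], 0
    for end, account in enumerate(accounts):
        # shrink the window from the left until it spans at most `window_minutes`
        while account["created_at"] - accounts[start]["created_at"] > window:
            start += 1
        burst = accounts[start:end + 1]
        if len(burst) >= burst_size:
            flagged.extend(a for a in burst if AUTO_NAME.match(a["username"]))
    # de-duplicate accounts picked up by overlapping bursts
    return list({a["username"]: a for a in flagged}.values())
```

In practice such a heuristic would only be a first filter: the naming pattern, burst size and time window would all need tuning against real registration data.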
Twitter tolerates automated tweets but hates spambots. The company reported in June 2014 that up to 8.5% of its monthly user accounts, 23 million of them, were possibly bots. Twitter sets limits that prevent spam-like behaviour: tweeting too often, re-tweeting too aggressively, and creating “following churn” by rapidly following and unfollowing people.
Technologist Hunter Scott, who built a Twitter bot that entered 165,000 ‘retweet and win a prize’ contests over nine months (he won about a thousand), found that the network limits the total number of people you can follow: if you have fewer than a few hundred followers, you cannot follow more than 2,000 people.
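As a rough illustration of how ‘following churn’ might be quantified, the toy function below scores an account by how aggressively it both follows and unfollows within a recent time window. The event format, window and scoring are assumptions for the sketch, not Twitter’s actual rules.

```python
from collections import Counter

def churn_score(events, window_hours=24):
    """events: time-ordered list of (timestamp, action, target) tuples,
    where action is 'follow' or 'unfollow'. Returns a churn score in [0, 0.5]."""
    if not events:
        return 0.0
    latest = events[-1][0]
    recent = [e for e in events
              if (latest - e[0]).total_seconds() <= window_hours * 3600]
    counts = Counter(action for _, action, _ in recent)
    follows, unfollows = counts["follow"], counts["unfollow"]
    if follows + unfollows == 0:
        return 0.0
    # highest when an account both follows and unfollows aggressively
    return min(follows, unfollows) / (follows + unfollows)
```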
Twitter’s own bots track new accounts registered en masse and in advance, reverse engineer existing bots, study tweets for semantic clues that give away a bot’s non-human origins, and scan for spammers’ digital ‘fingerprints’. The harder trick is to track spam whose text is disguised, algorithmically generated to look like the work of human beings. Because such disguised accounts are identified and closed down more slowly, fewer fake accounts are needed.
Verkamp and Gupta found that ten of the spam accounts in the 2012 China incident, targeting #FreeTibet hashtags between 12 and 15 March 2012, produced over 5,000 tweets each before Twitter shut them down. “On the other hand, Russia employed the highest number of spam accounts but with relatively fewer tweets per account. In this instance, it would be necessary to detect individual tweets, since finding accounts will have only a marginal effect at best.”
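For the thinly spread case, one plausible approach is to look for near-duplicate tweet text posted by many otherwise unrelated accounts. The sketch below normalises tweets and groups exact copies; it illustrates the idea only, and is not the researchers’ detector.

```python
import re
from collections import defaultdict

def normalise(text):
    """Strip links, mentions and hashtags so lightly disguised copies collapse together."""
    text = re.sub(r"https?://\S+", "", text.lower())
    text = re.sub(r"[@#]\w+", "", text)
    return " ".join(text.split())

def duplicate_clusters(tweets, min_accounts=5):
    """tweets: iterable of (account, text) pairs. Returns clusters of identical
    normalised text posted by at least `min_accounts` distinct accounts."""
    buckets = defaultdict(set)
    for account, text in tweets:
        buckets[normalise(text)].add(account)
    return {text: accounts for text, accounts in buckets.items()
            if text and len(accounts) >= min_accounts}
```

A production system would also need fuzzy matching for text that is deliberately varied from one account to the next.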
Most of the time, a human eye matched with media and linguistic literacy can spot the messages’ peculiar phrasing on sight: “linguistic nonsense”, a grammatical play “that would make even a Dadaist envious,” write Alexander R. Galloway and Eugene Thacker. Hiding these linguistic and grammatical clues is the key to successfully ‘fooling’ recipients into thinking that the conversational partner on the other end is a real person, not a bot.
This is the objective of the Turing Test, where the bar is constantly being raised by improvements in ‘machine learning’: loosely, the ability of algorithm-driven programs to modify themselves to better perform their assigned tasks.
Bots work best with receptive audiences. An estimated 81% of men tempted to join the adulterers’ dating site Ashley Madison did so solely on the basis of ‘conversing’ with algorithmic bots generating fake seductive messages. Tech website Gizmodo reported that it had “clear evidence” that the messaging bots were at one point generating almost half the company’s revenue.
They work much less well with unreceptive ones. Even an algorithm that could fool more sceptical humans would then run up against the ‘ideological silo’ effect of political polarisation, which obstructs real human-to-human discourse, let alone the bot-to-human kind. The Pew Research Center found in 2014 that 63% of consistent US conservatives and 49% of consistent liberals say most of their close friends share their political views, compared with just 35% of the public as a whole.
This increasingly narrow range of discursive experience is reflected in the ‘echo chamber’ effect of social media, whereby users tend only to register opinion that already reflects their political position. A much-challenged in-house 2015 Facebook study found that for some politically engaged users, Facebook’s algorithm suppressed some diversity of political content, and that its newsfeed acted as a powerful gatekeeper for news stories, which may reinforce opinions rather than challenge them.
Expert observer Zeynep Tufekci does not argue against all uses of algorithms in choosing what we see online. “The questions that concern me are how these algorithms work, what their effects are, who controls them, and what are the values that go into the design choices,” she writes. “At a personal level, I’d love to have the choice to set my newsfeed algorithm to ‘please show me more content I’d likely disagree with’ — something the researchers prove that Facebook is able to do.”
Facebook, and Twitter for that matter, are not visibly rushing to provide this as a service. But in theory, as Treré notes, activists could. He cites the example of Australian software developer Nigel Leck’s now-defunct chatbot @AI_AGW, which searched tweets for phrases commonly used by climate change deniers and replied with a relevant counter-argument and a link to the evidence.
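A rough reconstruction of that pattern might look like the sketch below: match a tweet against a table of trigger phrases and return a canned rebuttal with a source link. The phrase table, rebuttal text and links are placeholders; Leck’s actual data and code are not reproduced here.

```python
REBUTTALS = {
    # trigger phrase -> (short rebuttal, link to evidence); all entries are placeholders
    "climate has always changed": (
        "Past natural changes don't rule out a human cause today.",
        "https://example.org/evidence/natural-cycles",
    ),
    "no warming since 1998": (
        "Long-term surface and ocean records show continued warming.",
        "https://example.org/evidence/temperature-records",
    ),
}

def choose_reply(tweet_text):
    """Return a rebuttal string if the tweet matches a known trigger phrase, else None."""
    lowered = tweet_text.lower()
    for phrase, (rebuttal, link) in REBUTTALS.items():
        if phrase in lowered:
            return f"{rebuttal} {link}"
    return None
```

Actually posting the reply would require whatever API access the platform permits, which, as discussed below, is exactly where such bots tend to run into trouble.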
It was a neat way of pointing climate deniers in the direction of counter-argument, though it is not clear if many heads were turned by it. “(@AI_AGW) reveals the mindlessness of a certain kind of political argument, in which both sides endlessly trade facts, figures, and talking points, neither crediting the other’s sources,” commented the Boston Globe. “The bot might make more of a contribution if, instead of just making assertions, it could use the Socratic method: asking pointed questions and prompting self-examination.”
Socratic or not, it still doesn’t take much for a bot to get a following. Anonymous bot makers @tinypirate and @aerofade explored how social bots can influence changes in the social graph of a Twitter sub-network, drawing responses and interactions from users who were previously not directly connected. “In the future, social robots may be able to subtly shape and influence targets across much larger user networks, driving them to connect with (or disconnect from) targets, or to share opinions and shape consensus in a particular direction.”
Tim Hwang, chief scientist at the Pacific Social Architecting Corporation, wonders if bots can start influencing communities in positive ways, by giving people a perspective that they wouldn’t get from their regular social media circles. Hwang told Wired magazine that bots could provide information services that could find their own audiences. Bots, he said, are “basically a prosthetic that we can install into a network of humans” to enhance the way they socialise.
Bots could also help moderate the tone of broader debate, especially one characterised by hate speech and bigotry, by rapidly countering abuse or malicious rumour with alternative opinion in the style of Leck’s environmentalist @AI_AGW bot. Alternatively, bots need not reply to pre-identified hateful terms, false statements or politically loaded phrases at all. They could simply identify and flag them, so that the impact of their originators, human or otherwise, could be addressed.
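A flagging bot of that kind might do no more than emit a record for human moderators to review, as in this illustrative sketch; the watch list and record shape are assumptions, and no automated reply is ever sent.

```python
from datetime import datetime, timezone

# Placeholder watch list; a real deployment would curate and localise this carefully.
FLAGGED_TERMS = {"example slur", "known fabricated claim"}

def flag_message(message_id, text):
    """Return a flag record if the text contains a watched term, else None."""
    hits = [term for term in FLAGGED_TERMS if term in text.lower()]
    if not hits:
        return None
    return {
        "message_id": message_id,
        "matched_terms": hits,
        "flagged_at": datetime.now(timezone.utc).isoformat(),
        "action": "queue_for_human_review",  # flag only; no automated reply
    }
```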
The media development NGO Internews has developed a communications system that collects and tracks rumours, passed from human collectors via SMS texts to a verifier team on a laptop. Teams covering disaster areas in Nepal and Ebola-stricken communities in Liberia then connect with a variety of accessible media outlets to circulate verified reports or debunk rumours. Internews and their local partners don’t mind the rumours, which are good guides to people’s real fears, as long as they can sift, track and check them out efficiently.
In theory, a set of algorithms that sorts and classifies new rumours as they arrive could ‘triage’ the data, speeding up prioritisation and rumour mapping before human verification. Such a system could also extend the range of sources beyond human contacts sending SMS texts to a broader range of public social media messages: tweets, Snapchat, Telegram and others.
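A minimal sketch of that triage step, under assumed keyword weights and a simplified rumour format, might score each incoming item so that human verifiers see the most urgent ones first:

```python
# Illustrative priority weights; a real deployment would derive these with local partners.
PRIORITY_KEYWORDS = {"outbreak": 5, "cure": 4, "aftershock": 4, "shelter": 3, "aid": 2}

def triage(rumours):
    """rumours: list of dicts with 'id', 'text' and 'report_count'.
    Returns the rumours ordered from most to least urgent."""
    def score(rumour):
        text = rumour["text"].lower()
        keyword_score = sum(weight for kw, weight in PRIORITY_KEYWORDS.items() if kw in text)
        return keyword_score + rumour.get("report_count", 1)  # widely repeated rumours rise
    return sorted(rumours, key=score, reverse=True)
```

Nothing here replaces human verification; the sort order only decides what the verifier team looks at first.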
Internews’ partners operate in a complex ‘information ecosystem,’ broadly a loose, dynamic configuration of different sources, flows, producers, consumers, and sharers of information interacting – through word of mouth, key community members, phone, the internet, and other channels and technologies. Bots could possibly help distinguish trends, themes, sources and text, and enable examination of the trust, influence, use, and impact of news and information within that ecosystem.
Algorithms need not dictate the distribution of information across their ecosystems, but they can help map their network nodes and key actors. By identifying strategic points of entry to complex and flawed information networks (what hackers call ‘exploits’), media actors can find equivalent entry points to the equally complex and flawed debates that run through them.
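Assuming the sharing relationships in such a network are available as a simple edge list (who retweets, quotes or forwards whom), a first pass at mapping key actors could rank accounts by centrality, as in this sketch using the networkx library:

```python
import networkx as nx

def key_actors(edges, top_n=10):
    """edges: iterable of (sharer, source) pairs, e.g. who retweets or quotes whom.
    Returns the most central accounts in the sharing network."""
    graph = nx.DiGraph()
    graph.add_edges_from(edges)
    centrality = nx.pagerank(graph)  # influence-style ranking over the share graph
    return sorted(centrality, key=centrality.get, reverse=True)[:top_n]
```

The ranked accounts are candidate points of entry, not verdicts; deciding which are trusted community voices and which are amplifiers of rumour still takes human judgement.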
Applying bots and algorithms to Twitter can run up against Twitter’s rules. Even Nigel Leck’s gentle environmentalist @AI_AGW was shut down for breaching its terms and conditions (allegedly after Twitter techs read about the bot in Popular Science magazine).
But Twitter has a tougher battle on its hands with hate speech, from college sexists to ISIS terrorists. It continues to face demands that it do more to fight online hate – if only to protect the corporation’s market value. “We suck at dealing with abuse and trolls on the platform, and we’ve sucked at it for years,” said former Twitter CEO Dick Costolo. “It’s no secret and the rest of the world talks about it every day.”
Treré cites a US field experiment that raised voter turnout in an election with non-partisan messaging appealing to public duty rather than political agendas. Targeting less committed bigots who follow social media trends, rather than the original haters at the source, could mitigate the impact of ideological ‘echo chambers’. Good-hearted bots could pick up followings on their own, providing a counterbalance to a Twitter feed otherwise thick with hate. If Twitter allows it.