What happens when you mess with Twitter? The big bird gets angry. Then it gets smart. And then it gets even.
I’ve been writing for Little Atoms about a new paper on Twitter’s digital insurgency against the Islamic State’s unwanted occupation of its network, and I’m both impressed and a little disturbed by Fat Blue’s war readiness.
The scale of Twitter’s under-reported victory may be a corrective to conventional wisdom — not unreasonably held — that there were just too many cyber-jihadis out there to nail, too many bots retweeting and too many wannabes hiding the real-deal ISIS soldiers by sheer weight of numbers.
Twitter found a way, in secret, and is keeping it that way. The story only came to light when academic J.M. Berger and data scientist Jonathon Morgan tried to test real data against the popular theory of ISIS’ much-vaunted and allegedly unchallengeable social media skills.
Conducted within Twitter’s rules on independent research but without its support, Berger & Morgan’s study covered the network’s late summer 2014 counterattack against key ISIS Twitter users — the so-called mujtahidun (‘the industrious ones’) — and their retweeting bots.
Twitter began targeting ISIS accounts on June 13 last year, suspending around a thousand to start, including ISIS’ main information page @nnewsi, to the dismay of free speech champions, analysts and spies seeking ‘open source intelligence’ on ISIS’ ambitions.
But that was the easy bit. Identifying ISIS supporters from their tweets is straightforward. Even Anonymous, in a weird Daily Mail-lite burst of moral outrage, was collecting names of particularly vile examples to pass on to Twitter’s complaints department.
Twitter’s war-winning Manhattan Project-style technical advance was first to precisely track and suspend the key leaders, and secondly to keep suspending them at speed, each time they tried to rejoin the fray under new account names, operating new retweeting bots.
This was indeed a game of whack-a-mole, of the kind that other analysts had previously argued made a strategy of suspending ISIS supporters’ accounts pointless. But instead of finding an alternative, Twitter’s techies just upped the game to warp speed.
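Twitter has never disclosed how it did this, so any implementation is guesswork. But the warp-speed whack-a-mole the researchers describe can be caricatured in a few lines: flag a newly created account as a likely reincarnation when it has a near-identical handle and heavy overlap with the followers of an already-suspended account. The fingerprint and the thresholds below are invented for illustration, not Twitter’s method.

```python
# Hypothetical sketch of rapid re-suspension: a returning account is
# flagged by a crude fingerprint (similar handle + follower overlap
# with a previously suspended account). All thresholds are invented.
from difflib import SequenceMatcher

def handle_similarity(a, b):
    """Similarity ratio in [0, 1] between two account handles."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def looks_like_return(new_account, suspended):
    """Flag a new account that resembles any previously suspended one."""
    for old in suspended:
        name_score = handle_similarity(new_account["handle"], old["handle"])
        overlap = len(new_account["followers"] & old["followers"])
        follower_score = overlap / max(len(old["followers"]), 1)
        # Both signals must fire; cutoffs are purely illustrative.
        if name_score > 0.8 and follower_score > 0.5:
            return True
    return False

# Toy data: a suspended account and a near-clone that pops back up.
suspended = [{"handle": "nnewsi", "followers": {"a", "b", "c", "d"}}]
newcomer = {"handle": "nnewsi2", "followers": {"a", "b", "c", "x"}}
```

Run in a tight loop over new registrations, a heuristic like this would catch the obvious pattern the mujtahidun relied on: rejoining under a barely-changed name and re-gathering the same followers.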
The human mujtahidun were simply knocked out by faster algorithms that dramatically limited the reach and scope of ISIS social media. The primary ISIS hash tag — its name in Arabic — went from 40,000 tweets per day or more in September 2014, to under 5,000 a day in February.
ISIS were mad, posting apocalyptic threats to Twitter founder Jack Dorsey and his staff, but largely powerless to prevent the rollback.
Yet while the strategy impressively limited the ISIS network’s ability to grow and spread, “total interdiction,” as Berger & Morgan put it, was not Twitter’s goal. The test of success was a substantially reduced presence on Twitter’s feeds overall.
“Perhaps most important is what we didn’t see,” the researchers noted. “We did not see images of beheaded hostages flooding unrelated hash tags or turning up in unrelated search results. We also did not see ISIS hash tags trend or aggregate widely.”
The result, possibly the intent, of the angry bird’s warp-speed account suspension strategy has been to isolate core ISIS supporters online. Denied access to the merely curious, naturally malignant or simply lost online, they preach to a smaller, already converted crowd.
A network graph from Berger & Morgan’s paper. The bottom part of the graph shows a high density of interactions between the most-connected and influential members of ISIS, but much less interaction between the less influential and connected supporters in the top part. “As suspensions contract the network, members increasingly talk to each other rather than outsiders,” notes the report.
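The contraction the graph illustrates can be quantified with a toy metric: the share of a core member’s interactions that target other core members. The sketch below uses invented accounts and numbers, nothing from the paper’s data, simply to show how suspensions that strip away outside ties push that share up.

```python
# Toy illustration of the isolation effect: as suspensions cut the
# network's outside ties, a larger share of the core's interactions
# stays inside the core. Edges are (source, target) mention pairs;
# all names and figures here are invented.
def internal_share(edges, core):
    """Fraction of core-originated interactions that target other core members."""
    from_core = [(s, t) for s, t in edges if s in core]
    if not from_core:
        return 0.0
    internal = [(s, t) for s, t in from_core if t in core]
    return len(internal) / len(from_core)

core = {"c1", "c2", "c3"}
# Before suspensions: core members mostly reach outsiders.
before = [("c1", "c2"), ("c1", "out1"), ("c2", "out2"), ("c3", "out3")]
# After suspensions remove outside ties: core members mention each other.
after = [("c1", "c2"), ("c2", "c3"), ("c3", "c1"), ("c1", "out1")]
```

In this invented example the internal share rises from 0.25 to 0.75 — the pattern the report summarises as members increasingly talking to each other rather than outsiders.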
This isolation could shrink the audience among potential “lone wolf” attackers: people only marginally engaged with ISIS ideology, many of them already prone to violence, or mentally ill. (Research suggests an association between mental illness and lone-actor terrorism.)
“Specifically,” concluded Berger & Morgan, “neutering ISIS’s ability to use Twitter to broadcast its message outside of its core audience has numerous potential benefits in reducing the organisation’s ability to manipulate public opinion and attract new recruits.”
ISIS has plenty of other ways to communicate, everything from teenagers’ Snapchat to hackers’ Pastebin, but none has the mass immediacy of Twitter. Twitter took down ISIS in a fair fight on common ground. The episode sheds light on the power of Twitter’s de facto ‘weapons division’.
What might be its next target? Speculating: The tools could be used to target anti- (or pro-) gun control tweets after a US school shooting, responding to fears of copycat attacks. Or opinions judged to be ‘hate speech’ in a region-focused sweep like Berger & Morgan’s own ISIS sample, so specific to Iraq & Syria it counted just one primary ISIS tweeter in the UK.
Who can say? Twitter doesn’t comment. What do you do once you have developed a ‘weapon’ that provides rapid, total control over the communication capacities of a targeted group?
Berger & Morgan found Twitter “discloses literally no information about the accounts it suspends (in these cases), yet this activity takes place every day.” In fact, they add, the legal vacuum that surrounds these issues concedes near-absolute authority to Twitter, as it does to all the online giants.
Given how measurable the big bird’s victory over ISIS has been, and the failure of everyone else, the military included, to clear the jihadis off the digital park to date, this legal vacuum may not last. As Berger & Morgan themselves warn, companies like Twitter would be well advised to consider proactive measures and clarify their rules of engagement in “an area where government oversight may eventually come into play”.
Berger & Morgan’s report is published by the Brookings Institution.
A fuller version of this article is published at Little Atoms.