Chat Jihad

Terrorism and tech - Silicon Valley’s violent propaganda headache

© Fady AlGhorra and Mahmoud Elsobky

 

Terrorist content has been a problem for internet giants like Facebook, Google and Twitter for years. It is one strand of what has become a string of controversies marring the technology sector, including the spread of hate speech and fake news.

Silicon Valley has been throwing a lot of resources at the fight against violent jihadist content on-line. Progress has been made, but fighting terrorists on the internet remains a game of whack-a-mole. If one platform cracks down on ISIS and Al-Qaeda supporters, they simply move to another. “The only solution would be to pull the plug on the entire internet.”

In early 2017, our team was wondering how to go about researching the on-line activities of violent jihadist groups without being noticed. For the better part of a year, we conducted research using profiles under false identities.

It was easy.

The SIM card problem

Most people have gone through the process of setting up accounts on social media. On most platforms it is an exceedingly easy affair. Major social media platforms typically ask first-time users for some personal data for identification and verification before an account is set up. Usually that requires providing a valid e-mail address.

Getting a valid e-mail address without any identification requires a bit more effort, though.

Most free-of-charge e-mail services require authentication of users via mobile phones. Since most mobile phone subscriptions are linked to users’ identities in some way, we needed to get a pre-paid SIM card without registering any personal data. This turned out to be the hardest part of our undercover research.

Getting a SIM card without showing some form of ID has become more difficult in recent years. In the wake of major terrorist attacks, Belgium was one of many countries to introduce ID requirements for the sale of pre-paid, non-contract SIM cards for mobile phones. That made it more difficult to coordinate illegal activities with “burner phones” and non-registered SIM cards. It also added one more layer of potential identification to people wanting to set up an e-mail address under a false identity.

We ended up working around the SIM card problem by buying a pre-paid SIM in the United Kingdom, where an ID is not usually required for a pre-paid card. With this SIM, we were then able to activate a mobile phone in Belgium and authenticate several e-mail addresses. This in turn allowed us to create profiles under fictitious names on all major social media platforms.

As reported in this series, it took us no longer than ten minutes to find our first violent jihadist video once the mobile phone and e-mail addresses were activated.

After a first day-long surfing session in March 2017, we had little to no trouble surfing jihadist channels on Telegram and other platforms on a regular basis. On some platforms we were blocked when we used jihadist imagery in our account profiles. Others could be browsed with little to no limitations.

From Twitter to Russian apps

Violent jihadist content grew as social media platforms grew. It all kicked into gear with a massive wave of jihadist activity on Twitter.

© Fady AlGhorra and Mahmoud Elsobky

Pieter Van Ostaeyen

Twitter was the first major social media company to witness large-scale activity by global violent jihadist groups. When ISIS rose to prominence in 2014, the microblogging site was plagued by thousands of jihadist accounts. It was often used as a vehicle to push out and amplify the most egregious propaganda. “Twitter really was the platform of choice in the early days of ISIS”, says Belgian jihadism researcher Pieter Van Ostaeyen.

Twitter soon started to crack down heavily on this jihadist activity. Thousands of accounts were suspended, and it became difficult for many ISIS and Al-Qaeda supporters to build up larger followings over time. Some resorted to setting up new accounts in quick succession. Twitter hired more content moderators to remove accounts, and the platform increasingly developed artificial intelligence to prevent people from even setting up a jihadist account.

Van Ostaeyen remembers how many of the jihadists then gradually moved off Twitter. “After that ‘Twitter crisis’, ISIS tried a number of other platforms, some of which I had never even heard of, like Friendica or Diaspora. They then eventually settled on VKontakte.”

That platform, the Russian equivalent of Facebook, was developed by young Russian tech entrepreneur Pavel Durov and his brother Nikolai. When the jihadis moved to VK, a magazine interviewed Van Ostaeyen about the sudden influx of jihadists on the platform. The reporters also published his research data on VK users supporting ISIS.

“Less than half an hour after the article went on-line, all ISIS accounts disappeared from VKontakte”, remembers Van Ostaeyen. “My interview effectively meant the end of ISIS on VKontakte.”

© Fady AlGhorra and Mahmoud Elsobky

Pieter Van Ostaeyen showing an IS Telegram channel

Most activity then moved to the messaging app Telegram, developed by the very same Pavel Durov after he left Russia in self-imposed exile.

Telegram is highly encrypted and has several features that allow users to chat secretly and to create channels that can reach unlimited numbers of followers. The app’s developers announced that they had hit the 100 million monthly users mark in 2016. More recent figures put the number at some 180 million.

That reach is modest compared to Facebook’s two billion monthly users, but Telegram’s end-to-end encryption features, self-destructing messages and channels have been a draw for jihadist propagandists, “even though some are already withdrawing from the platform because they deem it unsafe”, says Van Ostaeyen.

50 million euro fines

The jihadist wave on Twitter in 2014 showed how serious the problem of terrorist propaganda had become for Silicon Valley. Things got worse when Western countries were hit by a wave of jihadist attacks.

Facebook, Twitter and YouTube started taking severe flak in Europe and the United States in 2015 and 2016, particularly after the coordinated Paris and Brussels attacks, the San Bernardino and Orlando shootings and the string of attacks in Nice, Berlin, Manchester and London.

The investigations into these attacks revealed that in almost all of the cases, the perpetrators of the attacks had either consumed vast amounts of jihadist propaganda on-line, or they used on-line tools to instruct or coordinate attacks. Some early research reports and our own analysis of some seventy perpetrator profiles also confirm that the internet was at least one factor in the complex of circumstances surrounding most recent violent jihadist incidents.

Government responses across Europe soon followed. In the wake of a botched bomb attack on Parsons Green underground station in London on 15 September 2017, UK Prime Minister Theresa May spoke to representatives of the big technology companies, echoing her government’s earlier calls for the big platforms to remove terrorist content more rapidly and more thoroughly.

The UK’s Home Office said at the time that ISIS shared 27,000 links to extremist content in the first five months of 2017 and that, once shared, the material remained available on-line for an average of 36 hours. The UK government is considering measures that would bring that down to two hours, to prevent wider sharing from the moment of publication.

Adolfo Lujan (CC BY-NC-ND 2.0)

Barcelona, August 2017

German fines

The German government, in the meantime, has taken a much more aggressive approach. Berlin has started enforcing a law that imposes fines of up to 50 million euro on internet firms that fail to remove illegal content within 24 hours of users reporting it to the platform. For more complex cases, the deadline is a week. If the platform does not react in time, users can notify the Justice Ministry and the fine may be imposed.

The legislation, known as the Netzwerkdurchsetzungsgesetz, or NetzDG, is not aimed at terrorist content alone, but also at racist content, fake news and other egregious material found on the internet. The internet platforms are supposed to put a complaints structure in place that allows for timely removal of illegal content.

So far, the results of the NetzDG have been hard to gauge. Earlier this month, the German Justice Ministry reported 205 complaints since the law went into effect, far below the expected 25,000 complaints. Controversies keep popping up, with reports of social media platforms removing content that should not be removed out of fear of fines.

In one incident in January, the satirical magazine Titanic was suspended from Twitter for two days over a tweet mocking the far-right politician Beatrix von Storch of the Alternative für Deutschland party.

Around New Year’s Day, von Storch had responded to a series of tweets by the Cologne police department with New Year’s wishes and safety advice in Arabic. “What the hell is happening in this country? Why is an official police site tweeting in Arabic? Do you think it is to appease the barbaric, gang-raping hordes of Muslim men?”, von Storch wrote in her tweet, in an apparent reference to the sexual assault incidents that took place in the city during the 2015-2016 New Year’s celebrations.

Von Storch was then temporarily suspended from Twitter, only to be mocked later by Titanic, which in turn saw its account suspended for its response to the racist slur.

Both Titanic and von Storch suggested that Twitter had suspended them under pressure from the NetzDG. Many in Germany now agree that the law needs to be amended to avoid chilling effects on free speech.

The debate on curbing hate speech and violent extremism on-line is also very much ongoing in the European institutions and the UN. Several reports, guidelines and recommendations have been issued by a variety of governments and international organisations.

Earlier this month, the European Commission published a set of recommendations for EU member states, including the introduction of a “one hour rule” for removing terrorist content from on-line platforms. The recommendations may lead to legislation at a later stage.

Community standards

In the early days of the self-styled ISIS caliphate, Facebook, Twitter and other companies never failed to refer to their community standards, designed to keep violent, pornographic and otherwise offensive content off their platforms. Those rules allow them to suspend users who cross certain red lines.

“That system is putting a lot of the onus of reporting and removing violent content on the user”, says Nikita Malik, Director of the Centre for the Response to Radicalisation and Terrorism at the Henry Jackson Society (HJS). “That’s a huge task if you think about it. (…) If a young and vulnerable person is looking at something graphic and we expect them to click a drop-down menu and indicate that they want to report or remove it, that’s putting too much pressure on a person that’s already been exposed to that kind of content.”

© Fady AlGhorra and Mahmoud Elsobky

 

The “community standards” model and account suspensions were easily circumvented by technology-savvy terrorist supporters. In some cases they used bots and other artificial intelligence tools to increase the spread and leverage of content before an account suspension would occur. Some jihadis developed small applications to automatically generate new profiles.

Human eyes

Most major tech companies have since started investing heavily in counter-measures. Most have hired armies of human content moderators to keep terrorist content off their platforms.

The scale of these operations is baffling and it shows how the growth of the internet has created new problems that are difficult to immediately remedy.

In its September 2017 Transparency Report, Twitter reported 299,649 terrorism-related account suspensions in the first half of 2017, 95 percent of which at Twitter’s own initiative, without governments reporting contentious accounts to the platform.

The number of suspended accounts on the 330-million-user platform is now going down, and the vast majority of accounts are suspended before they can even post a tweet. “We have suspended a total of 935,897 accounts for the promotion of terrorism in the period of August 1, 2015 through June 30, 2017”, according to the Twitter report.

In a May 3, 2017 post on his page, Facebook CEO Mark Zuckerberg announced that his company would hire an additional 3,000 staff to monitor and remove egregious content. Those thousands of new staff will join the 4,500 people already policing the two billion users on the network. The measures were already well underway, as Facebook had been plagued in recent years by reports of on-line harassment and violence, live-streamed suicides and other incidents.

Google has taken similar measures, especially on its video streaming service YouTube, which has long been abused as an outlet for terrorist propaganda. “We have thousands of people around the world who review and counter abuse of our platforms”, said the company’s General Counsel Kent Walker in a June 2017 blog post.

Google said it had removed over 150,000 YouTube videos from June to December 2017 for violent extremist content. The company intends to hire extra people for content review, bringing the total number of staff doing such monitoring at Google to over 10,000 in 2018.

In June, Google also announced it was expanding its Trusted Flagger programme, a network of civil society organisations and experts that flag extremist and violent content on YouTube and Google’s other platforms.

“Fingerprint” of terrorist material

In the meantime, Facebook and the other platforms are increasingly cooperating with each other to crack down on terrorist content.

In December 2016, Facebook, Twitter, Microsoft and YouTube announced that they had started working together to share “hashes”, digital fingerprints of terrorist content, across their platforms. Other companies like ask.fm, Snap, Oath, LinkedIn, justpaste.it and Facebook subsidiary Instagram soon joined. The companies announced in December 2017 that they had shared over 40,000 of these hashes and are looking at expanding the number of partner companies in the shared database.

The hash-sharing system by no means guarantees removal of content across platforms. According to a 5 December 2016 Facebook press release, “each company will independently determine what image and video hashes to contribute to the shared database. No personally identifiable information will be shared, and matching content will not be automatically removed.”
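
For illustration, here is a minimal sketch of how such a shared hash database could work, using exact cryptographic hashes from Python’s standard library. The class and method names are hypothetical, not the consortium’s actual API, and production systems rely on perceptual hashes (such as Microsoft’s PhotoDNA) that also match re-encoded or slightly altered copies of the same image or video.

```python
import hashlib


def fingerprint(media: bytes) -> str:
    """Return a hex digest identifying this exact file."""
    return hashlib.sha256(media).hexdigest()


class SharedHashDatabase:
    """Hypothetical stand-in for the industry's shared database."""

    def __init__(self) -> None:
        self._hashes = set()

    def contribute(self, media: bytes) -> str:
        # A member company adds the fingerprint of known terrorist content.
        digest = fingerprint(media)
        self._hashes.add(digest)
        return digest

    def matches(self, media: bytes) -> bool:
        # Another platform checks an upload against the shared list.
        # Per the December 2016 press release, a match flags content
        # for review; it is not removed automatically.
        return fingerprint(media) in self._hashes


db = SharedHashDatabase()
db.contribute(b"bytes of a known propaganda video")
print(db.matches(b"bytes of a known propaganda video"))  # True
print(db.matches(b"bytes of an unrelated home video"))   # False
```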

The major platforms have also started working together more closely with international organisations.

In May 2017, the tech industry founded the Global Internet Forum to Counter Terrorism, an industry body that “formalizes and structures how our companies work together to curtail the spread of terrorism and violent extremism on our hosted consumer services”, according to Microsoft’s press blog.

The GIFCT held its first meeting in August 2017. The industry has since started working with the United Nations in the Tech Against Terrorism project, which aims at sharing best practices on counter-terrorism with technology companies.

Tech representatives gave presentations to heads of state at the UN General Assembly in September. In December 2017 the industry gathered in Brussels with the European Commission for a third meeting of the EU Internet Forum. All these efforts seem to continue well into 2018 and beyond.

© Fady AlGhorra and Mahmoud Elsobky

Links to jihadi Telegram channels and chats on a Facebook page

Artificial intelligence

On June 15, 2017, Facebook’s global policy management director Monika Bickert and its counter-terrorism lead Brian Fishman for the first time opened up about the platform’s use of artificial intelligence to track down and remove terrorist content.

Facebook is developing algorithms to find imagery and text that may be related to terrorism. Also new is that networks of terror group supporters are being tracked on-line. “We know from studies of terrorists that they tend to radicalize and operate in clusters”, wrote Bickert and Fishman.

“This offline trend is reflected online as well. So when we identify Pages, groups, posts or profiles as supporting terrorism, we also use algorithms to “fan out” to try to identify related material that may also support terrorism. We use signals like whether an account is friends with a high number of accounts that have been disabled for terrorism, or whether an account shares the same attributes as a disabled account.”
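
To make the “fan out” idea concrete, here is a small sketch under stated assumptions: a toy friendship graph, a hypothetical 50-percent threshold and a two-hop search radius. None of this is Facebook’s actual code or signals; it only illustrates flagging accounts whose immediate circle contains many accounts already disabled for terrorism.

```python
from collections import deque

# Toy friendship graph: account -> set of friends (illustrative data).
graph = {
    "a": {"b", "c"}, "b": {"a", "c", "d"},
    "c": {"a", "b"}, "d": {"b", "e"}, "e": {"d"},
}
disabled = {"a", "c"}  # accounts already disabled for terrorism


def fan_out(graph, disabled, threshold=0.5, max_depth=2):
    """Flag accounts for human review if at least `threshold` of their
    friends were disabled for terrorism, exploring outward from the
    disabled set up to `max_depth` hops. Threshold and depth are
    invented for illustration."""
    flagged = set()
    frontier = deque((seed, 0) for seed in disabled)
    seen = set(disabled)
    while frontier:
        node, depth = frontier.popleft()
        if depth >= max_depth:
            continue
        for friend in graph.get(node, ()):
            if friend in seen:
                continue
            seen.add(friend)
            friends = graph.get(friend, set())
            ratio = len(friends & disabled) / max(len(friends), 1)
            if ratio >= threshold:
                flagged.add(friend)
            frontier.append((friend, depth + 1))
    return flagged


print(fan_out(graph, disabled))  # {'b'}: 2 of b's 3 friends are disabled
```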

Facebook is also using technology to crack down on people and algorithms that set up large numbers of fake accounts to spread terrorist propaganda on-line. “We’re constantly identifying new ways that terrorist actors try to circumvent our systems — and we update our tactics accordingly.”

“Our engineers have developed technology to prevent re-uploads of known terrorist content using image-matching technology. We have invested in systems that use content-based signals to help identify new videos for removal. And we have developed partnerships with expert groups, counter-extremism agencies, and the other technology companies to help inform and strengthen our efforts.”

Using technology to detect and remove terrorist content remains a tough challenge. Many analysts and journalists have complained about account suspensions because they referenced terrorist material for critical analysis. “A video of a terrorist attack may be informative news reporting if broadcast by the BBC, or glorification of violence if uploaded in a different context by a different user”, wrote Google’s Kent Walker.

The UK government announced in February that it had commissioned its own artificial intelligence tool, which it would offer to smaller internet companies that may not be able to afford development of tools to automatically detect and remove violent jihadist content. The tool is apparently capable of filtering out 94 percent of jihadist content before it is posted on-line.

In a BBC interview, UK Home Secretary Amber Rudd did not rule out that she would make the tool mandatory in case industry initiatives did not yield satisfactory results.
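
The Home Office has not published how its tool works. The sketch below only shows the kind of pre-publication gate such a classifier could sit behind: score_video is a stand-in for a trained model (here a trivial keyword heuristic), and the keyword list and 0.9 threshold are invented for illustration. The reported 94 percent is a detection rate; as the context debate above suggests, a usable gate must also keep false positives on legitimate news coverage near zero.

```python
SUSPECT_TERMS = ("nashir", "amaq", "wilayat")  # illustrative only


def score_video(title: str, description: str) -> float:
    """Stand-in classifier: return a risk score between 0 and 1."""
    text = f"{title} {description}".lower()
    hits = sum(term in text for term in SUSPECT_TERMS)
    return min(1.0, hits / len(SUSPECT_TERMS) + (0.3 if hits else 0.0))


def allow_upload(title: str, description: str, threshold: float = 0.9) -> bool:
    """Block the post before publication when the score crosses the bar."""
    return score_video(title, description) < threshold


print(allow_upload("cooking tutorial", "weeknight recipes"))    # True
print(allow_upload("nashir news", "new amaq wilayat release"))  # False
```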

© Redirect Method

 

Redirect

Google, in the meantime, pioneered the use of targeted counter-advertising and is currently rolling out its Redirect Method in Europe, a system that serves targeted anti-radicalisation content to people seeking extremist content on Google’s platforms, in an effort to head off recruitment into terrorist groups. Google and Facebook are also financing other “counter-speech” efforts to combat violent jihadism.

In some cases, this involved sponsoring creative teams making a mockery of ISIS propaganda, or small operations publishing testimonials of ex-ISIS members.
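
Public descriptions of the Redirect Method boil down to matching risky search queries to counter-narrative advertising. A minimal sketch of that matching step follows; the playlist URL and keyword list are placeholders, not the programme’s real data.

```python
from typing import Optional

# Placeholders, not the programme's real data:
COUNTER_PLAYLIST = "https://example.org/counter-narrative-playlist"
REDIRECT_KEYWORDS = ("join the mujahideen", "how to make hijrah")


def ad_for_query(query: str) -> Optional[str]:
    """Return a counter-narrative ad target for risky queries, else None."""
    q = query.lower()
    if any(keyword in q for keyword in REDIRECT_KEYWORDS):
        return COUNTER_PLAYLIST
    return None


print(ad_for_query("how to join the mujahideen in syria"))  # playlist URL
print(ad_for_query("history of the ottoman caliphate"))     # None
```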

Telegram

But despite these massive investments by the bigger companies, the game of whack-a-mole between terrorists and smaller technology companies continues.

In June 2017, the start-up social media platform Baaz was infiltrated by ISIS operatives who started streaming channels with content from its Amaq and Nashir News propaganda outlets. Baaz suspended the operatives and removed the content after being warned by journalists. Other relatively small platforms such as Tumblr and Wickr have been affected as well. ISIS has even published rankings of and manuals for the apps best suited to communicating without detection by the authorities.

One platform that continues to receive criticism for not cracking down enough on terrorist content is Telegram. Our own undercover research showed that a lot of jihadist content remains readily available on the platform.

NGOs and governments have repeatedly called on Telegram to do more to remove jihadist content. The counter-extremism NGO Counter Extremism Project (CEP) levelled harsh criticism at the company in an August 2017 blog post on its website.

“Since the Paris attacks, Telegram has revised its formal position, pledging to remove ISIS accounts from public channels. The company has, however, adamantly refused to take down private ISIS chats, where the attacks coordination is believed to take place. (…) Telegram CEO Pavel Durov has refused to take any steps to curtail ISIS’s access to Telegram’s private chats despite evidence that the platform has been used to plan and coordinate attacks.”

© Fady AlGhorra and Mahmoud Elsobky

 

Our team’s repeated outreach to Telegram spokesperson Markus Ra via the Telegram app went unanswered. Ra addressed the criticism in a 27 March 2017 blog post titled “Don’t Shoot the Messenger.”

In the blog, Telegram defends its model of encrypted messaging between persons in private chats, and it adds that terrorists will always find ways to communicate and coordinate in private messages. The company vehemently opposes any surveillance or limitation of encrypted person-to-person communications for privacy and freedom of speech reasons.

Telegram goes on to point out that it does remove public channels spreading terrorist content. “Just like other networks, Telegram took steps to kick them off this public platform”, wrote Ra. “Terrorist public channels still pop up – just as they do on other networks – but they are reported almost immediately and are shut down within hours, well before they can get any traction.”

In a 27 December 2016 tweet, Telegram said it blocks “over 60 ISIS-related channels” every day “before they get any traction, more than 2,000 channels each month”. Users can follow an “ISIS watch” channel to track removals of terrorist content, and abuses can be reported. The number of daily channel suspensions has since risen to between 100 and 200.

How to fight an ideology

Despite all this, the criticism keeps coming. In July 2017, Telegram was temporarily blocked in Indonesia because it allegedly carried too much violent propaganda. In February, the CEP reported that ISIS supporters had set up a channel called “Le Moujahid Solitaire” containing, among other things, bomb-making instructions and calls to attack in France.

© Fady AlGhorra and Mahmoud Elsobky

Pieter Nanninga

Durov has taken the many media reports about Telegram and terrorism with some irony, at one point quipping on his Twitter feed that even a new passport photo he had taken was “strangely suitable for media articles about terrorists using Telegram.”

Prominent researchers agree that the internet sector has made progress towards better policing and removal of content. At the same time, they warn that much remains to be done.

“The counter-measures by the internet companies have definitely shrunk the area in which jihadi groups can operate”, says Dutch jihadism researcher Pieter Nanninga. “But Islamic State is in the first place an idea, and as long as that idea does not disappear, I think we will keep seeing this kind of material on-line.”

Pieter Van Ostaeyen: “You can counter all you will on-line, but the minute one platform cracks down on jihadis, they immediately move to another. I can’t see how you can realistically prevent people from sharing this kind of content. The only solution would be to effectively pull the plug on the entire internet.”

© Fonds Pascal Decroos

This report was produced with the support of the Fonds Pascal Decroos voor Bijzondere Journalistiek.
