On 2 February 2016, representatives of tech giants Google, Facebook, and Twitter appeared in front of the U.K. Parliament’s Home Affairs Select Committee as part of its Countering Extremism Inquiry.
Facebook hosts 1.59 billion accounts across the world, Twitter has 320 million users, and Google, just the previous day, had become the most valuable company in the world. Google’s subsidiary YouTube sees 400 hours of video uploaded every minute. In short, these companies provide enormously powerful communication platforms used by people all over the world.
The U.K. Anti-Terror Police Chief Mark Rowley had reported to the Committee in 2015 that cooperation between U.K. security authorities and Internet-based services was less than ideal, going as far as to say that some Internet platforms were “undermining” counterterrorism activities.
Understandably, none of the social media company representatives was prepared to accept responsibility for the dissemination of extremist narratives, though all were quick to condemn them. The representatives were reluctant even to admit that their platforms were being utilised by extremists to spread their messages. Most notably, none of the companies stated that it monitored the content of its websites or maintained teams dedicated to ensuring that extremist and terrorist activity was reported to the authorities. Instead, they relied on users (i.e. the general public) to report activity of concern. Twitter argued that its platform was inherently transparent but refused to acknowledge any responsibility for monitoring dangerous or inappropriate activity.
The representatives included Dr. Anthony House, Head of Public Policy Strategy for Google in Europe, the Middle East, and Africa; Simon Milner, Policy Director for Facebook U.K.; and Nick Pickles, Head of U.K. Public Policy for Twitter. They were asked how many requests they had received from the British authorities for information on their users in relation to terrorist activity. Google received 1,200 requests in 2013 and 3,000 in 2015, a significant rise but still relatively small considering its vast customer base. Twitter reported fewer than 300 requests in the last six months of 2015. The Committee Chair asked the three whether social media companies tipped off account holders who were under surveillance, but a straight answer was not forthcoming. Twitter works with law enforcement on a case-by-case basis, the Committee was told, and decisions on whether to notify users about investigations are content specific. Facebook receives a similar number of requests from authorities to Google’s, and it does not tip off account holders under investigation in the U.K. It may, however, do so in the U.S. because of differing legal considerations.
In terms of counter-extremism activity, Facebook and Google have each launched innovative programmes to empower users to take action against extremism. Facebook’s Online Civil Courage Initiative (OCCI), recently announced in Berlin as a three-year programme, is aimed at combatting online extremism and hate speech. OCCI promotes a better understanding of social media platforms and helps people respond to hate speech and extremism online, fostering the civil courage that already exists amongst Facebook users. Facebook will also provide financial and marketing support to European NGOs already working to counter online extremism, help develop best practices for use by NGOs, governments, and other online services, and fund essential research into the causes of violent extremism and hate speech, in order to better understand these problems.
Under Google’s AdWords Grants programme, a pilot is beginning that will run advertisements against terrorism-related search queries. Users who search for extremist-related material are to be shown counter-radicalisation links. The initiative, aimed at combatting the online influence of groups such as ISIS, is running alongside another pilot scheme designed to make counter-radicalisation videos easier to find on YouTube.
Twitter mentioned nothing similar but claimed that it could offer assistance in terms of expertise. Twitter made clear, however, that it would not be involved in counter-messaging. Whilst Twitter may not create extremist messages, it is certainly providing a platform on which hate speech can propagate unchecked and unchallenged. Of more concern is that Twitter does not appear to prioritise reports of extremist content: there is a single avenue for reporting all types of inappropriate content, and it is neither streamlined nor efficient. Since Twitter admits it relies on the public to flag content, its failure to properly empower those users is alarming.
Some social media companies are beginning to take action against extremists’ misuse of their online platforms. Nonetheless, these programmes remain relatively new and small in scale. These companies must continue to work cooperatively with NGOs, industry, and governments to tackle extremists’ recruitment, propagandizing, and incitement to violence before it begins.