In a June 15 statement, Facebook described plans to reduce and remove terrorist content on its platforms, including by developing artificial intelligence capabilities, expanding its community operations team, and partnering with private- and public-sector groups. Unfortunately, the announcement is bereft of specific details and raises additional questions about transparency and accountability.
Facebook’s announcement confirms what security and policy experts have known for some time: terrorist and extremist content is pervasive and easily accessible online. Even as that content proliferated, Facebook’s content review and moderation teams remained persistently understaffed, and the company has only just begun experimenting with existing and new technologies in this space. Facebook should explain why it (1) delayed building appropriate staffing capacity and (2) waited until now to incorporate new and existing technologies that help detect terrorist and extremist content for removal. Frustrated policymakers and the public deserve a clear answer.
In response to Facebook’s request in its statement to “hear feedback so we can do better,” CEP has posed a series of questions and requests for additional information to better understand the company’s progress in detecting and removing terrorist and extremist content. We hope Facebook will follow through on its stated desire “to answer those questions head on.”
Artificial Intelligence
“Already, the majority of accounts we remove for terrorism we find ourselves.”
“Image matching: When someone tries to upload a terrorist photo or video, our systems look for whether the image matches a known terrorism photo or video. This means that if we previously removed a propaganda video from ISIS, we can work to prevent other accounts from uploading the same video to our site. In many cases, this means that terrorist content intended for upload to Facebook simply never reaches the platform.”
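Facebook does not disclose its implementation, but the technique it describes is well understood. The Python sketch below illustrates the idea with a plain cryptographic digest; production systems use perceptual fingerprints (such as Microsoft’s PhotoDNA) that survive resizing and re-encoding, and every hash value and function name here is illustrative.

```python
import hashlib

# Hypothetical digests of images previously removed as terrorist propaganda.
known_bad_hashes = {"9f2b..."}  # placeholder value, not a real digest

def image_digest(image_bytes: bytes) -> str:
    """Exact-match fingerprint. Production systems use perceptual hashes
    that tolerate resizing, cropping, and re-encoding."""
    return hashlib.sha256(image_bytes).hexdigest()

def screen_upload(image_bytes: bytes) -> bool:
    """Return True if the upload matches known content and should be
    blocked before it ever reaches the platform."""
    return image_digest(image_bytes) in known_bad_hashes
```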
“Language understanding: We have also recently started to experiment with using AI to understand text that might be advocating for terrorism. We’re currently experimenting with analyzing text that we’ve already removed for praising or supporting terrorist organizations such as ISIS and Al Qaeda so we can develop text-based signals that such content may be terrorist propaganda. That analysis goes into an algorithm that is in the early stages of learning how to detect similar posts. The machine learning algorithms work on a feedback loop and get better over time.”
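Again, Facebook names no model. As a rough illustration of the approach it describes, a text classifier trained on previously removed posts and retrained as reviewers label new ones, here is a minimal sketch using scikit-learn; all posts, labels, and scores are invented for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Illustrative corpus: text previously removed for praising or supporting
# terrorist organizations (label 1) alongside benign posts (label 0).
posts = [
    "post praising a terrorist organization",
    "recruitment propaganda urging attacks",
    "photos from our family vacation",
    "great football match last night",
]
labels = [1, 1, 0, 0]

classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
classifier.fit(posts, labels)

# Score new text; high-scoring posts would be queued for human review.
score = classifier.predict_proba(["propaganda urging new attacks"])[0][1]

# The feedback loop: reviewer decisions become fresh labeled examples,
# and periodic retraining is how the algorithm "gets better over time."
posts.append("propaganda urging new attacks")
labels.append(1)
classifier.fit(posts, labels)
```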
“Removing terrorist clusters: We know from studies of terrorists that they tend to radicalize and operate in clusters. This offline trend is reflected online as well. So when we identify Pages, groups, posts or profiles as supporting terrorism, we also use algorithms to ‘fan out’ to try to identify related material that may also support terrorism. We use signals like whether an account is friends with a high number of accounts that have been disabled for terrorism, or whether an account shares the same attributes as a disabled account.”
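The “fan out” Facebook describes is, in essence, a graph traversal seeded at known terrorist accounts. The sketch below assumes a toy friendship graph and a hypothetical threshold on the disabled-friends signal Facebook mentions; a real system would combine many such signals.

```python
from collections import deque

# Toy friendship graph and a set of accounts already disabled for terrorism.
friends = {"a": ["b", "c"], "b": ["a", "d"], "c": ["a"], "d": ["b", "e"], "e": ["d"]}
disabled = {"a", "b"}

def disabled_friend_ratio(account: str) -> float:
    """The signal Facebook cites: how many of an account's friends
    have been disabled for terrorism."""
    circle = friends.get(account, [])
    return sum(f in disabled for f in circle) / len(circle) if circle else 0.0

def fan_out(seeds, threshold=0.5):
    """Breadth-first 'fan out' from known terrorist accounts, flagging
    related accounts whose signal crosses a (hypothetical) threshold."""
    flagged, queue, seen = set(), deque(seeds), set(seeds)
    while queue:
        for friend in friends.get(queue.popleft(), []):
            if friend not in seen:
                seen.add(friend)
                if disabled_friend_ratio(friend) >= threshold:
                    flagged.add(friend)  # candidates for review, not auto-removal
                    queue.append(friend)
    return flagged

print(fan_out(disabled))  # flags 'c' and 'd' in this toy graph
```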
“Recidivism: We’ve also gotten much faster at detecting new fake accounts created by repeat offenders. Through this work, we’ve been able to dramatically reduce the time period that terrorist recidivist accounts are on Facebook. This work is never finished because it is adversarial, and the terrorists are continuously evolving their methods too. We’re constantly identifying new ways that terrorist actors try to circumvent our systems — and we update our tactics accordingly.”
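Facebook gives no detail on how recidivist accounts are matched. One plausible reading, sketched below with entirely hypothetical attributes, is that a new signup is compared against fingerprints (device, network, profile data) of previously disabled accounts.

```python
# Hypothetical fingerprints of accounts previously disabled for terrorism.
disabled_fingerprints = [
    {"device_id": "dev-123", "signup_ip": "203.0.113.7", "display_name": "Abu X"},
]

def matches_disabled(new_account: dict, min_shared: int = 2) -> bool:
    """Flag a new signup that shares enough attributes with a disabled
    account, the signal Facebook describes for catching repeat offenders."""
    return any(
        sum(new_account.get(key) == value for key, value in fp.items()) >= min_shared
        for fp in disabled_fingerprints
    )

# A returning offender often reuses a device or network even under a new name,
# so re-detection can happen at signup rather than after new propaganda spreads.
signup = {"device_id": "dev-123", "signup_ip": "203.0.113.7", "display_name": "New Name"}
assert matches_disabled(signup)
```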
“Cross-platform collaboration: Because we don’t want terrorists to have a place anywhere in the family of Facebook apps, we have begun work on systems to enable us to take action against terrorist accounts across all our platforms, including WhatsApp and Instagram. Given the limited data some of our apps collect as part of their service, the ability to share data across the whole family is indispensable to our efforts to keep all our platforms safe."
Human Expertise
“Reports and reviews: Our community — that’s the people on Facebook — helps us by reporting accounts or content that may violate our policies — including the small fraction that may be related to terrorism. Our Community Operations teams around the world — which we are growing by 3,000 people over the next year — work 24 hours a day and in dozens of languages to review these reports and determine the context. This can be incredibly difficult work, and we support these reviewers with onsite counseling and resiliency training.”
“Real-world threats: We increasingly use AI to identify and remove terrorist content, but computers are not very good at identifying what constitutes a credible threat that merits escalation to law enforcement. We also have a global team that responds within minutes to emergency requests from law enforcement.”
Partnering with Others
“Industry cooperation: In order to more quickly identify and slow the spread of terrorist content online, we joined with Microsoft, Twitter and YouTube six months ago to announce a shared industry database of ‘hashes’ — unique digital fingerprints for photos and videos — for content produced by or in support of terrorist organizations. This collaboration has already proved fruitful, and we hope to add more partners in the future. We are grateful to our partner companies for helping keep Facebook a safe place.”
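The consortium has not published the database’s schema. Conceptually, though, it works as sketched below: each member contributes fingerprints of known terrorist photos and videos, and every member screens uploads against the pooled set, so content first identified by one company can be blocked by all. The hash values and lookup here are placeholders.

```python
import hashlib

# Hypothetical hash lists contributed by each member company.
contributions = {
    "facebook": {"3a7f...", "b91c..."},  # placeholder values
    "youtube": {"b91c...", "44d0..."},
    "twitter": {"e5a2..."},
}

# The shared database is effectively the union of all contributions, so
# content first identified by one company can be caught by every member.
shared_db = set().union(*contributions.values())

def is_known_terrorist_content(file_bytes: bytes) -> bool:
    # Exact digest for illustration; the real database stores robust media
    # fingerprints rather than simple cryptographic hashes.
    return hashlib.sha256(file_bytes).hexdigest() in shared_db
```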
“Governments: Governments and inter-governmental agencies also have a key role to play in convening and providing expertise that is impossible for companies to develop independently. We have learned much through briefings from agencies in different countries about ISIS and Al Qaeda propaganda mechanisms. We have also participated in and benefited from efforts to support industry collaboration by organizations such as the EU Internet Forum, the Global Coalition Against Daesh, and the UK Home Office.”
“Encryption: We know that terrorists sometimes use encrypted messaging to communicate. Encryption technology has many legitimate uses – from protecting our online banking to keeping our photos safe. It’s also essential for journalists, NGO workers, human rights campaigners and others who need to know their messages will remain secure. Because of the way end-to-end encryption works, we can’t read the contents of individual encrypted messages — but we do provide the information we can in response to valid law enforcement requests, consistent with applicable law and our policies.”
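The property Facebook invokes is easy to demonstrate. In end-to-end encryption, only the endpoints hold the private keys, so a relaying server sees ciphertext it cannot decrypt. The sketch below uses the PyNaCl library as a stand-in; WhatsApp actually uses the more elaborate Signal protocol, and this is a simplified illustration only.

```python
from nacl.public import PrivateKey, Box

# Each endpoint generates its own key pair; private keys never leave the device.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts directly to Bob's public key.
ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"meet at noon")

# A server relaying `ciphertext` holds no private key, so it cannot read
# the contents; it can only hand over the metadata it actually has in
# response to valid law enforcement requests.
plaintext = Box(bob_key, alice_key.public_key).decrypt(ciphertext)
assert plaintext == b"meet at noon"
```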
“Counterspeech training: We also believe challenging extremist narratives online is a valuable part of the response to real world extremism. Counterspeech comes in many forms, but at its core these are efforts to prevent people from pursuing a hate-filled, violent life or convincing them to abandon such a life. But counterspeech is only effective if it comes from credible speakers. So we’ve partnered with NGOs and community groups to empower the voices that matter most.”
“Partner programs: We support several major counterspeech programs.”
To read Facebook’s June 15 statement in full, please click here.