Under Pressure From EU Regulators, Facebook Explains How It Will Tackle Extremist Content

The Counter Extremism Project (CEP) issued the following statement today in response to Facebook’s announcement of its plans for combating terrorism online:

In the wake of the renewed prospect of regulations and fines from the EU, Facebook on June 15 published a blog post pledging to develop new technology to root out extremists on its platform.

The stated objective is commendable, but Facebook’s description of its efforts lacks specifics, and serious questions about transparency and accountability remain. Notably:

1. Facebook’s announcement confirms what security and policy experts have known for some time: that terrorist and extremist content is pervasive and easily accessible online, including on social media platforms like Facebook. Given its apparent knowledge of the problem, it is troubling that Facebook’s content review and moderation teams have been so understaffed, even as online radicalization festered and the consequences of inaction became evident. Having acknowledged the role the Internet plays in radicalization, Facebook should explain why it has taken so long to begin experimenting with existing and new technologies to detect terrorist content quickly and accurately.

2. Facebook uses vague language, such as “gotten much faster,” “begun work,” and “started to experiment,” to describe its progress on detecting and removing terrorist and extremist content. Facebook also fails to provide specific data or metrics describing either the scope of the problem or its success in confronting it. Furthermore, Facebook and other major tech companies leverage an abundance of data to customize the user experience and sell ads, which suggests they could also provide the public and policymakers with specific data on online extremism, yet none is provided. For example, just how much terrorist and extremist content is actually on Facebook and related platforms such as WhatsApp and Instagram? How much of this content was removed in 2016 and 2017? What percentage of extremist content is detected using artificial intelligence (AI) versus human review? What is the error rate of AI detection? Finally, how exactly does Facebook define terrorist and extremist content? It is unfortunate that Facebook has decided to keep all of this information hidden from the public.

3. Facebook’s announcement demonstrates once again that tech companies respond to the prospect of lost revenue and increased regulation. Policymakers and security officials in the European Commission, the U.K., France, and Germany should be applauded for their consistent and principled efforts to prevent extremists’ misuse of the Internet. Those entrusted with the general welfare and safety of the public must continue working to compel private-sector actors to ensure that their services and platforms are no longer welcoming environments for terrorists. The public deserves comprehensive explanations and measurable solutions from tech companies. Corporate public relations campaigns may temporarily relieve pressure from regulatory authorities, but only concrete action can ensure public safety.

To read CEP’s complete analysis of Facebook’s June 15 statement, please click here.
