Zuckerberg’s 99 Percent Myth: Extremist Content Removal Claims False & Misleading

For 18 Months, Company Has Been Repeating Incorrect Statistic to the Public

(New York, N.Y.) - Since 2017, Facebook has on numerous occasions announced to the public, shareholders and the U.S. Congress that it removes “99 percent” of Al Qaeda and ISIS-related content from its platforms. However, a recent Associated Press investigation into a whistleblower complaint filed with the Securities and Exchange Commission (SEC) alleges that Facebook’s actual removal rate for extremist content was just 38 percent, not 99 percent. This disconnect once again calls into question the company’s credibility and its commitment to removing dangerous content from its platform.

CEO Mark Zuckerberg, other company executives and Facebook itself must be held accountable for repeatedly touting what have turned out to be false and misleading claims. Over the course of one and a half years, the “99 Percent Myth” was relayed in testimony before both houses of Congress, on quarterly earnings calls and in press release after press release. This talking point once again exposes the company’s efforts to assuage critics and limit legislative or regulatory pressure, and calls into question its commitment to removing terrorist and extremist content from its platform.

Facebook’s troubling “99 percent” removal statistic is intentionally vague and is meant to stamp out valid criticism of the company. CEP has documented a multitude of instances in which Mr. Zuckerberg, Sheryl Sandberg, Monika Bickert, Brian Fishman and other company executives have used various media to tout what the whistleblower complaint to the SEC now alleges to be entirely false information, and which Congress and investors also relied upon. The company’s inability to moderate its platform threatens the safety of its users and the public at large, serving as another example of why the era of self-regulation for tech must end.

To view the instances in which Facebook has made its 99 percent removal claim, please continue reading below.

Mark Zuckerberg (CEO)

Testifying Before Congress In 2018, Zuckerberg Claimed “99 Percent Of The ISIS And Al Qaida Content That We Take Down On Facebook, Our A.I. Systems Flag Before Any Human Sees It.” (“Transcript Of Mark Zuckerberg’s Senate Hearing,” The Washington Post, 4/10/18)

During An April 2019 Earnings Call, Zuckerberg Said “99 Percent” Of Extremist Content Is Flagged Before Anyone Sees It. “In the face of criticism, CEO Mark Zuckerberg has spoken of his pride in the company's ability to weed out violent posts automatically through artificial intelligence.  During an earnings call last month, for instance, he repeated a carefully worded formulation that Facebook has been employing.  ‘In areas like terrorism, for al-Qaida and ISIS-related content, now 99 percent of the content that we take down in the category our systems flag proactively before anyone sees it,’ he said. Then he added: ‘That's what really good looks like.’” (“Facebook Auto-Generates Videos Celebrating Extremist Images,” The Associated Press, 5/9/19)

Zuckerberg Wrote In A Facebook Post That Artificial Intelligence Takes Down “99 Percent Of The Terrorist-Related Content We Remove Before Anyone Even Reports It.” “For stopping the spread of harmful content, we've built AI systems to automatically identify and remove content related to terrorism, hate speech, and more before anyone even sees it.  These systems take down 99% of the terrorist-related content we remove before anyone even reports it, for example.” (Facebook, 12/28/18)

In An April 2018 Earnings Call, Zuckerberg Said He Was Proud Facebook’s AI Tools “Take Down ISIS And Al-Qaeda-Related Terror Content, With 99 Percent Of That Content Being Removed Before Any Person Flags It To Us.” “In the past month, Mark Zuckerberg has boasted to Congress and investors that Facebook Inc.’s artificial intelligence programs are turning the tide against extremism on his site.  ‘One thing that I’m proud of is our AI tools that help us take down ISIS and al-Qaeda-related terror content, with 99 percent of that content being removed before any person flags it to us,’ the chief executive said on April’s earnings call.  Facebook executives repeated that number onstage at early May’s annual developer conference.” (Vernon Silver, “Terrorists Are Still Recruiting On Facebook, Despite Zuckerberg’s Reassurances,” Bloomberg Businessweek, 5/10/18)

Sheryl Sandberg (COO)

Sandberg: “Ninety-Nine Percent Of The ISIS Content We're Able To Take Down Now We Find Before It's Even Posted.” “There's no place for terrorism on our platform.  We've worked really hard on this.  Ninety-nine percent of the ISIS content we're able to take down now we find before it's even posted.  We've worked very closely with law enforcement all across the world to make sure there is no terrorism content on our site, and that's something we care about very deeply.” (Vanessa Romo, “Facebook's Sheryl Sandberg On Data Privacy Fail: 'We Were Way Too Idealistic,’” NPR, 4/6/18)

Sandberg: “We Take Down 99% Of ISIS Or That Kind Of Terrorist Content Before It Even Hits The Platform.” “We are massively investing in machine learning and automation that can help find some of these things.  There are areas where we’ve had great success.  We take down 99% of ISIS or that kind of terrorist content before it even hits the platform. Machines make that possible.  Our commitment is clear.  What matters is our community.  What matters is the safety and security of people who use Facebook.” (“CNBC Exclusive: CNBC Transcript: Sheryl Sandberg Sits Down With CNBC’s Julia Boorstin Today,” CNBC, 3/22/18)

Monika Bickert (Head Of Global Policy Management)

Bickert Said “More Than 99 Percent” Of Terror Propaganda Is Identified Using “Technical Tools Our Engineers Have Built.” “We are now hiring a lot of engineers who are working on that and are making it so that when content is uploaded to Facebook, we are able to use technical tools to flag it, sometimes removing it even at the time of upload.  You think about terrorism propaganda.  In the first quarter of 2018, we removed 1.9 million posts for being related to terror propaganda, and of those, more than 99 percent we identified using the technical tools that our engineers have built.” (“The Facebook Dilemma,” PBS, 9/6/18)

Bickert Reported In Q1 Of 2018: “99% Of The ISIS And Al-Qaeda Content We Took Action On Was Before A User Reported It.” “We find the vast majority of this content ourselves.  In Q1 2018, 99% of the ISIS and al-Qaeda content we took action on was before a user reported it … In most cases, we found this material due to advances in our technology, but this also includes detection by our internal reviewers.” (Monika Bickert, “Hard Questions: How Effective Is Technology In Keeping Terrorists Off Facebook,” Facebook Newsroom, 4/23/18)

In 2017, Bickert Wrote “99 Percent Of The ISIS And Al-Qaida Terror-Related Content” Is Removed Before Being Reported. “Facebook announced that 99 percent of the ISIS and al-Qaida terror-related content it removes from the social network is detected before even being flagged and, in some cases, before it even goes live. Head of global policy management Monika Bickert and head of counterterrorism policy Brian Fishman said in the latest installment of the social network’s Hard Questions series that by using automated systems—including photo and video matching and text-based machine learning—once Facebook becomes aware of terror content, 83 percent of subsequently uploaded copies are removed within one hour of upload.” (David Cohen, “Facebook Says It’s Removing 99 Percent of Terror Content Before It’s Even Reported,” AdWeek, 11/29/17)

Brian Fishman (Policy Director Of Counterterrorism)

Fishman: Facebook Has Removed 99 Percent Of “Islamic State And Al-Qaeda Content” By Using Artificial Intelligence. “Brian Fishman, Facebook’s global head of counterterrorism, said the social network had zero tolerance for any group that the United States listed as a terrorist entity.  He added that the company had removed 99 percent of Islamic State and Al Qaeda content largely by using artificial intelligence.” (Sheera Frenkel, “After Social Media Bans, Militant Groups Found Ways to Remain,” The New York Times, 4/19/19)

Fishman: “In Both Q2 And Q3 We Found More Than 99% Of The ISIS And Al-Qaeda Content Ultimately Removed Ourselves, Before It Was Reported By Anyone In Our Community.” “In both Q2 and Q3 we found more than 99% of the ISIS and al-Qaeda content ultimately removed ourselves, before it was reported by anyone in our community.  These figures represent significant increases from Q1 2018, when we took action on 1.9 million pieces of content, 640,000 of which was identified using specialized tools to find older content.” (Brian Fishman, “Hard Questions: What Are We Doing to Stay Ahead of Terrorists,” Facebook Newsroom, 11/8/18)

In 2017, Fishman Wrote “99 Percent Of The ISIS And Al-Qaida Terror-Related Content” Is Removed Before Being Reported. “Facebook announced that 99 percent of the ISIS and al-Qaida terror-related content it removes from the social network is detected before even being flagged and, in some cases, before it even goes live.  Head of global policy management Monika Bickert and head of counterterrorism policy Brian Fishman said in the latest installment of the social network’s Hard Questions series that by using automated systems—including photo and video matching and text-based machine learning—once Facebook becomes aware of terror content, 83 percent of subsequently uploaded copies are removed within one hour of upload.” (David Cohen, “Facebook Says It’s Removing 99 Percent of Terror Content Before It’s Even Reported,” AdWeek, 11/29/17)

Antigone Davis (Global Head Of Safety)

Davis: Facebook Has Taken Down “8.7 Million Pieces Of Content … 99 Percent Of Which Was Removed Before Anyone Reported It.” “Previously, Facebook used a ‘photo-matching’ technology that could only detect ‘known’ images of child nudity and expel them from the platform.  Facebook’s new technology is able to detect ‘unknown images,’ casting a wider net for exploitative content.  According to Davis, this has enabled the platform to take down ‘8.7 million pieces of content,’ and ‘99 percent of which was removed before anyone reported it.’” (Jasmin Boyce, “Facebook Touts Use Of Artificial Intelligence To Fight Child Exploitation,” NBC News, 10/24/18)

Steve Hatch (Vice President Of Northern Europe)

Hatch: Facebook Removes “More Than 99% Of Fake Accounts, Spam And Terror Content Before Any Users Report It To Us.” “However, we’re not just waiting for regulation, we’re taking action now. Thanks to investments in people and artificial intelligence, we’re making progress in keeping harmful content off our platform – taking down more than 99% of fake accounts, spam and terror content before any users report it to us, more than 96% of adult nudity, sexual activity and violent content, and making progress in other important areas such as hate speech.” (Steve Hatch, “Steve Hatch: Facebook Takes Responsibility And Welcomes Regulation,” Campaign Live, 4/18/19)

Sheen Handoo (Public Policy Manager)

Handoo Said Facebook Removed “1.9 Million Pieces Of Terrorist Propaganda… About 99.5% Of It Was Flagged By Our AI And Machine Learning Tools” Over A Six-Month Period. “We recently published a detailed enforcement report.  Globally, between October 2017 and March 2018, we disabled 583 million fake accounts within minutes of registration, and 99% were flagged by internal tools.  We identified 837 million cases of spam, of which nearly 100% were flagged before reporting.  We also removed 1.9 million pieces of terrorist propaganda, and about 99.5% of it was flagged by our AI and machine learning tools.” (Durba Ghosh, “Facebook Has A List Of Hazardous Content That Can Get Your Account Deactivated,” Quartz, 11/1/18)
