Facebook, Extremist Content, A.I., David Ibsen|May 23, 2019|CEP Staff

Facebook Spin Machine Goes Into Overdrive

Top Company Executives Double-Down on Artificial Intelligence & the 99 Percent Myth

Earlier this month, an Associated Press investigation exposed whistleblower claims made to the Securities and Exchange Commission (SEC) alleging that Facebook misled the public about its 99 percent extremist content removal rate. The whistleblower claims that the removal rate was, in reality, just 38 percent. Since then, and as expected, Facebook has launched a public relations campaign to try to repair the company’s reputation by reverting to muddled and misleading messaging about the efficacy of its artificial intelligence (A.I.) program and its ability to detect and block images and videos that are impermissible on its platform. Facebook is also seeking to deflect criticism of its misleading statements and failed policies via sympathetic depictions of its senior leadership in the media.

Facebook officials tell one story, parroting CEO Mark Zuckerberg’s zealot-like belief in the all-encompassing power of A.I. to solve the company’s problems with extremist content. But to counter the growing evidence that Facebook’s A.I. is a failure, notably its inability to detect mass shooting videos, Chief Technology Officer Mike Schroepfer was bizarrely and prominently featured in the New York Times. Schroepfer continued celebrating the company A.I.’s ability to distinguish between broccoli and marijuana while, at the same time, being depicted as someone struggling to come to terms with the overwhelming and insurmountable challenge of curbing misuse of the platform. The effort to understand Schroepfer’s state of mind and work challenges is puzzling given that many corporate executives and managers face tremendous pressures on a daily basis, yet are not subject to profiles in major media outlets. Large-scale corporate problems are not unique to Facebook.

“At the same time Facebook executives are desperately trying to forestall criticism and regulation by hyping overly optimistic claims about A.I.’s capability, they are also seeking sympathy by constantly complaining about roadblocks and challenges inherent in A.I. technology and also the challenges inherent in the scale of their platform,” said Counter Extremism Project Executive Director David Ibsen. “They talk about how their incredible A.I. tools can distinguish between broccoli and marijuana—something that most humans can differentiate. Yet, these same smart and powerful algorithms can’t seem to help Facebook actually enforce its own terms of service like detecting and removing a first-person video of a massacre on its site. When predictable and supposedly preventable abuse occurs, Facebook engineers don’t fix the problem – they instead gather journalists to share excuses and a good cry. Extremist content appears on Facebook every day, inspiring more aspiring terrorists, even though the material is clearly prohibited and in verifiable cases has previously been flagged and removed. Facebook must be held accountable for its own failures and deceptions.”

Hard data and the recent complaint filed by a Facebook whistleblower directly contradict the 99 percent removal rate—showing repeated failures, flawed technologies, deceptions, and an unwillingness to tackle online extremism seriously. Facebook executives have no answer to highly credible claims and hard evidence that this 99 percent figure is absurdly false and that its A.I. systems are woefully inadequate. Instead, the company has relied on PR tactics and spin. Documents filed by a whistleblower allege that Facebook’s A.I. detected only 38 percent of profiles displaying known terrorist symbols over a five-month period, and that its A.I. missed millions of uploads of the March 2019 New Zealand mass shooting video—material that had already been seen by its A.I. systems and could have been easily blocked.

The tabulation on extremist content is simple. Images and videos are either blocked or they are not. Today, the score is clearly not in Facebook’s favor. Company officials need to end their overreliance on faulty A.I. systems and admit that their approach and misleading claims of success have allowed extremist content to proliferate. Only then can Facebook begin to restore its credibility.