Company’s Promises to Do More to Combat Hate Speech and Re-Uploads of Known Terrorist Material Are Again Proven Meaningless
A few weeks before the New Zealand shootings were live-streamed on Facebook, the company’s top executives, including Mark Zuckerberg, were touting its improved safety and security processes in a public relations campaign. In a Fortune profile published the day before the terrorist attack, Facebook’s Chief Technology Officer Mike Schroepfer bragged about the company’s artificial intelligence (AI) and its supposed ability to identify harmful content within “hundredths of milliseconds, billions of times a day.” His proof was a demonstration showing that the system could differentiate between a picture of marijuana and a picture of broccoli at an overall accuracy rate of approximately 90 percent. Given the number of uploads to Facebook each day, that accuracy rate equates to about 100 million mistakes.
This month, the company also released a series of sponsored articles “brought to you by Facebook” in The Telegraph. The latest article in the series focused on the company’s Counter-Terrorism and Dangerous Organisations Team, describing how the team employs a “mix of solutions” to eradicate extremism from the website. The self-congratulatory article discussed the company’s achievement of “extraordinary things” as it seeks to “understand the context and changing nature of terrorist content.” By using a “mix of human expertise, tooling and machine learning,” Facebook claims, it is able to remove “83 percent of subsequently uploaded copies within one hour of upload.”
“Last April, Mr. Zuckerberg testified before the U.S. Congress and stated in no uncertain terms that Facebook was able to remove 99 percent of all ISIS and Al Qaeda material. Now, we’re expected to praise success rates as low as 90 percent. Additionally, failing one out of 10 times in an obvious comparison test between a cannabis plant and broccoli is not only embarrassing, but absolutely unacceptable given that Facebook has sought to convince the public and lawmakers that AI will save the day. Moreover, these shamefully low rates mean nothing relative to the billions of pieces of digital content uploaded per day,” said Counter Extremism Project Senior Advisor Dr. Hany Farid. “Worse, Facebook’s much touted artificial intelligence failed when it counted most. Facebook did nothing until law enforcement alerted the company to the New Zealand livestream and struggled to prevent the re-upload of the horrific video.”
Facebook’s AI algorithms did not stop the live-streamed attack from being uploaded more than one million times. Since then, Facebook has excused its failures by listing a litany of processes that were in place leading up to the attack, even noting that “no users reported the video during the live broadcast” – the very gap its algorithms were supposedly created to fix. And more than 24 hours after the attack took place, it had still failed to remove approximately 300,000 different uploads of the livestream. CEP’s tracker of Facebook’s policy changes shows that, time and time again, the company does not act until the damage is done – and that when it does act, it is far too little, far too late.