On Tuesday, Facebook whistleblower Frances Haugen testified at a hearing before the U.S. Senate’s Subcommittee on Consumer Protection, Product Safety, and Data Security. She explained how Facebook’s products and algorithms have had negative effects on mental health, bullying, human trafficking, and political discourse, and how the company has failed to take appropriate action to reduce or eliminate these harms.
In her opening statement, Haugen called on Congress to act, saying, “I believe that Facebook’s products harm children, stoke division, weaken our democracy and much more. The company’s leadership knows ways to make Facebook and Instagram safer and won’t make the necessary changes because they have put their immense profits before people. Congressional action is needed.”
“Facebook’s actions—whether it be selectively applying its content moderation rules, withholding information on the harmful effects the platform has on young users, or doing nothing when it comes to content that incites violence and hatred—have one common thread: putting profits ahead of public safety,” said Counter Extremism Project (CEP) Executive Director David Ibsen. “Like other for-profit tech firms, Facebook’s business model is very clear and relies on algorithms to drive engagement to make money—including pushing content that is divisive, conspiratorial, and extremist. Facebook’s behavior proves that its repeated promises to do better cannot be trusted. In the face of inaction and even defiance from the tech giant, government regulators must step in.”
Haugen’s testimony comes a month after the Wall Street Journal published The Facebook Files, an investigative series based on internal documents that Haugen provided to the paper. Haugen has also been invited to testify before the European Parliament.
Earlier this year, CEP hosted a webinar with Dr. Hany Farid, senior advisor to CEP and a professor at UC Berkeley, to explore the nature and extent of the global phenomenon of misinformation as well as the role of algorithmic amplification in promoting misinformation and divisive content online. Dr. Farid said: “Algorithmic amplification is the root cause of the unprecedented dissemination of hate speech, misinformation, conspiracy theories, and harmful content online. Platforms have learned that divisive content attracts the highest number of users and as such, the real power lies with these recommendation algorithms. Until thorough regulation is put in place, controversial content will continue to be promoted and amplified online.”
Facebook has long faced criticism for the misuse of its platform on issues ranging from the publication of inappropriate content to user privacy and safety. Rather than taking preventative measures, however, Facebook has made policy changes only after damage has already been done. CEP has documented instances in which Facebook announced express policy changes following public accusations, a scandal, or pressure from lawmakers. While one would hope that Facebook is continuously working to improve safety on its platform, there is no excuse for so many policy changes being reactive, and it raises the question of what other scandals are in the making due to still-undiscovered lapses in Facebook’s current policies.
To read CEP’s resource Tracking Facebook’s Policy Changes, please click here.