The European Union’s (EU) proposed Terrorist Content Regulation continues to face opposition from major tech companies, including Google/YouTube and Microsoft. The regulation would allow EU member states to fine tech firms up to four percent of their revenue for failing to consistently remove extremist content from their platforms. It would also require such content to be taken down within one hour of receiving notice from public authorities. Given tech companies’ consistent inability to enforce their own terms of service, the EU’s proposal is part of a growing trend by legislative bodies to prescribe content moderation policies in an effort to stem the spread of extremist and terrorist material online. The Terrorist Content Regulation builds upon Germany’s pioneering 2018 Network Enforcement Act (NetzDG).
“The EU’s counterterrorism legislation sets a reasonable one-hour standard for taking down extremist content. The longer terrorist propaganda and recruiting materials remain online, the more their viewership will increase and the higher the likelihood that they will be viewed, copied, and uploaded elsewhere,” said Counter Extremism Project (CEP) Executive Director David Ibsen. “It is therefore only sensible that governments are seeking to instill a duty of care for tech companies to ensure that the failure to remove content online does not translate into real-world harm. Instead of lobbying against such sensible public policy, major tech companies should harness their extensive resources and capabilities to work toward achieving the one-hour takedown goal. Moreover, tech firms should proactively utilize industry consortiums such as the Global Internet Forum to Counter Terrorism (GIFCT) to better leverage technological expertise and share information and best practices between large and small firms, as its founders claim it was originally created to do. Doing so will help keep extremist and terrorist material permanently off the open web.”
When NetzDG first entered into force, critics argued, as they do now with the EU proposal, that it would lead to overreporting and the suppression of free speech, as well as to the stifling of innovation, since smaller companies would lack the resources needed to comply with the law. A joint report by CEP and the Centre for European Policy Studies (CEPS), released in November 2018, showed these concerns were unfounded: NetzDG did not result in a flood of reports or in over-blocking, and the expense of implementing NetzDG was minimal, at one percent of total revenue.
This March, CEP Berlin released a follow-up NetzDG study that tested the tech industry’s compliance with the law between January 31 and February 14 of this year. The law in its current form requires online platforms to remove “manifestly illegal” content within 24 hours, but only after it has been reported by users. CEP’s study revealed that YouTube, Facebook, and Instagram removed a mere 43.5 percent of clearly extremist and terrorist content, even after that material was reported as illegal under NetzDG. CEP Berlin’s findings suggested that this “notice and takedown” method of removing illegal content can only be effective if platforms are searched continuously and systematically for such material.
To read the results of CEP Berlin’s NetzDG study and the policy paper, please click here.
To read CEP and CEPS’ joint study on NetzDG, Germany’s NetzDG: A Key Test For Combatting Online Hate, please click here.