For immediate release | Tuesday, November 27, 2018

ICYMI: New Report on Germany’s NetzDG Online Hate Speech Law Shows No Threat of Over-Blocking

Press Contact:

Media at CEP

On November 22, the Counter Extremism Project (CEP) in partnership with the Centre for European Policy Studies (CEPS) launched a joint report analyzing the impact of Germany’s Network Enforcement Act (NetzDG). In the report, Germany’s NetzDG: A key test for combatting online hate, researchers found that concerns that NetzDG would lead to censorship were unfounded.

First coming into effect on January 1, 2018, Germany’s NetzDG is an ambitious law that fines online platforms for failing to delete illegal content and represents a crucial measure to help combat hate speech online. CEP found that the law has not resulted in a flood of reports or in over-blocking, and researchers uncovered no evidence of widespread false positives. Additionally, the study offers suggestions for improving the law, including establishing clear reporting standards, targeting terrorist content, outlawing re-uploading, establishing a forum for disputed content, spending more on law enforcement, and requiring companies to reveal raw data.

CEP Executive Director David Ibsen co-authored an opinion piece in Euronews with CEPS Associate Senior Research Fellow William Echikson in conjunction with the release of the new study.

###

Germany’s New Anti-Hate Speech Law Needs Teeth If It Has Any Hope Of Stamping It Out Online

William Echikson & David Ibsen

November 23, 2018

Euronews

Critics accuse it of facilitating a draconian censorship regime. Supporters say it will stem the rising tide of online hate speech. As the most ambitious law of its kind, Germany’s new Network Enforcement Act (NetzDG) has become a touchstone for Western democracies struggling to deal with hate speech on the internet.

The law is designed to force social networks to effectively monitor and remove dangerous content. Online platforms operating in Germany face fines of up to €50 million for failing to systematically delete it.

On New Year’s Day 2018 - the moment it came into effect - the critical concern that NetzDG would ultimately act as a censorship tool appeared to come to fruition. Twitter and Facebook took down a post from far-right Alternative For Germany (AfD) politician Beatrix von Storch that mentioned “barbaric, Muslim, rapist hordes of men.” Twitter later suspended the account of Titanic, a German satirical magazine, for mocking von Storch.

In reality, concerns that NetzDG would lead to censorship have proven unfounded. Its introduction has precipitated a trickle rather than a flood of reports. Research by the Counter Extremism Project (CEP) has uncovered no further evidence of false positives and shows that three-quarters of reports are not upheld by social media companies. No fines have been imposed on companies, and there has been little change in overall takedown rates (21.2% for Facebook and 10.8% for Twitter).

While NetzDG is a bold attempt at addressing the right problem, it needs more teeth to effectively tackle hate speech.

Here are a few ideas that would improve NetzDG now and ensure it does the job it is supposed to:

Establish clear reporting standards

Google, Facebook and Twitter all have individual reporting formulas under NetzDG, creating confusion for users and making reporting more opaque. These should be standardised.

Target terrorist content

NetzDG covers 21 criminal offences but does not differentiate between them. Most users, however, would agree that different types of content merit different approaches. Targeting terrorist content uploaded by organisations on a commonly agreed list would limit the danger of over-blocking. Recently proposed European Commission legislation targeting terrorist content takes this approach.

Outlaw re-uploading

At present, if an extremist simply presses ‘upload’ on the same content, it must be flagged and checked all over again. This is neither efficient nor effective. A CEP study showed that some 91% of Islamic State videos uploaded to YouTube appeared on the platform more than once. Google’s attempts at manual management are failing, with 24% of terrorist videos remaining online for more than two hours. However, there is already a proven solution: automated re-upload filters are common practice in the fight against child pornography.

None of these reforms will ‘solve’ the problem of online hate. It is not a problem that can be solved, only combatted. But will alone is not enough: we must also ensure the tools we use are as effective as they can be. Failure on either front will mean those who spread hate online win.

William Echikson is an Associate Senior Research Fellow at the Centre for European Policy Studies (CEPS) and the Head of the CEPS Digital Forum.

David Ibsen is the Executive Director for the Counter Extremism Project (CEP).