NGOs express fear that new EU ‘terrorist content’ draft will make things worse for human rights defenders

January 31, 2019

On Wednesday 30 January 2019, Mike Masnick published a piece in TechDirt entitled “Human Rights Groups Plead With The EU Not To Pass Its Awful ‘Terrorist Content’ Regulation”. Its key argument is that machine-learning algorithms are not able to distinguish between terrorist propaganda and documentation of, say, war crimes. As an example, it points out that Germany’s anti-“hate speech” law has already been misused as a model by authoritarian regimes.

The EU’s Terrorist Content Regulation is shaping up to be a true horror story, as we discussed in a recent podcast on the topic. As covered in that podcast, the EU is barreling forward on that regulation with little concern for the damage it will do (indeed, with little concern for showing any evidence that it’s needed).

The basic idea behind the regulation is that, apparently, the internet is full of horrible “terrorist content” that is doing real damage (citation needed, but none given), and therefore, any online platform (including small ones) will be required to remove content based on the demands of basically anyone insisting they represent a government or law enforcement authority, within one hour of the report being sent, or the site will face crippling liability. On top of that, the regulation will create incentives for internet platforms to monitor all speech and proactively block lots of speech with little to no recourse. It’s a really, really bad idea, and everyone is so focused elsewhere that there hasn’t been that much public outcry about it.

The group WITNESS, which helps people — be they activists or just everyday citizens — document and record human rights violations and atrocities around the globe, has teamed up with a number of other human rights groups to warn the EU just how damaging such a regulation would be:

Our letter is based on real-world experience of exactly these problems.

In particular, as we note in the letter, Syrian Archive and WITNESS, as well as many of our partners, “have seen firsthand that use of machine-learning algorithms to detect ‘extremist content’ has created hundreds of thousands of false positives and damaged [a huge] body of human rights content.” As Jillian York of the Electronic Frontier Foundation notes, “By placing the burden on companies to make rapid decisions in order to avoid penalties, this regulation will undoubtedly result in censorship of legitimate speech.” That includes content essential “for prosecutions or other accountability processes across borders or in conflict situations.” The proposal ignores the incredible work of human rights defenders and journalists who “risk their freedom, safety, and sometimes even their lives to upload videos for the purpose of exposing abuses to the world. Without this documentation, we would have little idea what is happening in Syria, Yemen, Myanmar, the favelas of Brazil, and elsewhere.”

Indeed, as we’ve documented, past attempts by digital platforms to block “terrorist content” have resulted in the blocking of activists documenting war crimes. No one pushing for these laws has yet explained how one can distinguish “terrorist content” from “documenting war crimes” — and that’s because the two are often the exact same material, just used for a different purpose. But any law that requires technology filters to “block” such content will simply not be able to comprehend the difference.

The letter also highlights how — as we saw with Russia copying Germany’s anti-“hate speech” law — authoritarian regimes will use these kinds of laws to justify “similar” laws that are actually used to suppress dissent:

In addition to devastating the processes being used to create and preserve human rights content, this regulation will harm some of the most vulnerable groups in the world by inspiring dangerous copycat regulation that will be used to silence essential voices of dissent. This is not hypothetical, as Germany’s NetzDG law is already inspiring replicas in authoritarian countries, including Russia. The impact of the European Union on global norms should be a net positive, especially in the face of a rising tide of political repression, violence and fascism. This regulation hinders efforts to fight that tide.

One hopes that EU regulators can understand this and back away from this disaster of a proposal, but the EU’s recent record on internet legislation does not suggest we have much to be hopeful about.
