Pitfalls of Innocent takedown regimes – Berlin IGF 2019

Feb 10, 2020 | Free Speech, Open Blog

This is a summary of Kyung Sin Park’s speech at an IGF workshop on addressing terrorist and violent extremist content online: https://www.intgovforum.org/multilingual/content/addressing-terrorist-and-violent-extremist-content-online.

Violent extremist content is a misnomer. There is a lot of extremely violent content online and offline. Look at the games that kids play. Remember the movie A Clockwork Orange. The US Supreme Court has already said that content cannot be banned simply for what it depicts: animal cruelty videos in the Stevens decision and violent video games in the Brown decision. Content can be banned only for its external harms, and only through regulations necessary in a democratic society. In the context of the examples that the Prime Minister talked about, I believe that what we really mean by violent extremist content is hate speech. We should couch this discourse comfortably within hate speech regulation, which we have already come to understand and write about so eloquently.

Having said that, do we really understand hate speech? Questions remain. UN norms define hate speech as advocacy of discrimination and hostility along national, religious, and racial lines. But what does hostility mean? Is causing a hostile mindset a sufficient basis for regulation? Why not count the chilling effect on vulnerable hearers of speech as a harm justifying regulation? Should counter-speech by a minority also count as hate speech when it does not have the effect of energizing a pre-existing oppressive system permissive of discrimination and hostility against certain groups, and is therefore not likely to cause discrimination and hostility? And what about merely joining an organization advocating such violence, rather than engaging in the advocacy oneself?

Terrorism can kill some of us quickly, but if we respond in the wrong way, it will kill all of us slowly. Hate speech regulation may be necessary for a democratic society, but overreaching hate speech regulation will destroy a democratic society. Consider Germany’s NetzDG (Network Enforcement Act): it targets illegal content, imposes transparency obligations, and holds platforms liable through hefty fines for not immediately removing illegal content. It takes time to find out whether something is illegal, so platforms will take their chances on the side of deleting, resulting in many, many false positives. Holding platforms liable for illegal content sounds innocent. But defamation law (and, in Germany in particular, insult law) already covers this content, and superimposing any content removal regime on top of it will cause many false positives. Australia’s abhorrent violent material takedown law will be even more problematic, because its definition is even broader and more amorphous than Germany’s definition of illegal content under the criminal code.

Look at Korea’s law mandating takedown of illegal content. Platforms are so habituated to taking down even marginally controversial content that they took down crucial first warnings about humidifier disinfectants, and even content designed to fight sexual discrimination. Korea’s 2016 Anti-Terrorism Act will further increase false positives because its definition of terrorism is even broader: a Syrian living in Korea was, incredibly, prosecuted for cyberterrorism for uploading a Sunni extremist group’s promotional video, and was fortunately found not guilty. An even broader anti-cyberterrorism law defines cyberterrorism as, to quote, “invading, interfering, paralyzing, destroying telecommunication facilities or stealing, destroying, damaging, or transmitting by distorting information by hacking, computer virus, service interference, and other electronic means.”

Intermediary liability safe harbor is now a human right. The EU’s e-Commerce Directive bans general monitoring obligations and liability for content the platform does not know about, in order to allow an Internet freedom within which truth can crowd out falsity. Also, safe harbors such as DMCA 512 incentivize platforms to take things down without increasing non-consensual false positives (DMCA takedowns being consensual because they are open to reinstatement requests).

GIFCT (sharing hashes, training small platforms on content moderation) is another approach, but I do not like it. It is the first time the Big Three companies have come together to decide what goes down and what does not go down. It can cartelize the Internet to some extent, and people will not have a choice among platforms with differing levels of content tolerance.

Also, the Facebook representative just said that Facebook has more than 15,000 reviewers and more than 300 policymakers, and has removed more than 10 million pieces of content in one year. What will happen when AI does this? Will machine-based content moderation be considered censorial?
