“Platforms’ voluntary efforts to remove hate speech: Need for a matrix”

Apr 30, 2019 | Free Speech, Open Blog

Co-hosted by the Israel Democracy Institute and David Kaye, UN Special Rapporteur on Freedom of Expression

Date: April 11, 2019

Venue: UC Irvine Law School

“We are in a dilemma where states are the most likely parties to abuse their power to suppress free speech, but we have to rely on the states to suppress hate speech according to Article 20 of ICCPR.”  Apparently, for Yuval Shany, the main organizer of the event, co-regulation was the method that resolves that dilemma and also commandeers the forces of the platforms and their technologies to find a scalable solution in the combat against hate speech.  He was not subtle about the objective of the event: “to make platforms do more in abating online hate speech.”

The event opened with a typology of various forms of “hate speech”.  The legal definition of hate speech is speech likely to incite discrimination or violence against religious, national, or racial minorities.  However, because private platforms exercise their freedom not to host certain lawful forms of speech, their definitions of hate speech can be broader.  For instance, “trolling” is condemned not just for inciting third parties to action but for the mental stress it inflicts on listeners.  Twitter took the liberty of walking down this road, boldly stating that ‘we do not have a hate speech policy but a “hateful conduct” policy’, thereby censoring many more comments that are hateful to view than comments likely to cause violent or discriminatory hate.  For instance, a Tweet stating “In light of the leaked Trump videos, it is pretty clear that I need to castrate all men” was deemed to be in violation of their Terms of Use. If you ask me, given that the speaker advocates violence supposedly by a minority against the ruling majority (men), such a statement does not make that violence even incrementally more likely to materialize.

All in all, the matrix for measuring hatefulness is developing and becoming more nuanced in the back offices of the platforms, as the main presentations by Twitter, Facebook, and Google showed, including (1) distinctions between content targeting specific individuals and content targeting a group, (2) between content directed against majority groups and content directed against minority groups, and (3) between content causing mental distress to listeners and content advocating harm against minorities. Depending on how we define each category of hate speech, the remedies must differ: if we are to abate a certain Tweet not for its advocacy of violence but for the mental damage it inflicts on its targets, taking down the content may be a disproportionate remedy, and the content could instead be blocked from the listeners’ view, for instance.
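To make the idea of such a matrix concrete, here is a minimal, purely illustrative sketch (in Python) of how those three distinctions could be mapped to proportionate remedies. None of the field names, thresholds, or remedies below come from the platforms’ presentations; they are assumptions added for illustration only.

from dataclasses import dataclass

@dataclass
class Assessment:
    targets_individual: bool  # (1) aimed at a specific person rather than a group
    targets_minority: bool    # (2) aimed at a minority rather than a majority group
    advocates_harm: bool      # (3) advocates violence/discrimination rather than merely causing distress

def remedy(a: Assessment) -> str:
    """Map an assessed piece of content to a (hypothetical) proportionate remedy."""
    if a.advocates_harm and a.targets_minority:
        return "remove"            # incitement against a vulnerable group: take it down
    if a.advocates_harm:
        return "restrict_reach"    # harmful advocacy, but lower likelihood of real-world harm
    if a.targets_individual:
        return "hide_from_target"  # distress-based harm: shield the listener rather than delete the speech
    return "label_and_keep_up"     # hateful in tone, but below the harm threshold

# Example: the “castrate all men” Tweet discussed above advocates harm but targets
# the majority, so under this sketch it would be throttled rather than removed.
print(remedy(Assessment(targets_individual=False, targets_minority=False, advocates_harm=True)))

One could of course weight the dimensions differently; the point is only that once the categories are made explicit, the remedy can be made proportionate to the category rather than uniform.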

Here are some recommendations to the platforms on the process of developing that matrix:

1.  Values and norms are different.  All platforms should serve the same values but can innovate on the norms by which they serve them.  For instance, Twitter’s decision to make its platform more “family friendly” is perfectly fine.

2. Distinguish “discriminatory harms” from non-discriminatory harms. The main value platforms should seek to achieve is the prevention of discrimination.  Copyright violation, obscenity, and gambling information are all good reasons for expecting the platforms to do more, but the real dissatisfaction comes from the question of whether the platforms are somehow helping people intensify discriminatory tendencies.

3. Do not give up on the ‘harm-based’ approach. The temptation is to censor all content showing hate instead of only the content likely to cause violence or discrimination against vulnerable minorities.  However, hate sometimes has a reason, and anger is often the driver of progress.  Platforms can narrow their censoring scope by staying close to the harm-based approach, which can be expanded through social science studies.  A 2002 study by Laura Leets shows that being exposed to hate speech has debilitating effects on the listener’s cognitive abilities.  (Indeed, the 1954 Brown v. Board of Education decision of the U.S. Supreme Court was influenced by Clark’s black-doll and white-doll study, making it the first hate speech case ever.)  However, one should also be mindful of the social science research showing that bias can be treated (for instance by “analogic perspective taking,” as in a 2015 David Broockman study).

4.  Do not kill the messengers (the only advice to state actors). The temptation is to define hate speech and hold platforms responsible for not removing it.  China, Korea, Malaysia, and Indonesia all did what seems innocent: hold platforms responsible for not removing unlawful content.  However, such rules end up encouraging broad censorship of lawful content, as the platforms prefer to err on the side of taking content down.  So Germany’s Network Enforcement Act (NetzDG) and Australia’s recent social media law both fail in that regard.  If you really want to encourage platforms to remove as much unlawful content as possible, stick to the gold standard: an intermediary liability safe harbor fashioned after DMCA Section 512 or Article 14 of the e-Commerce Directive.

5. Allow user literacy to work.  Some users may want to stay out of the filter bubble and echo chamber.  Let them do so through technology.

6. Broaden transparency.  Transparency has so far meant reporting on what has been taken down.  Instead, broaden it to report on what remains up or what gets posted, so that people have more information on the influence of hate speech.
