K.S. Park at 9:30 am December 15, 2021 Sri Lanka Time
The Internet’s physical structure, based on TCP/IP, allows all people to communicate with one another without gatekeepers in the middle. Social media facilitates that direct communication by providing the hubs where people find and talk to one another. Because social media communication is based more on relationships than on the power of content, it is more subject to echo chamber and filter bubble effects.
Disinformation and hate speech are the two types of bad speech that go viral through these effects and prompt calls for social media regulation. However, any attempt to regulate disinformation should comply with established human rights standards. The jurisprudence on the crime of “false news” counsels moderation toward any law attempting to punish speech simply for being false or simply for undermining a vaguely defined “public interest” or “peace”. The 2016 Trump election caused much concern when observers noticed “fake news” going viral beyond legitimate news and many Trump voters believing clearly false electoral information. However, much of the harmful electoral information (e.g., the claim of Obama’s Kenyan birth) originated from politicians, not from social media. Likewise, hate speech regulation should answer some of the definitional questions about hate speech, such as “Should hate speech against a majority be regulated?” and “Should verbal discrimination be regulated when it does not incite violence or discrimination?”
All forms of social media governance should preserve the civilizational significance of the Internet: the fact that it gives powerless people the same power of information and communication, without gatekeepers (like TV and newspapers), as big companies and governments enjoy, and therefore contributes to equality and democracy. Recognition of this value resulted in the intermediary liability safe harbor as a human rights standard, whereby intermediaries are protected from liability for user content they are not aware of, as in the EU’s E-Commerce Directive of 2000 and the US’s DMCA Section 512.
A gray area remained: what to do with content whose existence intermediaries have been put on notice of, but not its illegality? Germany’s 2017 NetzDG exploited this gray area to impose intermediary liability for noticed content, and several Southeast Asian governments followed suit with their own adaptations (Malaysia, Viet Nam, Indonesia, the Philippines). This is a problem because intermediaries will engage in self-censorship, erring on the side of deleting, rather than retaining, content that they have been given notice of but whose illegality they are unsure of.