ABA 2023: Effective Response to Disinformation – Platform Liability v Platform Transparency

Nov 12, 2023 | Free Speech, Open Blog

On October 11, 2023, K.S. Park spoke at the American Bar Association’s International Law Section Annual Conference on a panel titled “Regulating Fake News v Protection Free Expressions: Building Healthy Information Ecosystem for the 21st Century”, as follows:

Who uses the word ‘fake news’ most in the US? Who put the word in such high currency? Trump. He claims to be the watchdog against the very phenomenon of which he is the main source. This shows that we should not be easily swayed by the disinformation fad.

Criminalizing falsity

Within international human rights law, it has been established beyond doubt that criminalizing speech for being false violates the standard.  The highest courts of Canada (Zundel), Zimbabwe, Antigua and Barbuda, and most recently South Korea (2010) have found “false news” crime laws unconstitutional.  The reasoning is as follows.

Firstly, punishing someone for falsity will only delay the advent of real truth. Flat earth activists held their global conference in Seoul, causing no harm. Round earth activists were burned at the stake by the Catholic Church hundreds of years ago, only to delay the advent of heliocentrism by 200 years.

Secondly, exaggeration is sometimes important for civic discourse and “necessary in a democratic society”. Environmental activists may exaggerate facts to build a movement.

Thirdly, and most importantly, “false news” crimes have historically been used by authoritarian governments to suppress truthful critiques of their regimes and the critics who voiced them.  The depth of abuse is mind-boggling.  In the 1960s, the military junta that took over South Korea by a coup changed the constitution to give legislative powers to the president. The first law passed under the new constitution, named Emergency Decree No. 1, banned and punished fake news about the new constitution, silencing critiques of the coup d’état.  Only now, 50 years later, are the highest courts of South Korea compensating the thousands of people jailed for protesting against the coup and the constitution.  Nor is this just a matter of the past.  During the recent pandemic, the Malaysian government declared a State of Emergency under which the executive branch was empowered to pass laws, and again the first law passed under that State of Emergency, also named Emergency Decree No. 1, punished “fake news” about the State of Emergency and COVID-19.

Taking down false info

As criminalization has become frowned upon, the governments of the world have moved to administrative takedowns to respond to disinformation.  However, administrative, non-judicial censorship has been found unconstitutional by the highest courts of France twice (the three-strikes copyright law and the Avia hate speech law), the Philippines (the Cybercrime law), Spain (public health authorities’ takedown of abortion information websites), Turkey (the entire administrative censorship system, in a decision that forced all online censorship decisions to go through judicial approval before implementation), and the United States (Bantam Books v. Sullivan).

The reasons are similar. Firstly, there is the pro-incumbent bias of the administrative bodies conducting the content moderation. It is for this reason that governments around the world have stayed away from administrative censorship bodies except in broadcasting, where broadcasters are placed under public interest obligations in exchange for the grant of state monopolies over bandwidth and cable conduits. It is for the same reason that, even within broadcasting, the FCC dropped the Fairness Doctrine in the 1980s.  Indeed, the sources of the most harmful disinformation are governments.

The top subjects of study by disinformation researchers are disinformation campaigns coming from China, Russia, and Myanmar, which have specific victims such as Taiwan, Ukraine, and the Rohingya. Administrative censorship is sometimes commandeered to silence dissenting voices and manipulate public opinion, and such censorship is therefore part and parcel of government disinformation campaigns.

Administrative censorship of online content has become law in Indonesia, Myanmar, Viet Nam, Thailand, and Singapore, and reveals the room for such abuse.  In some of these countries, the standard for takedown is whether the posting is “Against the State” or “Insulting the State”.  You can see how open to abuse such a standard is.

South Korea has had the same problem, where the standard for taking down online content is “whether it is necessary for nurturing sound communication ethics”.  Whose ethics?  In the digital age, the problem becomes more serious as more and more legacy media outlets go online.  Just a couple of weeks ago, the South Korean government announced a “one strike out” policy aiming to shut down a media house for issuing even one false media report.  This announcement was triggered by a news report from an online independent media house that questioned the propriety, and the implications of bribery, of the current president’s past conduct as a prosecutor.

Secondly, non-judicial interventions cause “chilling effects”.  Administrative bodies in other domains routinely act first and alone, before judicial bodies act. However, in the area of free speech, the doctrine of “chilling effects” has condemned and prohibited situations where lawful speech is voluntarily withdrawn by speakers because of the burden of having to prove that their speech is lawful. Administrative censorship, once imposed on a posting, can be reversed only by filing a lawsuit, and therefore suffers from exactly that problem. Judicial branches act more slowly, of course, but as the Philippine Supreme Court ruled, an online content takedown can be equated to a search and seizure, and courts can issue warrants within hours for immediately dangerous speech.

Killing the messengers? 

Other governments have moved to intermediary liability to respond to disinformation. I believe they are generally, and only generally, headed in the right direction, since disinformation has become a problem because of technology and on digital platforms: information, true or false, travels wider, faster, and further. Also, platforms are not bound by international human rights law on free speech but only by the UN Guiding Principles on Business and Human Rights. Governments cannot restrict or criminalize speech for being false, but platforms have wiggle room to restrict, deamplify, or demonetize speech for being false without violating international law.

Platforms can also go 100% harm-based without “siloing” or “compartmentalizing” moderation categories: they do not have to check boxes on all elements of a moderation category and do nothing if a posting fails to meet 100% of any single category.  They can apply a “sliding scale” that focuses only on the harm instead of bureaucratically requiring all boxes of a moderation category to be checked.  What is more, they can act on content that looks innocent on its own and yet acts as a link in a chain of causation through the existing discriminatory structure of society.

However, imposing liability on platforms for failure to take down specific content basically sets up administrative censorship and repeats its problems. This is what has happened in Indonesia, Myanmar, Viet Nam, Thailand, and Singapore, where platforms can be blocked from the respective countries for failure to take down specific content.

Also, the intermediary liability safe harbor has already been established as a human right. True, it bars liability only for content that platforms are not aware of. However, specific-content liability will make platforms err on the side of taking content down and complying with government requests. It still privileges the administrative body with the power to force the takedown of specific content, and thereby repeats the problem of censorship.

Now, I think there is a better regulatory approach than criminal or intermediary liability: transparency regulation.  There are trusted flaggers, but the level of communication between civil society and the platforms is still juvenile, largely because platforms usually do not disclose specific moderation standards or a sufficient number of similar cases.  In order to go 100% harm-based, there must be better communication between CSOs and platforms in identifying harmful content.  Transparency by platforms can help CSOs identify content that is likely to be taken down and advise on room for improvement in both the wording and the implementation of community guidelines.

The other reason for transparency is to dilute the inevitable conflict of interest between information integrity and the business model of platforms. Platforms sell advertising, which accounts for 90% of their revenues, and therefore have a financial incentive to retain postings that, true or not, are sensational and attract public attention, attributes that unfortunately often coincide with disinformation and hate speech.  Transparency about this relationship will provide the basis for better self-regulation of content.
