T20 Indonesia workshop on Healthy Online Sphere Through Digital Literacy

Aug 12, 2022 | Free Speech, Open Blog

https://www.linkedin.com/posts/t20solutions_creating-a-healthy-online-sphere-through-activity-6962265325207773185-lWnH/?utm_source=li_share&utm_content=public_post&utm_medium=g_dt_web

Date: Tuesday, August 11th, 2022

Time: 07.00 PM – 09.00 PM (Western Indonesia Time – Jakarta)

Media: Zoom Virtual Conference

As the internet has become easier to use and a fixture of everyday life, public concern has grown about illegal online content. Information now spreads so quickly, and is so widely accessible, that controlling the circulation of harmful and prohibited material has become increasingly difficult.

Governments, industry, and civil society around the world share the objective of protecting the public from harmful content through a shared-responsibility framework. Yet they differ on what counts as prohibited content and how to identify it. Ambiguity in the definitions of harmful and prohibited content often leads to disputes, misunderstandings, and a perceived lack of action in the eyes of the public (and often to hefty fines for digital platforms). Moreover, vast amounts of new content are uploaded every day, which calls for an adaptive way of defining prohibited content and assessing its implications for access to information.

An effective content moderation framework also requires a sound and consistent process founded on a proper legal basis. Too often, content regulations focus solely on shorter turnaround times at the expense of proper legal standing, at the risk of harming users’ access to information. The discussion is intended to identify and propose an appropriate framework for building a formal and transparent process between governments, CSOs, and platforms.

This workshop aims to provide a thought-leadership discussion venue at the T20 on creating a healthy online sphere by promoting good content moderation through stronger digital literacy skills. Specifically, the discussion will focus on how the boundaries of content moderation should be defined and how transparent, reliable due process can be ensured, as part of and in the margins of the T20 Policy Areas.

KS Park’s comments are summarized below:

New technology usually attracts resistance from progressive activists because, being “new”, it is usually controlled by big companies and used to benefit the rich, worsening inequality. Yet when the Internet spread around the world, many progressive activists founded digital rights organizations dedicated to protecting it, such as SAFEnet in Indonesia and Open Net in Korea.

Why? The Internet gave powerless people the same power of mass communication that had been enjoyed only by big companies and governments through newspapers and television, making human societies more equal. To quote the 2012 Constitutional Court of Korea: “speech in the Internet, rapidly spreading and reciprocal, allows people to overcome the economic or political hierarchy off-line and therefore to form public opinions free from class, social status, age, and gender distinctions, which make governance more reflective of the opinions of people from diverse classes and thereby further promotes democracy.” Even the Chinese government, a frequent target of human rights advocacy, uses Sina Weibo to crowdsource information from citizens to fight corruption among local officials.

This attitude was adopted by the United Nations General Assembly and the Human Rights Council, which have repeatedly resolved that “what is protected offline should be protected online as well.” Online life has become pervasive. Even though online speech travels faster, farther, and to many more people than offline speech, we should not set a higher bar of regulation for it; otherwise we will lose the liberating/equalizing potential of the Internet.

Now, the offline world is disrupted by many bad actors and bad deeds. But that is the price of giving people freedom: we are free to act first and take responsibility afterwards. The same approach should be taken toward the online world.

This principle was adopted into the intermediary liability safe harbors of the US and Europe: no platform should be held responsible for content it is not aware of; otherwise platforms will shut down valuable spaces for discussion and information, or subject everything coming online to prior censorship. One corollary of this rule is the ban on general monitoring. If platforms were required to vet the legality of all content coming online, the online world would end up tailored to the whims of platform operators and would lose the liberating/equalizing civilizational significance of the Internet.

In relation to the ban on general monitoring, I am concerned that MR5 puts preventive obligations on platform operators for future, as-yet-unknown illegal content. Korea has a similar law, but only for the narrow category of illegally created sexual videos, and even that caused controversy and push-back: every video uploaded in Korea is held for a ten-second delay during which it is compared against a database of illegal material pre-curated by the government. MR5 imposes the same preventive obligations on far broader categories of information. Those categories originate from other provisions of MR5 that emulate Germany’s NetzDG, which imposes takedown obligations for noticed content; but NetzDG’s obligations are limited to content defined as illegal under the German Criminal Code (no other law) and to offenses such as hate speech and the condoning of genocide, whose illegality is more or less apparent on the face of the content. (Problematically, NetzDG also covers defamation, which has become the lion’s share of takedown requests under the system.)
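To make the mechanism concrete, below is a minimal sketch of how such a pre-upload screening filter works in principle. It is illustrative only: the blocklist, the function names, and the use of exact SHA-256 hashes are assumptions (deployed systems typically use perceptual fingerprints that survive re-encoding), not a description of the actual Korean or Indonesian systems.

```python
import hashlib

# Hypothetical blocklist of fingerprints pre-curated by a regulator.
# Illustrative only: real filters use perceptual hashes (robust to
# re-encoding), not exact cryptographic hashes like SHA-256.
BLOCKED_FINGERPRINTS = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def fingerprint(video_bytes: bytes) -> str:
    """Compute an exact-match fingerprint of the uploaded file."""
    return hashlib.sha256(video_bytes).hexdigest()

def screen_upload(video_bytes: bytes) -> bool:
    """Return True if the upload may be published.

    Models the scheme described above: each upload is held briefly
    while its fingerprint is checked against the pre-curated database;
    a match blocks publication before anyone can view the content.
    """
    return fingerprint(video_bytes) not in BLOCKED_FINGERPRINTS

if __name__ == "__main__":
    sample = b"example video payload"
    print("publish" if screen_upload(sample) else "block")
```

The policy concern is visible in the structure itself: the filter must inspect every upload before publication, which is precisely the kind of general monitoring that the safe harbor principle forbids.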

Another concern is MR5’s provisions on government access to traffic information and subscriber information. After many discussions, we now know that the answer to disinformation is not less speech but more speech that can counter it. Singapore’s “fake news” law has been criticized for suppressing media literacy: people should be allowed to ask questions and raise doubts without being 100% accurate, and a law that regulates speech merely for being false suppresses such discourse. Only speech whose falsity is likely to cause harm (such as defamation of ascertained individuals) should be regulated. Furthermore, strengthening digital literacy requires protecting anonymity online, especially because vulnerable classes, genders, and races need anonymity to engage in public discourse. Yet MR5 allows warrantless disclosure of traffic and subscriber information to government authorities without any judicial supervision. Such disclosure carries a high risk of data breaches that would expose online users’ identities and deprive them of the right to anonymous speech.

In the Q&A, there was a question about what can be done to counter the abuse of algorithms or AI for disinformation. A classic example is Microsoft’s chatbot Tay, which quickly descended into racist and sexist remarks because its machine learning absorbed the biases in human tweets and writings. We need to cleanse training data of human bias, and to do that at scale we need to use AI itself; we call this Algorithmic Affirmative Action. There was also a question about deep fakes and the need to regulate them. Deep fakes are visual manipulations, but they can serve good purposes in social discourse, medicine, and education. A Korean bill criminalizing deep-faking another person against his or her will did not pass. Visual manipulation itself is not the harm: as long as it does not defame or defraud an ascertained person by spreading falsities, imagining aloud about another person should be allowed.
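As an illustration of the “use AI to cleanse AI training data” idea, here is a toy sketch. Everything in it is an assumption for illustration: the keyword heuristic stands in for a trained toxicity or bias classifier, and the names are invented; it shows the shape of the pipeline, not any production system.

```python
# Toy sketch: screen a training corpus for biased examples before
# fine-tuning a conversational model (the failure mode behind Tay).
# The keyword heuristic below is a stand-in for a learned bias or
# toxicity classifier -- itself the "AI cleansing AI training data" idea.

BIAS_LEXICON = {"slur1", "slur2"}  # placeholder tokens, not real terms

def looks_biased(text: str) -> bool:
    """Crude stand-in for a trained bias classifier's verdict."""
    tokens = set(text.lower().split())
    return bool(tokens & BIAS_LEXICON)

def cleanse(corpus: list[str]) -> list[str]:
    """Keep only the examples the screening model does not flag."""
    return [doc for doc in corpus if not looks_biased(doc)]

raw_corpus = [
    "have a great day",
    "this example contains slur1 and would be dropped",
]
print(cleanse(raw_corpus))  # -> ['have a great day']
```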
