1. Introduction
Digital platforms are software systems that allow an unspecified number of people to share information with one another freely, and digital platform regulation is therefore, at bottom, regulation of those who operate such systems, namely information intermediaries. Like all companies, information intermediaries are subject to a wide range of general regulations, but this article focuses on regulation tied to the defining characteristic of digital platforms: the activity of mediating information for the general public.
When illegal activity takes place over the internet, attempts are repeatedly made to hold platforms responsible for hosting the information through which that activity is carried out, yet platform regulation has not spread as widely as one might expect. The reason is the principle of limited intermediary liability, which has become part of international human rights standards.
2. Intermediary Liability Safe Harbor Principle
The life of the internet lies in bringing every individual into public or mass communication through extremely decentralized and individualized means of communication. Here, public or mass communication means communicating with the general public all at once. The internet is a forum for public communication where everyone can view the content they wish to see without anyone else's permission, and where everyone can post content without anyone else's permission. In other methods of public communication, such as broadcasting and newspapers, individuals not selected by professional journalists are inevitably excluded from public communication; the internet is different. Of course, the internet has other functions such as email, chat, and cloud services, but the reason countries around the world make special efforts to protect it is that it draws an incredibly diverse range of individuals into public communication, thereby advancing politics, society, and the economy in its own unique way.
Korea’s Constitutional Court has also held that the internet “overcomes hierarchical structures based on economic power or authority, forming public opinion free from class, status, age, gender, etc…. equally reflecting the people’s will and further developing democracy” (2010Hun-Ma47, Constitutional Court decision on internet real-name system), and stated that it is “a medium that anyone can easily access at low cost and is the most participatory medium… where the possibility of fairness being undermined due to differences in economic power is significantly low” (2007Hun-Ma1001, Public Official Election Act Constitutional Court decision).
As long as individuals are free to communicate without anyone else's permission, illegal activities such as defamation, copyright infringement, and obscenity are bound to occur on the internet. Protecting the life of the internet therefore requires two determinations. First, regulation must come after the fact rather than through prior blocking. To block illegal information in advance, someone must censor all information beforehand and allow only the information that passes review to appear on the internet; such "general monitoring" means the destruction of the freedom to engage in public communication without others' permission, which is the life of the internet, because information would then appear online only with someone else's approval. Second, when platform operators that mediate information are held liable for illegal information, they should be liable only when they are aware of that information. If information intermediaries are held liable even for illegal information they were unaware of, businesses will try to pre-censor everything uploaded to their servers, and again the "freedom to engage in public communication without others' permission" will be destroyed.
Foreign laws codify precisely these determinations. Section 230 of the U.S. Communications Decency Act and Section 512 of the U.S. Copyright Act (DMCA), Article 14 of the EU E-Commerce Directive, and Article 3 of Japan's Provider Liability Act all contain provisions that prevent courts from holding information intermediaries liable for information they are unaware of, and no country imposes an obligation to prevent the circulation of illegal information. In addition, Article 15 of the EU E-Commerce Directive explicitly prohibits EU member states from imposing general monitoring obligations on information intermediaries. Europe's most recent legislation, the Digital Services Act, carries Articles 14 and 15 of the E-Commerce Directive forward in this regard.
3. The Situation in Korea
A. Limitation of Liability
What about Korea? Article 44-2 of the Information and Communications Network Act and Article 102 of the Copyright Act appear to have adopted the liability limitation provisions of foreign countries. There is no question that Korea attempted to fully adopt other countries’ intermediary liability safe harbor provisions through Article 102.
| Law | Knowledge condition | Liability safe harbor provision | Liability-imposing provision |
| --- | --- | --- | --- |
| Korea: Copyright Act Articles 102 and 103 | When the information intermediary is aware of, or has been notified of, the infringing activity | Not liable if it immediately removes the material or disables access to it | Shall immediately remove the material or disable access to it upon request (Article 103(2)); liability is exempted if it does so immediately (Article 103(5)) |
| EU: E-Commerce Directive Article 14 | When the information intermediary is aware of circumstances indicating the infringing activity | Not liable if it expeditiously removes the material or disables access to it upon notification | None |
| US: Digital Millennium Copyright Act Section 512 | When the information intermediary is aware of circumstances indicating the infringement or has been notified of the infringement | Not liable if it expeditiously removes the material or disables access to it upon notification | None |
| Japan: Provider Liability Act Article 3 | When the information intermediary knew or could have known of the rights violation | Not liable if taking transmission-prevention measures was technically impossible | None |
The problem is the existence of Articles 103(1) and (2). While the laws of other countries all consist of a single provision or sentence, Korea has Article 103 as an independent provision in addition to Article 102. Articles 103(1) and (2) impose an obligation to delete immediately upon receiving a report of infringement, and as shown in the table above, no law in any other country has provisions with corresponding content. Other countries’ laws do not create an “obligation” to delete or block reported posts, but rather such deletion or blocking merely becomes a condition for exemption from joint liability for providing infringing material. In other words, they provide a “motivation” to delete or block reported material. Korea’s law appears to impose an “obligation” on information intermediaries to delete or block all reported posts due to the separate existence of Article 103.
The difference between “obligation” and “motivation” is significant. When only “motivation” is provided, information intermediaries have the freedom not to delete or block posts that they judge to be legal. It grants authority to maintain posts “if the information intermediary wishes.” In contrast, when an “obligation” is imposed, information intermediaries must delete or block even posts that they believe have no possibility of infringement.
Of course, one might think that platform operators will ultimately face civil or criminal liability only when the post itself is illegal, and that there is nothing wrong with a command to "delete illegal posts that are reported." In practice, however, it is very difficult for a platform operator to determine whether a post infringes copyright. If it deletes the post, its legal exposure toward the poster is not significant, because most platform operators secure, through their post management policies, the civil-law authority to take down posts at their discretion even when the posts are not illegal. But if it does not delete the post, it becomes a joint tortfeasor in the copyright infringement. Because of this asymmetric distribution of liability, even when a platform operator judges that a reported post does not infringe copyright, it has every incentive to delete or block the post rather than risk a court's final determination that it does. The provision thus chills platform operators into deleting or blocking even posts they judged to be legal.
Of course, whether this provision exists or not, platform operators are always chilled by the possibility that their judgment might be wrong and thus always have motivation to delete or block reported posts. However, without this provision, platform operators would only be liable for mediating information they knew or should have known was illegal under general tort law or aiding and abetting principles, and the chilling effect would be limited. The above provision serves as grounds for imposing liability for not ceasing to mediate information after receiving a report, even when they were unaware of its illegality. Therefore, this provision should be seen as creating a chilling effect that causes them to delete or block information they judged to be legal, ultimately resulting in the deletion or blocking of many actually legal posts.
A more serious problem is that, due to the conventional wisdom that “the Copyright Act fully adopted the advanced intermediary liability safe harbor provisions of the United States through the 2011 amendment,” the obligation to delete or block reported material in Articles 103(1)/(2) has been replicated in other legal systems such as defamation and privacy violations. This is Article 44-2 of the Information and Communications Network Act, which imposes an obligation on information intermediaries to delete or block illegal posts that have been reported for infringement.
Article 44-2 (Request for Deletion of Information, etc.) ① When rights of others, such as privacy invasion or defamation, are infringed by information provided through information and communications networks with the purpose of disclosure to the public, the injured party may request deletion of that information or posting of rebuttal content (hereinafter “deletion, etc.”) from the information and communications service provider that processed the information by demonstrating the fact of infringement. <Amended March 22, 2016>
② When an information and communications service provider receives a request for deletion, etc. of the relevant information under paragraph (1), it shall take necessary measures such as deletion or temporary measures without delay and immediately notify the applicant and the information poster. In this case, the information and communications service provider shall ensure that users are aware by posting the fact that necessary measures have been taken on the relevant bulletin board. [Omitted]
④ Notwithstanding the request for deletion of information under paragraph (1), when it is difficult to determine whether rights have been infringed or a dispute between interested parties is expected, the information and communications service provider may take measures to temporarily block access to the relevant information (hereinafter “temporary measures”). In this case, the period of temporary measures shall be within 30 days.
Although paragraph (4) merely grants discretion to take temporary measures, it still produces a chilling effect: platform operators end up deleting or blocking, for up to 30 days, even posts they judged to be legal.
In particular, the Constitutional Court of Korea interpreted the temporary-measure obligation as applying even to information that merely has "the possibility" of being defamatory or privacy-infringing, thereby reading it even more broadly than the statutory text (Constitutional Court Decision of May 31, 2012, 2010Hun-Ma88). To bring these takedown obligations into line with international standards, the Copyright Act should be revised first. This is not difficult. For example, Articles 103(1) and (2) could be amended as follows, so that instead of imposing obligations they merely set out the procedure for obtaining the immunity of Article 102.
| Current Provision | Amendment Suggested by Open Net |
| --- | --- |
| (2) Where an online service provider is requested to suspend the reproduction or interactive transmission under paragraph (1), he/she shall immediately suspend the reproduction or interactive transmission of such works, etc. and notify a claimant to the right of such fact: Provided, That an online service provider referred to in Article 102 (1) 3 or 4 shall also notify the reproducer or interactive transmitter of such works, etc. | (2) In order to be exempted from liability under Article 102 (1), an online service provider shall immediately suspend the reproduction or interactive transmission of such works, etc. and notify a claimant to the right of such fact: Provided, That an online service provider referred to in Article 102 (1) 3 or 4 shall also notify the reproducer or interactive transmitter of such works, etc. |
The Network Act should likewise be given a liability limitation provision similar to Article 102 of the Copyright Act, granting immunity if previously hosted content is taken down immediately upon receipt of a rights infringement report.
If, rather than aligning with international standards, the legislature actually intends to codify an "obligation to delete reported illegal material," then, to eliminate the chilling effect described above, the scope of the deletion or blocking obligation should at least be limited to cases where the platform operator "knows of the illegality" of the material.
B. General Monitoring Obligation
To align with international standards, the provisions imposing "general monitoring obligations" also need to be revised. Article 104 of the Copyright Act and Article 22-3 of the Telecommunications Business Act impose obligations on webhards to "block" or "prevent" the distribution of specific illegal information (posts infringing the copyrights in specified works and obscene materials, respectively), and Article 17 of the Act on the Protection of Children and Youth from Sexual Abuse imposes an obligation on all information intermediaries to "stop" or "prevent" the distribution of child pornography. In addition, the so-called "Nth Room Law" (Article 22-5(2) of the Telecommunications Business Act), which came into effect in 2020, obliges platforms to monitor, through filtering, whether video files uploaded by users constitute illegal recordings and the like.
These constitute general monitoring obligations of the kind that other countries discourage or prohibit. Even if the targeted content is narrow in scope or gravely harmful, platforms can identify and preemptively block it only by inspecting every piece of information, one by one.
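For illustration only, the following minimal sketch (in Python, with hypothetical names and an invented hash database) shows why a filtering obligation of this kind amounts to general monitoring in practice: to catch the handful of prohibited files, the platform must fingerprint and check every upload from every user, not just those already suspected of wrongdoing.

```python
# Hypothetical sketch of hash-based upload filtering; names and the hash
# database are invented for illustration, not taken from any actual system.
import hashlib

# Hypothetical set of fingerprints of content already identified as illegal
# (e.g. hashes circulated by an agency or by rights holders).
KNOWN_ILLEGAL_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def fingerprint(data: bytes) -> str:
    """Compute a SHA-256 fingerprint of an uploaded file."""
    return hashlib.sha256(data).hexdigest()

def screen_upload(data: bytes) -> bool:
    """Return True if the upload may be published, False if it must be blocked.

    Note that this check must run on *every* upload, regardless of who the
    uploader is or what the file contains -- which is precisely what makes
    filtering a form of general monitoring.
    """
    return fingerprint(data) not in KNOWN_ILLEGAL_HASHES

# Every single upload passes through the filter before publication.
uploads = [b"holiday video", b"lecture recording", b"news clip"]
publishable = [u for u in uploads if screen_upload(u)]
```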
4. International Trends Since 2016
A. Limitation of Liability
After Trump was elected U.S. President in 2016, interest in fake news and disinformation surged. Research findings served as catalysts: for example, that fake news (posts that were not only false in content but also disguised as articles from mainstream media outlets) spread more widely and quickly among Facebook users during the election period than real news articles, or that 46% of voters who voted for Trump believed that the Democratic National Committee was actually operating a child sex trafficking business. After the election, as Trump's disruptive foreign and racial policies sparked controversy and at times actually incited violence, the sense of crisis intensified that the marketplace of ideas was not functioning properly and that the history of a superpower, and indeed world history, could actually be altered.

This sense of crisis was felt even more acutely in Europe, prompting efforts to regulate so-called "fake news" and various kinds of dangerous, violence-inciting information. In Germany, the Network Enforcement Act (NetzDG), which came into full effect in January 2018, imposed obligations to delete "manifestly illegal information" within 24 hours and all other illegal information within 7 days, in relation to hate speech and the various types of expression defined as illegal by provisions of the Criminal Code.

In Australia, the Criminal Code was amended in 2019 to require the expeditious removal and blocking of abhorrently violent material, that is, live-streamed content depicting murder, terrorism, rape, or torture; such content must not be recklessly allowed to remain, and a mere takedown request from the administrative agency (the eSafety Commissioner) creates a presumption of recklessness.

In France, the "Avia Law," an internet hate speech prohibition bill, passed the National Assembly in July 2019. It imposed an obligation on platform operators such as social networking services and search engines to delete "manifestly" illegal content within 24 hours of receiving a report, and penalized operators who failed to do so. Manifestly illegal content included hate speech inciting discrimination, hostility, or violence on the basis of race, religion, ethnicity, gender, sexual orientation, or disability; discriminatory insults against such groups or individuals; Holocaust denial; and sexual violence. In addition, content glorifying or inciting terrorism and child pornography had to be deleted within one hour of a request from the administrative authorities.
These three laws are similar to Article 44-2 of Korea's Network Act in that they impose deletion and blocking obligations for reported illegal content, but they can be considered more stringent in that they impose administrative penalties for failure to delete or block.
Laws imposing deletion and blocking obligations for reported illegal content are spreading. Vietnam passed cybersecurity legislation in 2022 that would require the deletion and blocking of "propaganda against Vietnam" and similar content within 24 hours of a report, and Indonesia, through MR5, already required in 2021 the deletion and blocking of various kinds of "prohibited information" within 4 or 24 hours of a government agency's request. Myanmar's 2022 draft Cybersecurity Law likewise contains provisions requiring the "prompt" deletion of various illegal information upon government request. Southeast Asian legislatures cite Germany's NetzDG as a comparative precedent in their legislative debates, but their laws differ in that requests from government agencies play the central role. The Philippines' 2019 Anti False Content Bill likewise aims at strict content regulation by administrative agencies.
These NetzDG-style regimes, however, conflict with the principle of limited intermediary liability. On June 18, 2020, the French Constitutional Council (Conseil constitutionnel) struck down the internet hate speech prohibition law as unconstitutional. First, as to the provision requiring deletion of terrorist content or child pornography within one hour, the Council found it unconstitutional on the grounds that the administrative authorities alone decide whether content must be deleted, and platform operators can neither challenge that decision nor obtain court review. Second, as to the provision requiring deletion of "manifestly illegal content" within 24 hours, the Council likewise found it unconstitutional, noting that the process proceeds without any court judgment and that the volume of reports may make a determination within 24 hours impossible. The Avia Law does not appear to violate the original principle of limited intermediary liability insofar as it imposes liability only for posts reported by administrative agencies or private individuals, but it nonetheless forces private censorship by information intermediaries in that it holds them responsible for posts whose existence they know of but whose illegality they have not recognized. The Constitutional Council's objection to entrusting illegality determinations solely to the administrative authorities appears to be a manifestation of the constitutional principle that administrative agencies should not intervene in the realm of freedom of expression, while its objection to entrusting those determinations solely to platform operators is best read as concern that lawful posts will be blocked as well.
B. General Monitoring Obligation
New trends are also emerging regarding general monitoring obligations. In Eva Glawischnig-Piesczek v. Facebook Ireland Limited (C-18/18), the Court of Justice of the European Union held that, with respect to information that a court has found defamatory, imposing an obligation to seek out specified information, such as information with identical wording or equivalent meaning, does not constitute a general monitoring obligation. Contrary to previous interpretations, it thus permitted the monitoring of all content for the purpose of removing and blocking specific content.
Subsequently, in Poland v. Parliament and Council (C-401/19), the Court, citing the Eva Glawischnig-Piesczek judgment, held that Article 17 of the EU Copyright Directive, which requires platform operators to filter content on the basis of information provided by copyright holders, does not impose a general monitoring obligation and therefore does not violate the EU Charter of Fundamental Rights.
However, the limits of these judgments must be kept in mind. In the first case, the content monitoring obligation was permitted only with respect to information that a court had already ruled defamatory. In the second, the EU Copyright Directive is legislation of equal status to the EU E-Commerce Directive, so the Court of Justice of the European Union's conclusion is understandable to some extent; it remains to be seen how the European Court of Human Rights, applying human rights norms that rank above such legislation, will rule.
5. Conclusion
In the vast ocean of information that is the internet, the judgment of which information to draw up is left to each individual. Just because I post something on the internet does not mean everyone will see it; only those who want to see it will see it. Previews provided through search further strengthen that right to choose. In the era of broadcasting and newspapers, the freedom to choose what to see existed only at the level of the channel; in the internet era, that freedom extends to the level of the individual article, and it is far greater in both quantity and breadth. Where adults are concerned, we should picture not individuals wounded by stray fragments of harmful information, but individuals who have obtained the information they sought.
Numerous platform operators are deleting and blocking lawful posts because of the risk of legal liability, scaling back or shutting down services, and exhausting on post monitoring the resources that should be devoted to developing new technology. More importantly, these monitoring costs function as barriers to entry, delaying the emergence of new, competitive platforms.
The internet is a very special tool in the history of civilization. Solutions to a significant number of the problems facing humanity already exist. Millions of people die of starvation every year, yet humanity already produces three times the food its population needs annually, so more than enough food to avert starvation is discarded every year without ever finding consumers. A tremendous number of people die not because there is no cure for their disease, but because the cure, that is, the information about it, is not properly distributed. What matters is organizing the will to implement these solutions. The internet accelerates the mutual communication and information sharing that are the first steps toward accumulating such will, and it further enables the political, technical, and industrial innovations through which that will can be organized.