Social media governance: The danger of holding intermediaries liable for noticed content – Kyung Sin Park – Submitted to Asian Internet History Project, 4th Decade

Jul 26, 2020 | Free Speech, Open Blog

Social media is punctuated by concepts such as “followers”, “friends”, “retweets”, “likes”, and “shares”. Communication there is organized around automated timeline curation, which makes it more relationship-based than content-based. It is not powerful content that rules but powerful people, i.e., those with many followers and friends, who determine which content becomes powerful. The end result is people with similar interests sharing content of common interest among themselves, producing in turn the proverbial echo chamber and filter bubble. Speech is easily intensified, together with its harmful effects.

There are several important points of discussion for social media governance.

Firstly, two types of speech thrive on the virality characteristic of social media: fake news and hate speech. Both tend to be amplified along lines of political kinship among speech participants, lines which are quite often starkly drawn on social media. Much of the content regulation that explicitly targets a broader group of platforms (i.e., services hosting user-created content) is intended mainly to target social media. Evaluating social media governance tools should therefore always start with an evaluation of the substantive laws on fake news and hate speech. These observations do not mean that there is no actionable fake news or hate speech. We should simply be careful about not just the mode of regulation but also its coverage.

Secondly, the intermediary liability safe harbor looms larger here because some social media regulations attempt to impose liability for contributing to online illegal activities. The intermediary liability safe harbor is a principle that intermediaries should not be held liable for online activities that they do not know about. The idea is that, otherwise, intermediaries will engage in prior censorship or ‘general monitoring’, either of which would violate the international norms set forth in the EU e-Commerce Directive[1] and the statements of the UN Special Rapporteur on Freedom of Expression.[2]

However, even imposing liability for content whose existence they know of can undermine the goals of the intermediary liability safe harbor, if legality is not well defined, because social media platforms will “err on the side of deleting” and end up removing lawful content, shrinking the online civic space. Germany adopted such legislation in 2017, the Netzwerkdurchsetzungsgesetz (known in English as the Network Enforcement Act, or simply the NetzDG). In Australia, the Criminal Code Amendment (Sharing of Abhorrent Violent Material) Act 2019 requires online content service providers to remove “abhorrent violent material” expeditiously. The recent decision by the French Constitutional Council to strike down a similar French effort rests exactly on that reasoning.

I. Coverage of social media regulation

A. “Fake news”

Spreading information cannot be made a basis of liability merely because the information is false, offensive, shocking or disturbing, or contrary to the public interest. For several decades, the UN Human Rights Committee, the European Court of Human Rights, and the highest courts of many countries have warned that punishing false information for no other reason, or for very vague reasons such as the public interest, will be abused by authoritarian governments to punish what is in fact truthful criticism or, failing that, will still chill vigorous civil dialogue.

B. Hate speech

Hate speech is not simply “hateful speech”. Hate speech was first recognized as punishable under the Universal Declaration of Human Rights when it condemned discrimination and hostility along national, religious, and racial lines. That inspired an explicit treaty provision requiring State parties to ban speech inciting discrimination and hostility along national, religious, and racial lines. So hate speech must be, for a start, speech directing hatred at people defined by national, religious, or racial lines.

It should be carefully contemplated whether any such speech should suffice for punishment, or whether only speech likely to translate into actual harms against minorities should be punishable. A special problem in that analysis concerns counter-speech by minority groups, which, given that the discriminatory hierarchy of the society is slanted in the opposite direction, is unlikely to cause actual harms. Also, can the harms justifying punishment be mental harms, or must they be physical or financial harms? Hate speech regulation must take these issues into account.

II. Intermediary liability safe harbor

A. International standard

The intermediary liability safe harbor is a principle that intermediaries shall not be held liable for content that they do not know about.

Unless we want to paralyse the freedom of unapproved uploading and viewing, and therefore the power of the internet, an intermediary that cannot possibly know who posts what content should not be held responsible for defamation or copyright infringement committed via third-party content hosted on its services. If intermediaries are held liable for this unknown content, they will have to protect themselves either by constantly monitoring what is posted on their services (i.e. general monitoring) or by approving all content prior to upload. If that occurs, any posting that remains online does so with the acknowledgement and tacit approval of an intermediary that was aware of the posting and yet did not block it. The power of the internet—the unapproved freedom to post and download—will be lost.

The United States made headway by stipulating that no ‘interactive computer service’ should be treated as the speaker or publisher of third-party content.[3] Some think that went too far because it shielded intermediaries from liability even for content they themselves clearly knew to be unlawful. In response, a ‘safe harbour’ regime could have been set up to exempt from liability only content not known about. The EU did just that, although it added a requirement to act on such knowledge in order to obtain immunity,[4] while the US Digital Millennium Copyright Act (DMCA)—perhaps heeding calls from Hollywood, the biggest rightholder group in the world—went further by limiting immunity to cases where intermediaries take down all content on notification, regardless of whether the content is illegal or the intermediary is aware of that fact.[5] The DMCA, which incentivizes intermediaries into automatic takedowns, is often criticized,[6] whereas the EU model makes immunity available to intermediaries that receive a notification but are not convinced of its substance. In any event, notice-and-takedown safe harbours have spread.[7]

Table 1 shows a cross-jurisdictional comparison of the relevant provisions relating to third party postings against which a takedown notice was sent to the hosting intermediary (hereinafter ‘noticed posting’).

Table 1 Takedown notices for third party postings

| Jurisdiction and provision | Trigger (knowledge or notice) | Liability exemption for ‘noticed posting’ | Separate liability provision |
| --- | --- | --- | --- |
| Europe: e-Commerce Dir., Art. 14(1) | On obtaining knowledge or awareness of the infringing information | The service provider is not liable for the information stored, on condition that it acts expeditiously to take down the content | N/A |
| United States: DMCA, s. 512(c) | On obtaining knowledge or awareness of facts and circumstances of infringing material or activity, or on notification of claimed infringement | The service provider shall not be liable if it acts expeditiously to take down the material claimed to be infringing | N/A |
| Japan: Provider Liability Law, Art. 3(1) | When the relevant service provider knew, or there were reasonable grounds for the provider to know | The service provider shall not be liable for any loss incurred from the infringement unless it was technically possible to take down the infringing content | N/A |
| South Korea: Copyright Act, Arts 102 and 103 | If an OSP actually knows of, or has received a takedown notice and thereby learned of, the fact or circumstances of an infringement | If the OSP immediately takes down the noticed posting, it shall not be liable for the infringement | In the event of a takedown request, the OSP must immediately take down the posting (Art. 103(2)) in order to limit its liability (Art. 103(5)) |
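To make the structural contrast concrete, the following is a minimal sketch in Python. It is purely illustrative: the predicates are simplified paraphrases of Table 1 rather than the statutory texts, and all function and field names are invented for this illustration.

```python
from dataclasses import dataclass

@dataclass
class NoticedPosting:
    """A third-party posting about which the host has received a takedown notice."""
    provider_knew: bool                  # knowledge or awareness of the (alleged) infringement
    took_down_expeditiously: bool        # whether the host acted expeditiously on that knowledge
    takedown_technically_possible: bool  # relevant to Japan's Art. 3(1)

# Each predicate returns True when the hosting intermediary is EXEMPT from liability.
# These are simplified paraphrases of Table 1, not the statutory texts themselves.

def exempt_eu_art_14(p: NoticedPosting) -> bool:
    # e-Commerce Directive Art. 14(1): no liability if, upon obtaining knowledge or
    # awareness, the provider acts expeditiously to take the content down.
    return (not p.provider_knew) or p.took_down_expeditiously

def exempt_us_dmca_512c(p: NoticedPosting) -> bool:
    # DMCA s. 512(c): no liability if the provider expeditiously removes the material
    # upon knowledge, awareness of facts and circumstances, or notification.
    return (not p.provider_knew) or p.took_down_expeditiously

def exempt_japan_art_3(p: NoticedPosting) -> bool:
    # Provider Liability Law Art. 3(1): liability attaches only where takedown was
    # technically possible AND the provider knew or had reasonable grounds to know.
    return (not p.takedown_technically_possible) or (not p.provider_knew)

def exempt_korea_art_103(p: NoticedPosting) -> bool:
    # Copyright Act Arts 102-103: the exemption is coupled with an affirmative duty to
    # take the noticed posting down immediately (Art. 103(2)) in order to limit
    # liability (Art. 103(5)).
    return p.took_down_expeditiously
```

The only point of the sketch is that each of these regimes is drafted as an exemption, i.e., a condition under which liability does not attach. The next subsection contrasts this with rules drafted the other way round, as impositions of liability.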

B. Imposition of liability for known contents

Germany’s social media law sounds innocent to the extent that it obligates social media companies to take down only ‘unlawful content’ and holds them liable for failure to do so.[8] However, the lessons from Asia, where liability was imposed for failure to take down notified unlawful content, speak otherwise. Such an innocent-sounding rule has profound repercussions. It is no wonder that Germany’s impact on other countries was discussed at a 2018 workshop on ‘whether online speech regulation laws in democratic countries could lead to a damaging precedent for illiberal states to also regulate online speech in more drastic manners’.[9]

The Chinese intermediary liability regime for defamation is neither strict liability nor a true safe harbour. It is limited liability in the sense that intermediaries are held liable only for (i) unlawful content that (ii) they are aware of. However, instead of formulating an exemption rule (e.g. ‘shall not be liable for unknown or lawful content’), China created a liability rule that imposes liability (a) for known unlawful content or (b) for unlawful content notified by the rightholder. Such a liability-imposing rule suffers from courts gravitating towards broad conceptions of knowledge and effective notification, unfairly holding intermediaries liable and naturally incentivizing them towards proactive censorship. Moreover, even if knowledge and effective notification are strictly interpreted, intermediaries are likely to err on the safe side by deleting notified content, acting on lawful content as well.[10]
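The incentive created by such a liability-imposing rule can be illustrated with a toy expected-cost model. This is a purely hypothetical sketch: the probabilities, damages, and cost figures below, and the function names, are invented for illustration and are not drawn from any of the sources cited here.

```python
# Toy model of the takedown decision an intermediary faces once it has received a
# notice but cannot be sure whether the noticed posting is actually unlawful.
# All numbers are hypothetical; only the asymmetry between the two regimes matters.

def expected_cost_keep_up(p_unlawful: float, liability_damages: float,
                          exemption_available: bool) -> float:
    """Expected cost of leaving a noticed posting online."""
    if exemption_available:
        # Under an exemption (safe harbour) rule, the host can await adjudication;
        # leaving the content up carries roughly no immediate liability exposure.
        return 0.0
    # Under a liability-imposing rule, a wrong guess on legality means damages.
    return p_unlawful * liability_damages

def expected_cost_take_down(p_unlawful: float, goodwill_loss: float) -> float:
    """Expected cost of taking the posting down (lost lawful speech, user complaints)."""
    return (1 - p_unlawful) * goodwill_loss

def decision(p_unlawful: float, liability_damages: float, goodwill_loss: float,
             exemption_available: bool) -> str:
    keep = expected_cost_keep_up(p_unlawful, liability_damages, exemption_available)
    down = expected_cost_take_down(p_unlawful, goodwill_loss)
    return "take down" if down < keep else "keep up"

# Even for a posting that is probably lawful (say a 20% chance of illegality),
# a liability-imposing rule pushes the host to delete, because potential damages
# (10,000) dwarf the cost of upsetting one user (100):
print(decision(0.2, 10_000, 100, exemption_available=False))  # -> take down
print(decision(0.2, 10_000, 100, exemption_available=True))   # -> keep up
```

On this stylized account, the ‘err on the side of deleting’ behaviour described above is not platform malice but the rational response to the asymmetry that the rule creates.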

The theory to test runs as follows: pre-safe-harbour, intermediaries are liable for content that they (1) know exists and (2) know to be unlawful. The liability-imposing rule applies to a subset of these knowledge-laden cases in which the intermediary has received a notice of infringement and therefore has knowledge of the existence of the allegedly unlawful material. Technically, when it receives a notification, all it has is knowledge of the existence of some controversial material, which is not equal to knowledge of unlawful material. However, the liability-imposing rule holds the intermediary immediately liable if it decides wrongly on the illegality. Cornered by fear of liability, intermediaries tend to take down even clearly lawful content. Is this also the case in South Korea? Article 44-2 (Request to Delete Information) of the Act Regarding Promotion of Use of Information Communication Networks and Protection of Information reads:

Anyone whose rights have been violated through invasion of privacy, defamation, etc., by information offered for disclosure to the general public through an information communication network may request the information communication service provider handling that information to delete the information or publish rebuttal thereto by certifying the fact of the violations.

Paragraph 2. The information communication service provider, upon receiving the request set forth in Section 1, shall immediately delete or temporarily blind, or take other necessary measures on, the information and immediately inform the author of the information and the applicant for deleting that information. The service provider shall inform the users of the fact of having taken the necessary measures by posting on the related bulletin board. [omitted]

Paragraph 4. In spite of the request set forth in Section 1, if the service provider finds it difficult to decide whether the rights have been violated or anticipates a dispute among the interested parties, the service provider may take a measure temporarily blocking access to the information (‘temporary measure’, hereinafter), which may last up to 30 days. [omitted]

Paragraph 6. The service provider may reduce or exempt the damage liability by taking necessary actions set forth in Paragraph 2.[11]

As is immediately apparent, the provision is structured not with phrases such as ‘the service provider shall not be liable when it removes . . .’ but starts out with the phrase ‘the service provider shall remove . . .’. Paragraph 6, referring to the exemption from or reduction of liability in the event of compliance with the aforesaid duties, makes a feeble attempt to turn the provision into an exemption provision, but the exemption here is not mandatory. This means that intermediaries will treat the paragraph 2 obligations as mandatory, not conditional.

Historically, the predecessors of Article 44-2 simply required the service provider to take down content on the request of a party injured by that content and did not provide any exemption.[12] The law was amended in 2007, in Article 44-2, to create a ‘temporary (blind) measure’ for borderline content, to which the service provider can now resort in fulfilling its responsibility under the previous law.[13] The central idea that remained was that the intermediary must take action (temporary or not) on infringing content upon notification. Again, the general idea of holding intermediaries liable for identified infringing content seems innocuous, but the South Korean cases are compelling reasons why it should be abandoned.[14] Intermediaries responded by removing even lawful content, and courts imposed liability even where the illegality was apparent only in hindsight, reinforcing the censorial tendencies of intermediaries. Now, this failure to set up a liability-exempting regime does not directly incentivize intermediaries towards a general monitoring obligation or prior approval of uploads. It could simply maintain the pre-safe-harbour status quo based on general joint tort liability. However, the reality in Korea shows that such a half-baked attempt has worsened the situation.[15]

The Chinese and South Korean “safe harbour” provisions for defamation have the structure of a liability-imposing regime despite their claims to create a safe harbour. Although they do not explicitly impose intermediary liability for unknown content (the main evil that safe harbour efforts seek to prevent through the bans on general monitoring obligations and prior approval), the problem of content known to exist but not yet known to be illegal remains, incentivizing intermediaries to delete lawful content for fear of liability. This state of affairs is in a sense worse than pre-safe-harbour general tort liability, because the existence of the procedure invites more people to submit takedown requests. Korea’s case shows how intermediaries may end up responding, i.e., with almost automatic on-demand takedowns of often lawful content. Alternatively, intermediaries may respond by investing in human resources to review all notifications and try to make decisions as accurately as possible,[16] but such a response becoming the standard is not satisfactory either: platforms that cannot afford such resources will struggle under the legal liability forming around that new standard, entrenching the dominance of the current global giants and eroding the promise of the Internet in yet another abysmal way.

Given this state of affairs, the French Constitutional Council’s decision on the Avia Law is a welcome development.

C. French Constitutional Council’s decision on the Avia Law

Article 1, paragraph I of the Avia Law would have amended an existing piece of legislation, Law №2004–575 of 21 June 2004 on confidence in the digital economy (loi n° 2004–575 du 21 juin 2004 pour la confiance dans l’économie numérique). Under the existing law, the French administrative authorities have the power to direct any online service provider to remove specified pieces of content which they consider to constitute (1) content that glorifies or encourages acts of terrorism, or (2) child sexual abuse imagery, backed by criminal sanctions. Where the online service fails to do so within 24 hours, or where the authority is unable to identify and notify the responsible online service, the authority can request ISPs and search engines to block access to the web addresses containing the content in question.

Article 1, paragraph I of the Avia Law would have reduced the time period from 24 hours to 1 hour and increased the potential sanction for failure to comply to one year’s imprisonment and a fine of 250,000 euros (up from the current sanction of one year’s imprisonment and a fine of 75,000 euros).

Paragraph II of the same article would also have amended Law №2004–575 of 21 June 2004 on confidence in the digital economy by introducing a brand new regime to complement the existing one. Under this new regime, online service providers whose activity exceeds a particular size (to be set down in a decree) would have a new legal obligation to remove or make inaccessible certain forms of “manifestly illegal” content within 24 hours of being notified by one or more persons, not just administrative bodies. These forms of illegal content are broader than paragraph I’s terrorism glorification or child sexual abuse material and include content (1) condoning crimes, (2) encouraging discrimination, (3) denying or trivializing crimes against humanity, slavery, war crimes, etc., (4) insulting persons on account of sex, etc., or sexually harassing others, (5) constituting pornographic representation of children, (6) encouraging terrorism, and (7) disseminating pornography to minors.

In striking down the 1-hour administrative censorship portion of the Avia Law (covering terrorism and child pornography content), the Constitutional Council focused on, among other things, the fact that the operative legality of the content was determined solely by administrative authorities and that, given the short time limit, online service providers were unable to obtain a decision from a judge before having to remove the content.

As to the 24-hour private notice-and-takedown component, the Constitutional Council noted that “the decision as to whether the content is ‘manifestly illegal’ or not is not one that a judge makes, but solely down to the online service provider [when] the decision may be a difficult one involving very technical legal issues, . . . particularly on crimes related to the press.”

Overall, the Constitutional Council’s decision is primarily concerned with “false positives”, i.e., takedowns of lawful content, whether administratively or privately triggered.

The reasoning of the Constitutional Council is important, since a number of governments around the world are looking at introducing legislation that would impose obligations on social media platforms, search engines, and other online service providers to take steps to limit the availability of various forms of illegal (and, in some cases, legal but harmful) content. Many of these proposals are similar to those in the Avia Law, particularly obligations to remove content notified by an administrative authority or by private persons, and to remove certain forms of content within a specified period of time (whether notified of their existence or otherwise), with sanctions for failing to do so.

 

[1] EU Electronic Commerce Directive 2000/31/EC, Article 15(1)

[2] Joint Declaration on Freedom of Expression and the Internet by the United Nations (UN) Special Rapporteur on Freedom of Opinion and Expression, the Organization for Security and Co-operation in Europe (OSCE) Representative on Freedom of the Media, the Organization of American States (OAS) Special Rapporteur on Freedom of Expression, and the African Commission on Human and Peoples’ Rights (ACHPR) Special Rapporteur on Freedom of Expression and Access to Information, June 1, 2011.

[3] See the Communications Decency Act [1996] 47 USC § 230 (US).

[4] See Council Directive 2000/31/EC of the European Parliament and the Council of 8 June 2000 on certain legal aspects of information society services, in particular electronic commerce, in the Internal Market (e-Commerce Directive) [2000] OJ L178/1, Art. 14.

[5] See the Digital Millennium Copyright Act of 1998, 17 USC § 512(c) (US). Importantly, the notice-and-takedown safe harbour is not applicable to content where intermediaries have actual knowledge of its illegality even before and without notice being given by a rightholder or any other person.

[6] See Jennifer Urban and Laura Quilter, ‘Efficient Process or “Chilling Effects”? Takedown Notices Under Section 512 of the Digital Millennium Copyright Act: Summary Report’; Electronic Frontier Foundation, Takedown Hall of Shame <https://www.eff.org/takedowns>; Wendy Seltzer, ‘Free Speech Unmoored in Copyright’s Safe Harbor: Chilling Effects of the DMCA on the First Amendment’ (2010) 24 Harv. J of L. & Tech. 171.

[7] See Daniel Seng, Comparative Analysis of the National Approaches to the Liability of Internet Intermediaries (WIPO, 2010).

[8] See the 2017 Network Enforcement Act (Gesetz zur Verbesserung der Rechtsdurchsetzung in sozialen Netzwerken, NetzDG) (Ger.).

[9] ‘Beyond Intermediary Liability: The Future of Information Platforms’, Yale Law School Information Society Project (13 February 2018) <https://law.yale.edu/system/files/area/center/isp/documents/beyond_intermediary_liability_-_workshop_report.pdf>.

[10] Kyung-Sin Park, ‘From Liability Trap to the World’s Safest Harbour: Lessons from China, India, Japan, South Korea, Indonesia, and Malaysia’, in Giancarlo Frosio (ed.), Oxford Handbook of Online Intermediary Liability (Oxford University Press, May 2020), DOI: 10.1093/oxfordhb/9780198837138.013.13.

[11] Act Regarding Promotion of Use of Information Communication Networks and Protection of Information, Art. 44-2 para. 1 (Kor.). South Korean legislation can be found at <http://www.law.go.kr>.

[12] See Law no. 6360 of 16 July 2001, Network Act, Art. 44(1)–(2) (Kor.).

[13] See Law no. 8289 of 27 July 2007 (Kor.).

[14] See Woo Ji-Suk, ‘A Critical Analysis of the Practical Applicability and Implication of the Korean Supreme Court Case on the ISP Liability of Defamation’ (2009) 5(4) Law & Tech. 78, 78–98.

[15] Kyung-Sin Park, ‘From Liability Trap to the World’s Safest Harbour: Lessons from China, India, Japan, South Korea, Indonesia, and Malaysia’, in Giancarlo Frosio (ed.), Oxford Handbook of Online Intermediary Liability (Oxford University Press, May 2020), DOI: 10.1093/oxfordhb/9780198837138.013.13.

[16] Facebook has “150,000 content moderators reviewing notifications around the world, taking down more than 10 million postings each year”. Remark by a Facebook representative at the workshop “Addressing Terrorist and Violent Extremist Content” at the 14th Internet Governance Forum (Berlin), November 2019.
