Lessons from platform regulation: Digital rights and governance at OGP Summit 2025 Plenary

by | Oct 14, 2025 | Free Speech, Innovation and Regulation, Open Blog, Open Government

K.S. Park answered the question “The growth of social media has shown that the lack of regulation has perpetuated its potential to exacerbate societal harms and weaken democracy. What are the critical lessons learned from this experience [especially regarding intermediary liability and content governance] that we may apply to the emerging governance frameworks for AI and other powerful technologies?” in an October 8 session titled Digital Rights and Governance at the Open Government Summit 2025, as follows (video available here at 4:03:50):

There are three lessons: 

  1. Intermediary liability safe harbor is here to stay. The DSA says so, because platform diversity is a goal that we should not give up on.  
  2. We can’t give the safe harbor away to platforms without requiring something in return. The DSA does require something: notice-and-takedown. 
  3. Countries are at different stages of historical development and need different solutions, depending on which stage they are at.

Lesson 1: Intermediary liability safe harbor. Open Net just administered a three-day school on platform governance for Asian activists and policy-makers (link). We stressed this point. Imposing liability on platforms for user content whose existence or illegality they are unaware of will force them to review all user content either before upload (prior censorship) or after upload (general monitoring). The SME platforms that cannot afford the cost of prior censorship or general monitoring will have to shut down their services, leaving only big techs. That is what has happened in Korea and elsewhere in Asia, where the safe harbor has not been clearly established. The resulting oligopoly makes censorship and surveillance too easy: even one company’s compliant attitude will affect millions of postings and data. That is why the DSA clearly states that it inherits the intermediary liability safe harbor of the e-Commerce Directive of 25 years ago. 

Lesson 2: The liability exemption should not be given to platforms for free. That is exactly what CDA 230 does, and information integrity suffers. The liability exemption should be given only in return for the platforms’ diligent efforts to remove illegal content. That is exactly what DMCA 512 has done. As platforms diligently and happily go through the motions of removing all allegedly illegal content upon notice and restoring allegedly lawful content upon appeal, 90-95% of allegedly illegal content is being taken down without violating freedom of speech, to the satisfaction of Hollywood and other rights holders. The DSA is a Europeanization of the DMCA in that sense. The restoration rate for TikTok under the DSA is quite high, but we will have to see whether the rate goes down and achieves that version of “rough justice.”

Lesson 3: Countries need different solutions depending on their stage of historical development. What works for Europe may not work for Asia. As I explained earlier, notice-and-takedown was optional: a carrot to incentivize platforms into diligent policing without violating users’ free speech. NetzDG, however, unlike the DSA, began as a stick: mandatory notice-and-takedown whereby platforms were penalized for not removing illegal content. Because platforms would rather err on the side of removal, this incentivized them to take down much lawful content. This inspired a dangerous trend in Asian countries where the social structure is vertical and the government reigns over civil society: communication ministries gave themselves the central role of issuing takedown notices to platforms and ended up taking down much dissident content, risking authoritarianism. Thankfully, the French Constitutional Council struck down the French version of NetzDG, the Avia Law, reasoning that free speech should not be restricted by administrative bodies alone. This is why many CSOs

Also, K.S. Park answered the question “Emerging technologies such as AI have already changed the ways economies and societies function. How can we ensure that these improve people’s lives and help build public trust?” as follows (video here at 4:16:55):

Intelligence is an evolutionary construct. AI is not intelligent in that sense, as it has no agent in it endowed with an instinct for self-preservation. AI is a stochastic machine that absorbs huge numbers of data sets and regurgitates the most probable response to human prompts. To make a long story short, AI mimics your behavior. To make AI work for you, it must be trained on your data. To make AI work for everyone, it is of paramount importance that people trust AI; otherwise, AI will be caught in a vicious cycle of not having enough data, producing unfair results for some (like Amazon’s hiring AI that discriminated against women, or Microsoft’s chatbot Tay that discriminated against minorities), and scaring people away from submitting their data for training. To avoid that cycle, the governments’ role is important because, first, they hold high-quality data for training AI. OGP is important because governments are making all of that data available for civic discourse and the private sector. Governments can also use data protection law to activate anonymization and pseudonymization to protect privacy, so that people feel less scared about submitting their data.

K.S. Park also answered the question about age verification and the Brazilian dispute on platform accountability (X being blocked for not complying with a judge’s request) as follows (video here 4:36:49):

Platforms should be regulated, but regulated differently. A platform for what? A platform carries our communication. Blocking an entire platform for some violations of law, as in Nepal for instance, showed how unwise that is. Regulation of platforms will always be a double-edged sword. Age verification law is another regulation that can backfire against people. It will hurt everyone’s privacy because everyone must submit their data to verify their age. Also, digital technology produces non-rivalrous goods, namely data, from which many can benefit without causing scarcity to others. OGP’s role in unleashing that benefit of digital technology should be remembered.

Other speakers and questions addressed to them:

  • Oscar López Águeda, Minister for Digital Transformation and the Civil Service, Spain: The EU is a global leader in rights-respecting technology governance. What is your top priority in this space, and specifically, how can Open Government approaches and multilateralism be strengthened to ensure Spanish and EU governance policy truly catalyses innovation and also protects human rights?
  • Daniela Chacón Mendoza, President, ACCESA, Costa Rica: Your own work sits at the intersection of information, participation and technology. Could you share one or two concrete examples of policy or practice in Costa Rica or the region that shows how digital governance can empower the public and enhance democratic accountability?
  • Mykola Vavryshchuk, Deputy Mayor, Khmelnytski, Ukraine: Digital tools have been crucial for Ukrainian local leaders like you, both for serving citizens in a time of crisis but also to combat disinformation. How have you leveraged digital tools and data to serve people in this time of crisis, and to maintain citizen trust and resilience during the war?
  • Velislava Petrova, Chief Programme Officer, Centre for Future Generations: Something that we hear in a lot of technology policy conversations, especially from companies, is that regulation hinders innovation. From your extensive work on governance of new technologies, what are some essential tools, data and consultative approaches policymakers need to employ to ensure proactive, evidence-based governance that catalyzes—rather than curtails—innovation in the public interest?
  • Nighat Dad, Digital Rights Foundation, Pakistan and Member, Oversight Board: In your role in the Oversight Board and other international fora, what are the digital rights issues or perspectives that are currently missing or marginalised in global governance discussions? And what concrete mechanisms could OGP or similar platforms adopt to include more diverse voices in these discussions?
  • Daniel Mordecki, Executive Director, AGESIC, Uruguay: Uruguay revised its AI strategy recently, grappling with a rights-based approach while pushing for more room for innovation and accountability. What practical tools and oversight mechanisms is AGESIC implementing to ensure that the design and deployment of AI in the public sector remains accountable, auditable, and serves the public interest?
