

At the October 3-4 Global Free Speech Summit hosted by the Future of Free Speech, K.S. Park spoke on the weaknesses of current AI regulations as follows:
- Korea passed a law modeled after the EU AI Act, as Brazil did, as my colleague mentioned. However, this is not sufficient. Humanity has automated one activity after another: automobiles, dishwashers, etc. Finally, AI has automated thinking or decision-making, which is clearly a First Amendment activity. The common problem of the Korean law and the EU AI Act is that they burden this First Amendment activity just because it is carried out with a particular type of software, a software that produces the most stochastically probable responses to human prompts.
- These laws do not regulate AI itself but only its applications: hence the added burdens when the same AI is applied to certain higher-risk or higher-impact areas. However, these so-called high-risk or high-impact areas are already covered by existing regulations for the very reason that they are high-risk or high-impact. What these laws end up doing may be to make it more difficult to comply with the existing standards of safety and fairness if you try to do so with the new probabilistic software. The result: they risk not only holding back the power of AI to make safer, more ethical decisions but also making those areas less safe and less fair.
- Now, I am not saying that AI only augments a First Amendment activity and therefore should not be regulated at all. There are three aspects of AI that need public intervention, if not regulation, tailored to the unique nature of the current version of AI, namely that it is based on machine learning. First, better availability of the training data, so that the benefits of AI are not limited to the big players that can purchase it. Second, better data governance, to protect the privacy of people whose personal data is used for AI training. Finally, making the training data better in quality, meaning removing bias, hate, and falsity so that they do not get amplified.
- On better data availability, first, copyright law: Korea will dutifully follow whatever the first world decides on the fateful question: is AI training on a copyrighted work an act of copying? When a human reads a book, it does not trigger copyright. How about when a machine reads it? Second, open government data: I am going to Spain for the Open Government Partnership Summit after this. Government is, by definition, the source of a lot of useful data. Early this year in France, we launched Current AI, an attempt to pool multi-government data for open sourcing, exactly for this purpose. We have not seen that in Korea.
- On better data privacy, Korea has a GDPR-style data protection law, but its mechanistic application is reducing the availability of personal data. For example, Korean court decisions are still not available for machine learning because of data protection concerns (link). Also, identifying someone in a critical news article or a complaint obviously involves non-consensual processing of personal data, but it is being criminalized (link). Finally, over-protectiveness of personal data has led to the ironic rule that even de-identification is allowed only in exchange for an absolute commitment that the data will never be re-identified. Now, as you know, some data subject rights require re-identification, and those rights are paralyzed because of the absolute ban on re-identification (link).
- On better data ethics, Korea has suffered, as have many other countries in Asia, from a vertical social structure whereby people look upward for a solution rather than looking across to one another for it. This is already evident in Korea’s AI Basic Act, which makes the government’s role prominent in vetting high-impact AI and thereby opens up the possibility of government censorship. What we need is a careful convening of civil society and the technical community, as in the example of the Southeast Asian Collaborative Policy Network, that will work together to create the proverbial textbook for the current machines to learn from, meaning moderation and curation of the data.
- In the end, we have to see AI in the continuum of technological progress whereby ever greater ranges of human activity have been automated. International human rights standards should be applied to AI activities with that in mind.
- In that regard, Korea’s two laws stand out as knee-jerk reactions to an unfounded fear of AI: complete or near-complete bans on deepfakes in electoral and sexual contexts. The election law bans all deepfakes, consensual or non-consensual, truthful or false, parody or satire. Open Net has filed a constitutional challenge against this (link). The sex crime law rightly bans non-consensual deepfakes but, again, does not require factual falsehood as an element of the crime and has no exception for satire or parody, and therefore has the potential of banning, for instance, the famous virtual nude figurine of Trump.