AI regulation we need: Asimov’s Three Laws of Robotics: UC Davis Lecture

by | Feb 5, 2026 | Free Speech, Innovation and Regulation, Open Blog

KS Park gave a lecture on artificial intelligence on January 26, 2026 at UC Davis Law School.

What is AI? Currently, it is simply a stochastic tool that analyzes vast sets of human behavioral data and regurgitates the most statistically probable response to a human prompt. Averaged over a large number of captioned photos, “cat” is the word most statistically likely to be attached to the cat photos, and AI produces that average answer.
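To make the averaging concrete, here is a minimal sketch in Python, using invented toy captions and a made-up stopword list: the most statistically probable word is found by nothing more than counting.

```python
from collections import Counter

# Toy captions humans attached to photos of cats (invented examples).
captions = [
    "a cat sleeping on a sofa",
    "my cat staring at the camera",
    "black cat on a windowsill",
    "small cat playing with string",
]

# Ignore filler words so the counting is not dominated by "a"/"the".
stopwords = {"a", "my", "on", "the", "at", "with"}

counts = Counter(
    word for caption in captions for word in caption.split()
    if word not in stopwords
)
print(counts.most_common(1))  # [('cat', 4)] -- the statistically "average" answer
```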

AI learns “things” like a child learns: lots of cat photos are shown and lots of non-cat photos are shown. Little by little, a child can tell a cat photo from a non-cat photo without being able to define what felinity is. The strong correlation between cat photos and the word “cat” is somehow stored in the child’s brain, and likewise in the software.

Note that AI does not really “learn things” but “mimics human cognition” – it mimics the tendency to correlate the word “cat” with a composite of certain features.
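A minimal sketch of that stored correlation, using a toy logistic regression over invented 0/1 photo features (the feature names are purely illustrative): the model is never given a definition of felinity, only labeled examples, and what it stores is a set of learned weights.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented 0/1 "photo features": [pointy_ears, whiskers, fur].
# The model is never told what felinity is; it only sees labeled examples.
X = np.array([
    [1, 1, 1], [1, 1, 1], [1, 0, 1], [0, 1, 1],  # photos labeled "cat"
    [0, 0, 1], [0, 0, 0], [1, 0, 0], [0, 1, 0],  # photos labeled "not cat"
])
y = np.array([1, 1, 1, 1, 0, 0, 0, 0])

model = LogisticRegression().fit(X, y)

# The stored "knowledge" is just correlation weights, not a definition.
print(model.coef_)                 # positive weights on all three features
print(model.predict([[1, 1, 1]]))  # [1]: it resembles the cat examples
```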

Even the Nobel Prize-winning AlphaFold did not create anything new: it was trained on folded structures already known (to humans) for known amino acid sequences, and it predicts unknown folded structures from new amino acid sequences.

Zero creativity! Just statistics. But hugely, hugely helpful. It is no wonder Ben Affleck called AI writers’ output “converging on mediocrity”: AI is an averager.

Now, because it is statistical and because its goal is to mimic human behavior, the human behavioral data set is what drives the performance of AI. Generally, the more or better data is tokenized and fed into the training set, the better the AI performs. It is not the software that drives the performance. By the way, you already know that AI is software, like Microsoft Windows or iOS, which can be copied ad infinitum. Much open-source AI is distributed and developed exactly like open-source software. The only reason we ordinary folks do not have such systems on our notebooks is the computing power and the data they cannot pull in.

 

What kind of AI regulation do we need?

I just said AI is a stochastic machine: it is a software-based tool for human endeavor. Then do we have software regulation? No. Why do we have this fear of AI, and why do we have to control it? What distinguishes AI from pre-AI automation or pre-AI software? AI is replacing the internet as a tool that gives powerless individuals the power of knowledge that was previously available only to big governments and companies. There is a promising pro-democracy potential within AI that we need to nurture.

The unique harms of AI, as far as we can deduce from the machine-learning-based nature of the current version of AI, include among other things:

1. monopoly on training data: AI is only as good as the training data it is trained upon – unlike ordinary software, which can be owned outright and controlled, users have to depend on the availability of the training data, and the data controllers will often have exclusive control over the data;

2. amplification of human unfairness: unlike software, which does not run on human behavioral data, AI averages over behavioral data that includes the data of bad individuals contained in bad training data;

3. post-singularity existential threats (i.e., “AI conquering the world or decimating the human race”) – exactly because AI is a machine, it can be commanded to take anti-human actions that may or may not take into account the sanctity of human lives.

 

Data monopoly

The right of access to AI systems is certainly a right to be enjoyed by many people, in the same way and for the same reason that the right of access to the internet should be guaranteed. However, it should be noted that, currently, even the right to internet access is given the stature of a human right only in the limited sense of the right against internet shutdowns, that is, the right to continued access to the internet.

People talk about privacy being threatened by AI, but privacy regulation already covers data collection, data retention, access to data, data analysis (including inferring sensitive personal information), data transfers, and system outputs. If we adjust data protection law for the purpose of AI harm reduction, the most important change will be a stronger data portability right. To enhance universal access to AI, we may want to strengthen data portability so that people do not gravitate toward, and entrust their behavioral data to, a limited number of big platforms.
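As a rough illustration, here is what a data-portability export might look like in practice: the user’s behavioral data in a machine-readable file they can carry to a competing provider. The schema and field names are invented for illustration, not taken from any actual regulation or platform.

```python
import json

# A hypothetical data-portability export: the user's behavioral data in a
# machine-readable format they can carry to a competing AI provider.
# The schema and field names below are invented for illustration.
user_export = {
    "user_id": "u-12345",
    "prompts": ["best route to work", "summarize this article"],
    "preferences": {"language": "en", "tone": "concise"},
}

with open("takeout.json", "w") as f:
    json.dump(user_export, f, indent=2)
```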

 

Amplifying human bias in training data

Given the data-based nature of AI, in order to make AI not discriminate, we must diversify the training data so that AI has a pro-diversity, tolerant personality. For AI to work for you, your behavioral data must be in the training data. For democracy, meaning for making AI work for everyone, everyone’s data must be in the training data. Amazon’s hiring data probably lacked enough data on successful female executives. Diversity of the training data set is a key to AI. Microsoft’s chatbot Tay became racist and sexist because it was trained on pre-existing data sets on Twitter and learned from them that being controversial is a key to getting more followers.
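The Amazon example can be made concrete with a toy sketch: train a classifier on invented, historically biased hiring records, and the bias reappears in the predictions. The data and features below are fabricated for illustration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented historical hiring records: [years_experience, is_female].
# Equally experienced women were mostly rejected in the past, so the
# label correlates with gender even though gender is irrelevant.
X = np.array([
    [5, 0], [6, 0], [4, 0], [7, 0],   # male applicants, all hired
    [5, 1], [6, 1], [7, 1], [4, 1],   # female applicants, mostly rejected
])
y = np.array([1, 1, 1, 1, 0, 0, 1, 0])

model = LogisticRegression().fit(X, y)

# Two resumes identical except for gender:
probs = model.predict_proba([[6, 0], [6, 1]])[:, 1]
print(probs)  # the female candidate scores lower: the model averaged the bias
```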

If there is a feature of human behavior that we as a collective do not want to see, we should decidedly remove it from the training data. This is so because the current version of AI is based on machine learning. It learns by trial and error, as a child learns. If you accept the metaphor of a child’s learning, what would you put in the textbooks used for training the child? Would you not carefully curate what is in the textbooks?

Now, let’s be careful. The government should not be involved in sanitizing the training data. We do not want to see authoritarian governments making AI “buffer” when asked about political events. Civic groups and AI developers should work together to sanitize and curate the training data. Maybe we need a regulation that forces such a collaborative sanitization process without specifying what needs to be removed. Any such regulation should include a full prohibition on data moderation by the government.
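A minimal sketch of what such a collaborative curation step might look like, assuming a hypothetical blocklist agreed between civic groups and developers; the blocklist entries and the corpus are placeholders.

```python
# A sketch of collaborative dataset curation: drop training examples that
# contain terms on a blocklist agreed between civic groups and developers.
# The blocklist entries and the corpus below are placeholders.
BLOCKLIST = {"slur_1", "slur_2"}

def is_acceptable(example: str) -> bool:
    """Keep an example only if it contains no blocklisted term."""
    return not (set(example.lower().split()) & BLOCKLIST)

raw_corpus = ["a friendly reply", "an abusive reply containing slur_1"]
training_corpus = [ex for ex in raw_corpus if is_acceptable(ex)]
print(training_corpus)  # only the acceptable example survives
```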

 

“Singularity control”

That it is a tool or machine somehow under the maker’s control does not mean that it is safe. There are tools that leave the maker’s control and cause harm, or the maker may have bad ideas. Computer viruses or real viruses are one example. They may have been created for some human purpose, but once they are unleashed onto the online or offline world, they are faithful only to what their source code or DNA commands. If AI’s source code commands self-preservation or proliferation, as a computer virus’s source code or DNA does, AI may launch nuclear missiles if it decides that the absence or subjugation of the human race is better for AI’s self-preservation.

This possible threat to humans has generated discussions on criminal punishment of AI, but such a regime can be conceptualized only if AI is given freedom or assets that can be taken away, and a recent study done by KS shows that people are not willing to grant AI freedom or assets, revealing a “punishment gap”.

A better approach is to consider creating a community of practice similar to that of bioethics, such as institutional review boards (IRBs), so that any in vivo or in vitro creation of AI is done carefully and the AI is not given an unbounded command such as “flourish at all costs” or “preserve your life at all costs.” Maybe Asimov’s Three Laws of Robotics are in order. There is already discussion of an international treaty similar to the nuclear Non-Proliferation Treaty (NPT).

 

There are other concerns that have generated regulatory appeal, often couched in terms of “rights”.

How about labor replacement?

New jobs are replacing old jobs. There are services that cannot be replaced by machines, not because humans are better but because authenticity is part of the service. What is on the stage can be replicated by machines. Education, performances, medicine, law, and all things on the supply side can be replaced by machines, but there is one group of people who cannot be replaced: students, audiences, patients, clients, etc. Why? Because they have desires. There must be sentient beings with desires before someone decides to teach, entertain, treat, or counsel another person. I am not saying that they will have to be paid to come to schools, concerts, hospitals, and law firms, although I can see advertising companies providing free services to attract them and selling ads on their eyeballs. The point is that people are paying for authenticity, not for services. Jeans once owned by people are sold at higher prices. Also, look at opera, plays, sports, and games, where people’s skills are judged and evaluated.

At law schools, we were having this discussion about papers written by AI. I joked, “Why not have AI grade the papers?” But I was not joking: students’ skills are judged and evaluated, but it is the students who give the prompts. Crafting smart prompts is a skill to be evaluated. If there is a way to extract the skill of designing arguments from the texts, instructors can capture that and grade it. But papers written by AI are often too long compared to their payload, and in that sense it is only fitting to use AI to grade them.

The right to algorithmic transparency and explainability

The right to algorithmic transparency and explainability suffers from several problems. Firstly, algorithmic transparency does not really disclose anything about how AIs think. They respond to human prompts by regurgitating the most stochastically probable responses derived from massive sets of human behavioral data. The decisions of AI are affected much more profoundly by the training data than by the software. Disclosure of the algorithm does not make AI “transparent” in any sense relevant to human rights.

Secondly, there is a fundamental limit on explainability because of the way AI learns – like a child learning to tell a cat from a dog just after seeing what adults call cats versus dogs, or by averaging over human behavioral data sets. When we ask AI why it thought a picture was a picture of a cat, the only answer it can give is “because that picture looks more like one billion other pictures of cats than ten billion other pictures of non-cats.”
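That limit shows up even in the simplest learners. In this nearest-neighbors toy sketch (the feature vectors are invented), the most honest “explanation” the model can offer is a list of the training examples the query resembles.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Invented 2-D "image features"; label 1 = cat, 0 = non-cat.
X = np.array([[1.0, 0.9], [0.9, 1.0], [0.1, 0.2], [0.2, 0.1]])
y = np.array([1, 1, 0, 0])

knn = KNeighborsClassifier(n_neighbors=3).fit(X, y)
query = np.array([[0.8, 0.8]])

print(knn.predict(query))          # [1]: classified as "cat"
dist, idx = knn.kneighbors(query)  # the only "explanation" on offer:
print(idx)                         # "it sits closest to these known examples"
```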

Thirdly, human decisions are not any more explainable or transparent than machines’. Human decision-makers may be transparent about what they believe to be their explanations, but they themselves cannot explain in language acceptable to others. This is the very reason that real jury verdicts in real trials do not come with explanations. The only reason jurors can give is “Witness A sounded more credible than Witness B.”

Fourthly and more importantly, private individuals do not have obligations to explain their decisions. Even under anti-discrimination laws, choosing a job applicant for absolutely no reason, as opposed to for the reason of their race, gender, etc., does not violate the law. This means that the explainability requirement can be applied only to government actions. In that case, the question is whether this explainability requirement adds anything to the constitutional protection of equality, which, by the way, happens to take the form of a “rule against arbitrariness” or “rule against inexplainability” in German constitutional jurisprudence. Then again, if a government decision is arbitrary and cannot be explained, should it not be unconstitutional regardless of whether it is made through AI?

 

The right to not be manipulated

The right to not be manipulated also impinges on people’s right to use software and other tools to maximize the impact of their speech. A poor candidate in an election who cannot afford to hire artists to create visual campaign material should be allowed to use deepfakes to create an impactful visual campaign. A person with a visual impairment should be allowed to create impactful visual material with the help of AI to amplify their message. The fundamental problem is the difficulty of distinguishing manipulation from persuasion. Whether persuasion becomes manipulation depends on the power relationship between the speaker and the listener, not on whether AI was used. This is why, again, the no-manipulation rule may be better limited to government actors, who are already in a super-dominant relationship over citizens.

 

The right to human decision and human-to-human interaction

From the standpoint of homemakers, what changed civilization was the washing machine, which freed them from hand-washing chores that consumed 3-4 hours each day. Border officials who have gone through rigorous training still have to spend hours and hours comparing real faces to passport photos. Also, human judges may have to be Herculean in Ronald Dworkin’s sense in order to take into account all the relevant precedents, but AI may drastically reduce that burden and help human judges make equitable decisions more quickly. The idea that some decisions have to be made by humans underestimates the liberating power of technology, as long as AI is used as a tool of humans and is supervised by human AI developers.

The idea that some decisions have to be made by humans or some other sentient beings also underestimates the difficulty of pinpointing which decisions we want to humanize. Even in corporations, governments, or elections, it is not clear who makes a decision. If a corporation institutes an AI price-maker to maximize its profits, whose decision is that? If voters elect a president who carries out their election platform, whose decision is that? If people consult with AI to decide whom to vote for, is AI making the decision? Depending on how finely you granulate a process into shorter processes, there will naturally be decisions not made by human beings. It will be immensely difficult to decide which decisions are significant, so that they have to be made by humans, and which are insignificant, so that they can be made by AI.
