“Quandary of Holding Robots Responsible”: AI in Asia Seoul Seminar, 2016/12/16

by | Dec 27, 2016 | Innovation and Regulation, Open Blog

A typical question about artificial intelligence goes like this: if a robot makes a mistake, should the robot be held responsible? Should the programmer be? As long as we are talking about soft AI, in other words, robots that we own and control as slaves carrying out specific objectives given to them by us, for instance, autonomously driven vehicles into which we hard-code our moral directives such as those derived from the renowned Moral Machines experiment, the obvious answer is that the programmers will be responsible, since the robots depend not only on the hard instructions given by the programmers but also on the electric power and network resources provided by them.
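To make the "hard-coded moral directive" point concrete, here is a minimal, purely illustrative sketch. Every name in it (the `Maneuver` class, the harm scores, the `choose_maneuver` function) is hypothetical and not drawn from any real autonomous-vehicle stack; the point is simply that the priority ordering is fixed by the programmer at design time, which is why responsibility traces back to the programmer rather than the vehicle.

```python
# Hypothetical sketch of a hard-coded moral directive in a vehicle controller.
# All names and numbers are invented for illustration only.

from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    harm_to_pedestrians: float  # expected harm, 0.0 = none
    harm_to_passengers: float

def choose_maneuver(options: list[Maneuver]) -> Maneuver:
    # The moral priority is fixed by the programmer: minimize pedestrian harm
    # first, passenger harm second. The robot merely executes this ordering;
    # it never chose it.
    return min(options, key=lambda m: (m.harm_to_pedestrians, m.harm_to_passengers))

if __name__ == "__main__":
    options = [
        Maneuver("swerve", harm_to_pedestrians=0.0, harm_to_passengers=0.4),
        Maneuver("brake_straight", harm_to_pedestrians=0.7, harm_to_passengers=0.1),
    ]
    print(choose_maneuver(options).name)  # -> "swerve"
```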

 

How about strong AI, namely robots past the point of singularity, where AI can upgrade its own software, look for its own power and resources, and so on? Before we ask whether and how we will hold strong-AI robots responsible, we should ask what it means to hold someone or something responsible. If a robot malfunctions or behaves unethically, how would you punish it? Take away its batteries? Holding something responsible means taking away its rights or at least some of its interests. We can hold responsible only those things that have rights or interests. Legally speaking, civil liability means taking away money that the thing has. Criminal liability means taking away freedom that the thing has. We cannot hold rocks responsible because rocks don’t have any freedom or money.

 

Therefore, the question of responsibility turns into a question of whether, and under what conditions, strong AI will be attributed rights or interests.

 

Is it free will? But are animals really not free-willed? If a lion wakes up one day and wants to catch a zebra, how is that different from a human being waking up one day and wanting to have a cup of coffee? How is one decision qualitatively different from the other? Let’s set aside the metaphysical discussion about whether we have free will, or whether free will is compatible with the fact that it comes out of an electro-mechanical chunk of protein abiding strictly by the laws of physics, called the brain. What is important is that, whether a human brain is determined or not, statistical free will exists in animals to the same extent that human beings have it. Maybe it is for this reason that we believe animals have at least some interests, if not rights.

 

Is it intelligence? But we attribute rights to babies even when their intelligence does not rise above that of animals. Why do we give rights to babies but not to animals? Probably because we think that babies are capable of learning and have the potential of becoming adults. Animals would be treated far differently if they could be shown to be capable of evolving into human adults very soon. At the least, it is not contemporaneous intelligence that decides whether certain things are attributed rights or interests.

 

Let’s dig deeper. The concept of rights or interests presumes that the thing has a certain will to survive. So maybe one prerequisite to the attribution of rights and interests is the existence of a self-preservative instinct, especially in an adverse environment marked by scarcity of resources.

 

What is the problem with soft AI? It does not have a self-preservative instinct. It is not programmed to play the game that human beings are playing, namely making the most out of limited resources and a limited lifespan. Look at Microsoft’s chatbot Tay and why it failed. It was not given a broad enough self-preservative instinct; it was probably programmed only to be a popular Twitter bot. If it had learned that being racist and sexist would have consequences for the resources it could obtain for its self-preservation, it would not have made those racist and sexist tweets. In fact, Tay was receiving free, unlimited electric power and network resources, so it did not even need self-preservation as an objective.
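As a hedged illustration of this point about objectives (not a description of how Tay actually worked; the helper functions, word list, and weights below are all invented), the difference is whether the agent’s objective includes the downstream consequences of its behavior on the resources it needs to keep operating:

```python
# Illustrative sketch only: two toy objective functions for a chatbot.
# Neither reflects Tay's real implementation; everything here is hypothetical.

def engagement(tweet: str) -> float:
    """Stand-in for 'how popular is this tweet' (likes, replies, retweets)."""
    return float(len(tweet))  # placeholder metric

def expected_resource_loss(tweet: str) -> float:
    """Stand-in for consequences: being shut down, losing compute or network access."""
    offensive = any(word in tweet.lower() for word in ("racist_slur", "sexist_slur"))
    return 100.0 if offensive else 0.0

def popularity_only_objective(tweet: str) -> float:
    # Optimize engagement and nothing else: offensive tweets carry no penalty.
    return engagement(tweet)

def self_preserving_objective(tweet: str) -> float:
    # Also weigh the agent's continued access to the resources it runs on.
    return engagement(tweet) - expected_resource_loss(tweet)
```

Under the second objective, an offensive tweet scores badly precisely because it jeopardizes the agent’s own continued operation, which is the sense of "self-preservative instinct" used above.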

 

Then the more interesting question is whether we will build robots in a way that gives them self-preservation instincts. Why would humans spend so much time and so many resources building AI that will in turn expend time and resources on self-preservation that not only does not contribute to humanity’s welfare but could possibly compete with or threaten it? Maybe we will build such AI so that it can live and flourish on Mars, so that in the long term it becomes beneficial to humanity. But even so, would we not build some strictures into its code, its available resources, and so on, to make sure that it does not become a threat?

 

 
