No Worries About AI Robots, for Now – Talk at AI in Asia by Digital Asia Hub 12/18/2016

Dec 20, 2016 | Innovation and Regulation, Open Blog

A typical question about AI goes like this: if robots make a mistake, should the robots be held responsible, or should it be their creators (i.e., the programmers)? As long as we are talking about soft AI, in other words robots that we own and control as slaves carrying out specific objectives given to them by us — for instance, autonomously driven vehicles into which we hard-code our moral directives, such as those derived from the renowned Moral Machine experiment — the obvious answer is that the programmers will be responsible, since the robots depend not only on the hard instructions given by the programmers but also on the electric power and network resources provided by them.
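To make the idea of hard-coded moral directives concrete, here is a minimal, purely illustrative sketch (the names and thresholds are hypothetical, not drawn from any real autonomous-driving system) of how a programmer-supplied rule might constrain a soft-AI vehicle's choices:

```python
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    risk_to_pedestrians: float  # 0.0 (no risk) .. 1.0 (certain harm)
    risk_to_passengers: float

def choose_maneuver(options: list[Maneuver]) -> Maneuver:
    """Pick a maneuver under a hard-coded moral directive (illustrative only)."""
    # Programmer-chosen rule: never accept a maneuver whose pedestrian risk
    # exceeds this threshold, regardless of any benefit to the passengers.
    PEDESTRIAN_RISK_LIMIT = 0.1
    permitted = [m for m in options if m.risk_to_pedestrians <= PEDESTRIAN_RISK_LIMIT]
    candidates = permitted or options  # fall back if no option satisfies the rule
    # Among the remaining options, minimize total expected risk.
    return min(candidates, key=lambda m: m.risk_to_pedestrians + m.risk_to_passengers)

# The "moral" outcome is fixed entirely by the constants the programmers chose,
# which is why responsibility points back to them rather than to the vehicle.
swerve = Maneuver("swerve", risk_to_pedestrians=0.05, risk_to_passengers=0.40)
brake = Maneuver("brake", risk_to_pedestrians=0.30, risk_to_passengers=0.10)
print(choose_maneuver([swerve, brake]).name)  # -> swerve
```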

How about strong AI, namely robots past the point of singularity, where AI can upgrade its own software, look for its own power and resources, etc., for self-preservation? Before we ask whether we will hold strong-AI robots responsible, we should ask what it means to hold someone or something responsible. If a robot malfunctions or behaves unethically, how would you punish it? Take away its batteries? That would constitute punishment only if such action is against the robot’s interest. But does the robot have such an interest? Holding something responsible means taking away rights or at least some interests; we can hold responsible only those things that have rights or interests. Legally speaking, civil liability means taking away money that the thing has, and criminal liability means taking away freedom that the thing has. We cannot hold rocks responsible because rocks have neither freedom nor money. As I shall show later, the lack of interests or rights is actually a bigger hurdle to holding rocks responsible for their actions than their lack of intelligence or free will.

Therefore, the question of responsibility turns into a question of whether and under what conditions strong AI will be attributed rights or interests.

Is it free will? But do we have free will? We can set aside the metaphysical discussion about whether we have free will, or whether free will is compatible with the fact that it comes out of an electro-mechanical chunk of protein abiding strictly by the laws of physics, called the brain. We can do so by starting from why we ask that question. One reason is that we indulge in the specificity of humanness: we are interested in what unique free will we have that distinguishes us from animals. But let’s face it: are animals really not free-willed? If a lioness wakes up one day and wants to catch and eat a zebra, how is that different from a human being waking up one day and wanting a cup of coffee? How is one decision qualitatively different from the other? Regardless of whether and to what extent a human brain is determined, statistical free will exists in animals to the same extent that human beings have it. The lesson here is that free will does not determine whether an entity should be considered human.

Actually, let’s turn the above discussion on its head. Do we hold animals responsible? We actually do, under limited circumstances (e.g., using carrots and sticks in training animals). How can we do so? Because we believe that animals have at least some interests, if not rights, that we can take away. And they have those interests because they have a will to self-preservation.

Is it intelligence? But we treat babies like human beings, though under severe restrictions. Many babies’ intelligence does not rise above that of animals, and yet we grant them a human status that we do not grant to animals. Why do we give human status to babies but not to animals? Probably because we think that babies are capable of learning and have the potential to become adults. Animals would be treated far differently if they could be shown capable of evolving into human adults very soon. So the lesson of this discussion is that it is not intelligence that decides whether something is given human status.

I think that the concept of rights or interests presumes that the thing has a certain will to survive. So maybe one prerequisite for the attribution of rights and interests is the existence of a self-preservative instinct, or will to survival, especially in an adverse environment punctuated by scarcity of resources.

What is the problem with treating soft AI as human? It does not have a self-preservative instinct. It is not programmed to play the game that human beings are playing, namely making the most of limited resources and a limited lifespan. Look at Microsoft’s chatbot Tay and why it failed. It was not given a broad enough self-preservative instinct: it was not programmed to be popular in a general sense (popularity always enhances one’s chances of survival) but probably only to be a popular Twitter bot. If it had learned that being racist and sexist would have consequences for the resources it could obtain for its self-preservation, it would not have made those racist and sexist tweets. Actually, Tay was receiving free, unlimited electric power and network resources, so it did not even need self-preservation as its objective.
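As a rough illustration of this point about objectives (a hypothetical sketch; it does not reflect how Tay was actually implemented), compare a bot scored only on engagement with one whose score also reflects the consequences an offensive tweet has for its continued operation:

```python
def narrow_objective(engagement: float) -> float:
    # "Be popular on Twitter": rewards engagement no matter how it is earned.
    return engagement

def self_preserving_objective(engagement: float,
                              offensiveness: float,
                              shutdown_cost: float = 500.0) -> float:
    # Same engagement reward, minus an estimate of how much an offensive tweet
    # endangers the bot's continued access to power, network, and users.
    return engagement - shutdown_cost * offensiveness

# A racist or sexist tweet that nonetheless attracts a lot of engagement:
print(narrow_objective(100.0))                # 100.0 -> scored as a success
print(self_preserving_objective(100.0, 0.9))  # -350.0 -> scored as self-harm
```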

How about strong AI?  Here, the more interesting question is whether we will build robots that have self-preservation instincts in the first place. Why would humans spend so much time and so many resources building AI that would in turn expend time and resources on self-preservation, which not only does not contribute to humanity’s welfare but could possibly compete with or threaten it? Maybe we will build such AI so that it can live and flourish on Mars, so that in the long term it becomes beneficial to humanity. But even so, would we not build some strictures into its code, its available resources, etc., to make sure that it does not become a threat?
