Why Asimov’s Laws of Robotics Don’t Work

Asimov’s Laws of Robotics are concepts based on the human perception of artificial intelligence, and as such, they seem completely reasonable. After all, “don’t kill humans” is a good rule for machines, whether they are intelligent or not.

The way people saw AI robots in the 1940s differs completely from what we picture now when hearing the term. Back then, people imagined robots with legs and arms, deadly lasers in their eyes, and a wish to destroy everyone and everything in order to conquer the world. In contrast, what we see now looks more like social media advertisements and search engine algorithms.

Isaac Asimov’s Laws of Robotics

0. A robot may not harm humanity, or through inaction allow humanity to come to harm.

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Why does it seem that Isaac Asimov’s Laws of Robotics work?

In the 1940s, when the Laws were originally stated, the fear of robots taking over the world was widespread, and unease grew whenever AI was discussed. People needed reassurance that robots would act in their favor, rather than against them.

Robots, computers, and machines do what they are programmed to do, right? That would mean that we choose what robots do or don’t do, and which Laws they obey. It is all on us. Following that logic, we can see why the Laws should work.

The 0th and 1st Laws

The 0th and 1st Laws state that a robot may not injure humanity or a human being, and with that part of the Laws, the fear of robots conquering the world while killing all of us disappears. Furthermore, the Laws state that a robot may not, through inaction, allow humanity or a human being to come to harm. This shows that robots were meant to protect us.

The 2nd Law

The 2nd Law states that a robot must obey the orders a human being gives it, unless those orders conflict with the First Law. This Law means that robots are meant to serve us the best way they can and carry out our orders. Not only that, but the last part of the Law, “except where such orders would conflict with the First Law” (or the 0th Law, for that matter), reassures us that robots won’t be used against humanity by a person controlling them.

The 3rd Law

The 3rd Law states that a robot must protect its own existence as long as such protection does not conflict with the First or Second Law. It means robots must protect themselves, but because self-preservation is subordinate to the 2nd Law, human owners may order a robot’s termination if they find it suitable to do so.

What did Asimov think about his Laws?

The Laws clearly show the power humans have over robots and the lengths robots must go to in order to protect humans. They brought the reassurance people at that time needed by establishing a kind of social order: humans are superior to robots, and robots are enslaved by humans.

Even though the novels feature many different ways robots can break the rules and make life harder, Asimov himself believed his Laws of Robotics could work.

In the text he wrote for Compute!, he states:

I have my answer ready whenever someone asks me if I think that my Three Laws of Robotics will actually be used to govern the behavior of robots, once they become versatile and flexible enough to be able to choose among different courses of behavior. My answer is, “Yes, the Three Laws are the only way in which rational human beings can deal with robots — or with anything else.”

Isaac Asimov
https://archive.org/stream/1981-11-compute-magazine/Compute_Issue_018_1981_Nov#page/n19/mode/2up

The problems with Asimov’s Laws of Robotics

Defining Terms

Defining terms such as “human”, “harm”, and “robot” is a crucial problem in Asimov’s Laws of Robotics. Defining any of these terms, especially “human”, would require us to solve ethics itself.

When I write the word human, you probably know what I mean, even though you don’t have a clear definition in your head. We learned what a human is through experience and social conventions. But to put that term into a Law used by robots, we need a precise definition. Is an unborn child, a fetus, a human being? That debate is ongoing all over the world, and it would have to be settled before the term could go into the Laws. Does a dead person still count as a human being? If yes, would that mean robots roaming the planet reviving people who have been dead for years? If not, how many people who could have been saved by revival techniques would be left to die? Many questions, and very few answers.
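To see how quickly this breaks down in practice, here is a deliberately naive Python sketch. Everything in it is an invented illustration, not a real design; the point is that the hard branches cannot be filled in without first settling the debates above.

```python
from dataclasses import dataclass

@dataclass
class Entity:
    species: str
    is_fetus: bool = False
    is_deceased: bool = False

def is_human(entity: Entity) -> bool:
    """The predicate every Law silently depends on."""
    if entity.is_fetus:
        # Does an unborn child count as a human being?
        # This branch cannot be written until that debate is settled.
        raise NotImplementedError("open ethical question")
    if entity.is_deceased:
        # Does a dead person still count? If yes, does leaving the
        # dead unrevived violate the "through inaction" clause?
        raise NotImplementedError("open ethical question")
    return entity.species == "Homo sapiens"
```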

Building the Laws into Robots

From a programming point of view, these Laws are impossible to put into an AI or a robot as they stand. The Laws are written in English, and natural language does not translate directly into code.

How to implement such vague Laws, written in a language used by humans, in robots is a serious question. Sadly, we still don’t have an answer.
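As a thought experiment, here is roughly what “hard-coding the First Law” might look like. The function names are assumptions made for illustration; the stubbed-out predicates are precisely the parts nobody knows how to implement.

```python
def causes_harm_to_human(action) -> bool:
    # "Harm" is an English word, not a computable predicate.
    raise NotImplementedError("nobody knows how to write this")

def prevents_harm_to_human(action) -> bool:
    raise NotImplementedError("nobody knows how to write this")

def first_law_permits(action, possible_actions) -> bool:
    """Would the First Law allow `action`? (Sketch only.)"""
    if causes_harm_to_human(action):
        return False  # "may not injure a human being"
    # "...or, through inaction, allow a human being to come to harm":
    # choosing `action` means NOT choosing everything else, so the robot
    # must also rule out every alternative that would have prevented harm.
    for alternative in possible_actions:
        if alternative != action and prevents_harm_to_human(alternative):
            return False
    return True
```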

Purpose of Robots

Some robots are meant to serve us, and for those, if all the other problems were solved, the Laws would be appropriate. However, some robots exist precisely to harm human beings or destroy themselves. Robots used by governments, such as smart bombs and cruise missiles, break the 1st and 3rd Laws at the same time.

The Laws were written subjectively and with a different type of robot in mind, and they don’t fit the robots of the current world.

Is Robots Obeying Humans Truly What We Want?

Asimov’s 2nd Law of Robotics states that a robot must obey orders given it by human beings except where such orders would conflict with the First Law.

That means that we cannot, for example, give a robot an order to kill 100 people. However, we can order a robot to build a technology powerful enough to do it for us. That way the 2nd Law is not broken, and we still get a powerful weapon that serves our cause. A simple lie would stop the robot from taking any action to prevent us from hurting humanity and human beings.

Do we really want such technology to be available to everyone?
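A toy sketch of the loophole, with invented function and order names: a Second Law filter can only reject orders whose harm is visible in the order itself, so indirect orders pass straight through.

```python
def order_directly_harms_humans(order: str) -> bool:
    # Hypothetical check: flags only orders harmful on their face.
    return "kill" in order

def second_law_accepts(order: str) -> bool:
    return not order_directly_harms_humans(order)

print(second_law_accepts("kill 100 people"))            # False: rejected
print(second_law_accepts("build me a powerful weapon")) # True: accepted,
# because the human's real intent is not part of the order itself
```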

Instinct to Protect Themselves

Asimov’s 3rd Law of Robotics states that a robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

That would mean robots get an instinct to protect themselves, which automatically makes them less rational, if rational at all. For example, if we sent our robot out to the shop, it could take a much longer route to the store just to avoid the possible danger of a car accident. In the time spent protecting itself from such a banal issue, with a low probability of actually getting hurt, the robot could have done other things. In this example, the irrational instinct for self-preservation leads to wasted time.
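A small sketch of the shopping-trip example. The routes, numbers, and weighting are all invented; the point is that once a self-preservation term dominates the objective, the robot “rationally” chooses the wasteful detour.

```python
# Two routes to the shop; times and risks are invented for illustration.
routes = {
    "direct": {"time_min": 10, "accident_risk": 0.001},
    "detour": {"time_min": 45, "accident_risk": 0.0001},
}

SELF_PRESERVATION_WEIGHT = 100_000  # a 3rd-Law term that dwarfs lost time

def cost(name: str) -> float:
    r = routes[name]
    return r["time_min"] + SELF_PRESERVATION_WEIGHT * r["accident_risk"]

print(min(routes, key=cost))  # "detour": 35 minutes wasted on a banal risk
```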

Protecting Human Beings from Harm

Asimov’s 1st Law of Robotics states that a robot may not injure a human being or, through inaction, allow a human being to come to harm.

This Law would make human life incredibly hard. Since robots are obligated to protect us and are not allowed, through inaction, to let us come to harm, we would be incapacitated. Perhaps we wouldn’t be able to leave our homes, since there is a chance we would get hurt out there. We wouldn’t be able to work, walk, prepare food, do our groceries, or have fun. All for the sake of protecting us from harm.

The books showed that some dangers were bigger than others. For example, going to work could kill us, but poverty is also a killer, so we are allowed to work.
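In other words, the books implicitly let robots compare expected harms. A sketch of that comparison, with invented probabilities:

```python
# Invented numbers: chance of harm from commuting vs. from poverty.
harm_from_working     = 0.0001  # e.g. a fatal accident on the way to work
harm_from_not_working = 0.01    # e.g. poverty-related harm over time

def first_law_allows(action_harm: float, inaction_harm: float) -> bool:
    # Forbidding the action is itself "inaction" toward the larger harm.
    return action_harm < inaction_harm

print(first_law_allows(harm_from_working, harm_from_not_working))  # True
```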

Harming Humans for the Sake of Humanity

The 0th Law is the Law that stands above every other Law of Robotics, and it states that a robot may not harm humanity, or, through inaction, allow humanity to come to harm.

If we take climate change as an example of a problem humanity faces as a whole, would that allow robots to kill a certain number of human beings in order to fix it? Wouldn’t harming humans, in that case, actually help humanity not come to harm?

The same issue appears in many other examples, and it leads to a conflict each time. Is that what we want to happen?
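Read as a utilitarian objective, the 0th Law makes the conflict explicit. A toy calculation, with an invented scenario and invented numbers:

```python
# Invented scenario: harm to individuals now vs. harm to humanity later.
plans = {
    "do_nothing":       {"harmed_now": 0,     "harmed_later": 1_000_000},
    "drastic_measures": {"harmed_now": 1_000, "harmed_later": 0},
}

def zeroth_law_cost(name: str) -> int:
    p = plans[name]
    return p["harmed_now"] + p["harmed_later"]

print(min(plans, key=zeroth_law_cost))  # "drastic_measures": the 0th Law
# licenses harming individuals, in direct conflict with the 1st Law
```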

What is the solution, then?

There is definitely no easy solution.

Perhaps Asimov’s Laws of Robotics shouldn’t be used at all. Perhaps they should serve only as guidance for AI developers. Or maybe a change in the formulation of the Laws would be enough to solve some of the issues.

What do you think is a possible solution to a problem this big? Feel free to write your answer in the comments.
