On Asimov's Three Laws of Robotics


by Robert J. Sawyer

Copyright © by Robert J. Sawyer
All Rights Reserved.

Isaac Asimov's Three Laws of Robotics
  1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.


  2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.

  3. A robot must protect its own existence, except where such protection would conflict with the First or Second Law.

People reading my novel Golden Fleece keep asking me, "What about Isaac Asimov's Three Laws of Robotics? I thought they were guiding modern artificial-intelligence research."

Nope, they're not. First, remember, Asimov's "Laws" are hardly laws in the sense that physical laws are laws; rather, they're cute suggestions that made for some interesting puzzle-oriented stories half a century ago. I honestly don't think they will be applied to future computers or robots. We have lots of computers and robots today and not one of them has even the rudiments of the Three Laws built-in. It's extraordinarily easy for "equipment failure" to result in human death, after all, in direct violation of the First Law.

Asimov's Laws assume that we will create intelligent machines full-blown out of nothing, and thus be able to impose a series of constraints across the board. Well, that's not how it's happening. Instead, we are approaching artificial intelligence by small degrees, and so nobody is really implementing fundamental safeguards.

Take Eliza, the first computer psychiatric program. There is nothing in its logic to make sure that it doesn't harm the user in an Asimovian sense, by, for instance, re-opening old mental wounds with its probing. Now, we can argue that Eliza is way too primitive to do any real harm, but then that means someone has to say arbitrarily, okay, that attempt at AI requires no safeguards but this attempt does. Who would that someone be?
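To see what "nothing in its logic" means, here is a minimal Python sketch of an Eliza-style responder (an illustration in the spirit of the original, not Weizenbaum's actual program). It is nothing but pattern matching and pronoun reflection; there is no model of the user anywhere, so there is no place where an Asimovian "do no harm" check could even attach.

```python
import re

# Minimal Eliza-style responder: keyword patterns plus pronoun reflection.
# Note what is absent: the program has no model of the user's mental state,
# so there is no hook where an Asimovian "do no harm" check could attach.

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
    (re.compile(r".*"), "Please go on."),
]

def reflect(fragment: str) -> str:
    # Swap first- and second-person words so the echo reads naturally.
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.match(utterance)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))

print(respond("I feel guilty about my mother"))
# -> Why do you feel guilty about your mother?
```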

The development of AI is a business, and businesses are notoriously uninterested in fundamental safeguards, especially philosophic ones. (A few quick examples: the tobacco industry, the automotive industry, the nuclear industry. Not one of them said from the outset that fundamental safeguards were necessary; every one of them has resisted externally imposed safeguards; and none has accepted an absolute edict against ever causing harm to humans.)

Indeed, given that a huge amount of AI and robotics research is underwritten by the military, it seems that there will never be a general "law" against ever harming human beings. The whole point of the exercise, at least from the funders' point of view, is to specifically find ways to harm those human beings who happen to be on "the other side."

We already live in a world in which Asimov's Three Laws of Robotics have no validity, a world in which every single computer user is exposed to radiation that is considered at least potentially harmful, a world in which machines replace people in the workplace all the time. (Asimov's First Law would prevent that last one: taking away someone's job absolutely is harm in the Asimovian sense, and therefore a "Three Laws" robot could never do it; real robots, of course, do it all the time.)

So, what does all this mean? Where's it all going? Ah, that I answer at length — in Golden Fleece.


Isaac Asimov's Laws of Robotics Are Wrong

When people talk about robots and ethics, they always seem to bring up Isaac Asimov's "Three Laws of Robotics." But there are three major problems with these laws and their use in our real world.

The Laws

Asimov's laws initially entailed three guidelines for machines:

  • Law One: "A robot may not injure a human being or, through inaction, allow a human being to come to harm."
  • Law Two: "A robot must obey orders given to it by human beings except where such orders would conflict with the First Law."
  • Law Three: "A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law."
  • Asimov later added the "Zeroth Law," which stands above all the others: "A robot may not harm humanity, or, by inaction, allow humanity to come to harm."

The Debunk

The first problem is that the laws are fiction! They are a plot device that Asimov made up to help drive his stories. Even more, his tales almost always revolved around robots that follow these great-sounding, logical ethical codes and still go astray, and the unintended consequences that result. An advertisement for the movie adaptation of Asimov's famous book I, Robot (starring the Fresh Prince and Tom Brady's baby mama) put it best: "Rules were made to be broken."

For example, in one of Asimov's stories, robots are made to follow the laws, but they are given a particular definition of "human." Prefiguring what now goes on in real-world ethnic-cleansing campaigns, the robots recognize only people of a certain group as "human." They follow the laws, but still carry out genocide.

The second problem is that no technology can yet replicate Asimov's laws inside a machine. As Rodney Brooks of the company iRobot (named after the Asimov book; the people who brought you the Packbot military robot and the Roomba robot vacuum cleaner) puts it, "People ask me about whether our robots follow Asimov's laws. There is a simple reason [they don't]: I can't build Asimov's laws in them."

Roboticist Daniel Wilson was a bit more florid. "Asimov's rules are neat, but they are also bullshit. For example, they are in English. How the heck do you program that?"
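Brooks's and Wilson's objection can be made concrete. Here is a naive, hypothetical attempt to encode the First Law in Python (an illustration only, not anything a real robotics stack ships). The control flow is trivial; every hard part of the law hides inside predicates that nobody knows how to implement.

```python
# A naive, hypothetical encoding of Asimov's First Law (illustration only).
# The control flow is trivial; the unsolved AI problems live in the predicates.

def first_law_permits(action, world_state) -> bool:
    """Return True if `action` is permissible under the First Law."""
    for entity in world_state.entities:
        if is_human(entity) and would_harm(action, entity):
            return False  # "may not injure a human being"
        if is_human(entity) and allows_harm_through_inaction(action, entity):
            return False  # "or, through inaction, allow a human being to come to harm"
    return True

def is_human(entity) -> bool:
    # Open problem: robust perception and categorization of "human" --
    # exactly the predicate that was narrowed in the genocide story above.
    raise NotImplementedError

def would_harm(action, entity) -> bool:
    # Open problem: predicting the physical, economic, and psychological
    # consequences of an action. "Harm" is an English word, not a computation.
    raise NotImplementedError

def allows_harm_through_inaction(action, entity) -> bool:
    # Open problem: counterfactual reasoning over everything the robot
    # could have done instead of `action`.
    raise NotImplementedError
```

Each NotImplementedError marks a place where an English word ("human," "harm," "inaction") would have to become a computation. That gap, not any shortage of if-statements, is what Wilson's "How the heck do you program that?" is pointing at.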

The most important reason Asimov's laws are not being applied yet is how robots are actually used in the real world. You don't arm a Reaper drone with a Hellfire missile or put a machine gun on a MAARS (Modular Advanced Armed Robotic System) in order to keep humans from coming to harm. Causing harm is the very point!

The same goes for building a robot that takes orders from any human. Do I really want Osama bin Laden to be able to order my robot about? And finally, the fact that robots can be sent on dangerous missions to be "killed" is often the very rationale for using them. Giving them a sense of "existence" and a survival instinct would undercut that rationale, as well as open up potential scenarios from another science-fiction series, the Terminator movies. The point here is that much of the funding for robotics research comes from the military, which is paying for robots that follow the very opposite of Asimov's laws. It explicitly wants robots that can kill, won't take orders from just any human, and don't care about their own existence.

A Question of Ethics

The bigger issue, though, when it comes to robots and ethics is not whether we can use something like Asimov's laws to make machines that are moral (which may be an inherent contradiction, given that morality wraps together both intent and action, not mere programming).

Rather, we need to start wrestling with the ethics of the people behind the machines. Where is the code of ethics in the robotics field for what gets built and what doesn't? To what would a young roboticist turn? Who gets to use these sophisticated systems, and who doesn't? Is a Predator drone a technology that should be limited to the military? Well, too late: the Department of Homeland Security is already flying six Predator drones on border security. Likewise, many local police departments are exploring the purchase of their own drones to park over high-crime neighborhoods. I may think that makes sense, until the drone is watching my neighborhood. But what about me? Is it within my Second Amendment rights to have a robot that bears arms?

These all sound a bit like the sort of questions that would only be posed at science-fiction conventions. But that is my point. When we talk about robots now, we are no longer talking about "mere science fiction," as one Pentagon analyst described these technologies. They are very much a part of our real world.
