
The Three Laws of Pentagon Robotics

By David Swanson


opednews.com, 5/24/14


The three laws of robotics, according to science fiction author Isaac Asimov, are:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

I would gladly have accepted a $20 million Pentagon contract for the job of pointing out these three laws.

OK, maybe $25 million.

Sadly, the Pentagon has instead hired a bunch of philosophy professors from leading U.S. universities to tell them how to make robots murder people morally and ethically.

Of course, this conflicts with the first law above. A robot designed to kill human beings is designed to violate the first law.

The whole project even more fundamentally violates the second law. The Pentagon is designing robots to obey orders precisely when those orders violate the first law, and to obey them without any exception. That's the advantage of using a robot. The advantage is not in risking the well-being of a robot instead of a soldier. The Pentagon doesn't care about that, except in certain situations in which too many deaths of its own humans create political difficulties. And there are just as many situations in which there are political advantages for the Pentagon in losing its own human lives: "The sacrifice of American lives is a crucial step in the ritual of commitment," wrote William P. Bundy of the CIA, an advisor to Presidents Kennedy and Johnson. A moral being would disobey the orders these robots are being designed to carry out, and -- by being robots -- to carry out without any question of refusal. Only a U.S. philosophy professor could imagine applying a varnish of "morality" to this project.

The Third Law should be a warning to us. Having tossed aside Laws One and Two, what limitations are left to be applied should Law Three be implemented? Assume the Pentagon designs its robots to protect their own existence, except when . . . what? Except when doing so would require disobeying a more important order? But which order is more important? Except when doing so would require killing the wrong kind of person(s)? But which are they? The humans not threatening the robot? That's rather a failure as a limitation.


Let's face it, the Pentagon needs brand new laws of robotics. May I suggest the following:

1. A Pentagon robot must kill and injure human beings as ordered.
2. A Pentagon robot must obey all orders, except where such orders result from human weakness and conflict with the mission to kill and injure.
3. A Pentagon robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

This set of laws differs from Asimov's in a number of ways. For one thing, it completely lacks morality. It is designed for killing, not protecting. By prioritizing killing in the First Law, rather than protecting, this set of laws also allows for the possibility of robots sacrificing themselves to kill rather than to protect -- as well as the possibility of robots turning on their masters.

This set of laws differs much less -- possibly not at all -- from the set of laws currently followed by human members of the U.S. military. The great distinction that people imagine between autonomous and piloted drones vanishes when you learn a little about the thought habits of human drone pilots. They, like other members of the U.S. military, follow these laws:

1. A Pentagon human must kill and injure human beings as ordered.
2. A Pentagon human must obey all orders, except where such orders result from human weakness and conflict with the mission to kill and injure.
3. A Pentagon human must protect its own existence as long as such protection does not conflict with the First or Second Law.


The job of the philosophy professors is to apply these laws to robots without either changing them or letting on that they have figured out what they are. In other words, it's just like teaching a course in the classics to a room full of students. Thank goodness our academia has produced the men and women for this job.

 

http://davidswanson.org

David Swanson is the author of "When the World Outlawed War," "War Is A Lie" and "Daybreak: Undoing the Imperial Presidency and Forming a More Perfect Union." He blogs at http://davidswanson.org and http://warisacrime.org and works for the online (more...)
 



