Reinforcement Learning and Artificial Intelligence (RLAI)
Transcript of Tom Keenan's opening remarks
--Rich Sutton, Feb 1 2005
Tom argued for the 'No' answer to the debate question, "Should artificially intelligent robots have the same rights as people?"  Below is a transcript of his opening remarks, slightly edited by Tom.



This is a really, really old question.  Let me tell you a story.  A man in Bellevue, Washington, finding that his car would not go through some six inches of snow, became enraged and attacked the automobile. He broke out the car’s windows with a tire iron and emptied a revolver into its side. “He killed it,” said police. “It’s a case of autocide.”  That story comes from a 1985 article by Robert Freitas in Student Lawyer, so this is something we have been kicking around for a long time.  (The article is available online at http://www.rfreitas.com/Astro/LegalRightsOfRobots.htm and is well worth reading!)

My first point is that attributing rights, the rights we traditionally give to humans, to machines will make us feel silly.  We will have do-gooders and robots' rights organizations out there organizing on behalf of what we know in our hearts are inanimate objects, objects we have created.  There are some very large practicalities involved here.  So aside from making us feel silly, it would be completely impractical.

If a robot had rights, it would also have responsibilities.  It could be made to testify in court, or it could be held criminally responsible.  Right now robots cannot be held criminally responsible: robots have killed people, and it has always been the operator, the programmer, or some other such person who is held responsible.
Again, quoting from Robert Freitas's excellent article: “The bottom line is it’s hard to apply human laws to robot persons. Let’s say a human shoots a robot, causing it to malfunction, lose power, and “die.” But the robot, once “murdered,” is rebuilt as good as new. If copies of its personality data are in safe storage, then the repaired machine’s mind can be reloaded and up and running in no time – no harm done and possibly even without memory of the incident.”  Was that temporary robo-slaughter?  The very notion that a robot can have a personality and be entitled to rights raises major legal and procedural questions.

I think my biggest objection to this premise is that it diminishes us as humans.  We had this discussion recently in an Environmental Design 604 class with the new Master's students.  If you were the last sentient being on earth, with no dogs, no other sentient beings, and you were loose in the Louvre, would you be justified in taking a hatchet to the Mona Lisa?  I can tell you that the majority of the students in that class said no - it has, on its own, an aesthetic existence.  You might think that argues for robots having an existence.  I argue that there are things that are uniquely human, or at least we want to believe that.  In a sense, to give robots rights cheapens our humanity.  It's a little bit like people who attack, for example, marriage laws, and say that if we gave gay people the right to marry it would diminish marriage.  I don't think it would, but if we gave rabbits and chinchillas the right to get married at City Hall, it certainly would.  There are certain things we regard as characteristic of human beings.

Jonathan Schaeffer, who is a great expert in games, knows that the game of chess did not self-destruct when a computer program beat the grandmaster.  Instead, people happily play chess and continue to be amused by it, knowing full well they are playing against a person who would doubtless lose to a machine.  So it's a qualitatively different and very separate category.  Basically what we have here in this argument is the pathetic fallacy, which is attributing human attributes to something inanimate, and a category error, because these will always be very distinct.  Having said that, I would like to concede some middle ground.  I am quoting from a speech Bruce Sterling gave at the CRA Grand Research Challenges conference in 2002, which raises the possibility that something that should have rights is an amalgam of a human and a computer.  (You can read it at http://www.cra.org/Activities/grand.challenges/sterling.html)

“How many undiscovered judo throws are there, for instance? Imagine a soldier trained in forms of hand-to-hand combat that had been discovered in computer searches of the entire phase space of the physical mechanics of combat. He might perform weird but deadly movements that are utterly counterintuitive. He’d simply stun the opponent through sheer disbelief. When he got wound up, it would look like outtakes from THE MATRIX.”

I think what we want to do is seriously consider attributing rights to those human/computer systems that are in fact truly amalgamated.  To do that ethically, we will have to draw a very clear line between when there is a human involved and when there is not.  The ability to draw that line is going to be the major challenge, not the attributing of rights.  An engineer using a CAD program to design a bridge needs to be responsible not only for his or her own work, but also for the quality of the program (as much as humanly possible).  A robot that drops a brick on my foot is not going to jail in the near future, even if it had a machine vision system and “should have known better.”

However, I do agree with my opponent that robots may indeed rise up against us at some point, so we better stockpile all of our robocidal knowledge and keep it from them!

