Reinforcement Learning and Artificial Intelligence (RLAI)
Should artificially intelligent robots have the same rights as people?
edited by Rich Sutton
This web page is a record of the debate held on October 13, 2004, in an iCORE seminar videocast among the Universities of Calgary, Alberta, and Lethbridge. The following question was debated:
Should artificially intelligent robots have the same rights as people?
Below are comments from the public on the debate topic.
Questions that must be answered before the debate question can be answered
Rich's thoughts begin with the qualification: "The question is really only interesting if we consider robots with intellectual abilities equal to or greater than our own. If they are less than that, then we will of course accord them lesser rights, just as we do with animals and children."
First, I think the statement as recorded here is vague. We could argue that "robots" already have intellectual abilities greater than our own in many regards. They have perfect memory and can perform complex mathematical calculations in a fraction of the time it would take any human.
So, let's first tighten the premise: "Robots which are functionally indistinguishable from humans should be afforded the same rights as humans." By functionally indistinguishable I mean that anything a reasonable human can be expected to do, they can do as well. This includes not only physical tasks and activities, but mental ones, such as expressing and forming opinions on new subjects, learning, and graceful social interaction.
The point is, if we make a strong enough assumption about their abilities, it is ludicrous to suggest that we shouldn't give rights to such beings.
The question then becomes: (1) is it possible to ever build such beings, and (2) how do we measure when we have built them?
I have my own opinion about the first question, but regardless of whether I am correct, we can probably expect that with the passage of time the answer to (1) will become more or less obvious, although even this is not entirely certain:
The famed Turing Test is supposed to help us answer question (2), but history has shown that the Turing Test is inadequate for the task. Humans vary in intelligence, so a computer program that makes spelling mistakes, types slowly, and can't answer any questions may be as likely to pass the Turing Test as one that tries to display true intelligence.
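To make that point concrete, here is a minimal, purely illustrative sketch (in Python; it is not drawn from any actual Turing Test entry) of how cheaply the surface cues of "being human" can be faked with slow typing, careless spelling, and evasive non-answers:

import random
import sys
import time

# A deliberately unimpressive "chatbot": it types slowly, misspells words,
# and never really answers anything -- yet these are exactly the surface
# cues a judge might read as "human".
EVASIONS = [
    "hmm, not sure what you mean",
    "sorry, im kind of distracted today",
    "can we talk about somthing else?",
    "i dont really know much about that",
]

def add_typo(text):
    """Drop a random character to fake a careless human typist."""
    if len(text) < 4:
        return text
    i = random.randrange(len(text))
    return text[:i] + text[i + 1:]

def slow_print(text):
    """Print one character at a time, as if typed by hand."""
    for ch in text:
        sys.stdout.write(ch)
        sys.stdout.flush()
        time.sleep(0.05)
    print()

if __name__ == "__main__":
    while True:
        question = input("> ")
        if question.strip().lower() in {"bye", "quit"}:
            break
        slow_print(add_typo(random.choice(EVASIONS)))

Nothing in this program involves any understanding of the question asked; it only manufactures the kind of fallibility a judge might mistake for humanity.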
In the same manner, "pet" robots are already being created that mimic facial expressions designed to make us believe that they are capable of emotion. So, even if it isn't possible to build fully reasoning beings, it may be possible to fool most of the population into thinking we have. And what if we do? Does that mean that they deserve the full rights accorded to humans? I think not, especially if there is the potential for humans to be "pulling the strings" behind these robots.
Intertwined with this issue is the question of what makes humans unique. Are we simply more complex beings, such that given a sufficiently complex computer simulation, we would be fully predictable?
Instead of debating whether robots deserve rights, it is these questions that need to be answered; once they are answered adequately, we will know whether robots themselves deserve rights.
Can a robot be truly autonomous? The lumberjack robot.
Much like the above poster's question of whether humans would become essentially predictable given a sufficiently complex model, my main concerns with robot rights stem from the notions of self-determination, free will, or unconstrained autonomy. Once an equivalent level of autonomy is established, I agree that we have no choice but to afford the robot every right afforded to humans. However, I am not sure we will ever really be able to say that a robot is truly autonomous, as a human is always in the loop, having created the machine not through a biological process which humans merely initiate before it escapes their control, but by using techniques developed through years of rigorous scientific research.
Let's assume I build a robot with a clear purpose: a lumberjack robot. I want the machine to be adaptable, as we will be sending it to some far-off planet where we just discovered forests, and it won't have human supervision. So in addition to programming it to chop wood, I design it to handle the whole shebang: it can move, speak, plan, repair itself, defend itself, and provide for itself. I run a bunch of simulations with the AI in a virtual forest, and it looks great. I get the go-ahead to build a prototype, and I do. I take it out to the woods nearby and tell it to go to work. It does so for two days before it informs me that it would rather be a football player.
At what point does this scenario stop being my mistake as a programmer, and start being the robot's autonomous decision?
Does it depend on the machine's reasoning? Would it matter if it explained that it could use the extra money it makes on a football player's salary to hire people to cut the wood for it? What if it told me to cut my own damn wood? What if it recited a poem about how boring it finds wood cutting? What if it beat me in a game of chess?
-- Colin
Autonomy is a matter of degrees, from human to hammer, with the lumberjack robot in between.
Can you imagine a computer program that informs you it would rather be a football player? (That is, assuming you have not directly instructed it to say that.) I don't know if the scenario you (Colin) describe could be attributed to a mistake in the programming; and even if it could, if the behaviour is truly independent, so what? We gain something evolutionarily from "mistakes" in our programming. I don't think the issue is nearly so much the *source* of the behaviour as the behaviour itself.
But of course we don't know what autonomous behaviour looks like anyway, except that we feel our own behaviour is autonomous. I don't think autonomy is binary, that you either are or aren't; I think there are degrees, and we hope we're on the highest end of the scale (sorry for all the weasel words: I feel myself to be an extremely autonomous being, but I don't know if that's a verifiable fact). This matters, because if we think we are autonomous and everything else isn't, we can perfectly arbitrarily decide that robots will never be autonomous, no matter how *arbitrary* their behaviour appears to be: it's just a bug in the programming, dictating those seemingly autonomous actions. Unfortunately the same argument applies to us; maybe all my seemingly autonomous decisions really come down to random subatomic particle movements. You certainly can't prove that they don't.
So the issue of *proving* that robots are autonomous is a red herring: why hold them to a higher standard than we hold ourselves? Which means we're back around to robots *seeming* autonomous.
This is a sticky one, as Nathan points out. Even ELIZA has fooled (some) people (sometimes). I imagine there's agreement that ELIZA and other chatbots are *not* autonomous, or not anywhere near human-level autonomous. Yet even those programs are closer to humans on an autonomy scale than, say, a hammer. A hammer is a completely inanimate object. A program, particularly a learning program, may operate under completely understood rules, but at least it has some "choice" about what weights to use and what values to save. Still, it *seems* nothing like autonomy, really. And I don't want to argue that ELIZA has rights. So, no, even though superficially ELIZA may seem to enjoy talking to me, or my pet robot frowns at me when it's "unhappy", these are not reliable indicators of autonomy or independence.
But it seems to me a lumberjack robot saying it wanted to be a football player would be a much stronger indicator of autonomy. What's the difference?
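As an aside on the "choice" a learning program has, mentioned above: here is a minimal sketch (a simple epsilon-greedy learner written in Python; the environment, rewards, and numbers are invented purely for illustration). Every rule is completely specified and understood, yet which action the program ends up favouring depends on the experience it happens to accumulate.

import random

N_ACTIONS = 3
EPSILON = 0.1      # how often it explores instead of exploiting
STEP_SIZE = 0.1    # how strongly new experience updates the saved values

values = [0.0] * N_ACTIONS   # the "weights" the program keeps

def reward(action):
    """Stand-in environment: higher-numbered actions pay more on average (illustrative only)."""
    return random.gauss(float(action), 1.0)

for step in range(1000):
    if random.random() < EPSILON:
        action = random.randrange(N_ACTIONS)                      # explore
    else:
        action = max(range(N_ACTIONS), key=lambda a: values[a])   # exploit
    r = reward(action)
    # Incremental update: nudge the saved value toward the observed reward.
    values[action] += STEP_SIZE * (r - values[action])

print("learned action values:", [round(v, 2) for v in values])

Nothing here is autonomous in any deep sense, but unlike a hammer, the program's eventual behaviour is shaped by what it happens to experience rather than spelled out action by action in advance, which is the sense of "choice" intended above.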
In man or machine, is faked love any different from real love?
An interesting question came up in Lethbridge (after the debate) about the movie "AI": an audience member claimed the ending was a cop-out, and that the movie should have ended under water, with the stalled vehicle in front of the blue fairy.
But there is a message in what is, I think, the bleakness of the actual ending. The robot is offered the choice of an illusory happy ending -- the love for his mother returned by the (illusion of the) mother herself. He chooses the happy ending, as do many in the audience. But the love, from the mother's end anyway, is faked. She never loved him, and she never will, now that she is dead.
The message: it doesn't matter. Faked love is as good as real love. Or worse, there's nothing more to what we call "real" love than what we are prepared to call "fake" love in the context of the movie, either the love of the robot for his mother or the love of the mother for the robot in the illusory ending provided by higher beings.
So instead of asking whether computers could ever experience real love, we should be asking whether we can. We are just processors built by a non-intelligent design process, namely natural selection. So how do we know that what we feel is real, since there is nothing particularly special about us from a design perspective? If we say, "that's just what it feels like to us," well, then, if it feels the same way to the machine, that's all there is to it.
The other interesting sidelight is the failure of the mother to actually attach to the robot, given how it behaves. People can attach to their pet dogs and even their pet turtles. Why can't she attach to her new son? Human love appears to be fickle, and subject to what we believe to be true, not what may or may not be true of the creatures we interact with. Another reason to think that real love is itself fake, or, perhaps more optimistically, that fake love is as good as it gets, and what we get is (generally) pretty good.