Thank you Jonathan. I would also
like to thank Mary Anne Moser and the other organizers, and
iCore for sponsoring this event, which I hope will prove interesting and
enjoyable. The question we are debating this afternoon may seem
premature, a subject really for the future, but personally I think it
is not at all too early to begin thinking about it.
The question we consider today is "Should artificially intelligent
robots have the same rights as people?" Let's begin by defining
our terms.
What do we mean by "artificially intelligent robots"? The
question is really only interesting if we consider robots with
intellectual abilities equal to or greater than our own. If they are
less than that, then we will of course accord them lesser rights, just
as we do with animals and children.
What do we mean by "the same rights as people"? Well, we're not
talking about the right to a job or to free health care, but only
about the most basic rights of personhood. Just to make this
clear, we don't grant all persons the right to enter Canada and work
here and enjoy all of our social benefits. That's not the issue,
the issue is whether they will be granted the basic rights of
personhood. Those I would summarize by the phrase "life, liberty, and
the pursuit of happiness". The right not to be killed. The
right not to be forced to do things you don't want to do.
Generally, the right to choose your own way in the world and pursue
what pleases you, as long as it does not infringe on the rights of
others.
In these terms, I think our question, essentially, is whether
intelligent robots should be treated as persons, or as slaves. If
you don't have the right to defend your life, or to do as you wish, to
make your way in the world and pursue happiness, then you are a
slave. If you can only do what others tell you to do and you
don't have your own choices, then that is what we mean by a
slave. So we are basically asking: should there be
slaves? And this brings up all the historical examples of where
people have enslaved each other, and all the misery, and violence and
injustice it has bred. The human race has a long history of
subjugating and enslaving peoples different from itself, of
creating great, long-lasting misery before being gradually forced to
acknowledge the rights of the subjugated. I think we are in danger
of repeating this pattern again with intelligent robots.
In short, I am going to argue the position that to not grant rights to
beings that are just as intelligent as we are is not only impractical
and unsustainable, but also deeply immoral.
To many of you, no doubt, this position seems extreme. But let's
consider some of the historical examples. Granting rights to
black slaves, for example, was at one time considered quite
extraordinary and extreme in the United States, even
inconceivable. Blacks, American Indians, Huns, pygmies,
aboriginal peoples everywhere: in all these cases the dominant society
was firmly, with moral certitude, convinced of the rightness of their
domination, and of the heresy of suggesting otherwise. More
recently, even full rights for women were considered an extreme position
- they still are in many parts of the world. Not far from where I
live is a park, Emily Murphy Park. If you go there you will find
a statue of Emily Murphy where it is noted that she was the first
person to argue that women are persons, with all the legal rights of
persons. Her case was won in the Supreme Court of Alberta in
1917. Two hundred years ago no woman had the right to vote, and to
propose it would have been considered extreme. Sadly, in many
parts of the world this is still the case. Throughout history,
the case for the rights of subjugated or foreign people was always
considered extreme, just as it is for intelligent robots now.
Now consider animals. Animals are essentially without the rights
of life, liberty, and pursuit of happiness. In effect, animals
are our slaves. Although we may hesitate to call our pets slaves, they
share the basic properties. We could kill our pets, at our
discretion, with no legal repercussions. For example, a dog that
became a problem biting people might be killed. Pigs can be slaughtered
and eaten. A cat may be kept indoors, effectively imprisoned,
when it might prefer to go out. A person may love their pet and yet
treat it as a slave. This is similar to slave owners who loved
their slaves and treated them well. Many people believe certain
animals, such as dolphins and apes, should have rights because of
their intellectual advancement. If a new kind of ape or dolphin were
discovered with language and intellectual feats equal to ours, some
would clamor for their rights: to acknowledge their personhood, and
not to restrict their movement at our whim or make their needs
subservient to ours.
What about intelligent space aliens? Should we feel free to kill
them or lock them up – or should we acknowledge that they have a claim
to personhood? Should they be our slaves? What is the
more practical approach? What if they meet or exceed our
abilities? Would we feel they should not have rights? Would they
need to give us rights?
How do we decide who should have rights, and who should not? Why
did we give people rights - blacks, women, and so on, but not
animals? If we look plainly at the record, it seems that we grant
people personhood when they have the same abilities that we have:
to think, fight, feel, create, write, love, hate, feel pain, and have
the other feelings that people do. Personhood comes with
ability. Women are not as physically powerful as men, but it was
because of their intellectual equality, and their strengths in other
ways, that their rights and personhood were recognized. Intelligent robots,
of course, meet this criterion as we have defined the term.
Ultimately, rights are not given or granted, but asserted and
acknowledged. People assert their rights, insist, and others come
to recognize and acknowledge them. This has happened through
revolt and rebellion but also through non-violent protests and
strikes. In the end, rights are acknowledged because it is the only
practical course, because everyone is better off without the conflict.
Time and again it has proven impractical and counterproductive
to deny rights to various classes of people. Should not the same
thing happen with robots? We may all be better off if robots'
rights were recognized. There is an inherent danger in keeping
intelligent beings subjugated. These beings will struggle to
escape, leading to strife, conflict, and violence. None of these
contributes to a successful society. A society cannot thrive on
subjugation and dominance, violence and conflict; they lead to a
weaker economy and a lower GNP. And in the end, artificially
intelligent robots that are as smart or smarter than we are will
eventually get their rights. We cannot stop them
permanently. There is a trigger effect here. If they escape our
control just once, we will be in trouble, in a struggle. We may lose
that struggle.
If we try to contain and subjugate artificially intelligent robots,
then when they do escape we should not be surprised if they
turn the tables
and try to dominate us. This outcome is possible whenever we try to
dominate another group of beings and the only way they can escape is to
destroy us.
Should we destroy the robots in advance – prevent them from catching
up? This idea is appealing, but it is indefensible on both practical
and moral grounds. From the practical point of view, the march of
technology cannot be halted. Each step of improved technology, more
capable robots, will bring real economic advantages. People's lives will
be improved, and in some cases saved. Technology will
be pursued, and no agreement of nations or between nations can
effectively prevent it. If Canada forbids research on artificial
intelligence, then it will be done in the US. If North America
bans it, if most of the world bans it, it will still happen.
There will always be some people, at least one or two, who believe
artificially intelligent robots should be developed, and they will do
it. We could try to kill all the robots... and kill everybody who
supports or harbors robots... this is called the "George Bush
strategy". And in the end it will fail, and the result will not
be pretty or desirable, for roughly the same reasons in both
cases. It is simply not possible to halt the march of technology
and prevent the development of artificially intelligent robots.
But would the rise of robots really be such a bad thing? Might it
even be a good thing? Perhaps we should think of the robots we
create the way we think of our children: as offspring. We
want our offspring to do well, to become more powerful than we
are. Our children are meant to supplant us: we take care of
them and hope they become independent and powerful (and then take care
of their parents). Maybe it could be the same for our artificial
progeny.
Rich also recommends this video by Herb Simon from about 2000, which
contains some of the best thinking about the implications of the
arrival of AI. Herb starts at about 5:21 into the video.