Text of Michael Stingl's debating notes
--Rich Sutton, October 24, 2004
These are the notes that Michael Stingl spoke from in the debate. 



Could Computers Ever Be Worthy of Moral Respect?

To give some content to the “it depends” position, I want to make some conceptual clarifications that I think will prove useful as the discussion unfolds.


First, we need to distinguish between moral agents and creatures or things worthy of moral concern or respect.


Some people claim there is complete overlap, but this is unlikely.  (Moral agents are all moral patients, but not all moral patients need be moral agents.)


Moral agents are creatures or things capable of moral responsibility: creatures or things it makes sense for us to hold responsible for having done right or wrong.

Hurricanes do bad things — but we can’t hold them responsible for having done wrong.


For moral responsibility — or moral agency —

Agent must have known that what it did was wrong (“knowledge condition”)

Agent must have been capable of choosing otherwise (“free will condition”)


Here, I just want to say that it is difficult to know when a creature knows right from wrong: dogs, for example, seem to fail in this regard.


A deeper problem, and again think of dogs: it is hard to decide whether a dog exhibits free will, or free choice, in doing what it does.

If free will requires acting based on reasons, it’s not clear that dogs can do this.

A desire to eat may motivate a dog to run to its bowl, and hunger may be the cause of the desire.

But is hunger the dog’s reason for running to its bowl, as opposed, say, to continuing to chase its ball?

Reasons can be weighed against other reasons, and a considered judgment can be made of what to do.

It is not clear that dogs are capable of having reasons, never mind assessing the relative weight of reasons.


So — first point — reasonableness seems to be required for moral agency — and reasonableness seems to require (1) knowledge, (2) free will, and (3) the capacity for balancing reasons against one another.


Second point — is reasonableness required to be a moral patient — something worthy of moral concern or respect?


On one popular view, reasonableness is required for moral respect.


If a creature is reasonable, it has self-chosen ends toward which it acts — ends that it has autonomously chosen for itself (unlike the dog and its desire to eat).

It is Kant’s view that none of us — as autonomous choosers of our own ends — should interfere with the autonomous choices of others.

Golden rule: if you don’t want to be blocked from the autonomous pursuit of your ends, you should not block other autonomous creatures from pursuing their ends.

The underlying idea is fairness (what’s fair for me is fair for you) and the deeper idea that it matters to an autonomous creature when its autonomous choices are blocked.

This idea, of “mattering to,” is key to the question of who or what demands moral respect — if what happens to you doesn’t matter to you, you are not worthy of moral respect.


Consider a lawn mower left out in the rain.  Rust is bad for the lawn mower.  But it is not morally bad, since it doesn’t matter to the lawn mower that it is rusty.


But there are different ways of understanding “mattering to.”

One way to understand it is in terms of the capacity to autonomously choose one’s own ends.  (What we’ve just done.)


But another question we might raise about a creature or a thing: can it suffer?

Babies can’t reason, and neither can dogs or horses.  But we can ask: do they suffer?


To experience suffering ourselves is to know that it is inherently bad.  What makes my suffering bad is not that it is mine, but that it is suffering.

If my suffering is bad because it is suffering, then fairness demands that I regard your suffering as bad, again because it is suffering, where suffering is bad simply because of the kind of thing that it is.


So a second way to understand the golden rule:

If you don’t want suffering visited upon you, you should not visit it upon others  — those others who are capable of suffering.


So what does it require, to be capable of suffering?

Again, my lawn mower fails the test — it gets rusty in the rain, but it doesn’t suffer.


What makes us think that animals, or babies for that matter, suffer?

Behaviour like ours

A central nervous system like ours — pain receptors at one end of the nervous system — pain processors at the other end.


How would we know that a mechanical pain processor was creating the experience, or psychological feeling, of pain?

If we could create pain processors that enable computers to actually experience, or feel, pain, would it be moral to do so?


The upshot:

Computers would be worthy of moral respect if either:

(1) They could autonomously choose their own ends;

OR

(2) They could feel pain.

