Testing Our Assumptions about AI: Robots and Humans in the Future: Asimov’s and Joy’s perspectives – March 31, 2003 Lecture

Overview of lecture:


1. What are Asimov’s assumptions about robots?
2. Asimov’s focus on ethics. Is this the answer?
3. What are Joy’s assumptions about KMD and GNR?


**********


1. Throughout his many stories on the evolution of robots, Asimov makes certain assumptions:

1.1. He has a built-in bias toward American technological superiority: for example, that the U.S. will lead the robotics industry.

1.2. Humans will continue to fear robots; therefore, we need to develop “ethical” robots.

(see below for a discussion--contrast this with Moravec's view (and Kurzweil's in the Joy article) that we will come to recognize an inherent, positive attraction in becoming robot-like)

1.3. Computers will get us out of our problems.

(see the quotation from the story "That Thou Art Mindful of Him": "machines [computers] steered mankind through the rapids and shoals of history.")

1.4. Like traditional AI theorists, Asimov believes that robots will develop models of the world rich enough to fully understand our human world.

(in the story, Henry scans movies to figure out how our world works.)
_________

Re: Assumption 1.2 - some reasons to account for humans' fear of "robots":

1. general religious fear: only God can create life

2. the history of the way artificial man was/is portrayed through art/literature:

+ the earlier, pre-19th-c. use of the "artificial man" in literature inspired fear because of people's superstitions.

+ in the 19th century, a general fear that machines could be unpredictable and dangerous feeds into the powerful myth that "man's creations kill him" (the Frankenstein plot)

+ in the 20th century, robots develop into negative symbols of the machine age that man is unable to control.

+ specific negative use of the term "robot" as envisioned by Karel Čapek
(from "robota," Czech for forced labour, in the Czech dramatist's 1921 play, R.U.R.)

plot: a robot is an artificially manufactured person who has mechanical efficiency but is without a soul. The inventor's idea was to have these mechanical beings free humanity from having to work. Others take the robots and use them in war to kill humans; then another person endows the robots with feelings. The robots decide they are superior to human beings and kill everyone on earth.

3. extension of technophobia -- people's fear of anything new


> specific fear that technological change (including robots) will produce undesirable alterations in human society


> human fear of "the other" - seeing others as alien and thus in need of control.


For example, see the plot of the movie Blade Runner (1982 - based on a novel by Philip K. Dick): one view of what will happen when there is an erasure of the normal boundary between humans and machines.

Plot: involves machines that look so human they can no longer morally be treated as anything but people; but they're called replicants and are condemned to spend their time as slave labour on other planets. The replicants are persecuted because, while they were created by humans, they aren't human.

4. Economic reasons: fear of loss of employment - see earlier lectures on “computers and work”

+ in the 19th century, a group of labourers (called Luddites) smashed machines, since they saw machines as leading to unemployment
(reference in the Joy article to the "New Luddite Challenge" as articulated by the Unabomber; here the problem includes but goes beyond unemployment)


2. Re: assumption 1.3: how will computers get us out of our problems and ensure the best future?

Answer: program them to have "ethics"

3 Laws of Robotics--build in certain safeguards to protect humans from their machines (a toy code sketch of their priority ordering follows the list):

1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
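The three laws form a strict priority ordering: a lower law yields whenever it conflicts with a higher one. Here is a minimal toy sketch of that ordering in Python (my own illustration, not anything Asimov specifies); every predicate on Action is a hypothetical placeholder, since deciding whether an action "harms a human" is exactly the hard judgement Asimov's stories probe.

```python
# Toy sketch of the Three Laws as a strict priority ordering.
# All fields of Action are hypothetical placeholders: a real robot would
# need to *judge* these facts, which is the hard (and story-driving) part.

from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool           # would performing this injure a human?
    inaction_harms_human: bool  # would NOT acting let a human come to harm?
    ordered_by_human: bool      # did a human order this action?
    destroys_self: bool         # would this destroy the robot?

def may_perform(a: Action) -> bool:
    # First Law (highest priority): never injure a human being...
    if a.harms_human:
        return False
    # ...or, through inaction, allow a human to come to harm: if refusing
    # to act would harm a human, the robot must act, whatever the cost below.
    if a.inaction_harms_human:
        return True
    # Second Law: obey human orders (a First Law conflict was caught above).
    if a.ordered_by_human:
        return True
    # Third Law (lowest priority): otherwise, protect own existence.
    return not a.destroys_self

# Example: a human orders the robot to destroy itself. No human is harmed,
# so the Second Law (obedience) overrides the Third (self-preservation).
print(may_perform(Action(harms_human=False, inaction_harms_human=False,
                         ordered_by_human=True, destroys_self=True)))  # True
```

Note that the ordering alone already yields a counterintuitive result: a robot must obey an order to destroy itself. The ambiguity George Ten exploits (below) lies in deciding who counts as a "human being" whose orders and welfare the laws protect.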

--> In one of his stories, Asimov has the creator of the robots say, in essence, that the 3 Laws of Robotics succeed where the Ten Commandments have failed.

Stanislaw Lem's objections to Asimov's rules:

--> he has just inverted the old paradigm: whereas the homunculi used to be the villains, Asimov makes the robot the "positive" hero, doomed to eternal goodness by engineers.

(-->reversal from “Frankenstein figure” to “Saviour figure”)

Asimov's justification for "programming" ethics into robots: humans have never been free anyway; for the group to survive, we have had to accept some limitations on our actions. Now, for humans to survive, we have to put our faith in "ethical" robots.

plot of "That Thou Art Mindful of Him": the robot George Ten is given "judgement" and ultimately decides that robots are more fit than humans and so should be obeyed over humans (see Law #2).

Is Asimov right? Assuming that it’s possible, should robots be programmed with ethics?

 

3. What are Joy's assumptions? See Bill Joy, "Why the Future Doesn't Need Us," Wired 8.04: www.wired.com/wired/archive/8.04/joy_pr.html

1. Rejects the "dreams of robotics": 1) intelligent machines will do our work, AND 2) we will gradually replace ourselves with our robotic technology, achieving immortality (p. 5).

2. Believes that the Unabomber’s dystopic scenario looks very probable….

Why? Because in the fields of genetics, nanotechnology, and robotics, we face a new problem: self-replication ("a bomb is blown up only once—but one bot can become many, and quickly get out of control"; see the toy calculation after this list), AND

Technology is not in the hands of countries, but of corporations and individuals… AND will not require "large facilities or rare raw materials. Knowledge alone will enable the use of them." (p. 3)
> see his example of the “gray goo problem” (p. 6)
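To make "one bot can become many" concrete, here is a back-of-the-envelope sketch (my own toy calculation, not a figure from Joy's article): if each replicator copies itself once per generation, one bot becomes 2^n bots after n generations.

```python
# Toy calculation (not from Joy's article): a self-replicator that copies
# itself once per "generation" grows exponentially. A bomb's damage is fixed
# at detonation; a replicator's population doubles every generation.

def population(generations: int, start: int = 1) -> int:
    """Number of replicators after the given number of doubling generations."""
    return start * 2 ** generations

for n in (10, 30, 60):
    print(f"after {n} generations: {population(n):,}")
# after 10 generations: 1,024
# after 30 generations: 1,073,741,824
# after 60 generations: 1,152,921,504,606,846,976
```

This is what distinguishes GNR from earlier weapons technology in Joy's argument: the damage is not bounded by the initial event.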

Conclusion: “We are being propelled into this new century with no plan, no control, no brakes.” (p. 9)
