NATS 1700 6.0 COMPUTERS, INFORMATION AND SOCIETY

Lecture 12: Artificial Intelligence: Strong and Weak



 

Introduction

  • In this and the next couple of lectures we explore in greater detail various goals, problems and results encountered in a few representative areas of artificial intelligence. Read the clear introduction by Ashok Goel of the Georgia Institute of Technology. Other excellent resources are John McCarthy's What is Artificial Intelligence? and ThinkQuest's The History of Artificial Intelligence.
  •  

    COG Looking at a Cube

  • We begin with a discussion of an argument against Turing's test developed by John Searle, an American philosopher. The Chinese Room, as this argument is called, claims to refute not only Turing's idea, but Strong AI in general. John Nugent offers a good overview, while Steven Harnad presents a fairly comprehensive summary of the issue in Minds, Machines and Searle. A more difficult, but more complete article appears in the Internet Encyclopedia of Philosophy, where you will also find a good bibliography.
  • It is interesting to note that, in the Monadology, Leibniz "had asked the reader to imagine what would happen if you magnified the insides of the head more and more until it was so large you could walk right through it like a mill (in the sense of a place for grinding flour). The 17th paragraph of the Monadology begins: ' Moreover, it must be confessed that perception and that which depends upon it are inexplicable on mechanical grounds, that is to say, by means of figures and motions. And supposing there were a machine, so constructed as to think, feel, and have perception it might be conceived as increased in size, while keeping the same proportions, so that one might go into it as into a mill. That being so, we should, on examining its interior, find only parts which work one upon another, and never anything by which to explain a perception.'  We may see that Leibniz anticipated John Searle's Chinese room paradox by many centuries." This quote is from Michael Arbib's article Warren McCulloch's Search for the Logic of the Nervous System, which appeared in Perspectives in Biology and Medicine, 43.2, Winter 2000. The article is a very good intellectual biography of the neurologist Warren McCulloch (1898-1968)--one of the pivotal figures in the development of artificial intelligence, and neural nets in particular.
  • A few interesting websites may offer you some relief from philosophical discussions and provide tangible examples of the progress made toward the construction of robots which exhibit some degree of human intelligence. One is the Humanoid Robotics Group at MIT. You may also read some of the papers by Rodney Brooks, director of the Artificial Intelligence Laboratory at MIT (the first image below is from this laboratory).
  • When we discuss organic intelligence and artificial intelligence, it is important to keep in mind the possibility of a synthesis of the two. To see what I mean by this statement, see for example Brain Cells Fused with Computer Chip. This is a rapidly growing area of research. See for instance Electronic Chip, Interacting With The Brain, Modifies Pathways For Controlling Movement.

 
Topics

  • In Minds, Brains and Science (Cambridge: Harvard University Press, 1984), John Searle defines Strong AI as the belief that the human mind is nothing but a computer program, that the particular structure of the brain is unimportant, and that the Turing Test is the essential tool for demonstrating the validity of this belief. Searle defines Weak AI as the simulation or modeling of human cognition by means of computers.

    "Simulation and modeling are not necessarily the same.

    • Simulated performance does not require that the same methods are being used by machine and human. (For example, many game programs such as Deep Blue use a method called "alpha-beta look ahead search with minimax pruning" to calculate the move most likely to produce a win. It seems unlikely that this is what humans are doing.)
    • Modelers, on the other hand, explicitly try to reproduce the mechanisms that are operative in human cognition. There are many theories about the correct level of abstraction for representing these mechanisms. Two different views are represented by the "symbol-processing AI" approach and the artificial neural network approach.
    • Neither simulation nor modeling implies that the programmed computer comes to possess mental properties."
      [ from Colin Allen's course on Philosophy of Mind at Texas A&M University ]
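    The game-tree search mentioned in the quote is more standardly called minimax search with alpha-beta pruning: the program computes the move that maximizes its payoff assuming the opponent minimizes it, and skips branches that cannot affect the result. Here is a minimal sketch on a toy game tree (the tree and its payoff values are invented for illustration):

```python
# Minimax search with alpha-beta pruning, sketched on a toy game tree.
# Leaves are payoffs for the maximizing player; internal nodes are
# lists of child subtrees.

def alphabeta(node, alpha=float("-inf"), beta=float("inf"), maximizing=True):
    if not isinstance(node, list):          # leaf: return its static value
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:               # prune: opponent would never allow this branch
                break
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:
                break
        return value

# A depth-2 tree: the maximizer picks the branch in which the
# minimizing opponent leaves it the best outcome.
tree = [[3, 5], [2, 9], [0, 1]]
print(alphabeta(tree))   # -> 3
```

    Nothing in this procedure resembles how a human chooses a chess move, which is exactly the quote's point: identical outward performance, entirely different method.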
  • Before examining Searle's refutation of the premises of Strong AI, it is helpful to review once again the questions and answers relating to the basic question: Can Machines Think?
  • Since most of AI clearly involves learning, it is also advisable to review the main ideas about it: What is learning? What does it mean to learn?
  • So, what is Searle's Chinese Room Argument about? Here is what Searle writes in Minds, Brains and Science (op. cit., p. 32):

    "Imagine that a bunch of computer programmers have written a program that will enable a computer to simulate the understanding of Chinese...Imagine that you are locked in a room, and in this room are several baskets full of Chinese symbols. Imagine that you...do not understand a word of Chinese, but that you are given a rule book in English for manipulating these Chinese symbols. The rules specify the manipulations of the symbols purely formally, in terms of their syntax, not their semantics...Now suppose that some other Chinese symbols are passed into the room, and that you are given further rules for passing back Chinese symbols out of the room. Suppose that unknown to you the symbols passed into the room are called 'questions' by the people outside of the room, and the symbols you pass back out of the room are called 'answers to the questions.' Suppose, furthermore, that the programmers are so good at designing the programs and that you are so good at manipulating the symbols, that very soon your answers are indistinguishable from those of a native Chinese speaker...Now the point of the story is simply this: by virtue of implementing a formal computer program from the point of view of an outside observer, you behave exactly as if you understood Chinese, but all the same you don't understand a word of Chinese. But if going through the appropriate computer program for understanding Chinese is not enough to give you an understanding of Chinese, then it is not enough to give any other digital computer an understanding of Chinese."

  • You may also want to read Searle's article "Is the Brain's Mind a Computer Program?" in Scientific American (262: 26-31, 1990) in the library.
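    Searle's scenario can be made concrete as pure table lookup: a program that maps input symbols to output symbols with no representation of what either means. A toy sketch (the rulebook entries and their English glosses are invented for illustration):

```python
# A purely formal "Chinese Room": replies are produced by syntactic
# rule-lookup alone. The program never represents what any symbol
# means -- exactly the kind of manipulation Searle's room occupant
# performs. (Toy rulebook invented for illustration.)

RULEBOOK = {
    "你好吗?": "我很好, 谢谢.",      # gloss: "How are you?" -> "I am fine, thanks."
    "你会说中文吗?": "会, 当然.",    # gloss: "Do you speak Chinese?" -> "Yes, of course."
}

def room(symbols: str) -> str:
    # The occupant just matches shapes against the rulebook;
    # unmatched input gets a stock reply ("Please say that again.").
    return RULEBOOK.get(symbols, "请再说一遍.")

print(room("你好吗?"))
```

    To an outside observer the room "answers in Chinese"; inside, there is only shape-matching. Whether scaling this up could ever amount to understanding is precisely what the argument disputes.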

          

    Simulating High-Level Robot Behaviors

  • Searle thus argues that the Turing test is not sufficient to determine whether a computer really exhibits human intelligence. In other words, intelligent behavior is not proof of (human) intelligence. Moreover, if Strong AI is indeed founded on the assumption that the "human mind is nothing but a computer program", then Strong AI is hopelessly mistaken. The Chinese Room is simply an instance of 'simulation' (Weak AI). According to Searle, what truly characterizes the human mind is 'intentionality':  "a form of energy whereby the cause, either in the form of desires or intentions, represents the very state of affairs that it causes." (op. cit., p. 66)
  • The Chinese Room Argument has generated a large number of reactions, as you can see in the references suggested in the introduction to this lecture.
  • A completely different approach to artificial intelligence is that of Marvin Minsky, at MIT Media Lab and MIT AI Lab. His idea of human intellectual structure and function is presented in The Society of Mind (New York: Simon and Schuster, 1986), also available, in an enhanced edition, as a CD-ROM. You can get a good summary of his views on artificial intelligence (Strong AI) in his article Why People Think Computers Can't.
  • Minsky attacks the problem of machine intelligence not by critiquing what may go on in the machine, but by questioning what we believe is going on in our mind. "We naturally admire our Einsteins and Beethovens, and wonder if computers ever could create such wondrous theories or symphonies. Most people think that creativity requires some special, magical 'gift' that simply cannot be explained. If so, then no computer could create--since anything machines can do (most people think) can be explained...We shouldn't intimidate ourselves by our admiration of our Beethovens and Einsteins. Instead, we ought to be annoyed by our ignorance of how we get ideas--and not just our "creative" ones." In other words, why are we so convinced that our mental activities are so special? This is an assumption that should be questioned.  "Do outstanding minds differ from ordinary minds in any special way? I don't believe that there is anything basically different in a genius, except for having an unusual combination of abilities, none very special by itself."
  • Minsky thus goes back to an old question: what do we mean when we say that we think? "The secret of what something means lies in the ways that it connects to all the other things we know." Instead of conceiving of our brain as a monolithic organ with magical properties, it may be more accurate to look at it as a set of distinct but connected subsystems, each one with its own limited, and perhaps specialized, abilities. Individual termites are not particularly smart, but a termite colony appears to be. It keeps its 'castle' air conditioned even on the harshest summer day in Africa, it cares for its newborn in well-organized nurseries, it cultivates mushroom gardens, it's always 'aware' of the state of the entire colony, and so on. Why not approach the problem of the organization of our brain along termite lines?

    Misjudgment in a Bird Flock

  • "That's why I think we shouldn't program our machines that way, with clear and simple logic definitions. A machine programmed that way might never 'really' understand anything--any more than a person would. Rich, multiply-connected networks provide enough different ways to use knowledge that when one way doesn't work, you can try to figure out why. When there are many meanings in a network, you can turn things around in your mind and look at them from different perspectives; when you get stuck, you can try another view. That's what we mean by thinking!"
  • Indeed nature shows that organized collective behavior of relatively simple units is a frequent phenomenon. Consider, for example, a flock of birds. "Physicists have been studying the remarkable process of how a flock of birds moves flawlessly as an organized group, even if the individual birds make frequent misjudgments. If a bird in a flock makes an error in the direction it should travel, it will tend to swerve side-to-side rapidly. You might think this error in judgment would overwhelm the other birds, causing the flock to become disoriented and fly apart very quickly. But this process actually helps keep these misjudgments under control, by quickly spreading the error among many birds so that it becomes very diluted." [ from Misjudgment in a Bird Flock ]
    Very interesting work has been done by Laurent Keller and others, who have taught robots some of the behavioural rules used by ants, and found that they can form cooperating, autonomous groups that are more successful than individual robots. See for example Machines that Think.
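    The error-dilution mechanism described above can be illustrated with a toy averaging model (this is only an illustration, not the physicists' actual flocking model): each bird nudges its heading toward the flock average, so one bird's large misjudgment is spread thin across many birds instead of steering the flock off course.

```python
# Toy illustration of error dilution in a flock: each bird's heading
# moves partway toward the flock's average heading at every step, so
# a single bird's misjudgment is shared out and shrinks rapidly.

def step(headings, coupling=0.5):
    avg = sum(headings) / len(headings)
    return [h + coupling * (avg - h) for h in headings]

flock = [0.0] * 20
flock[0] = 10.0          # one bird makes a large error in direction

for _ in range(10):
    flock = step(flock)

# The disagreement within the flock has almost vanished, while the
# flock's average heading is unchanged: the error was diluted, not
# amplified.
print(max(flock) - min(flock))
```

    The averaging update preserves the flock's mean heading while shrinking every individual deviation by a constant factor per step, which is the sense in which spreading an error "keeps misjudgments under control".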
  • For a non-technical introduction to similar problems and the computer simulations created to understand them, see John L. Casti, Would-Be Worlds: How Simulation is Changing the Frontiers of Science, John Wiley & Sons, 1997. Another short but great source is Mitchel Resnick, Turtles, Termites, and Traffic Jams: Explorations in Massively Parallel Microworlds, The MIT Press, 1997.

 
Questions and Exercises

  • In your own words, summarize the Strong AI and Weak AI positions.
  • Do you agree or disagree with Searle's Chinese Room Argument? Why?
  • Read The Singularity: A Special Report, a rather comprehensive coverage of the possible evolution of computers and their impact on human beings.

 


Picture Credits: The COG Shop at MIT; James Kuffner, Dept. of Mechano-informatics, U of Tokyo; AIP
Last Modification Date: 07 July 2008