
Testing Our Assumptions about AI: What does it mean to “understand” Natural language? March 10, 2003

Overview of Lecture:


1. Schank’s “understanding spectrum” as it relates to humans and computers

2. Understanding language fully requires 3 levels: syntax, semantics and pragmatics

2.1. computers and syntax and semantics?
2.2. how to teach a computer “common knowledge” / common sense? (included in pragmatics)

example of frames (Minsky)
example of CYC


**********


1. Schank’s “understanding spectrum” as it relates to humans and computers

1.1. How do we “understand”?

(according to R. Schank, The Cognitive Computer, we can “understand” anywhere along the continuum below, depending on circumstances):

 

Human Understanding Spectrum

Making Sense ...................................... Complete Empathy

- At the “Making Sense” end: highly literal; relies on structure alone.
- At the “Complete Empathy” end: restricted to people with similar beliefs, goals and experiences.

1.2. How far will a computer “understand”?


Computer Understanding Spectrum

Making Sense ...................................... Cognitive Understanding

At the “Cognitive Understanding” end, a program:

- will learn
- will relate present experience to past experience
- will explain itself

A computer program will NOT have “complete empathy” like we are capable of.

2. To understand language, we operate on 3 levels; computers will have to have these same abilities:

SYNTAX = able to parse a sentence (we figure out (normally without thinking about it consciously) the grammatical relationships between words—nouns, verbs, etc.)
SEMANTICS: know the meaning of words

--> frequently a word has multiple meanings, which we sometimes sort out according to syntax:

"Visiting relatives can be boring"
"visiting" = verb form or adjective?

PRAGMATICS: ability to interpret sentences in context.

-->to understand on a pragmatic level we have to share background knowledge and share cultural knowledge and assumptions (values and beliefs)

We have to have a "model" of the speaker (see previous definition of “intelligence”)

2.1. What can computer programs do?

Syntactic parsing (a program that analyzes sentence structure during language comprehension is called a parser):

Here are some rules:

S --> NP VP
(a sentence is made up of at least a noun phrase + a verb phrase)

NP --> (det) N (PP)
(a noun phrase is made up of possibly a determiner + a noun + possibly a prepositional phrase)

VP --> V (NP) (PP)
(a verb phrase is made up of a verb + possibly a noun phrase + possibly a prepositional phrase)

PP --> P NP
(a prepositional phrase is made up of a preposition + noun phrase)

--> As well, words are "tagged" in dictionaries with their part(s) of speech; for example

N --> boy, girl, dog, cat, ice cream....

V --> eats, likes, bites....

P --> with, in, near....

det --> a, the, one....

Here’s an example of the parsed sentence: “The dog likes ice cream”
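The four rules and the toy lexicon above can be turned into a tiny parser. Here is a minimal sketch in Python (function names and the token "ice_cream" are my own illustration, not from the lecture); it tries every way to apply the rules and returns the complete parse trees for the sentence:

```python
# Toy lexicon from the notes ("ice cream" is joined into one token here
# so each dictionary key is a single word).
LEXICON = {
    "boy": "N", "girl": "N", "dog": "N", "cat": "N", "ice_cream": "N",
    "eats": "V", "likes": "V", "bites": "V",
    "with": "P", "in": "P", "near": "P",
    "a": "det", "the": "det", "one": "det",
}

def parse_word(cat):
    """Return a parser that matches one word of the given category."""
    def p(words, i):
        if i < len(words) and LEXICON.get(words[i]) == cat:
            return [((cat, words[i]), i + 1)]
        return []
    return p

def parse_PP(words, i):
    # PP --> P NP
    results = []
    for p_tree, j in parse_word("P")(words, i):
        for np_tree, k in parse_NP(words, j):
            results.append((("PP", p_tree, np_tree), k))
    return results

def parse_NP(words, i):
    # NP --> (det) N (PP)   -- parentheses mean optional
    results = []
    for det_tree, j in parse_word("det")(words, i) + [(None, i)]:
        for n_tree, k in parse_word("N")(words, j):
            for pp_tree, m in parse_PP(words, k) + [(None, k)]:
                kids = [t for t in (det_tree, n_tree, pp_tree) if t]
                results.append((("NP",) + tuple(kids), m))
    return results

def parse_VP(words, i):
    # VP --> V (NP) (PP)
    results = []
    for v_tree, j in parse_word("V")(words, i):
        for np_tree, k in parse_NP(words, j) + [(None, j)]:
            for pp_tree, m in parse_PP(words, k) + [(None, k)]:
                kids = [t for t in (v_tree, np_tree, pp_tree) if t]
                results.append((("VP",) + tuple(kids), m))
    return results

def parse_S(words):
    # S --> NP VP, and the parse must use up the whole sentence
    trees = []
    for np_tree, j in parse_NP(words, 0):
        for vp_tree, k in parse_VP(words, j):
            if k == len(words):
                trees.append(("S", np_tree, vp_tree))
    return trees

trees = parse_S("the dog likes ice_cream".split())
print(len(trees))   # one complete parse
print(trees[0])     # (S (NP the dog) (VP likes (NP ice_cream)))
```

For "The dog likes ice cream" the rules license exactly one tree; for an ambiguous sentence the same search would return several, which is the point of the next example.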

Look at how a computer parser found 5 different ways to parse the sentence: “Time flies like an arrow.”

5 different computer trees:

1. Time proceeds as quickly as an arrow proceeds. (the intended reading).

2. Measure the speed of flies in the same way that you measure the speed of an arrow.

3. Measure the speed of flies in the same way that an arrow measures the speed of flies.

4. Measure the speed of flies that resemble an arrow.

5. Flies of a particular kind, time-flies, are fond of an arrow.

(from Pinker, The Language Instinct)
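One source of those five readings is lexical ambiguity: each word can carry more than one part of speech, and different tag assignments license different trees. A small sketch (the part-of-speech sets below are my own illustrative assumption, not from the lecture):

```python
from itertools import product

# Possible parts of speech for each word (illustrative assumption):
# "time" can be a noun, a verb ("time the runners"), or a modifier;
# "flies" a noun or verb; "like" a preposition or verb.
pos = {
    "time":  {"N", "V", "Adj"},
    "flies": {"N", "V"},
    "like":  {"P", "V"},
    "an":    {"det"},
    "arrow": {"N"},
}

words = ["time", "flies", "like", "an", "arrow"]
assignments = list(product(*(sorted(pos[w]) for w in words)))
print(len(assignments))  # 3 * 2 * 2 * 1 * 1 = 12 candidate taggings
for tags in assignments[:3]:
    print(dict(zip(words, tags)))
```

Of these 12 candidate taggings, only a handful survive the grammar rules, giving the five readings listed above; a human discards all but one without noticing.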

Here are some more problem sentences: The first 2 ambiguous sentences are from the film "Thinking Machines":

1. "Left waffles on Falklands"
(taken from a British newspaper headline at the time of the war between Britain and Argentina)

2. "Sharon to press his suit in Israel."

3. “I saw the man with the binoculars."

What is meant by these sentences?

SO a computer's ability to parse (using syntax and semantics) to "understand" a sentence is further limited by pragmatics--the intended meaning.
We derive the pragmatic meaning by using some or all of the following:

--> general world knowledge (e.g., objects fall when you release them from your hand)
--> knowledge of the context (e.g. see example #1 below)
--> an "empathy" (see Schank's human understanding spectrum) with the sender (e.g., if people are very close, they will use words to express their thoughts that others might not understand)

Example #1:

"That was really funny." (when the intended meaning is sarcastic and the speaker intends the opposite of what she is saying)

How will a computer with Natural Language Processing ever understand the "real" meaning of these last sentences?

2.2. So how to "teach" a computer "common knowledge" or “common sense”?

2.2.1. Example of Minsky's "frames."

--> for example, teach a computer about a birthday party, by indicating what is "normally" the context for a young child's birthday party. The computer program is set up with “slots” which are usually predicted by the situation. For example,

PRESENT: must please host; must be store bought and gift- wrapped
GAMES: hide and seek, ....
DECOR: balloons, favours
CAKE: candles, other kids sing "happy birthday" song, host makes wish and then blows out candles
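The slot-and-default idea above can be sketched in a few lines of Python (the frame structure and names here are my own illustration of Minsky's proposal, not an actual frame system): the frame supplies expected values, and anything the story actually mentions overrides the default.

```python
# A Minsky-style frame for a young child's birthday party: each slot
# holds a default filler that the situation normally predicts.
BIRTHDAY_PARTY_FRAME = {
    "present": "store-bought, gift-wrapped, must please host",
    "games":   ["hide and seek"],
    "decor":   ["balloons", "favours"],
    "cake":    "candles; kids sing; host makes a wish and blows them out",
}

def instantiate(frame, **observed):
    """Fill a copy of the frame: observed facts override the defaults,
    and unmentioned slots keep their default fillers."""
    instance = dict(frame)
    instance.update(observed)
    return instance

# A story mentions only the game played; the frame supplies the rest.
party = instantiate(BIRTHDAY_PARTY_FRAME, games=["musical chairs"])
print(party["games"])  # observed value overrides the default
print(party["decor"])  # default fills the unmentioned slot
```

This is how a frame lets a program "assume" balloons and cake even when the story never mentions them; the hard part, as the CYC discussion below shows, is writing down enough frames.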

See other examples of SAM and PAM as outlined in the Wessells article in the kit.

In the documentary “Thinking Machines” see Doug Lenat's project called CYC (short for encyclopedia). He wants to encode all the common sense knowledge that you and I already know.

Lenat and his team need to encode millions of pieces of information into the computer's knowledge base--knowledge that to us is so obvious it doesn't appear in an encyclopedia. He claims that once that is done, he will get it to "read" and "understand" an encyclopedia.

Dreyfus' refutation: much of what we know as "common knowledge" is not "rules" but skills...(see film for the full text).
For information on CYC see http://www.cyc.com/overview.html

(We will look further at CYC when we read Adam’s article in the kit).

Supplementary Sites (not required reading):

To play with ELIZA go to: http://www-ai.ijs.si/eliza/eliza.html

To find out all about “bots” go to www.simonlaven.com

I was not able to find a free demo of the 2001 and 2002 Loebner prize winner, ALICE.
To find out about the schism in the AI world go to “Artificial Stupidity” at http://salon.com/tech/feature/2003/03/27/loebner_part_2/print.html

To find out what kind of personality you have (but you have to pay $10) go to
www.alicebot.org/claudio.html

This page last revised 03/10/03