Testing our Assumptions about AI: Part 4 – Expert Systems – 19 March 2003
1. What are they and how do they work? → What is MYCIN?
2. How do we function as "experts"?
3. What are the significant differences between us and "them"?
4. Other questions Wessells raises: What are the risks and benefits associated
with expert systems? (see kit)
****************
1. Definition of an expert system:
" a rule-based system that embodies part of the skill of a human expert
in a computer program" (Forsyth, 1984; cited in Wessells, 1990).
General Characteristics of an Expert System:
• designed to offer advice about how to perform a PARTICULAR task
• its knowledge is limited to a specific DOMAIN
• most can tell the user how it arrived at a decision
• typically, an expert system is easy to interact with—users can use natural language
• contains rules expressed in "IF-THEN" format
(Wessells, in course kit)
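The "IF-THEN" rule format above can be sketched in a few lines of Python. This is a minimal illustration, not MYCIN's actual rule language; the fact and conclusion names are invented for the example:

```python
# Each rule pairs a set of IF-conditions with a THEN-conclusion.
# (Illustrative names only -- not taken from MYCIN.)
rules = [
    {"if": {"gram_positive", "grows_in_chains"}, "then": "streptococcus"},
]

# Facts the system has been told so far.
known_facts = {"gram_positive", "grows_in_chains"}

# A rule "fires" when all of its IF-conditions are among the known facts.
for rule in rules:
    if rule["if"] <= known_facts:  # subset test: every condition is satisfied
        print("Conclusion:", rule["then"])  # prints "Conclusion: streptococcus"
```

Because each rule is just data, the system can also report which rule fired, which is how an expert system "tells the user how it arrived at a decision."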
MYCIN (an early system that diagnosed and treated infectious
diseases) and EON (note the change in terminology to "knowledge" systems).
See http://www.wired.com/wired/archive/6.09/crucialtech.html?pg=7
→ EON is different from stand-alone expert systems like MYCIN:
→ it uses a component-based architecture:
today's expert systems are like documents in a file cabinet: if you
add something on one page, the others remain unaffected.
"BUT with this new kind of expert system, alter a single fact and it ripples
through the system seamlessly, updating all related data."
→ "this means the database gets smarter each time
a doctor revises a patient's file."
(from Wired archive)
2. How are we "experts"? How can computers be experts?
(by emulating us)
1. A specific goal (problem) sets our mind in motion.
2. A vast collection of FACTS and RULES wait to be called upon to help
reach the goal.
3. PRUNING helps us carry out a quick, efficient search for only those
rules that pertain to the immediate goal.
4. We make inferences from facts and rules that we know.
5. We use a set of strategies for generating and testing hypotheses concerning
the problem at hand. (Wessells calls this an INFERENCE ENGINE which has
two commonly used strategies -- BACKWARD and FORWARD CHAINING -- see kit)
(from Levine, Drang, and Edelson, A Comprehensive Guide to AI and Expert Systems,
1986 and Wessells)
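The FORWARD CHAINING strategy named in step 5 can be sketched in Python: start from known facts and keep firing rules until no new conclusions appear. The facts and rules below are invented for illustration:

```python
# Sketch of FORWARD CHAINING: data-driven inference.
# Fire any rule whose conditions are satisfied; repeat until nothing new.

def forward_chain(initial_facts, rules):
    facts = set(initial_facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if set(conditions) <= facts and conclusion not in facts:
                facts.add(conclusion)  # new knowledge derived
                changed = True
    return facts

# Illustrative rules (not from any real system):
rules = [
    (["fever", "infection"], "prescribe_antibiotic"),
    (["high_white_cell_count"], "infection"),
]

result = forward_chain(["fever", "high_white_cell_count"], rules)
print(sorted(result))
# → ['fever', 'high_white_cell_count', 'infection', 'prescribe_antibiotic']
```

Note how one derived fact ("infection") enables a second rule to fire: inferences build on earlier inferences, just as described in steps 2–4 above.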
++++++++
Facts and Rules
Levine et al.'s premise: "What is generally considered to be 'intelligence' can
be broken down into a collection of 'facts' and a means of utilizing
these facts to reach 'goals.'" (p. 5)
(do you agree with this definition of "intelligence"?)
FACT 1: A burning stove is hot.
RULE 1: IF I put my hand on a burning stove, THEN it will hurt.
Inference Mechanism
We're told:
1. Jim's parents are John and Mary.
2. Jane's parents are John and Mary.
GOAL: determine the relationship between Jim and Jane.
Pruning zeros in on what rule?
IF parents are the same, THEN children are siblings.
We infer Jim and Jane are siblings.
Now we have new knowledge.
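The Jim-and-Jane inference above can be written out in code. This is just the single pruned rule applied to the two given facts (the representation is my own sketch, not Levine et al.'s):

```python
# FACTS: each person's parents.
parents = {
    "Jim": {"John", "Mary"},
    "Jane": {"John", "Mary"},
}

def siblings(a, b):
    # RULE: IF parents are the same, THEN children are siblings.
    return parents[a] == parents[b]

print(siblings("Jim", "Jane"))  # → True: we infer Jim and Jane are siblings
```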
Here's an interesting example of verification through the inference
mechanism:
" Suppose a murder has been committed: A person is found locked in
an apartment, shot three times. The medical examiner rules out the possibility
of suicide because of the angle of the wounds (PRUNING in action), and the police
immediately go to work. The first thing they consider is who else
besides the victim had a key to the apartment. They question the landlord
and several neighbours, who say the murdered person had a friend who frequently
came to use the apartment. Further investigation reveals that the couple
had recently been quarrelling.
The police now have a suspect. They are able to INFER from their interview
that the victim's friend is probably the murderer (FORWARD CHAINING), using
the data from the interview to arrive at the conclusion, but they need some
concrete evidence to nail down the case. Their best chance of nabbing
the suspect is to find the murder weapon. They obtain a search warrant
for the friend's apartment and look through the belongings, but to no avail. Finally,
a detective finds a gun in a garbage can in a nearby alley. A fingerprint
check verifies that the gun indeed was handled by the friend, and a ballistics
test establishes that it is the murder weapon. Case solved.
By obtaining a new piece of data and seeing if it was consistent with their
original conclusion, the police verified the goal of identifying the murderer. The
process of using a conclusion to look for supporting data is known as "BACKWARD
CHAINING." In this case, the conclusion is the suspect and the data
is the weapon."
(from Levine, Drang, and Edelson, A Comprehensive Guide to AI and Expert
Systems, 1986, p. 17)
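The BACKWARD CHAINING idea in the passage above can also be sketched in Python: start from the conclusion (the goal) and recursively look for rules and data that support it. The rule and fact names here are invented to mirror the murder example:

```python
# Sketch of BACKWARD CHAINING: goal-driven inference.
# A goal holds if it is a known fact, or if some rule concludes it
# and all of that rule's conditions can themselves be established.

def backward_chain(goal, rules, facts):
    if goal in facts:
        return True
    for conditions, conclusion in rules:
        if conclusion == goal and all(
            backward_chain(c, rules, facts) for c in conditions
        ):
            return True
    return False

# Illustrative rule: the evidence that would confirm the suspect.
rules = [
    (["fingerprints_on_gun", "ballistics_match"], "friend_is_murderer"),
]
facts = {"fingerprints_on_gun", "ballistics_match"}

print(backward_chain("friend_is_murderer", rules, facts))  # → True
```

Forward chaining led the police *to* the conclusion from the interview data; backward chaining worked *from* the conclusion back to the supporting evidence (the gun).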
3. Differences between Human Beings and "Knowledge" Systems
Human Expert | Knowledge System
1. can be expert in more than one field | limited to one domain
2. reasons using general heuristics and from analogy | needs domain-specific heuristics
3. learns from experiences | restricted to learning from "rules" taught by humans
4. possesses common sense / can act spontaneously | no common sense
5. can be biased | bias of initial rules
6. can jump to conclusions and maintain those conclusions in the face of disconfirming evidence | does not jump to conclusions
7. can avoid/misread some details | does not skip details
4. Questions Wessells raises:
Who will benefit from these developments in AI?
Who is responsible for problems/errors?
Will expert systems become more credible than humans?
Will expert systems add to dehumanization?