Expert Systems: Cyc and Soar

Adam does not consider AI a failure, but she is concerned about gender-biased AI systems that work to entrench dominant concepts of knowledge. For Adam, expert systems succeed in that they reproduce the knowledge of experts, the very thing they are designed to do. At the same time, they fail because they cannot function outside the realm of their expertise: asked to solve any problem outside their domain, they break (Adam, 1998, p. 42). To examine the problems related to expert systems, Adam chooses the Cyc project and the Soar project as her examples. Adam contends that the major problem with projects such as Cyc is that they do not question the extent to which expertise is a valid assumption in a world where women have long fought for pluralistic discourses. Worse still, besides failing to challenge the concept of expertise, Cyc and similar projects propagate a view that originates from the white, middle-class, male expert "into a prestigious form of new computer system" (Adam, 1998, p. 42).

Adam also levels a direct criticism at Lenat's Cyc project. Lenat assumes the existence of a general consensual knowledge among people "be they a professor, a waitress, a six-year-old child, or even a lawyer" (Adam, 1998, p. 42), or, by extension, be they male or female. Ironically, what becomes the knowledge in the Cyc system is white middle-class male knowledge, which is cast as rational knowledge in opposition to what gets classified as mere belief. Adam acutely observes that "what constitutes knowledge over mere belief is TheWorldAsTheBuildersofCycBelieveitToBe" (Adam, 1995, p. 364). The knowledge that goes into the Cyc system thus follows a rationality/irrationality pattern: what is seen as rational is translated into symbolic knowledge, while what is seen as irrational is regarded as non-knowledge. The consequence is that much skill-based knowledge, which Adam argues constitutes a great part of women's knowledge, is relegated to belief and excluded from the system.

The problem with Soar and its model of the subject lies in the assumption that a motivated subject will act in the 'rational' way the Soar system has predicted. Soar assumes that problems themselves determine problem-solving behaviour. If a problem proves highly complicated, it is broken down into smaller subtasks, a process called 'subgoaling'. The trouble with this assumption is the expectation that all humans are motivated in the same way and therefore react identically to a given problem. Moreover, Soar assumes that problems are solved individually, overlooking the existence of collective problem solving. Its goal-seeking strategy is modelled on the 'man of reason' who, through the use of rationality, finds a way out of irrational problems (Adam, 1998, p. 123). Rationality in Soar is organized entirely as IF-THEN pairs, and the problems Soar proposes to solve are therefore in a sense quite unrealistic. Adam illustrates this point: Soar can solve a chess problem, but it breaks when faced with a problem such as how to cook dinner and collect the children from school in the most efficient way. Just as Cyc fails to recognize body-related knowledge, Soar has no way of representing skill-type knowledge in its system. Once more, the knowledge built into expert systems reinforces the traditional epistemological dualism of rationality/irrationality, which is frequently mapped onto a masculinity/femininity division of knowledge.
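The IF-THEN organization and subgoaling that Adam criticizes can be made concrete with a small sketch. This is a hypothetical toy illustration of the general production-rule style of reasoning, not actual Soar code (Soar is a full cognitive architecture); the goal names and rule base here are invented for illustration. It shows both the decomposition of a goal into subgoals and the brittleness Adam points to: outside its encoded domain of expertise, the system simply breaks.

```python
# Toy production system: each rule is an IF-THEN pair.
# IF the goal matches a rule, THEN decompose it into subgoals
# ("subgoaling") or treat it as directly solvable.
# All goal names here are hypothetical examples, not real Soar rules.
RULES = {
    "win chess game": ["develop pieces", "control centre", "checkmate king"],
    "develop pieces": [],   # empty list: directly solvable step
    "control centre": [],
    "checkmate king": [],
}

def solve(goal):
    """Recursively build a plan for a goal by subgoaling.

    Raises KeyError for any goal outside the rule base, mirroring
    the brittleness of an expert system asked to leave its domain.
    """
    if goal not in RULES:
        raise KeyError(f"no rule for goal: {goal!r}")
    plan = [goal]
    for subgoal in RULES[goal]:
        plan.extend(solve(subgoal))
    return plan
```

Calling `solve("win chess game")` yields a step-by-step plan, whereas a goal like "cook dinner and collect the children" raises an error, echoing Adam's example: the everyday, skill-based problem has no representation at all in the rule base.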

 

Valentina Mello Ferreira Pinto
Communication Studies/Humanities Double Major

Communication Studies Program, Social Science Division
York University, Toronto, Ontario, Canada, M3J 1P3