Dispelling the "Mystery" of Computational Cognitive Science

Christopher D. Green
Department of Psychology
York University

Toronto, Ontario M3J 1P3

christo@yorku.ca

It was with great interest that I read Crowther-Heyck's insightful and informative article "George Miller, language, and the computer metaphor" (1999). As an account of the decade during which Miller developed from a man who claimed psychology to be "the science of behavior" to one who reclaimed William James' definition -- viz., "the science of mental life" -- it has much to recommend it. The article is framed, however, by a congeries of assumptions about cognitive science in particular, and philosophy of mind more generally, that I believe are not really borne out by the history of the discipline, and that make the relatively straightforward question of why computationalism was seen as a critique of behaviorism seem deeply mysterious and even, as Crowther-Heyck puts it at one point, "ironic" (p. 56).

It is perhaps best to start with a couple of key passages from Crowther-Heyck's article. Near the beginning of the article, he writes,

The vehicle for the reintroduction of mind, and a vital agent of behaviorism’s overthrow, was the idea that the brain is a computer.  This assertion has become a commonplace one in the historical literature on the “cognitive revolution,” and yet it is not obvious why this idea should have had revolutionary, antibehaviorist implications for psychologists in the 1950s and 1960s.  Why should the analogy between humans and computers have reintroduced mind, will, and consciousness to psychology when computers, after all, possess none of these? (p. 37)

He revisits this theme at the end of the paper, where he writes,

There is much irony in the use of the computer metaphor of mind as an argument against behaviorism. The computer program is, in a sense, the ultimate product of logical positivism, and the logical positivists had argued that mind was an unscientific concept…. It was, in principle, possible to interpret the computer-mind analogy as further support for the elimination of mind (in the traditional sense) from psychology. Mind and purpose mean little without free will, for example, and (as yet) there is no plausible voluntaristic theory of computer action that one could map on to human behavior. Psychologists could have argued that if the mind can be modeled by a computer program then there can be no "mind" in the traditional sense, only mind in the sense of a neurological program for associating stimuli and responses. (p. 56)

There is much in these two passages that would appear to betray a relatively fundamental misapprehension about what, exactly, the cognitive revolution stood for -- i.e., of the ways in which it stood apart not only from behaviorism, but also from the 19th-century mentalism that behaviorism had displaced.

If, as Crowther-Heyck claims, the aim of computational cognitive science had been to reintroduce the whole of "traditional" philosophy of mind -- including issues of consciousness and free will -- then its success would, as he says, be inexplicable and perhaps even "ironic." But it was never the main goal of cognitivism to revive consciousness and free will. These had been the fundamental issues of Wundt (1897) and of the Chicago functionalists (e.g., Angell, 1904), and the behaviorists showed them to be deeply problematic for anything purporting to be a "scientific" psychology. Put bluntly, if the will is truly free, then there can be no lawlike generalizations to be had about it. This was not, as Crowther-Heyck implies, simply the dogma of the logical positivists. The insight dates back, in one form or another, to the Ancients, and the most important modern formulation derives from Kant, who argued in the "Third Antinomy" of the Critique of Pure Reason (1787/1929, pp. B472-B480) that the freedom of the will can never be an object of science, but rather that freedom serves only as a "regulative principle" making "practical" ethical reason possible. That the 19th-century positivists -- who disliked nearly everything about Kant's philosophy -- roughly agreed with him on this point is testament to how widely freedom of the will was regarded as being problematic long before the behaviorists came along.

The difficulty with consciousness is somewhat different. Being apparently inherently subjective, consciousness would seem to evade the objectification required by (at least traditional forms of) science. Even if the "subjectivity problem" could be resolved, deep problems would remain. As the leading computational cognitivist Jerry Fodor (1992) once succinctly put it, "Nobody has the slightest idea how anything material could be conscious. Nobody even knows what it would be like to have the slightest idea about how anything material could be conscious. So much for the philosophy of consciousness" (p. 5). During the time period that is the topic of Crowther-Heyck's paper, it was not computationalists who were discussing consciousness. It was, rather, the central-state identity theorists (e.g., Place, 1956) -- a group who hardly make an appearance in the paper -- who were wondering whether consciousness might be the straightforward result of neurological processes.[1]

The problem that the emerging cognitivists saw with behaviorism, however, was that the behaviorists had thrown the proverbial baby out with the bathwater. Although consciousness and free will might still be recalcitrant from a scientific point of view, the "third" aspect of the traditional view of mind -- intentionality -- might be susceptible to scientific analysis after all. Furthermore, although it is true, as Crowther-Heyck notes, that computers do not seem to possess either consciousness or free will, they seemed a good bet to be able to explain intentionality (see, e.g., Green, 1996). This is because, as computers were used by the very "proto-artificial intelligence" researchers Crowther-Heyck names -- McCulloch, Turing, Newell, Simon -- they employed symbolic representations of the domains about which they were "intelligent." Such representations just are (or so it was argued) instances of intentionality: they refer to things apart from themselves -- i.e., they mean something -- and the computational processes of the systems in question take place over those "meanings."

The focus on mental representation comes through strikingly even in the titles of their papers: "How we know universals: The perception of auditory and visual forms" (Pitts & McCulloch, 1947/1965); "Machines that think and want" (McCulloch, 1950/1965); "What is a number, that a man may know it, and a man, that he may know a number?" (McCulloch, 1961/1965); "Programming the logic theory machine" (Newell, Shaw, & Simon, 1957); "GPS, a program that simulates human thought" (Newell & Simon, 1963); "Computing machinery and intelligence" (Turing, 1950). Note that none of these classic papers were about conscious feeling or willing. They were all about "knowing," "thinking," "wanting," and being "intelligent"; in short, they were about mental representation (often of "abstract" objects, such as numbers and logical propositions) and the mental manipulation of such representations.

To put the matter in terms of historical figures: computational cognitive scientists never intended to revive the psychology of Wundt and Angell. They were in agreement with Watson that, from a scientific standpoint, it had been a failure. It was, instead, the psychology of Brentano (1874/1973) that they intended to revive and render into scientific form.[2]

Seen from this vantage point, the apparent mystery and irony of the psychologists of the 1950s invoking non-conscious, non-free-willed computers to foment revolution against the behaviorists simply vanishes. It wasn't consciousness and free will they meant to revive; it was, instead, intentionality. This also explains why Miller's computer was not seen in anything like the same light as Hull's "telephone switchboard." To Crowther-Heyck, they seem so similar that the computer could have been used to advance the cause of behaviorism just as easily as the switchboard. But this is only because he fails to appreciate the crucial difference (for cognitive scientists, anyway): telephone switchboards do not employ symbolic representations to mediate their "behavior"; they are straight-through "stimulus-response" machines. By contrast, computers do use such representations, and as such are able to change their "behavior" in accordance with their stored "knowledge." This distinction was absolutely crucial in the victory of Chomsky's theory of language over Skinner's (and probably more importantly, over Bloomfield's, 1933). As Crowther-Heyck notes, S-R behaviorism had no way of accounting for linguistic phenomena such as embedding. The available machinery was simply too sparse. With the introduction of symbolic representations -- not only of grammatical[3] rules, but of the clauses of not-yet-completed sentences -- phenomena like embedding were a piece of cake.
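The contrast can be made concrete with a small sketch -- purely my own illustration in Python, not anything from the period's actual programs; the toy a^n b^n "grammar" and all the names in it are assumptions chosen for brevity:

    # A purely illustrative sketch (mine, not any historical program) of the
    # difference at issue: a "switchboard" maps each stimulus straight to a
    # response, while a symbolic device also keeps track of clauses that have
    # been opened but not yet completed -- which is what embedding requires.

    # Straight-through S-R device: the next "response" depends only on the
    # current "stimulus"; there is no record of open clauses.
    SR_TABLE = {"the": "cat", "cat": "ran"}

    def sr_respond(stimulus):
        return SR_TABLE.get(stimulus)

    def recognize_embedding(tokens):
        """Recognize strings of the form a^n b^n -- a toy stand-in for
        center-embedded clauses ("the cat [the dog chased] ran"). The
        stack holds symbolic placeholders for not-yet-completed clauses,
        precisely the resource the S-R table above lacks."""
        open_clauses = []
        for tok in tokens:
            if tok == "a":              # open a new embedded clause
                open_clauses.append("open")
            elif tok == "b":            # close the most recent open clause
                if not open_clauses:
                    return False
                open_clauses.pop()
            else:
                return False
        return not open_clauses         # every opened clause must be closed

    print(sr_respond("the"))                       # "cat": stimulus in, response out
    print(recognize_embedding("a a b b".split()))  # True: properly nested
    print(recognize_embedding("a a b".split()))    # False: an unclosed clause

No finite table of stimulus-response pairs can do what the stack does here: each further level of embedding requires remembering one more open clause, which is, in essence, the formal point behind Chomsky's critique.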

The error, as I see it, arises (perhaps ironically) from Crowther-Heyck implicitly adopting the behaviorists' understanding of "the mental." When they rejected the mentalism of the functionalists and of Wundt, the behaviorists believed that all aspects of the mental were equally problematic, from a scientific point of view, because they were all hopelessly entangled with each other. When Crowther-Heyck refers, as he does on more than one occasion, to the "traditional view" of mind, he means just this (though he does not seem to adopt the behaviorists' view that they are "hopeless"). This leads him to miss a key aspect of the cognitivist reply to the behaviorists. It was not, "Mentalism Again!" It was, rather, "Using modern logic and computers, maybe we can tease out a rigorous analysis of intentionality that will solve some of the difficulties behaviorism is having, and leave problematic concepts like consciousness and free will behind." If this be doubted, consider the following passages from McCulloch (1951/1965): "By the term mind I mean ideas and purposes" (p. 72); "I don't believe that I brought up the question of consciousness. If I had to, in a medical sense, I would use the word only to say that this patient was or was not conscious, according to whether he could or could not bear witness to what I could also bear witness to" (p. 129). Or from Turing (1950): "I do not wish to give the impression that there is no mystery about consciousness…. But I do not think these mysteries necessarily need to be solved before we can answer the question with which we are concerned in this paper [viz., intelligence]" (p. 447). Perhaps the most eloquent evidence of the emphasis on representation is the fact that there is not a single entry corresponding to "consciousness" in the index of Simon's Sciences of the artificial (1969), but there are over a dozen for "representation."

So, it would seem that it was the development of a rigorous account of mental representation and its dynamics, rather than a full-bore return to traditional mentalism, that constituted the cognitivist challenge to the behaviorist hegemony of the middle part of this century.  Once this is fully grasped, and it is conceded that it was plausible at mid-century to believe that computers might give us such an account, the apparent "mystery" of the computationalist response to behaviorism is dispelled.

 

References

Angell, J. R. (1904). Psychology: An introductory study of the structure and function of human consciousness.  New York: Henry Holt.

Bloomfield, L. (1933). Language. New York: Henry Holt.

Brentano, F. (1973). Psychology from an empirical standpoint (A. C. Rancurello, D. B. Terrell, & L. L. McAlister, Trans.). New York: Humanities Press. (Original work published 1874)

Chalmers, D. J. (1996). The conscious mind: In search of a fundamental theory. New York: Oxford University Press.

Crowther-Heyck, H. (1999). George Miller, language, and the computer metaphor. History of Psychology, 2, 37-64.

Dennett, D. C. (1991). Consciousness explained. Boston: Little, Brown.

Fodor, J. A. (1992, July 3). The big idea. Can there be a science of mind? Times Literary Supplement, pp. 5-7.

Green, C. D. (1996). Where did the word "cognitive" come from anyway? Canadian Psychology, 37, 31-39.

Kant, I. (1929). Critique of pure reason (N. Kemp Smith, Trans.). London: Macmillan. (Original B edition published 1787)

McCulloch, W. S. (1965). Machines that think and want.  In W. S. McCulloch, Embodiments of mind (pp. 307-318). Cambridge, MA: MIT Press. (Original work published 1950)

McCulloch, W. S. (1965). Why is the mind in the head?  In W. S. McCulloch, Embodiments of mind (pp. 72-141). Cambridge, MA: MIT Press. (Original work published 1951)

McCulloch, W. S. (1965). What is a number, that a man may know it, and a man, that he may know a number?  In W. S. McCulloch, Embodiments of mind (pp. 1-18). Cambridge, MA: MIT Press. (Original work published 1961)

Newell, A., Shaw, J. C., & Simon, H. A. (1957). Programming the logic theory machine. Proceedings of the Western Joint Computer Conference, 230-240.

Newell, A., & Simon, H. A. (1963). GPS, a program that simulates human thought. In E. A. Feigenbaum & J. Feldman (Eds.), Computers and thought (pp. 279-293). New York: McGraw-Hill.

Pitts, W. H., & McCulloch, W. S. (1965). How we know universals: The perception of auditory and visual forms. In W. S. McCulloch, Embodiments of mind (pp. 46-66). Cambridge, MA: MIT Press. (Original work published 1947)

Place, U. T. (1956). Is consciousness a brain process?  British Journal of Psychology, 47, 44-50.

Simon, H. A. (1969). The sciences of the artificial. Cambridge, MA: MIT Press.

Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59, 433-460.

Wundt, W. (1897). Outlines of psychology (C. H. Judd, Trans.). New York: Gustav E. Stechert. (Original work published 1896)

 

Footnotes

[1] In the late 1970s and into the 1980s, some computational cognitive scientists (e.g., Hofstadter, 1979) thought that recursion might explain consciousness, but this was premised on what is now widely recognized as a conflation between reflexive consciousness (i.e., self-consciousness) and consciousness per se (i.e., the "quale" or "raw feel" of seeing, hearing, or feeling something). In very recent years there has been a flood of material on consciousness (e.g., Chalmers, 1996; Dennett, 1991; Flanagan, 1994; Tye, 1995), but this represents a breakdown of the original consensus that first brought cognitive science into existence.

[2] I use these historical figures only as metonyms. I doubt most early cognitive scientists knew anything of Brentano specifically.  The point is that the primary aim exemplified in their work (their sometimes confused statements notwithstanding) was to give an account of representation -- "intentionality" in Brentano's terminology -- not of consciousness or will.

[3] It is important to note here, as well, that the representations of grammatical rules in Chomsky's theory were thought to be unconscious.  This demonstrates more clearly than anything how the cognitivists had disentangled the question of consciousness from that of intentionality, whereas the behaviorists had "chucked" out everything mental without bothering to check whether all the various aspects were "of a piece" with one another.