Are Connectionist Models Theories of Cognition?

Christopher D. Green
Department of Psychology
York University
North York, Ontario M3J 1P3
CANADA

e-mail: christo@psych.toronto.edu

Presented May 1997 at the conference of the International Society for Theoretical Psychology, Berlin, Germany

Connectionist models of cognition are all the rage now. It is not clear, however, in what sense such models are to be considered full-fledged theories of cognition. This may be problematic, for if connectionist models are not to be considered theories of cognition, in the traditional scientific sense of the term, then the question arises as to what exactly they are and why we should pay any attention to them. If, on the other hand, they are to be regarded as scientific theories, it should be possible to explicate precisely in what sense this is true, and to show how they fulfill the functions we normally associate with theories. In this paper, I begin by examining the question of what it is to be a scientific theory. Second, I describe in precisely what sense traditional "symbolic" computational models of cognition can be said to fill this bill. Third, I examine whether or not connectionist models can be said to do the same. My conclusion is that connectionist models could, under a certain interpretation of what it is they model, be considered theories, but that this interpretation is likely to be unacceptable to many connectionists.

A typical complex scientific theory contains both empirical and theoretical terms. The empirical terms refer to observable entities. The theoretical terms refer to unobservable entities that improve the predictive power of the theory as a whole. The exact ontological status of objects referred to by theoretical terms is a matter of some debate. Realists believe them to be actual objects that resist direct observation for one reason or another. Instrumentalists consider them to be mere "convenient fictions" that earn their scientific keep merely by the predictive accuracy they lend to the theory. I think it is fair to say that the vast majority of research psychologists are realists about the theoretical terms they use, though they are, in the main, unreflective realists who have never seriously considered alternative possibilities.

Let us begin with a relatively uncontroversial theory from outside of psychology: Mendelian genetics. In the Mendelian scheme, entities called "genes" were said to be responsible for the propagation of traits from one generation of organisms to another. Mendel was unable to observe anything corresponding to "genes", but their invocation made it possible for him to correctly predict the proportions in which succeeding generations of organisms would express a variety of traits. As such, the gene is a classic example of a theoretical entity. For present purposes, it is important to note that each such theoretical gene, though unobservable, was hypothesized to correspond to an individual trait. That is, in addition to the predictive value each theoretical gene provided, each also justified its existence by being responsible for a particular phenomenon. There were no genes in the system that were not directly tied to the expression of a trait. Although some genes were said not to be expressed in the phenotype (viz., recessive genes in heterozygous individuals), all were said to be directly involved in the determination of the expression of a specific trait. That is to say, their inclusion in the theory was justified in part by the specificity of the role they were said to play. It is worth noting that the actual existence of genes remained controversial until the discovery of their molecular basis, viz., DNA, and that our understanding of them changed considerably with that discovery.

Now consider, as a psychological example of theoretical entities, the model of memory proposed by Atkinson and Shiffrin (1971). It is a classic "box-and-arrow" theory. Information is fed from the sensory register into a holding space called Short Term Store (STS). If continuously rehearsed, a limited number of items can be stored there indefinitely. If the number of items exceeds the capacity of the store, some are lost. If rehearsal continues for an unspecified duration, it is claimed that some or all of these items are transferred to another holding space called Long Term Store (LTS). The capacity of LTS is effectively unlimited, and items in LTS need not be continuously rehearsed, but are said to be kept in storage effectively permanently. STS and LTS, like genes, are theoretical entities. They cannot be directly observed, but their postulation enables the psychologist to correctly predict a number of memory phenomena. In each such phenomenon, the activity of each store is carefully specified. The precision of this specification seems to be at least part of the reason that scientists are willing to accept them. Indeed, many experiments aimed at confirming their existence are explicitly designed to block, or interfere with, the hypothesized activity of one in order to demonstrate the features of the "pure" activity of the other. Whether or not this could be successfully accomplished was once a dominant issue in memory theory. The question of short term memory effects being "contaminated" by the uncontrollable and unwanted activity of LTS occupied many experiments of the 1960s and 1970s.
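
To make concrete the sense in which each store has a specific, testable job, here is a minimal computational sketch of a two-store model in the spirit of Atkinson and Shiffrin. The capacity figure and the per-rehearsal transfer probability are illustrative assumptions of mine, not parameters of the original theory.

```python
import random

STS_CAPACITY = 7                    # assumed capacity of Short Term Store
TRANSFER_PROB_PER_REHEARSAL = 0.1   # assumed chance a rehearsal copies an item to LTS

def present_items(items, rehearsals_per_item=3):
    sts, lts = [], set()
    for item in items:
        sts.append(item)
        if len(sts) > STS_CAPACITY:   # displacement: the oldest item is lost from STS
            sts.pop(0)
        for _ in range(rehearsals_per_item):
            for held in list(sts):    # rehearsal maintains items in STS...
                if random.random() < TRANSFER_PROB_PER_REHEARSAL:
                    lts.add(held)     # ...and probabilistically transfers them to LTS
    return sts, lts
```

Each theoretical entity here does a specific job: blocking rehearsal (setting rehearsals_per_item to 0) selectively impairs transfer to LTS, while shrinking STS_CAPACITY selectively impairs immediate recall, exactly the kind of targeted interference the experiments mentioned above were designed to exploit.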

With the entry of computer models into psychology, the theories have become even more complex, employing dozens of theoretical entities. A recent version of Chomskyan linguistic theory proposed by Berwick (1985), for instance, postulates more than two dozen rules that are said to control the building and interpretation of grammatical sentences. But even here the empirical data must bear fairly directly on each theoretical entity. None of these rules is without specific predicted effects. Each of the rules performs a certain function without which the construction and interpretation of grammatical sentences could not correctly proceed. For instance, RULE ATTACH-VP, sensibly enough, attaches verb phrases to sentences; RULE ATTACH-NOUN similarly attaches nouns to noun phrases; and so forth. Part of what justifies the inclusion in the theory of terms referring to each of these entities is the fact that they are explicitly connected to specific empirical phenomena.
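
The flavor of such a rule-based theory can be conveyed with a small sketch. The rule bodies below are toy stand-ins of my own, not Berwick's actual formulations; only the rule names are taken from the text above. The point is just that each named rule has a determinate job whose removal produces a specific, predictable failure in the output.

```python
# Toy illustration of named structure-building rules (not Berwick's actual parser).

def attach_noun(noun, noun_phrase):
    """RULE ATTACH-NOUN: attach a noun to a noun-phrase node."""
    noun_phrase.setdefault("children", []).append(noun)
    return noun_phrase

def attach_vp(verb_phrase, sentence):
    """RULE ATTACH-VP: attach a verb phrase to a sentence node."""
    sentence.setdefault("children", []).append(verb_phrase)
    return sentence

noun_phrase_node = attach_noun({"cat": "N", "word": "dog"}, {"cat": "NP"})
sentence_node = attach_vp({"cat": "VP", "word": "barks"},
                          {"cat": "S", "children": [noun_phrase_node]})
# Delete attach_noun and every noun phrase comes out headless: an ungrammatical
# output traceable to the absence of that one rule.
```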

Each of the models I have described so far, however, is a symbolic model, in that each theoretical entity represents something in particular, even if that something is itself theoretical. Connectionist researchers, by contrast, explicitly reject this sort of modeling, in the main. In a typical connectionist model, there are dozens, sometimes hundreds, of simple units, bound together by hundreds, sometimes thousands, of connections. Neither the units nor the connections bear any symbolic relation to the cognitive domain the network is being used to model. Similarly, the rules that govern how the activity of one unit will affect the activity of another unit to which it is connected are extremely simple, and not in any way related to the cognitive domain that the network is being used to model. Ditto for the rules that govern how the weights on the connections between units are to be changed.
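
A minimal sketch of the sort of network just described makes the point vivid: the layer sizes below are arbitrary assumptions, the activation rule is the same simple sum-and-squash for every unit, and the weight-change rule is a generic delta-style update. Nothing in the code refers to whatever cognitive domain the network might be trained on.

```python
import numpy as np

rng = np.random.default_rng(0)

n_input, n_hidden, n_output = 10, 20, 5                 # arbitrary, assumed sizes
W1 = rng.normal(scale=0.1, size=(n_hidden, n_input))    # input-to-hidden connections
W2 = rng.normal(scale=0.1, size=(n_output, n_hidden))   # hidden-to-output connections

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(x):
    # The same local activation rule for every unit, whatever the domain.
    hidden = sigmoid(W1 @ x)
    output = sigmoid(W2 @ hidden)
    return hidden, output

def update_output_weights(x, target, lr=0.5):
    # A generic delta-style weight change on the output connections;
    # again, nothing here mentions the domain being modeled.
    global W2
    hidden, output = forward(x)
    error = target - output
    W2 += lr * np.outer(error * output * (1.0 - output), hidden)
```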

This is all considered a distinct advantage among connectionists. Neither the units nor the connections represent anything in the way that variables and rules do in traditional symbolic computational models. Mental representations, to the degree that they are admitted at all, are said to be distributed across the activities of the units as a group. Any representation-level rules that the model is said to use are likewise distributed across the weights of all of the connections in the network. This gives connectionist networks their characteristic flexibility: they are able to learn almost any cognitive domain, to generalize their knowledge easily to new cases, to continue working reasonably well despite incomplete input or even moderate damage to their internal structure, and so on. The only real question is whether they are, indeed, too flexible to be good theories, or whether, by contrast, there are heretofore unrecognized features of good theories of which connectionist models can apprise us.

Each of the units, connections, and rules in a connectionist network is a theoretical entity. Each name referring to it in a description of the network is a theoretical term in the theory of cognition it embodies. In the previously described theories, it was evident that each theoretical entity had a specific job to do. If it were removed, not only would the performance of the model as a whole suffer, but it would suffer in predictable ways, viz., the particular feature of the model's performance for which the theoretical entity in question was responsible (i.e., that which it represented) would no longer obtain. The units and connections in a connectionist net, precisely in virtue of the distributed nature of their activity, bear no such relation to the various activities of the model. Although, on the one hand, this seems to increase the model's overall efficiency, on the other hand it simultaneously seems to undermine the justification for each one of the units and connections in the network. To put things even more plainly, if one were to ask of, say, Berwick's symbolic model of grammar, "What is the justification for postulating RULE ATTACH-NOUN?" the answer would be quite straightforward: "Because without it nouns would not be attached to noun phrases and the resulting outputs would be ungrammatical." The answer to the parallel question with respect to a connectionist network, viz., "What is the justification for postulating (say) unit123 in this network?", is not so straightforward. Precisely because connectionist networks are so flexible, the right answer is probably something like, "No reason in particular. The network would probably perform just as well without it."
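
The contrast can be made concrete with a crude "lesion" test on the kind of network sketched earlier. The function below, which assumes weight matrices shaped like W1 and W2 above, simply deletes one hidden unit; in a network with genuinely distributed representations, the expected result is that overall performance barely changes, no matter which unit is removed.

```python
import numpy as np

def lesion_hidden_unit(W1, W2, unit_index):
    """Return copies of the weight matrices with one hidden unit removed."""
    W1_lesioned = np.delete(W1, unit_index, axis=0)   # drop the unit's incoming weights
    W2_lesioned = np.delete(W2, unit_index, axis=1)   # drop its outgoing weights
    return W1_lesioned, W2_lesioned
```

Comparing mean error before and after such a lesion typically reveals only a negligible difference for any single unit. That is precisely the problem: no individual unit has a specific job whose loss produces a specific, predictable failure.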

If this is true, we are led to an even more pressing question: exactly what is it we can actually be said to know about a given cognitive activity once we have modeled it with a connectionist network? In the case of, say, the Atkinson and Shiffrin model of memory, we can say that we have confirmation of the idea that there are at least two forms of memory store, short and long term, and this confirmation amounts to a justification of sorts for their postulation. Are we similarly to say that a particular connectionist model with, say, 326 units that correctly predicts activity in a given cognitive domain confirms the idea that there are exactly 326 units governing that activity? This seems ridiculous, indeed almost meaningless. Aside from the obvious fact that we don't know what the "units" are units of, we might well have gotten just as good results with 325 or 327 units, or indeed with 300 or 350 units. Since none of the units corresponds to any particular aspect of the performance of the network, there is no particular justification for any one of them.

Of course, it might be argued that the mapping of particular theoretical terms on to particular aspects of the behavior being modeled is unnecessary; it is just an historical accident, primarily the result of our not having been able to keep simultaneous control of thousands of theoretical terms until the advent of computers.

The real question, however, seems to be about what one can be said to have really learned about the phenomenon of interest if one's model of that phenomenon contains far more terms that are not tied down to the "empirical plane," so to speak, than it does terms that are. Consider the following analogy: suppose that an historian wants to understand the events that lead up to political revolutions, so he tries to simulate several revolutions and a variety of other less successful political uprisings with a connectionist network. The input units encode data on, say, the state of the economy in the years prior to the uprising, the morale of the population, the kinds of political ideas popular at the time, and a slew of other important socio-political variables. The output units encode various possible outcomes: revolution, uprising forcing significant political change, uprising defused by superficial political concessions, uprising put down by force, etc. Between the input and output units, let us say that the historian places exactly 72 units which, he says, encode "a distributed representation of the socio-political situation of the time." His simulation runs beautifully. Indeed, let us say that because he has learned the latest techniques of recurrent networks, he is actually able to simulate events in the order in which they took place over several years on either side of each uprising.
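
For concreteness, here is what such a network might look like in outline. Everything in this sketch is hypothetical: the variable names, the number of input variables, and the coding of outcomes are invented for illustration, and only the figure of 72 hidden units comes from the story above.

```python
import numpy as np

rng = np.random.default_rng(1)

n_inputs = 30     # assumed number of encoded socio-political variables
n_hidden = 72     # the "distributed representation of the situation"
n_outcomes = 4    # revolution, forced change, concessions, suppression (assumed coding)

W_in = rng.normal(scale=0.1, size=(n_hidden, n_inputs))
W_rec = rng.normal(scale=0.1, size=(n_hidden, n_hidden))   # simple recurrent (Elman-style) weights
W_out = rng.normal(scale=0.1, size=(n_outcomes, n_hidden))

def step(x_t, hidden_prev):
    # One time step (say, one year's worth of input) of the recurrent simulation.
    hidden = np.tanh(W_in @ x_t + W_rec @ hidden_prev)
    return hidden, W_out @ hidden
```

Nothing about the 72 hidden units, or the weights among them, is interpretable in socio-political terms; that is exactly the situation described in what follows.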

What has he learned about revolution? That there must have been (even approximately) 72 units involved? Certainly not. If the "hidden" units had corresponded to something in particular (say, to political leaders, or parties, or derivative socio-political variables), that is, if the network had been symbolic, then perhaps he would have a case. Instead, he must simply repeat the mantra that they constitute "a distributed representation of the situation," and that the network is likely a close approximation to the situation because it plausibly simulates so many different variants of it.

It must be concluded, however, that he has not learned very much about revolution at all. The simple fact of having a working "simulation" seems to mean little. It is only if one can interpret the internal activity of the simulation that the simulation increases our knowledge; i.e., it is only then that the simulation counts as a scientific theory worthy of consideration.

One way we psychologists might try to avoid the fate of our fictional connectionist historian is to claim that connectionist units do correspond to something closely related to the cognitive domain; viz., the neurons of the brain. Whether this is to be considered an analogy or an actual literal claim is often left vague by those who suggest it. Most connectionists currently seem wary of proclaiming too boldly that their networks, indeed, directly model the activity of the brain (see, e.g., McClelland, Rumelhart, & Hinton, 1986; Smolensky, 1988).

The general aversion to making very strong claims with respect to the relation between connectionist models and the brain is not without good reason. Crick and Asanuma (1986) describe five properties that the units of connectionist networks typically have but that are never, or only rarely, seen in neurons, and two other properties found in neurons that are rarely found in the units of connectionist networks. Perhaps the most important of these is that the success of connectionist models seems to depend upon any given unit's being able to send excitatory impulses to some units and inhibitory impulses to others. No neuron is known to do this, and even as strong a promoter of connectionism as Paul Churchland (1990, p. 221) has recognized this as a major hurdle to be overcome if connectionist nets are to be taken seriously as models of brain activity. What is more, despite some obvious, but possibly superficial, similarities between the structure of connectionist units and the structure of neurons, there is currently little hard evidence that any specific aspect of cognition is instantiated in the brain by neurons arranged in any specified connectionist configuration.
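
The specific point about excitation and inhibition is easy to check in any network of the kind sketched earlier. The function below, which assumes a weight matrix arranged with one column of outgoing weights per sending unit (as in the earlier sketches), counts the units whose outgoing connections carry both positive and negative weights. For randomly initialized or typically trained networks, nearly every unit is of this mixed kind, whereas a real neuron is generally taken to be excitatory or inhibitory, but not both.

```python
import numpy as np

def count_mixed_sign_units(W):
    """Count sending units whose outgoing weights include both signs.

    W[j, i] is assumed to be the weight of the connection from sending unit i
    to receiving unit j, as in the earlier sketches.
    """
    outgoing = W.T                                   # one row of outgoing weights per sender
    has_excitatory = (outgoing > 0).any(axis=1)      # sends at least one positive weight
    has_inhibitory = (outgoing < 0).any(axis=1)      # sends at least one negative weight
    return int(np.sum(has_excitatory & has_inhibitory))
```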

This having been said, it would appear at present that the only way of interpreting connectionist networks that would make them serious candidates for theories of cognition is as literal models of the brain activity that underpins cognition. This means, if Crick and Asanuma are right in their critique, that connectionists should start restricting themselves to units, connections, and rules that employ all and only principles known to be true of neurons. Other interpretations of connectionist networks may be possible in principle, but at this point none seems to have appeared on the intellectual horizon. Without such an interpretation, connectionist modelers are left, more or less, in the position of the fictional connectionist historian whose tale was told above. Even a simulation that succeeds in transforming certain inputs into the "right" outputs does not tell us much about the activity it is simulating unless there is a plausible interpretation of its inner workings. All the researcher can claim is that the success of the simulation confirms that some connectionist architecture is involved, and perhaps something very general about the nature of that architecture (e.g., that it is self-supervising, recurrent, etc.). There is little or no confirmation of the specific features of the network because so much of it is optional.

Now, it might be argued that this situation is no different from that of early atomic theory in physics. Visible bits of matter, and their interactions with other bits of matter, were explained by the postulation of not just thousands, but millions upon millions of theoretical entities of mostly unknown character, viz., atoms. This, the argument would continue, is not so different from the situation in connectionism. After all, as Lakatoš (1970) taught us, new research programs need a grace period in the beginning to get themselves established. Although I don't have a demonstrative argument against this line of thought, I think it has relatively little merit. We know pretty well what atoms are, and where we would find them, were we able to achieve the required optical resolution. Put very bluntly, if you simply look closer and closer and closer at a material object, you'll eventually see the atoms. Atoms are, at least in that sense, perfectly ordinary material objects themselves. Although they constitute an extension of our normal ontological categories, they do not replace an old, well-understood category with a new, ill-understood one.

By contrast, the units of connectionist networks (unless identified with neurons, or other bits of neural material) are quite different. They are not a reduction of mental concepts and, as such, they give us no obvious path to follow to get from the "high level" of behavior and cognition to the "low level" of units and connections. That connectionism is not a reductive position is in fact often cited as one of its strengths but, if I am right, it is also the primary source of the problems I have been discussing here.

To conclude, it is important to note that I am not arguing that connectionist networks must give way to symbolic networks because cognition is inherently symbolic. That is an entirely independent question. What I am suggesting, however, is that the apparent success of connectionism in domains where symbolic models typically fail may be due as much to the huge number of additional "degrees of freedom" connectionist networks are afforded by virtue of the blanket claim of distributed representation across large numbers of uninterpreted units as to any inherent virtues connectionism has over symbolism in explaining cognitive phenomena. Until we work these issues out, the apparent success of connectionist models of cognition must remain somewhat suspect.

References

Atkinson, R. C. & Shiffrin, R. M. (1971). The control of short-term memory. Scientific American, 225, 82-90.

Berwick, R. C. (1985). The acquisition of syntactic knowledge. Cambridge, MA: MIT Press.

Churchland, P. M. (1990). Cognitive activity in artificial neural networks. In D. N. Osherson & E. E. Smith (Eds.), Thinking: An invitation to cognitive science (Vol. 3, pp. 199-227). Cambridge, MA: MIT Press.
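
Crick, F. H. C. & Asanuma, C. (1986). Certain aspects of the anatomy and physiology of the cerebral cortex. In J. L. McClelland, D. E. Rumelhart, & the PDP Research Group (Eds.), Parallel distributed processing: Explorations in the microstructure of cognition (Vol. 2, pp. 333-371). Cambridge, MA: MIT Press.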

Lakatoš, I. (1970). Falsification and the methodology of scientific research programmes. In I. Lakatoš & A. Musgrave (Eds.), Criticism and the growth of knowledge (pp. 91-196). Cambridge: Cambridge University Press.

McClelland, J. L., Rumelhart, D. E., & Hinton, G. E. (1986). The appeal of parallel distributed processing. In D. E. Rumelhart & J. L. McClelland (Eds.), Parallel distributed processing: Explorations in the microstructure of cognition (Vol. 1, pp. 110-146). Cambridge, MA: MIT Press.

Smolensky, P. (1988). On the proper treatment of connectionism. Behavioral and Brain Sciences, 11, 1-73.