Heaps, Bits, Hunks, and Minds(1)
The results of severing the great cerebral commissures in humans are by now well-known. I will not rehearse the details here; they are ably documented elsewhere.(2) It will suffice if I merely state my general reading of the results. As I see it, the high points are these: (1) In their normal daily activities commissurotomized subjects appear almost entirely normal; (2) under controlled conditions, when sensory input can be given to one cerebral hemisphere but not the other, it appears that the unstimulated hemisphere remains ignorant of the content of the stimulus (the evidence here resting on, and contributing to the confirmation of, lateralization of language ability and left/right motor control); (3) occasionally subjects appear to suffer conflicts of desire expressed, for example, by the left hand's attempting to undo the work of the right.
A wholly natural response to these results is to come to believe that commissurotomized subjects ('split brain' subjects) have split minds: that associated with each subject are two centres of consciousness, or two minds. Each hemisphere might be viewed as a potentially independent data processor which in normal subjects is tied by the data transfer across the corpus callosum into a single system which uses the capabilities of both hemispheres; in split brain subjects the transfer ceases, and the two hemispheres become independent systems. The rough conclusion then would be that most human beings have one mind, split brain subjects two.
Against that conclusion, of course, stands the evident normalcy of the split brain subjects in their quotidian rounds. Their activities display, for the most part, the same degree of integration and single-mindedness of purpose that ours do. Even the mildest flirtation with the paradigm case argument would make one uncomfortable about the conclusion that each commissurotomized human constitutes two persons.
Not much importance attaches, perhaps, to answering the question whether in the cases at hand we have one person to a body. Aside from some minor ethical puzzles, what matters is whether the split brain results reveal difficulties in our conception of persons. Thomas Nagel, for one, thinks they do.
Nagel's position is not easy for me to get a grip on, so my discussion may display some unintended injustice. His view of the ordinary conception of mind is that of a strongly unified centre of activity, a transparent centre, so to say, which does not admit of decomposition into parts. My impression is that he sees the ordinary concept of mind as that of a thing which is, in some categorical sense, indivisible. One must stress, of course, that Nagel is sketching an account of a concept; he is not, for example, saying that there are any minds. He thinks that the split brain results create havoc for that everyday concept of mind. His argument runs roughly as follows:
We are forced to say that in split brain subjects there are (at least) two centres of consciousness. The integration and coherence of the subjects' everyday behaviour is to be accounted for by (1) exposure of the two hemispheres to the same environment, and (2) massive subcortical communication between the two hemispheres. (Interhemispheric communication might include transfer of affect through the brain stem (sometimes intentionally exploited by one hemisphere), subtle cuing through muscle movements, and so on.) The model forced on us seems to be that of two distinct minds undergoing very similar experiences and maintaining a continuous dialogue of extremely good data quality. (This is not at all incredible: the behaviour of twins who have little contact with anyone except each other might disturb us.)
If we reflect on the situation of normal subjects, Nagel thinks, we see that the corpus callosum is simply a much better device for communicating between the hemispheres. The massive transfer of data made possible by this mass of neural tissue permits behavioural integration even in the contrived experimental situations in which the split brain subjects reveal the separateness of the hemispheres. But no difference in kind can be appealed to: dialogue is dialogue, however carried on. So, it would appear, there are (at least) two minds associated with each body that has an intact corpus callosum:
The co-operation of the undetached hemispheres in controlling the body is more efficient than the co-operation of a pair of detached hemispheres, but it is co-operation nonetheless.(3)
But things get even worse. Each of the hemispheres is organized into neural subsystems. Presumably the flow of information from subsystem to subsystem proceeds by means not very different from those by which information flows between hemispheres in normal subjects. So the unity of each of the minds we are led to in the first stage of the argument is as spurious as that of the single mind with which we began. No obvious end to this series suggests itself; so we seem to be left with the result that the idea of a definite countable number of minds associated with a body simply disappears. And, though Nagel does not put it quite this way, our concept of a mind is turned out to pasture with phlogiston and the ether, without even the consolation of service at stud.
As an argument, this just won't do. In rough outline, its formal structure is that of the following: Consider two physical set-ups, one a perfect cone of sand, the other two cones linked by a bridge of sand about a third the height of the cones. About the two cone set-up one's natural inclination is to say that there are two heaps of sand; the bridge is merely a connecting link between the two individuals. But if one reflects on the one cone set-up, it is clear that the situation is exactly the same, except for the fact that the connecting bridge is much higher (as high as each of the two individual cones). So we should say of the one cone set-up that there also are two heaps of sand, connected by a bridge. However, once this step is made, it is easy to see that the two individuals so arrived at themselves submit to the same decomposition. Thus, even in a case where the surface of the sand deviates only trivially from the shape of a perfect cone, we would have to say that there was no definite, countable number of heaps of sand. But the concept of a heap is a count-concept, so what we must do is acknowledge the inapplicability of 'heap' to reality.
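The threshold-relativity at work here can be made concrete in a short sketch (the function, its cutoff parameter, and the toy profiles are my own illustration, not anything in the literature): how many heaps a sand profile contains depends on a threshold we are free to set, which is exactly why the regress gets going.

```python
def count_heaps(profile, cutoff):
    """Count the maximal stretches of a 1-D sand profile that rise
    above `cutoff`.  The answer is threshold-relative: lower the
    cutoff and connecting bridges merge peaks into one heap."""
    count = 0
    above = False
    for height in profile:
        if height > cutoff and not above:
            count += 1   # a new stretch of sand breaks the surface
        above = height > cutoff
    return count

# Two cones of height 3 linked by a bridge of height 1.
two_cones = [0, 3, 1, 3, 0]
print(count_heaps(two_cones, cutoff=2))    # judged at the peaks: 2
print(count_heaps(two_cones, cutoff=0.5))  # judged at the bridge: 1
```

Neither cutoff is privileged; 'heap' simply does not legislate one, and that is no scandal for the concept.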
Let us resist the temptation to say there is a grain of truth in this. What is clear is that there are many cases in which the question whether O1 and O2 are two distinct O's, or merely parts of one O, is settled by seeing how closely linked O1 and O2 are. Since closeness of link is a matter of degree, it will hardly be surprising that in many cases the question whether one has O or two is not worth answering. The most obvious cases are the ones analogous to that just given: many nouns serve to form somewhat undistinguished sortals out of mass terms; 'heap', 'bit', and 'hunk' provide criteria for individuation and re-identification in the way that more respectable sortals do, if weakly.
Whether a noun is a weak or a sturdy sortal is, I suppose, a matter of degree ("dune" may be sturdier than "heap"). At one extreme there are things whose existence depends simply on a reasonable amount of spatial coherence of some stuff, at the other, things whose existence depends on a complex and systematic interrelation of a number of parts. These differences are real ones - if you cut a bit in half, you have two bits, which is rare with typewriters - and they have some bearing on how fluid the identity and counting conditions for a concept will be. But even very sturdy sortals can display the kind of counting indeterminateness that "heap" does (imagine a series of cases with siamese twins at one end and a man with four legs at the other).
The existence of such "heap of sand" indeterminateness shows that Nagel's argument is fallacious at one level of representation of its form. One might acknowledge this and still deny that this notion of a weak sortal was of any constructive use in the discussion of split brain results. I think this response would be wrong, for reasons which I will try to unfold. But by way of softening up, let me mention the kinds of questions that, it seems to me, would occur immediately to anyone wondering whether split brain subjects, in their everyday life, were single or double-minded. They would have to do with the amount of information being transmitted from one hemisphere to the other, with the mechanism of the transfer (especially so far as matters of dependability and quality, compared with the normal case, are concerned), and with the degree to which something like inferences had to be attributed to the receiving hemisphere. So, at least, went my hipshot reaction.
The reaction suggests - again, at least to me - that the weak sortal model might be a not unilluminating first stab at the problem. The unity of a mind might consist simply in the appropriate amount and kind of information passing from one part of the brain to another, just as the unity of a heap consists in the appropriate size and arrangement of the stuff linking any peaks that rise above their surroundings. If something like this were near right, one would expect the identity and individuation criteria for minds to show considerable weakness and mushiness when pushed a bit; since I think that they do, I think the suggestion has something to be said for it straight off.
The best way to support a suggestion of this kind is to examine some imagined examples to see whether the correct identifying and individuating judgements concerning them sit easily with the rough account of the concepts involved. My cases are of two kinds: first I will discuss identifying and individuating problems for items about whose identity we perhaps care less than we do about that of minds, namely data processing systems (as made, e.g., by IBM); second I will examine some science-fiction cases where what are being identified and individuated really are minds, but ones whose physical basis is arranged somewhat differently from ours.
Let us imagine a data processing system (this may not correspond too closely to any actual system) which consists of several physically distinct subunits. Imagine the subunits to be a fast access memory, a slow access memory, assorted input and output peripherals, and a central processing unit which handles computations and system control. These units will be wired together in ways which permit the system to do the necessary read/write operations, and to drive the appropriate output devices.
We imagine two of these systems side by side, each with its own set of subunits; that is, each memory, and each peripheral, would link with only one CPU, and the CPU's would not be interconnected. Jobs would be submitted to one or the other of the systems at the user's discretion. Unquestionably there are here two data processing systems. Communication between the two systems is limited to such trivial cases as the manual transfer of card decks from an output device of one system to an input device of the other.
The organization just described would not, of course, be long tolerated in practice. If one had two sets of memory devices they would be much more efficiently used if each CPU had access to both. For long term storage of such things as the standard repertoire of programs, it would be foolish to have duplication. For short term storage of data relevant to a particular job, there would be obvious benefits to allowing one CPU to 'invade' the unused space of the other when it ran up against the limits of its own memory. Naturally it would be necessary when these access channels were set up to design the Operating System in such a way as to ensure that the two CPU's did not interfere with each other (in the way that memory regions are prevented from clashing when a single CPU is multiprogrammed).
Similarly, advantages are to be gained by integrating the peripherals of the two systems. If input from any device can be assigned to either CPU, a common queue can be set up, with the decision to which CPU a job will be sent left up to the Operating System.
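A toy sketch may make the shared-queue arrangement vivid (the class names and the round-robin policy are my own invention, not any real operating system's): once a single scheduler owns the queue, jobs land on whichever CPU comes up next, and the queue no longer belongs to either processor in particular.

```python
from collections import deque

class CPU:
    """A toy processor that simply records the jobs it runs."""
    def __init__(self, name):
        self.name = name
        self.log = []

    def run(self, job):
        self.log.append(job)

class SharedScheduler:
    """A single Operating System dispatching one common job queue
    to a pool of CPUs, treating them as interchangeable resources."""
    def __init__(self, cpus):
        self.cpus = cpus
        self.queue = deque()

    def submit(self, job):
        self.queue.append(job)

    def dispatch(self):
        # Round-robin over the pool; any job may land on any CPU.
        i = 0
        while self.queue:
            self.cpus[i % len(self.cpus)].run(self.queue.popleft())
            i += 1

cpus = [CPU("A"), CPU("B")]
os_ = SharedScheduler(cpus)
for job in ["j1", "j2", "j3", "j4"]:
    os_.submit(job)
os_.dispatch()
```

Nothing in the submission interface any longer distinguishes 'system A' from 'system B'; the user simply submits to the scheduler.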
Integration of memory and of input/output devices already presupposes some degree of integration of the CPU's, since the Operating System must engage both. Further gains in efficiency would presumably be obtained by linking the CPU's directly, so that larger jobs could be handled, and so that the existing procedures for dividing a CPU into zones for multiprogramming of small jobs could be extended. With this final integration, which has the effect that information can pass from CPU to CPU as readily as within a single CPU, one would clearly be prepared to say that there was now just one data processing system, not two.
There are many changes that could be rung on this kind of story. Some might make the judgement that there is a single system less clear. Suppose, for example, that the machine codes of the two original systems were quite different; an interpretive stage would have to be interposed at each interaction point. Or one might suppose that the CPU-CPU linkage were rarely engaged, coming into play only when the overall load on the system(s) required it. These alterations in the case might motivate a weakening of the judgement that there was only one system. I suspect that whether they did would depend on the details of particular examples.
What matters for present purposes is simply that there are clear cases where one must say two systems are present, and ones where one must say one system is. And the difference between them is a difference in degree. At least I find it impossible to believe that anyone would care to maintain that when subunits are fully integrated, including CPU's, there are nevertheless two systems which happen to communicate very well, or that anyone would hold that somewhere in the transition from the two-system case to the one-system case something special happened.
For a wide range of intermediate cases, I should think, it would be fatuous to expect to be able to make numerical judgements, and daft to care. If we made it common practice to build one or more of the intermediate cases, we would not long agonize over whether to call them one system apiece or two; we would take a cue from our forebears and their ponds, puddles, inlets, and arms, and begin to talk about, say, Mark VII Semi-tandem Processing Rigs. When we need to count, we make a count concept.
No-one cares much about the countability of mechanical information processing systems, I suppose, while many, including Nagel, care a lot about the countability of (human) persons. But I do not see any reason to expect Nature to be kinder than IBM might be. Up to now, for the most part, what we have known of human beings has supported the view that they come one person apiece. So far as we yet know, split brain subjects may fall somewhere in the penumbra: too much integration of subsystems to say two processing systems, too many important gaps to say just one. It must be remembered that there is neural communication between the hemispheres even without the corpus callosum, and probably considerable communication by non-neural means (this might be analogous to communication between CPU's in a high-level language with an interpreter at either end).
So far we have been rather like people who might live on a flat plain, featureless but for hard-edged obsidian cones. They would never have trouble counting heaps. And they might be bothered by their first encounter with a sand and gravel yard. Nature's kindness would have prepared them ill. But there is no reason they could not cope. And their coping would not consist in denying that there ever were any heaps, when it's as plain as the nose on your face that there are.(4)
Perhaps it would have been better for us had Nature not spoiled us in respect of our concept of a mind. Thinking about split brain research may be our own introduction to the school of hard knocks; our easy counting principle, "one body, one mind", may have to give way to something more complicated. The split brain cases lead to contemplation of the possibility that with one body there may be two minds, or worse, that there is not less than one, not more than two, and there's no more to say. Interesting as this direction is, I suspect that the opposite one - one mind, several bodies - may be conceptually more fruitful.
Imagine a species equipped with sensory and manipulative apparatus much like ours, but with brains somewhat differently organized. A single organism would have a smaller brain than one of us does, but each would have an ultrasonic broadcaster and receiver of high information capacity. Suppose it possible for the information flow among organisms to match in quantity that among our neural subsystems. Given this, a set of several bodies would exhibit the kind of integration of behaviour that a single human body normally does.(5)
If we imagine members of this species to arrange themselves nicely into isolated groups, the members of each group always staying close together, we have little difficulty imagining ourselves counting the number of minds in a given area. But we can, of course, ring changes here too. Some would create very little difficulty for counting: Suppose that from time to time one of these persons detaches a unit of three or four bodies to scout out unfamiliar territory ("let your fingers do the walking"). For a while, communication between that unit and the main group would not exist; on return, the smaller unit would be re-integrated. It seems clear that we should say that the original mind persisted throughout the operation, just as we say a heap does if we remove a bit of it for a while and then put it back. Of course, we can imagine cases where nothing is clear, just as we can for heaps.
I will not spell out cases; readers will probably already have thought of several as good as any I can make up. But before closing this off I wish to make a brief point about the utility of considering these kinds of cases.
When thinking about split brain subjects there is a temptation to try to imagine what it would be like from the inside, and to suppose that since one can't visualize being no exact number of persons something has gone drastically awry. Certainly Nagel seems to have this kind of worry. The advantage of working with the kinds of cases I have just alluded to is that one can't really visualize what it's like to have the experience of even a perfectly normal easily countable many-bodied mind. So one is not tempted to think that one should approach these questions by a kind of introspective verstehen. As far as I can see, if one wants to explore the possibilities allowed for by the concept of a mind, introspection is quite the wrong path to follow.
Let me illustrate with an example. I take it that anyone attempting to decide by introspection whether the relation 'x is an element of the same consciousness as y' is transitive would answer in the affirmative. Consider, however, a small modification of our imagined species. Suppose that rather than being tight-knit and widely separated, three of them, A, B, and C, are as follows:
A shares one of its bodies with B, and B shares a different one with C. What "sharing" comes to is explained in part by the generalization that, for these items, if X and Y both belong to S, then S can (by behavioural evidence backed up by an account of the information processing mechanisms involved) make direct phenomenological comparisons of sensory input to X and Y (there will be more to belonging than this, of course). Call an unshared body of A's, W, the one shared by A and B, X, by B and C, Y, and an unshared body of C's, Z. Suppose there to be sensory inputs w, x, y, z, in the corresponding bodies. A can compare phenomenologically w and x, so w and x are elements of the same consciousness (A's); x and y are elements of B's consciousness, but y is not an element of A's; similarly for y and z, mutatis mutandis. So we have Rwx, Rxy, Ryz, but not Rwz. One could resist this conclusion by appealing to a difference between sense-data and sensory inputs, but I think sufficient detail about the mechanisms of perception for our imaginary creatures could easily make such a move seem woefully ad hoc.
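The counterexample can be checked mechanically. In this sketch (representing each mind as the set of its bodies is my own device, chosen only to mirror the example), R holds between two sensory inputs just in case some single mind contains both receiving bodies:

```python
# Minds modelled as the sets of bodies belonging to them.
A = {"W", "X"}   # A's bodies: W (unshared) and X (shared with B)
B = {"X", "Y"}   # B shares X with A and Y with C
C = {"Y", "Z"}   # C's bodies: Y (shared with B) and Z (unshared)
minds = [A, B, C]

def R(body1, body2):
    """'The input to body1 is an element of the same consciousness
    as the input to body2': some one mind contains both bodies."""
    return any(body1 in m and body2 in m for m in minds)

# Rwx, Rxy, Ryz all hold ...
assert R("W", "X") and R("X", "Y") and R("Y", "Z")
# ... but transitivity fails: not Rwy, and a fortiori not Rwz.
assert not R("W", "Y") and not R("W", "Z")
```

Nothing hangs on the particular representation; any mechanism with the sharing structure described will break transitivity in the same way.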
I think our concept of mind has always been resilient enough to accommodate this type of example. Science fiction abounds with minds splitting, merging, branching and dissolving. So I do not think the concept is in danger from the split brain results. One could even hold a Cartesian theory of minds(6) while acknowledging that "mind" was a weak sortal; though Nagel is probably right to emphasize our coming to think of ourselves as physical data processing devices as the source of real worries about counting. These worries are disturbing to us only because Nature has been overly kind in the experience She has given us; had She been less kind and more various we would be as easy with minds as with heaps, bits and hunks. Split brain research and computer architecture are beginning to repair Nature's omissions.
There is, however, a sense in which I am agreeing with Nagel: to compare 'mind' with 'heap' is to make a comment about the place of minds in the ultimate ontology. Where I disagree with Nagel is in his apparent belief that our prototheorizing ancestors were committed by their concepts to some rather dubious metaphysics. For what it's worth, I think the concept of mind is sturdy enough to stay by the hearthside, to continue about its daily chores as always. And no doubt it will always play some role in our thought, along with the new concepts we will learn from neurophysiology and from our future mechanical companions and successors.
1. This was read at the University of Manitoba, 1976, and at the Atlantic Provinces Philosophical Association meetings in Charlottetown, 1977.
2. An excellent set of references may be found in Thomas Nagel's 'Brain Bisection and the Unity of Consciousness', Synthese, v. 22, no. 3/4, May 1971. I also draw on a talk given by J. E. Bogen to the Canadian Philosophical Association, June 1974. Added 1995: Only on the order of a dozen operations were done, since after a while the patients' epileptic symptoms returned; the words-to-cases ratio for split brains must be pretty high (J. E. Alcock, personal communication).
3. Nagel, op. cit., pp. 410-411.
4. This kind of coping is fairly common, I think. Consider the shift from a creationist to an evolutionary biology, and what happens to 'species' thereunder.
5. Added 1995: Great minds! Vernor Vinge, in A Fire Upon the Deep, New York: TOR, 1992, imagines just such creatures.
6. But not, perhaps, a Sartrean theory of consciousness.