Step 1: Hmm.
You are in a dark place. You are at rest. The pallet is comfortable. But the gear that surrounds you and locks your head in place, and the whir of the PET scanner, make your rest strange. And you are waiting, either for something you have been prepared to expect, or just waiting for the unknown.
As you rest, the scanner records the metabolic activity across your brain, as if it were carving your brain into ten thousand tiny boxes — “voxels” (like pixels, but in 3D) — and taking a measurement for each one. The metabolic activity — so it is assumed — corresponds to the underlying neural activity. Though you are at rest, your brain is a storm of activity, and this is recorded to form a baseline or control image: You, thinking about this and that, musing, hmm. For forty seconds you “rest.”
Step 2: Look, listen, think…
Then it is time for you to get to work. Perhaps something happens: a checkerboard flashes on a screen before you; something tickles your left toe; your arm begins to itch. Perhaps you watch for certain shapes or movements, and press a button if you see them. Or maybe you recite words that begin with “C.” Or recite them silently. Or think of the saddest thing you can remember. The experimenters have told you what to do. As you do it, different areas of your brain get going, like muscle groups, pumping thoughts. The whirring continues, and now the scanner is detecting your brain in action. Another forty-second storm, different from the musing storm at rest.
Step 3: 2 minus 1
Computers process the scanner data to produce images of you in these odd but human moments. Thousands of voxels, each glowing on screen. The anatomy is clear, but in every corner the brain is busy. The raw images seem to report only that all the brain is active, all the time. But now the experimenters take the task image and subtract the rest image from it. What remains is a “difference image.” The difference image shows what additional resources were needed to perform the task. Difference images inspire the dominant interpretive approach to PET and other functional brain imagery studies:
Localization of function in regions of the brain
Difference images are the lingua franca of PET studies. Every PET experiment culminates in a difference image which reveals one or more hot spots of differential activation characteristic of the task, when it is compared to a control state. The goal of functional brain imaging has been to localize particular cognitive operations in particular regions of the brain, or, alternatively, to localize operations in particular “dedicated networks” of cooperating regions specialized to work together to discharge a certain mental operation.
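The voxelwise subtraction behind a difference image can be sketched in a few lines. Everything below is invented for illustration: a toy 10×10×10 voxel volume, Gaussian noise, a planted “hot spot,” and an arbitrary activation threshold. Real PET analysis adds registration, smoothing, and statistical testing across subjects.

```python
import numpy as np

# A toy voxel volume standing in for registered PET data. All numbers are
# invented for illustration only.
rng = np.random.default_rng(0)
shape = (10, 10, 10)

rest = rng.normal(loc=100.0, scale=1.0, size=shape)       # baseline scan
task = rest + rng.normal(loc=0.0, scale=1.0, size=shape)  # task scan
task[2:4, 5:7, 3:5] += 20.0   # plant a simulated task-specific "hot spot"

difference = task - rest       # the difference image: task minus rest

# Voxels whose task-minus-rest change exceeds a threshold count as activated.
activated = difference > 10.0
print(int(activated.sum()))    # → 8 (the eight planted hot-spot voxels)
```

The localist reading then treats the suprathreshold cluster as the site of the task-specific processing; the rest of the argument asks whether that reading is warranted.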
For example, typical PET studies might conclude as follows:
- Our results indicate localization of different codes in widely separated areas of the cerebral cortex (Petersen et al. 1988).
- These data localize the vigilance aspects of normal human attention to sensory stimuli…. (Pardo et al. 1991)
- A comparison of the results of PET scans of subjects viewing multi-colored and black-and-white displays has identified a region of normal human cerebral cortex specialized for color vision. (Lueck et al. 1989)
- These experiments demonstrate the task dependence of visual processing, even for very closely related tasks, and the localization of the temporal comparison component involved in orientation discrimination in human area 19. (Dupont et al. 1993)
- We thus provide direct evidence to show that… different areas of the human prestriate visual cortex are specialized for different attributes of vision. (Zeki et al. 1991)
- The implications of these results are discussed, and it is argued that they are consistent with localization of a lexicon for spoken word recognition in the middle part of the left superior and middle temporal gyri, and a lexicon for written word recognition in the posterior part of the left middle temporal gyrus. (Howard et al. 1992)
This overall research strategy presupposes that the functions identified in cognitive psychology — memory, visual perception, etc. — are specifically localized in identifiable circuits or pathways through the brain.
But what else could it be? Until a few years ago, local function was the only game in town. But then along came the “connectionists,” a diaspora of researchers committed to the hypothesis that cognition could be understood as the cooperation of a network of simple neuron-like processors all working simultaneously. Such a system, in other words, would employ “parallel distributed processing.” The intelligence of its behavior would result not from the (very limited) endowment of any particular neural unit, but rather from the concerted parallel effort of all for one. The system’s repertoire of behaviors would be stored in the connections among the units. The strength or “weight” of various connections would bring it about that a pattern of inputs would issue in an appropriate pattern of motor outputs. Hence, “connectionism.”
Connectionists discovered a number of intriguing general principles through constructing and observing their networks. The first was that their approach worked. Many examples of human cognition could be convincingly simulated by connectionist neural networks. While success has been sweet, the approach has an inherent difficulty: Unlike the programs deliberately designed to execute a certain function, a connectionist network “grows into” its proper functioning, through a process of simulated learning. This has the consequence that one cannot always know in advance how the network does what it does. One must crack it open and watch the actual activity across the neural units, and from there attempt to derive the general rules by which the network works.
Connectionists have done just that, and have generally found that at every stage of processing there are no local specialists, no dedicated path. Instead, many units are active and the patterns of activity overlap. These overlapping patterns of activity were dubbed “distributed representations.”
The question then returns to the brain. It is, of course, a neural network. Connectionist simulations may encourage the search for distributed representations, but we cannot presuppose their existence (or nonexistence) without empirical study. The argument proceeds as follows:
1. The distinction between local and distributed processing is, of course, not absolute. We might usefully mark the continuum of implementation with four subdivisions.
First, to begin at the most localized, there could be networks of dedicated components. If the brain were a processor of this sort, anatomically defined regions would each be the sole locus of specific cognitively defined functions. This is modularity with a vengeance, radical compartmentalization of function.
Second, there could be networks of dedicated (sub)networks. In the brain, one would expect to find subnetworks of multiple anatomically defined components, where each subnetwork is the sole site for specific cognitive functions. Although in this sort of brain a cognitive function is distributed over several regions, this kind of processing is still localized insofar as each subnet is discrete. Although anatomically spread, the subnet as a whole is nonetheless a dedicated processor, uniquely charged with one job in the overall neural economy.
Third, there could be sparsely distributed networks. Here anatomically defined brain regions are multifunctional. A region may be recruited to join a subnetwork to compute one function, and later recruited to a different subnet to compute a different function. Thus, subnetworks would overlap in their anatomy. This form of distribution is sparse, however, insofar as particular brain regions are not omnifunctional. That is, each function is computed by a subset of regions, rather than the whole brain. The engaged subnetworks overlap, but the adaptability of each region is limited to a fixed list of functions.
Fourth, there could be fully distributed networks. Here every anatomically defined brain region has a part to play in every cognitive function, and no region is out of play. Karl Lashley’s proposed “equipotentiality” is an early version of fully distributed processing, for example.
The endpoints of this continuum are distinct, but the second and third options are open to an equivocation which is endemic in the PET literature: If one queries the brain about the location of a single function, or a few functions, one may discover a subnetwork correlated with that function, but it may not be possible to determine whether the subnetwork is a dedicated subnetwork, the “f system,” where f is the function under investigation, or whether the subnetwork correlated with that function is a snapshot of a sparsely distributed network in one of its many overlapping configurations. To resolve this ambiguity, one must shift from function-based research to component-based research. In other words, ask not “Where is this function computed?” Ask instead “What is this component doing?” If the majority of anatomically defined regions each handle just one or a few functions, some form of localization (including the specialized subnet version) will be supported. But if the majority of anatomically defined regions are each revealed to be multifunctional, this will support some version of distributed processing.
2. PET experiments abound. Their abundance invites a meta-analysis, and the flexibility of the Brainmap database invites exactly the questions just mentioned. To begin with some overview, Brainmap archives 733 distinct experiments (PET, MRI, and EEG), with a total of 7508 local maxima of activation. That is a mean of 10.24 activation peaks per experiment. It is a rare experiment where all these peaks are located in a single region, suggesting distribution of function. But this in itself is not definitive, since the average experiment might just as well be picking out a dedicated subnetwork of 10 (plus or minus) components.
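The averaging just cited is simple arithmetic and can be checked directly:

```python
experiments = 733   # distinct experiments archived in Brainmap
peaks = 7508        # total local maxima of activation across them

mean_peaks = peaks / experiments
print(round(mean_peaks, 2))   # → 10.24 activation peaks per experiment
```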
A more decisive analysis would work through a list of components, asking of each whether it is a locus of activation for specific functions. Perhaps the narrowest anatomical specification of the brain accessible to PET discrimination is the Brodmann area. Brodmann areas are distinct both in their geography and in their cytoarchitecture, two factors which indicated to Brodmann and generations to follow that each of these numbered areas was functionally distinct. Well, are they? Brainmap indexes 41 Brodmann areas, enabling us to ask of each which cognitive tasks generate a local maximum in that area. Table 1 compiles the results.
The table shows that each Brodmann area is cited by an average of about 14 separate papers. If all the experiments engaging a particular Brodmann area probed the same function, this observation would be compatible with localism, but inspection of Table 1 reveals that this is not the case. A few areas seem so far to be specialized, but the majority of them light up in scan after scan, and during very different cognitive tasks. Note too that the case for distribution suggested by Table 1 is strengthened if one factors in the many steps of PET study design that favor localist interpretation. Foremost among these interpretive filters is the “subtraction method.” Each image is in fact a difference image, the result of subtracting a control condition from a test condition. Often the controls are themselves components of the task. For example, to locate semantic processing the experimenters might use a control scan of subjects reading pseudowords, to isolate just the distinctive components of the task in question. Even after this selective pre-screening of the data, however, the table shows distinct multifunctionality for most of the areas. Moreover, the table compiles only Brodmann areas that the various paper authors explicitly mentioned in their studies. Not every author carves the brain along these joints. Finally, the 82 papers indexed here do not, of course, exhaust all the potential functions of the mind.
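The component-based tally behind a table like Table 1 has a simple shape: for each Brodmann area, count the distinct tasks that produced an activation peak there. The records below are invented stand-ins for database entries, not actual Brainmap data; only the form of the query is the point.

```python
from collections import defaultdict

# Hypothetical records: each pairs a paper's task with the Brodmann areas
# where it reported a local maximum. Numbers are invented for illustration.
reports = [
    ("word reading",         [17, 18, 37, 44]),
    ("color discrimination", [17, 18, 19]),
    ("orientation task",     [18, 19]),
    ("verb generation",      [44, 45, 46]),
    ("sad recollection",     [9, 46]),
]

# Invert the records: for each area, collect the distinct tasks engaging it.
tasks_by_area = defaultdict(set)
for task, areas in reports:
    for area in areas:
        tasks_by_area[area].add(task)

# An area engaged by more than one distinct task counts as multifunctional.
multifunctional = sorted(a for a, ts in tasks_by_area.items() if len(ts) > 1)
print(multifunctional)   # → [17, 18, 19, 44, 46]
```

This is the shift urged above from “Where is this function computed?” to “What is this component doing?”: the query is indexed by component, and multifunctionality falls out as a count.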
3. If the reasoning above is correct, the localist claims must be severely hedged. For each conclusion of the form, “Subnet S computes function f,” we must substitute “Subnet S computes function f, among others.” This is not a trivial emendation. Cognitive neuroscience, if there is such a field, rests on the presupposition that we will ultimately discover bridge laws between the domains of cognitive psychology and neuroscience, particularly neurobiology. Elsewhere in physiology organs and organ systems have particular functions, and the job of science is to determine what these functions are. Localist interpretations of brain function fit into the traditional model, but distributed interpretations create a dilemma for cognitive neuroscience. The cognitive neuroscientist must choose between two midcourse corrections:
- Lemma 1, revise functional types: If the experimental evidence suggests that region R implements functions f1 ∨ f2 ∨ f3…, revisit that functional disjunction to see whether there is a common factor to all of the disjuncts, where that common function is unique to R.
- Lemma 2, revise anatomical types: Seek a coherent way to describe the implementation level of cognitive functions that accounts for anatomically overlapping implementations of particular functions.
Lemma 1 forces a thorough revision of cognitive science, and with it the final abandonment of any realist interpretation of folk psychology or folk phenomenology. In its wake we would find ourselves describing our cognitive function in a language as yet unknown to us. (This prospect is welcomed by the Churchlands, for example.) There is no reason not to embark on this long journey, and I do not see preemptive reasons why it must fail, although it might. But it is the only path only if Lemma 2 is shown to be incoherent. So, what about the second path, the revision of anatomical types? If the standard anatomy of Brodmann areas, not to mention gyri, sulci, and lobes — all the familiar station stops in the brain — is set aside (owing to its failure to map onto functional types), what possible new roadmap could we find?
Next: interpreting the distributed brain….