Consciousness and Its Discontents

Dan Lloyd
Department of Philosophy and Program in Neuroscience
Trinity College
300 Summit Street
Hartford, CT 06106
Office: 203 297 2528
Fax: 297 5358
dan.lloyd@trincoll.edu

Abstract

Our heads are full of representations, according to cognitive science. It might seem inevitable that conscious states are a type of brain-based representation, but in this paper I argue that representation and consciousness each form conceptually distinct domains. Representational content depends on context, usually causal, as shown by familiar cases in which context varies while brain states do not — twin-earth cases and brains-in-vats, for example. But these same cases show that conscious content does not depend on context. The vatted brain, for example, enjoys the same experiences as its in vivo counterpart. The structure of experience — its parts and their distinctive characters — is the dynamic structure of the brain, viewed “from within.” I call this position methodological phenomenalism (MP), and consider its prospects as a foundation for a science of consciousness. I close by considering the implications of MP for the subjective “character” of conscious states. Turning away from representation dissolves the perplexity of subjectivity, leaving hopeful prospects for the scientific study of consciousness.

CONSCIOUSNESS AND ITS DISCONTENTS

In my experience of the world there is perhaps nothing so mysterious as the fact that there is a world in my experience. That phenomenal world presents itself to me. It fills up my experience, and encompasses all that I sense and all that I know. When I admire a sunset or worry about class warfare, neither the sunset nor economic oppression is literally in my consciousness. Rather, my thought is occupied with the sunset-as-it-presents-to-me and class-warfare-as-it-presents-to-me. Their presentation includes everything about them. Even my awareness of the externality of these objects of awareness is a part of their presentation. Their externality lies within the presentation, within the experience. And lately we can add, within my skull. The manifold presentations of consciousness are identical — somehow — with states of my brain.

But the closed circle of the contents of consciousness, what I have called their presenting, is not all there is. My inner world somehow registers a real outer world, where sunsets and, on occasion, class warfare are real. To see the relation between the two worlds, I must step outside my first person perspective and look at myself or others from a third person perspective. Here I find myself not experiencing content within my mind but assigning it to my mind, and to other minds. From behavioral and environmental clues I piece together a picture of a system interacting with the world, where these interactions are controlled or mediated by an internal model of the world. Philosophers and cognitive scientists have long pondered this inner model — or image, or story, or network of beliefs and desires, or data-structure. Its essential feature is also its most perplexing: representationality, a semantic relation between a token or vehicle of representation (in a medium) and what it represents. Representation links the activity in my head with events in the world, endowing some internal states with “aboutness” or “of-ness” (or, in philosophers’ terms, intentionality; see Lloyd 1989, Ch. 1). So what was presentation in my own, first-person case can, from another point of view, be regarded as re-presentation.

How are presentation and representation related? One might hope for conceptual unification, arguing that presentation is just representation viewed under alternative descriptions, or that presentation is one species of representation (as would be required to preserve a belief in unconscious representations). How nice this would be, since representations exhibit a point-of-view and subjective idiosyncrasy appropriate to consciousness, and since so much of cognitive science revolves around representation. Much of the work of building a theory of consciousness would be sped along by having the foundation of representation theory to build on (see, for example, Lloyd 1989, Ch. 7).

But, alas for optimism, I fear that presentation and representation bear no conceptual relationship to each other, but at best only a contingent empirical connection. In this paper, I will argue that the problems of consciousness and the problems of representation do not intersect, in spite of some suggestive analogies. But from that disappointment springs liberating possibilities, perhaps even a new science of consciousness.

I. The Medium is the Message

“Meanings just ain’t in the head,” declared Hilary Putnam some years ago, to which we can simply add, “but states of consciousness are!” Putnam and others have dissociated psychological states from their semantics, developing cases where worldly conditions alter semantic content independently of conditions inside the skull. It is not hard to adapt the traditional thought experiments from this genre to engage intuitions about representation and consciousness. These thought experiments work to establish that representation is a relational property, often quite a complex one, involving conditions external to the organism. Each of them also works to establish the divergence of representational content from the presentational contents of consciousness.

Consider twin-earth, for example (Putnam 1975). This cosmic shell game imagines a planet called twin-earth that is just like earth, except that terran H2O is replaced by XYZ, chemically distinct but otherwise watery. And twin-me is just like me, down to the molecular level. This intricate duplication scheme entails identity of neural state between me and my twin. And that in turn implies that our conscious states of mind are identical — my thought of “water” is phenomenally indistinguishable from my twin’s. Yet the celebrated intuition is that the representational content is different — my thought that water is wet is about H2O, my twin’s about XYZ. Tyler Burge’s anti-individualist arguments have a similar structure and result (Burge 1979, 1986).

Twin-water is a specific example that can be (and has been) easily varied. Alternatively, one can move from retail to wholesale, with the ever-popular tale of the brain in a vat, a disembodied brain living in a virtual world of simulated sensory inputs (Putnam 1981). Were my brain vatted from birth, the representational disruption would be radical. None of my thoughts would be about what I take them to be about. But once again, within the thought experiment, I have the same thoughts as a normal person. They are subjectively indistinguishable (owing to their neurological identity), or in the language of this paper, they present identically.

All these cases have a common structure. If you happen to be a victim of one of these thought experiments, your neural makeup is held constant, while your context varies. The experiments share a common outcome: Some aspect of representational content varies with context, independently of internal state. If we make the highly plausible assumption that states of consciousness covary with neural states, then presentational content is independent of representational content. Indeed, the thought experiments would fail in their purpose if there were any internal clues, conscious or otherwise, to the contextual and semantic scene-changes external to the agent. The steadiness of the brain entails steadiness of conscious content, but representational content rides with the context, changing as it changes.

Each of the above examples engages pretheoretical intuitions about meaning and representation. Yet theories of representation have been advanced that attempt to explain how representations get their content, thus explaining why these thought experiments work. These theories reveal that internal states in an information processing system can bear complex representational content while lacking the complexity to support the presentational intricacy typical of a state of consciousness.

Theories of representation generally identify representations by the roles they play in certain contexts. The critical role played varies, with theories falling into two very general categories. There are what Ned Block calls “short arm” or internalist theories, where the appropriate role involves other representational states within the organism, and there are “long arm” or externalist theories, in which the putative representation stands in a certain relation to events in the world outside the representing system (Block 1986). (And there are theories that combine short- and long-arm components, of which Block’s is one.)

A “short arm” theory might hold that a state is a representation insofar as it can be interpreted as “fitting” inferentially into a network of other interpreted representational states (see, for example, Haugeland 1985, Lloyd 1989, Ch. 2). Under this sort of theory, nothing is a representation all by itself. A token becomes a representation only in the context of a system, since only whole systems can be interpreted. Representing systems are like texts in code, and the job of the cognitive scientist is to decode them. Once the code is understood in general, new tokens can be interpreted by observing their causal antecedents and consequents. Their content then becomes whatever it ought to be in order to preserve the interpretation of the whole system.

A “long arm” theory is exemplified by various means of correlating states internal to a system with states in the world — relevant long-arm relations are usually causal, but in some cases information-bearing connections that are not necessarily causal may serve. Here a token represents what it indicates, or in other words the internal representational state is a reliable sign of an external state of affairs. (See, for example, Dretske 1981.)

Both types of theories are indifferent to the internal structure of the representation itself. Representations can be exceedingly complex, or they can be very simple. The crucial point here, however, is that very simple tokens can represent very complex contents. For example, I could use a doorbell to stand for just about anything in either theory — all that matters is that the doorbell have the appropriate connections to its inner or outer context. If I live in a remote cabin visited by none but the FedEx people, then the doorbell reliably indicates a FedEx arrival, by the long arm theory. If only one FedEx driver serves my route, then the doorbell represents her.
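To make the context-dependence vivid, here is a toy sketch in Python. It is my own illustration, not anything drawn from the representational theories themselves: a maximally simple token, one bit of internal structure, whose content is fixed entirely by whatever reliable cause its context supplies.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Doorbell:
    ringing: bool = False  # the token's entire internal structure: one bit

def long_arm_content(token: Doorbell, context: dict) -> Optional[str]:
    # Indicator ("long arm") semantics: a ringing token represents
    # whatever reliably causes ringing in its context. The token
    # itself never changes; only the context does.
    return context["reliable_cause"] if token.ringing else None

bell = Doorbell(ringing=True)
remote_cabin    = {"reliable_cause": "a FedEx arrival (Veronica, the sole driver)"}
busy_storefront = {"reliable_cause": "any of hundreds of customers"}

print(long_arm_content(bell, remote_cabin))     # same bit, Veronica-content
print(long_arm_content(bell, busy_storefront))  # same bit, different content
```

One bit of internal structure thus carries arbitrarily rich content, which is exactly why nothing about the token's innards could constitute a presentation of that content.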

The possibility of minimally simple representations has been something of a problem for representational theorists, who find themselves professionally obliged to join their critics in discussing the mental life of thermostats. Perhaps a thermostat can represent; perhaps not. But the simplicity of some representations is a deeper problem if representation and consciousness are supposed to coincide. The doorbell in the above example may represent a very complex state of affairs, but by no stretch can we imagine that the doorbell has a phenomenal awareness of Veronica the FedEx driver, even if it is a perfect indicator of Veronica and none other. Doorbells have no phenomenology whatsoever. Or, in the language I’ve adopted here, there may be representation but there’s no presentation. Doorbells (unlike brains) simply lack the internal structure to present a world within.

The gulf between representational content and presentational or conscious content may help explain the heated skepticism in some quarters surrounding the idea of computer mentality. Searle’s well-known “Chinese room” thought experiment merits a new visit in light of the distinction (Searle 1980). As Searle initially describes the room, it is a place where formal symbols are manipulated. By the short arm theory, if the system is complicated enough to carry on a conversation (in any language), then its internal states must have content. Several additions to the basic room buttress the case for representational content. Perhaps the most persuasive is the “robot reply,” in which the room acquires sensory and motor interfaces with the world. At this point, it seems that lingering worries about representation subside. Now the slip of paper inscribed, “Hot day, isn’t it?” is not just vamping but becomes a report of external conditions, the room’s way of communicating its internal representation of a property of the external world. But Searle will have none of it, forcefully dismissing any claim that even this system understands its world. If for “understanding” we substitute “conscious awareness,” we may discern grounds for agreement with him. What we don’t know is whether the internal state of the room has the complexity we would expect to find in the phenomenal presentation of a sweltering day — how, in all its detail, such a day would seem to us.

How Searle’s room works is unspecified, but the innards of Searle’s real target, the digital computer, are not so mystifying. The glory of computers, fully appreciated by Turing and fully exploited forever after, lies in their ability to employ very simple states and basic operations in infinitely elaborated causal contexts. Their representational powers are accordingly virtually unlimited. But as they flash through their microsecond sequences of states, we have to wonder whether any of these states presents a world equal in complexity to the world it represents. Once again we meet the root distinction between presentation and representation. Though widely read and a quick study, the computer is acutely myopic and catastrophically narrow-minded. It may encode a whole world, and dole out its knowledge, but none of its internal processing states seems to flower into an inner world as rich as the outer world or even as rich as the world described in its own memory.

In sum, intuition and theory converge to drive a wedge between two kinds of content, the rich phenomenal content which is presented to an experiencing mind, and the equally rich representational content which can arise in many contexts. The first depends on internal structure while the second depends on dynamic context. The two can vary independently of each other, and so, conceptually, these are two very different domains. Or, if the idea of content is exclusively used to mean representational content, then consciousness is discontented.

II. Toward a science of consciousness

Considering the roller-coaster ride of life, it makes sense that evolution would engineer organisms whose inner worlds tracked the vicissitudes of their environments. But the lesson of the previous section is that we cannot use this happy harmony to assign content to conscious states. They have another kind of content, presentational content. We are quite literally of two minds, a representational mind and a presentational mind. As a vehicle of representational content, the brain is the mind’s canvas, upon which an image of the world is inscribed. The canvas is not much like the world, but a representation of it nonetheless, and through representation the brain becomes a vehicle of one kind of content. Classical materialism, then, identifies the brain with the medium of representation, rather than with the content represented in the medium. But if the arguments in the first section are persuasive, we must add to our understanding of the core identity assumed in materialism. Brain states are the medium (for representational content), but they are the content (of consciousness) as well. The brain becomes not only canvas, but content. Presentational content, my experience of the world, is somehow identical to states of my brain.

A materialist theory of consciousness must accordingly show presentational content to be an aspect of brain function.

Without representation, what becomes of the science of consciousness? It may appear that the disconnection of representation and consciousness discards our best hope of a way into the problem of consciousness. In this section, I address this worry, and ponder the science of presentation, on its own terms.

Good old-fashioned research into the representing brain (or mind) depended on correlating the inner and the outer, indexing internal states by their external content, their “objects of representation.” There were, accordingly, two sets of entities in need of independent identification: One needed a scientifically respectable way of identifying stimuli in the world, and an equally respectable way of identifying the internal states that registered that outer world. For most of the short history of cognitive science, internal states were identified indirectly, based on a subject’s deliberate responses. With the rise of cognitive neuroscience, however, a distinct second window has opened on the internal realm. Various research techniques, like PET, fMRI, EEG, and their combinations, reveal brain responses independently of subjects’ own communicative gestures. (See, for example, Posner and Raichle 1994.) As the century closes, the representational science of mind and brain is moving dramatically toward a three-fold correlation in which cognitive states are indexed by their worldly sources, their behavioral effects, and their real internal tokens. Representation science may soon show us not only what the mental canvas depicts, but the canvas itself.

Now, however, we question the link to the world. Can we make sense of another kind of content, the presentational, without slipping back into the comfortable frameworks of representation? To move toward this goal, let us cast a metaphoric shroud over the world, leaving for consideration only the brain, and the experienced phenomena that arise within it. We might call this stance “methodological phenomenalism” (MP), with a nod to Fodor’s Methodological Solipsism (1980). Fodor’s proposed research strategy was designed to isolate the inboard component of mental states or “narrow content” (which Fodor, like almost everyone else in those days, considered quite apart from questions of consciousness). Methodological phenomenalism isolates the inboard component of consciousness itself. Indeed, if the arguments of the first section are accepted, inboard components are all there is to conscious experience. MP, in short, pretends that we are all brains in vats.

But although MP brackets and discounts the causes of internal states, it spares the effects. That is, the brain may still be probed in exactly the same way it was before: the brain in the pretend-vat still speaks about its supposed circumstances, still pushes illusory buttons (which button-presses we can observe) and still manifests metabolic and electrophysiological activity we can record. We are left, in short, with two kinds of data already in play in good old-fashioned representation science, a glass that is half full.

MP forces us to consider the data of both first-person reports and functional brain imaging in new ways, however. This can be illustrated with a small (Cartesian) meditation: As I write (in a cafe in a city square), I can see a stop light overhanging a busy intersection. To the good old-fashioned representational scientist, a red light inspires the questions, Is the light represented in the mind/brain? If so, how and where? It would be relatively easy to determine if subjects reliably detect the light, discriminating red lights from green despite various distracting contexts or impediments to viewing, and so forth. This would answer the first part of the question, telling us that subjects in appropriate circumstances do indeed form a representation of the light. The how and where questions might be hard in practice, but in principle they are easy too: We scan (with our chosen technology) brains of people looking at red lights, average them (to cancel the noise of stray thoughts), and then compare them to the (averaged) scans of people in very similar perceptual circumstances (e.g. looking at green lights) to isolate just those responses specific to the red light — this “subtraction” removes the representational manifestations of the perceived context, leaving just the activity specific to the red light and no other cause. By this method, then, perhaps a specific area of the brain, or a number of areas, could be identified as the brain’s red light district. At this point the representation scientist is content.
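The subtraction logic is simple enough to sketch in a few lines of code. The illustration below is schematic, in Python with NumPy; the arrays are invented stand-ins for scans, and the point is only the arithmetic of averaging within conditions and subtracting between them.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 20 scans per condition, each a flattened volume of
# 1000 voxels. Activity shared by both conditions (the perceived context)
# appears in both; voxels 100-109 respond only to the red light.
shared_context = rng.normal(0.0, 1.0, size=1000)
red_signal = np.zeros(1000)
red_signal[100:110] = 2.0  # the red-light-specific response

red_scans = shared_context + red_signal + rng.normal(0, 1, (20, 1000))
green_scans = shared_context + rng.normal(0, 1, (20, 1000))

# Averaging within a condition cancels the noise of stray thoughts...
mean_red = red_scans.mean(axis=0)
mean_green = green_scans.mean(axis=0)

# ...and subtracting between conditions removes everything the two
# conditions share, leaving the activity specific to the red light.
difference = mean_red - mean_green
print("top voxels:", sorted(np.argsort(difference)[-10:]))  # ~ 100..109
```

By this arithmetic, the "red light district" falls out as the set of voxels that survives the subtraction.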

But now we practice a little methodological phenomenalism, pretending that the red light is a pure phenomenon, a presentation without a correlate in the world. We might ask, Is the red light presented in the mind/brain, and if so, how and where? But under the normal reading of the question, it refers to the real red light, and thus MP rules it out of bounds. We are limited to exploring the presentation itself, the experience that seems to be of a red light. But this too is distorting, since there is no experience which is exhaustively limited in its content to the presentation of an apparent red light. The red light perception is always an integral part of a more complex scene, and its contribution depends on many factors. It is one sort of presentation to the impatient cab driver at the intersection, another to the father walking a toddler across the street in front of the cab, another to the toddler herself, and yet another to the philosopher sitting in the cafe fishing for parables. The stimulus isolation practiced by the representation scientist in pursuit of an internal token that means “red light” cannot be employed in phenomenology, for there are no presentations of stimuli in pure isolation. Hang the light in darkness and silence in a basement laboratory, and the human subject will still be experiencing a complex scene, a red-light-in-the-dark-room-with-a-psychologist-taking-notes-behind-the-door…. The corresponding isolation of areas of the brain is also inappropriate for presentational content. The presentation I experience never corresponds to the difference between that experience and another person’s in similar circumstances. The red-light-activation-zone does not locate and isolate any experience at all.

The example shows that the “grammar” of presentation differs from that of representation. Scientific methods for isolating variables deliver up whole representations, each indexed to a stimulus, but at best only parts of presentations, parts that may not have a fixed identity in isolation from whole moments of experience. In sum, presentational content is complex. From this observation, the various foci for presentation science emerge:

First, we can characterize the internal complexity of individual presentational states. Each such state is a structure of phenomenal properties experienced in specific relations to one another. Spatial and temporal relations are most apparent, along with myriad abstract relations among the experienced elements. The red light is up there, and it’s about to change. To the driver it means further delay; to the pedestrian, a rushed opportunity to cross; to the melancholy philosopher, a tiny invocation of the myth of Sisyphus. These “nonsensory meanings” are just as much a part of the occurring experience as a glowing redness.

Second, any experience is complexly related to other possible experiences. It is specific, by which I mean it is distinct from other possible experiences. Each experience is a metaphorical point in a space of possible experiences, and that space is orderly. The experience of a sunset is similar to the experience of a sunrise — these two experiences are nearby in the space of possible experiences — while being relatively “remote” from the experience of a bath or a clambake. (Austen Clark 1993 has even argued that this space of phenomenal experience is sufficient grounding for a theory of qualia.)
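The spatial metaphor can be made literal with a toy computation. The vectors below are invented; the point is only that once experiences are coded as points (here, as activation vectors), "nearby" and "remote" become measurable distances.

```python
import numpy as np

# Invented activation vectors standing in for whole-brain states during
# four experiences. Each experience is a point in activation space.
experiences = {
    "sunset":   np.array([0.9, 0.8, 0.1, 0.2, 0.7]),
    "sunrise":  np.array([0.8, 0.9, 0.2, 0.1, 0.6]),
    "bath":     np.array([0.1, 0.2, 0.9, 0.7, 0.1]),
    "clambake": np.array([0.2, 0.1, 0.7, 0.9, 0.3]),
}

names = list(experiences)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        d = np.linalg.norm(experiences[a] - experiences[b])
        print(f"{a:8s} <-> {b:8s}: {d:.2f}")
# sunset <-> sunrise comes out small (~0.22); sunset <-> clambake large (~1.41).
```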

With this twofold complexity in mind, we return to the brain. Substitute “brain state” for “experience” in the preceding paragraphs, and the challenges as well as the possibilities of presentation science emerge. As nature’s preeminent parallel processor, the human brain flashes through myriad states of great internal complexity, states which accordingly can be related to one another along myriad dimensions. MP holds that phenomenal structure and the structure of neural activation are one and the same. Though these two structures seem so different, in fact there is just one complex structure here, appearing under different descriptions, or seen from different points of view.

Of course, a host of empirical questions arise about the coordination of the phenomenal and the neural descriptions. This is good news, for it presents many promising starting points. For example, the statistical techniques of multivariate analysis are designed to accommodate complexity. One of these techniques, hierarchical cluster analysis, is frequently employed by connectionists to understand the behavior of complex artificial neural networks. In other papers, I’ve discussed the intersection of the findings of connectionists with broad features of phenomenology (Lloyd 1995, 1996). Work in progress applies multivariate techniques to functional brain imagery, treating separate PET scans as distributed activation states correlated with complex phenomenal states during particular cognitive and perceptual tasks. Soon it will be possible to locate dozens of classic PET experiments in a single high-dimensional “brain activation space” and compare their relations to each other to the relations between corresponding phenomenal states. It is, of course, an open question whether this comparison will yield a systematic account of the relations between phenomenal states and the brain. But it would be interim progress at least to articulate a method for neurophenomenology, an approach that shows how the relations between consciousness and the brain might be explored empirically. And who knows, it might just work.
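As an illustration of the multivariate move just described, here is a minimal sketch using SciPy's standard hierarchical clustering routines. The "scans" are randomly generated vectors, not real PET data; the sketch shows only the form of the analysis: place scans in a single activation space, then let the cluster tree describe their relations to one another.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import pdist

rng = np.random.default_rng(1)

# Hypothetical: six "scans," two per task, each an activation vector
# over 500 regions, all embedded in one brain activation space.
tasks = ["reading", "reading", "listening", "listening", "imagery", "imagery"]
scans = np.vstack([rng.normal(loc=i // 2, scale=0.3, size=500)
                   for i in range(6)])

# Hierarchical cluster analysis over pairwise distances: the resulting
# tree is one description of how the activation states relate.
tree = linkage(pdist(scans), method="average")
leaves = dendrogram(tree, labels=tasks, no_plot=True)["ivl"]
print(leaves)  # same-task scans end up adjacent in the tree
```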

And if it does, where does that leave that ultimate question, the “hard problem” (Chalmers 1996), namely, the problem of the subjective character of the phenomenal? The problem is virulent within the representational framework, owing to the independence of medium and content. A given brain state might represent red, for example. But in another context that same state might represent another color, or red might be represented by a different brain state. Given this variability, it would be natural to question why a particular state of the brain should have a particular content, or any content at all.

But according to MP, the medium, in all its structured details, is the message. A pattern of activation in the brain is a particular experience because it has exactly the structure of the experience, and fits within a system of experiences in the neural medium. The elements of an experience, including particular phenomenal properties, can be teased out of a suite of experiences. The reddishness of items that present redly could be discovered to be a particular neural subpattern common to sunsets, apples, fire engines, and so forth. Phenomenal redness would just be that subpattern. What the subpattern is, and whether reddishness would remain a useful predicate in phenomenology, would be empirical questions. Perhaps vast, intricate, and difficult empirical questions, but empirical nonetheless.
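A toy version of this "teasing out," with invented binary activation patterns, shows the shape of the empirical question: the candidate reddishness subpattern is whatever is common across the suite of redly-presenting experiences.

```python
import numpy as np

# Invented activation patterns for three experiences that present redly.
sunset      = np.array([1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0])
apple       = np.array([1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0])
fire_engine = np.array([1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0])

# Candidate "reddishness": the subpattern active in every member of the
# suite. Whether any stable subpattern of this kind exists in real
# brains is the empirical question the text points to.
red_subpattern = sunset & apple & fire_engine
print(np.flatnonzero(red_subpattern))  # -> units 0, 1, 5, 10
```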

Should this rosy science meet with success, what would remain of the pre-emptive aloofness currently expressed in some quarters with respect to scientific theories of consciousness? Would it be open to the discontented skeptic to hold that the neural patterns embodying phenomenal states are either valueless or incomplete as explanations of consciousness? The answer, I’m afraid, is yes. Skepticism is a permanent possibility. What we call water might turn out to be XYZ, too. I could be deceived about the nature of water; perhaps we are all deceived. But the H2O hypothesis is so well-entrenched in theory, and so capable at both explanation and prediction, that the XYZ alternatives are mere logical possibilities. When a science of consciousness finally emerges from the tentative ideas of our time, will not its confirmed hypotheses leave us all, even us philosophers, quite content?


REFERENCES

Block, N. 1986. Advertisement for a Semantics for Psychology. Midwest Studies in Philosophy, 10: 615-678.

Burge, T. 1979. Individualism and the Mental. Midwest Studies in Philosophy, 4: 73-121.

Burge, T. 1986. Individualism and Psychology. Philosophical Review, 95: 3-45.

Chalmers, D. 1996. The Conscious Mind. Oxford University Press.

Clark, A. 1993. Sensory Qualities. Oxford: Clarendon Press.

Dretske, F. 1981. Knowledge and the Flow of Information. Cambridge, MA: MIT Press.

Fodor, J. 1980. Methodological Solipsism Considered as a Research Strategy in Cognitive Psychology. Behavioral and Brain Sciences, 3: 63-73.

Haugeland, J. 1985. Artificial Intelligence, The Very Idea. Cambridge, MA: MIT Press.

Lloyd, D. 1989. Simple Minds. Cambridge, MA: MIT Press.

Lloyd, D. 1995. Consciousness: A Connectionist Manifesto. Minds and Machines, 5(2): 161-185.

Lloyd, D. 1996. Consciousness, Connectionism, and Cognitive Neuroscience: A Meeting of the Minds. Philosophical Psychology, 9(1): 61-81.

Posner, M. and M. Raichle. 1994. Images of Mind. New York: Scientific American Library.

Putnam, H. 1975. The meaning of “meaning.” In K. Gunderson, ed., Language, Mind, and Knowledge. Vol. 7, Minnesota Studies in the Philosophy of Science. Minneapolis: University of Minnesota Press.

Putnam, H. 1981. Reason, Truth, and History. Cambridge University Press.

Searle, J. 1980. Minds, Brains, and Programs. Behavioral and Brain Sciences, 3: 417-424.