
Differences between Conscious and Non-conscious Processing: Why They Make a Difference

 

Ralph Ellis
Clark Atlanta University
Atlanta GA 30314

[email protected]

There is an old story about a gathering of scientists for the purpose of awarding a prize in special recognition of the most significant achievement of the twentieth century in the area of cognitive theory. After some discussion, a well-known artificial intelligence worker stood and said, "I believe there can be little doubt as to the most important breakthrough of the century: Obviously, it is the thermos bottle." After a dramatic pause, he continued, "If the liquid is hot, it keeps it hot; if cold, it keeps it cold. My friends, I ask you! -- How does it know?"

This story well illustrates the major shortcoming of recent cognitive theory as far as the understanding of human consciousness is concerned. Equivocal usages of terms like 'know,' 'see,' 'learn,' 'remember,' etc., are now so commonplace that it is no longer possible even to meaningfully ask whether a given theory or hypothesis has any relevance to the study of the corresponding conscious processes or not. By 'equivocal usages,' I mean the use of the main terms that are available to talk about conscious processes to denote functions which obviously do not involve consciousness, but with the implicit suggestion that, if only we could learn enough about how computers, thermostats or thermos bottles 'know,' 'see,' and 'remember' things, this would somehow help us to understand how it is that the conscious processes called 'knowing,' 'seeing,' 'remembering,' etc., are produced. Human cognition involves both conscious and non-conscious processing, and it is important to understand both kinds.

The purpose of the present book is to distinguish 'knowing,' 'seeing,' 'remembering,' etc., in their metaphorical or non-conscious senses, from knowing, seeing, remembering, etc., in their conscious senses, with a view toward understanding how and why conscious cognitive functions are associated with the brains of living organisms, and often are structured quite differently from non-conscious cognitive functions. In order to understand how and why conscious cognition occurs, we must also understand what consciousness is.

By contrast to much of what is being done in contemporary cognitive theory, I shall argue in support of both the truth and the importance of the following hypotheses:

1. Consciousness is a process, brain function is its substratum, and this does not necessarily mean that consciousness is caused by the actions of its substratum (the contrary arguments of Searle 1984, Smart 1959, and other epiphenomenalists notwithstanding). In living organisms, the form of the process often determines what happens to the substratum rather than the other way around. (On this point, I am in essential agreement with Merleau-Ponty 1942-1963, and with Varela et al 1993, although my reasons for this conclusion are a little different from theirs.) I shall argue that the central difference between conscious and non-conscious cognition is the presence of emotional intensity, which gives the process the motivational force needed to appropriate, shape, and even reproduce elements of its own substratum.

2. Imagination (as Rosch 1975 and Newton 1993 have suggested) is the basic building block of all consciousness. I. e., all contents of consciousness involve a subjunctive and imaginative element. They involve in one way or another imagining what would happen if something were other than the way it is. Even the perceptual consciousness of an infant, according to Piaget (1928-1965, 1969), involves imagining what could be done with the object if the infant were to reach out, grasp it, throw it, beat on it, etc. This idea that identifying an object involves imagining how it could be manipulated has been supported in more recent developmental research by Becker and Ward (1991), and by Streri, Spelke and Rameix (1993), confirming in humans the same principle that Held and Hein (1958) found for cats: When deprived of the opportunity to manipulate and interact with the objects they were looking at, kittens ended up being functionally blind.

Even perceptual consciousness, then, is in part imaginative and subjunctive. This means that attentive consciousness always involves an implicit or explicit process of 'imaginative variation' as described by Husserl in his Lectures on Phenomenological Psychology, which is equivalent to saying that it involves counterfactuals in the same straightforward sense discussed by David Lewis in Counterfactuals. A being which registers only the presence of objects as they actually are, or reacts behaviorally only to actual stimuli, is not a conscious being. To consciously see an object requires more than that light impinge on the retina, and that a nerve impulse travel through the thalamus to stimulate the primary projection area of the occipital lobe (Posner and Petersen 1990; Aurell 1983, 1989; Luria 1973), thus causing stimulation of the 'feature detectors' in the columns of neurons in the primary projection area (Hubel and Wiesel 1959). None of this yet results in consciousness of the object, as we know from PET scans and other measures of electrical activity in localized brain areas (Posner and Rothbart 1992; Aurell 1983, 1984, 1989; Posner 1980). Seeing occurs only when we attend to (i.e., look for) the object on which we are to focus. And looking for involves asking, of a concept or image, 'Is this concept or image instantiated by what is in my visual field right now?' But forming an image or a concept requires a much more complex, active and global brain process than merely receiving and reacting to data from the senses. One reason for the importance of this point is that, even though some learning without awareness does occur, as documented by 'blindsight' and 'priming' experiments (Bullemer and Nissen 1990; Cohen et al 1990; Weiskrantz 1986), there are many kinds of learning and information processing that do not occur except with the help of conscious attention (Cohen et al 1990; Posner and Rothbart 1992; Hillis and Caramazza 1990).

This implies another main difference between conscious and non-conscious processing: In conscious processing the imaginative act precedes the perceptual one as part of the arousal and attentional mechanism (Bruner 1961; Ausubel 1963; Neely 1977; Broadbent 1977; Logan 1986). This is confirmed in more recent empirical studies by Mele (1993), Rhodes et al (1993), Lavy et al (1994), Sedikides (1992), Higgins and King (1981), and Wyer and Srull (1981). A desire or interest originating in the midbrain leads to limbic activity and a general increase in arousal (Hebb 1961), at which point the prefrontal cortex translates the emotional feeling of desire or interest into the formulation of questions (Luria 1973: 188-189, 211, 219ff, 1980; Sperry 1966), which entail images (requiring parietal activation), concepts and abstractions (Ornstein and Thompson 1984: 41-60) which often also involve symbolic activity (entailing interaction of left temporal syntactic and right parietal semantic functions, as discussed by Miller 1990: 78ff; Tucker 1981; Springer and Deutsch 1989: 309ff.; and Dimond 1980). Only at the point when the whole brain knows what it is 'looking for' in this sense does the occipital activity resulting from optic stimulation become a conscious registering of a perception, an attentive seeing of the object. This means that a conscious registering of a perceptual object leads to much more extensive processing of the data than the non-conscious registering of it could possibly lead to. It means that I am much more likely to remember the data, act on it, think about its further significance, and, if it is significant, look for recurrences of the object in the future. This last point is confirmed empirically by Higgins and King (1981), Wyer and Srull (1981), and many others whom we shall discuss later.

3. I shall argue (notwithstanding the contrary arguments of Fodor 1975, 1981, 1983, the Churchlands 1979, 1986, and Dennett's earlier work -- for example, see 1969) that the difference between conscious and unconscious cognition makes a difference, and that conscious cognition is structured completely differently from unconscious cognition. Neurophysiology corresponds to both conscious and unconscious cognition, not only to unconscious cognition. As Thomas Natsoulas (1994) has argued, we cannot simply regard consciousness as an 'appendage' which has been superadded to processes which could also have occurred on an unconscious basis. Nicholas Georgalis (1994) argues for a similar distinction between conscious and unconscious processes on epistemological grounds: The sheer fact that information gets processed somehow or other does not mean by definition that consciousness of that information occurs, yet clearly consciousness in many instances is needed to facilitate the processing. When consciousness is involved, what is happening neurophysiologically is fundamentally different from the way the brain functions when information is processed on a non-conscious basis. But many things about consciousness cannot be learned through objective methods; they also require a (rigorous) phenomenological method. And many of the questions we answer through objective methods would never be asked if not for subjective concepts -- as Dennett (1991) grants in his chapter on 'heterophenomenology' (notice the change here from his earlier thinking). As Posner and Rothbart (1992) point out, "The use of subjective experience as evidence for a brain process related to consciousness has been criticized by many authors. . . . Nevertheless, if one defines consciousness in terms of awareness, it is necessary to show evidence that the anterior attention network is related to phenomenal reports in a systematic way" (98).

4. A great deal of confusion and fruitless argument results from failure to understand the ontological status of consciousness. Oversimplified reactions against 'dualism' are now so commonplace that many neuroscientists feel compelled to ignore the role of consciousness on pain of being labelled as 'dualists' and therefore as 'dewy-eyed metaphysicians.' It is often assumed that the only alternatives to a metaphysical dualism (or what Popper and Eccles 1977 called 'interactionism') are (i) causal epiphenomenalisms, which posit that consciousness is a byproduct of (and cannot itself cause) brain processes, and (ii) theories of strict psychophysical identity, which posit that 'consciousness' does not mean anything other than 'brain functioning.' In my view, this oversimplification of the theoretical options constitutes a false limitation of alternatives; if we were to confine ourselves to these options, then there would indeed be an inexorable logic which leads from this starting point to the conclusion that consciousness plays no role in facilitating or producing cognitive functions. It seems to most neuroscientists today that, if one causal antecedent for a phenomenon (a physical one) is both necessary and sufficient to explain the phenomenon, then no other antecedent (say, a conscious one) can be either necessary or sufficient to explain that same phenomenon. If consciousness is neither necessary nor sufficient to explain cognitive functioning, then it plays no role in bringing it about. And if consciousness can play no role in bringing about cognitive functioning, then certainly neuroscientists should ignore it in their work. This is the essential basis of both 'reductive' and 'eliminative' materialisms (for example, see Smith and Jones 1986). I shall argue, however, that the premise of this inexorable logic is false. 
Metaphysical dualism, psychophysical identity, and causal epiphenomenalism are not the only three possible conceptualizations for the relationship between consciousness and its physiological correlates. Neuroscientists therefore need not accept the harmful conclusion that they must avoid all references to the important role of consciousness in cognitive processes.

Many puzzling questions can be answered only if we correctly understand the ontological status of consciousness. For example, the paradox of 'memory traces' is solvable when we realize that the continuation of a process can include an almost infinite variation of combinations in the patterns of electrical and chemical change in all the neuronal circuits involved -- a complex pattern of changes which in essence is a behavior which can be triggered by a cue, much as a conditioned response can be triggered by a stimulus (as Merleau-Ponty 1942-1963 suggests). To ask how we 'remember' how to re-enact this complex pattern is like asking how someone with a nervous twitch 'remembers' to twitch the muscle. To ask where the memory is 'stored' is like asking where a thermostat 'stores' its tendency to return to the temperature that has been set. And to ask how it is that there are so many more memories stored in the brain than there are neurons or even neuronal connections (Coulter 1983) is like asking how it is that an infinite number of melodies can be played on an instrument with only twenty-six keys. But if we try to think of 'memory traces' as quasi-permanent spatial configurations of substances or thing-like entities in the brain, we will never find them. To recall a memory is to re-enact an efferent behavior, accompanied by a 'feeling of recognition' or 'feeling of familiarity' (the terms used by Mandler et al 1990; Jacoby and Kelley 1992; and Mayes 1992, with regard to the subjective conviction that we remember something, as opposed to merely performing behaviorally as though we knew it). I shall discuss this issue in detail in Chapter 6.
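The claim that a memory is not a stored thing but a re-enactable pattern triggered by a cue can be given a concrete toy illustration. The sketch below is my own, not the author's; it uses a standard Hopfield-style network in Python. Three patterns are 'retained' only as a single superimposed weight matrix -- nothing is stored at any addressable location -- and a degraded cue triggers re-enactment of a whole pattern:

```python
import numpy as np

# A Hopfield-style network: no pattern is stored anywhere as a discrete
# 'trace'; all three coexist superimposed in one weight matrix, and a
# partial cue triggers re-enactment of the whole pattern.
rng = np.random.default_rng(0)
n = 64
patterns = rng.choice([-1, 1], size=(3, n))            # three 'memories'

# Hebbian learning: superimpose the patterns into shared connection weights.
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)                                 # no self-connections

def recall(cue, steps=10):
    """Re-enact a complete pattern from a degraded cue."""
    state = cue.astype(float).copy()
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1                          # break ties consistently
    return state

# Degrade the first pattern (flip a quarter of its units) and use it as a cue.
cue = patterns[0].copy()
flipped = rng.choice(n, size=n // 4, replace=False)
cue[flipped] *= -1

overlap = np.mean(recall(cue) == patterns[0])
print(f"fraction of the pattern re-enacted from the cue: {overlap:.2f}")
```

Asking 'where' one of the three patterns sits in W has no answer: the same weights support all of them at once, much as many melodies can be played on a single instrument.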

5. As soon as the above theses have been established, it will then be possible to show that all consciousness is permeated and directed by emotion (in agreement with Edelman 1989 and Gray 1990). But here again we must distinguish between feelings in their conscious and non-conscious senses. There are 'desires' in the non-conscious or metaphorical sense (as when an ion 'wants' to neutralize its electric charge). And there are 'representations' in the non-conscious sense (as when information about the appearance of an object is projected onto the 'primary projection area' of the occipital lobe, where columns of neurons record the lines, angles and colors of the object, but with no conscious awareness of the object). But 'desire' becomes desire in the conscious sense only when it becomes a process which is capable of and motivated toward appropriating and reproducing elements to be used as its own substratum by growing to include a representation of the missing elements, or at least a representation of contents ideationally related to the missing elements. For example, the 'desire' for cellular sustenance grows to include proprioceptive images of oneself eating (Newton 1994), and then imaginary representations of edible objects which finally find matching patterns of activity in the primary projection area if sensory input from such an object in the environment is received. A desire which is conscious, even to a minimal extent, is one which is capable of controlling, appropriating and reproducing elements of its own substratum in such a way as to form within itself a representation (however vague and approximate) of that of which it is a desire. Without this primacy of the process over its own substratum, an event cannot qualify as a conscious event.

The reason for this (I shall argue) is that the mind-body problem can be solved only on the condition that consciousness is a higher-order process which takes lower order processes, such as electrical and chemical events, as its substrata. Consciousness, as we shall see, is like a wave which takes a material medium as its substratum. There are important senses in which the movement of the medium does not simply cause the wave to have the pattern that it has, but just the reverse is true. The wave, originating elsewhere, causes the medium to vibrate or oscillate in the pattern that it does.

By saying that the wave 'causes' the particles to oscillate in certain patterns, I do not mean to imply that the particles do not also 'cause' each other to oscillate, but they do so in a different sense. Let me briefly and preliminarily suggest the difference between these two senses. In one sense, we say that the reason people engage in sexual behavior is because they enjoy it; and we can even explain this enjoyment in terms of chemical reactions in the reproductive and nervous systems. This would be a 'mechanistic' explanation -- an explanation of a total situation in terms of the behavior of the elements of its substratum. But in another sense, we say that the reason people engage in sexual behavior is that reality is such that, in an ecological situation like ours, beings that enjoy sexual behavior are more likely to reproduce themselves. This would be an explanation of the behavior of the substratum elements in terms of the nature of the overall situation -- an explanation in terms of 'process.' The fact that beings that enjoy sexual behavior are more likely to reproduce themselves is a statement about a process that characterizes our ecosystem, and there is a sense in which this process partly determines the behavior of any particular substratum elements which enter into it (Neisser 1967, 1976, 1994). But this does not contradict the fact that the same behavior can be explained mechanistically. Which type of explanation we use depends on whether we are trying to understand one element's behavior in terms of a previously established understanding of another element's behavior, or whether we are trying to understand why the whole situation is patterned in such a way that any elements which enter into it will inevitably behave in certain ways.

What I am suggesting is that, when the pattern governing a substratum originates to a great extent outside of the system to be explained -- for example, when a sound wave, originating elsewhere, causes a wooden door to vibrate -- it is misleading and even false to say that the substratum of the system causes that pattern to occur. And I am suggesting that the relation of the pattern of consciousness to the behavior of the brain is very much like the relation of a sound wave to the wooden door in this example. The pattern of the wave in this case arises partly from the relation of the organism to its environment, from the structure of language communicated to us by others, and from the structure of intelligible reality. There are also some disanalogies, in that the sound wave originates entirely, and in its final pattern, outside the door, while the pattern of consciousness is influenced not only by the environment but also by the motivational activities of the midbrain as it interacts with other brain areas. Nonetheless, there is an important sense in which the order and rhythm constituting the larger patterns of interaction among particles cannot be reduced to a simple summation of the movements of the particles themselves: most importantly, the pattern may be realizable in a variety of possible material instantiations (Putnam 1994), and in the case of consciousness the pattern may even seek out, appropriate and reproduce the substratum elements needed to instantiate the pattern.

In some respects, the brain also acts as an amplifier and equalizer of this 'wave-patterning' process. I. e., the brain 'amplifies' a consciousness which at first is only very faint (and appropriates only a small amount of material as its substratum) by allowing it to grow so that many elements of the brain now become substrata for an expanded version of that same process. This is confirmed by Edelman (1989) and Posner and Rothbart (1992), who see the focusing of attention via anterior activation (observed with PET scans) as serving in part to enhance signals (for example, Posner and Rothbart 1992: 103). And the brain 'equalizes' the pattern (i.e., refines the pattern, eliminating irrelevant 'static') in the sense that an initial 'desire' cannot seek out that which it desires until a more and more refined image or concept of the desired state of affairs can be produced. The more exactly the image or concept corresponds to the desire (i.e., can serve to provide it with an appropriate substratum), the more conscious the organism is of that particular desire (Ellis 1986). The more closely the 'representation' produced by a 'desire' matches the desire in this sense, the more conscious the organism is of both the representation and the corresponding desire (Ellis 1990). I shall suggest that this ontology of consciousness is more consistent with some version of connectionism (perhaps a 'nonlinear processing' version) than with other cognitive architectures which have been proposed. According to connectionism, as Bechtel (1987) explains,

The behavior of the system results more from the interaction of components than the behavior of the components themselves . . . [and] does not rely on internal representations as its processing units. It produces internal representations as responses to its inputs and it is possible to develop systems that adjust the weights of their connections in such a way as to develop their own system for categorizing inputs (19-21). . . When a pattern is not present in the system, it is not stored, as in traditional cognitive models. Such patterns are not retrieved from memory, but are reconstructed on appropriate occasions (Bechtel 1987: 22).
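Bechtel's point about systems that 'adjust the weights of their connections' so as to 'develop their own system for categorizing inputs' can be illustrated with a minimal sketch -- mine rather than Bechtel's -- using the classic perceptron learning rule on hypothetical two-dimensional data. No categorization rule is ever written into the system; it emerges from repeated small weight adjustments:

```python
import numpy as np

# A toy connectionist categorizer: no rule for the category is ever stored;
# the system adjusts its connection weights until a categorization emerges.
rng = np.random.default_rng(1)

# Hypothetical inputs: points in the plane, categorized by which side of a
# line they fall on (points very near the line are dropped for clarity).
X = rng.uniform(-1, 1, size=(400, 2))
signed_dist = X[:, 1] - 0.5 * X[:, 0]
keep = np.abs(signed_dist) > 0.1
X, y = X[keep], (signed_dist[keep] > 0).astype(float)

w, b, lr = np.zeros(2), 0.0, 0.1
for _ in range(50):                          # perceptron learning rule
    for xi, yi in zip(X, y):
        pred = float(w @ xi + b > 0)
        w += lr * (yi - pred) * xi           # adjust weights on each error
        b += lr * (yi - pred)

accuracy = np.mean((X @ w + b > 0) == y)
print(f"learned categorization accuracy: {accuracy:.2f}")
```

The learned separation lives entirely in the interaction of the weights with the inputs, which is Bechtel's contrast with traditional models that store and retrieve explicit internal representations.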

Poggio and Koch (1985) believe that many cognitive processes -- most obviously mental imagery -- can be understood only if we adopt this viewpoint in preference to the digital computer-inspired models of the past two decades:

Neurons are complex devices, very different from the single digital switches portrayed by the McCulloch and Pitts (1943) type of threshold neurons. It is especially difficult to imagine how networks of neurons may solve the equations involved in visual algorithms in a way similar to digital computers. We suggest an analogue model of computation in electrical or chemical networks for a large class of visual problems, that maps more easily with biologically plausible mechanisms (Poggio and Koch 1985: 303).

However, it is also very true, as Tienson (1987) and Butler (1993) both astutely point out, that no version of connectionism as yet devised is really consistent with what we know about the actual connections between neurons in the human brain. Part of the purpose of this book is to point in the direction of a better neurophysiological substrate for the contents we actually phenomenologically experience, and thus also toward a more workable version of connectionism.

6. Finally, we can deduce from these considerations, combined with some further neurological information, that symbolic behavior utilizes this same process. A symbol for a state of consciousness is a representation which works especially well to help provide the substratum for that state of consciousness. Saying the word 'tree' helps me to enact the corresponding consciousness in myself (Ellis 1986). But I would not have said the word 'tree' in the first place had I not already desired to enact that pattern of consciousness or at least some similar or related pattern of consciousness (Gendlin 1962). The use of symbols is a way for consciousness to grow to include more and more substratum elements, thus expanding the process that appropriates that substratum. The symbolic behavior serves both to 'amplify' and to 'equalize' the pattern which is the state of consciousness.

In the same process, the symbolization also serves to change the state of consciousness into a next one which is 'called for' by it (Gendlin 1973, 1981, 1992). Since consciousness has the ontological status of a higher-order process or pattern of change in its substratum, it seeks to reproduce elements of its substratum not only so that the process may remain the same in certain respects, but also so that it may change in certain respects (Gendlin 1971; Rogers 1959). Every process contains within itself a tendency to evolve over time into a somewhat differently patterned process. I shall explain why more fully in the appropriate part of the text.

These and many other consequences of the ontological status of conscious processes will be explored as we proceed. But first, some groundwork must be laid on the basis of which the theses I have been preliminarily summarizing here can be established. The first part of this groundwork involves reassessing the legacy of behaviorism in contemporary cognitive science.

1. The Legacy of Behaviorism

When John Watson made the original case for behaviorism in the social sciences at the turn of the twentieth century, the state of neuroscience and the development of phenomenology were very different from what they are today. Almost nothing was known about the physiology of the brain. Husserl had not even published his first important book, Logical Investigations, which would initiate the process of working out a careful and systematic way to observe the subjective experiencing process 'from within.' Certainly, the sophisticated phenomenological methodologies of Giorgi (1971), Gendlin (1961, 1971, 1973, 1992), Gurwitsch (1964), and Merleau-Ponty (1942-1963, 1962) were completely unheard of. Self-reports of internal experiencing were therefore inaccurate, unreliable, ambiguous, difficult to quantify, impossible to control, and often not repeatable in a precise way. Moreover, subjects often tended (as people still do) to substitute explanatory interpretations of what they experienced for naive, direct and unprejudiced descriptions of the raw data of experience as it actually presents itself. And although they could describe (in vague terms) what it felt like to find the solution to a cognitive problem, they could not say how they did it (Hunt 1985).

Given the undeveloped nature of both neurology and phenomenology, it was no wonder that scientists would opt for something that could be unambiguously and objectively observed and measured. (It remained for Thomas Kuhn to point out sixty years later that this kind of objectivity does not lead to as unbiased a theoretical outcome as one might suppose, but that is a different matter.) Thus it was natural that scientists would want to rely on what is physical, and therefore measurable. Since it was impossible to look inside people's heads, and subjective reports could not be relied on, what this left was behavior. Moreover, behaviorism could leave open the possibility that those few neurophysiological facts that could be directly observed (and did not result from speculative theories about the brain) could be incorporated into behavioral studies. From this viewpoint, the behavior of the brain is just another form of behavior. Thus it was believed that behaviorism allowed the study of all and only those phenomena that could be studied scientifically. As a result, cognitive psychologists were allowed to correlate brain states with behavior, but they were not allowed to correlate either brain states or behavior with the subject's consciousness. (The problems in trying to do so were dramatically illustrated by the failure of the experimental Gestalt introspectionism of Wundt and Titchener, which will be discussed in a moment.) In this way, behaviorism became a veritable Kuhnian 'paradigm' -- a set of assumptions that could be easily thrown in to complete any neurophysiological explanation that did not quite work without the help of such assumptions. For example, consider the following explanation in Young (1988):

The relations between these areas and other parts of the brain are no doubt greatly influenced by learning, especially during childhood. Unfortunately little is known in detail about this relationship, but many traits of character and personality presumably depend upon the particular connections established by these 'reward centers', especially in relation to parents and siblings. One cannot emphasize too strongly that human 'needs' are for emotional and social satisfaction as much as for food and sex (Young 1988: 182).

It is striking that we must presume that there are 'reward centers' which are somehow structured by postulated 'needs for emotional and social satisfaction' in order to explain the physical workings of the brain. But the need to substitute theories of learning in place of direct observation was essentially built into the epistemology of behaviorism from the very beginning, because it placed such severe limits on what was deemed 'directly observable.'

Today, however, the epistemological situation is very different from what it was in Watson's day. First of all, neurology is in an explosive period of development. Secondly, neurology is an outgrowth of medicine, which is at least as much a practical as a theoretical discipline. A practicing physician's first priority is usually to pay attention to the subject's subjective self-reports. If the subject says 'I feel pain when you do that,' or 'I suddenly remembered my grandmother's face when you delivered that electrical stimulation,' or 'I've been more lucid and less depressed since you increased my medication,' a physician is not likely to completely ignore such potentially useful information just because of some abstract epistemological theory. And, although a good research physician does try to devise objective ways to measure subjective processes, there is little pretense that the purpose of the operation is not in fact to measure a subjective process; thus the importance of understanding what the subjective process is and how it interrelates with other events in the subject's stream of consciousness remains a paramount concern.

The more reliable information accumulates about the functioning of the brain, the easier it becomes to formulate coherent and testable theories about the ways in which this functioning can be correlated with careful phenomenological accounts of the corresponding subjective events in consciousness. For example, ways to map patterns of electrical activity in specific areas of the brain are becoming increasingly sophisticated. EEG patterns, CT scans and other measures of neural activity in various parts of the brain have now been extensively correlated with conscious acts such as remembering (Damasio 1989; Damasio et al 1985); attention (Hernandez-Peon et al 1963; Posner and Rothbart 1992; Cohen et al 1988); the integration of sensory and memory mechanisms via frontal lobe activity (Nauta 1971); obsessional thought patterns (Gibson and Kennedy 1960); hysterical conditions (Flor-Henry 1979); feelings of elation and depression (Ahern and Schwartz 1985; Damasio and Van Hoesen 1983; Gainotti 1973); the activity of listening to music (Miller 1990: 79) -- which apparently involves very different brain areas for trained musicians (more left-lateralized); word recognition (Petersen et al 1989); language acquisition (Dore et al 1976); and many other such consciousness/brain-electrical correlations, some of which will be discussed later in this book.

In some instances, this information combined with phenomenological analysis facilitates reasonable inferences about the ways the physical and conscious processes are related. For instance, we know that, when a novel stimulus is presented, about a third of a second is required for increased electrical activity to be transferred from the primary projection area of the relevant sensory modality in the cortex (which receives direct sensory information from the outer nervous system but does not yet result in conscious awareness of a visual object) to the parietal and prefrontal areas whose activation does result in a conscious visual, auditory or tactile image of the object (Aurell 1983, 1984, 1989; Srebro 1985: 233-246; Runeson 1974: 14). Yet the primary sensory area and the relevant parietal area are almost immediately contiguous with each other. Why should a nervous impulse, which normally travels in the neighborhood of several hundred miles an hour (Restak 1984: 40), take a third of a second to travel a few millimeters? Obviously, it must be that much more complex processes are involved in activating conscious awareness of some perceptual content than a simple passive reception of a signal or stimulus in some particular part of the brain. More global processes must be involved before even the simplest possible conscious state, the having of a visual image, can be produced. But to understand what these more extended processes are requires putting together many bits of phenomenological and neurological information. This book will try to make a beginning in this direction.
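The order-of-magnitude puzzle here can be checked with rough arithmetic, taking 200 mph for 'several hundred miles an hour' and 5 mm for 'a few millimeters' (both round figures of mine, chosen only to make the gap vivid):

```python
# Rough arithmetic behind the puzzle: at typical conduction speeds, a few
# millimeters of travel should take microseconds, not a third of a second.
speed_m_per_s = 200 * 1609.34 / 3600     # 200 mph is roughly 89 m/s
distance_m = 0.005                       # 'a few millimeters'
transit_s = distance_m / speed_m_per_s   # direct-conduction travel time
observed_s = 1 / 3                       # 'about a third of a second'

print(f"direct transit time: {transit_s * 1e6:.0f} microseconds")
print(f"observed latency is roughly {observed_s / transit_s:.0f} times longer")
```

Direct conduction would take on the order of tens of microseconds, so the observed latency is thousands of times longer than mere transmission would require -- which is precisely the author's point that something far more complex than passive signal relay must be occupying that third of a second.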

One of the most important pieces of information relevant here is a phenomenological one: The selection of perceptual elements for conscious attention is partly a motivational process involving judgments about what is important for the organism's purposes. And the translation of emotional purposes (which involve midbrain activity) into questions that conscious beings formulate for themselves in order to seek out relevant information in the environment is a process which involves extensive prefrontal activity (Luria 1980, 1973; Damasio et al 1985, 1989; Eslinger and Damasio 1985; Nauta 1971). Thus we shall see that what goes on during that third of a second between primary projection area activation and parietal-prefrontal activation is a complex process involving emotion, motivation, and value judgments about what it is important to 'look for,' resulting in an efferent image formation which becomes a visual perception only when a match is finally found between the pattern of this efferent activity and the corresponding afferent input from the outer nervous system and the primary projection area. This means that the midbrain, the prefrontal cortex, and the parietal association areas are all involved in the production of the simplest possible conscious content. Besides the studies by Aurell, Runeson and Srebro just cited, which show that passive stimulation of the 'visual cortex' does not result in perceptual consciousness unless there is frontal and parietal activity, there are similar findings with regard to the role of the reticular activating system in perception and the role of the frontal-limbic connection in recognizing the meaning of a remembered image (Ludwig 1977; Thompson 1975; Miller 1984, 1990; Gainotti et al 1993). According to Miller,

Conscious appreciation of a particular sensory impression . . . depends not just on the sensory pathways conveying that sensation, but also on the participation of a separate collateral system, the reticular activating system . . . responsible for literally 'directing attention' to incoming sensory information at different levels of processing.

Damage to this system produces a curious dissociative condition where the sensory areas of the brain process the information normally (as shown, for example, by the EEG), but the person remains subjectively unaware of the stimulus; it simply doesn't 'register' (Miller 1990: 173).

And, according to Damasio et al (1985: 252-259), unless the network of frontal-limbic connections is intact, images may produce a vague 'feeling of familiarity,' but their meaning and context cannot be recalled. It is also well known that Luria (1980) finds that physical disruption of almost any part of the brain (midbrain, frontal, parietal, temporal, etc.) interferes with memory function.

The need for all this interrelated frontal, limbic and parietal activity even for the simple perception of an object certainly dispels any notion that a given bit of cognitive information (say, the image of a horse, or the concept 'horse' occurring as the subject of a proposition) could correspond to a particular neuron or neural pathway, as some hypothesized cognitive architectures would require. It thus becomes increasingly obvious that for a cognitive theorist to posit models of information processing without considering what now can be known about neurophysiology and phenomenology is as though an architect were to design a bridge without attention to the tensile strength of the materials of which the bridge will be constructed, or as though a composer were to ignore the range limitations of the various instruments of the orchestra. Moreover, knowledge of the nature of an instrument not only limits what can be composed for the instrument; it also suggests musical possibilities for the instrument which one would not have thought of without some general understanding of the nature of the instrument on which the music is to be played. In the case of cognitive theory, we are looking for music which can be played on a conscious and organismic instrument.

Behaviorism in principle does not allow us to distinguish between an instance of conscious information processing and an instance of non-conscious information processing. Both look the same in measurable terms, especially if prior phenomenological research has not pointed us toward observable patterns we would not otherwise have looked for. But non-conscious information processing is the simpler kind, and is much easier to spell out in operationally intelligible terms. As a result, what the legacy of behaviorism has done to cognitive theory has been to systematically select for hypotheses that can explain only non-conscious forms of information processing.

According to Irwin Goldstein (1994), this behaviorist bias is still very much alive in contemporary cognitive theory. "Functionalism," he says, "is a descendent of the behavioristic approaches to the mind-body problem Ludwig Wittgenstein and Gilbert Ryle advanced (60)." The reason is that most functionalists, like behaviorists, hold that a mental event can be exhaustively described by citing its 'inputs' and 'outputs.' According to Goldstein, the essential problem with both behaviorism and functionalism is that "Statements connecting 'pain' to dispositions to withdraw, moan, wince, or behave in other particular ways are not analytic (60)." For example, consider Fodor's definition of a headache as

[That which] causes a disposition for taking aspirin in people who believe aspirin relieves a headache, causes a desire to rid oneself of the pain one is feeling, often causes someone who speaks English to say such things as 'I have a headache,' and is brought on by overwork, eye-strain and tension (Fodor 1981b: 118).

Against this view, Goldstein argues that

What does not attend every headache is not necessary for a headache. None of the causes and effects Fodor mentions attends every headache. . . . Nor is there some determinate disjunction of causes and effects an event must satisfy to be a headache. Suppose I have a sensation with a headache's location, duration, and unpleasant quality. I realize the sensation has none of the causes and effects on Fodor's list. I need not conclude this sensation is not a 'headache.' Other disjunctions of causes and effects people might propose would fail as Fodor's does. . . . Every headache has all of a headache's defining properties -- its felt location, minimum duration, [and] unpleasant quality. . . . Refinement of these necessary conditions will yield conditions that are jointly sufficient for a headache. . . . When [behaviorists] define 'pain,' . . . they miss that property that unites different pains and makes them instances of a single kind of experience (Goldstein 1994: 57-61).

Note that this 'introspectable interior' includes more than mere qualia.

There are properties other than qualitative complexion in an experience's introspectable interior (duration, location, and others). An experience's qualia -- unpleasant or otherwise -- present only one dimension of an experience's introspectable interior. A sensation's duration and felt location are distinguishable from its quale. A two second itch need not differ qualitatively from a one second itch (Goldstein 1994: 55-56).

Of course, the notion of an 'introspectable interior' raises one of the major problems that still prompt many researchers to prefer a behaviorist approach: Introspection seems to be inevitably connected with 'conscious' processes. But there are apparently many mental events and activities which do not have the character of being 'conscious.' Many mental processes are 'unconscious.' Before tracing further the implications of the behaviorist bias in cognitive science, it is important to clarify this problem.

Talk about the relationship between 'conscious' and 'unconscious' processes is often ambiguous, because 'unconscious' can mean such widely different things. There are unconscious processes which are essentially derivative from earlier conscious ones, and therefore bear many of the structural earmarks of conscious processing; by contrast, there are 'originally' or 'primitively' unconscious processes which are not derivative from conscious ones, are not structured like conscious ones, and do not interrelate functionally with conscious ones in the same way as 'derivatively' unconscious processes. Also, between 'conscious' and 'unconscious' there may be a whole range of different degrees of consciousness or semi-consciousness. These distinctions need to be emphasized before we can proceed much further.

2. 'Derivatively' versus 'Primitively' Unconscious Processes and the Various Levels of Semi-consciousness

 By emphasizing the contrast between conscious and non-conscious information processing, I do not by any means intend to deny that unconscious processes can have a mental character in the same way as conscious ones. For example, I may sense (without consciously realizing it) that a drunk man in a bar is angry (although he is smiling and showing other supposedly friendly mannerisms) because I notice (again without consciously realizing it) that he is standing too close to me (under a guise of friendliness which does not deceive me), repeatedly clenching his fists as if getting ready for a fight (though I am not conscious of seeing this), and making inappropriate jokes about his wife from which I infer (due to his generally irresponsible demeanor) that he is angry at her, and I fear that the anger could be easily displaced. I then tell the bartender that I do not want another drink so that I can prepare to leave, although I am not conscious of the reason. There is plenty of evidence for such unconscious interrelations between thinking, feeling and perception in everyday life as well as in psychotherapeutic literature.

But the reason we call these unconscious processes thinking, feeling, and perceiving in the same sense that we would in the case of conscious processes -- rather than merely 'thinking,' 'feeling' and 'perceiving' of the metaphorical kind that thermostats and thermos bottles do -- is that, first, they occur in ways that are structurally analogous to the ways they would occur if they did occur on a conscious basis; second, they could not occur in this particular structurally-analogous way unless they were to occur in beings which as a whole do have consciousness, because they result from the habituation and sedimentation of past conscious processes; and third, they do not occur on a completely unconscious level, but there is some minimal level of awareness of them even while they are occurring -- as exemplified by the typical psychotherapeutic remark, 'I realize now that I was conscious of his clenching his fists, although I didn't pay much attention to it at the time.' Posner and Rothbart (1992), who will be discussed more fully in Chapter 1, frame this relationship between completely conscious processes and the very analogous yet unconscious or semi-conscious processes in this way:

The degree of activation of [the anterior attentional network associated with conscious awareness] increases as indexed by PET with the number of targets presented in a semantic monitoring task and decreases with the amount of practice in the task. . . . Anterior cingulate activation was related to number of targets presented. The increase in activation with number of targets and reduction in such activation with practice corresponds to the common finding in cognitive studies that conscious attention is involved in target detection and is required to a greater degree early in practice (Fitts and Posner 1967). As practice proceeds, feelings of effort and continuous attention diminish, and details of performance drop out of subjective experience (Posner and Rothbart 1992: 98).

From the completely conscious processes of which we are fully aware, to the semi-conscious thinking we do as we 'feel' our way through a heated argument or navigate while driving a car, to the completely non-conscious processes which are not even capable in principle of becoming conscious, there is no sharp dividing line, but a gradual continuum. For example, when we learn to read music, there is no definite point in time when we no longer need to 'figure out' each note. The 'figuring out' gradually gives way to a habitual response, and in this way becomes 'sedimented' (Merleau-Ponty 1962). But, as sedimented, it retains the earmarks of earlier conscious processing -- for example, that bass clef notes are processed less readily than treble clef, sharps and flats less readily than natural notes, and infrequently-occurring sharps or flats less readily than more frequent ones.

In this sense, unconscious processes which are structurally analogous to conscious ones are ultimately derivative from conscious ones, and would not occur in the way they do in beings that lack consciousness altogether. These derivatively unconscious processes (such as the unconscious processes discussed in psychotherapy, and the habituated processes discussed by Posner and Rothbart above) should be distinguished from those which are primitively or originally unconscious (such as the regulation of heartbeat, or a thermostat's computation of the desired temperature). Derivatively unconscious processes, which may be conscious to greater or lesser degrees, function very much like conscious ones, but in an increasingly truncated way as they become habituated and no longer require conscious attention. It would be exactly backwards, then, to try to explain consciousness as merely a superadded 'appendage' to an underlying non-conscious processing system. This argument will be developed more fully as we proceed.

The explanation of even 'originally' or 'primitively' non-conscious information processing is not a complete waste of time, for two reasons. First, theories of such processing lead to the design of more effective computers and computer systems. Second, not all information processing in humans is of the conscious variety. Humans (and probably many other animals) use both conscious and non-conscious kinds of information processing, and we need to understand the architecture of both kinds if we want to understand how people think and process information. Programs of research in artificial intelligence can contribute a great deal to the development of concepts needed to understand the non-conscious type of processing, which not only is important in humans, but also interrelates with conscious information processing in complex ways.

But, in order to understand how conscious information processing is different from non-conscious processing, we must understand at least some aspects of the conscious type of processing in its own right. And to carry out this kind of investigation inevitably requires some phenomenological reflection. This objective cannot be achieved, however, unless we eschew the rigid rejection of phenomenological data still all too prominent among neuroscientists (for example, see U.T. Place 1993).

One of the main habits of thought which must be questioned if we are to move beyond a strict behaviorism in this regard is an increasingly prevalent tendency to speak as if there were no difference between the meanings of terms like 'knowing,' 'seeing,' 'remembering,' etc. as they are used in the context of conscious information processing, and their meaning as used in the context of non-conscious information processing (i.e., in thermostats, thermos bottles and the like). If we cannot distinguish between these meanings, then it seems unlikely that we can move beyond behaviorism's bias in favor of non-conscious information processing. Let's consider this problem next.

3. The Danger of Equivocating the Language of Consciousness: The Crucial Distinction Between 'Knowing' and Knowing

Artificial intelligence theorists often speak of 'states of consciousness' in a metaphorical sense. Electric eye devices are said to 'see' things. Robots 'recognize' complex objects by comparing them with 'concepts' or 'mental images' which they 'introspect,' having 'remembered' them from previous 'learning.' Robots thus 'know' when a familiar object is presented. Thermostats are said to 'want' to keep the temperature constant, and one might suppose that, in the same metaphorical uses of language, thermos bottles 'remember' what temperature they 'want' to keep the coffee at and 'try' to keep the coffee from cooling off.

These metaphorical usages serve a useful purpose, provided that we guard against equivocating them in an attempt to equate 'seeing,' 'knowing,' etc. in the metaphorical sense with seeing, knowing, etc. as states of conscious awareness. Such usages are useful because they designate real phenomena that need to be discussed. A great deal of non-conscious information processing goes on, not only in computers and other machines, but also in human beings. For example, as I was writing this passage, I thought of a book I wanted to use as an example, but could not remember the author. I pulled the book out of my bag, but did not pay attention to the author's name on the front, because I was also eating lunch at the time. Before I had a chance to read the name, I remembered it -- Jeff Coulter. The reason I remembered it was that the image of the name had impinged on my retina but without conscious awareness, and my brain had non-consciously processed the information to enough of an extent to jog my memory of the name. I had 'seen' the name in the non-conscious, metaphorical sense, just as a robot would (as in the 'blindsight' experiments by Weiskrantz and others mentioned earlier). Similarly, when I throw a baseball, my brain 'figures out' just when to release the ball given the distance and angle of the target; and when I play a piece of music in b-flat minor, my brain 'infers' that the note I can't quite remember must be a d-flat because that note fits into the b-flat minor chord that my 'knowledge' of music tells me would 'make sense' as the next chord. All this non-conscious information processing does indeed take place, and results achieved by it are analogous in certain ways to the results that would have been achieved by the corresponding conscious processing of the same information. The robot 'recognizes' a face, with results very similar in many ways to those of a conscious being's recognition of the face. The computer adds a column of numbers and ends up with the same conclusion a human being would end up with.

Historically, in fact, this was the essential reason why the Wurzburg controversy led to the widespread rejection of introspectionist and Gestalt methods in cognitive psychology (Hunt 1985). The question at stake was what kinds of phenomenal states or mental images subjects use when solving cognitive tasks. Subjects in these experiments seemed to be aware of having a sudden 'insight' in which the solution to a problem suddenly 'came to them,' but they could not describe their conscious experiencing of the cognitive process through which they attained the solution. The solution seemed to come to them involuntarily, the result of a process which they were either unaware of, could not reflect on, or could not describe in any intelligible way. Thus psychology moved to the view that, when it comes to the 'how' of cognitive operations, as Dennett (1991: 97) puts it, "We simply do not know how we do it." Or, as Miller (1990: 80) says, "There are very few people who think what they think they think." Cognition thus came to be regarded as a fundamentally unconscious process. In folk-psychological terms, we are obviously 'aware' of having solved the problem, but this awareness seems useless in understanding how the problem was solved. It was thus assumed that consciousness is an epiphenomenon which contributes nothing to the understanding of how cognition functions.

But this is the point at which a behaviorist bias in the philosophy of the human sciences exacerbates our tendency to equate 'knowing' in the non-conscious sense with knowing in the conscious sense. If the only thing that can be studied scientifically is what can be observed and measured, i.e., the behavior which results from information processing, then one tends to assume that there is no way to tell the difference between conscious and non-conscious processing. Once we have described the way a stimulus input is structurally transformed into a behavioral output, we have said all there is to know about the processing of information. Thus there is no need for any distinction between non-consciously 'knowing,' 'seeing,' etc., and consciously knowing, seeing, etc. The temptation, then, is to simply equate them in order to avoid a needless proliferation of theories and hypotheses. It is therefore very common to find passages in contemporary cognitive theory in which an author starts out by putting words like 'knowing,' 'seeing,' etc. between single quotation marks to indicate that the words are being used in a metaphorical sense, but as the passage develops the quotation marks gradually begin to be dropped, and the author begins to speak as though there were no difference between 'thinking' and thinking, and as though any adequate explanation for the one should also suffice as an adequate explanation for the other.

The reason for this, of course, is that behaviorism was formulated at a time when it was reasonable to assume that little could be known about what goes on between 'stimulus' and 'response.' But we have reached the point now in both neurophysiology and phenomenology that a great deal can be known about these processes -- certainly enough to tell us that both conscious and non-conscious information processing occur in the organism, that the two types of information processing are very different, and that they utilize very different patterns of brain function and emphasize different regions and different processes in the brain.

But, at the point when behaviorism inspires us to ignore the differences between conscious and non-conscious processing (resulting in non-reductive materialism), why are people then so inclined, as their natural next step, to deny outright that there is any difference, and to argue that the description of conscious processes (or 'folk psychology') is merely a confused and distorted attempt to describe a non-conscious process? The Wurzburg controversy well illustrates the significance of this jump. Although subjects were unable to explain how they had solved cognitive problems, it was clear that their conscious processes of thinking were at least a necessary (if not sufficient) condition for the solution. Why then did psychology write off conscious awareness as an irrelevant epiphenomenon?

Ultimately, the reason for this step is the same reason that, throughout the long tradition of the philosophy of mind, has led advocates of mechanistic explanations to reject explanations framed in intentional terms. The reason is that, for any given occurrence, there can be only one necessary and sufficient explanation. If we have succeeded in showing that certain physical antecedents are sufficient to explain a certain behavioral outcome, then no other explanation is necessary. And if we have shown that these same physical antecedents are necessary to explain the behavioral outcome, then no other explanation can be sufficient to explain it. Thus, if a set of physical antecedents is both necessary and sufficient to explain the behavior, then no other explanation (for example, an intentional explanation) can be either necessary or sufficient to explain it. As the epiphenomenalist Jackendoff (1987) explicitly puts it, if consciousness has no causal efficacy, then the study of it "is not good for anything (26)." But if the intentional explanation is neither necessary nor sufficient to explain an outcome, then it is causally irrelevant to the outcome. So any description of a conscious or intentional process which holds that this process is causally relevant must be false. But our subjective explanation of our own mental processes (i.e., 'folk psychology') leads us to believe that these mental processes are causally relevant to our behavior and to the output of our information processing. Thus our subjective explanation of our own mental processes (or 'folk psychology') is false -- as Fodor, Stich, the Churchlands, and to a great extent also Dennett, have insisted. It is thus a short step to conclude that the notions of consciousness and intentionality are only confused and misguided attempts to explain what could be explained much more adequately in purely mechanistic terms if we had enough empirical scientific information (and of course we eventually will).

Cognitive theorists also tend to assume, again like traditional philosophers of mind, that the only way to avoid actually denying the existence of consciousness is to postulate that it is 'identical with' its underlying neurophysiological processes. If M (a mental process) and N (a neurophysiological process) are identical with each other, then there is no problem with saying that, on the one hand, N is both necessary and sufficient for some outcome, and on the other hand that M is necessary and sufficient for that same outcome. Of course if M and N are the same thing then they can both be necessary and sufficient for the same outcome; thus they can both cause or bring about the same outcome.

But if this is true, then nothing that can be said about consciousness or intentional processes can possibly add anything to what has already been said when we explained the same outcome in purely mechanistic terms. It follows that every description of a conscious process must be equivalent in meaning (or at least in extensional reference) to the description of some corresponding physical process. But, here again, our experience of our own consciousness presents this consciousness as something that cannot be known or understood through any amount of knowledge of objectively observable physical events. For example, no amount of knowledge of the physiological correlates of a headache can ever lead a neurologist to know what a headache feels like unless the neurologist personally has at some point felt something that feels like a headache. Since our experience of our own consciousness obviously does make the claim that knowledge of subjective events can never be exhausted by any amount of knowledge of objectively observable physical events, and since the mechanistic cognitive theorist is committed to the claim that the objectively observable physical events do constitute an exhaustive explanation, then here again the mechanistic cognitive theorist must conclude that our experience of our own consciousness is false. Even theories of psychophysical identity do not succeed in avoiding this conclusion. 'Folk psychology' (which includes all forms of phenomenology) must be rejected as false by the very nature of what it is. This point has been made very well with regard to functionalism and eliminative materialism by William Lyons (1983, 1984, 1986); see also Donald MacKay (1984).

Yet there is obviously something that rings true in the notion that we cannot know what a headache feels like unless we have actually felt something like a headache before. So we need a better solution to the mind-body problem than simply to ignore the phenomenological data of consciousness. Moreover, we cannot ignore these immediate data. We need them in order to delineate the difference between conscious and non-conscious information processing, which I am arguing here correspond to two completely different physiological processes in the brain. And we shall see that the conscious form of information processing is the kind that most typifies the human brain as compared with computers and the brains of lower animals. Without understanding the differences between conscious and non-conscious processing, we would end up completely misconstruing many aspects of brain functioning -- notably, the functioning of the prefrontal cortex, the relationship between the parietal and secondary sensory areas (as I hinted briefly above and will discuss more extensively later), the ways in which neural activity and mental processes correspond to each other, and the ways in which the different parts of the brain are coordinated with each other in different forms of consciousness.

Luckily, there is a better solution to the mind-body problem than the eliminative materialism just discussed. A full explanation of it must await Chapter 4, where we can give it the analytic attention it requires. A brief caricature of it can be intuited by pursuing our comparison of the relation between consciousness and neurophysiology to the relation between a sound wave and the medium through which the wave is propagated. The medium is not the ultimate cause of the wave, which may have originated elsewhere. Similarly, the pattern of consciousness may have originated elsewhere than in the medium through which it is propagated in the brain (for example, in the structure of intelligible reality, or in the structure of human languages which convey ideas to individuals in a culture, or even partly in the structure of emotional demands of the organism as a whole). So the brain by itself does not simply 'cause' consciousness any more than a wooden door 'causes' a sound wave to have the pattern of Tchaikovsky's Sixth Symphony as the wave passes through the door. Also, just as there is a sense in which the pattern of the wave is 'equivalent' to the pattern of the movement of wood particles in the door, there is an analogous sense in which consciousness is 'equivalent' to some combination of patterns of change in its physical substrata (the brain, language, etc.). But there is another sense -- a more important sense, in many respects -- in which it is absurd to equate Tchaikovsky's Sixth Symphony with a particular wooden door. The door is incidental to the existence of the symphony. Though some medium for the propagation of the wave is needed, many other media would have done as well as this particular wooden door. In the same way, it is absurd to say that consciousness is 'equivalent' to certain patterns of change in its physical substrata. But to spell out the sense in which consciousness is not precisely identical with its neurophysiological substratum -- the sense in which the relation must be more complex than this -- we must await the appropriate point in the development of our argument.

We see, then, how an epistemological confusion can lead to an ontological one. By assuming that anything which is explainable must be explainable in objectively observable and measurable ways, we must also relegate intentional explanations not only to the epistemologically useless, but to the ontologically non-existent. The reason for this is not merely a matter of Ockham's razor. The reason is that two different explanations cannot both be both necessary and sufficient for the same explanandum. And to solve this problem, correlatively, requires more than a mere rejection of the epistemology of behaviorism. It requires that we re-open the traditional mind-body problem and give it a more careful solution than either dualist interactionism or epiphenomenalism or psychophysical identity theories have given.

4. The Distinction Between 'Desire' and Desire and Its Importance for Cognitive Theory; The Primacy of the Subjunctive Imagination in Conscious Processing

Just as cognitive theorists are prone to equate 'knowing' in a metaphorical sense with knowing in the conscious sense, so some other scientists are prone to run together 'desiring' and 'feeling' in a metaphorical sense with desiring and feeling in the sense of a conscious state of which one is aware and upon which attention can be focused. For example, biologists speak as though organisms engage in sexual activity 'for the purpose' of procreating, or as though RNA molecules 'wanted' to reproduce themselves, or as though the body 'wanted' to achieve a better salt balance by eliminating potassium or sodium. Chemists even speak as if negatively charged ions 'wanted' to interact with positive ones 'in order to' achieve electrical neutrality, or as if atoms 'wanted' to fill or empty their outer energy shells.

These metaphorical senses of 'emotional' terms serve just as useful a purpose as the metaphorical usage of 'representational' terminology like 'knowing' and 'seeing' -- provided again that we avoid equivocation. There seem to be 'purposeful' phenomena in the realm of non-conscious nature, and these phenomena need to be described and discussed. And they have a good deal in common with conscious desires and emotions.

Perhaps I should clarify my usage of the term 'representational' here. Throughout this book, I shall use 'representation' not in the sense of 'representational epistemology' -- as if what is in the mind were a 'copy' of external reality -- but in a broader sense. By a 'representation' in the mind, I mean any kind of intentional object whatever, in the sense that phenomenologists give to this term. I 'represent' something not only when I form the image of something which I think is real, but also when I imagine a completely fanciful object -- an impressionistic rendering of a unicorn, a musical melody which I have never heard before, or even an abstract concept which I do not believe corresponds to reality (such as the idea of 'God' in the mind of an atheist). Is there any consciousness which is not 'representational' in this sense? That is the same as asking whether there is consciousness which is non-intentional in the phenomenological sense -- as suggested by Husserl (1913/1969), Carr (1986: 26) and Globus and Franklin (1982). Globus and Franklin particularly emphasize the importance of non-intentional consciousness for even grasping the essence of what consciousness is, aside from its ability to refer and compute. Perhaps meditative trance states are examples. A pure pain without reference to any picture or image either of the painful limb or of the pain-inflicting object might be another example (the pure hylē in Husserl's sense). Or perhaps there are desires with no awareness of any object of desire -- such as a vague feeling of restlessness or dissatisfaction. This issue will be further discussed in a later context.

I have suggested that 'representation' in the realm of non-conscious information processing becomes representation in a conscious sense when motivational feelings lead us to 'look for' that which is considered important for the organism. We shall see that many 'desires' in the non-conscious sense interact with each other, and become more and more characterizable as desires in the conscious sense the more they can find substratum elements which allow representations relevant to the desired further unfolding of the life of the organism. This 'focusing' process (the term used by Gendlin 1981, 1992, who will be discussed extensively later) begins in the limbic region, leads to generalized arousal in the reticular activating system, sets up selective neuronal gating in the hippocampus, and activates the prefrontal region to formulate questions about what we need to experience or think about in order to meet the emotional need of the moment. The prefrontal area of the frontal lobe then leads us to 'look for' certain images and concepts, which become conscious notions through the activity of the parietal region and the secondary sensory areas. These in turn become a conscious perception when the patterns of consciousness thus set up in the parietal and secondary areas find a match in the patterns of sensory stimulation affecting the primary sensory or 'primary projection' area. The stimulation of the primary projection area by perceptual data is not yet the perceptual consciousness of anything until this entire process has transpired. This is why the parietal and secondary sensory areas are not simply stimulated by the primary projection area, which lies in such close proximity to them, but must await a complex process requiring about a third of a second to transpire before conscious awareness of the object occurs, and before the secondary area and the other brain areas just mentioned become active.

The way this complex interrelation develops between 'representation' in the metaphorical sense and 'desires' at the preconscious level, finally resulting in consciousness in the proper sense, can teach us a great deal not only about neurophysiological processes, but about what consciousness is and how it is produced by the natural world. Consciousness results when representation and desire become copresent in the same pattern of activity in such a way as to interrelate intentionally and purposefully: i.e., a representation intentionally and purposefully becomes the object of a desire, or a desire intentionally and purposefully becomes the object of a representation; there is a desire to represent something, or an attempt to represent the meaning of a desire or emotion.

We shall see that the main way in which desire meets representation is in the process of formulating a question (in humans, primarily a function of the prefrontal cortex). The desire to fulfill some motivation prompts us to ask ourselves about the meaning of the emotion that gives rise to it, and what kind of circumstances might meet the emotion's desires. When we formulate a question to ourselves, we must in the process envision the possibility that things could be different from the way they in fact are. We use subjunctives. We ask, 'What if things were this way, or that way, or if they had been this way or that way?'

The hallmark of conscious information processing, then, is that (I shall argue) it uses thought processes built up from elements that are essentially imaginative in nature. Rather than simply reacting to what is, to a stimulus input, it questions what is by comparing it to an image or concept of what might be, or might have been. A causal concept, for example, is a judgment that if the causal antecedent had occurred differently, given a certain context, then the consequent would have occurred differently -- a subjunctive judgment. Similarly, a moral statement in its most important and primary meaning is a judgment that if someone had acted differently in some way, then things would have been better -- and often that if someone were to force the person to act differently, the results would be worth the effort. In one way or another, even a moral judgment is a judgment about what might be or what might have been -- a subjunctive.

According to Piaget (1928, 1969), Streri et al. (1993) and Becker and Ward (1991), infants even learn to perceive elementary physical realities by using subjunctives. The object is one that, if pulled, would bounce back; or, if grabbed, would lend itself to being sucked; or, if thrown, would knock something else aside. And, of course, every object is one which, if turned around, would turn out to have another side, and thus would have thickness. These are all subjunctive thought processes.

The process of questioning is where desire overlaps with representation, and it is just in this overlap that 'desire' becomes conscious desire and 'representation' becomes conscious representation. I shall explain more completely what I mean by this in the body of the text.

Before we can discuss why it is that consciousness occurs only when desire meets representation in a certain way, and render really intelligible what this 'certain way' is, some other more elementary propositions must first be established as the groundwork for such a difficult discussion. That is what the first four chapters will attempt to do. Chapter 1 will examine the relationship between the imaginative and the perceptual consciousness of the same content element from a physiological and phenomenological perspective, ending with the conclusion that the efferent and imaginative production of an image or concept always precedes the afferent and perceptual consciousness of the same or a related content. In this way we can begin to understand how emotion, motivation, the formulation of questions, negations, mental images, and finally perceptual input interrelate with each other to form perceptual consciousness.

Chapters 2 and 3 will then consider how it is that mental images are refined, reorganized and elaborated so that they can be transformed into the abstract concepts that make up the building blocks of the more sophisticated adult cognitive processes. Adult logic is built up from concepts rather than just mental images. In the view that will be developed here, one must understand what is involved in the transition from the formation of a mental image to the formation of a mental concept before one can begin to describe the neurophysiological correlates of logical thought.

Chapter 4 will show that the process-substratum approach to the mind-body relation is the only one that does not become logically impossible given certain very elementary observable facts about the interrelations of conscious and neurophysiological processes. We can then lay the groundwork for the more difficult questions that must be considered in the remainder of the book. In what sense can one say that a process can determine the activities of its substratum elements rather than (only) the other way around? What is consciousness? How and why is it that 'desire' meets 'representation' to form conscious desire and conscious representation? And how do the various processes of the brain contribute to this outcome? These questions will be addressed in Chapters 5 and 6.

We shall see that the most elementary building block in this whole problematic is a phenomenological one. We must begin by carefully reflecting on what it means to imagine something.


[After Post-Modernism Conference. Copyright 1997.]