CONSCIOUSNESS AND THE BRAIN:
ANNOTATED BIBLIOGRAPHY


by

Ralph D. Ellis, Ph.D.
Clark Atlanta University
ralphellis@mindspring.com

Natika Newton, Ph.D.
New York Institute of Technology
nnewton@suffolk.lib.ny.us



D

Dahl, P., W.H. Bailey, and T. Winson. 1983. "Effect of norepinephrine depletion of hippocampus on neuronal transmission from perforant pathway through dentate gyrus". Journal of Neurophysiology 49: 123.
Study of the role of the hippocampus in preselecting perceptual items for attention.

Damasio, A.R. 1989. "Time-locked multiregional retroactivation: A systems-level proposal for the neural substrates of recall and recognition." Neurobiology of Cognition, Eimas, P.D. and Galaburda, A.M. (eds). Cambridge, Mass.: MIT Press.
Proposes a theory of distributed cortical processing for certain cognitive functions. Citing neuroanatomical studies by Hubel and Livingstone (1987) and others revealing the segregation of processing pathways, Damasio holds that in perception and memory reactivated fragments of sensory representations occur in physically separated sensory and motor cortices, and are unified by means of synchronized activation across the cortices.


Cognition, in Damasio's theory, makes use of representations realized in the reactivation of sensory and motor experience in coordinated sequences. We can represent, and hence think about: natural and artifactual entities; features and dimensions of those entities; and events consisting of interrelations of entities. Abstract entities are "criterion-governed conjunctions of features and dimensions present in the concrete entities outlined above" (42). Particular entities and specific biographical events, as opposed to categorical or generic ones, are individuated by context – by the constellation of other events, entities and features in which they occur.


The binding codes which control the co-activation are stored in "convergence zones," which receive input from the sensory and motor cortices and send feedback projections to them. The input does not transfer representations to the convergence zones; rather, "the signals represent temporal coincidences (co-occurrence) or temporal sequences of activation in the feeding cortices" (45). This account supports the claim that cognitive activity makes use of sensorimotor representations consisting of combinations of features, and that these representations themselves can express the sorts of conceptual distinctions that Jackendoff ascribes to the level of conceptual structure. Damasio describes the aspects of reality – concrete entities, abstract entities, and events – that can be represented by distributed sensorimotor representations.


On this account one's representations can reflect, for example, a TYPE/TOKEN distinction by means of the degree of autobiographical context combined with memories of specific entities or events, and one could represent logical relations among abstract entities with appropriate sensorimotor fragments and a highly reduced context. Attentional focus in connection with sensorimotor representations is also involved in aspect selection; it can determine which aspect of a representation is salient.
        In agreement with Eslinger and Damasio (1985), Damasio et al. (1985), and Nauta (1971), it is found that the selection of perceptual elements for conscious attention is partly a motivational process involving judgments about what is important for the organism's purposes. The transmission of emotional purposes (which involve midbrain activity) into questions that conscious beings formulate for themselves in order to seek out relevant information in the environment is a process which involves extensive prefrontal activity. (See also Luria 1973, 1980.)
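
A minimal sketch, not Damasio's model, may help fix the idea of a convergence zone for readers who think computationally: the zone stores only binding codes recording which fragments in the feeding cortices were co-active, and retroactivation re-fires the whole stored pattern from a partial cue. All names below are hypothetical illustrations.

    # Toy illustration of time-locked retroactivation via a "convergence zone".
    # Hypothetical names; a reading of the idea, not Damasio's actual model.

    class ConvergenceZone:
        def __init__(self):
            # Binding codes: records of which fragments co-occurred, not the
            # fragments themselves (no representations are transferred here).
            self.binding_codes = []

        def record_coactivation(self, active_fragments):
            """Store the temporal coincidence of fragments across feeding cortices."""
            self.binding_codes.append(frozenset(active_fragments))

        def retroactivate(self, cue_fragments):
            """Given a partial cue, return the full set(s) of fragments that
            co-occurred with it, to be re-fired in the sensory/motor cortices."""
            cue = set(cue_fragments)
            return [set(code) for code in self.binding_codes if cue <= code]

    zone = ConvergenceZone()
    # Perceiving a cup: visual shape, colour, and a grasping movement co-occur.
    zone.record_coactivation({"V:cup-shape", "V:white", "M:grasp-handle"})

    # Recall: a single fragment cues reactivation of the whole stored pattern.
    print(zone.retroactivate({"V:cup-shape"}))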

Damasio, A. R. 1994. Descartes' Error. New York: Putnam & Sons.
Mental models can be realized in the brain through the activation of image schemas, and these, in turn, through the mechanisms proposed by Damasio: the 'reconstruction of a transient pattern (metaphorically, a map) in early sensory cortices, [triggered by] the activation of dispositional representations elsewhere in the brain' (105). Recognizing what is seen requires mapping input onto memory traces of similar objects; Damasio summarizes the relation between memory representations and novel somatosensory input:

    Although the early sensory cortices and the topographically organized representations they form are necessary for images to occur in consciousness, they do not, however, appear to be sufficient. In other words, if our brains would simply generate fine topographically organized representations and do nothing else with those representations, I doubt we would ever be conscious of them as images. How would we ever know they are our images? Subjectivity, a key feature of consciousness, would be missing from such a design. Other conditions must be met.
    In essence those neural representations must be correlated with those which, moment by moment, constitute the neural basis for the self. (99)

Because of this interdependence between internally generated input and external input, an account of conscious perception will also be an account of conscious imagery. Johnson-Laird does not hold that mental models must be images; he leaves open the question of how they are realized in the brain. Damasio, however, specifies the use of sensory images in cognitive activity. Images, in his view, are firings in sensory cortices generated by activated 'dispositional representations,' codes stored in 'convergence areas' in regions such as the hypothalamus, brain stem, and limbic system.

    Acquired knowledge is based on dispositional representations in higher-order cortices and throughout many gray-matter nuclei beneath the level of the cortex. Some of these dispositional representations contain records for the imageable knowledge that we can recall and which is used for movement, reason, planning, creativity; and some contain records of rules and strategies with which we operate on those images. . .
    The appearance of an image in recall results from the reconstruction of a transient pattern (metaphorically, a map) in early sensory cortices, and the trigger for the reconstruction is the activation of dispositional representations elsewhere in the brain, as in the association cortex. (105)

Damasio argues that 'thought is made largely of images:'

    It is often said that thought is made of much more than just images, that it is also made of words and nonimage abstract symbols. Surely nobody will deny that thought includes words and arbitrary symbols. But what this statement misses is the fact that both words and arbitrary symbols are based on topographically organized representations and can become images. Most of the words we use in our inner speech, before speaking or writing a sentence, exist as auditory or visual images in our consciousness. If they did not become images, however fleetingly, they would not be anything we could know. . .The point, then, is that images are probably the main content of our thoughts, regardless of the sensory modality in which they are generated and regardless of whether they are about a thing or a process involving things; or about words or other symbols, in a given language, which correspond to a thing or process. (106-108; italics added)

Damasio, A. R. 1992. "Aphasia." The New England Journal of Medicine 326, 8, 531-539.
In contrast to Broca's aphasia, speech difficulties caused by damage to frontal areas such as the supplementary motor area, which are involved in attention and emotion as well as the initiation of voluntary movement, are associated with cognitive deficits. Patients exhibit akinesis (difficulty in initiating movement) and mutism, and on recovering "they describe a peculiar experience: the range and resonance of thought processes was reduced, and their will to speak preempted" (537). True Broca's aphasia requires damage not only to Broca's area in the left frontal cortex, but also to surrounding areas which form part of a network associated with relational aspects of language, including grammatical structure. This network includes areas in the parietal cortex (concerned with somatosensory information) and the sensorimotor cortices (532-534).

Damasio, A. and Damasio, H. 1992. "Brain and language." Scientific American, September, 88-109.

Damasio, Antonio, and G.W. Van Hoesen. 1983. "Emotional disturbances associated with focal lesions of the limbic frontal lobe". In Kenneth Heilman and Paul Satz (eds), Neuropsychology of Human Emotion. New York: Guilford Press.
Study correlating EEG patterns and CT scans with feelings of elation and depression. See also Gainotti et al (1993).

Damasio, Antonio, P.J. Eslinger, H. Damasio, G.W. Van Hoesen, and S. Cornell. 1985. "Multimodal amnesic syndrome following bilateral temporal and basal forebrain damage". Archives of Neurology 42: 252-259.
Unless the network of frontal-limbic connections is intact, images may produce a vague `feeling of familiarity,' but their meaning and context cannot be recalled.

Dascal, Marcelo. 1987. "Language and reasoning: Sorting out sociopragmatic and psychopragmatic factors". In J.C. Boudreaux, B. W. Hamill, and R. Jernigan (eds), The Role of Language in Problem Solving 2. Elsevier: North-Holland, 183-197.
Good summary of Dascal's theory that subjects intentionally act in order to process linguistic patterns according to the problem-solving demands posed by their own purposes, as opposed to simply registering the output of microprocessing executed by a non-conscious computational system in the brain.

Dascal, Marcelo, and Berenstein, I. 1987. "Two modes of understanding: Comprehending and grasping." Language and Communication 7, 2, 139-151.
The "matching bias," presents the finding that subjects are sensitive to the linguistic form of the instructions: the items on a card are considered relevant if they match the terms in the hypothesis. This suggests that language does play a role in thought. Dascal proposes that the field of psychopragmatics, which studies "the way in which the 'linguistic environment' of thought influences thought" (1987: 190), is the appropriate forum for the exploration of this role, and he points out the difficulties of acquiring evidence of purely noncommunicative uses of language.

Dascal, M. 1985. "Language use in jokes and dreams: Sociopragmatics vs psychopragmatics." Language and Communication 5, 2, 95-106.

Davidson, Donald. 1970. "Mental events". In Lawrence Foster and Joe W. Swanson (eds), Experience and Theory. Amherst: University of Massachusetts Press, 79-102.
Suggests that mental and physical events are the same events viewed from two different perspectives: subjective (through introspection) and objective (through empirical observation). The apparent differences between consciousness and brain activity are thus attributed to the fact that the same phenomenon can be viewed in these two ways. But this move raises a further problem: why is there a difference between what is known from subjective introspection into my own consciousness, on the one hand, and what is known from objective empirical observation of my own brain, on the other? If the subjective and the objective perspectives are two different ways of knowing the same thing, why is that which is known different in the two cases? If a neurologist had never experienced anything like a headache, she could never learn what it feels like to have a headache from any amount of empirical observation of someone's nervous system. But if the facts known when we know our own consciousness remain different from the facts known when we know our own brains, no matter how much of both types of information we acquire, then these facts must not be the same facts (although the two types of facts must be very intimately interrelated, or even completely inseparable by their very nature).

Davidson, D. 1984. Inquiries into Truth and Interpretation. Oxford: Clarendon Press.

Deacon, Terrence W. 1997. The Symbolic Species: The Co-Evolution of Language and the Brain. New York: W. W. Norton.
The first section treats symbols and language, the second the brain's language specializations, and the third the coevolution of language and the human brain, ending with Darwinian views of consciousness.

Dement, William. 1958. "The occurrence of low voltage, fast electroencephalogram patterns during behavioral sleep in cats". Electroencephalography and Clinical Neurophysiology 10: 291-293.
Empirical finding that during dreams the modally specific imaging areas of the parietal and secondary sensory areas are activated.

Dement, William, and Nathaniel Kleitman. 1957. "Cyclic variations in EEG during sleep and their relation to eye movements, body motility and dreaming," Electroencephalography and Clinical Neurophysiology 49: 673-676.
`Dream-like thoughts' reported during stage I of sleep.

Denis, Michel. 1991. "Imagery and thinking". In Cesare Cornoldi and Mark McDaniel (eds), Imagery and Cognition. New York: Springer-Verlag, 103-131.
"Imagery is not the core of thought processes, but rather a potential medium for them (104)." Denis also interprets the data of Shaver, Pierson and Lang (1974-1975) along the same lines: "Their experiments on syllogistic reasoning provide evidence that imagery, without being a necessary component of processes involved in reasoning, has a significant facilitatory effect on reasoning. Their data suggest that some problems are more likely than others to benefit from strategies relying on visualization" (106,

Dennett, Daniel. 1969. Content and Consciousness. London: Routledge & Kegan Paul.
Develops one of the earliest coherent computational theories of information processing. He poses the problem of the causation of conscious imagery this way: "The problem of tracing the link between stimulus conditions and internal events far from the periphery should not be underestimated. . . . It is not to be expected that central events can be easily individuated in such a way that they have unique or practically unique sources in external stimulation. Suppose we tentatively identify a certain central event-type in a higher animal (or human being) as a perceptual report with the content `danger to the left'. . . . We would expect the idea of `danger to the left' to be capable of occurring in many contexts: not only in a perceptual report, but also as part of a dream, in hypothetical reasoning (`what if there were danger to the left'), as a premonition, in making up a story, and of course in negative form: `there is no danger to the left'. What is to be the relationship between these different ways in which this content and its variations can occur? . . . Certainly for any event or state to be ascribed a content having anything to do with danger to the left, it must be related in some mediated way to a relevant stimulus source, but the hope of deciphering this relation is surely as dim as can be (80-81)." Also points out that "No afferent can be said to have a significance `A' until it is `taken' to have the significance `A' by the efferent side of the brain" (74).
        Dennett also argues here against an imagery theory of thought, and in favor of a descriptionalist theory, on the grounds that the pictorial nature of visual images limits in various ways their usefulness in representation. For one thing, indeterminacy in some particular respect may be a feature of some representations, but while descriptions can be indeterminate, visual images are determinate. Thus if I think of a tiger I think of an animal with an indeterminate number of stripes, but in a picture or a visual image of a tiger the stripes are countable. For another thing, a description can contain information that a visual image cannot. An example is given by Dennett:

    Another familiar puzzle is Wittgenstein's duck-rabbit, the drawing that now looks like a duck, now looks like a rabbit. What can possibly be the difference between seeing it first one way and then the other? The image (on paper or on the retina) does not change, but there can be more than one description of that image. (137)

The point is that while there is a difference between seeing the drawing as a duck and seeing it as a rabbit, that difference is captured only in the description.

Dennett, Daniel. 1991. Consciousness Explained. Boston: Little, Brown and Co.
Argues for a theory of `heterophenomenology,' acknowledging that many of the questions we answer through objective methods would never be asked if not for subjective concepts — notice the change here from his earlier thinking. Argues that language structures determine our thoughts, since "of all the structures we become acquainted with in the course of our lives, certainly the most pervasive and powerful source of discipline in our minds is our native tongue," and that "the obligatory structures of sentences in our languages are like so many guides at our elbows, reminding us to check on this, to attend to that, requiring us to organize facts in certain ways" (300). Dennett expressly rejects the language of thought hypothesis (302). His account of the influence of language upon thought is more behavioristic; thinking, at the level he speaks of in the passages above, is analyzed as a form of covert language behavior. There is no deeper level at which the linguistic elements of this behavior have "meaning" or "intrinsic intentionality."

Dennett, D. 1988. "Quining qualia." Consciousness in Contemporary Science, Marcel, A. and Bisiach, E. (eds.), 42-77. Oxford: Clarendon Press.

Dennett, D. 1987. The Intentional Stance. Cambridge, Mass: MIT Press.

DeRenzi, E. 1982. "Memory disorders following focal neocortical damage." Philosophical Transactions of the Royal Society of London, 73-83.

Devitt, M. 1990. "A narrow representational theory of the mind." In Mind and Cognition, Lycan, W.G. (ed.), 371-398. Oxford: Basil Blackwell.

Dewan, Edmond M. 1976. "Consciousness as an emergent causal agent in the context of control system theory". In Gordon Globus, Grover Maxwell and Irwin Savodnik (eds), Consciousness and the Brain. New York: Plenum Press, 179-198.

Dimond, Stuart. 1980. Neuropsychology: A Textbook of Systems and Psychological Functions of the Human Brain. London: Butterworth.
Shows the importance of internal speech for the development of thinking ability, as also emphasized by Don Tucker (1981) and by the Russian psychologist Lev Vygotsky (1962), who strongly influenced Luria. According to these researchers, thought processes seem to evolve during childhood primarily as left-hemisphere `internalizations' of self-directed talk, which develop only with the maturation of nerve pathways interconnecting both cerebral and subcortical brain structures, including the corpus callosum and the frontal lobe-limbic system connections.

Dinsmore, J. 1987. "Mental spaces from a functional perspective." Cognitive Science 11, 1-21.
Proposes a knowledge structuring system in which propositional attitude contexts are signaled by "space builders" such as "Henry believes . . ." What Henry believes is represented in a mental space which is distinct from "real world" space, but whose occupants can be mapped onto occupants of the real world space in certain specified ways. The mental "spaces" idea, introduced by Fauconnier (1985), is developed further by Dinsmore. Mental spaces, which are "domains used for consolidating certain kinds of information" (2), represent specific contexts. Dinsmore explains the process of 'knowledge partitioning' and its relation to simulation:

    The process of distributing knowledge over spaces according to context will be called knowledge partitioning. For instance, the knowledge that George believes p is represented by the assertion of p in a "George believes that ____" space. Knowledge partitioning seems to belong to the processes by which mental models are constructed in Johnson-Laird's (1983) theory. The reciprocal process of using contexts to access knowledge, thereby realizing global consequences of simulative reasoning, will be called context climbing. For instance, if q has been derived in George's belief space relative to the real world, and is therefore represented there as true, then by context climbing the proposition that George believes that q is available as a true proposition in the real world. (7)

The process of assigning new information acquired during discourse to the proper space does not require that each new item be explicitly tagged; the appropriate space can be implicit:

    The same kinds of inferencing involved in processing discourse about the real world would seem to be involved in processing discourses about other spaces, since the structures of the discourses seem to be similar. I will call the space with respect to which a discourse is understood the focus space for that discourse. The most obvious kind of example is provided in fictional discourses. However, another interesting example is found in a discourse that is initiated like this:
    (1) Arthur believes it is the duty of everyone to fight what he thinks is an invasion of space frogs. Before this situation gets out of hand, every homeowner should defrog his own yard, taking care to ____

    After the initial shift in focus space, the discourse could continue indefinitely with the implicit understanding that we are talking about Arthur's belief space rather than about the real world. This space has become active and inference processes relevant to discourse understanding apply locally to this domain. (13)

Dinsmore goes on to apply the model to the understanding of conditionals and counterfactuals, and to problems involving presuppositions, and he indicates that the model might support analogical and metaphorical reasoning.
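
For readers who want the mechanics made concrete, here is a minimal sketch, under our own simplifying assumptions rather than Dinsmore's notation, of knowledge partitioning and context climbing: propositions are asserted within named spaces, and a proposition derived inside a belief space can be "climbed" into the parent space as a propositional-attitude report. All class and variable names are hypothetical.

    # Minimal sketch of Dinsmore-style knowledge partitioning and context climbing.
    # Hypothetical names; an illustration of the idea, not Dinsmore's formalism.

    class Space:
        """A mental space: a named context holding propositions taken as true there."""
        def __init__(self, name, parent=None, builder=None):
            self.name = name
            self.parent = parent        # enclosing space, e.g. the "real world"
            self.builder = builder      # space-building phrase, e.g. "George believes that"
            self.propositions = set()

        def assert_prop(self, p):
            """Knowledge partitioning: assert p within this space only."""
            self.propositions.add(p)

        def climb(self, p):
            """Context climbing: if p holds in this space, make the corresponding
            propositional-attitude report available in the parent space."""
            if self.parent is not None and p in self.propositions:
                report = f"{self.builder} {p}"
                self.parent.assert_prop(report)
                return report
            return None

    real_world = Space("real world")
    george = Space("George's beliefs", parent=real_world,
                   builder="George believes that")

    george.assert_prop("p")
    george.assert_prop("q")           # e.g. q was derived inside George's belief space
    print(george.climb("q"))          # -> "George believes that q", now true in real world
    print(real_world.propositions)

A focus space, on this reading, would simply be the space against which incoming discourse is asserted by default once the initial shift has been made.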

Dore, John, Margery Franklin, Robert Miller, and Andrya Ramer. 1976. "Transitional phenomena in early language acquisition". Journal of Child Language 3: 13-27.
Measures of neural activity in various parts of the brain correlated with language acquisition.

Dretske, F. 1985. "Machines and the mental." Proceedings and Addresses of the APA (1985) vol. 59, 23-33.

Dreyfus, Hubert. 1979. What Computers Can't Do. New York: Harper & Row.
Classic statement of the arguments against the computational approach to human cognition, emphasizing the differences between digital-computational and human mental processes.

Dummett, M. 1973. Frege: Philosophy of Language. London: Duckworth.

E

Edelman, Gerald M. 1989. The Remembered Present. New York: Basic Books.
Postulates special memory repertoires which store both internally and externally generated data, such that the distinction between self (in the biological sense) and nonself, and associated values, are represented. This memory interacts with current perceptual (partially categorized) processes:

    The functioning of the memory repertoires depends upon connecting interoceptive signals (which are primary and relate to value but cannot be categorized in spatio-temporal detail) to those exteroceptive signals that can be categorized in great detail and that happen to be temporally correlated with those interoceptive signals. This results in a value-dominated memory system that associates sensorimotor perceptual categories with value states. (98-99)
    . . . In effect, primary consciousness results from the interaction in real time between memories of past value-category correlations and present world input as it is categorized by global mappings (but before the components of these mappings are altered by internal states). (155)

Edelman's account needs no homunculus to unify the components since unification is effected by correlation of signals brought about by reentry:

    Since reentry to and from mapped receiving areas correlates the various signals emerging from an object, and since previous categorizations in memory can interact with the outputs of these reentrant paths, there is no homunculus "looking at the image." It is the discriminative comparison between a value-dominated memory involving the conceptual system and current ongoing perceptual categorization that generates primary consciousness of objects and events. (155)

Edelman, G. M. 1987. Neural Darwinism. New York: Basic Books, Inc.
In hierarchical systems, results of higher-level processing are fed back into the system at a lower level, to be processed along with new external signals to update a representation or map of the external stimulus. In an early stage of perception external signals generate a map:

    The main function of such a map is to provide a reference for higher-order input-output relationships and successive mappings in a reentrant system. Inasmuch as other regions of the nervous system (and of the cortex in particular) must carry out routines involving multimodal input, abstractions, and map-free routines, a place must be maintained for continual reference to continuity properties. This place is the local map and its constituent domains within the primary receiving areas. (109-110)

Edelman, G. M. 1978. "Group selection and phasic reentrant signalling." In Edelman and Mountcastle, 1978.
Proposes that reentrant signaling is a mechanism allowing different processing units to exchange their results, thereby providing for continuity and updating of the representation of the world. Reentry occurs when an "internally generated signal is reentered [functions as input to another module] as though it were an external signal" (76; italics in original).
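
A minimal sketch of the reentry idea as we read it (hypothetical names, not Edelman's formalism): one module's internally generated output is passed to another module exactly as if it were an external signal, so that each module's local map is updated by both external and reentered input.

    # Minimal sketch of reentrant signalling between two processing modules.
    # Hypothetical illustration of the idea, not Edelman's model or notation.

    class Module:
        def __init__(self, name):
            self.name = name
            self.map = []           # crude stand-in for the module's local map

        def process(self, signal):
            """Update the local map; the module cannot tell whether the signal
            came from the periphery or was reentered from another module."""
            self.map.append(signal)
            return f"{self.name}({signal})"

    def reentrant_step(a, b, external_signal):
        """One cycle: A processes external input, its output is reentered into B,
        and B's output is reentered back into A, updating A's map again."""
        out_a = a.process(external_signal)
        out_b = b.process(out_a)      # internally generated signal treated as input
        a.process(out_b)              # reentry closes the loop

    visual, motor = Module("V1"), Module("M1")
    reentrant_step(visual, motor, "edge@left")
    print(visual.map)   # contains both the external signal and the reentered one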

Edelman, G.M. 1992. Bright Air, Brilliant Fire. New York: Basic Books.

Edelman, G.M. and Mountcastle, V. B. 1978. The Mindful Brain. Cambridge: The MIT Press.
Reentrant signaling, by means of feedback loops carrying signals from higher to lower areas processing sensory input, is a component of distributed neural systems, which are composed of large numbers of modular elements linked together in echeloned parallel and serial arrangements. Information flow through such a system may follow a number of different pathways, and the dominance of one path or another is a dynamic and changing property of the system.

Edwards, Betty. 1979. Drawing on the Left Side of the Brain. Boston: Houghton Mifflin.
Interesting discussion of spatial perception as related to the nondominant hemisphere. Confirms Luria (1973: 270) in finding that differentiation between left and right brain functioning is not substantially complete until around age 10-11 (see esp. p. 59).

Ellis, Ralph D. 1980. "Prereflective consciousness and the process of symbolization". Man and World 13: 173-191.
Attempt to clarify the phenomenology of the relationship between symbolization and the movement from preconscious to conscious status on the part of conscious contents. See also Ellis (1986) and Ellis (1995).

Ellis, Ralph D. 1983. "Agent causation, chance, and determinism". Philosophical Inquiry 5: 29-42.
Argument for a deterministic account of human action.

Ellis, Ralph D. 1986. An Ontology of Consciousness. Dordrecht: Kluwer/Martinus Nijhoff.
Mixing the work of Merleau-Ponty and other phenomenologists with contemporary philosophy of mind, a `process-substratum' theory of the mind-body relation is developed. Also develops a theory of the `self', based on this ontology, which performs unifying functions for consciousness. It is also argued that the more exactly the image or concept corresponds to the desire (i.e., can serve to provide it with an appropriate substratum), the more conscious the organism is of that particular desire. The more closely the `representation' produced by a `desire' matches the desire in this sense, the more conscious the organism is of both the representation and the corresponding desire. Also points out that a symbol for a state of consciousness is a representation which works especially well to help provide the substratum for that state of consciousness: saying the word `tree' helps me to enact the corresponding consciousness in myself.

Ellis, Ralph. 1990. "Afferent-efferent connections and `neutrality-modifications' in imaginative and perceptual consciousness". Man and World 23: 23-33.
Argues on both phenomenological and neurological grounds that afferent-efferent connections facilitate `Neutrality-Modifications' (i.e., the relationship between perceptual and merely imagined mental contents) in conscious processes.

Ellis, Ralph D. 1991. "A critique of concepts of non-sufficient causation," Philosophical Inquiry 13: 22-42.
Argument that causation cannot be coherently formulated as a system in which the antecedent is `necessary but not sufficient' to produce its consequent under given background conditions.

Ellis, Ralph D. 1992b. "A thought experiment concerning universal expansion". Philosophia 21: 257-275.
In agreement with Whitehead (1925) and Cassirer (1923-1953), argues that nothing is what it is except in relation to other things. We cannot describe the location of a particle of wood except in relation to patterns of movement with which the particle is involved; thus we cannot describe, let alone explain, the pattern of a sound wave merely by conjunctively stating that this particle moves here, and that particle moves there, since these discrete movements have meaning only in relation to the pattern of the larger process. Even the `here' and the `there' mean something only in relation to this larger pattern. But the materialist response is that, when we describe the movements of the various particles, we have already described them by means of measurements which by their very nature take account of the relativity of motion, so there is still no need to bring in a meta-level of entities such as `sound waves' in order to explain the movement of the wood. Each particle just moves because the adjacent particle caused it to move, and there is no need to speak as though this explanation were insufficient by saying that the `pattern of the wave' causes the door to vibrate. Why not just eliminate all unnecessary vocabulary and speak only of the location, distance and velocity of the movement of each particle? Because many patterns are constituted by the interaction rather than the mere juxtaposition of substratum elements, so that the pattern is constitutionally plastic.

Ellis, R. D. 1995a. "The imagist approach to inferential thought patterns: The crucial role of rhythm pattern recognition." Pragmatics and Cognition 3, 1, 75-110.
Ellis argues that one way students may learn to recognize common inference patterns in logic is through rhythm-pattern imagery.

    Once the students can recognize the patterns in a visual diagram, then they have to learn to recognize the same pattern in a temporal rhythm which they hear or imagine hearing when someone speaks an argument. . . For instance, . . we could say that 'this implies that; this; therefore that' is modus ponens, whereas 'this implies that; that; therefore this' is the fallacy of affirming the consequent. . . We can hear these temporal rhythms just as we would hear a recognizable pattern in music. (83-84)
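
A toy sketch of the pattern-recognition point, on our own simplified reading: token sequences stand in here for the heard or imagined rhythm patterns Ellis describes, and the function names are hypothetical.

    # Toy pattern-recognizer for two argument forms discussed by Ellis.
    # Hypothetical illustration: token patterns stand in for the temporal
    # rhythm patterns Ellis describes hearing or imagining.

    def classify_argument(conditional, premise, conclusion):
        """conditional is a pair (antecedent, consequent)."""
        antecedent, consequent = conditional
        if premise == antecedent and conclusion == consequent:
            return "modus ponens"                 # "this implies that; this; therefore that"
        if premise == consequent and conclusion == antecedent:
            return "affirming the consequent"     # "this implies that; that; therefore this"
        return "unrecognized pattern"

    print(classify_argument(("p", "q"), "p", "q"))   # modus ponens
    print(classify_argument(("p", "q"), "q", "p"))   # affirming the consequent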

Ellis, Ralph D. 1995. Questioning Consciousness. Amsterdam: John Benjamins.
Develops a theory of consciousness in which, in order for consciousness to occur, the organism must attend to sensory input and look for a particular pattern of data. Incoming stimuli will not produce conscious states in a passively receptive organism: "Only after the organism purposely chooses to focus attention in a certain way can the 'looking-for' consciousness occur in the form of the consciousness of an image or possibility. And only after this imaginal consciousness develops . . . does perceptual consciousness occur corresponding to the perceptual object whose input is activating the primary projection area." (55) On Ellis's account, a mental image of the object is matched to a pattern of sensory input; the result is perceptual consciousness. Imagery is thus essential to perceptual consciousness.
        Brings together neuroscientific, psychological and phenomenological research, combining in a readable format recent developments in image research and neurology with a reassessment of the mind-body relation, and research on `mental models,' abstract concept formation, and acquisition of logical and apparently `imageless' inference skills. The result is a new theory of the relationship between consciousness, non-conscious cognitive processes, and the brain, emphasizing that, when the brain forms a mental image or concept, it is engaging in an efferent and emotionally-motivated `looking-for' process whose pattern corresponds to the kinds of efferent processes in which the brain would engage if the type of object in question were actually present and perceived. Even when the object is perceived, the efferent `looking-for' activity is prior to the afferent registering of sensory information, and it is this active, efferent activity which constitutes our consciousness of the object, not the passive, afferent receiving of the information.
        Based on this observation of the way efferent and afferent patterns interrelate, a theory is developed to show how consciousness occurs in the brain. It is argued that all consciousness is ultimately built up from subjunctives, in the sense that to be conscious of an object is essentially to imagine in a habituated way what would happen if we were to perform certain actions in relation to the object; and that mental images (including the imagining of proprioceptive, sensorimotor, and bodily-rhythmic conditions) fit together to build up abstract, `imageless' concepts. To entertain an abstract or `imageless' concept is to put ourselves into a `condition of readiness' to entertain the sequence of images which would be needed if we were to explicate an ostensive definition of the concept, and to feel `confident' that we could do so if needed even though we normally inhibit most of this explicating process when we use a concept.
        This analysis as a whole shows why conscious information processing is so structurally different from yet interrelated with non-conscious processing; how logical inference depends on the `pragmatic' imagining of recognizable rhythm-patterns in the sequencing of information; how mind and body interrelate as a process to its substratum (for example, as a sound wave relates to the medium through which it passes); how the various brain areas interact to produce corresponding kinds of conscious states; and how various cognitive processes such as memory, concept-formation, and symbolic behavior can be accounted for in terms of a theory which carefully combines both neurological and phenomenological observations.
        The phenomenological observations are important because they help us to distinguish between conscious and non-conscious instances of information processing -- a crucially important distinction because the two types of processing are so different. It is argued that `desire' (in the non-conscious sense) becomes desire in the conscious sense only when it becomes a process which is capable of and motivated toward appropriating and reproducing elements to be used as its own substratum by growing to include a representation of the missing elements, or at least a representation of contents ideationally related to the missing elements. For example, the `desire' for cellular sustenance grows to include proprioceptive images of oneself eating and then imaginary representations of edible objects which finally find matching patterns of activity in the primary projection area if sensory input from such an object in the environment is received. A desire which is conscious, even to a minimal extent, is one which is capable of controlling, appropriating and reproducing elements of its own substratum in such a way as to form within itself a representation (however vague and approximate) of that of which it is a desire. Without this primacy of the process over its own substratum, an event cannot qualify as a conscious event.
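
The efferent 'looking-for' account can be pictured schematically as anticipation-driven matching: a motivating state first generates an expected pattern, and perceptual consciousness arises only when afferent input sufficiently matches that pattern. The sketch below is our own rough rendering under that assumption, with hypothetical names and thresholds, not Ellis's formulation.

    # Schematic rendering of an efferent "looking-for" process followed by
    # afferent matching. Hypothetical names; our illustration of the idea only.

    def looking_for(motivation):
        """Efferent phase: motivation generates an anticipated pattern (an image)."""
        anticipated = {"hunger": ["round", "red", "graspable"]}   # e.g. an apple schema
        return anticipated.get(motivation, [])

    def perceive(anticipated_pattern, sensory_input, threshold=0.6):
        """Afferent phase: perceptual consciousness of the object occurs only if
        the input sufficiently matches the pattern already being looked for."""
        if not anticipated_pattern:
            return None
        overlap = len(set(anticipated_pattern) & set(sensory_input))
        match = overlap / len(anticipated_pattern)
        return "conscious percept" if match >= threshold else None

    image = looking_for("hunger")                      # imaginal consciousness first
    print(perceive(image, ["round", "red", "stem"]))   # -> conscious percept
    print(perceive(image, ["flat", "grey"]))           # -> None (no matching percept)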

Elman, Jeffrey. 1993. "Learning and development in neural networks: the importance of starting small". Cognition 48: 71-99.
Summarizes empirical findings indicating that the initially unprogrammed nature of neural structures is conducive to complex learning (with emphasis on synergistic exchange between maturational change and the ability to learn complex domains such as language); if the brain were too hardwired, complex learning would be impossible.

Eslinger, Paul J. and Antonio R. Damasio. 1985. "Severe disturbance of higher cognition after bilateral frontal lobe ablation: Patient EVR". Neurology 35: 1731-1741.
Clinical study of the effects of bilateral frontal lobe damage.

Evans, G. 1982. Varieties of Reference. New York: Oxford University Press.
Sees a necessary connection between perceiving objects in space and having an idea of one's body: "Any thinker who has an idea of an objective spatial world – an idea of a world of objects and phenomena which can be perceived but which are not dependent on being perceived for their existence – must be able to think of his perception of the world as being simultaneously due to his position in the world, and to the condition of the world at that position. The very idea of a perceivable, objective, spatial world brings with it the idea of the subject as being in the world, with the course of his perceptions due to his changing position in the world and to the more or less stable way the world is" (222).
        Also argues for the ultimate indexicality of our conception of objective space. Thinking objectively about space is dependent upon a 'cognitive map:' a representation in which the spatial relations of several distinct things are simultaneously represented (151). Employing a cognitive map requires the ability to identify this map with egocentric space – a framework centered upon one's own body. The "ability to think about an objective spatial world at all"

    presupposes the ability to represent the spatial world by means of a cognitive map. But nothing that the subject can do, or imagine, will entitle us to attribute such a representation to him if he cannot make sense of the idea that he might be at one of the points representable within his map. (163)

Understanding the idea of public, objective space requires understanding egocentric space, whose content is given by our own experiences:

    Egocentric spatial terms are the terms in which the content of our spatial experiences would be formulated, and those in which our immediate behavioral plans would be expressed. This duality is no coincidence: an egocentric space can exist only for an animal in which a complex network of connections exists between perceptual input and behavioral output. (154)
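
Evans's requirement that a cognitive map be identifiable with egocentric space can be illustrated, very roughly, by the coordinate transform that re-expresses an allocentric map location relative to the subject's position and heading. The sketch below is our illustration only; the function and variable names are hypothetical and nothing here is Evans's own apparatus.

    # Toy illustration of relating a "cognitive map" (allocentric coordinates) to
    # egocentric space centred on the subject. Hypothetical names throughout.

    import math

    def to_egocentric(landmark_xy, subject_xy, heading_rad):
        """Express a landmark's map position in a frame centred on the subject's
        body, with axes 'forward' and 'right' of the subject."""
        dx = landmark_xy[0] - subject_xy[0]
        dy = landmark_xy[1] - subject_xy[1]
        forward = dx * math.cos(heading_rad) + dy * math.sin(heading_rad)
        right = dx * math.sin(heading_rad) - dy * math.cos(heading_rad)
        return forward, right

    # Landmark at map point (3, 4); subject at (1, 1) facing "north" (+y axis).
    print(to_egocentric((3, 4), (1, 1), math.pi / 2))   # ~ (3.0, 2.0): 3 ahead, 2 right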

Evans, Jonathan. 1993. "The mental model theory of conditional reasoning: critical appraisal and revision". Cognition 48: 1-20.
Presents empirical findings inconsistent with Johnson-Laird's `mental models' hypothesis. It is argued that mental models cannot account for the relative frequency with which people use and misuse certain inference rules such as modus ponens and modus tollens. For example, it fails to account for the `negative conclusion bias' when negated components are introduced into the rules. Evans suggests certain revisions in mental models theory which might account for these anomalies.

Evarts, Edward. 1979. "Brain mechanisms of movement," Scientific American Sept., 164-179.
Whether feedback regarding action error is positive or negative does not depend on a decision of the agent, such that it would be easy to mistake one for the other. It is part of the 'reflexive' nature of the motor procedure that feedback can be processed appropriately without conscious attention by the agent (although not always without the possibility of such attention). At the minimum, learning takes place, which requires distinguishing between success and failure. So the existence of action error on this level is a real, objective fact, all the conditions of which can be completely grasped by the one who commits the error.

F

Farah, Martha. 1989. "The neural basis of mental imagery". Trends in Neuroscience 12: 395-399.
Seems to find efferent but not afferent occipital involvement in imagery. Observes that, when subjects think about abstractions, their occipital lobes are not very active. Reports that "mental imagery involves the efferent activation of visual areas in prestriate occipital cortex, parietal and temporal cortex, and . . . these areas represent the same kinds of specialized visual information in imagery as they do in perception." (395)

Fauconnier, G. 1985. Mental Spaces: Aspects of Meaning Construction in Natural Language. Cambridge: The MIT Press.
Any state of affairs we can understand, however abstract, involves a domain consisting of entities and their interrelations, which can be represented by the image schemas developed for physical actions on physical objects. A domain is a `space,' in Fauconnier's terms. Whether the space contains trees, numbers, clashing ideologies, or Plato's Forms, we know how to represent it, and we understand the logical rules its inhabitants must obey. Fauconnier's concern, as well as that of Ballim et al., is with nested belief clusters structured so as to facilitate propositional attitude reasoning (see also Dinsmore 1987); such belief clusters may be represented in propositional form. The construction of models is constrained by the possibilities of relationships or interactions among the elements of the domain of discourse. One can think of these models as virtual spaces in which occupiers are manipulated according to rules. Reasoning about the entities in a domain is carried out by mental manipulation of the occupiers. Relationships among domains are represented by mapping rules applying to two or more mental spaces.

Flanders, M., Tillery, S.I.H., and Soechting, J.F. 1992. "Early stages in a sensorimotor transformation." Behavioral and Brain Sciences 15, 2, 309-362.

Feigl, Herbert. 1958. "The `mental' and the `physical'". In Herbert Feigl (ed), Minnesota Studies in the Philosophy of Science, II. Minneapolis: University of Minnesota Press, 370-497.
Says that the notion of a `mental event' is a nomological dangler, unnecessary to any physical explanation of the particular pattern of brain activity in question. Once we describe and explain the motion of each individual particle, we have described and explained all that needs to be described or explained.

Feuerbach, Ludwig. 1966. Principles of the Philosophy of the Future. Indianapolis: Bobbs-Merrill.
Early proposal of a monistic relation between mind and body, with the mind construed as a `predicate' of the body.

Flor-Henry, Pierre, L.T. Yeudall, Z.T. Koles, and B.G. Howarth. 1979. "Neuropsychological and power spectral EEG investigations of the obsessive-compulsive syndrome". Biological Psychiatry 14: 119-129.
EEG patterns, CT scans and other measures of neural activity correlated with hysterical conditions.

Fodor, Jerry. 1975. The Language of Thought. Cambridge: Harvard University Press.
Develops language of thought hypothesis, or "mentalese". On this account, thinking is a process in which brain states interact computationally by means of their physical structures. The result is thought that is purely syntactic in its mechanism, but that has a semantic aspect in that its "sentences" are isomorphic with the sentences of natural language, which refer to the world. It is thus possible to say that mental states are intentional (have meaning) in the way that natural language sentences are intentional. Just as one can say "Scientists are searching for Bigfoot" without implying the existence of Bigfoot, one can think about Bigfoot in the "language of thought" without implying the creature's existence. In this way psychology merges with philosophy of language. Classic statement of the `language of thought' explanation of the way information is processed.

Fodor, Jerry. 1981. RePresentations: Philosophical Essays on the Foundations of Cognitive Science. Cambridge: The MIT Press.
Further development of the `language of thought' approach.

Fodor, Jerry. 1981b. "The mind-body problem". Scientific American 244: 114-123.
Extremely functionalistic account of the `mental'. E.g., see his definition of a headache as "[That which] causes a disposition for taking aspirin in people who believe aspirin relieves a headache, causes a desire to rid oneself of the pain one is feeling, often causes someone who speaks English to say such things as `I have a headache,' and is brought on by overwork, eye-strain and tension (p. 118)."

Fodor, Jerry. 1983. The Modularity of Mind. Cambridge: The MIT Press.
Theorizes that the mind functions by means of modules which differ not by being spatially or structurally distinct, but by forming different patterns of behavior in possibly identical physiological substrates.

Fodor, J. 1987. Psychosemantics. Cambridge: MIT Press.

Fodor, Jerry and Zenon Pylyshyn. 1988. "Connectionism and cognitive architecture: A critical analysis". Cognition 28: 3-71.
Classic statement of the idea that thoughts have syntactic structure and allow for a representational system with compositional semantics, so that thoughts essentially constitute representations of external realities by means of sentences in a special `language of thought' not to be confused with thoughts expressed in a verbal language.
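
A small sketch of what a syntactically structured representation with compositional semantics might look like in code, on our reading of the systematicity point (hypothetical names, not Fodor and Pylyshyn's formalism): the same constituents recombine lawfully, so a system able to token 'John loves Mary' can also token 'Mary loves John', and the content of each thought is a function of its constituents and their structure.

    # Sketch of a compositional, syntactically structured representation scheme.
    # Hypothetical names; illustrates the systematicity point, not a specific formalism.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Thought:
        relation: str
        agent: str
        patient: str

        def content(self):
            # The semantic value is a function of the constituents and their structure.
            return f"{self.agent} {self.relation} {self.patient}"

    t1 = Thought("loves", "John", "Mary")
    t2 = Thought("loves", "Mary", "John")    # same constituents, different structure
    print(t1.content(), "|", t2.content())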

Foulkes, William D. 1985. Dreaming: A Cognitive-psychological Analysis. Hillsdale, N.J.: Erlbaum.
Shows that sleep reverses the direction of information processing: In waking life, we receive (afferent, posterior) sensory input, and the (efferent, anterior) pre-perceptual `inner experience' (Aurell 1989) interprets and transforms the input. In dreams, we begin with `inner experience' — emotion, motivations, feelings, etc., then transform them into images which are so vivid that they force the posterior system (especially the occipital lobe) — which would normally be a receiving system — to resonate with the images generated from our `inner experience.' It therefore seems as though we were actually `looking at' the object of the image.

Fox, Elaine. 1994. "Attentional bias in anxiety: A defective inhibition hypothesis". Cognition and Emotion 8: 165-195.
Empirical and theoretical discussion of the way anxiety creates too strong an attentional gating process for cognitive processing to be efficient.

Fox, P., Peterson, S., Posner, M., and Raichle, M. 1988. "Is Broca's area language specific?" Neurology 38 (Supplement 1): 172.
PET scanning studies indicating that Broca's area is associated with the motor cortex in mouth, tongue and hand movements, and that it "lit up" when subjects merely imagined hand movements, without carrying them out. (This is similar to the findings of Roland et al. (1980) regarding the SMA.)

Freud, Sigmund. 1959. Beyond the Pleasure Principle. New York: Bantam.
Rejects his earlier theory that electrostatic reduction is the ultimate driving force of neural behavior.

Fuchs, W. 1922. "Eine Pseudofovea bei Hemianopikern," Psychologische Forschung.
Finds that colors are still perceived after hemianopsia, which would be impossible unless the eye extensively reorganized the functions of its constituents, thus showing the flexibility of the nervous system in reorganizing itself to maximize function, as opposed to being predetermined by anatomical structure.

Fuentes, L.J., Carmona, E., and Agis, I. 1994. "The role of the anterior attentional system in semantic processing of both foveal and parafoveal words." Journal of Cognitive Neuroscience 6, 1, 17-25.

Fuster, Joaquin. 1980. The Prefrontal Cortex. New York: Raven Press.
Classic statement of the view that the prefrontal cortex is the `executive' of conscious processes: "The most crucial constituents are the attentive acts that `palpate' the environment in search of significant clues, the intentional and elaborate movements, the continuous updating of relevant information to a cognitive scheme of the overall structure and its goal. The prefrontal cortex not only provides the substrate for these operations but imparts to them their active quality. In that active role . . . one may find some justification of the often espoused view of the prefrontal cortex — `the executive of the brain' (128)."

G - I