In part one, I examined two recent frameworks for understanding ideology (Jost and Martin) and explained how both serve as alternatives to good old-fashioned ideology theory (GOFIT). Ultimately, I concluded that Martin’s (2015) model has specific advantages over Jost’s (2006) model, though the connection between ideology and “practical mastery of ideologically-relevant social relations” needs to be fleshed out. This is particularly true because any strong concentration on social relations seems to preclude serious attention to cognition, and without that attention the argument is vulnerable to charges of reductionism.
In this post, I sketch a model of cognition that checks the boxes of GOFIT ideology: distorting, invested with power, supportive of unequal social relations. But it differs for reasons I specify below. To do this, I draw on a famous line of experiments in neuroscience—Michael Gazzaniga’s “split-brain” research—and construct an analogy between it and a possible non-GOFIT ideology.
Galanter, Gerstenhaber … and Geertz
But before doing that, it seems reasonable to ask about the purpose of even attempting a non-GOFIT ideology. Is GOFIT a strawman? Why is it problematic? To answer these questions, and to indicate why a holistic revision of ideology away from GOFIT seems to be in order, consider Clifford Geertz and his essay (1973) “Ideology as a cultural system,” which presents what is to date arguably the most influential, non-Marxist approach to ideology in the social sciences. Geertz’s burden is to make ideology relevant by providing it with a “nonevaluative” form. And the way he does this, using modular or computational cognition, is what I want to focus on.
Ideology here is not tantamount to oversimplified, inaccurate, “fake news”-style distortion that is, above all and categorically, what science is not. But if it is not to be censured like this, then for Geertz ideology must be a symbolic phenomenon that has something to do with how “symbolic systems” make meaning in the world and, in turn, serve to guide action (e.g. “models of, models for”). To make this argument, he does, in fact, make ideology cognitive by drawing from a psychological model: Eugene Galanter and Murray Gerstenhaber’s (1956) “On Thought: The Extrinsic Theory.”
As Geertz summarizes:
thought consists of the construction and manipulation of symbol systems, which are employed as models of other systems, physical, organic, social, psychological, and so forth, in such a way that the structure of these other systems–and, in the favorable case, how they may therefore be expected to behave–is, as we say, “understood.” Thinking, conceptualization, formulation, comprehension, understanding, or what-have-you, consists not of ghostly happenings in the head but of a matching of the states and processes of symbolic models against the states and processes of the wider world … (214)
Geertz returns to this same argument in arguably his most thorough approach to the culture concept (“The Growth of Culture and the Evolution of Mind”). Importantly, there too he does not conceive of culture or symbols absent a psychological referent, which he consistently draws from Galanter and Gerstenhaber.
Whatever their other differences, both so-called cognitive and so-called expressive symbols or symbol-systems have, then, at least one thing in common: they are extrinsic sources of information in terms of which human life can be patterned–extrapersonal mechanisms for the perception, understanding, judgment, and manipulation of the world. Culture patterns–religious, philosophical, aesthetic, scientific, ideological–are “programs”; they provide a template or blueprint for the organization of social and psychological processes, much as genetic systems provide such a template for the organization of organic processes (Geertz, 216)
How does this apply to ideology? It makes ideology a symbolic system for building an internal model. Geertz is distinctively not anti-psychological here but instead seems to double down on the “extrinsic theory of thought” to define culture as a symbol system through which agents construct models of and for some system out in the world, effectively programming their response to that system. Ideology refers to the symbol system that does this for the political system:
The function of ideology is to make an autonomous politics possible by providing the authoritative concepts that render it meaningful, the suasive images by means of which it can be sensibly grasped … Whatever else ideologies may be–projections of unacknowledged fears, disguises for ulterior motives, phatic expressions of group solidarity–they are, most distinctively, maps of problematic social reality and matrices for the creation of collective conscience (Geertz, 218, 220)
Geertz mentions the example of the Taft-Hartley Act (restricting labor unionizing), which carried the ideological label “slave labor act.” He emphasizes that ideology works according to how well or how poorly the model (“slave labor act”) “symbolically coerces … the discordant meanings [of its object] into a unitary conceptual framework” (210-211).
If GOFIT is a set of assumptions widely held about ideology, then we probably find little to disagree with in Geertz’s argument, at least at first glance. Much of it should ring true. If we object to anything, it might be the heavy-handed language Geertz uses that evokes modular or computational cognition (e.g. “programs”). But maybe Geertz himself is not responsible for this. His sources, Galanter and Gerstenhaber, were explicit in making these assumptions about cognition, and this, I want to argue, is important for a specific reason.
To Galanter and Gerstenhaber, “model” clearly meant the sort of three-dimensional scale models that scientists construct in order to understand large-scale physical phenomena. In this sense, they solved the “problem of human thinking” by defining it as a lesser version of idealized scientific thinking. And they were not alone in that pursuit. At least initially, cognition was presented as antithetical to behaviorism in psychology by allying itself with resources that were quite deliberate and quite reflexive: “[mid-century] cognitive scientists … looked for human nature by holding an image of what they were looking for in their [own] minds. The image they held was none other than their own self-image … ‘good academic thinking’ [became the] model of human thinking” (Cohen-Cole 2005).
This is not only the context for Geertz’s theory of ideology. His understanding of “symbol systems” writ large cannot be separated from this specific gloss on, and extension of, “good academic thinking.” For our purposes, this should raise the question of whether using symbol systems to form internal models of the external world, and then manipulating and creatively construing those models as “symbolic action,” should be the template for defining ideology on nonevaluative grounds, that is to say, for defining ideology in the way Geertz himself does: as cognitive.
Ideology and the Split-Brain
What I will try to do now, after this long preamble, is sketch a different possible cognitive basis for a theory of ideology, one that I think is compatible with Martin’s (2015) field-theoretic approach to ideology discussed in part one of this post. It develops a cognitive interpretation of what “practical mastery of ideologically-relevant social relations” might mean. It also situates Marx as the contrary of Geertz by making social relations a necessary condition for ideology as a cognitive phenomenon, not something that needs to be bracketed (or pigeonholed as “strain” or “interest”) for ideology to be cognitive.
This different basis is Gazzaniga’s research (1967; 1998; Gazzaniga and LeDoux 1978) on the split-brain and the process of confabulating meaning on the basis of incomplete visual input. It is important to mention that I use the split-brain as an analogue (in “good academic thinking” terms) to convey what ideology might mean as a cognitive phenomenon if it is not a symbol system. I do not imply that ideology requires a split-brain as a physical input.
For Gazzaniga, the two sides of the brain effectively constituted two separate spheres of consciousness, but this could only be truly appreciated when the corpus callosum was severed (once a surgical treatment for severe epilepsy) and the two sides of the brain were rendered independent of each other. When this happened, the visual field was bisected: the brain stopped integrating the information coming through the right and left visual fields (hereafter RVF and LVF). What was observable in the RVF was received independently of what was observable in the LVF. As Gazzaniga found, the brain is multi-modal. The left hemisphere is the center of language about visual input. So when a word or image was flashed to the RVF and the information was received by the left hemisphere, the patient could provide an accurate report. When a word or image was flashed to the LVF, the patient could only confabulate, because the non-integrated brain could not combine the visual information with the language functions of the left hemisphere. The split-brain patient effectively “didn’t see anything,” even though she could still connect visual cues to related pictures on command.
When visual information is presented to a split-brain, the mystery is how the verbal left hemisphere attempts to make sense of what the non-verbal right hemisphere is doing. This is the recipe for confabulations or “false memories” as Gazzaniga (1998) puts it, because here we witness the effects of the “interpreter mechanism.”
Thus, when the RVF and LVF of a split-brain patient were shown pictures of a house in the snow and a chicken’s claw, and the patient was asked to point to relevant pictures based on these visual cues, she pointed to a snow shovel and a chicken head respectively. Here is the interesting part:
the right hemisphere—that is, the left hand—correctly picked the shovel for the snowstorm; the right hand, controlled by the left hemisphere, correctly picked the chicken to go with the bird’s foot. Then we asked the patient why the left hand— or right hemisphere—was pointing to the shovel. Because only the left hemisphere retains the ability to talk, it answered. But because it could not know why the right hemisphere was doing what it was doing, it made up a story about what it could see—namely, the chicken. It said the right hemisphere chose the shovel to clean out a chicken shed (Gazzaniga 1998: 53; emphasis added).
“It made up a story” refers here to the verbal left hemisphere attempting to make sense of why the right hemisphere had been directed toward a shovel. The right hemisphere, to which the picture was flashed, lacked any narrative ability, and yet the split-brain patient could still point at a relevant image even though this did not “pass through” language.
The argument here is that this serves as a good analogue for a theory of ideology that does not make computational or modular commitments. The important point is that confabulation is not just some made-up story, but what the split-brain patient believes because her brain has filled in the blank (e.g. “I chose the shovel because I need to shovel out the chicken coop”). Ideology as a cognitive phenomenon does not, in this sense, mean programming the political system according to an extrinsic symbol system; in other words, it does not mean building an internal (three-dimensional) model of that system and drawing entailments from it, as any good scientist would do. To be “in ideology” means filling in the blank as the normal way of cognitively coping with disconnected inputs, some with a “phonological representation,” others that are “nonspeaking.”
The Split-Brain and Social Relations
We can theorize that where practical mastery of social relations becomes important, in particular, social relations that are “ideologically-relevant,” it is because they generate an equivalent of a split-brain effect and its “interpreter mechanism.” In social relations arranged as fields, practical mastery consists of the “felt motivation of impulsion … to attach impulsion … to positions … [and have] the ethical or imperative nature of such motivations [be] akin to a social object, external and (locally) intersubjectively valid, that is, valid conditional on position and history” (Martin 2011: 312).
Fields refer to one type of social relation conducive to ideological effects, particularly if they are organized on quasi-Schmittian grounds of opponents and allies (Martin 2015). Marx is clear that other types of social relation (like capital) are specifically resistant to influence by any sort of cognitive mediation. Still, he achieves some understanding of those social relations by examining their “being thought … [through] abstractions” (see Marx 1973: 143). For instance, the commodity fetish can be seen as analogous to a split-brain effect: the “social relation between things” is an LVF interpretation, while the “social relation between people” is equivalent to an RVF input. A split-brain is an analogue of mental structures that correspond to these objective (social) structures.
Taking the split-brain (not the “extrinsic theory”) as the basis for ideology as a (non-GOFIT) cognitive phenomenon, then, we can speculate that only certain social relations (fields, capital) have an ideological effect. They have this effect because they generate a split-brain scenario with disconnected inputs. Agents are subject to social relations to which they do not have direct access (RVF). They fill in the blank of the effect of those inputs through “abstractions,” i.e. explicit endorsements or propositional attitudes that take linguistic form, often mistaken on their own terms as ideology (LVF).
To be continued … [note: Zizek (2017: 119ff) also finds the split-brain useful for thinking about ideology, though his argument confounds and mystifies with Pokémon Go]
Cohen-Cole, Jamie. (2005). “The Reflexivity of Cognitive Science: The Scientist as a Model of Human Nature.” History of the Human Sciences 18: 107-139.
Galanter, Eugene and Murray Gerstenhaber. (1956). “On Thought: The Extrinsic Theory.” Psychological Review 63: 218-227.
Gazzaniga, Michael. (1967). “The Split-Brain in Man.” Scientific American 217: 24-29.
_____. (1998). “The Split-Brain Revisited.” Scientific American 279: 51-55.
Gazzaniga, Michael and Joseph LeDoux. (1978). The Integrated Mind. Plenum Press.
Geertz, Clifford. (1973). “Ideology as a Cultural System.” in Interpretation of Cultures.
Jost, John. (2006). “The End of the End of Ideology.” American Psychologist 61: 651-670.
Martin, John Levi. (2015). “What is Ideology?” Sociologica 77: 9-31.
_____. (2011). The Explanation of Social Action. Oxford University Press.
Marx, Karl. (1973). The Grundrisse. Penguin.
Zizek, Slavoj. (2017). Incontinence of the Void. MIT Press.