When is Consciousness Learned?


Continuing with the theme of innateness and durability from my last post, consider the question: are humans born with consciousness? In a ground-breaking (and highly contested) work, the psychologist Julian Jaynes argued that if consciousness is unique to humans, it must have emerged at some point in human history. In other words, consciousness is a socially and culturally acquired skill (Williams 2011).

To summarize his argument: he contends that until as recently as the Bronze Age (the third millennium BCE), humans were not, strictly speaking, conscious. Rather, humans experienced life in a proto-conscious state he refers to as “bicameralism.” Roughly around the “Axial Age” (cf. Mullins et al. 2018), bicameral humans declined and conscious, “unicameral” humans emerged.

One piece of evidence he deploys in support of his thesis is that the content of the Homeric poem the Iliad is substantially different from that of the later Odyssey. The former, he argues, is devoid of references to introspection, while the latter does contain them. Jaynes argues a similar pattern emerges between earlier and later books of the Christian Bible. In a recent attempt (see also Raskovsky et al. 2010) to test this specific hypothesis quantitatively, Diuk et al. (2012) use Latent Semantic Analysis to calculate the semantic distances between the reference word “introspection” and all other words in a text. Remarkably, their findings are consistent with Jaynes’ argument (see also: http://www.julianjaynes.org/evidence_summary.php).

From Diuk et al. (2012): “Introspection in the cultural record of the Judeo-Christian tradition. The New Testament as a single document shows a significant increase over the Old Testament, while the writings of St. Augustine of Hippo are even more introspective. Inset: regardless of the actual dating, both the Old and New Testaments show a marked structure along the canonical organization of the books, and a significant positive increase in introspection.”
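The core of the LSA technique Diuk et al. employ can be sketched in a few lines. To be clear, this is a toy illustration of the general method, not their actual pipeline: the mini-corpus, the reference word, and the number of latent dimensions below are all made up for the example.

```python
import numpy as np

# Toy sketch of Latent Semantic Analysis: build a term-document count
# matrix, take a truncated SVD, and compare word vectors by cosine
# similarity. The documents and parameters are illustrative only.
docs = [
    "achilles raged and fought in battle",
    "odysseus pondered and reflected on his thoughts",
    "the hero reflected on his own mind and thoughts",
    "soldiers fought and raged in battle",
]
vocab = sorted({w for d in docs for w in d.split()})
idx = {w: i for i, w in enumerate(vocab)}

# term-document matrix of raw counts (rows: words, columns: documents)
X = np.zeros((len(vocab), len(docs)))
for j, d in enumerate(docs):
    for w in d.split():
        X[idx[w], j] += 1

# truncated SVD: keep k latent "semantic" dimensions;
# each word's vector is the corresponding row of U scaled by S
U, S, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
word_vecs = U[:, :k] * S[:k]

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# semantic similarity of every word to a chosen reference word
ref = word_vecs[idx["reflected"]]
sims = {w: cosine(word_vecs[idx[w]], ref) for w in vocab}
# words sharing contexts with "reflected" (e.g. "thoughts") end up
# closer in the latent space than words from the battle documents
```

Diuk et al. compute this kind of distance between “introspection” and the rest of the vocabulary across successively dated texts; the toy version just shows why words that co-occur in similar contexts end up close together in the latent space.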

Is Consciousness Learned in Childhood?

If consciousness, as Jaynes argued, is a product of social and cultural development, does this also mean that we each must “learn” to be conscious? Some contemporary research suggests something like this might be the case.

To begin we need a simple definition: consciousness is our “awareness of our awareness” (sometimes called metacognition). A problem with considering the extent of our conscious awareness is the normative baggage associated with “not being conscious.” For the folk, it is somewhat insulting to say people are “mindlessly” doing something, and we tend to value “self-reflection.” Certainly this is a generalization, but let’s bracket the notion that non-conscious experience is somehow less good than being conscious. The bulk of what the brain does is below the level of our awareness. For starters, when we are asleep, under general anesthesia, or even in a coma, the brain continues to be quite active. Moving to our waking lives, the kinds of skills and habits that Giddens (1979) confusingly calls “practical consciousness” are deployed at a speed that outstrips our ability to be aware that they are happening until after the fact. The kind of skillful execution associated with athletes and artists, for instance, is often associated with Csikszentmihalyi’s “flow” precisely because there is a “letting go” and letting the situation take over. All this is to say we are conscious far less than we probably think. Indeed, asking us when we are not conscious (Jaynes 1976:23):

…is like asking a flashlight in a dark room to search around for something that does not have any light shining upon it. The flashlight, since there is light in whatever direction it turns, would have to conclude that there is light everywhere. And so consciousness can seem to pervade all mentality when actually it does not.

A second major confusion is the assumption that consciousness is how humans learn ideas or form concepts. As we discuss elsewhere (Lizardo et al. 2016), memory systems are multiple, and while we learn via conscious processes, the bulk of what we learn is via non-conscious processes in “nondeclarative” memory systems (Lizardo 2017). This is especially the case for the most basic concepts we learn from infancy onward. In fact, Durkheim’s argument that it is through ritual—embodied experience—that so-called “primitive” groups learned the “basic categories of the understanding” more or less prefigures this point (Rawls 2001).

Rather than the experience-near engagement associated with everyday life, consciousness involves introspection and the “time traveling” associated both with reconstructing our own biographies from memory and with imagining possible (and impossible) futures. A recent school of thought in cognitive science—referred to as “enactivism”—takes a rather radical approach in arguing that the vast majority of human cognition is not, strictly speaking, contentful (Hutto and Myin 2012, 2017). Indeed, a lot of “remembering” does “not require representing any specific past happening or happenings… remembering is a matter of reenactment that does not involve representation” (Hutto and Myin 2017:205). But what about the autobiographical remembering involved in introspection and self-reflection, which we might consider the hallmark of consciousness?

To answer this — within the broader enactivist project — they draw on a group of scholars who argue that autobiographical memory is “a product of innumerable social experiences in cultural space that provide for the developmental differentiation of the sense of a unique self from that of undifferentiated personal experience” (Nelson and Fivush 2004:507). These scholars find that “a specific kind of memory emerges at the end of pre-school period” (Nelson 2009:185). Such a theory offers a plausible explanation for “infantile amnesia” — the inability to recall events prior to about age three or four — an explanation much less ridiculous than Freud’s contention that these memories were repressed so as to “screen from each one the beginnings of one’s own sex life.”

These theorists go on to argue that “a new form of social skill” is associated with this “new type of memory” (Hoerl 2007:630). This skill is “narrating” one’s experience. Parents’ reminiscing with children plays a central role in the acquisition of this skill (Nelson and Fivush 2004:500):

…parental narratives make an important contribution to the young child’s concept of the personal past. Talking about experienced events with parents who incorporate the child’s fragments into narratives of the past not only provides a way of organizing memory for future recall but also provides the scaffold for understanding the order and specific locations of personal time, the essential basis for autobiographical memory.

Returning to Jaynes, we find a remarkably analogous description of the emergence of consciousness as the “development on the basis of linguistic metaphors of an operation of space in which an ‘I’ could narratize out alternative actions to their consequences” (Jaynes 1976:236). That is, we could assert, consciousness is this social skill emerging from the (embodied and social) practice of reminiscing with parents and classmates (or the like) when we are around three years old.

REFERENCES

Diuk, Carlos G., D. Fernandez Slezak, I. Raskovsky, M. Sigman, and G. A. Cecchi. 2012. “A Quantitative Philology of Introspection.” Frontiers in Integrative Neuroscience 6:80.

Giddens, Anthony. 1979. Central Problems in Social Theory. Berkeley: University of California Press.

Hoerl, Christoph. 2007. “Episodic Memory, Autobiographical Memory, Narrative: On Three Key Notions in Current Approaches to Memory Development.” Philosophical Psychology 20(5):621–40.

Hutto, Daniel D. and Erik Myin. 2012. Radicalizing Enactivism: Basic Minds without Content. MIT Press.

Hutto, Daniel D. and Erik Myin. 2017. Evolving Enactivism: Basic Minds Meet Content. MIT Press.

Jaynes, Julian. 1976. The Origin of Consciousness in the Breakdown of the Bicameral Mind. Boston: Houghton Mifflin.

Lizardo, Omar. 2017. “Improving Cultural Analysis: Considering Personal Culture in Its Declarative and Nondeclarative Modes.” American Sociological Review 82(1):88–115.

Lizardo, Omar, Robert Mowry, Brandon Sepulvado, Dustin S. Stoltz, Marshall A. Taylor, Justin Van Ness, and Michael Wood. 2016. “What Are Dual Process Models? Implications for Cultural Analysis in Sociology.” Sociological Theory 34(4):287–310.

Mullins, Daniel Austin, Daniel Hoyer, Christina Collins, Thomas Currie, Kevin Feeney, Pieter François, Patrick E. Savage, Harvey Whitehouse, and Peter Turchin. 2018. “A Systematic Assessment of ‘Axial Age’ Proposals Using Global Comparative Historical Evidence.” American Sociological Review 83(3):596–626.

Nelson, Katherine. 2009. Young Minds in Social Worlds: Experience, Meaning, and Memory. Harvard University Press.

Nelson, Katherine and Robyn Fivush. 2004. “The Emergence of Autobiographical Memory: A Social Cultural Developmental Theory.” Psychological Review 111(2):486–511.

Raskovsky, I., D. Fernández Slezak, C. G. Diuk, and G. A. Cecchi. 2010. “The Emergence of the Modern Concept of Introspection: A Quantitative Linguistic Analysis.” Pp. 68–75 in Proceedings of the NAACL HLT 2010 Young Investigators Workshop on Computational Approaches to Languages of the Americas, YIWCALA ’10. Stroudsburg, PA, USA: Association for Computational Linguistics.

Rawls, Anne Warfield. 2001. “Durkheim’s Treatment of Practice: Concrete Practice vs Representations as the Foundation of Reason.” Journal of Classical Sociology 1(1):33–68.

Williams, Gary. 2011. “What Is It like to Be Nonconscious? A Defense of Julian Jaynes.” Phenomenology and the Cognitive Sciences 10(2):217–39.

Cultural Cognition in Time, from Memory to Imagination

Over the past few years, I have been thinking about the concept of imagination. It emerged out of my efforts to understand the generational change in public opinion about same-sex marriage in the U.S., when it became clear to me that young and old simply imagined homosexuality and same-sex marriage in different ways [see also three essential readings on the imagination: (Appadurai 1996; Orgad 2012; Strauss 2006)]. It wasn’t that the two cohorts disagreed about the issue; it was that they couldn’t even understand each other. I realized that the imagination represents an implicit domain of political cognition that by and large goes unrecognized and unacknowledged by people when they talk to each other, while nonetheless structuring the debate in a way that is similar to framing.[1] I published my initial argument here (No paywall!), and have elaborated on this theory of imagination in my recent book (Definitely paywall!).

One thing that sets my view of the imagination apart from the ways that some other social scientists invoke the concept is that I see an important connection with the concept of collective memory. In many usages (e.g. Castoriadis 1987; Taylor 2002), the idea of the social imagination or the social imaginary is so broad that it most closely approximates the concept of culture—that incomprehensible whole that signifies everything and nothing all at the same time (Strauss makes this critique effectively). By contrast, I think the argument that Olick (1999) makes for collective memory fits well with Strauss’ critique of the social imaginary: we need a dual, individualist-collectivist theory of the imagination, one that anchors the cultural and cognitive versions of the concept in each other. Simply put, minds imagine things just like minds remember things, but the resources and the effects of imagination and memory are cultural and social.

Certainly, the cognitive process of remembering is distinguished in part by its retrospective temporal horizon, and in the empirical work of many sociologists (Baiocchi et al. 2014; Perrin 2006), the imagination’s temporal horizon is future-oriented: actions that we could take to solve a problem, or visions of a better society. Thus, it makes some sense (from a phenomenological perspective, at least) that we can think of collective memory and the social imagination as cultural-cognitive processes that occupy different spots on a temporal continuum.

However, I’d like to make the case that the social imagination is not just future-oriented, but present-oriented. I will also make the case that collective memory may be fruitfully theorized as the past-oriented variant of the social imagination. The ultimate goal of this essay is to persuade sociologists that the imagination is something of a master cultural-cognitive process, with variants that correspond to different phenomenological time horizons, and that is influenced by positive and negative socio-emotional forces.

In purely psychological terms, the imagination is the mind’s capacity to construct a mental image of a non-present phenomenon. Whether past-, present-, or future-oriented, and whether the imagined entity is real (horse) or unreal (unicorn), the cognitive process is essentially the same. Sociologically speaking, however, different imaginations have different effects: individuals’ imaginations of stereotypical and counter-stereotypical people will either reinforce or attenuate prejudicial attitudes and implicit biases (Blair, Ma and Lenton 2001; Slusher and Anderson 1987). Thus, there are political consequences to people’s imaginations: cultivating one’s capacity to produce (and act on) counter-stereotypic mental images may be an effective strategy for combatting implicit racism, sexism, and other forms of enduring prejudice.

As a sociological process, the social imagination is the process that shapes the patterns of associations that define cultural schemas, or the cultural content of a schema. In other words, the social imagination is the cultural-cognitive process that governs the creation, maintenance, and deconstruction of stereotypes, prototypes, categories, and concepts of all kinds. Certainly, other (material, structural, political, whatever) factors are involved in this process, too—like oppression, socialization, etc.—but the social imagination is the culture-cognition nexus. As Orgad (2012) shows, the mass media are one of the most critical institutions involved in contests of the social imagination. In this view, media consumption improves, not reduces, our capacity to imagine because it provides us with many of the fundamental resources for producing mental images. If we combine this understanding of the social imagination with the psychological research described above, we can explain why stereotypical and counter-stereotypical media representations are so important: media representations can create, maintain, change, or destroy the cultural associations that define different groups of people in the public mind.

As far as I’ve read, Glaeser’s (2011) Political Epistemics is one of the master treatises on the social imagination, though he doesn’t put it in those terms. Glaeser uses “understanding” to refer to this realm of cultural-cognition, and he uses the term to refer to both the process and its outcome. On page 10, Glaeser begins his definition of understanding by characterizing it as a process: “Understanding is a process of orientation…”; however, one page earlier, Glaeser writes of it as an achievement, or outcome: “understanding is achieved in a process of orientation…” My own view is that the imagination is this process of orientation that produces understandings. This follows Kant (1929), who, in Critique of Pure Reason, argues that the “transcendental power of imagination” is the fundamental synthetic capacity of mind that combines perception and the cultural categories of understanding, thus structuring all human knowledge and experience.

If we keep this Kantian philosophy of the imagination at the center of our thinking, we might also conceive of memory as another species of imagination: one in which the original sensory perception took place in some bygone time and which is continually brought to life in mental images in the present by synthesizing those past perceptions with current mental structures (hence, the well-known power of our memories to change over time and for our present biography, self-identity, and social context to shape our memories into something other than what actually happened).

In sum, the imagination can be future-oriented (our ability to imagine possible future actions or solutions to social problems), present-oriented (our schemas, stereotypes, and understandings), or past-oriented (our memories).

Beyond distinguishing these three different forms of imagination, as classified by their temporal horizon, we should differentiate between real and fantastical variants of each. Since a simple distinction between real/correct and unreal/incorrect versions of a mental image is philosophically untenable (even impossible, in the case of future-oriented mental images—things that have not yet occurred), I would argue that any given mental image should be conceived as existing on a continuum, whose polar ends represent ideal-typical, emotion-driven fantasies that “pull” our imagination in either direction. In this rendering, the ideal-typical end points are the only points on the continuum that could be labeled as the purely unreal; actually existing mental images would fall somewhere along the continuum, their degree of “realness” variable and relationally determined.

The point of establishing this continuum is not to determine whether one imagined mental image is more correct than another in some absolute sense, but rather to begin to discern the socio-emotional forces that are inevitably involved in the process of imagination and the sociological consequences of producing various kinds of mental images. For example, any explanation of the prevalence of handgun ownership and attitudes about gun rights in the U.S. must certainly take into account the fear-driven imagination that a criminal who is waiting to rob and murder you is hiding behind every corn stalk in the state of Iowa. Whether past-, present-, or future-oriented, our mental images of reality are constructed within a socio-emotional landscape; as social scientists, it behooves us to think seriously about those landscapes, how they affect our imaginations, and how social action ultimately makes sense to the actors who imagine the world as they do.

Thus, we have three different continuums for the social imagination—one for each temporal horizon—in which mental images are constructed. The mental image’s location on the continuum is influenced by the extent to which positive and negative emotional circumstances influence the process of imagination.

Future-Oriented Imagination: The Domain of Possibility


Let’s take the domain of future-oriented imagination first: the domain of possibility. The social imagination of the possible is inevitably informed by the emotions of fear and hope and situated in relation to social conditions of dystopia and utopia. Karen Cerulo (2008) has already written on the cognitive and cultural dynamics of this domain. Another notable example of the sociology of possibility is Erik Olin Wright’s “Real Utopias” research program (e.g., Wright 2013), which promises a sociology of liberation if we take it seriously.

Present-Oriented Imagination: The Domain of Understanding


The social imagination of the present happens in the domain of understanding. As mentioned above, Glaeser’s Political Epistemics is the essential read on how processes of validation reinforce and challenge existing understandings. Glaeser labels these types of validation as recognition, resonance, and corroboration. In addition to being cognitive, cultural, and social in nature, they are also emotional. The present-oriented process of imagination is anchored by two fantastical emotional tendencies: the extreme cynical denial of reality that we might call delusion, and the extreme Pollyannaish denial of reality that we might call naiveté. All understandings and misunderstandings can be conceived in terms of their socio-emotional tenor, as well as in their cognitive, cultural, and social terms.

Past-Oriented Imagination: The Domain of Memory


Finally, turning to the domain of memory, our imaginary reconstructions of past events are influenced by the socio-emotional poles of denial of the negative and romanticization of the positive. The unreal social recollections driven by these emotions are those of erasure and nostalgia: in its extreme forms, collective memory has the potential to totally eliminate the past or construct a fantasy past that never existed. One classic sociological illustration of the importance of nostalgia is, of course, Stephanie Coontz’s The Way We Never Were (1992); this example shows clearly how the romanticization of the past is not purely cognitive or cultural, but structured by institutional power relations like those that reinforce patriarchy. In a parallel (maybe mutually constitutive) way, structures of oppression contribute to the ongoing erasure of women, people of color, and the working class from history, in part because of how the socio-emotional consequences of these structures lead us to produce distorted imaginations of the past.

Obviously, these are just simple thumbnail sketches, but I believe that understanding the social imagination in its various temporal horizons is important, not just for explaining social action (in the interpretive, symbolic interactionist vein) but also for creating social change. Positive and negative emotions are powerful forces, and the terms on which people produce their imaginations of the world will also affect how they act in that world. As the old idea of cognitive liberation (McAdam 1982) implies, how we imagine the world can determine whether we mobilize for justice or surrender to despair. The social imagination is very much like other social institutions; it is a cultural entity in which past, present, and future intersect. Sociology should devote some attention to this institution as it does to the others.

References

Appadurai, Arjun. 1996. Modernity at Large: Cultural Dimensions of Globalization. Minneapolis, MN: University of Minnesota Press.

Baiocchi, Gianpaolo, Elizabeth A. Bennett, Alissa Cordner, Peter Taylor Klein, and Stephanie Savell. 2014. The Civic Imagination: Making a Difference in American Political Life. Boulder, CO: Paradigm Publishers.

Blair, Irene V., Jennifer E. Ma, and Alison P. Lenton. 2001. “Imagining Stereotypes Away: The Moderation of Implicit Stereotypes through Mental Imagery.” Journal of Personality and Social Psychology, 81 (5): 828-841.

Castoriadis, Cornelius. 1987. The Imaginary Institution of Society. Cambridge, MA: MIT Press.

Cerulo, Karen A. 2008. Never Saw it Coming: Cultural Challenges to Envisioning the Worst. Chicago: University of Chicago Press.

Coontz, Stephanie. 1992. The Way We Never Were: American Families and the Nostalgia Trap. New York: Basic Books.

Glaeser, Andreas. 2011. Political Epistemics: The Secret Police, the Opposition, and the End of East German Socialism. Chicago: University of Chicago Press.

Kant, Immanuel. 1929. Critique of Pure Reason. New York: St. Martin’s Press.

McAdam, Doug. 1982. Political Process and the Development of Black Insurgency, 1930-1970. Chicago: University of Chicago Press.

Nelson, Thomas E., Rosalee A. Clawson, and Zoe M. Oxley. 1997. “Media Framing of a Civil Liberties Conflict and its Effects on Tolerance.” American Political Science Review, 91 (3): 567-583.

Olick, Jeffrey K. 1999. “Collective Memory: The Two Cultures.” Sociological Theory, 17 (3): 333-348.

Orgad, Shani. 2012. Media Representation and the Global Imagination. Malden, MA: Polity Press.

Perrin, Andrew J. 2006. Citizen Speak: The Democratic Imagination in American Life. Chicago: University of Chicago Press.

Slusher, Morgan P., and Craig A. Anderson. 1987. “When Reality Monitoring Fails: The Role of Imagination in Stereotype Maintenance.” Journal of Personality and Social Psychology, 52 (4): 653-662.

Strauss, Claudia. 2006. “The Imaginary.” Anthropological Theory, 6 (3): 322-344.

Taylor, Charles. 2002. “Modern Social Imaginaries.” Public Culture, 14 (1): 91-124.

Wright, Erik Olin. 2013. “Transforming Capitalism Through Real Utopias.” American Sociological Review, 78 (1): 1-25.


[1] Framing and imagination are different concepts, and it is important to distinguish between them. Framing is a communicative process with cognitive effects, while the imagination is fundamentally a cognitive process, albeit with cultural influences. Setting that difference aside, though, and focusing purely on the sociological level of each concept, the social imagination is the process that shapes the pattern of associations that define cultural schemas, while framing is the process that shapes explicit cognition (for more on how framing works through deliberate, rather than automatic, processing, see Nelson, Clawson, and Oxley 1997).

Embodied knowledge vs. flesh and blood

As DiMaggio (1997) originally noted, most sociological theories of action make assumptions about the nature of cognition even as they dismiss any explicit discussion of cognition in favor of “social” explanation. Thinking about how culture comes to be taken up by the mechanisms of cognition and how it influences action through those mechanisms would, theoretically, address deficits in sociological theories of action and, at the same time, correct the bias towards extreme individualism that pervaded the cognitive sciences from the 1950s to the 1990s (which, as Dreyfus (1992) has been screaming for his entire career, made them useful for writing chess-playing programs and little else). Persons, according to this view, are not mere symbol-processing machines, but culturally-informed symbol-processing machines, whose chaotic interaction with the myriad cultural forms of everyday life naturally produces both behavioral and cultural variation (DiMaggio, 1997: 272).

As new theory tends to do, these symbolic-schematic accounts of how action comes about solved some problems and created a few more. In cognitive science, the symbol-processing model simply failed to deliver on its promises in the fields of artificial intelligence and robotics. From the 1980s through the early 2000s, most programmers and engineers tried to mimic intelligent behavior by writing programs composed of internally consistent symbol systems. While this produced some laudable feats (one thinks of Deep Blue’s famous triumph over the then world chess champion Garry Kasparov), they were limited to extremely bounded tasks that lent themselves to abstraction. In contrast, physical tasks that nine-month-old babies perform with ease were arduously recreated by robotics engineers only to fail as soon as the environment in which they were performed was slightly altered. This raised the question: if human intelligence is basically a complex symbol-processing mechanism, then why are artificial symbol-processing systems so unbelievably inept at tasks so simple that any human could perform them without any thought or attention?

In sociological theory, the symbol-processing model of culture and cognition painted a picture of an agent who, rather than simply responding to culture, could explore and engage with it. But the nature of the mechanism(s) that allowed for this remained opaque. In other words, if culture is internalized as cognitive architecture, what is the process of internalization? How are the cultural “logics,” “schemas,” and “heuristics” that operate in interaction with the social world (or “stimuli,” for the cognitive scientists) acquired and applied?

Embodiment in Social Theory

Enter the embodiment perspective. The turn towards embodiment, both within culture and cognition (Ignatow, 2007; Strand & Lizardo, 2015; Winchester, 2016) and, increasingly, within cognitive science itself (Edelman, 2004; Rowlands, 2011), has been an attempt to address these issues. In social theory, the embodiment perspective accounts for culture’s internalization by theorizing that the systems of thought that ground our ability to engage with the world – perception, the formation of habits, and the execution of habitual behavior – are essentially informed by the iterative interactions of the body with the world. For some thinkers, a capacity for “deliberation” is a feature of embodiment (Joas, 1996; Winchester, 2016), though this capacity itself depends on the repertoire of habits that result from the body’s immersion in the world. Our capacity for action, and the cognitive schemas and logics on which it depends, finds its root in the body’s grounding in a stable world from which, through infinite experimental explorations from the first day of life until the day we die, it amasses “embodied knowledge.”

This theory of cognition has been extremely fruitful for cognitive scientists and robotics engineers. Robots fitted with exploratory learning algorithms have fared far better at problem-solving in various arenas than their symbol-processing predecessors (Edelman, 2004). In sociology, too, the conceptualization of knowledge as fundamentally embodied is enjoying something of a heyday (e.g. Martin, 2011). And no wonder, since theories of embodied knowledge have several advantages over symbol-processing theories of cognition. For example, they provide an explanation of how cultural knowledge is acquired, maintained, and changed over time. In addition, they lend themselves to habit-oriented theories of action. And finally, they continually situate subjects within the world they inhabit, making a retreat into the theatre of the mind in order to “deliberate,” “calculate,” or “problem-solve” in a wholly abstract fashion analytically unnecessary. This feature of the embodiment perspective has been particularly attractive for action theorists interested in dismantling the legacy of the Cartesian model of the human subject (Crossley, 2013; Scheper-Hughes & Lock, 1987; Turner, 1984; Whitford, 2002), and for sociological theory more generally because it provides a detailed explanatory account of the inseparability of individual and society (Joas, 1996; Martin, 2011).

Beyond Representationalism

Nevertheless, despite the radical situatedness advanced by contemporary theories of embodiment in culture and cognition, a specter of their theoretical predecessors remains. Specifically, the theorization of embodied knowledge tends to conceptualize that knowledge not as a feature of the flesh and blood of the physical body in the world, but as a series of representations of bodily capacities developed and stored in the brain. Ignatow (2007: 122), for example, refers to a “repertoire of embodiments…stored in memory with cognition and language rather than in a separate location.” This makes sense intuitively. The brain, after all, is the ultimate site of the choreography of habitual behavior. We might speak of “muscle memory,” but the effortless sequencing of movements to which that phrase refers relies on patterned neuronal connections in the motor cortex. By themselves, the muscles that articulate activity know nothing of these connections. It is therefore often easy to ignore the physical body in favor of the cognitive representations that map the repertoire of habits it has access to.

But to do so is to mistake the choreography for the dancer. When we neglect the role that the flesh and blood of the physical body plays in the development and maintenance of habitual behavior, we describe embodiment only in its foundational capacity, its ability to give rise to the world immersion that characterizes experience in moments of habitual flow. Even in these moments, however, embodiment is continually vulnerable to breakdown. When we are ill or injured, for example, the cognitive infrastructure that encodes embodied knowledge can no longer make itself manifest. This aspect of embodiment – its vulnerability to disorientation and ungroundedness – is as much a feature of its nature as its ability to act as the bedrock of being-in-the-world.

This is an observation that Maurice Merleau-Ponty made more than half a century ago. Like contemporary theorists of culture and cognition, Merleau-Ponty (1962, p. 102) conceived of habit formation as “a rearrangement and renewal of the corporeal schema”; but he was also always careful to emphasize that the corporeal schema, or “habit-body”, was only intelligible when married to a corresponding “body at this moment.” The specific habit-creating character of human subjectivity, “always already” immersed in its world, relies fundamentally on the fact that the flesh and blood of the physical body (unlike its cognitive representation in the nervous system) extends into that world.

As such, the body is simultaneously an objective part of the world, on the one hand, and the foundation for subjective experience, on the other. This insight allows Merleau-Ponty to account both for the effortless enactment of habitual behavior that structures daily life and the ever-present possibility of a breakdown in the flow of experience it gives rise to: “The fusion of soul and body in the act, the sublimation of biological into personal existence, and of the natural into the cultural world is made both possible and precarious by the temporal structure of our existence” (Merleau-Ponty, 1962, p. 97, italics added).

Recognizing the possibility of breakdown as an essential element of embodiment is important for its conceptualization for two reasons. First, it is simply an accurate description of the reality of embodied experience: our habits are accessible and deployable only to the extent that we possess a body capable of enacting them. “Embodied knowledge” is not enough. Second, a recognition of the tenuousness of embodied knowledge opens up a novel space for theorizing how ruptures in the flow of existence produce behavioral variation. Like disjunctures between ideology and the material conditions of life (Swidler, 1986), or ruptures in the relationship between habitus and history (Bourdieu, 2004), breakdowns in the relationship between the physical body and the cognitive structures that map its history of activity give rise to opportunities for creative behavior, as subjects are forced to contend with the experience of being “thrown” into an action that they are newly incapable of performing.

References

Bourdieu, P. (2004). The peasant and his body. Ethnography, 5(4), 579–599.

Crossley, N. (2013). Habit and habitus. Body & Society, 19(2–3), 136–161.

Dreyfus, H. L. (1992). What computers still can’t do: A critique of artificial reason. Cambridge, MA: MIT Press.

Edelman, G. (2004). Wider than the sky: The phenomenal gift of consciousness. New Haven, CT: Yale University Press.

Ignatow, G. (2007). Theories of embodied knowledge: New directions for cultural and cognitive sociology? Journal for the Theory of Social Behaviour, 37(2), 115–135.

Joas, H. (1996). The creativity of action. Chicago: University of Chicago Press.

Martin, J. L. (2011). The explanation of social action. New York: Oxford University Press.

Merleau-Ponty, M. (1962). Phenomenology of perception. New York: Routledge.

Rowlands, M. (2011). The new science of the mind: From extended mind to embodied phenomenology. Cambridge, MA: MIT Press.

Scheper-Hughes, N., & Lock, M. (1987). The mindful body: A prolegomenon to future work in medical anthropology. Medical Anthropology Quarterly, 1(1), 6–41.

Strand, M., & Lizardo, O. (2015). Beyond World Images: Belief as embodied action in the world. Sociological Theory, 33(1), 44–70.

Swidler, A. (1986). Culture in action: Symbols and strategies. American Sociological Review, 51, 273–286.

Turner, B. S. (1984). The body and society: Explorations in social theory. London: SAGE.

Whitford, J. (2002). Pragmatism and the untenable dualism of means and ends: Why rational choice theory does not deserve pragmatic privilege. Theory & Society, 31, 325–363.

Winchester, D. (2016). A hunger for god: Embodied metaphor as cultural cognition in action. Social Forces, 95(2), 585–606.

“Learning By Nodes”: Dendritic Learning and What It Means (Or Not) for Cultural Sociology

In a paper published earlier this year in Scientific Reports and further discussed in a later ACS Chemical Neuroscience article, a group of researchers argues that learning might not function as we previously thought. The researchers (Sardi et al. 2018a, 2018b) explain that the dominant conceptualization in cognitive neuroscience of how learning works—synaptic learning, or “Hebbian learning” (Hebb 1949)—is wrong. Instead, using a series of computational models and experiments with synaptic blockers and neuronal cultures (see Sardi et al. 2018a:4-7), the authors find evidence for a different type of learning—what they refer to as “dendritic learning.” Just as “Copernicus was the first to articulate loudly that the earth revolves around the sun and not vice versa, even though all the accumulated astronomical evidence at that time fit the old postulation,” the researchers proclaim, so too are they the first to “[swim] against conventional wisdom” of Hebbian learning theory (2018b:1231).

Of what consequence is this newfound process of dendritic learning for cultural sociology? Should we care at all? I’ll try to briefly describe some of the potential consequences of dendritic learning for cultural sociology; but, spoiler alert: I am not sure whether these consequences will turn out to be consequential for how we do sociology. But perhaps taking a peek at what dendritic learning is and how it differs from conventional understandings of how learning works is a nice place to start.

copernican-universe
Figure 1. Are We Witnessing a “Revolution of the Cognitive Spheres”?
Note: Image from Copernicus’ On the Revolutions of the Heavenly Spheres (Palca 2011).

LINKS VS. NODES

For going on 70 years, the prevailing explanation for how learning works has been synaptic learning. Building from Hebb’s (1949) The Organization of Behavior, the idea behind synaptic learning is that if an activity stimulates a neuron which in turn stimulates another neuron, and if that activity is repeated over time, then the first neuron becomes a more efficient stimulator of the second and the two become more strongly connected in the brain.

Neuron-to-neuron stimulation occurs through synapses, the (usually) chemical or (less frequently) electrical junctions across which neurons transmit information. Synaptic learning, then, is a type of “activity-dependent synaptic plasticity” (Choe 2015:1305). Repeated practice of, or exposure to, a certain stimulus modifies the synaptic strength between the two neurons: when the practice/exposure is repeated, the two neurons become more tightly associated in the brain, and when it is not, the association weakens. This process occurs relatively slowly.
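The core of this “learning by links” idea can be sketched as a toy weight-update rule. This is only an illustrative sketch of the Hebbian principle described above; the learning rate, activity values, and number of pairings are all invented for illustration, not taken from any of the papers discussed here.

```python
# Toy "learning by links" (Hebbian) update: repeated co-activation of a
# pre- and postsynaptic neuron strengthens the weight between them.
# The learning rate and activity values are invented for illustration.

def hebbian_update(w, pre, post, lr=0.1):
    """Strengthen a synaptic weight in proportion to joint activity."""
    return w + lr * pre * post

w = 0.0
for _ in range(20):  # repeated pairings, e.g. "bell" plus "food"
    w = hebbian_update(w, pre=1.0, post=1.0)
print(round(w, 2))  # the link strengthens with each pairing
```

Note that when either neuron is silent (`pre` or `post` is zero), the weight does not change, which is the slow, repetition-driven character of synaptic learning in miniature.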

Synaptic learning is the inspiration behind the old adage that “neurons that fire together wire together.” Until very recently, this was the way we assumed new neural coalitions formed in biological neural networks. Consider an example from Luke Muehlhauser over on the Less Wrong blog (Muehlhauser 2011). Think back to Pavlov’s experiment on classical conditioning (Pavlov 1910): a dog is given food when the researcher rings a bell, and the timing between the bell ringing and the presentation of food is manipulated. At first, there is no association between the neurons stimulated by bell ringing and the neurons that trigger salivation; the two responses are, ostensibly, unrelated. However, if the researcher rings the bell and presents the food at the same time (or at close enough intervals), the neurons that fire when food is present and the neurons that fire with bell ringing are activated together. Over repeated trials, the synapses between “bell ringing” and “salivation” neurons become stronger and, eventually, simply ringing the bell induces salivation without the presentation of food (see Figure 2).

Screen Shot 2018-10-16 at 6.56.46 PM
Figure 2. Synaptic Learning with Pavlov’s Experiment
Note: Reprinted from Less Wrong blog (Muehlhauser 2011).

Sardi and colleagues refer to synaptic learning as “learning by links” (Sardi et al. 2018a:1), since learning occurs through the synapses that link neurons together. Their research, however, suggests a different type of learning—dendritic learning, also known as “learning by nodes” (Sardi et al. 2018a:2). In short, in this mode of learning the workhorse of the neuron for learning purposes is not the synapse but the dendrite. In a neuron, dendrites are the long, treelike extensions that connect the cell body (the soma, which contains the cell nucleus) to the synapses that themselves “connect” the neuron to other neurons.

Take a look at Figure 3, a neuron cell’s anatomy. The dendrites are responsible for taking in information from other neurons and passing it along into the soma, while the axon is responsible for passing the information on to other neurons via the axon terminals—which are themselves connected to the next neuron’s dendrites through synapses, thus propagating information transmission across the neural network. Without dendrites, information cannot be transmitted into the body of the neuron: e.g., damaged or abnormal dendrites are linked to brain under-connectivity issues associated with autism (Martínez-Cerdeño, Maezawa, and Jin 2016). Trying to construct new neural networks without dendrites is like trying to have group deliberation with all talk and no listening.

Screen Shot 2018-10-16 at 8.44.31 PM
Figure 3. A Neuron’s Anatomy
Note: Reprinted from OpenStax (2018), redirected from Khan Academy (2018).

So, how does dendritic learning differ functionally from synaptic learning? While synaptic learning is based on the idea of synaptic plasticity, dendritic learning revolves around the notion of (you guessed it) a sort of dendritic plasticity: given increasing or decreasing levels of exposure to a neuron-activating stimulus, the extent of the neuron’s “dendritic excitability” can grow or diminish while the strength of the synapses remains relatively constant (Neuroskeptic 2018).

Consider Figure 4. In both panels, the teardrop object at the bottom represents the neuron cell body, which is where firing happens if the input signals from the dendrites are strong enough for an outgoing signal to be pushed from the cell body down through the axon and on to the dendrites of the next neuron. The long treelike branches are the dendrites, and the tips are the synapses that connect the neuron’s dendrites to the axon terminals of other (not shown) neurons. The left panel illustrates conventional synaptic learning, where the synapses themselves are weighted (indicated by the red valves at the tips of the branches) upward or downward depending on the extent of stimulus exposure. The right panel shows dendritic learning: it is the extent to which a neuron’s dendrites are in a high state of stimulation, and not the strength of the synapses linking the neuron to other neurons, that determines the strength of the input signal and therefore whether or not the neuron fires. In dendritic learning, then, there are far fewer “learning parameters,” since the dendrites, not the synapses, are responsible for the learning (ScienceDaily 2018).

Screen Shot 2018-10-16 at 9.34.34 PM
Figure 4. Synaptic Learning (left) vs. Dendritic Learning (right)
Note: Reprinted from ScienceDaily (2018).
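The “far fewer learning parameters” point can be made concrete with a toy count. All of the numbers below are invented for illustration; real neurons vary widely in how many dendrites and synapses they have.

```python
# Toy count of adjustable "learning parameters" under each account.
# In "learning by links," every synapse carries its own weight; in
# "learning by nodes," one weight sits on each dendrite, with many
# synapses feeding into it. All numbers are invented for illustration.

n_dendrites = 5
synapses_per_dendrite = 200

synaptic_params = n_dendrites * synapses_per_dendrite  # one weight per link
dendritic_params = n_dendrites                         # one weight per node

print(synaptic_params, dendritic_params)
```

With these made-up numbers, the synaptic account has 1,000 adjustable weights and the dendritic account only 5, which is the intuition behind calling dendritic learning a coarser, faster-to-adjust scheme.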

IMPLICATIONS (?) FOR CULTURAL SOCIOLOGY

The “Neuroskeptic” over at Discover Magazine reviewed the evidence from the Sardi et al. papers and suggests that “[a]t best they have shown that dendritic learning also happens [in addition to synaptic learning],” and that “[they] don’t think Copernicus has returned to earth just yet” (Neuroskeptic 2018). I agree with Neuroskeptic in terms of what this means for neuroscience, largely because they are the neuroscientist and I am not. That said, there do seem to be potential implications for how we do cultural sociology. But the potential may be greater for some subfields than for others.

I’m Not Sure What this Adds for How Sociologists Study Learning

The existence of dendritic learning has at least two major implications for cognitive neuroscience. First, learning may happen at much faster timescales than previously thought. Second, weak synapses matter a lot. In terms of timescale, it seems that the brain isn’t that bad at quick adaptation—at least relative to traditional Hebbian learning. As Sardi and colleagues note, “[t]his dynamic brain activity leads to the capability that when we think about an issue several times we may find different solutions” (Shrourou 2018). For the importance of weak synapses, the researchers point out that dendritic strengths are “self-oscillating” (2018b:1231), where weak synapses effectively “temper” the dendritic weights and prevent them from taking on extreme values. In other words, “dendritic learning enables stabilization around intermediate [dendritic strength] values” (Sardi et al. 2018a:4). These implications are pretty important for neuroscientists and medical researchers studying various diseases of the brain (Sardi et al. 2018b:1231-32).

What does all this mean for cultural sociologists? It might be too early to tell. Dendritic learning might be faster than synaptic learning, but the time scales in the experiments are in much smaller intervals (minutes) than the learning processes of interest to sociologists. The researchers note that future studies should “investigate . . . [dendritic learning] efficiency and available learning time scales in more realistic scenarios” (2018b:1231), so it’s an empirical question whether the learning-speed differentials between synaptic and dendritic learning wash out at longer timescales. So, in terms of theoretical leverage, dendritic learning may or may not offer much over and above how we already talk about learning in culture and cognition studies (see Lizardo et al. 2016:293-95). At the end of the day, for cultural sociologists it may all look like GOFILT—Good Old Fashioned Implicit Learning Theory—in which case the difference between synaptic and dendritic learning can be taken as ontologically true but analytically inconsequential. Only time (pun intended) will tell.

The Payoff May Come Sooner for Computational Social Science

In addition to understanding the learning processes behind biological neural networks and brain disorders, Sardi and colleagues also note that this “paradigm shift” matters for developing machine learning algorithms built to mimic human learning (2018b:1231). In natural language processing, for instance, if synaptic learning isn’t the baseline model of human learning (itself an empirical question), then perhaps analytical strategies that build associations between terms or documents based on term frequencies and co-occurrences aren’t based on the best cognitive model for machine learning.
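The count-based strategies at issue here build associations from term co-occurrence. A minimal sketch of that logic, with invented example documents (the words are just a nod to the financial-bubbles example later in this chunk), might look like this:

```python
from collections import Counter
from itertools import combinations

# Minimal sketch of association-by-co-occurrence, the count-based strategy
# discussed above: terms that appear in the same documents become linked.
# The documents are invented examples.
docs = [
    "markets ride bubbles",
    "traders ride markets",
    "bubbles burst",
]

pair_counts = Counter()
for doc in docs:
    words = sorted(set(doc.split()))            # unique terms in this document
    pair_counts.update(combinations(words, 2))  # count each co-occurring pair

print(pair_counts[("markets", "ride")])  # co-occur in two documents
```

Whether such counting rests on the “right” cognitive model of learning is exactly the empirical question the paragraph above raises; the counting itself is agnostic about the underlying neuroscience.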

But at face value I’m skeptical of this last proposition—I like word count methods for analyzing meaning, others do too (Nelson 2014; Underwood 2013), and I’ve read enough papers that make defensible claims using them to sell me on their continued use. That said, we have not yet seen dendritic learning rules implemented in machine learning algorithms (but see Sardi et al. 2018a:2-3 for an example of dendritic learning rules in a series of perceptron models), and they might prove particularly consequential in deep learning tasks and artificial neural network models. These sorts of machine learning algorithms have not gained much traction in sociology, though, so, for now, it seems that the utility of distinguishing between synaptic and dendritic learning for culture and cognition studies is truly a waiting game.

I can continue all of my work without making these distinctions, and I suspect that most of the people reading this post are in the same position.

REFERENCES

Choe, Yoonsuck. 2015. “Hebbian Learning.” Pp. 1305-09 in Encyclopedia of Computational Neuroscience, edited by D. Jaeger and R. Jung. New York: Springer.

Hebb, Donald O. 1949. The Organization of Behavior: A Neuropsychological Theory. New York: Wiley.

Khan Academy. 2018. “Overview of Neuron Structure and Function.” Khan Academy. Retrieved October 16, 2018 (https://www.khanacademy.org/science/biology/human-biology/neuron-nervous-system/a/overview-of-neuron-structure-and-function).

Lizardo, Omar, Robert Mowry, Brandon Sepulvado, Dustin S. Stoltz, Marshall A. Taylor, Justin Van Ness, and Michael Wood. 2016. “What Are Dual Process Models? Implications for Cultural Analysis in Sociology.” Sociological Theory 34(4):287-310.

Martínez-Cerdeño, Verónica, Izumi Maezawa, and Lee-Way Jin. 2016. “Dendrites in Autism Spectrum Disorders.” Pp. 525-43 in Dendrites: Development and Disease, edited by K. Emoto, R. Wong, E. Huang, and C. Hoogenraad. Tokyo: Springer.

Muehlhauser, Luke. 2011. “A Crash Course in the Neuroscience of Human Motivation.” Less Wrong. Retrieved October 16, 2018 (https://www.lesswrong.com/posts/hN2aRnu798yas5b2k/a-crash-course-in-the-neuroscience-of-human-motivation).

Nelson, Laura K. 2014. “Computer-Assisted Content Analysis and Sociology: What You Should Know.” Bad Hessian. Retrieved October 17, 2018 (http://badhessian.org/2014/01/computer-assisted-content-analysis-and-sociology-what-you-should-know/).

Neuroskeptic. 2018. “Is ‘Dendritic Learning’ How the Brain Works?” Discover Magazine. Retrieved October 16, 2018 (http://blogs.discovermagazine.com/neuroskeptic/2018/05/11/dendritic-learning/#.W8aX4P5KjdT).

OpenStax. 2018. “Neurons and Glial Cells.” OpenStax CNX. Retrieved October 16, 2018 (https://cnx.org/contents/GFy_h8cu@9.87:c9j4p0aj@3/Neurons-and-Glial-Cells).

Palca, Joe. 2011. “For Copernicus, A ‘Perfect Heaven’ Put Sun At Center.” NPR: Morning Edition. Retrieved October 16, 2018 (https://www.npr.org/2011/11/08/141931239/for-copernicus-a-perfect-heaven-put-sun-at-center).

Pavlov, Ivan. 1910. The Work of the Digestive Glands. London: C. Griffin & Company.

Sardi, Shira, Roni Vardi, Amir Goldental, Anton Sheinin, Herut Uzan, and Ido Kanter. 2018a. “Adaptive Nodes Enrich Nonlinear Cooperative Learning Beyond Traditional Adaptation By Links.” Scientific Reports 8(1):5100.

Sardi, Shira, Roni Vardi, Amir Goldental, Yael Tugendhaft, Herut Uzan, and Ido Kanter. 2018b. “Dendritic Learning as a Paradigm Shift in Brain Learning.” ACS Chemical Neuroscience 9:1230-32.

ScienceDaily. 2018. “The Brain Learns Completely Differently than We’ve Assumed Since the 20th Century.” ScienceDaily. Retrieved October 16, 2018 (https://www.sciencedaily.com/releases/2018/03/180323084818.htm).

Shrourou, Alina. 2018. “Dendritic Learning Occurs Much Faster and In Closer Proximity to Neurons, Shows Study.” News Medical: Life Sciences. Retrieved October 16, 2018 (https://www.news-medical.net/news/20180830/Dendritic-learning-occurs-much-faster-and-in-closer-proximity-to-neurons-shows-study.aspx).

Underwood, Ted. 2013. “Wordcounts Are Amazing.” The Stone and the Shell. Retrieved October 17, 2018 (https://tedunderwood.com/2013/02/20/wordcounts-are-amazing/).

Limits of innateness: Are we born to see faces?

Sociologists tend to be skeptical of claims that individuals are consistent across situations, as a recent exchange on Twitter exemplifies. This exchange was partially spurred by revelations that the famous Stanford Prison Experiment (which supposedly showed people will quickly engage in behaviors commensurate with their assigned roles, even if it means being cruel to others) was even more problematic than previously thought.


The question of individual “durability” is sometimes framed as “nature vs. nurture,” and this is certainly part of the matter. In sociology, however, this skepticism of “durability” often goes much further than innateness, and sometimes leads sociologists to suggest individuals are inchoate blobs until situations come along to construct them (or interlocutors may resort to obfuscation by touting the truism that humans are always in a situation). If pushed on the topic, however, even the staunchest situationalist would likely concede that humans are born with some qualities, and the real question is: what are the limits of such innateness? What kinds of qualities can be innate? To what extent are these innate qualities human universals? And, if we are “born with it,” can “it” change, and if so, how and to what extent? In Stephen Turner’s new Cognitive Science and the Social, he puts the matter succinctly:

“…children quickly acquire the ability to speak grammatically. This seems to imply that they already had this ability in some form, such as a universal set of rules of language stored in the brain. If one begins with this problem, one wants a model of the brain as “language ready.” But why stop there? Why think that only grammatical rules are innate? One can expand this notion to the idea of the “culture-ready” brain, one that is poised and equipped to acquire a culture” (2018:44–45).

As I’ve previously discussed, the search for either the universal rules or a specialized module for language has, thus far, failed. Nevertheless, most humans must be “language-ready” in the minimal sense of having the ability to acquire the ability to speak and understand speech. But answering the question of where innateness ends and enculturation begins is not easy, even for those without the disciplinary inclination toward strongly situationalist arguments.

Are we born to see faces?

How we identify faces is a good place to explore this difficulty: Do we learn to identify faces, or are we born to see faces? And, if we are born to see faces, is this ability refined through use, and to what extent? Enter: the fusiform face area (FFA). Just like language, the FFA is often used as evidence for the more general arguments of functional localization and domain specificity. The argument goes: facial recognition is produced not by the generic cognitive processes involved in vision (or other generic processes), but rather by an inborn, special-purpose module.

One reason faces are an even better candidate than language for grappling with the question of innateness is that the human fetus is exposed to language while in the womb. Human fetuses gain some sense of prosody and tonality, and, as a result, a basic sense of grammar in the course of development in utero. There is no comparable exposure to faces, however. Another reason is, as the Gestalt psychologists argued, that faces have an irreducible structure such that they are perceived as complete wholes even when viewing only a part — “the whole is something else than the sum of its parts, because summing is a meaningless procedure, whereas the whole-part relationship is meaningful” (Koffka 1935:176).

Facial recognition encompasses two related functions: distinguishing faces from non-face objects and distinguishing among faces. The key debate within this area of cognitive neuroscience is whether there is a module specialized for one or both of these processes (Kanwisher, McDermott, and Chun 1997; Kanwisher and Yovel 2006), as opposed to a distributed and generic cognitive process (Haxby et al. 2001). This debate goes back to the observation that humans struggle to recognize and remember faces that are upside down, a struggle that seemed more pronounced for faces than for any non-face object (Diamond and Carey 1986) — suggesting something about faces made them unique. The proposal that facial recognition is the product of a specialized module, however, begins with a relatively recent paper by Kanwisher et al. (1997). Using functional magnetic resonance imaging (which I’ve discussed in detail in previous posts), 15 subjects were shown various common objects as well as faces. In 12 of those subjects, a specific area of the brain was more active when they saw faces than when they saw non-face objects. On its face, this seems like reasonable evidence that humans are born with a module necessary for identifying faces.

However, when one squares this claim with the underlying logic of fMRI—(a) it measures relative activation, not an on/off process, and (b) its voxel and temporal resolution are far too coarse to conclude that a region is homogeneously activated—the claim that the FFA is a functionally specialized module for facial recognition weakens considerably. These areas are not entirely inactive when viewing non-face objects. Indeed, relative to baseline activation, subsequent research found the FFA is significantly more active when viewing various objects (Grill-Spector, Sayres, and Ress 2006). Specifically, the level of specificity of the stimulus (e.g. faces tend to be individuals whereas chairs tend to be generic) and the participant’s level of expertise with the stimulus (e.g. car and bird enthusiasts) predicted greater relative activation (Gauthier et al. 2000; Rhodes et al. 2004).

Finally, even if we are born to distinguish faces from non-faces, the ability to distinguish among faces is considerably trained by early socialization, and such socialization introduces a lot of variation among people. For example, one of the earliest attempts to measure facial recognition concluded “that women are perhaps superior to men in the test; that salespeople are superior to students and farm people; that fraternity people are perhaps superior to non-fraternity people…” (Howells 1938:127).

Subsequent research in this vein found individuals are better at distinguishing among their racial/ethnic ingroups than their outgroups. In an early study of black and white students from a predominantly black university and a predominantly white university, researchers found participants more easily discriminated among faces of their own race. They also found “white faces were found more discriminable” overall, which they suggest may be the result of the fact that “the distribution of social experience is such that both black persons and white persons will have had more exposure to white faces than black faces in public media…” (Malpass and Kravitz 1969:332). Summarizing more recent work, Kubota et al. (2012) state that “participants process outgroup members primarily at the category level (race group) at the expense of encoding individuating information because of differences in category expertise or motivated ingroup attention.”

Why should sociologists care?

To summarize, the claim that facial recognition emerges from an innate, functionally specialized cognitive module is weakened in four ways: the FFA responds to more generic features faces share with other objects; the FFA is implicated in a distributed neural network rather than solely a discrete module; the FFA is used for non-facial recognition functions; and, finally, facial recognition is trained by our (social) experience. Why should sociologists care? I think there are three reasons. First, innateness is not deterministic or specific but rather constraining and generic. Second, these constraints ripple throughout our social experience, forming the contours of cultural tropes, but are not immutable. Third, limited innateness does not mean individuals are not durable across situations, even (near) universally so.

A dispositional and distributed theory of cognition and action accounts for object recognition by its use: “information about salient properties of an object—such as what it looks like, how it moves, and how it is used—is stored in sensory and motor systems active when that information was acquired” (Martin 2007:25). This is commensurate with the broad approach many of the posts on this blog have been working with. Perhaps, however, there is a special class of objects for which this is not exactly the case. In other words, the admittedly weak innateness of distinguishing unfamiliar faces from non-face objects is, perhaps, evidence that we are “born with” some forms of nondeclarative knowledge (Lizardo 2017).

Such nondeclarative knowledge, however, may be re-purposed for cultural ends. Following the logic of neural exaptation, discussed in a previous post, humans can be born with predispositions, especially related to very generic cognitive processes, which are further trained, refined, and recycled for novel uses—novel uses which are nevertheless constrained in a way that yields testable predictions. A fascinating example related to facial perception is anthropomorphization. If rudimentary facial recognition is innate (and therefore probably evolutionarily old), this inherently social-cognitive process can be reused for non-social purposes (i.e. non-social in the restricted sense of interpersonal interaction). The facial recognition network—together with other neuronal networks—is used to identify people and predict their behavior, and it may be adapted to non-human animate and inanimate objects, like natural forces, as well as to anonymous social structures, like financial markets.

What this means, following the logic of neural reuse and conceptual metaphor theory, is that the target domain (e.g. derivative markets, earthquakes) is “contaminated” by predispositions which originally dealt with the source domain (here, interpersonal interaction). This means attempting to imagine the intentions of thousands of unknown traders as if inferring the intentions of an interlocutor may lead traders to “ride” financial bubbles (De Martino et al. 2013). Therefore, what is and is not innate is a messy question to answer — even by those without a disciplinary distrust of innateness claims. Although cognitive neuroscientists are making headway, it remains an empirical question which objects are recognized innately and the extent to which the object recognition is robust to enculturation and neural recycling.

More importantly, the question of individual durability across situations should not be reduced solely to “nature vs. nurture.” That is, we must grapple with the question of how easily these processes, once trained in an individual (during “primary socialization”), can be re-trained, if at all. In John Levi Martin’s Thinking Through Theory (2014:249), the third of his “Newest Rules of Sociological Method” is pessimistic in this regard: “Most of what people think of as cultural change is actually changes in the compositions of populations.” That is, even if we were to bar the possibility of innateness in any strong sense, once individuals reach a certain age they are likely to be fairly consistent across situations, with little chance of altering in fundamental ways.

REFERENCES

De Martino, Benedetto, John P. O’Doherty, Debajyoti Ray, Peter Bossaerts, and Colin Camerer. 2013. “In the Mind of the Market: Theory of Mind Biases Value Computation during Financial Bubbles.” Neuron 79(6):1222–31.

Diamond, Rhea and Susan Carey. 1986. “Why Faces Are and Are Not Special: An Effect of Expertise.” Journal of Experimental Psychology. General 115(2):107.

Gauthier, I., P. Skudlarski, J. C. Gore, and A. W. Anderson. 2000. “Expertise for Cars and Birds Recruits Brain Areas Involved in Face Recognition.” Nature Neuroscience 3(2):191–97.

Grill-Spector, Kalanit, Rory Sayres, and David Ress. 2006. “High-Resolution Imaging Reveals Highly Selective Nonface Clusters in the Fusiform Face Area.” Nature Neuroscience 9(9):1177–85.

Haxby, J. V., M. I. Gobbini, M. L. Furey, A. Ishai, J. L. Schouten, and P. Pietrini. 2001. “Distributed and Overlapping Representations of Faces and Objects in Ventral Temporal Cortex.” Science 293(5539):2425–30.

Howells, Thomas H. 1938. “A Study of Ability to Recognize Faces.” Journal of Abnormal and Social Psychology 33(1):124.

Kanwisher, Nancy and Galit Yovel. 2006. “The Fusiform Face Area: A Cortical Region Specialized for the Perception of Faces.” Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences 361(1476):2109–28.

Kanwisher, N., J. McDermott, and M. M. Chun. 1997. “The Fusiform Face Area: A Module in Human Extrastriate Cortex Specialized for Face Perception.” The Journal of Neuroscience: The Official Journal of the Society for Neuroscience 17(11):4302–11.

Koffka, Kurt. 1935. Principles of Gestalt Psychology. New York: Harcourt, Brace.

Kubota, Jennifer T., Mahzarin R. Banaji, and Elizabeth A. Phelps. 2012. “The Neuroscience of Race.” Nature Neuroscience 15(7):940–48.

Lizardo, Omar. 2017. “Improving Cultural Analysis Considering Personal Culture in Its Declarative and Nondeclarative Modes.” American Sociological Review 0003122416675175.

Malpass, R. S. and J. Kravitz. 1969. “Recognition for Faces of Own and Other Race.” Journal of Personality and Social Psychology 13(4):330–34.

Martin, Alex. 2007. “The Representation of Object Concepts in the Brain.” Annual Review of Psychology 58(1):25–45.

Martin, John Levi. 2014. Thinking Through Theory. W. W. Norton, Incorporated.

Rhodes, Gillian, Graham Byatt, Patricia T. Michie, and Aina Puce. 2004. “Is the Fusiform Face Area Specialized for Faces, Individuation, or Expert Individuation?” Journal of Cognitive Neuroscience 16(2):189–203.

Turner, Stephen P. 2018. Cognitive Science and the Social: A Primer. Routledge.

Thinking with Theory Diagrams

A recent book by Kate Raworth entitled Doughnut Economics (2017) has garnered a lot of attention. The goal of the book is revolutionary in spirit: to move economists to think more about basic social and ecological well-being. While this aim will certainly resonate with sociologists, the means of getting there may surprise you: a doughnut. Raworth argues that what is needed are new models, new theoretical diagrams to facilitate this major change in the way economists should think about the economic world. Her major diagrammatic innovation, the doughnut, helps economists think not just about growth, but a world that promotes and produces basic social needs and ecological responsibility:

[Image: Raworth’s Doughnut diagram]

This is a major diagrammatic shift. One of its most striking ambitions is to move economists away from the conception of growth as indiscriminately good. Below is a diagram of what GDP growth in economics might look like:

[Image: an exponential GDP growth curve]

What kinds of thinking are embedded within diagrams like this? As Raworth notes, this exponential growth curve fits perfectly with how people metaphorically understand progress – as ‘up’ and ‘forward’. This observation is in line with the influential work of Lakoff and Johnson (2008), who show how ubiquitous the orientational metaphors ‘GOOD IS UP’ and ‘GOOD IS FORWARD’ are in Western culture; for example, ‘things are looking up’ and ‘I’m moving forward with my life’. However, metaphors are not purely linguistic phenomena. For one, as embodied cognition research has shown, these kinds of metaphors are grounded in ‘image-schemas’ connected to our bodies and our physical experiences (Barsalou, 2008; Lakoff and Johnson, 2008; see Wood et al., 2018 for a sociological discussion). Second, these conceptual metaphors are also embedded in diagrams and are part of how we think with and through them (Reed, 2013). In some sense, then, it is likely that we are drawn to this kind of diagrammatic view of economics because it ‘resonates’ (McDonnell, Bail, and Tavory, 2017) or fits so neatly with the way we think, act, and orient ourselves to the world more generally.

Raworth (2017) argues that a basic set of core diagrams – the curves, parabolas, lines, and circles that populate economics articles and books – lingers in the back of most economists’ minds when they think about a given economic issue, supplying them with major assumptions of economic theory. These diagrams are indelibly etched in their minds, providing consequential ‘intellectual baggage’. More controversially, she argues that many of the most iconic of these diagrams are “out of date, blinkered, or downright wrong” (p. 21).

Accordingly, she aims to provide a new type of diagram to encourage a new type of thinking: to see the economy as embedded in society and the environment, and to strive not simply for growth but for an ecologically safe and socially just space for human flourishing. In the Doughnut, we must be careful not to ‘overshoot’ beyond the ecological ceiling, meaning that any growth that produces environmental degradation is bad. ‘Up’ and ‘forward’ are no longer indiscriminately ‘good’, as they were under the metaphorical underpinnings of the exponential growth curve; instead, there is a ‘sweet spot’ within the doughnut at which economists should aim.

So why do we need diagrams to spur this kind of intellectual revolution? Why do they matter so much? A lot can be said here, but I’d like to focus on three interrelated points. First, human beings are wired for visuals; because visualization plays such a major role in cognition, we perform mental tasks like image recognition, pattern recognition, and meaning attachment with incredible speed and ease (Thorpe et al., 1996). Moreover, images, unlike non-visual ideas and concepts, go directly into our long-term memory, leaving a lasting impression with a surprising level of detail (Brady et al., 2008). Second, we know that a number of disciplines rely heavily on diagrams to produce new knowledge and facilitate new discoveries (Coopmans et al., 2014; Knorr-Cetina, 2003; Tversky, 2011).

While we tend to think of theory figures as useful tools for teaching (Baldamus, 1992), they are also important tools for explanation, elaboration, clarification, analysis, critique, and intellectual creativity (Lynch, 1991; Silver, 2018; Swedberg, 2016; Turner, 2010; see also Mills, 1959, p. 213). Third, diagrams do not simply support our intellectual work; they actively shape and direct it (Silver, 2018; Turner, 2014). Diagrams are both ‘servants’ and ‘guides’ – useful for both problem-finding and problem-solving (Humphrey, 1996). They are often imbued with theoretical assumptions (e.g. Owens, 2012), can shape the kinds of questions we ask and how we interpret our findings (e.g. Lewinnek, 2010), and promote certain kinds of thinking over others (Tversky, 2011). The metaphorical underpinning of the exponential growth curve is a perfect example.

Sociologists also work with diagrams, so it is natural to ask what kinds of theoretical diagrams linger in the back of sociologists’ minds, and how they shape the kind of work we do. Of course, we have some iconic theory diagrams that have inspired a lot of research: Coleman’s boat/bathtub, Burgess’ ‘concentric-zone model’, and Parsons’ various AGIL schemes. We also use popular, more conventionalized diagrammatic forms: cross-classification tables, Venn diagrams, Cartesian coordinates, and more. But while a few sociologists have studied theory diagrams in sociology (Lynch, 1991; Silver, 2018; Swedberg, 2016; Turner, 2010), none have produced any data demonstrating which diagrams are most commonly used.

A paper in progress that I co-authored with Daniel Silver (presented at this year’s ASA conference in Philadelphia), on some of the practical considerations of theory visualization, addresses this issue. We took a random sample (40 articles per journal) from some of the leading journals of sociological theory in North America and Europe (Sociological Theory; Theory and Society; Theory, Culture, and Society; European Journal of Social Theory), as well as some of the leading generalist journals that often include theoretical work (American Journal of Sociology, American Sociological Review, European Journal of Sociology).

We found that, of the theory diagrams in our sample (figures without data), the path diagram was the most commonly used of all the conventionalized diagrammatic forms, making up around 20% of those diagrams. This likely does not come as a surprise to most sociologists: I always seem to come across path-like diagrams in my reading, both with and without data, and can think of multiple times a professor has recommended using a path diagram to think through and make sense of a research project. If path diagrams are so popular in sociology, and at least some professors generically prescribe them to struggling graduate students, it is worth asking: what does it mean to see the social world through a path diagram, like the one below?

[Image: a conventional path diagram]

As with the exponential growth curve, we can learn a lot here by unpacking the basic cognitive elements embedded within the diagram. While this may appear somewhat reductive, all concepts, even abstract theoretical concepts in sociology, are grounded in a similar structure (Lizardo, 2013). Path diagrams may be viewed as an integrated or compound image-schema (Kimmel, 2005) with two main imagistic bases:

  1. Variables as ‘containers’

First, the path diagram asks us to visualize variables as static entities that are ‘contained’ within a bounded space. Again, this fits with another of the most fundamental metaphors identified by Lakoff and Johnson – the ‘container’ metaphor (for example, when we say ‘I’ve lived a full life’). This is an ontological metaphor that tells us there is an ‘inside’ and an ‘outside’ – and in this case anything ‘inside’ the circle is understood as contained within its ‘boundary’.

  2. Source-path-goal

The source-path-goal schema is one of the most important sense-making structures people have; it structures our conceptions of ‘journey’ (a starting point, a trajectory, and a destination), ‘story’ (a beginning, middle, and end), and ‘purposeful life’ (an initial problem or ambition, action, and a solution or achievement) (Forceville, 2006).

Visually, we can see these structures in most conventional path diagrams:

[Image: container and source-path-goal schemas within a conventional path diagram]

But do all sociologists see social phenomena as bounded entities with relationships moving from a starting point, along a path, toward a given outcome? Interestingly, while many sociologists certainly think this way, ‘relational sociology’ (see Abbott, 2004; Emirbayer, 1997) explicitly rejects this line of thinking. Rather than treating phenomena as static ‘things’, relational sociologists conceive of the social world as dynamic relations and processes. For them, boundary specification becomes a far more difficult and contentious question. For example, where do we end the webs of social relations in a network, and when does a set of relations count as a ‘thing’? Or how do we fix a particular group if its membership, the frequency and intensity of its relationships, its definition, its aims, etc. are continuously changing?

The same can be said for the source-path-goal schema: Can we commit to one causal story, one fixed set of relationships between entities? Ontologically, both the ‘container’ and ‘source-path-goal’ schemas appear incompatible with relational sociology; rather than fixed, bounded entities and static, linear relationships, relational sociologists see the social world as process—expanding and contracting, appearing and disappearing, merging and dividing, and so on. While path diagrams have been extremely useful and productive in sociology, if one’s aims are relational in nature, path diagrams may not be useful for thinking through or representing them.

Given this, one can speculate about what this may mean for the discipline. If diagrams are as influential as many suggest, and path diagrams are a go-to way to visualize theoretical ideas, could this be operating as a kind of visual roadblock to some forms of theory development? Could the way sociologists think and represent their ideas visually be stifling the development of relational theory? Can relational sociologists create a small revolution of their own, as Raworth (2017) has, by inventing or promoting alternative diagrammatic forms? For now, I can only speculate – but it seems to me that we have yet to explore how our visual language may be shaping the trajectory of the field as a whole.

Works Cited

Baldamus, W. (1992). Understanding Habermas’s methods of reasoning. History of the human sciences, 5(2), 97-115.

Barsalou, L. W. (2008). Grounded cognition. Annu. Rev. Psychol., 59, 617-645.

Brady, T. F., Konkle, T., Alvarez, G. A., & Oliva, A. (2008). Visual long-term memory has a massive storage capacity for object details. Proceedings of the National Academy of Sciences, 105(38), 14325-14329.

Coopmans, C., Vertesi, J., Lynch, M. E., & Woolgar, S. (2014). Representation in scientific practice revisited. MIT Press.

Forceville, C. (2006). The source–path–goal schema in the autobiographical journey documentary: McElwee, van der Keuken, Cole. New Review of Film and Television Studies, 4(3), 241-261.

Humphrey, T. M. (1996). The early history of the box diagram. Federal Reserve Bank of Richmond Economic Quarterly, 82(1).

Kimmel, M. (2005). From metaphor to the “mental sketchpad”: Literary macrostructure and compound image schemas in Heart of Darkness. Metaphor and Symbol, 20(3), 199-238.

Knorr-Cetina, K. (2003). From pipes to scopes: The flow architecture of financial markets. Distinktion: Scandinavian Journal of Social Theory, 4(2), 7-23.

Lakoff, G., & Johnson, M. (2008). Metaphors we live by. University of Chicago press.

Latour, B. (1986). Visualization and cognition. Knowledge and society, 6(1), 1-40.

Lewinnek, E. (2010). Mapping Chicago, imagining metropolises: reconsidering the zonal model of urban growth. Journal of Urban History, 36(2), 197-225.

Lizardo, O. (2013). Re‐conceptualizing Abstract Conceptualization in Social Theory: The Case of the “Structure” Concept. Journal for the Theory of Social Behaviour, 43(2), 155-180.

Lynch, M. (1991). Pictures of nothing? Visual construals in social theory. Sociological Theory, 1-21.

McDonnell, T. E., Bail, C. A., & Tavory, I. (2017). A theory of resonance. Sociological Theory, 35(1), 1-14.

Mills, C. W. (1959). The sociological imagination. New York: Oxford University Press.

Owens, B. R. (2012). Mapping the city: Innovation and continuity in the Chicago School of Sociology, 1920–1934. The American Sociologist, 43(3), 264-293.

Raworth, K. (2017). Doughnut economics: seven ways to think like a 21st-century economist. Chelsea Green Publishing.

Reed, S. K. (2013). Thinking visually. Psychology Press.

Silver, D. (2018). Figure It Out!. Sociological Methods & Research, 0049124118769089.

Swedberg, R. (2016). Can You Visualize Theory? On the Use of Visual Thinking in Theory Pictures, Theorizing Diagrams, and Visual Sketches. Sociological Theory, 34(3), 250-275.

Thorpe, S., Fize, D., & Marlot, C. (1996). Speed of processing in the human visual system. Nature, 381(6582), 520.

Turner, C. (2010). Investigating sociological theory. Sage Publications.

Turner, C. (2014). Travels without a donkey: The adventures of Bruno Latour. History of the Human Sciences, 28(1), 118-138.

Tversky, B. (2011). Visualizing thought. Topics in Cognitive Science, 3(3), 499-535.

Wood, M. L., Stoltz, D. S., Van Ness, J., & Taylor, M. A. (2018). Schemas and frames. Sociological Theory, 36(3), 244-261.

Beyond Good Old-Fashioned Ideology Theory, Part Two

In part one, I examined two recent frameworks for understanding ideology (Jost and Martin) and explained how both serve as alternatives to good old-fashioned ideology theory (GOFIT). Ultimately, I concluded that Martin’s (2015) model has specific advantages over Jost’s (2006), though the connection between ideology and “practical mastery of ideologically-relevant social relations” needs to be fleshed out. This is particularly true because any strong concentration on social relations seems to preclude serious attention to cognition; and without that attention, the argument is vulnerable to charges of reductionism.

In this post, I sketch a model of cognition that checks the boxes of GOFIT ideology: distorting, invested with power, supporting unequal social relations. But it differs for reasons I specify below. To do this, I use a famous experiment in neuroscience—Michael Gazzaniga’s “split-brain” research—and draw an analogy between it and a possible non-GOFIT ideology.

Galanter, Gerstenhaber … and Geertz

But before doing that, it seems reasonable to ask about the purpose of even attempting a non-GOFIT ideology. Is GOFIT a strawman? Why is it problematic? To answer these questions, and to indicate why a holistic revision of ideology away from GOFIT seems to be in order, consider Clifford Geertz and his 1973 essay “Ideology as a Cultural System,” which presents what is to date arguably the most influential non-Marxist approach to ideology in the social sciences. Geertz’s burden is to make ideology relevant by providing it with a “nonevaluative” form. And the way he does this, using modular or computational cognition, is what I want to focus on.

Ideology here is not tantamount to oversimplified, inaccurate, “fake news”-style distortion that is, above all and categorically, what science is not. But if it is not to be censured in this way, then for Geertz ideology must be a symbolic phenomenon that has something to do with how “symbolic systems” make meaning in the world and, in turn, serve to guide action (e.g. “models of, models for”). To make this argument, he does, in fact, make ideology cognitive by drawing from a psychological model: Eugene Galanter and Murray Gerstenhaber’s (1956) “On Thought: The Extrinsic Theory.”

As Geertz summarizes:

thought consists of the construction and manipulation of symbol systems, which are employed as models of other systems, physical, organic, social, psychological, and so forth, in such a way that the structure of these other systems– and, in the favorable case, how they may therefore be expected to behave–is, as we say “understood.” Thinking, conceptualization, formulation, comprehension, understanding, or what-have-you, consists not of ghostly happenings in the head but of a matching of the states and processes of symbolic models against the states and processes of the wider world … (214)

Geertz returns to this same argument in arguably his most thorough approach to the culture concept (“The Growth of Culture and the Evolution of Mind”). Importantly, there too he does not conceive of culture or symbols absent a psychological referent, which he consistently draws from Galanter and Gerstenhaber.

Whatever their other differences, both so-called cognitive and so-called expressive symbols or symbol-systems have, then, at least one thing in common: they are extrinsic sources of information in terms of which human life can be patterned–extrapersonal mechanisms for the perception, understanding, judgment, and manipulation of the world. Culture patterns–religious, philosophical, aesthetic, scientific, ideological–are “programs”; they provide a template or blueprint for the organization of social and psychological processes, much as genetic systems provide such a template for the organization of organic processes (Geertz, 216)

How does this apply to ideology? It makes ideology a symbolic system for building an internal model. Geertz is distinctively not anti-psychological here but instead seems to double down on the “extrinsic theory of thought” to define culture as a symbol system through which agents construct models of and for some system out in the world, effectively programming their response to that system. Ideology refers to the symbol system that does this for the political system:

The function of ideology is to make an autonomous politics possible by providing the authoritative concepts that render it meaningful, the suasive images by means of which it can be sensibly grasped … Whatever else ideologies may be–projections of unacknowledged fears, disguises for ulterior motives, phatic expressions of group solidarity–they are, most distinctively, maps of problematic social reality and matrices for the creation of collective conscience (Geertz, 218, 220)

Geertz mentions the example of the Taft-Hartley Act (restricting labor unionizing) that carries the ideological label the “slave labor act.” Geertz emphasizes how ideology works according to how well or how poorly the model (“slave labor act”) “symbolically coerces … the discordant meanings [of its object] into a unitary conceptual framework” (210-211).

If GOFIT is a set of assumptions widely held about ideology, then we probably find little to disagree with in Geertz’s argument, at least at first glance. Much of it should ring true. If we object to anything, it might be the heavy-handed language Geertz uses that evokes modular or computational cognition (e.g. “programs”). But maybe Geertz himself is not responsible for this. His sources, Galanter and Gerstenhaber, were explicit in making these assumptions about cognition, and this, I want to argue, is important for a specific reason.

To Galanter and Gerstenhaber, “model” clearly meant the sort of three-dimensional scale models that scientists construct in order to understand large-scale physical phenomena. In this sense, they solved the “problem of human thinking” by defining it as a lesser version of idealized scientific thinking. And they were not alone in that pursuit. At least initially, cognition was presented as antithetical to behaviorism in psychology by allying itself with resources that were quite deliberate and quite reflexive: “[mid-century] cognitive scientists … looked for human nature by holding an image of what they were looking for in their [own] minds. The image they held was none other than their own self-image … ‘good academic thinking’ [became the] model of human thinking” (Cohen-Cole 2005).

This is not only the context for Geertz’s theory of ideology; his understanding of “symbol systems” writ large cannot be removed from this specific gloss on, and extension of, “good academic thinking.” For our purposes, this should raise the question of whether using symbol systems to form internal models of the external world, and manipulating and creatively construing those models as “symbolic action,” should be the template or basis for defining ideology on nonevaluative grounds, that is to say, for defining ideology in the way Geertz himself does: as cognitive.

Ideology and the Split-Brain

What I will try to do now, after this long preamble, is sketch a different possible cognitive basis for a theory of ideology, one that I think is compatible with Martin’s (2015) field-theoretic approach to ideology discussed in part one of this post. It develops a cognitive interpretation of what “practical mastery of ideologically-relevant social relations” might mean. It also situates Marx as the contrary of Geertz by making social relations a necessary condition for ideology as a cognitive phenomenon, not something that needs to be bracketed (or pigeonholed as “strain” or “interest”) for ideology to be cognitive.

This different basis is Gazzaniga’s research (1967; 1998; Gazzaniga and Ledoux 1978) on the split-brain and the process of confabulation of meaning on the basis of incomplete visual input. It is important to mention that I use the split-brain as an analogue (in “good academic thinking” terms) to convey what ideology might mean as a cognitive phenomenon if it is not a symbol system. I do not imply that ideology requires a split-brain as a physical input.

For Gazzaniga, the two sides of the brain effectively constituted two separate spheres of consciousness, but this could only be truly appreciated when the corpus callosum was severed (once a procedure for epileptic patients) and the two sides of the brain were rendered independent of each other. When this happened, the visual field was bisected: the hemispheres stopped sharing the information that came through the right and left visual fields (hereafter RVF and LVF). What was observable in the RVF was received independently of what was observable in the LVF. As Gazzaniga found, the brain is multi-modal; the left hemisphere is the center of language about visual input. So when a word or image was flashed to the RVF and the information was received by the left hemisphere, the patient could provide an accurate report. When a word or image was flashed to the LVF, the patient could only confabulate, because the non-integrated brain could not combine the visual information with the language functions of the left hemisphere. The split-brain patient effectively “didn’t see anything,” even though she could still connect visual cues to related pictures on command.

When visual information is presented to a split-brain, the mystery is how the verbal left hemisphere attempts to make sense of what the non-verbal right hemisphere is doing. This is the recipe for confabulations or “false memories” as Gazzaniga (1998) puts it, because here we witness the effects of the “interpreter mechanism.”

Thus, when the RVF and LVF of a split-brain patient were shown pictures of a house in the snow and a chicken’s claw, and the patient was asked to point to relevant pictures based on these visual cues, she pointed to a snow shovel and a chicken head respectively. Here is the interesting part:

the right hemisphere—that is, the left hand—correctly picked the shovel for the snowstorm; the right hand, controlled by the left hemisphere, correctly picked the chicken to go with the bird’s foot. Then we asked the patient why the left hand— or right hemisphere—was pointing to the shovel. Because only the left hemisphere retains the ability to talk, it answered. But because it could not know why the right hemisphere was doing what it was doing, it made up a story about what it could see—namely, the chicken. It said the right hemisphere chose the shovel to clean out a chicken shed (Gazzaniga 1998: 53; emphasis added).

“It made up a story” refers here to the verbal left hemisphere attempting to make sense of why the right hemisphere had been directed toward a shovel. The picture flashed to the right hemisphere never acquired any narrative form, and yet the split-brain patient could still point at a relevant image even though this did not “pass through” language.

The argument here is that this serves as a good analogue for a theory of ideology that does not make computational or modular commitments. The important point is that the confabulation is not just some made-up story but what the split-brain patient believes, because his brain has filled in the blank (e.g. “I chose the shovel because I need to shovel out the chicken coop”). Ideology as a cognitive phenomenon does not, in this sense, mean programming the political system according to an extrinsic symbol system; in other words, building an internal model (a three-dimensional one) of that system and drawing entailments from it, as any good scientist would do. To be “in ideology” means filling in the blank as the normal way to cognitively cope with disconnected inputs, some with a “phonological representation,” others that are “nonspeaking.”

The Split-Brain and Social Relations

We can theorize that where practical mastery of social relations becomes important, in particular, social relations that are “ideologically-relevant,” it is because they generate an equivalent of a split-brain effect and its “interpreter mechanism.” In social relations arranged as fields, practical mastery consists of the “felt motivation of impulsion … to attach impulsion … to positions … [and have] the ethical or imperative nature of such motivations [be] akin to a social object, external and (locally) intersubjectively valid, that is, valid conditional on position and history” (Martin 2011: 312).

Fields refer to one type of social relation conducive to ideological effects, particularly if they are organized on quasi-Schmittian grounds of opponents and allies (Martin 2015). Marx is clear that other types of social relation (like capital) are specifically resistant to influence by any sort of cognitive mediation. Still, he achieves some understanding of those social relations by examining their “being thought … [through] abstractions” (see Marx 1973: 143). For instance, the commodity fetish can be seen as analogous to a split-brain effect: the “social relation between things” is an LVF interpretation, while the “social relation between people” is equivalent to an RVF input. A split-brain is an analogue of mental structures that correspond to these objective (social) structures.

Taking the split-brain (rather than the “extrinsic theory”) as the basis for ideology as a (non-GOFIT) cognitive phenomenon, then, we can speculate that only certain social relations (fields, capital) have an ideological effect. They have this effect because they generate a split-brain scenario with disconnected inputs. Agents are subject to social relations to which they do not have direct access (RVF). They fill in the blank of the effect of those inputs through “abstractions,” i.e. explicit endorsements or propositional attitudes that take linguistic form, often mistaken on their own terms as ideology (LVF).

To be continued … [note: Zizek (2017: 119ff) also finds the split-brain useful for thinking about ideology, though his argument confounds and mystifies with Pokemon Go]

 

References

Cohen-Cole, Jamie. (2005). “The Reflexivity of Cognitive Science: The Scientist as a Model of Human Nature.” History of the Human Sciences 18: 107-139.

Galanter, Eugene and Murray Gerstenhaber. (1956). “On Thought: The Extrinsic Theory.” Psychological Review 63: 218-227.

Gazzaniga, Michael. (1967). “The Split-Brain in Man.” Scientific American 217: 24-29.

_____. (1998). “The Split-Brain Revisited.” Scientific American 279: 51-55.

Gazzaniga, Michael and Joseph LeDoux. (1978). The Integrated Mind. Plenum Press.

Geertz, Clifford. (1973). “Ideology as a Cultural System.” in Interpretation of Cultures.

Jost, John. (2006). “The End of the End of Ideology.” American Psychologist 61: 651-670.

Martin, John Levi. (2015). “What is Ideology?” Sociologica 77: 9-31.

_____. (2011). The Explanation of Social Action. Oxford.

Marx, Karl. (1973). The Grundrisse. Penguin.

Zizek, Slavoj. (2017). Incontinence of the Void. MIT Press.

 

Durkheimian Sociology and its Discontents, Part II: Why Culture, Social Psychology, & Emotions Matter to Suicide

In a previous post, I argued that despite its importance and “classical” status, sociologists have not contributed to the study of suicide as much as they could. While Anna Mueller and I have yet to posit a general or formal theoretical statement on suicide, in this post I attempt to distill the basic theoretical ideas we’ve been developing for the last five years. Our work began as an effort to “test” Durkheim (Abrutyn and Mueller 2014; Mueller and Abrutyn 2015), but very rapidly our first quantitative studies led us to begin writing the first of four theoretical pieces, formalizing the theory of contagion developed by Durkheim’s arch-nemesis, Gabriel Tarde (Abrutyn and Mueller 2014a). We eventually concluded that the data we needed did not exist, and, through some luck, we found a field site at which to begin qualitatively assessing our evolving sociological view of suicide (Mueller and Abrutyn 2016). This fieldwork led to three other theoretical pieces that build on and go far beyond the Tarde piece to emphasize how cultural sociology, social psychology, and emotions shape suicidality (Abrutyn and Mueller 2014b, 2016, 2018)—particularly diffusion and clustering.

Cultural Foundations

In the 1960s, Jack Douglas (1970) offered an important critique of the conventional Durkheimian approach to suicide, arguing that suicide statistics were questionable due to various professional and personal issues surrounding medical examiners’ and coroners’ work. His larger point was that phenomenological meanings mattered more than suicide rates. About a decade later, David Phillips (1974) presented compelling evidence that audiences exposed to media reporting of suicide were at risk of temporary spikes in suicide rates—e.g., U.S. and British suicide rates jumped 13% and 10%, respectively, following the publicization of Marilyn Monroe’s suicide. We argue that there are important lessons to be gleaned from these two divergences from classic Durkheimian sociology.

First, meanings matter. Meanings are located in (1) general societal schema available to most people, (2) localized cultural codes that draw from and refract these general schema to make sense of the actual experiences of a group of people inhabiting a delimited temporal and geographic space, and (3) the idiosyncratic schema any person in that group possesses, built from their own biography and experiences. A small but growing body of historical (Barbagli 2015), anthropological (cf. Chua 2014; Stevenson 2014), and cultural psychological (Canetto 2012) research confirms this. For instance, some research on Canadian indigenous communities, where the suicide rate can be six times the Canadian average, found that youth in one community explain their own suicidality as a means of belonging (Niezen 2009); a counterintuitive finding for sociologists who think of integration as healthy. Nevertheless, these studies stop short of moving beyond broad-stroke assessments of culture. Meanings are, after all, made real, embodied, and crystallized in social relationships; and thus social relationships—as Durkheim argued, but not quite how he imagined—matter too.

The Meaning and Meaningfulness of Social Relationships

The connection between social relationships and suicide, as studies using network principles have shown, has a structural side (Bearman 1991; Pescosolido 1994; Baller and Richardson 2009); yet social relationships are eminently cultural as well, in both form and content. They are the social units in which cultural meanings emerge, spread, become available/accessible/applicable, and are stored.

Not surprisingly, and contrary to epidemiological and psychological accounts that favor a “disease” model approach to suicide “contagion,” our work has shown that network ties are only one factor: having a friend tell you about their suicidality can lead you to develop new suicidal thoughts (Mueller and Abrutyn 2015) and, in the case of girls, new suicidal behaviors. At the relational level, the general and local cultural mechanisms are further refracted. The direct, reciprocal nature of these ties makes culture real, imbuing it with affect (Lawler 2002). This increases the odds that codes will be internalized and integrated with existing understandings of suicide and, ultimately, mobilized in how people interpret events or situations, make sense of their own problems, and consider options for resolving said problems. In particular, it is the emotional dimension of culture and social relationships that adds the final ingredient to my vision of the future of the sociology of suicide.

The Final Ingredient: Emotions

Since the 1970s, sociologists of emotions—drawing from Cooley’s insights—have argued that social emotions like shame, guilt, or pride act as powerful social forces (cf. Turner 2007 for a review). Externally, social emotions are used as weapons to control others’ behavior, ranging from public degradation ceremonies used to humiliate and restore order, to mundane rituals of deference and demeanor, to gossip. The self is a social construct insofar as the primary groups we are socialized in provide the meanings that come to make up our (1) “self-construct” or “global” sense of self. Our self is our most cherished possession, as it provides a sense of anchorage across social situations. As we develop, new meanings become anchored in (2) relationships with specific others (role identity), (3) membership in various collectives (group identity), and (4) status characteristics that (a) identify us as belonging to one or more categorical units (age, race, sex, occupation) and, therefore, (b) obligate or expect us to perform in certain ways and receive certain amounts of rewards and deference (social identity); these meanings are grafted onto our self-concept or become situationally activated.

Social emotions are an evolutionary adaptation (cf. Turner 2007; Tracy et al. 2007). While all animals feel anger (fight) and fear (flight), and mammals also feel various degrees of sadness and happiness, shame and pride seem uniquely human because, as the Adam and Eve story teaches us or our own children’s ease with nudity shows us, the meanings necessary for eliciting them must be learned. That is because they involve imagining what others, especially significant others, think of us; not just of our behavior, but of our cherished self. Pride means we have lived up to the imagined expectations and obligations of those we care about (and they are often imagined, insofar as they are not accurate reflections of those expectations). Shame is the opposite: we are a failure, contemptible in the eyes of others, deficient, and even polluting. Clinical research finds shame to be particularly painful, often verbalized in expressions of feeling small, wanting to hide, and other phrases like “tear my skin off” or “mortified” (Lewis 1974; Retzinger 1991).

Mortification refers to the death of the self; and, thus, shame is the signal that the self is dying, decaying, or, with chronic shame among violent prisoners, dead (cf. Gilligan 2001). Emotions are the bridge between the structural and cultural milieus we live in and the identities that anchor us in relationships. They saturate cultural meanings such that some become more relevant and essential to our identity (LeDoux 2000). Our memory and, therefore, biography is impossible without emotions, as events “tagged” with more intense emotions are more easily recalled than those that did not elicit intense, long-lasting feelings (Franks 2006). It stands to reason that the next frontier in a sociology of suicide that takes culture and microsociology seriously is one that also mixes social emotions into the theoretical “pot.”

In this spirit, Part III will shed light on where the sociological study of suicide can and should go if we are to reclaim our seat at the table in offering understanding and explanation, and if we are to become truly public by contributing to the prevention of suicide and to postvention efforts, that is, those that work with (individual or collective) survivors in the aftermath of a suicide.

Where Did Sewell Get “Schema”?

Although there are precedents for using the term “schema” in an analytical manner in sociology (e.g., Goffman’s Frame Analysis and Cicourel’s Cognitive Sociology), it is undoubtedly William Sewell Jr.’s “A Theory of Structure: Duality, Agency, and Transformation,” published in the American Journal of Sociology in 1992, that really launched the career of the term in sociology.

In our forthcoming paper, “Schemas and Frames” (Wood et al. 2018), we briefly sketch the history of the schema concept in the cognitive sciences—from psychology and artificial intelligence to anthropology and cognitive neuroscience. We note how certain ambiguities in Sewell’s formulation render it unclear whether it is compatible with the concept as used in the cognitive sciences. Part of the reason, I would suggest, is that Sewell did not get this concept from the cognitive sciences, not even from cognitive anthropology.

First, we must (briefly) discuss Giddens’ intervention. To summarize (following Piaget 2015:6–16), the defining features of the various varieties of structuralism—in mathematics, psychology, anthropology, linguistics—include: (1) patterned-wholes are not mere aggregates, (2) patterned-wholes presuppose some principles of composition or transformation which structure them, and (3) the dynamics of wholes, as the product of these underlying principles, result in self-maintenance, such that the process which constitutes the patterned-whole is not immediately terminated.

Giddens’ innovation, first articulated in Central Problems in Social Theory (1979) and later in The Constitution of Society (1984), involved separating aspects (1) and (2) above. He referred to the patterned-whole as a social system and to the underlying principles of composition and transformation as structure. In essence, he asks for a Gestalt shift in how sociologists approach the regularities of social life. This, in turn, places structure as operating “behind the scenes,” or in Giddens’ words, as “structure as a ‘virtual order’ of differences” (Giddens 1979:64).

In response to this move, Sewell uses the term schema for the first time in this passage:

Structures, therefore, have only what [Giddens] elsewhere terms a ‘virtual’ existence (e.g., 1984, p. 17). Structures do not exist concretely in time and space except as ‘memory traces, the organic basis of knowledgeability’ (i.e., only as ideas or schemas lodged in human brains) and as they are ‘instantiated in action’ (i.e., put into practice). (Sewell 1992:6)

Giddens also, confusingly, defines “structure” as consisting of “rules and resources” (1979:63–64). The latter, Sewell points out, are not virtual. He goes on to demonstrate that Giddens’ “rules” are not virtual either, insofar as they imply public prescriptions. Sewell focuses his intervention here (1992:7):

Giddens develops no vocabulary for specifying the content of what people know. I would argue that such a vocabulary is, in fact, readily available, but is best developed in a field Giddens has to date almost entirely ignored: cultural anthropology. After all, the usual social scientific term for ‘what people know’ is ‘culture,’ and those who have most fruitfully theorized and studied culture are the anthropologists… What I mean to get at is not formally stated prescriptions but the informal and not always conscious schemas, metaphors, or assumptions presupposed by such formal statements. I would in fact argue that publicly fixed codifications of rules are actual rather than virtual and should be regarded as resources rather than as rules in Giddens’s sense. Because of this ambiguity about the meaning of the word ‘rules,’ I believe it is useful to introduce a change in terminology. Henceforth I shall use the term ‘schemas’ rather than ‘rules’.

Beyond noting that he is inspired by the work of anthropologists, Sewell offers few clues as to what motivates his use of schema.

Is Sherry Ortner and Michigan’s CSST the source?

Despite referring to “schema” over a hundred times in the essay, Sewell cites almost no scholars. In a footnote, he states, “It is not possible here to list a representative example of anthropological works that elaborate various ‘rules of social life.’” In the same footnote, after citing Geertz’s The Interpretation of Cultures as the most influential discussion of culture, he states, “For a superb review of recent developments in cultural anthropology, see Ortner (1984).” As this footnote suggests, it may have been Sherry Ortner who motivated his conceptualization.

In the essay, Sewell cites Ortner’s 1984 piece, “Theory in Anthropology since the Sixties,” and includes Ortner among several scholars he thanks for feedback on his AJS piece. However, in the cited article, Ortner’s only mention of “schema” is in a quotation from Bourdieu (1978:15). In that essay, she outlines how the main cleavage within symbolic anthropology in the 1960s was between the Turnerians and the Geertzians. Geertz’s “most radical move,” according to Ortner, was arguing “culture is not something locked inside people’s heads, rather is embodied in public symbols” (1984:129). Ortner identified as “Geertzian,” as Geertz was her advisor at the University of Chicago, where he taught from 1960 to 1970, before leaving for the Institute for Advanced Study at Princeton (David Schneider, another Parsonsian symbolic anthropologist, was also her teacher at Chicago).

Sewell received his Ph.D. in history from Berkeley in 1971, and was an instructor at the University of Chicago from 1968 to 1971 before becoming an Assistant Professor there from 1971 until 1975 — overlapping with Ortner’s graduate studies there. He then had a five-year stint at the Institute for Advanced Study with Geertz in residence. From 1985 to 1990, Sewell was on the faculty in history and sociology at the University of Michigan, overlapping again with Ortner, a faculty member in anthropology from 1977 to 1995. However, these overlaps between the two (and between Sewell and Ortner’s mentor) are only speculative evidence of their interactions.

In 1991, the relatively new American Sociological Association Sociology of Culture Section gave an honorable mention for the best article to Nicola Beisel for “Class, Culture, and Campaigns Against Vice in Three American Cities.” Her advisor at Michigan was Sewell, and in the Culture section newsletter’s interview with her, she states (1991:4-5):

Certainly, the biggest influence on my work was the University of Michigan’s Center for the Study of Social Transformations (CSST), a group of sociologists, social historians, and anthropologists that was started by Bill Sewell, Terry McDonald, Sherri Ortner, and Jeff Paige. The year I spent as a CSST fellow was one long and extremely fruitful discussion of culture, structure, agency, and social change….I do think that we have to demonstrate to our colleagues who think they do work on ‘hard structures’ that culture plays a vital part in the constitution and reproduction of those structures. In thinking about these issues I have been greatly influenced by Bill Sewell’s and Anthony Giddens’ theorizing the duality of structures, particularly the discussions in Sewell’s forthcoming AJS article.

In a recent interview about her 1995 essay, “Resistance and the Problem of Ethnographic Refusal” published in Comparative Studies in Society and History (CSSH), Ortner also refers to the founding of CSST:

In 1995 I was still at the University of Michigan and was involved in the formation of an incredibly exciting interdisciplinary discussion group, Comparative Studies in Social Transformation or CSST (not to be confused with the journal CSSH!). CSST was populated by anthropologists, historians, and a few folks from other fields, with many shared theoretical interests (Marxism, culture theory, practice theory, feminism, Foucault, etc.) and with overlapping cultural and historical interests in–broadly speaking–issues of power, domination, and resistance. If you look at the acknowledgments of “Resistance and the Problem of Ethnographic Refusal” (and I am a big believer in looking at acknowledgments), you will see the names of many of the key participants in that group, and it is an amazing roll call of some of the leading anthropologists, historians, and other social and cultural thinkers of that generation.

Sewell was among those acknowledged (alongside Fred Cooper, Fernando Coronil, Nick Dirks, Val Daniel, Geoff Eley, Ray Grew, Roger Rouse, Julie Skurski, Ann Stoler, and Terry McDonald). Curiously, Sewell acknowledges none of these members of CSST in his 1992 article — only Ortner. This strongly suggests there was, at least, cross-pollination between Ortner and Sewell.

Where Did Ortner Get “Schema”?

20180816-Selection_017.png
Ortner’s sketch of the Gyepshi altar in Sherpas Through Their Rituals

We may speculate, therefore, that Sewell received the schema concept from Ortner through either informal talks, discussions at the CSST, or something of Ortner’s he read but did not cite in the AJS article. Indeed, it is strange that in the single essay of Ortner’s that Sewell does cite, she does not really refer to “schemas” beyond quoting Bourdieu.

In Ortner’s first book, Sherpas Through Their Rituals (1978), based on her dissertation, she references schemas only once, in quoting Ricoeur: “the stain [defilement] is the first schema of evil” (Ortner’s addendum). In a collection of reactions to Ortner’s “Theory in Anthropology since the Sixties”—by Maurice Bloch, Jane Collier, Sylvia Yanagisako, Thomas Gibson, Sharon Stephens, and Pierre Bourdieu, based on the 1987 American Ethnological Society invited session held at the American Anthropological Association Meetings in Chicago and published as a working paper by the CSST—Ortner offers the following in her response (1989:102–103, emphasis added):

And finally, my own recent work on Sherpa social and religious history utilizes a notion of cultural schemas, recurring stories that depict structures as posing problems, to which actors must and do find solutions. Here again structure (or culture) exists in and through its varying relations with various kinds of actors. Further, structure comes here as part of a package of emotional and moral configurations, and not just abstract ordering principles.

The work she is referring to here is her 1989 book, High Religion: A Cultural and Political History of Sherpa Buddhism. It is here that “schema”—specifically “cultural schema”—is used numerous times (54 in total). In the opening chapter, Ortner describes two “notions” of structure that will be used in the analysis (1989:14, emphasis added):

The first is a concept of structural contradictions—conflicting discourses and conflicting patterns of practice—that recurrently pose problems to actors. The second is a concept of cultural ‘schemas,’ plot structures that recur throughout many cultural stories and rituals, that depict actors responding to the contradictions of their culture and dealing with them in appropriate, even “heroic,” ways.

In chapter four, Ortner argues “Sherpa society is founded on a contradiction between an egalitarian and hierarchical ethic.” She furthermore argues that recognition of this contradiction is “culturally formalized, in the sense that important cultural stories both depict such competitive relations and show the ways in which they may be resolved….the stories collectively embody what I will call a cultural schema” (1989:59, emphasis added; see also her 1990 chapter “Patterns of History: Cultural Schemas in the Founding of Sherpa Religious Institutions”).


Ortner then offers a short survey of the “pedigree” of this concept in anthropology, beginning with what she called “key scenarios” in her dissertation and a 1973 American Anthropologist article. These are a particular kind of “key symbol,” which “implies clear-cut modes of action appropriate to correct and successful living in the culture…they formulate the culture’s basic means-ends relationship in actionable form” (1973:1341). Ortner outlines how numerous different contexts—like seating arrangements, shamanistic seances, ritual offerings to gods—were structured as if they were a hospitality event. Therefore, the “scenario of hospitality” acted as a “cultural schema,” transposable across situations and providing prescriptions for action.

Next, Ortner identifies other exemplars, including Schieffelin’s ([1976] 2005) examination of reciprocity and opposition as “cultural scenarios” among the Kaluli of New Guinea, Turner’s (1975) “root paradigms” like martyrdom in Christianity, Geertz’s “transcription of a fixed ideal” in Negara (1980), and Sahlins’ “structures of the long run” in Historical Metaphors (1981). Ortner argues that cultural schemas have “durability” because “they depict actors respond[ing] to, and resolving…the central contradictions of the culture” (1989:61). After High Religion, Ortner refers to schemas only once, in a 1997 retrospective on Geertz.

What is absent from Ortner’s otherwise exhaustive review of anthropology in the 1984 essay, and throughout her work on cultural schemas, is any reference to “cognitive” anthropology. She offers no reference to Goodenough, Lounsbury, Romney, D’Andrade, Frake, or others, and refers only to Bloch’s work prior to his turn to the cognitive sciences, as exemplified by his 1991 article “Language, Anthropology and Cognitive Science.” In fact, it is odd that she does not reference a 1980 review essay in the American Ethnologist titled “On Cultural Schemata,” written by G. Elizabeth Rice, a UC-Irvine PhD. Nor is there a reference to the 1983 Annual Review of Anthropology essay, “Schemata in Cognitive Anthropology,” written by Ronald Casson, a student of D’Andrade and Frake while at Stanford. Furthermore, she does not cite the work of Robert I. Levy, who studied Nepal (1990) from a cognitive-anthropological perspective (in fact, both Levy’s and Ortner’s books on Nepal are reviewed in the same issue of the American Ethnologist). Originally trained as a psychiatrist, Levy was brought to UC-San Diego in 1969 to help establish the nascent field of “psychological anthropology.” In Tahitians: Mind and Experience in the Society Islands (1975), he applies the concept of schema—which he attributes to the psychiatrist Ernest Schachtel’s study of memory and amnesia.

Several more such examples could be found. We can conclude that Ortner’s conceptualization of schema (and therefore Sewell’s, and likely that of Sewell’s students) appears to be largely independent of the concept’s parallel development in the cognitive sciences (including cognitive anthropology) then forming on the U.S. west coast (briefly discussed in my post on connectionism).

References

Geertz, Clifford. 1980. Negara. Princeton University Press.

Giddens, Anthony. 1979. Central Problems in Social Theory: Action, Structure, and Contradiction in Social Analysis. University of California Press.

Giddens, Anthony. 1984. The Constitution of Society: Outline of the Theory of Structuration. University of California Press.

Levy, Robert I. 1975. Tahitians: Mind and Experience in the Society Islands. University of Chicago Press.

Ortner, Sherry B. 1973. “On Key Symbols.” American Anthropologist 75(5):1338–46.

Ortner, Sherry B. 1978. Sherpas Through Their Rituals. Cambridge University Press.

Ortner, Sherry B. 1984. “Theory in Anthropology since the Sixties.” Comparative Studies in Society and History 26(1):126–66.

Ortner, Sherry B. 1989. High Religion: A Cultural and Political History of Sherpa Buddhism. Motilal Banarsidass.

Piaget, Jean. 2015. Structuralism (Psychology Revivals). Psychology Press.

Sahlins, Marshall. 1981. Historical Metaphors and Mythical Realities. University of Michigan Press.

Schieffelin, Edward L. [1976] 2005. The Sorrow of the Lonely and the Burning of the Dancers. Springer.

Sewell, William H. 1992. “A Theory of Structure: Duality, Agency, and Transformation.” The American Journal of Sociology 98(1):1–29.

Turner, Victor. 1975. Dramas, Fields, and Metaphors: Symbolic Action in Human Society. Cornell University Press.

Wood, Michael Lee, Dustin S. Stoltz, Justin Van Ness, and Marshall A. Taylor. 2018. “Schemas and Frames.” Sociological Theory, Forthcoming.

 

Beyond Good Old-Fashioned Ideology Theory, Part One

The concept of ideology is surely one of the sacred cow concepts of sociology (and the social sciences more generally) and is one of the special few that circulates widely outside the ivory tower. It is also a concept that is arguably the most indebted of all to the presumption that cognition is a matter of representation, nothing more or less. Ideology has, from its French Revolution beginnings to the present, been associated closely with ideas, and more specifically with ideas that project meaning over the world in relativistic and contentious ways. Almost universally, ideology is characterized by representation; historically it has also been characterized by what we can call (unsatisfactorily) distortion. For ideologies to be representations, they must be capable of generating reflexively clear meaning about the world. For ideologies to be distortions, those representations must generate meaning in some way that concerns the exercise of power. Since ideologies are distorting, they must consist of representations that either support or contend with some current configuration of power by prescribing its direction. This means that people do not believe ideologies because ideologies are true. Instead, some combination of social factors and self-interest leads people to believe them.

This will have to do as a (quick and dirty) summary of the most common set of referents generally associated with ideology. Let’s call it good old-fashioned ideology theory (GOFIT) for short. Even a brief perusal of recent news would suggest that the world (or at least the US) is becoming increasingly “ideological” on GOFIT terms, as ideology seems to be more and more important for more and more stuff that it had been irrelevant for as recently as a decade ago (e.g., restaurant attendance, college enrollment, cultural consumption). If these impressions are even partially correct, then an enormous weight is placed on ideology. It is a concept that we (sociologists included) need in order to make sense of the fractious, tribalizing times in which we live. But it is a fair question to ask whether GOFIT ideology is up to the challenge.

On the above terms, GOFIT ideology essentially consists of something like the “rule-based manipulation of symbols” type of meaning construction, unreconstructed from its heyday in the classical cognitive science of the 1950s and 60s. This should make us pause and take a second look at the concept. The goal of this post is to (not exhaustively) examine whether ideology can do without these commitments and whether the concept can be removed from GOFIT and placed on new cognitive ground. I argue that ideology can do without these commitments and that it already has been placed (or is being placed) on new cognitive ground, which makes it an important point of focus not only for substantive phenomena (all around us today) but because ideology is closely entangled with the wider theoretical stakes of relevance to this blog, and it has been since at least The German Ideology when Marx and Engels tried for a final push of idealism into the dustbin.

In this first post, I will compare two arguments that try to move beyond GOFIT. In a second post, I will sketch a different approach that tries to extend a non-GOFIT ideology even further.

Psychologists, it seems, have beaten everyone to the punch in providing key evidence attesting to the present-day significance of ideology. Here, we can point to the influential work of John Jost (2006; website) and the research program he develops against the mid-century “end of ideology” claims. Those arguments had largely eliminated ideology as a key conceptual variable, in one sense because large disagreements over how to organize society seemed to end sometime in the 1950s, at least in the US (“even conservatives support the welfare state,” as Seymour Martin Lipset famously quipped). But in a more important sense, the “end of ideology” also meant a paradigm in political psychology built around the presumption that “having an ideology” was a mystery and that only a small minority of people actually had one. Jost resurrects ideology by developing a new question in political psychology, one that at this point probably seems grossly redundant, but which summarizes a vast body of research inside and outside the academy, all of which asks some more or less complicated version of it: “why [do] specific individuals (or groups or societies) gravitate toward liberal or conservative ideas[?]” (2006: 654).

Jost here distances himself from the political scientist Philip Converse and his claim (esp. Converse 1964) that probably no more than ten percent of the population possesses anything resembling an ideology (i.e., a “political belief system”). For Converse, this meant that for the vast majority of political actions, especially voting behavior by a mass public, ideology is basically irrelevant. Jost argues that, on the contrary, even if the highly rationalized, systematic commitments of true ideologues are found only among a small minority, we cannot dismiss peoples’ attraction to conservative or liberal ideas. Relaxing the strong consistency claim, Jost finds placement on the conservatism-liberalism spectrum to be highly predictive of voting trends, and not only because where people self-identify on the ideological scale closely overlaps with their party affiliation. Ideas matter too, especially if we measure them as “resistance to change and attitudes toward equality” (2006: 660), which are (presumably) the source of the major ideological differences between the left and the right.

As Jost continues, these “core ideological beliefs concerning attitudes toward equality and traditionalism possess relatively enduring dispositional and situational antecedents, and they exert at least some degree of influence or constraint over the individual’s other thoughts, feelings, and behaviors” (2006: 660; my emphasis). Here Jost hits on a research problem with influence inside and, increasingly, outside the academy today. Research on the “dispositional and situational antecedents” of attraction to liberal or conservative ideas has become something of a cottage industry, as evidenced in popular works by luminaries like George Lakoff (2002) and Jonathan Haidt (2012), and in Jost’s own work (see 2006: 665) that finds, among other things, unobtrusive-style evidence (“bedroom cues”) that strongly correlates with placement on the liberalism-conservatism spectrum (like whether one has postage stamps lying around the house instead of art supplies). Even Adorno’s (et al. 1950) arguments have been buoyed by this conversation as prescient and timely (see Jost 2006: 654) after they had been summarily dismissed by mid-century psychologists. “Right-wing authoritarianism” as a personality measure helps define antecedent conditions that lead people to be attracted to ideas (or to Trump) with different ideological content. Adorno thrives as the research winds have changed.

The key presumption of this research is that ideologies are information-lite and not complicated, at least not in a reflexive way, as Converse thought they must be (“complicated systems of relations between ideas”). But we might reasonably wonder whether, in their lack of complication, “ideological differences” in this literature do in fact count as differences of ideology and not something else. Jost himself does little to explain what it means to be “attracted” to liberal or conservative ideas (is this the same as believing them?), and what he calls “ideas” can only be distinguished from what he (confusingly) also calls “attitudes” if we presume that ideas involve some sort of deductive, rule-based manipulation (e.g., because I believe in equality, I will support politicians who promise to help the poor). On both fronts his approach is problematic. While Jost is successful at clearing many of the hurdles that stand in the way of making the concept of ideology relevant again, he retains some of the strongest presumptions of GOFIT.

If political psychology has largely been resurrected by making something significant of the widely held sense that “ideological differences” are of critical significance for politics today, there is at least one other alternative to GOFIT available, one with similar motivations but which does not make nearly the same commitments. John Levi Martin has developed an approach to ideology on the basis of redefining it as non-representational. Ideology does not consist of a representation of the world, in this view, but serves rather (more pragmatically) as “citizens’ way of comprehending the nature of the alliances in which they find themselves” (2015: 21). While Martin shares with Jost an engagement with GOFIT on the relationship between “social factors” and ideologies, in his case this comes with a considerable twist: ideologies are not given autonomy as a kind of rule-like content that allows for deductive logic. As Martin argues, what appear to be ideologies are not reducible to an equation like values + beliefs = opinions. Rather, they are the means through which individuals comprehend “the alliances” in which they find themselves (which is important). What we can call ideological differences, in other words, maps onto patterns of social relations rather than onto differences that might be ascribed to the content of ideas.

Take his example of whether people say they support a policy that will provide assistance to out-of-work, poor, and/or black people: “the classic [GOFIT] conception imagines a person beginning with the value of equality, adding the facts about discrimination (say) and producing support for the policy.” Jost would probably explain this as their attraction to some view of equality, whether fueled by a personality trait or some other dispositional antecedent (just as Lakoff and Haidt would, in different ways). In Martin’s alternative, the process is entirely different: “The rule is, simply put, ‘me and my friends are good’ and ‘those others are bad’ … [The] actual calculus of opinion formation is sides + self-concept = opinion” (27). This is what Martin calls a political reasoning source of ideology formation. Whether one would support the above policy is dictated by what it signifies about one’s position in “webs of alliance and rivalry, friendship and enmity.” It is that positioning that makes it an ideological choice, not that it is driven by some sequence that begins (or ends) with a commitment to certain ideas.

Martin provides a fleshier example to illustrate how political reasoning of this sort is “totally relational” and therefore endogenous to alliance/rivalry coalitions:

I once saw a pickup truck in my home town that had two bumper stickers on the rear. One had a representation of the American flag, and words next to it: “One nation, one flag, one language.” The other side had the Confederate flag. This is the flag used by the short-lived Southern confederation of states during the Civil War, when they tried to break away from the Union in order to preserve their “peculiar institution,” that is, slavery of Africans and their descendants. They wanted there to be two countries, and two flags. (25)

Such an infelicitous pairing of bumper stickers would register as a contradiction from a GOFIT point of view, which searches for the content of the ideas and for the logical deduction that would organize the decision to display both stickers. For GOFIT, such behavior quickly becomes incomprehensible (as does the person). In fact, Martin argues, the two flags demonstrate this person’s practical mastery of the political landscape in the USA circa 2015ish: “Displaying the Confederate flag in the United States does not imply anti-black racism. However, it does imply a lack of concern with being ‘called out’ as a racist—it implies fearlessly embracing aspects of American political culture without apology … it does demonstrate anti-anti-racism” (26). The other bumper sticker (one nation, one flag, one language) demonstrates the person’s response “to certain political initiatives to ease the barriers to American citizens, residents, and possibly others who read (or speak) Spanish but not English.”

Together, the two bumper stickers make sense. But to see how, we first need to bracket whatever ideas they might seem to express and situate the stickers instead in the sets of social relations in which they become meaningful for this person. When we do this, we see that this person demonstrates a combination of social oppositions that together situate him/her against the “liberal coalition.” The placement of the bumper stickers is a political action, not as the expression of some commitment to underlying ideas, but as this person’s theorization of their politics: “it is their attempt to come up with an abstract representation of the political alliance system in which they are in, and the nature of their opponents” (26).

Pace Jost, then, Martin argues that patterns of ideological difference are not ultimately driven by absolute differences between conservative and liberal ideas, though this is not to say that ideas (or words) cannot themselves become points of ideological difference. So much is this true that political reasoning itself provides an ontology and can dictate the nature of reality in a way that is impervious to criticisms of ideological “distortion” and their presumption of a GOFIT mind-to-world relation mediated by something like a belief system. The nature of the world itself can come (and has come) to be an expression of oppositions and alliances with ideological significance. Martin and Desmond (2010: 15), for instance, find that liberals and conservatives with high political information both significantly overestimate the extent of black poverty and are much more likely to be wrong about it than are moderates and less-informed liberals and conservatives. This is an effect of political reasoning, they claim, and it anticipates a sort of post-truth scenario in which facts themselves become a means to theorize one’s political position. For high-information liberals and conservatives alike, “their knowledge is that-which-helps-us-know-what-we-want-to-fight-about” (Martin 2015: 28). In other words, they become more ideological as they become more ensconced in relations of alliance and rivalry, not as they internalize complicated belief systems.

Martin, then, reinterprets ideology as the way that people comprehend their situation in relations of alliance and opposition, using whatever means might seem to adequately express the accumulation of friends and the distinction from enemies. Martin surpasses the GOFIT assumptions more successfully than Jost does, largely because his approach to ideology does not rely on imputing a content to ideas that would make them “liberal” or “conservative.” In principle, any idea could be liberal or conservative in his framework (just as any bumper sticker, any fact about the world, or any political candidate could), depending on whether people use it to map alliances and oppositions and to comprehend the boundaries of coalitions of friends/enemies.

This, I argue, makes Martin’s approach more adequate, and historically relevant in a way that Jost’s approach cannot be, for understanding what seems to be the rapid proliferation of ideological differences today, or, more impressionistically, the increased presence of ideology today. Presumably, people now use more things to “theorize” their political position inside alliances/rivalries than had been used before, complicating those groupings (at least in the interim). Once again, this is much easier to understand if we do not attempt to situate individuals into fixed categories on the basis of antecedent dispositions that give them some fixed attraction to ideas with a certain content.

But this also suggests that Martin’s approach to ideology is non-GOFIT mainly because it is (or seems to be) non-cognitive. Martin succeeds because he takes ideology out of the mind and places it in social relations. Things (e.g. bumper stickers, art supplies, flags, welfare policies) become “ideological” when they symbolize relations of alliance and rivalry, as comprehended through them and (following Marx) never in their absence, though we might ask whether there is any relevant difference between using things to comprehend these relations and using things to construct them. Jost leaves ideology in the mind (in ideas), so it remains for him at least partially GOFIT, though he emphasizes that ideology is supplemented by non-cognitive factors like personality or situational events (e.g. traumatic ones, like 9/11, or private ones) that make ideas carry different degrees of attraction.

When something vaguely cognitive enters Martin’s framework, it usually comes under the heading of “political reasoning in practice,” which does appear to serve adequately as an alternative to a GOFIT conception of mind. In the next post, I attempt a definition of the “practical mastery” of ideologically relevant relations as a cognitive trait, and I argue that such a notion is absolutely required if we want to finally (once and for all) separate ideology from its GOFIT background.


References

Adorno, Theodor et al (1950). The Authoritarian Personality. Studies in Prejudice, edited by Max Horkheimer and Samuel H. Flowerman. New York: W.W. Norton & Company.

Converse, Philip. (1964). “The Nature of Belief Systems in Mass Publics.” Reprinted in Critical Review 18 (2006): 1-74.

Haidt, Jonathan. (2012). The Righteous Mind: Why Good People are Divided by Politics and Religion. New York: Pantheon Books.

Jost, John. (2006). “The End of the End of Ideology.” American Psychologist 61: 651-670.

Lakoff, George. (2002). Moral Politics: How Liberals and Conservatives Think. Chicago: University of Chicago Press.

Martin, John Levi. (2015). “What is Ideology?” Sociologia, Problemas e Práticas 77: 9-31.

Martin, John Levi and Matthew Desmond. (2010). “Political Position and Social Knowledge.” Sociological Forum 25: 1-26.