The Promise of Affective Science and the Sociology of Emotions

The sociology of emotions is a curious subfield. On the one hand, the recognition that the study of emotions (and their dynamics) overlaps with nearly everything sociologists care to study suggests emotions deserve central casting in the myriad studies that fill journals and monographs (Turner and Stets 2006). On the other hand, the sociology of emotions remains stuck in neutral, waiting for the sort of “renaissance” experienced by cognition when cultural sociology “discovered” schemas (DiMaggio 1997) and dual-process models (Lizardo et al. 2016, Vaisey 2009). This paradox makes some sense, for emotions, or what founding sociologists like Cooley called sentiments, have nearly always been a part of the discipline. Weber’s most important typologies included affectual action and charismatic authority; as early as The Division of Labour, Durkheim had emotions front and center in his theory of deviance and crime; and the aforementioned Cooley premised his entire social psychology on pride and shame transforming the self into a moral thing. But, simultaneously, the study and use of emotions in sociological analysis remained mired in false Cartesian binaries (see Damasio 1994) that propped up misogynistic commitments to dichotomizing cognition (masculine) and affect (feminine), while also being tainted by association with Freudian psychoanalysis.

The 1970s saw these old barriers erode, as social psychologists—especially symbolic interactionists of various flavors—began to mine the emotional veins of self (Shott 1979), roles/identities (Burke and Reitzes 1981), situations (Heise 1977), structure (Kemper 1978), and performance/expectations (Hochschild 1979—for the sake of argument, I put Hochschild here even though neither she [so far as I know] nor I would really call her a symbolic interactionist). Over the course of the next few decades, the most important theoretical and empirical work explaining how and why solidarity between individuals, as well as between individuals and groups, is produced and maintained centered emotions (Collins 1988, 2004, Lawler 1992, Lawler et al. 2009, Turner 2007). These works drew from Durkheim and picked up threads of Goffman’s (1956, 1967) work that “felt” more important than even Goffman sometimes let on, while often, as in Turner’s evolutionary work on emotions or Collins’s interaction ritual chains, borrowing from nascent brain science. Beyond these, however, work in the sociology of emotions remained much as it had been in its earliest innovative days, and its contribution beyond the subfield was held back.

Omar and I (2020) have argued previously that one of the glaring problems is that the sociology of emotions remains rooted in the Cartesian separation of mind and body that haunts social science. Emotions are, generally speaking, treated as mediating variables—e.g., signals that one’s cognitive appraisal of a situation does not match the information received about the situation (Burke and Stets 2009, Robinson 2014)—or dependent variables—e.g., emotions are things to be managed through cognitive or linguistic work (Hochschild 1983). A third option, which also treats emotions as dependent variables, posits that relational patterns like superordinate-subordinate constrain emotions either by structural fiat (Kemper 1978) or via cultural beliefs about what incumbents in these positions should and can do (Ridgeway 2006). What if the next frontier for emotions scholarship considers emotions and affect (the sociocultural labels we learn and the neurophysiological/biological response to stimuli) as independent variables?

Some Important Facts

Studying an intrapersonal force or dynamic is not radical, as cultural sociology has largely accepted that cognitive mechanisms are at the root of a theory of action (Vaisey 2009). Action is caused, at least in part, by cognition, and saying so does no violence to the social factors beyond the organism. Affect, however, remains on the sidelines despite several key facts.

  1. Affect, as a motivating force of motor response, is older than cognition (Panksepp 1998). Evolution appears to have worked heavily on the subcortical emotion centers in mammals to encourage both the active pursuit of life-sustaining resources and the avoidance of painful, life-destroying stimuli. And, given the exceptionally enlarged emotional architecture of our brains (in comparison to our closest cousins, gorillas and chimps), it is plausible to suggest emotions played an outsized role in humans developing and expanding their cultural repertoire for language, kinship, social organization, and so forth. In other words, emotions have been causal, historically speaking.
  2. Undoubtedly, they are causal still today. First, the subcortical areas of the brain play an important role in memory (which is the root of a social self, for instance) (LeDoux 2000). Second, human brain imaging reveals that affect is not confined to subcortical areas of the brain, but is deeply integrated with areas usually reserved for cognition (Davidson 2003). Emotions, then, can control our cognition and behavior, command it in some cases (e.g., a panic attack), and, at the very least, coordinate with cognitive functions. Any theory of action that fails to account for affect is dubious, as it cannot realistically explain social or solitary behavior (Blakemore and Vuilleumier 2017).
  3. Consequently, the vast majority of social psychological processes such as comparison, appraisal, or reflection, as well as the vast majority of “causal” explanations sociologists employ, like values, interests, or ideology, are inextricably tied to affect. If we cannot so much as decide which toothpaste to buy without affect, then we should not be surprised that comparing and choosing social objects requires affect as well.
  4. A point Lizardo and I make is that sociologists too often rely on cognitive appraisals of emotions, focusing on self-reports about valence (negative/positive), intensity, mood (longer-lasting feelings), and psychologized language like loneliness. However, emotions are visceral, bodily things (Adolphs et al. 2003), and sociologists cannot simply borrow from psychological research and methods on emotions.
  5. Emotions may be “social constructs” insofar as a given group of people produces and reproduces labels for different bodily feelings experienced in different situations, labels which carry different meanings about the (a) appropriateness of those feelings, (b) expectations for their expression or suppression, and (c) “rules” about the duration and intensity of situationally triggered emotions. However, much of this applies either to highly institutionalized settings, like formal ceremonies (e.g., funerals), where ritual participants approach the “center” of the community and the center must be protected from moral transgression (Shils 1975), or to routinized encounters where interaction itself is ritualized (Goffman 1967, Collins 2004). But the need for rules and expectations implies that affect, if left to its own devices, can wreak havoc. Moreover, this view ignores the diverse array of solitary actions that consume a significant portion of our daily lives (Cohen 2015), as well as the fact that emotions are often things others “use” as means of affecting others’ feelings, thoughts, and actions (Thoits 1996).

Implications

If my argument that emotions scholarship has largely stalled is correct, but emotions are central to individual and social life, what are we to do? Of the myriad directions one could suggest, I will emphasize four that feel most congenial to sociological inquiry.

  1. The first suggestion picks up on a larger set of questions being raised recently by sociologists of youth and education around the largely abandoned conceptual process of socialization (Guhin et al. 2021). Once a central explanatory framework for understanding how a society “out there” could find its way inside each of us, socialization, like most bits and pieces of functionalism, was tossed out with the icky water. Prematurely, it would seem, because it has not been meaningfully replaced, which has subsequently constrained a once-vibrant area of interest: child (and adolescent) development from a sociological perspective. Studying emotions and emotional socialization seems fruitful for many reasons. For one, the rules and patterning of emotion-behaviors are really only an adult trait. Childhood and adolescence are periods of unbridled affect, as anyone with a toddler knows well. How do we teach emotion regulation? How is this teaching process distributed across classic demographic and socioeconomic categories? How effective are social forces versus natural brain development for emotion regulation? What about the teaching of emotion dysregulation? Finally, the most interesting set of questions revolves around social emotions like guilt, shame, pride, and empathy (Decety and Howard 2013). At this point, sociology has ceded these culturally coded emotions to psychological research, despite the unique methodological tools sociologists possess. For example, studying a high school’s ecosystem and status hierarchy seems an incredibly important pathway to understanding shame and pride, empathy and sympathy. Here, kids are learning, supposedly, the rules of the affectual game. Rather than reduce their experiences to DSM labels like anxiety or depression, why not expand the lens through which we view mundane and spectacular youth experiences?
  2. A second, related implication centers on what I would call emotional styles or biographies. Sociologists are familiar with these sorts of metaphors, as groups have “styles” (Eliasoph and Lichterman 2003) or biographies shaped by collective memory. These styles or biographies shape many things, like the ways parents and children interface with teachers and the educational system more generally (Lareau 2003). Research has suggested that different personality types appear to correlate with different affectual “styles,” which suggests there is something neurophysiological about doing emotions (Montag et al. 2021). My best guess is that social forces play a role as well, but, oddly, mainstream sociologists rarely bother to ask about emotions—likely a reflection of the ingrained Cartesian binary and not negligence on the part of social scientists.
  3. Shifting gears, a third implication builds on the dual-process models approach (Vaisey 2009, Lizardo et al. 2016) and the elephant-rider metaphor. The metaphor itself is designed to explain how implicit cultural knowledge (the elephant) is largely responsible for the direction the rider takes. Deliberate, conscious action is possible but less impactful. But what guides the elephant? To date, the answer has largely been deeply internalized values or nondeclarative knowledge, but how do we acquire those? How does the brain sort through the variety of potential ideas, scripts, frames, or schemas available? And, once internalized, how does the brain choose between different schemas or bodies of knowledge? Emotions are part of the answer, as affectually tagged memories are most intensely, most readily, and most quickly recalled (Catani et al. 2013). But the rider’s level of effort in directing the elephant is no less shaped by affect. In fact, emotions appear to have a dual process related to deliberate, intentional action as well (Blakemore and Vuilleumier 2017). On the one hand, internal, affectual sensations can become associated with patterned behavior. That is, recognizable affectual sensations signal “action readiness [in order to] prepare and guide the body for action” (p. 300). On the other hand, there are preconscious motivation systems that evolved to seek positive resources and avoid their negative counterparts. A child touches a hot stove and does not need their parents to teach them never to touch that stove again. Whenever they get near a stove, they will become more alert and cautious. Of course, these aversions can become pathological (and no less conscious), leading to all sorts of strange phobias and disorders.
The point, however, is that emotions are causal in two different ways for the rider, which seems an important addition to the dual-process models perspective, as does the consideration of how affect coordinates, controls, and sometimes commands the so-called automatic cognition that is the elephant.
  4. The final implication speaks directly to the methodological tools we use. For the most part, emotions are measured through self-reports (Stets and Carter 2012), which often conflate cognitive appraisals of emotions with emotions and affect themselves. I would point the reader toward highly innovative efforts, like those found in the work of Katz (1999), Collins (2004), and Scheff (1990). All of these use some form of ultra-micro methods that employ audio-visual technology, careful observation, and, in some cases, linguistic analyses. But these are simply a starting point, sources of inspired analytic strategy. Ethnographic techniques are easily repurposed to include emotions and affect, as careful observation of bodily display, language, and situational cues are hallmarks of good ethnographic work (Summers-Effler 2009). Even users of quantitative methods should think more carefully about how to ask about emotions, even if that means including basic questions for the sake of exploratory social science.

In short, emotions remain central to understanding and explaining how we think and act, but also remain mired in antiquated notions of mind-body, rationality-irrationality, and masculine-feminine. Moreover, old insecurities surrounding the differences between psychological and sociological social psychology—which are simply microcosms of broader insecurities writ large in sociology—have generally prohibited the conceptualization of emotions as independent, causal variables, delimiting the directions the sociology of emotion may go. The next frontier, arguably, is incorporating affective sciences into the study of emotions, and allowing brain science to speak to sociology and vice versa.

References

Abrutyn, Seth and Omar Lizardo. 2020. “Grief, Care, and Play: Theorizing the Affective Roots of the Social Self.” Advances in Group Processes 37:79-108.

Adolphs, Ralph, Daniel Tranel and Antonio R. Damasio. 2003. “Dissociable Neural Systems for Recognizing Emotions.” Brain and Cognition 52:61-69.

Blakemore, Rebekah L. and Patrik Vuilleumier. 2017. “An Emotional Call to Action: Integrating Affective Neuroscience in Models of Motor Control.” Emotion Review 9(4):299-309.

Burke, Peter J. and Donald C. Reitzes. 1981. “The Link between Identities and Role Performance.” Social Psychology Quarterly 44(2):83-92.

Burke, Peter J. and Jan E. Stets. 2009. Identity Theory. New York: Oxford University Press.

Catani, Marco, Flavio Dell’Acqua and Michel Thiebaut De Schotten. 2013. “A Revised Limbic System Model for Memory, Emotion and Behaviour.” Neuroscience & Biobehavioral Reviews 37(8):1724-37.

Cohen, Ira J. 2015. Solitary Action: Acting on Our Own in Everyday Life. Oxford: Oxford University Press.

Collins, Randall. 1988. “The Micro Contribution to Macro Sociology.” Sociological Theory 6(2):242-53.

—. 2004. Interaction Ritual Chains. Princeton: Princeton University Press.

Damasio, Antonio. 1994. Descartes’ Error: Emotion, Reason, and the Human Brain. New York: Avon Books.

Davidson, Richard J. 2003. “Seven Sins in the Study of Emotion: Correctives from Affective Neuroscience.” Brain and Cognition 52:129-32.

Decety, Jean and Lauren H. Howard. 2013. “The Role of Affect in the Neurodevelopment of Morality.” Child Development Perspectives 7(1):49-54.

DiMaggio, Paul. 1997. “Culture and Cognition.” Annual Review of Sociology 23:263-87.

Eliasoph, Nina and Paul Lichterman. 2003. “Culture in Interaction.” American Journal of Sociology 108(4):735-94.

Goffman, Erving. 1956. “Embarrassment and Social Organization.” American Journal of Sociology 62(3):264-71.

—. 1967. Interaction Ritual: Essays on Face-to-Face Behavior. New York: Pantheon Books.

Guhin, Jeff, Jessica McCrory Calarco and Cynthia Miller-Idriss. 2021. “Whatever Happened to Socialization?” Annual Review of Sociology 47:109-29.

Heise, David. 1977. “Social Action as the Control of Affect.” Behavioral Sciences 22(3):163-77.

Hochschild, Arlie. 1979. “Emotion Work, Feeling Rules, and Social Structure.” American Journal of Sociology 85(3):551-72.

—. 1983. The Managed Heart: Commercialization of Human Feeling. Berkeley: University of California Press.

Katz, Jack. 1999. How Emotions Work. Chicago: University of Chicago.

Kemper, Theodore. 1978. A Social Interactional Theory of Emotions. New York: John Wiley and Sons.

Lareau, Annette. 2003. Unequal Childhoods: Class, Race, and Family Life. Berkeley: University of California Press.

Lawler, Edward J. 1992. “Affective Attachments to Nested Groups: Choice-Process Theory.” American Sociological Review 57(3):327-39.

Lawler, Edward J., Shane Thye and Jeongkoo Yoon. 2009. Social Commitments in a Depersonalized World. New York: Russell Sage.

LeDoux, Joseph. 2000. “Cognitive-Emotional Interactions: Listening to the Brain.” Pp. 129-55 in Cognitive Neuroscience of Emotion, edited by R. D. Lane and L. Nadel. New York: Oxford University Press.

Lizardo, Omar, Robert Mowry, Brandon Sepulvado, Dustin S. Stoltz, Marshall A. Taylor, Justin Van Ness and Michael Wood. 2016. “What Are Dual Process Models? Implications for Cultural Analysis in Sociology.” Sociological Theory 34(4):287-310.

Montag, Christian, Jon D. Elhai and Kenneth L. Davis. 2021. “A Comprehensive Review of Studies Using the Affective Neuroscience Personality Scales in the Psychological and Psychiatric Sciences.” Neuroscience & Biobehavioral Reviews 125:160-67.

Panksepp, Jaak. 1998. Affective Neuroscience: The Foundations of Human and Animal Emotions. Oxford: Oxford University Press.

Ridgeway, Cecilia L. 2006. “Expectation States Theory and Emotion.” Pp. 347-67 in Handbook of the Sociology of Emotions, edited by J. E. Stets and J. H. Turner. New York: Springer.

Robinson, Dawn T. 2014. “The Role of Cultural Meanings and Situated Interaction in Shaping Emotion.” Emotion Review 8(3):189-95.

Scheff, Thomas. 1990. Microsociology: Discourse, Emotion and Social Structure. Chicago: The University of Chicago Press.

Shils, Edward. 1975. “Ritual and Crisis.” Pp. 153-63 in Center and Periphery: Essays in Macrosociology, edited by E. Shils. Chicago: University of Chicago Press.

Shott, Susan. 1979. “Emotion and Social Life: A Symbolic Interactionist Analysis.” American Journal of Sociology 84(6):1317-34.

Stets, Jan E. and Michael J. Carter. 2012. “A Theory of the Self for the Sociology of Morality.” American Sociological Review 77(1):120-40.

Summers-Effler, Erika. 2009. Laughing Saints and Righteous Heroes. Chicago: University of Chicago Press.

Thoits, Peggy A. 1996. “Managing the Emotions of Others.” Symbolic Interaction 19(2):85-109.

Turner, Jonathan H. 2007. Human Emotions: A Sociological Theory. New York: Routledge.

Turner, Jonathan H. and Jan E. Stets, eds. 2006. Handbook of the Sociology of Emotions. New York: Springer.

Vaisey, Stephen. 2009. “Motivation and Justification: A Dual Process Model of Culture in Action.” American Journal of Sociology 114(6):1675-715.

 

 

A Taxonomy of Artifactual (Cultural) Kinds

In previous posts, I made a broad distinction between two “families” of cultural kinds. This distinction was based on the way they fundamentally interact with people. Some cultural kinds do their work because they can be learned or internalized by people. Other cultural kinds do their work not because people internalize them but because they can be wielded or manipulated. For the most part, these last exist outside people (or are at least potentially separable from people’s bodies). We referred to the former as cultural-cognitive kinds (or cognitive kinds for short) and to the latter as artifactual cultural kinds (or artifactual kinds for short).

Most of the cultural stuff that exists outside of people (so-called “public culture”) is either an artifact, whether simple or complex (usually referred to as “material culture”), a systematic or improvised coupling between a person and an artifact (usually mediated by an internalized cultural kind such as a learned skill or ability), or a more extended socio-material ensemble (Hutchins, 1995; Malafouris, 2013), consisting of the distributed agglomeration of artifacts, people, and the knowledge (both explicit and implicit) required to use the artifacts in the setting for particular purposes, whether instrumental, expressive, or performative. Traditional cultural theory in sociology and anthropology tends to embody purpose in internalized cultural-cognitive kinds such as beliefs, goals, and values. However, an argument can be made that nothing embodies purpose (and even teleology) more directly than artifactual kinds designed to accomplish concrete ends (Malafouris, 2013).

Subsequent posts were dedicated to the process via which people internalize cultural-cognitive kinds. These reflections yielded an emergent and intuitive typology within the broad “family” of cultural-cognitive kinds. Some cognitive kinds are like beliefs, encoding explicit declarations or propositions. Other cognitive kinds are more like skills or abilities and are difficult to verbalize in explicit form. A third form is in between, more like concepts, encoding general semantic knowledge (both schematic and detail-rich) of the explicit and implicit aspects of categories. Riffing on a classic distinction in the philosophy of mind and action, we referred to the first kind as “knowledge-that,” the second kind as “knowledge-how,” and the third one as “knowledge-what.” The idea is that this provides an admittedly rough but exhaustive taxonomy of cultural-cognitive kinds as people internalize them.

Given this, it is easy to form the impression that artifactual (public) cultural kinds are an undifferentiated mass. However, recent work in cognitive science and philosophy has endeavored to provide a more differentiated taxonomic picture of the various forms artifactual kinds can take (Fasoli, 2018; Heersmink, 2021; Viola, 2021). In a forthcoming paper in a special issue of Topics in Cognitive Science dedicated to “the cognitive science of tools and techniques,” Richard Heersmink (2021) provides a useful generic typology of artifactual cultural kinds that aims for the same level of generality and exhaustiveness, concerning artifactual cultural kinds, as the knowledge-that/how/what typology concerning cultural-cognitive kinds.

Heersmink (2021) defines an artifact in the broadest sense as “material objects or structures that are made to be used to achieve an aim.” Heersmink differentiates between four broad families of artifacts: embodied, perceptual, cognitive, and affective. To each type of artifact corresponds a specific set of skills or abilities people develop when they become proficient at using them, which Heersmink refers to as techniques (an approach in the same spirit as Mauss, 1973). Thus, there are embodied techniques, perceptual techniques, and so forth.

The artifact/technique distinction is an important one, as it separates the “cognitive” family of cultural kinds from the artifactual one. However, the two tend to be run together in the literature. For instance, Hutchins (1995, p.) refers to the internalized (ability) component corresponding to the use of an external artifact as an “internal artifact.” However, this is confusing and blurs an important analytic line. As Heersmink (2013, p. 468) noted in earlier work,

it is clarifying to make a distinction between technology and technique. A technology (or artifact) is usually defined as a physical object intentionally designed, made, and used for a particular purpose, whereas a technique (or skill) is a method or procedure for doing something. Both technologies and techniques are intentionally developed and used for some purpose and are in that sense artificial, i.e., human-made. However, it is important to note, or so I claim, that they are not both artifactual. Only technologies are artifactual in that they are designed and manufactured physical objects and in this sense what Hutchins refers to as internal artifacts, such as perceptual strategies, can best be seen as cognitive techniques, rather than as internal artifacts. Moreover, given that these cognitive techniques are learned from other navigators and are thus first external to the embodied agent, it is perhaps more accurate to refer to them as internalized cognitive techniques, rather than as internal cognitive techniques.

Being “artifactual,” and thus usable (i.e., made by people but external to people, embodied in material objects, and not “internalizable” by people) is diagnostic for artifacts as public cultural kinds. In the same way, being “internalizable” is diagnostic for cognitive kinds such as skills, know-how, and abilities. This internalizability criterion is the distinguishing marker that separates them from artifactual kinds. Both are cultural kinds because they are the historical product of human ingenuity and invention.

Embodied artifacts are the “prototypical” members of the category since they show up mainly as tools we use to get stuff accomplished. In philosophy and social theory, “Heidegger’s hammer” and Merleau-Ponty’s “blind person’s cane” are the standard examples. Enumerating specific exemplars of the category is of course an endless task, as it includes any material object that can be used to accomplish a goal (e.g., pencils, shovels, fly swatters, brooms, skateboards, keyboards, etc.). It also includes using objects not designed for a given function to accomplish a particular goal (as when we use a hammer as a doorstop). While the “proper function” of a hammer is to drive nails through a surface, it can also be used for a myriad of improvised goals, and the same goes for pretty much every embodied artifact. Concerning the person-artifact interface, the critical phenomenological transition with regard to embodied artifacts happens when we become proficient at using them after repeatedly interacting with them (or, more commonly, being taught by an expert user how to use them). This results in the internalization, via either socialization or enculturation, of artifact-specific skills or abilities facilitating person-and-artifact couplings. Once this coupling is established, the artifact or tool becomes transparent. It is experienced as a natural extension of the body. Following Heidegger, artifacts that have achieved this level of transparency are referred to as “equipment” (Dreyfus, 1984).

Perceptual artifacts are used to correct, enhance, extend, and in some cases substitute our natural perceptual abilities. Reading glasses or hearing aids are a standard (corrective) example and telescopes or binoculars a standard (enhancing/extending) example. Merleau-Ponty’s blind man’s cane can be thought of as an embodied artifact that becomes a perceptual artifact via cross-modal substitution; tactile information comes to play the functional role for non-sighted persons that visual information plays for sighted people via the mediation of the artifact. In some cases, perceptual artifacts can be engineered so that they can make available to us aspects of the world that are naturally inaccessible to us (e.g., lightwaves in the infrared range of the spectrum). This is a type of enhancement that goes beyond amplifying the usual range of our standard perceptual techniques.

Naturally, cognitive artifacts have received a tremendous amount of attention in cognitive science and the philosophy of mind (Clark, 2008). Heersmink defines them as “…human-made, material objects or structures that functionally contribute to performing a cognitive task” (Heersmink, 2021, p. 10). Cognitive artifacts have even been used as “intuition pumps” to show how cognition and cognitive activity can be thought of as (sometimes) occurring “outside the head,” via artifactual vehicles (e.g., a notepad or an abacus) used by people to perform cognitive tasks such as remembering and calculating (Clark & Chalmers, 1998), yielding the hypothesis of “extended cognition.” Independently of their role in this particular line of investigation, cognitive artifacts are central to the study of culture. Cognitive artifacts such as calculators, maps, multiplication tables, computers, and the like are ubiquitous in our everyday lives, facilitating a virtually open-ended range of cognitive, navigational, and calculative activities that would be either very difficult or impossible without them.

Affective artifacts refer to “material…objects that have the capacity to alter the affective condition of the agent” (Piredda, 2020, p. 550). Under this definition, affective artifacts are pervasive and may even precede cognitive artifacts in human evolution (Langer, 1967). They include most of the human-designed implements for the production of expressive and aesthetic symbols (e.g., music, visual arts, poetry, and the like), such as musical instruments, as well as the products of their use, such as aesthetic objects and performances. Language (typically a cognitive artifact), when used in particular ways to evoke affect and emotion, becomes an affective artifact. When language is used to evoke feeling and emotion in a ritual or aesthetic performance, or when the voice is used for a similar purpose in singing, people’s bodies and their effectors can become the affective artifact par excellence.

As Heersmink notes, these taxonomic distinctions do not imply that artifacts belong exclusively to one category; many artifacts end up being hybrids, performing multiple functions at once. Thus, many perceptual artifacts (e.g., a microscope) also perform cognitive functions. Cognitive artifacts (such as a family photograph) may bring up emotionally charged autobiographical memories, thus performing affective functions. Merleau-Ponty’s blind man’s cane, as noted, is both an embodied and a perceptual artifact. Artifacts can also be linked in chains, such that one kind of artifact helps us use another one. The most common coupling is between embodied artifacts and cognitive artifacts; for instance, mice and keyboards help us interact with computers as cognitive artifacts. Most artifacts as used in everyday dealings consist of such hybrids or multiple chains of artifact families.

References

Clark, A. (2008). Supersizing the Mind: Embodiment, Action, and Cognitive Extension. Oxford University Press.

Clark, A., & Chalmers, D. (1998). The Extended Mind. Analysis, 58(1), 7–19.

Dreyfus, H. L. (1984). Between Technē and Technology: The Ambiguous Place of Equipment in Being and Time. Tulane Studies in Philosophy, 32, 23–35.

Fasoli, M. (2018). Substitutive, Complementary and Constitutive Cognitive Artifacts: Developing an Interaction-Centered Approach. Review of Philosophy and Psychology, 9(3), 671–687.

Hutchins, E. (1995). Cognition in the Wild. MIT Press.

Heersmink, R. (2013). A Taxonomy of Cognitive Artifacts: Function, Information, and Categories. Review of Philosophy and Psychology, 4(3), 465–481.

Heersmink, R. (2021). Varieties of artifacts: Embodied, perceptual, cognitive, and affective. Retrieved May 23, 2021, from https://philpapers.org/archive/HEEVOA.pdf

Langer, S. K. K. (1967). Mind: an essay on human feeling. Johns Hopkins Press.

Mauss, M. (1973). Techniques of the body. Economy and Society, 2(1), 70–88. (Original work published 1935)

Malafouris, L. (2013). How Things Shape the Mind: A Theory of Material Engagement. MIT Press.

Piredda, G. (2020). What is an affective artifact? A further development in situated affectivity. Phenomenology and the Cognitive Sciences, 19(3), 549–567.

Viola, M. (2021). Three Varieties of Affective Artifacts: Feeling, Evaluative and Motivational Artifacts.


Thick and Thin Belief

Knowledge and Belief

A (propositional) knowledge (that) ascription logically entails a belief ascription, right? I mean if I think that Sam knows that Joe Biden is the president of the United States, I don’t need to do further research into Sam’s state of mind or behavioral manifestations to conclude that they also believe that Joe Biden is president of the United States. For any proposition or piece of “knowledge-that,” if I state that an agent X knows that q, I am entitled to conclude by virtue of logic alone that X believes that q.

This, as summarized, has been the standard position in analytic epistemology and philosophy of mind. The entailment of belief from knowledge has been considered so obvious that nobody thinks it needs to be argued for or defended (treated as falling closer to the “analytic” end of the Quinean continuum). Most of the work on belief by epistemologists has therefore focused on the conditions under which belief can be justified, not on whether an attribution of knowledge necessarily entails an attribution of belief to an agent.

Of course, analytic philosophers are inventive folk, and there have been attempts (starting around the 1960s), via the thought-experiment route, to come up with hypothetical cases in which the attribution of belief from knowledge didn’t come so easily. But most philosophers protested against these made-up cases, denying that they in fact showed that one could attribute knowledge without attributing belief. Some of the debate, as with many philosophical ones, ultimately turned on philosophical method itself; perhaps the inability of professional philosophers to imagine non-contrived cases in which we can attribute knowledge without belief rests on the very rarefied air that philosophers breathe and the related restricted set of examples that they can imagine.

Myers-Schulz & Schwitzgebel (2013) thus follow a recent trend of “experimental philosophy,” in which philosophers burst out of the philosophical bubble, confront the folk with various examples, and simply ask them whether those examples merit attributions of knowledge without belief. One of these examples (modified from the original ones proposed from the armchair) has us encountering a nervous student who memorizes the answers to tests but, when it comes time to actually answer, gets nervous at the last minute, blanks out, and just guesses the answer to the last question on the test, which they also happen to get right. When regular old folks are asked whether this “unconfident examinee” knew the answer to this last question, 87% say yes. But if they are instead asked (in a between-subjects setup) whether the unconfident examinee believed the answer to the last question, only 37% say yes (Myers-Schulz & Schwitzgebel, 2013, p. 378).

Interestingly, the same folk dissociation between knowledge and belief ascriptions can be observed when people are exposed to scenarios of discordance between explicit and implicit attitudes, or of dissociation between rational beliefs that everyone would hold and irrational, fantastic beliefs induced at the moment by watching a horror movie. In the “prejudiced professor” case, we have a professor who reflectively holds unprejudiced attitudes and is committed to egalitarian values, but who in their everyday micro-behavior systematically treats student-athletes as if they are less capable. In the “freaked out movie watcher” case, we have a person who just watched a horror movie in which a flood of alien larvae comes out of faucets and who, after watching the movie, freaks out when their friend opens the (real world) faucet. In both cases, the great majority of the folk attribute knowledge (that student-athletes are as capable as other students, and that only water would come out of the faucet), but only relatively small minorities attribute belief. Other cases have been concocted (e.g., a politician who claims to have a certain set of values but who, when it comes to acting on those values by, for instance, advocating for policies that would further them, fails to act), and these cases also generate the dissociation between knowledge and belief ascription among the folk.

Solving the Puzzle

What’s going on here? Some argue that it comes down to a difference between so-called dispositional and occurrent belief. These are terms of art in analytic philosophy, but the distinction boils down to the difference between a belief that you hold but are not currently entertaining (though you could under the right circumstances) and one that you are currently entertaining. The former is a dispositional belief and the latter an occurrent belief. When you are sleeping, you dispositionally believe everything that you believe when you are professing wide-awake beliefs. So maybe the folk deny that, in all of the cases above, people who know that x also occurrently believe that x, but they don’t deny that they dispositionally do so. Rose & Schaffer (2013) find support for this hypothesis.

Unfortunately for Rose & Schaffer, a subsequent series of experiments (Murray et al., 2013) shows that knowledge/belief dissociations among the folk are pervasive, applying more generally than originally thought, in ways that cannot easily be saved by the dispositional/occurrent distinction. For instance, when asked whether God knows or believes a proposition that comes closest to the “analytic” end of Quine’s continuum (e.g., 2 + 2 = 4), virtually everyone (93%) is comfortable attributing knowledge to God, but only 66% say God believes the trivial arithmetical proposition. Murray et al. also show that people are much more comfortable attributing knowledge, compared to belief, to dogs trained to answer math questions, and to cash registers. Finally, Murray et al. (2013, p. 94) have the folk consider the case of a physics student who gets perfect scores on astronomy tests but who had been homeschooled by rabid Aristotelian parents who taught them that the earth stood at the center of the universe, and who never gave up allegiance to the teachings of their parents. They find that, for regular people, the homeschooled geocentric college freshman who gets an A+ on their Astronomy 101 test knows that the earth revolves around the sun but doesn’t believe it.

So something else must be going on. In a more recent paper, Buckwalter et al. (2015) propose a compelling solution. Their argument is that the (folk) conception of belief is not unitary; the contrast with professional epistemologists is that this last group does hold a unitary conception of belief. More specifically, Buckwalter et al. argue that professional philosophy’s concept of belief is thin:

A thin belief is a bare cognitive pro-attitude. To have a thin belief that P, it suffices that you represent that P is true, regard it as true, or take it to be true. Put another way, thinly believing P involves representing and storing P as information. It requires nothing more. In particular, it doesn’t require you to like it that P is true, to emotionally endorse the truth of P, to explicitly avow or assent to the truth of P, or to actively promote an agenda that makes sense given P (749).

But the folk, in addition to countenancing the idea of thin belief, can also imagine the notion of thick belief (on thin and thick concepts more generally, see Abend, 2019). Thick belief contrasts with thin belief along all the dimensions mentioned. Rather than being a purely dispassionate or intellectual holding of a piece of information considered as true, a thick belief “also involves emotion and conation” (749, italics in the original). In addition to merely representing that P, thick believers in a proposition will also be motivated to want P to be true, will endorse P as true, will defend the truth of P against skeptics, will try to convince others that P is true, will explicitly avow or assent to P‘s truth, and the like. Buckwalter et al. propose that thick and thin beliefs are two separate categories in folk psychology, that thick belief is the default (folk) understanding, and that the various knowledge/belief dissociation observations can therefore be made sense of by cueing this distinction. In a series of experiments, they show that this is precisely the case. Returning to (some of) the cases discussed above, they show that belief ascriptions rise (most of the time to match knowledge ascriptions) when people are given extra information or a prompt indicating thick or thin belief on the part of the believing agent.

Thin and Thick Belief in the Social Sciences

Interestingly, the distinction between thin and thick belief dovetails with a number of distinctions made by sociologists and anthropologists interested in the link between culture and cognition. These discussions have to do with distinctions in the way people internalize culture (for more discussion, see here). For instance, the sociologist Ann Swidler (2001) distinguishes between two ways people internalize beliefs (knowledge-that) but uses a metaphor of “depth” rather than thickness and thinness (on the idea of cultural depth, see here). For Swidler, people can and often do internalize beliefs and understandings in the form of “faith, commitment, and ideological conviction” (Swidler, 2001, p. 7); that definitely sounds like thick belief. However, people also internalize much culture “superficially,” as familiarity with general beliefs, norms, and cultural practices that do not elicit deeply held personal commitment (although they may elicit public acts of behavioral conformity); those definitely sound like thin beliefs. Because deeply internalizing culture is hard and superficially internalizing it is easy, the amount of culture internalized in the superficial way likely outweighs the culture internalized in the “deep” way. In this respect, “[p]eople vary in the ‘stance’ they take toward culture—how seriously versus lightly they hold it.” Some people are thick (serious) believers, but most people’s stance toward a lot of the culture they have internalized is more likely to range from ritualistic adherence (in the form of repeated expression of platitudes and cliches taken to be “common sense”) to indifference, cynicism, and even insincere affirmation (Swidler 2001, pp. 43–44).

In cognitive anthropology (see Quinn et al., 2018a, 2018b; Strauss 2018), an influential model of the way people internalize beliefs, due to Melford Spiro, also proposes a gradation of belief internalization that matches Buckwalter et al.’s distinction between thin and thick belief, and Swidler’s distinction between deep and superficial belief (without necessarily using either metaphor). According to D’Andrade’s summary of Spiro’s model (1995, pp. 228ff), people can go from simply being “acquainted with some part of the cultural system of representations without assenting to its descriptive or normative claims. The individual may be indifferent to, or even reject these claims.” Obviously, this (level 1) internalization does not count as belief, not even of the thin kind (Buckwalter et al. 2015). However, at internalization level 2, we get something closer. Here “cultural representations are acquired as cliches; the individual honors their descriptive or normative claims more in the breach than in the observance.” This comes closest to Buckwalter et al.’s idea of thin belief (and Swidler’s notion of “superficially internalized” culture), though some people might not think it counts as a full-blown belief. We get there at internalization level 3. Here, “individuals hold their beliefs to be true, correct, or right…[beliefs] structure the behavioral environment of actors and guide their actions.” This seems closer to the notion of belief held by professional philosophers, and it is likely the default version of a belief on its way to thickening: not just a piece of information represented by the actor and held as true on occasion (as in level 2), but one that systematically guides action. Finally, Spiro’s level 4 is the prototypical thick belief in Buckwalter et al.’s sense. Here “cultural representations…[are] highly salient,” being capable of motivating and instigating action. Level 4 beliefs are invested with emotion, which is a core marker of thick belief (Buckwalter et al., 2015, pp. 750ff).
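Purely as an illustrative aside, Spiro's four-level gradation can be laid out schematically as a small lookup table. The short level labels and the mapping onto thin/thick belief below are my own glosses of the summary above, not Spiro's or Buckwalter et al.'s terminology:

```python
# Spiro's levels of cultural internalization (as summarized by D'Andrade),
# mapped roughly onto Buckwalter et al.'s thin/thick belief distinction.
# Labels and mapping are illustrative glosses, not established terminology.
SPIRO_LEVELS = {
    1: ("acquaintance", "knows the representation but may be indifferent to or reject it", None),
    2: ("cliche", "honors its claims more in the breach than in the observance", "thin belief"),
    3: ("held as true", "holds it as true; it systematically guides action", "belief on its way to thickening"),
    4: ("salient", "highly salient; invested with emotion, motivates action", "thick belief"),
}

def doxastic_status(level: int) -> str:
    """Return the (approximate) belief status of a given level of internalization."""
    _, _, status = SPIRO_LEVELS[level]
    return status if status is not None else "not a belief"

print(doxastic_status(1))  # level 1 falls short of even thin belief
```

The point of the sketch is simply that belief status is graded rather than binary: only at level 2 does anything belief-like appear, and only at level 4 do we reach the thick, emotion-laden pole.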

Implications

Interestingly, insofar as some influential theories of the internalization of knowledge-that in cultural anthropology and sociology make the thick belief/thin belief distinction, which, as the research reviewed above shows, is also respected by the folk, holding a unitary (or non-graded) notion of belief may be an idiosyncrasy of the philosophical profession. Both sociologists and anthropologists have endeavored to produce analytic distinctions in the way people internalize belief-like representations from the larger cultural environment that more closely match the folk’s. This would indicate that many “problems” conceiving of cases of contradictory or in-between beliefs (Gendler, 2008; Schwitzgebel, 2001) may have been as much iatrogenic as conceptual.

As also noted by Buckwalter et al., the thin/thick belief distinction might be relevant to debates raging in contemporary epistemology and psychological science over the most accurate way to conceive of people’s typical belief-formation mechanism. Is it “Cartesian” or “Spinozan”? The Cartesian picture conforms to the usual philosophical model: before believing anything, I reflectively consider it, weigh the evidence for and against, and, if it meets other rational considerations (e.g., consistency with my other beliefs), then I believe it. The Spinozan belief-formation mechanism proposes an initially counter-intuitive picture, in which people automatically believe every piece of information they are exposed to without reflective consideration; only un-believing something requires conscious effort and consideration.

The Descartes/Spinoza debate on belief formation dovetails with a debate in the sociology of culture over whether culture is structured or fragmented (Quinn, 2018). The short version of this debate is that sociologists like Swidler think that (most) culture is internalized in a superficial way and that therefore it operates as fragmented bits and pieces that are brought into coherence via external mechanisms (Swidler 2001). Cognitive anthropologists, on the other hand, adduce strong evidence in favor of the idea that people internalize culture in a more structured manner. There’s definitely a problem of talking past one another in this debate: It seems like Swidler is talking about beliefs proper but Quinn is talking about other forms of non-doxastic knowledge. This last kind can no longer be considered propositional knowledge-that but comes closer to (conceptual) knowledge-what.

Regardless, it is clear that if the Spinozan story is true, then beliefs cannot be internalized as a logically coherent web and therefore cannot exert an effect on action as such. Instead, the mind (and the beliefs therein) is fragmented (Egan, 2008). DiMaggio (1997), in a classic paper in culture and cognition studies, drew that test implication from Daniel Gilbert’s research program, which showed that people seem to internalize (some) beliefs via Spinozan mechanisms. For DiMaggio, this supported the sociological version of the fragmentation-of-culture thesis, because if beliefs are internalized as fragmented, disorganized, barely considered bits of information, then whatever coherence they have must come from the outside (e.g., via institutional or other high-level structures), just as Swidler suggests (DiMaggio, 1997, p. 274).

But if Buckwalter et al.’s distinction tracks an interesting distinction in kinds of belief (as suggested by Spiro’s degrees-of-internalization story), then it is likely that the fragmentation argument only applies to thin beliefs. Thick beliefs, on the other hand, the ones that people are most motivated to defend, that are imbued with emotion, that people are least likely to give up, and that are most likely to guide action, are unlikely to be internalized as incoherent information bits that people just “coldly” represent or consider.

References

Abend, G. (2019). Thick Concepts and Sociological Research. Sociological Theory, 37(3), 209–233.

Buckwalter, W., Rose, D., & Turri, J. (2015). Belief through thick and thin. Noûs, 49(4), 748–775.

D’Andrade, R. G. (1995). The Development of Cognitive Anthropology. Cambridge University Press.

DiMaggio, P. J. (1997). Culture and Cognition. Annual Review of Sociology, 23, 263–287.

Egan, A. (2008). Seeing and believing: perception, belief formation and the divided mind. Philosophical Studies, 140(1), 47–63.

Gendler, T. S. (2008). Alief and Belief. The Journal of Philosophy, 105(10), 634–663.

Murray, D., Sytsma, J., & Livengood, J. (2013). God knows (but does God believe?). Philosophical Studies, 166(1), 83–107.

Myers-Schulz, B., & Schwitzgebel, E. (2013). Knowing that P without believing that P. Noûs, 47(2), 371–384.

Quinn, N. (2018). An anthropologist’s view of American marriage: limitations of the tool kit theory of culture. In Advances in Culture Theory from Psychological Anthropology (pp. 139–184). Springer.

Quinn, N., Sirota, K. G., & Stromberg, P. G. (2018a). Conclusion: Some Advances in Culture Theory. In N. Quinn (Ed.), Advances in Culture Theory from Psychological Anthropology (pp. 285–327). Palgrave Macmillan.

Quinn, N., Sirota, K. G., & Stromberg, P. G. (2018b). Introduction: How This Volume Imagines Itself. In N. Quinn (Ed.), Advances in Culture Theory from Psychological Anthropology (pp. 1–19). Springer International Publishing.

Rose, D., & Schaffer, J. (2013). Knowledge entails dispositional belief. Philosophical Studies, 166(S1), 19–50.

Schwitzgebel, E. (2001). In-between Believing. The Philosophical Quarterly, 51(202), 76–82.

Strauss, C. (2018). The Complexity of Culture in Persons. In N. Quinn (Ed.), Advances in Culture Theory from Psychological Anthropology (pp. 109–138). Springer International Publishing.

Swidler, A. (2001). Talk of Love: How Culture Matters. University of Chicago Press.

Simmel as a Theorist of Habit

The Journal of Classical Sociology has recently made available online a new translation, by John D. Boy, of Simmel’s classic essay “The Metropolis and the Life of the Spirit” (better known to sociologists and urban studies people, via previous translations, as “The Metropolis and Mental Life”). In the translation’s introductory remarks, Boy makes an intriguing argument for why returning to Simmel’s original “spiritual” language and moving away from the “psychological” language of early translators (e.g., the German Geist can be translated as either “spirit” or “mind”) is more faithful to Simmel’s original intellectual context and aims.

Here I would like to focus on a neglected aspect of the essay, namely, the implicit theory of habit (and its relation to the intellect and emotions) that Simmel deploys in the introductory paragraphs to set up the main argument that follows. Thus, this post can be read as a companion to previous disquisitions on habit and habit theory in this blog (see here, here, and here), and as a supplement to Charles Camic’s (1986) earlier point about the centrality of the concept of habit for most of the classical social theorists in sociology (Simmel is not one of the theorists treated at length in Camic’s classic paper) and the related story of how the idea was excised from the sociological vocabulary in the post-Parsonian period. Concerning Simmel’s essay on the metropolis in particular, it bears mentioning that one of the very earliest works influenced by Simmel’s approach (published in the American Journal of Sociology in 1912) took the title “The Urban Habit of Mind” (Woolston, 1912).

Simmel on Habit and Metropolitan Life

Simmel argues that the rapid succession of novel and unpredictable stimuli in the city breaks previous habits of sensation developed in non-urban contexts. Simmel therefore subscribes to the idea that habits develop most easily when people are exposed to repetitive, internally consistent stimuli. In the more predictable non-urban setting, where each new sensation is a lot like the previous one, people can develop habits of sensibility that render them less susceptible to experiencing sensations in a powerful way. Simmel likewise subscribes to the psychological principle that, as we develop habits of sensibility via exposure to repetitive sensations, these fade from consciousness: “Lasting sensations, slight differences and their succession according to the regularity of habit require less consciousness” (Simmel, 2020, p. 6, emphasis added).

The city disrupts this equilibrium. It does so primarily by increasing the novelty and the unpredictability of sensory stimulation. This “intensification of nervous stimulation” is brought about “by the rapid and constant change of external and internal sensations” (ibid, italics in original). Thus, the converse psychological principle applies: if habits are created via exposure to repetition, then exposure to novelty and non-repetition increases “consciousness” (which Simmel conceptualizes here as opposed to habit). For Simmel, people “are creatures of difference; their consciousness is stimulated by the difference between the current sensation and the ones preceding it” (ibid, emphasis added).

The disruption of habits of sensation in the city via the intensification of sensory stimulation serves as the primary psychological contrast to small-town life:

In producing these psychological conditions in every crossing of the street and in the tempo and multiplicity of its economic, occupational and social life, the metropolis creates a strong contrast to small-town and country life with its slower, more habitual, more regular rhythm in the very sensory foundation of the life of our souls, due to the far larger segment of our consciousness it occupies given our constitution as creatures of difference (ibid, boldface added).

This sets up a contrast, Simmel argues, between the calculative intellect (which Simmel associates with non-habitual cognition) and more spontaneous affect and emotion, which Simmel associates with the “more unconscious” strata of the psyche. In this way, small-town life “is founded upon relationships of disposition and emotion that have their root in the more unconscious strata of the soul and are more likely to grow out of the quiet regularity of uninterrupted habits” (ibid, emphasis added).

Thus, Simmel makes another equation here, linking habit to emotion, affect, and drives (and other residents of a more vitalistic, “dynamic” unconscious), while separating it from the mental functions associated with the intellect, which, for Simmel, belong to the more “transparent and conscious higher strata” of our inner life. This dualistic approach to habit, which distinguishes it from the higher intellectual functions, seems to owe a lot to Maine de Biran’s early nineteenth-century reflections on the subject, which also drew such a distinction between habit and the intellect (de Biran, 1970; see the discussion in Sinclair, 2011), one that would later be criticized by Félix Ravaisson (2008).

Simmel’s reasoning and series of dualistic linkages lead him to an odd, and seldom noted, conclusion: people who live in the city, insofar as they are forced to use “the intellect” to perform actions that would otherwise (in a non-urban context) be driven by habit, are therefore less “habit-driven” than non-urban people! This is what is behind his famous “protective organ” argument, whose linkage to the habit/intellect contrast has not been noted before. For Simmel, city dwellers have to develop a way of dealing with the sensory barrage that prevents them from “reacting according to…[their] disposition.” Instead, “the typical metropolitan person relies primarily on…[their] intellect” (ibid). And “this intellectuality, which we have recognized as a defense of subjective life against the assault of the metropolis, becomes entangled with numerous other phenomena” (ibid).

Conclusion

The phenomena that Simmel went on to link to urban life, including the money economy, the blasé attitude, individualism, liberty, the division of labor, cosmopolitanism, fashion, and the rest, are well known to students of Simmel’s foundational essay. Less well known, however, is how the core premises of the piece are built on Simmel’s much-neglected (but explicitly laid out) assumptions about how habit links to the intellect, consciousness, sensation, and emotion.

References

Camic, C. (1986). The Matter of Habit. The American Journal of Sociology, 91(5), 1039–1087.

de Biran, P. M. (1970). The Influence of Habit on the Faculty of Thinking. Greenwood.

Ravaisson, F. (2008). Of Habit. Bloomsbury Publishing.

Simmel, G. (2020). The metropolis and the life of the spirit. Journal of Classical Sociology, 1468795X20980638.

Sinclair, M. (2011). Ravaisson and the Force of Habit. Journal of the History of Philosophy, 49(1), 65–85.

Woolston, H. B. (1912). The Urban Habit of Mind. The American Journal of Sociology, 17(5), 602–614.

Varieties of Implicitness in Cultural-Cognitive Kinds

In a previous post, I addressed some issues in applying the property of “implicitness” to cultural kinds. There I made two points. First, unlike the other ontological properties considered (e.g., those concerning location or constitution), implicitness is a relational property. That is, when we say a cultural kind is implicit, we presume that there is a subject or knower (as the second element in the relation) for whom this particular kind is implicit. Second, I pointed out that, because of this, when we say a cultural-cognitive kind (mentally represented, learned, and internalized by people) is implicit, we don’t mean the same thing as when we say a non-cognitive (public, external, artifactual) kind is implicit. In particular, while implicitness is a core property of cultural-cognitive kinds (essential to making them the sort of cultural kinds they are), it is only incidental for public cultural kinds; that is to say, the former cannot lose the property and remain the kinds they are, but the latter can.

One presumption of the previous discussion is that when we say that a cultural-cognitive kind is implicit, we are talking about some kind of unitary property. This is most certainly not the case (see Brownstein 2018: 15-19). In this post, I disaggregate the notion of “implicitness” for cultural-cognitive kinds, differentiating at least two broad types of claims we make when we say a given cultural-cognitive kind is implicit.

A-Implicitness

First, there is a line of work in which implicitness refers to the status of a cultural-cognitive kind as well-learned. As Payne and Gawronski (2010) note, researchers relying on this version of implicitness come out of a tradition in cognitive psychology focused on attention and skill acquisition (Shiffrin & Schneider 1977, 1984; Schneider & Shiffrin 1977). The fundamental insight from this work is that any mental or cognitive skill can come, with repetition and practice, to be fully “automatized.” Initially, when learning a new skill or using a cultural-cognitive tool for the first time, we likely rely on controlled processing. This type of processing is demanding of cognitive resources (e.g., attention), slow, and highly dependent on capacity-limited short-term memory. With practice, however, a cultural-cognitive kind may come to be used automatically; we can now use it while also having at our disposal the full panoply of attention- and capacity-related resources, such as short-term memory.

Think of the experienced knitter who can weave a whole scarf while reading their favorite novel; contrast this with the beginner who must devote all of their attention and cognitive resources to making a single stitch. In the experienced knitter’s case, knitting as a cultural-cognitive skill has become fully automatized (well-learned) and can be deployed without hogging central cognitive resources. This is certainly not so in the beginner’s case. Standard cases discussed in the phenomenology of skill acquisition and in the anthropology of skill (e.g., H. Dreyfus 2004; Palsson 1994) fall under this version of “implicitness.” Chess or tennis playing becomes “implicit” for the skilled master or player in the Shiffrin-Schneider sense of going from an initially controlled to an automatic process (S. Dreyfus 2004).

As Payne and Gawronski (2010) note, this version of implicitness (hereafter a-implicitness) focuses on the learning and cultural internalization process, isolating the relational property of acquired facility, or expertise (captured in the concept of automaticity), that a given agent has gained with regard to the cultural-cognitive kind in question.

When transferred to such cultural-cognitive kinds as beliefs or attitudes, the a-implicitness criterion disaggregates into two sub-criteria. We may say of an attitude that it is a-implicit if it is (a) automatically activated or (b), once activated, applied or put to use in an efficient and non-resource-demanding manner.

Thus, a stereotype for a category (filling in open slots in the schema with non-negotiable defaults) is a-implicit when its activation happens without much intervention (or control) on the part of the agent after exposure to a given environmental cue or prompt. A given stereotype may also be a-implicit in that, once activated, individuals cannot help but use it for purposes of categorization, inference, behavior, and so on. One thing that is not implied when ascribing a-implicitness is that agents are unaware of their use of the cultural-cognitive kind in question. For instance, people may be very well aware that they are using a default stereotype for a category (e.g., I feel this neighborhood is dangerous) even if this stereotype was automatically activated.

U-Implicitness

Another line of work on implicitness comes out of cognitive psychological research on (long-term) “implicit” memory. From this perspective, a given cultural-cognitive kind is implicit if people are unaware that it affects their current feelings, performances, and actions (Greenwald & Banaji 1995). In this type of implicitness (hereafter u-implicitness), the key criterion is the introspective inaccessibility of a given cultural-cognitive entity.

This was clearly noted by Greenwald and Banaji (1995, p. 8), who, in their classic paper heralding the implicit measurement revolution, defined implicit attitudes as “introspectively unidentified (or inaccurately identified) traces of past experience that mediate favorable or unfavorable feeling, thought, or action toward social objects.” While there is a link to the notion of a-implicitness in the mention of “traces of past experience” (which imply a previous history of internalization or enculturation), the key criterion for something being u-implicit is that people are not aware that a cultural-cognitive element is influencing their current cognitive, affective, and/or behavioral responses to a given object at the moment.

In the case of u-implicit cultural-cognitive entities, what exactly is it that people are not aware of? As Gawronski et al. (2006) note, there are at least three separate claims here. First, there is the idea that people are not aware of the sources of the cultural-cognitive kinds they have internalized. That is, something is u-implicit because the conditions under which it was internalized are not part of (autobiographical or episodic) memory, so people cannot tell you where their beliefs, attitudes, or other internalized cultural-cognitive entities “come from.”

Second, something can be u-implicit if people are not aware that a given cultural-cognitive kind (such as an implicit attitude) is “mediating” (or influencing) their current thoughts, feelings, and actions. That is, a cultural-cognitive entity is u-implicit in the sense that people are not aware of its content. For instance, a person may implicitly associate obesity with a lack of competence, and this cultural-cognitive association may be automatically implicated in driving their judgments and actions toward fat people. However, when asked about it, they may be unable to report that such an attitude was driving their judgment. Instead, people will report the explicit attitudes of which they do have content-awareness, and this content will sometimes differ from the one that could be ascribed from the reactions and behaviors associated with the u-implicit cultural kind.

Finally, people may be content-aware that they have internalized a given cultural-cognitive entity (e.g., a schema or attitude) but not be aware (and in fact deny) that it controls or affects subsequent thoughts, feelings, and actions; that is, people may lack effects-awareness vis-à-vis a given internalized cultural-cognitive element.

Figure 1. Varieties of Implicitness.

A branching diagram depicting the different types of implicitness discussed so far is shown in Figure 1 above. First, the notion of implicitness splits into two distinct properties, one applicable to public (non-mental) cultural kinds and the other applicable to cultural-cognitive kinds. The latter then splits into what I have referred to as a-implicitness and u-implicitness. A-implicitness, in turn, may refer to automaticity of activation or automaticity of application (or both), and u-implicitness may refer to unawareness of source (learning history), unawareness of the content of the cultural-cognitive kind itself when it is operating (e.g., an "unconscious" attitude, belief, or schema), or unawareness that the activation of this cultural-cognitive kind influences action.

Note that "unawareness" may also bleed into elements of a-implicitness (as noted by the dashed lines in the figure). For instance, a cultural-cognitive kind can become so automatic (in the well-learned sense) that people become unaware of its automatic activation or application. The most robust way for a cultural-cognitive entity to be implicit would thus combine elements of both a- and u-implicitness.

Implications

So, what sort of claim do we make of a cultural-cognitive kind when we say it is implicit? As we have seen, there is no unitary answer. On the one hand, we may mean that people have come to internalize the cultural kind (via repeated exposure, repetition, and practice) to the extent that they have acquired a relation of expertise and facility toward it. This is most clearly and least ambiguously the case for cultural-cognitive kinds recognized as (either bodily or mental) skillful habits. Thus, chess masters have an "implicit" ability to recognize chessboard patterns and produce a winning move, and expert piano players have an implicit ability to anticipate the finger movement that allows them to play the next note in the composition.

Note that while the typical examples of a-implicitness usually involve expert performers, we are all "experts" at deploying and using mundane cultural-cognitive kinds acquired as part of our enculturation history, including categories (and stereotypes) used in everyday life, as well as ordinary skills such as walking, driving, or using a multiplication table. Once entrenched by practice, all of these cultural-cognitive elements have the potential to become "implicit" via proceduralization. In fact, it is the nature of habitual action to be a-implicit in the sense discussed, both in terms of automatic activation by contextual environmental cues and of efficient (non-resource-demanding) deployment once activated (unless it is overridden via deliberate, effortful pathways).

U-implicitness, on the other hand, is a stronger (and thus more controversial) claim. To say a cultural-cognitive kind is u-implicit is to say that it operates and affects our thoughts, feelings, and activities outside of awareness. Since the discovery of the unconscious in the 19th century and the popularization of the notion by Pierre Janet, Sigmund Freud, and their followers in the 20th (Ellenberger 1970), the idea of something being both "mental" and "unconscious" has been controversial (Krickel 2018). The reason is that our (folk-psychological) sense of something being mental implies that we are related to it in some way: we have beliefs, or possess a desire. It is unclear what sort of relation we have to something if we are not even aware of standing in any type of relation to it. But not all types of u-implicitness cut that deep. Among the varieties of u-implicitness, lack of content awareness is much more controversial than lack of source awareness, and when coupled with a lack of effects awareness, it becomes more controversial still, especially when it comes to issues of ascription and responsibility accounting.

For instance, we could all accept having forgotten (or never even committed to memory) the conditions (source) under which we learned or internalized a host of attitudes, preferences, and beliefs, so long as we remain aware of the content of those attitudes, preferences, and beliefs. What really throws people for a loop is the possibility that they hold a ton of attitudes, preferences, and beliefs whose content they are unaware of and that nonetheless drive much of their behavior, thoughts, and feelings.

This is also a critical epistemic and analytic problem in socio-cultural theory featuring strong conceptions of the unconscious. In particular, the prospect of cultural-cognitive entities doing things "behind the back" of the social actor rears its ugly head. For instance, Talcott Parsons (1952) (in)famously suggested that "values" could be the sort of cultural-cognitive entity that is u-implicit (internalized in the Freudian sense), of which people have neither source nor content awareness, putting him in the odd company of Marxist theorists who made similar claims concerning the internalization of ideology, such as Louis Althusser (DiTomaso 1982). Both proposals are seen as impugning the actor's "agency" and committing the sin of "sociological reductionism."

A more likely possibility is that many internalized cultural-cognitive entities are not implicit in the full sense of combining both a- and u-implicitness. Instead, most things are in between. For instance, the "moral intuitions" emphasized by Jonathan Haidt (2001) can be a-implicit (automatically activated and automatically used to generate a moral judgment) without being (wholly) u-implicit. In particular, we may lack source awareness of our moral intuitions but have both content awareness (there is a phenomenological or introspective "feeling" that we are experiencing, with minimal content) and effects awareness (we know that this feeling is why we don't want to put on Hitler's t-shirt or eat the poop-shaped brownie). The same has been said for the operation of implicit attitudes and biases (Gawronski et al. 2006): they could be automatically activated and even used, and people could be very aware that they are in fact using them to generate (stereotypical) judgments, but, despite this content awareness, people may be in denial about the attitude driving their behavior (lack of effects-awareness).

Habitus and Implicitness

In sociology and anthropology, various “implicit” cultural-cognitive elements are conceptualized using the lens of practice and habit theories, with Bourdieu’s theory of habitus providing the most influential linkage between cultural analysis in sociology and anthropology and research on implicit cognition in moral, social, and cognitive psychology (Vaisey 2009). The foregoing discussion highlights, however, that conceptions of implicitness in sociology and anthropology are too coarse for this linkage to be clean and that a more targeted and disaggregated strategy may be in order.

In the theory of habitus, for instance, Bourdieu emphasizes issues of learning, habituation, and expertise, which lead to the acquisition and internalization of a-implicit cultural-cognitive kinds; in fact, the habitus can be thought of as a (self-organized, self-maintaining) system of such a-implicit kinds. This is especially the case when speaking of how actors acquire a "feel for the game," or the set of skills, dispositions, and abilities allowing them to skillfully navigate social fields. In this case, it is not too controversial to emphasize the a-implicit status of much habitual action and of the habitus as a whole.

However, when discussing how the theory of habitus helps explain phenomena usually covered under older Marxian theories of “ideology” and “consent” for institutionalized features of the social order, Bourdieu tends to emphasize features of implicitness coming closer to the u-implicit pole; that is, the fact that most of the time people do not have conscious access to the sources, content, and even effects of the u-implicit cultural-cognitive processes ensuring their unquestioning acquiescence to the social order (Burawoy 2012). This switch is not clean, and it is unlikely that the theory of implicitness that hovers around the “expertise” side of the issue (linking habitus to skillful action within fields) stands on the same conceptual ground as the one emphasizing unawareness and unconscious “consent” (Bouzanis and Kemp 2020).

While these issues are too complex to deal with here, the conceptual cautionary tale is that it is better to be explicit and granular about implicitness, especially when ascribing this property to a cultural-cognitive element as part of the explanation of how that element links to action.

References

Bouzanis, C., & Kemp, S. (2020). The two stories of the habitus/structure relation and the riddle of reflexivity: A meta-theoretical reappraisal. Journal for the Theory of Social Behaviour, 50(1), 64–83.

Brownstein, M. (2018). The Implicit Mind: Cognitive Architecture, the Self, and Ethics. Oxford University Press.

Burawoy, M. (2012). The roots of domination: Beyond Bourdieu and Gramsci. Sociology, 46(2), 187–206.

DiTomaso, N. (1982). "Sociological reductionism" from Parsons to Althusser: Linking action and structure in social theory. American Sociological Review, 14–28.

Dreyfus, H. L. (2005). Overcoming the myth of the mental: How philosophers can profit from the phenomenology of everyday expertise. Proceedings and Addresses of the American Philosophical Association, 79(2), 47–65.

Dreyfus, S. E. (2004). The five-stage model of adult skill acquisition. Bulletin of Science, Technology & Society, 24(3), 177–181.

Ellenberger, H. F. (1970). The Discovery of the Unconscious. London: Allen Lane.

Gawronski, B., Hofmann, W., & Wilbur, C. J. (2006). Are "implicit" attitudes unconscious? Consciousness and Cognition, 15(3), 485–499.

Haidt, J. (2001). The emotional dog and its rational tail: A social intuitionist approach to moral judgment. Psychological Review, 108(4), 814–834.

Krickel, B. (2018). Are the states underlying implicit biases unconscious? A neo-Freudian answer. Philosophical Psychology, 31(7), 1007–1026.

Pálsson, G. (1994). Enskilment at sea. Man, 29(4), 901–927.

Parsons, T. (1952). The superego and the theory of social systems. Psychiatry, 15(1), 15–25.

Schneider, W., & Shiffrin, R. M. (1977). Controlled and automatic human information processing: I. Detection, search, and attention. Psychological Review, 84(1), 1–66.

Shiffrin, R. M., & Schneider, W. (1977). Controlled and automatic human information processing: II. Perceptual learning, automatic attending and a general theory. Psychological Review, 84(2), 127–190.

Shiffrin, R. M., & Schneider, W. (1984). Automatic and controlled processing revisited. Psychological Review, 91(2), 269–276.

Vaisey, S. (2009). Motivation and justification: A dual-process model of culture in action. American Journal of Sociology, 114(6), 1675–1715.

When Viruses Spread Social Contagion: What Covid-19 Teaches Us About Social Life

The Covid-19 pandemic has brought much grief and anxiety to the world. As deaths from the coronavirus mount and the invisible foe brings wealthy, technologically advanced societies to their knees, the world has learned not to underestimate the shocks viruses can deliver. The present outbreak’s implications are far-reaching to say the least—the virus has allowed some authoritarian leaders to strengthen their grip on societies unmoored in crisis, and we have barely scratched the surface of Covid-19’s impact on global economies.

In the wake of the 2008 financial crisis, Andrew Haldane of the Bank of England and Robert May of Oxford University—writing in a longer tradition of academics and journalists keenly aware of financial contagions—suggested using infectious diseases and biology to better understand and regulate financial systems (Haldane and May 2011). Today’s interconnected markets mean that market disturbances spread across country lines and economic systems as if they were viruses themselves; as the coronavirus swept through the world in a matter of months, financial markets too have rippled in its wake, filling rich and poor people alike with panic.

In a time of deep viral and financial distress, it seems appropriate to ask how biology and contagion can teach us a thing or two about how societies get destabilized—and how we can rally ourselves in response.

Contagion and Society

The question is certainly not new. In the nineteenth century, the British sociologist Herbert Spencer likened society to an organism governed by universal laws of evolution (Spencer 2002). The German sociologist Niklas Luhmann suggested, just over a hundred years later, that modern law operates like an immune system in guarding society’s fundamental norms and rules against challenges (Luhmann 2004). Both found in biology a glimpse into society as a self-regulating system, albeit one that can evolve, mutate, or become infected if things go wrong.

And go wrong they did. In the late nineteenth and early twentieth centuries, social scientists drew heavily on theories of contagion to make sense of modern, rapidly urbanizing societies ravaged by disease and disorder. The word contagion derives from the Latin contagio, meaning "contact" or "touch," and the packed frenzy of that era's burgeoning cities proved the perfect demonstration of contagion's effects on social life. New York City saw its population just about double between 1880 and 1900, while Chicago's more than tripled, providing optimal conditions for a rash of infectious diseases.

Virologists have long recognized that dense cities can quickly become hotbeds of contagion. Social scientists writing at the turn of the twentieth century, however, saw this happening in the psychic realm as well. In 1903, the Russian psychologist and neurologist Vladimir Bekhterev, a rival to the far more famous Ivan Pavlov, wrote in Suggestion and Its Role in Social Life about how "psychic contact" transmits panic much the way "living contact" transmits microbes (Bekhterev 1998). This notion of psychic contagion caused great concern among sociologists at the time, who worried about how dangerous ideas might spread swiftly in modern urban environments, turning societies on their heads.

Edward A. Ross, one of the founders of sociology in America, even saw in the myriad “mental contacts” that city dwellers receive daily a grave threat to democracy (Ross 1897). He felt that people would lose their ability to make political decisions in society’s best interest as they get overwhelmed by the city’s avalanche of cues and suggestions, enhanced by the press and modern communication technologies. Ross advocated physical measures aimed at promoting social distancing, including architecture designed to give people more space and carve out the mental and physical breathing room necessary for considered political engagement (Ross 1908).

The parallels between Ross’s measures and our own against Covid-19—ranging from quarantines and lockdowns to travel bans and stranded cruise ships—are uncanny today. Physical social distancing clearly works against contagious viruses, though it requires discipline and broad support to be effective. Mental contagion, on the other hand, is a far trickier foe, not least because it is sometimes good for us. When people sing together from their balconies amid national lockdowns or support one another on the sprawling networks of social media, the sense of solidarity this provides can uplift, comfort, and inspire hope. 

Still, mass reactions such as panic buying and hoarding can shred the fragile fabric of society during trying times. When people put their own interests before those of others or join agitators in discriminating against minority groups, the solidarity that anchors democracy is sorely tested.

The Covid-19 pandemic has clearly revealed the Janus-faced nature of social and biological contagion in modern society. Of course, we are used to thinking of societies, cities, and economies as susceptible to contagion by now. But the current crisis reminds us that how we use the sweeping power of social contagion is ours to own, even if it is provoked by the biological. As governments and communities work to protect us from coronavirus via social distancing and financial resuscitation, we may perhaps consider, too, how each of us might turn the force of contagion against itself.

Christian Borch is Professor of Economic Sociology and Social Theory at the Copenhagen Business School, Denmark. His latest book is Social Avalanche: Crowds, Cities and Financial Markets (Cambridge University Press, 2020).

References

Bekhterev, Vladimir M. 1998. Suggestion and Its Role in Social Life, trans. Tzvetanka Dobreva-Martinova. New Brunswick and London: Transaction Publishers.

Haldane, Andrew G., and Robert M. May. 2011. “Systemic risk in banking ecosystems.” Nature 469(7330): 351–55.

Luhmann, Niklas. 2004. Law as a Social System, trans. Klaus A. Ziegert. Oxford: Oxford University Press.

Ross, Edward A. 1897. “The Mob Mind.” Popular Science Monthly July:390-98.

—. 1908. Social Psychology: An Outline and Source Book. New York: Macmillan.

Spencer, Herbert. 2002. The Principles of Sociology. New Brunswick and London: Transaction Publishers.

Durkheimian Sociology and its Discontents, Part II: Why Culture, Social Psychology, & Emotions Matter to Suicide

In a previous post, I argued that despite its importance and "classical" status, sociologists have not contributed to the study of suicide as much as they could. While Anna Mueller and I have yet to posit a general or formal theoretical statement on suicide, in this post I attempt to distill the basic theoretical ideas we have been developing for the last five years. Our work began as an effort to "test" Durkheim (Abrutyn and Mueller 2014; Mueller and Abrutyn 2015), but, very rapidly, our first quantitative studies led us to write the first of four theoretical pieces, formalizing the contagion theory of Durkheim's arch-nemesis, Gabriel Tarde (Abrutyn and Mueller 2014a). We eventually concluded that the data we needed did not exist, and, through some luck, we found a field site where we could begin qualitatively assessing our evolving sociological view of suicide (Mueller and Abrutyn 2016). This fieldwork led to three other theoretical pieces that build on, and go far beyond, the Tarde piece to emphasize how culture, social psychological dynamics, and emotions shape suicidality (Abrutyn and Mueller 2014b, 2016, 2018), particularly its diffusion and clustering.

Cultural Foundations

In the 1960s, Jack Douglas (1970) offered an important critique of the conventional Durkheimian approach to suicide, arguing that suicide statistics were questionable due to various professional and personal issues surrounding medical examiners' and coroners' work. His larger point was that phenomenological meanings mattered more than suicide rates. About a decade later, David Phillips (1974) presented compelling evidence that audiences exposed to media reporting of suicide were at risk of temporary spikes in suicide rates; e.g., U.S. and British suicide rates jumped 13% and 10%, respectively, following the publicization of Marilyn Monroe's suicide. We argue that there are important lessons to be gleaned from these two divergences from classic Durkheimian sociology.

First, meanings matter. Meanings are located in (1) general societal schemas available to most people, (2) localized cultural codes that draw from and refract these general schemas to make sense of the actual experiences of a group of people inhabiting a delimited temporal and geographic space, and (3) the idiosyncratic schemas any person in that group possesses, built from their own biography and experiences. A small but growing body of historical (Barbagli 2015), anthropological (cf. Chua 2014; Stevenson 2014), and cultural psychological (Canetto 2012) research confirms this. For instance, research on Canadian indigenous communities, where the suicide rate can be six times the Canadian average, found that youth in one community explain their own suicidality as a means of belonging (Niezen 2009), a counterintuitive finding for sociologists who think of integration as healthy. Nevertheless, these studies stop short of moving beyond broad-stroke assessments of culture. Meanings are, after all, made real, embodied, and crystallized in social relationships; and, thus, social relationships, as Durkheim argued, but not quite how he imagined, matter too.

The Meaning and Meaningfulness of Social Relationships

The connection between social relationships and suicide, as studies using network principles have shown, has a structural side (Bearman 1991; Pescosolido 1994; Baller and Richardson 2009), yet social relationships are eminently cultural in form and content as well. They are the social units in which cultural meanings emerge, spread, become available/accessible/applicable, and are stored.

Not surprisingly, and contrary to epidemiological and psychological accounts that favor a "disease" model approach to suicide "contagion," our work has shown that network ties are only one factor: having a friend tell you about their suicidality can lead you to develop new suicidal thoughts (Mueller and Abrutyn 2015) and, in the case of girls, new suicidal behaviors. At the relational level, the general and local cultural mechanisms are further refracted. The direct, reciprocal nature of these ties makes culture real, imbuing it with affect (Lawler 2002). This increases the odds that codes will be internalized and integrated with existing understandings of suicide and, ultimately, mobilized in how people interpret events or situations, make sense of their own problems, and consider options for resolving said problems. In particular, it is the emotional dimension of culture and social relationships that adds the final ingredient to my vision of the future of the sociology of suicide.

The Final Ingredient: Emotions

Since the 1970s, sociologists of emotions, drawing from Cooley's insights, have argued that social emotions like shame, guilt, or pride act as powerful social forces (cf. Turner 2007 for a review). Externally, social emotions are used as weapons to control others' behavior, ranging from public degradation ceremonies used to humiliate and restore order, to mundane rituals of deference and demeanor, to gossip. The self is a social construct insofar as the primary groups in which we are socialized provide meanings that come to make up our (1) self-concept or "global" sense of self. Our self is our most cherished possession, as it provides a sense of anchorage across social situations. We then develop new meanings anchored in (2) relationships with specific others (role identity), (3) membership in various collectives (group identity), and (4) status characteristics that (a) identify us as belonging to one or more categorical units (age, race, sex, occupation) and, therefore, (b) obligate or expect us to perform in certain ways and receive certain amounts of rewards and deference (social identity); these meanings are grafted onto our self-concept or become situationally activated.

Social emotions are an evolutionary adaptation (cf. Turner 2007; Tracy et al. 2007). While all animals feel anger (fight) and fear (flight), and mammals also feel various degrees of sadness and happiness, shame and pride seem uniquely human because, as the Adam and Eve story teaches us, or as our own children's ease with nudity shows us, the meanings necessary for eliciting them must be learned. That is because they involve imagining what others, especially significant others, think of us: not just of our behavior, but of our cherished self. Pride means we have lived up to the imagined expectations and obligations of those we care about (and they are often imagined insofar as they are not accurate reflections of the real thing). Shame is the opposite: we are a failure, contemptible in the eyes of others, deficient, and even polluting. Clinical research finds shame particularly painful, often verbalized in expressions of feeling small, wanting to hide, or phrases like "tear my skin off" or "mortified" (Lewis 1974; Retzinger 1991).

Mortification refers to the death of the self; thus, shame is the signal that the self is dying, decaying, or, as with chronic shame among violent prisoners, dead (cf. Gilligan 2001). Emotions are the bridge between the structural and cultural milieus we live in and the identities that anchor us in relationships. They saturate cultural meanings such that some become more relevant and essential to our identity (LeDoux 2000). Our memory, and therefore our biography, is impossible without emotions, as events "tagged" with more intense emotions are more easily recalled than those that did not elicit intense, long-lasting feelings (Franks 2006). It stands to reason that the next frontier in a sociology of suicide that takes culture and microsociology seriously is one that also mixes social emotions into the theoretical "pot."

In this spirit, Part III will shed light on where the sociological study of suicide can and should go if we are to reclaim our seat at the table in offering understanding and explanation, and if we are to become truly public by contributing to suicide prevention and to postvention efforts, or those that work with (individual or collective) survivors in the aftermath of a suicide.

Durkheimian Sociology and its Discontents: Why it's Time for a New Sociology of Suicide

Since Durkheim showed that certain social structural factors, external to the individual, had a strong positive relationship to variation in suicide rates, sociologists have maintained that suicide is caused by social forces and, therefore, is a phenomenon squarely in the domain of sociology. Yet western medical professionals (Marsh 2010) and the average person (Lake et al. 2013) continue to "explain" suicidality mainly via psychological factors: primarily mental illness or disorder, or the cognitive appraisals favored by psychology and psychiatry, like depression, burdensomeness, and hopelessness (Cavanaugh et al. 2003).

As is often the case, sociologists have done little to argue for the value of their science. Since 1980, sociology has published the second-fewest studies (405) on suicide, and it's not even close (psychiatry has published 9,951, while molecular biology (!) has produced 1,316) (Stack and Bowman 2012:4). When sociologists do study suicide, they overwhelmingly favor retesting Durkheim's 19th-century theses in order to weigh in on the classic's continued value, as journals love papers that use new data or analytic strategies to test old, foundational ideas (Wray et al. 2011). This does little to advance the sociological science of suicide or to support sociology's contribution to understanding, explaining, or preventing suicide.

Nevertheless, suicide remains an important phenomenon for sociology. Not only does it constitute a serious social problem—perhaps more urgent today than in Durkheim’s day—it also speaks to theoretical questions central to cultural sociology; particularly one trying to integrate contributions from the cognitive social sciences.

Because suicide is a social act, replete with meanings about why people die by suicide and who we expect to die by suicide, it is fair to ask how people come to acquire proscriptive suicide meanings that make them more vulnerable to suicidality. Of equal importance are questions about how attitudes become actions: myriad studies show that while ideation is a risk factor for attempting suicide, the two are not neatly linked, as most ideators will never attempt suicide (Klonsky and May 2015).

In short, studying suicide presents opportunities for expanding how sociology makes sense of human behavior because it is a performance that evokes meaning in both the actor and her intended/unintended audience. In most cases, the actor, herself, must overcome the severest of prohibitions, ranging from biogenetic safeguards to informal norms and formal laws. And yet, suicide still occurs; it tends to cluster in certain physical and temporal spaces (Haw et al. 2013; Niedzwiedz et al. 2014); and, its diffusion from one person to the next has been empirically verified for nearly five decades, but remains almost completely unexamined in sociology (for exceptions, see my work with Anna Mueller [Abrutyn and Mueller 2014; Mueller and Abrutyn 2015; Mueller et al. 2014], in addition to Baller and Richardson 2002, 2009; Bjarnason 1994).

A follow-up post will offer a new framework setting out what Anna and I have argued, and what our work suggests, as the agenda for a reinvigorated sociological science of suicide. This framework is synthetic, leveraging the powerful insights of cultural sociology, social psychology, and, especially, the sociology of emotions. At various points, these subfields intersect in ways that provide pathways for sociology to reclaim its place at the table in explaining suicide and contributing to its prevention. Moreover, because of both the unique qualities of suicide and those it shares with other social behavior, there is hope that this move toward synthesis will complement current debates and discussions about why people feel, think, and do what they do.

To Feel or Not to Feel? That is No Longer the Question

It is highly likely that most readers recall learning about Phineas Gage, a railroad worker who, in 1848, had the misfortune of having a 3-foot-7-inch, 13-pound iron rod (with a diameter of 1¼ inches) driven through his head. The rod entered under his left cheekbone, passed behind his left eye, and exited through the top of his skull. What was exceptional in all of this was that Gage did not die from the injury; he lived for nearly 12 more years. Considering the state of medical knowledge and technique, this was a rather incredible and improbable survival, and I would bet that is what most people remember about his story.

Yet, for a theorist and sociologist, there is much, much more to this anecdote than the sensational. His memory, for instance, was discernibly unaffected, but the injury, by the accounts of both former employers and professionals "trained" in the "psychology" of yore, had somehow peeled back the protective human layers of socialization. That is, he was described as vacillating between his "intellectual faculties" and "animal propensities"; his behavior and language could be "coarse," "vulgar," and offensive to any "decent" people he might encounter. In spite of this, he spent seven of the remaining 12 years of his life in Chile working as a long-distance stagecoach driver, a job which, in 1852, would have demanded considerable cognitive skill given its temporal, physical, and social demands. He was clearly successful.

What can we learn from this case? On the surface, probably not much. A debate among contemporary neuroscientists centers on how much we can infer from imaging reconstructions based on his skull, with no direct empirical evidence about the brain itself. Gage's former employers may have maligned his reputation to protect their financial interests; doctors of the day were rarely scientific in their orientation or beholden to a professional association backed by the force of legislation; and psychology was barely in its infancy. Nonetheless, it is not incorrect to say that damaging the brain, in most cases, leads to changes in behavior and personality.

But what does Gage have to do with sociology and cognition? His case, and others that would follow in the early 20th century, inspired a body of research examining brain lesions, particularly of the prefrontal cortex, which is heavily implicated in rational decision-making. For instance, in one of many experiments, Bechara, Damasio, Damasio, and Anderson (1994) provided "normals" and lesion patients with a $2,000 loan, four decks of cards, and some basic instructions: don't lose money, but make money if possible. Turning a card in pile A or B rewarded $100, while C and D rewarded only $50. The catch: some cards in A and B, unbeknownst to the player, demanded a sudden high payment (e.g., $1,250), while C and D, on occasion, asked only for small, modest payments (e.g., $100). Normals began by sampling all the decks, showing preferences for A and B at first, but gradually learned that C and D were the better bets. Those with damaged brains, however, started the same way but did not switch to C and D, no matter how many times they went bankrupt.
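The payoff asymmetry the players had to learn boils down to simple expected values, which can be sketched with a short simulation. The parameters below are illustrative stand-ins loosely following the figures in the text, not the exact reward schedule used by Bechara et al.:

```python
import random

def avg_payoff_per_card(reward, penalty, penalty_prob, draws=10_000, seed=42):
    """Estimate a deck's average net payoff per card by repeated sampling."""
    rng = random.Random(seed)
    total = 0
    for _ in range(draws):
        total += reward                 # every card pays the deck's reward
        if rng.random() < penalty_prob: # occasionally a payment is demanded
            total -= penalty
    return total / draws

# "Bad" decks (A/B): large rewards but occasional large payments.
# "Good" decks (C/D): modest rewards and occasional small payments.
bad = avg_payoff_per_card(reward=100, penalty=1250, penalty_prob=0.1)
good = avg_payoff_per_card(reward=50, penalty=100, penalty_prob=0.1)
print(f"bad decks: ${bad:.0f}/card, good decks: ${good:.0f}/card")
```

Under these assumed parameters, the flashy decks average about $100 − 0.1 × $1,250 = −$25 per card, while the modest decks average about $50 − 0.1 × $100 = +$40; this is the asymmetry normals eventually learned and lesion patients did not.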

From a series of follow-up experiments meant to tease out specific hypotheses about rewards and punishments, and from his own clinical work with lesion patients, Antonio Damasio (1994) cogently posited what was, at the time, a revolutionary thesis: reasoning and rationality are inextricably entwined with emotions. The classic Cartesian model of mind v. body that undergirds seemingly false (but commonly, often unconsciously, accepted) dichotomies like rationality v. irrationality and cognition v. emotion collapses under the weight of empirical evidence.

This seems eminently sensible. Marketers draw on psychology to appeal not only to our cool rationality but to our feelings and sentiments. We choose Crest or Colgate, Ford or Toyota, and so forth based on emotions, no matter how much “instrumentality” we employ in the decision-making process (see, for example, Camerer 2007). These, of course, are mundane, arbitrary decisions; imagine extending this thesis to far more complex decisions, like choosing a partner, reciprocating a gift, or making amends. It seems true that we can only make big decisions when our brain’s neural systems are linked up and our emotion centers are communicating with other regions of the brain (LeDoux 2000).

So, for instance, as information enters the brain it is routed to the hippocampus, where it is converted into memories and indexed as either semantic or episodic. The former are general “facts” about things, people, events, and so forth that escape temporality, whereas the latter are person-specific memories with time-stamps. Our self, then, is rooted in memories that are both generalized and specific. At the same time, this information is fed into the amygdala and tagged with an affective valence and intensity, making memories more or less relevant to one’s self—that is, more intensely tagged memories are easier and more likely to be recalled. And, if the most self-relevant information comes from interactions with significant others, then the most basic unit of social organization – the human relationship – is anchored in affective moorings (Lawler et al. 2008; Cozolino 2014).
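The indexing scheme described above can be caricatured as a simple data structure: memories typed as semantic or episodic, tagged with an affective intensity, with recall favoring the most intensely tagged. This is strictly a didactic toy under the passage’s own assumptions, not a model of neural processing; the example memories and intensity values are invented for illustration.

```python
# Toy sketch of the memory-indexing claim above: semantic vs. episodic
# typing, an affective intensity tag, and intensity-weighted recall.
# All contents and numbers are hypothetical illustrations.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Memory:
    content: str
    kind: str                        # "semantic" (general fact) or "episodic" (time-stamped)
    intensity: float                 # affective tag; higher = more self-relevant
    timestamp: Optional[str] = None  # only episodic memories carry one

def recall(store, n=2):
    """Return the n most intensely tagged memories: the claim is that
    strongly tagged memories are easier and more likely to be recalled."""
    return sorted(store, key=lambda m: m.intensity, reverse=True)[:n]

store = [
    Memory("Paris is a capital city", "semantic", intensity=0.1),
    Memory("my wedding day", "episodic", intensity=0.9, timestamp="2003-06-14"),
    Memory("an argument with a close friend", "episodic", intensity=0.7,
           timestamp="2021-02-02"),
]

print([m.content for m in recall(store)])
# The affect-laden, self-relevant episodes surface first.
```

The design choice mirrors the argument: retrieval is ordered by the affective tag, not by recency or by semantic/episodic type, which is the mechanism the passage attributes to the amygdala’s tagging.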

In particular, knowledge about the social self (semantic autobiographical knowledge) is formed in episodes, tagged with powerful affect, and confirmed or activated frequently in encounters, and so it, too, comes to be generalized. It differs from the other two types of memory, however, in that it activates regions of the brain normally distinct from those they do; that is, it remains rooted in the emotion centers. This is what makes our global sense of self feel stable and consistent over long durations and, moreover, what drenches appraisals of our own actions, as well as those of others, in affect (Turner 2007). In more familiar sociological terms, this means that goal setting, strategizing, habit, decision-making, selfing, and minding are all saturated with emotions (Franks 2006).

Memory works because of emotions; our senses work because of emotions; and the construction, maintenance, alteration, and destruction of self depend on our brain’s emotional neuroarchitecture as much as on the social environment’s input. Thus, if we as sociologists are to take cognitive science seriously, then we must also take seriously the role emotions play in action and organization.