Durkheimian Sociology and Its Discontents: Why It’s Time for a New Sociology of Suicide

Since Durkheim showed that certain social structural factors, external to the individual, had a strong positive relationship to variation in suicide rates, sociologists have maintained that suicide is caused by social forces and is therefore a phenomenon squarely in the domain of sociology. Yet western medical professionals (Marsh 2010) and the average person (Lake et al. 2013) continue to “explain” suicidality mainly via psychological factors: mental illness or disorder, or the cognitive appraisals favored by psychology and psychiatry, such as depression, burdensomeness, and hopelessness (Cavanaugh et al. 2003).

As is often the case with sociology, sociologists have done little to argue for the value of their science. Since 1980, sociology has published the second-fewest studies (405) on suicide, and it’s not even close: psychiatry has published 9,951, while molecular biology (!) has produced 1,316 (Stack and Bowman 2012:4). When sociologists do study suicide, they overwhelmingly favor retesting Durkheim’s 19th-century theses in order to weigh in on the classic’s continued value, as journals love papers that use new data or analytic strategies to test old, foundational ideas (Wray et al. 2011). This does little to advance the sociological science of suicide or to support sociology’s contribution to understanding, explaining, or preventing suicide.

Nevertheless, suicide remains an important phenomenon for sociology. Not only does it constitute a serious social problem—perhaps more urgent today than in Durkheim’s day—it also speaks to theoretical questions central to cultural sociology, particularly one trying to integrate contributions from the cognitive social sciences.

Because suicide is a social act, replete with meanings about why people die by suicide and who we expect to die by suicide, it is fair to ask how people come to acquire proscriptive suicide meanings that make them more vulnerable to suicidality. Of equal importance are questions about how attitudes become actions: myriad studies show that while ideation is a risk factor for attempting suicide, the two are not neatly linked, as most ideators will never attempt suicide (Klonsky and May 2015).

In short, studying suicide presents opportunities for expanding how sociology makes sense of human behavior because it is a performance that evokes meaning in both the actor and her intended/unintended audience. In most cases, the actor herself must overcome the severest of prohibitions, ranging from biogenetic safeguards to informal norms and formal laws. And yet suicide still occurs; it tends to cluster in certain physical and temporal spaces (Haw et al. 2013; Niedzwiedz et al. 2014); and its diffusion from one person to the next has been empirically verified for nearly five decades, yet remains almost completely unexamined in sociology (for exceptions, see my work with Anna Mueller [Abrutyn and Mueller 2014; Mueller and Abrutyn 2015; Mueller et al. 2014], in addition to Baller and Richardson 2002, 2009; Bjarnason 1994).

A follow-up post will offer a new framework, building on what Anna and I have argued and what our work suggests, as the agenda for a reinvigorated sociological science of suicide. This framework is synthetic and leverages the powerful insights of cultural sociology, social psychology, and, especially, the sociology of emotions. At various points, these subfields intersect in ways that provide pathways for sociology to reclaim its place at the table for explaining suicide and contributing to its prevention. Moreover, because of both the unique and the shared qualities suicide has relative to other social behavior, it is hoped that this move towards synthesis will complement the current debates and discussions surrounding why people feel, think, and do what they do.

On the Nature of Habit

Recently, however, some philosophers have begun to pay attention to habits. An example is a series of papers by Bill Pollard starting in the mid-aughts (Pollard, 2006a, 2006b) and, more recently, work by Steve Matthews (2017). Pollard tackles some fundamental issues, arguing (positively) for habit-based explanations of action as a useful addendum to (if not replacement for) folk-psychological accounts (along the lines of previous posts). Here I’d like to focus on Matthews’s more recent work, which deals with the core characteristics that make something a habit.

One useful (implicit) message in this work is that consistent with the modern notion of concepts in cognitive semantics, habits are a radial category. Rather than being a crisp concept with necessary and sufficient conditions of membership, habits are a fuzzy concept, with some “core” or “central” exemplars that share most of the features of habits, and some “peripheral” members that only share some features.

Most anti-habit theorists (with Kant and Kant-inspired theorists such as Parsons being primary examples) equate habit with mindless compulsion and use this equation to expunge habit from the category of action. Critiques of habit theories can thus be arranged on a strength gradient depending on which element of the radial category they decide to focus on. The weakest critiques pick peripheral members, passing them off as “prototypes” for the whole category. Peripheral members of the habit category, such as tics, reflexes, addictions, and compulsions, tend to share few features with action that is experienced as intentional. It is thus easy for these critics to deny habit-based behavior the characteristic that we usually reserve for “action” proper.

Much like American sociological theory post-Parsons (Camic, 1986), habits have been given short shrift in the analytic philosophy of action tradition. As noted in previous posts, one problem is that habit-based explanations, being a form of dispositional account of action, are hard to reconcile with dominant intellectualist approaches to explaining action. The latter require recourse to the usual “psychological” apparatus of reasons, intentions, beliefs, and desires. In habit-based explanations, actions are instead accounted for by reference to their own tendencies to reliably recur in specific environments given the agent’s history. This makes them hard to square with typical folk-psychological explanations, in which “mental” items are the presumed causal drivers.

Matthews’s argument is that the core or prototypical members of the habit category are what Marcel Mauss called techniques: skilled ways of performing an action proficiently, acquired via an enskillment process requiring training and repetition. These include both “behavioral” skills (e.g. playing the piano, typing, riding a bike) and “cognitive” or “mental” skills, although the latter are less central members of the habit category for most people. In this respect, most bona fide habits are mindful, without necessarily being intentional in the folk-psychological sense. They also have five core features, which I discuss next.

Habits are socially shaped.- This might seem obvious. However, there is a tendency in some corners of social theory to think of habit-based accounts as somehow imposing an “individualistic” explanatory scheme. Some people decry, while others celebrate (Turner, 1994), the alleged commitment to individualism that comes with habit-based accounts of action. This conception is misguided. Matthews is correct in noting that core (prototypical) habits are hardly individualistic, since they comprise culturally transmitted “techniques” for how to do things (Tomasello, 1999). That each person could have their own way (say, of typing or swimming) does not make habits purely individual, since they would not be constructed or transmitted if people were Crusoe-like isolates. Instead, most true habits, as revealed by recent sociological “apprenticeship ethnographies,” require the embeddedness of the individual in some pedagogical context (for most children, this being the family). In this way, most habits are “relational” in a fairly straightforward sense.

Habits are acquired through repetition.- Another one that seems obvious. Nevertheless, I believe this point is more consequential than meets the eye. Recent work emphasizing the roots of so-called “dual process” models in theories of learning and memory suggests that routes of cultural acquisition (ideal-typically “fast” and one-shot versus “slow” and high-repetition) are a key way of partitioning different cultural elements, namely propositional “beliefs” from non-declarative practices. Habits, having only the slow route of acquisition open to them, belong to the latter; hence the relatively harmless analytic equation of habits and practices. This criterion also serves to demarcate degenerate or borderline examples of the habit radial category, such as phobias acquired after a single exposure to a threatening object (e.g., fear of dogs after a dog bite), which depend on analytically and physiologically distinct neural substrates. These we can safely rule out as robust members of the habit category based on the acquisition-history criterion.

Habits modify people in durable ways.- As Mike and I have noted (Lizardo & Strand, 2010), this criterion serves to demarcate “strong” habit or practice theories from theories that purport to pay attention to practices but from which embodied agents, with their own inertia and history of habituation, seem to be absent. Commitment to habit as an explanatory category entails commitment to persons as causally powerful particulars who have been modified by habit in a durable way. This makes a durable habit a disposition to behave in such-and-such ways under certain circumstances. Durable modification also entails making conceptual room for the fact that, once acquired, habits are hard to get rid of. So it is usually easier to “refunctionalize” a habit (e.g. take an old habit and put it to use for new purposes) than to completely retool.

Since habits operate according to a Hebbian “use it or lose it” rule, it is possible for habits to atrophy and decay. However, this decay is relatively graceful and gradual, not fast and sudden. In addition, the previous acquisition of a habit entails faster re-acquisition even when that habit has been weakened or partially lost. This is behind the folk idea that many things are “like riding a bike”: they come back more easily when you try them again, even after a period of disuse.

Considering the “second nature” created by habit means we need to differentiate the temporality of habit (acquisition, use, rehearsal or decay) from the temporality of “macro” social life, as these may not always be in sync; habits will try to persevere even under changing or adverse conditions (Strand & Lizardo, 2017). Durable modification also links nicely to classic sociological notions on the power of “cohorts” to enact social change as history is “encoded” in individuals (Bourdieu, 1990; Ryder, 1965; Vaisey & Lizardo, 2016).

Habits are activated by environmental cues and triggers.- This is one of the better-documented empirical regularities in the psychology of action (Ouellette & Wood, 1998). Yet its meager representation in sociological action theory as an explanatory tool is telling, despite sociologists’ obvious preference for environmental over attribute-based explanations. Perhaps part of the problem is conceptual; thinking of the environment as a “trigger” may raise fears of taking voluntarism (or, as we call it today, “agency”) out of the equation, thus producing a unidimensional theory of action that reduces action to “conditions” (Parsons, 1937). Yet this fear is unfounded.

First, most people can prospectively plan to enter an environment they know will trigger a habit. For instance, we may set up our work space in the office in a way that facilitates the evocation of the “writing” habit. Second, agents can actively perceive that certain situations have certain “moods” or affordances, and they welcome that these trigger reliable (usually pleasant) habits. For instance, a social butterfly can actively perceive that a cocktail party will be good for triggering the complex of habits making up their “outgoing” personality. These have “negative” versions as well: we avoid certain environments precisely because we know they’ll trigger a habit we would rather let atrophy or decay. There’s no reason to think of the triggering function of environments in purely mechanical ways.

Third, that habits are automatically triggered by environmental cues does not impugn their link to goal-oriented action. In fact, habits can be thought of as a way to facilitate the pursuit and attainment of goals. It is a Parsonian prejudice to presume that the only way to pursue goals is to “picture” them reflectively before the action is initiated and then deploy “effort” to get moving. In fact, this effortful control of action may be subject to more disruption (and thus failure in the attainment of goals) than when agents “offload” the control of action to the environment via habit. In the latter case, goals can be pursued efficiently in a way that is more robust to environmental disruption and entropy.

Habits partake of certain conditions of “automaticity”.- That habits are “automatic” also seems self-evident. However, this can also be conceptually tricky. The problem is that automaticity is not a molar concept; instead, it decomposes into a variety of features, some of which can vary independently (Moors & De Houwer, 2006). This can lead to semantic ambiguity, because different theorists may emphasize different aspects of habitual action when they use the term “automatic” to refer to it.

As already intimated earlier, for prototypical habits, the automaticity feature that most people have in mind is efficiency. After acquiring a habit via lots of repetition, people gain proficiency in performing the action. This means that the action can be performed faster and more reliably. Another aspect of efficiency is that we no longer have to monitor each step of the action; instead, the action can be performed while our attentional resources are freed to do something else. For instance, experienced knitters can become so efficient at knitting that they can do it while reading a book or watching TV.

However, other theorists may take efficiency for granted and point to other features of automaticity as definitional of habitual action. The most controversial of these is the link to intention. For some, habits are automatic because they are patterns of behavior that, via the environmental-trigger condition mentioned above, bypass intention. This leads to a sometimes counterproductive dualism between “intentional action” and “habit.” I believe a better solution is to think of habitual action as having its own form of non-representational “intentionality” (Pollard, 2006b). Driving a car or riding a bike is intentional action with its own feel; the difference from reflexive intentional action is that representing each step of the action is not required (Dreyfus, 2002).

As noted earlier, the feature of automaticity that makes the weakest criterion for defining prototypical habits is (lack of) goal dependence. Most habits are not automatic by this criterion, since most habitual action is action for something. Habits without goals (e.g., twirling your hair, tapping your fingers) exist, but they are actually fairly peripheral members of the category. In accord with the pragmatist conception, most habits exist because they help the agent accomplish their goals. As mentioned earlier, most goals are reached via habitual action rather than by reflexive contemplation of ends and effortful initiation of action.

Other features of automaticity are even more peripheral for fixing the nature of habit. Consider, for instance, the feature that Bargh refers to as “control”: whether the agent can “stop” an action sequence once it is started. In this sense, prototypical habits (playing the piano, specifying a regression model) are “controlled,” not automatic, actions (Pollard, 2006b, p. 60). Skills and procedures, especially those that are narratively extended in Matthews’s (2017) sense, are all “stoppable” by the agent and so don’t count as automatic by this criterion. Complete incapacity to stop a line of action applies only to peripheral members of the habit category (e.g., reflexes, phobias, etc.) and probably pertains to habitual actions with short temporal windows.

Note that this is distinct from whether habits are “intentional” as described above. Most habits may fail to be intentional (in the classical sense) because they are triggered by the environment, but they can be controlled because the agent (if they have the capacity) can stop them once triggered. This is why it is useful to keep the different features of automaticity separate when thinking about the nature of habit.

Nevertheless, the issue of controllability brings up interesting conceptual problems for habit theory. These have been sharply noted in a series of papers by the philosopher Christos Douskos (to be the subject of a future post). The basic issue is that categorizing an action as a “habit” may be separable from its status as a “skill.” Basically, we have lots of skills that do not count as habitual (remaining in “abeyance,” so to speak), and some habits that are not skillful. Overall, the ascription conditions for calling a pattern of action a habit may be more holistic, and thus empirically demanding, than pragmatist and practice theories suppose, because they do not reduce to features inherent in the action or its particular conditions of acquisition.

How about the “unconscious” nature of some automatic actions? Only degenerate or peripheral members of the habit category are “unconscious.” This refers to whether the person reflexively knows that they are performing the action. Once again, for some peripheral members of the category (cracking your knuckles while engrossed in some other activity) this may apply, but it is unlikely to apply to prototypical skills and procedures (we are all aware of driving, typing, etc.). Some people point to “mindless” habit-driven actions as having this feature, such as driving to work when we meant to drive to the store. Here, however, it is unlikely that the person was unconscious of performing the action. So the lapse seems to have been a failure to exercise control (e.g. stopping the habit because it was not the one properly linked to the initial goal) rather than a lack of consciousness per se.

Other theorists emphasize unconscious cognitive habits, and maybe for these, this feature is more central than for more prototypical behavioral habits and procedures. Even here, however, unconscious cognitive habits may have the potential to become “conscious” (e.g. the person knows of their existence qua habits) without losing the core automaticity features defining their habitual nature (e.g. the fact they are efficient means to the accomplishment of certain cognitive goals). Overall, however, while most habitual action does rely on subpersonal processes embedded in the cognitive unconscious, most habits are performed in a “mindful” manner (without implying reflexive self-consciousness). As such, they are not automatic actions by this criterion.

References

Bourdieu, P. (1990). The logic of practice. Stanford University Press.

Camic, C. (1986). The Matter of Habit. The American Journal of Sociology, 91(5), 1039–1087.

Dreyfus, H. L. (2002). Intelligence Without Representation–Merleau-Ponty’s critique of mental representation the relevance of phenomenology to scientific explanation. Phenomenology and the Cognitive Sciences, 1(4), 367–383.

Lizardo, O., & Strand, M. (2010). Skills, toolkits, contexts and institutions: Clarifying the relationship between different approaches to cognition in cultural sociology. Poetics, 38(2), 205–228.

Matthews, S. (2017). The Significance of Habit. Journal of Moral Philosophy, 14(4), 394–415.

Moors, A., & De Houwer, J. (2006). Automaticity: a theoretical and conceptual analysis. Psychological Bulletin, 132(2), 297–326.

Ouellette, J. A., & Wood, W. (1998). Habit and intention in everyday life: The multiple processes by which past behavior predicts future behavior. Psychological Bulletin, 124(1), 54.

Parsons, T. (1937). The Structure of Social Action. New York: Free Press.

Pollard, B. (2006a). Action, Habits, and Constitution. Ratio, 19(2), 229–248.

Pollard, B. (2006b). Explaining Actions with Habits. American Philosophical Quarterly, 43(1), 57–69.

Ryder, N. B. (1965). The cohort as a concept in the study of social change. American Sociological Review, 843–861.

Strand, M., & Lizardo, O. (2017). The hysteresis effect: theorizing mismatch in action. Journal for the Theory of Social Behaviour, 47(2), 164–194.

Tomasello, M. (1999). The Human Adaptation for Culture. Annual Review of Anthropology, 28(1), 509–529.

Turner, S. P. (1994). The Social Theory of Practices: Tradition, Tacit Knowledge, and Presuppositions. University of Chicago Press.

Vaisey, S., & Lizardo, O. (2016). Cultural Fragmentation or Acquired Dispositions? A New Approach to Accounting for Patterns of Cultural Change. Socius, 2, 2378023116669726.

Culture, Cognition and “Socialization”

Culture and cognition studies in sociology are mainly concerned with the construction, transmission, and transformation of shared stocks of knowledge. This was clear in the classical theoretical foundations of contemporary work in the sociology of culture laid out in Parsons’s middle-period functionalism (Parsons 1951) and in Berger and Luckmann’s decisive reworking of the Parsonian scheme from a phenomenological perspective (Berger and Luckmann 1966). In both traditions, the process of the transmission of knowledge and, with it, the creation and recreation of both conventional and novel forms of meaning was thought of as of the utmost importance. This was usually referred to, in its intergenerational aspect, as “socialization” (of newcomers into the established culture).

Despite its acknowledged importance, contemporary culture and cognition scholars in sociology have seldom laid out explicitly the consequences of taking cognition seriously for understanding socialization processes. The result is that sociologists live in a conceptual halfway house, with some misleading remnants of the functionalist and phenomenological traditions on socialization still forming part of the core conception of this process. This is coupled with the fact that, save for some signal exceptions (Corsaro and Rizzo 1988; Pugh 2009), sociologists seldom study children and primary socialization processes directly and thus lack a consistent body of empirical work with which to move theorizing forward. This is in contrast to the growing body of “apprenticeship ethnography” work that does deal with the issue of “secondary” socialization of adults (usually the ethnographer themselves) into new settings (e.g. Wacquant, Mears, Desmond, Winchester, etc.).

Outside of sociology, there is work, done under the broad umbrella of “psychological” or “cognitive” anthropology, that has dealt with the relevance of cognition to primary socialization processes in a more or less direct way. This work, despite its limitations, can serve as a good exemplar for sociologists as to the analytic benefits of this approach to the socialization process. Here I would like to focus on the exemplary work of anthropologist Christina Toren (2005) who provides one useful example of how a cognitive approach to the study of culture and socialization can be deployed in a profitable way. In particular, Toren’s work challenges the hegemonic account of socialization that pervades sociological thinking on the culture and cognition link while providing valuable starting insights to build on.

Toren notes that traditional anthropological and sociological theories of socialization presume that “with respect to cognition, to their grasp of particular concepts, children simply become—with perhaps some minor variations—what their elders already are” (1993, 461). Toren castigates this account for being “a-historical.” She points to studies of language acquisition that call into question the assumption that socialization consists in the transmission of ready-made models of adult culture to children. These studies show that children do not simply acquire the linguistic categories of the parental generation ready-made but rather engage in their own creative reconstruction of these categories (for recent work on this score, see Tomasello 2005). In Toren’s view, “human cognition is a historical process because it constitutes—and in constituting inevitably transforms—the ideas and practices of which it appears to be the product” (1993: 461-462).

For Toren, to move beyond the restrictive account of socialization as the reproduction of the adult world, it is important to incorporate the inherently embodied nature of mind into our theorizing. This requires recognizing that, empirically, socio-mental and cultural phenomena are not exhausted by explicitly articulated knowledge processes and contents (see e.g. Bloch 1991). These language-mediated processes—which make up the bulk of empirical material in contemporary sociology of culture—are just “the tip of the iceberg as against those unconscious processes we constitute as knowledge in the body—e.g. particular ways of moving” (1999: 102).

Toren’s point of departure is the proposition, emphasized in Bourdieu’s (1990) work, that “we literally embody our history that is the history of our relations with all those others whom we have encountered in our lives” (1999: 2). She notes that an implicit model of the nature of mind and cognition is essentially inescapable, and that such a model informs both the theory of cultural acquisition and the theory of cultural transmission used by the analyst.

Traditional accounts of socialization inherited from the Parsonian and the phenomenological traditions, in excluding the body as a locus of signification, reinstate the mind/body dualism squarely in the center of the theoretical toolkit of sociologists. In this respect, it is not surprising that a formalization of the Schutzian and Parsonian accounts of the functioning of culture and institutions can be done by drawing on the tools of cognitivist, disembodied artificial intelligence, such as the “production-system” formalism (Fararo and Skvoretz 1984). In this respect, there is an indelible link between disembodied approaches to cognition, mind, and cultural transmission and the metaphor of mind as “computer.”

For Toren, the disembodied socialization account relies on an untenable “copy” theory of knowledge acquisition, providing no plausible mechanism for how the complex set of categories comprising adult knowledge is acquired by the child undergoing the socialization process. This theory is suspiciously silent on (distributed) differences in the cultural understandings of agents at different (developmental) points in the socialization trajectory (i.e. children and adults, or adolescents and children). Socialization theory presumes a passive agent who simply records this external culture.

These suppositions are dubious in the light of contemporary accounts of knowledge acquisition by infants. Toren argues that, given these developments, “the process of physical development, the meaning—or knowledge-making process should be understood as giving rise to psychological structures that are at once dynamic and stable over time” (1999: 9). To refer to these psychological structures she—like Bourdieu (1990)—uses the Piagetian term “scheme.” As Toren notes, the notion of scheme is a “brilliant and essentially simple idea” (1999: 9). Schemes are self-equilibrating wholes simultaneously capable of being structured and of structuring reality by the dual processes of accommodation and assimilation (see e.g. Lizardo 2004).

Embodied and embedded socialization

Toren shows the payoff of this embodied and embedded approach to culture and cognition in her analysis of the acquisition of cultural categories regarding status and gender among Fijian children (Toren 1999: 50-55). According to Toren, designations of power and status, rather than being available as discursive linguistic representations, are encoded in the physical arrangement of artifacts and persons in the interior of Fijian domestic and ceremonial dwellings. This is in line with Bourdieu’s (1971) analysis of the physical embeddedness of cosmological principles in the material structure and spatial arrangement of the Berber house, with Schwartz’s classic work on vertical classification (Schwartz 1981), and with recent experimental work on the role of embodied perceptual symbols in the perception, understanding, and external signification of power (Schubert 2005).

In Fiji,

all horizontal spaces inside buildings and certain contexts out of doors can be mapped onto a spatial axis whose poles are given by the terms ‘above’ (i cake) and ‘below’ (i ra). Inside a building, people of high social status ‘sit above’ and those of lower social status ‘below’. However, this distinction refers to a single plane and so no-one is seated literally above anyone else…hierarchy in day-to-day village life finds its clearest physical manifestation in people’s relation to one another on this spatial axis and is most evident in the context of meals, kava drinking and worship (2005: 51).

Toren notes that “meals in the Fijian household are always ritualized,” which makes the domestic group the primary face-to-face environment in which hierarchical distinctions are enacted and constructed. During meals, the cloth on which persons sit “is laid to conform with the above/below axis of the house space.” Household members proceed to take their place at the table “according to their status: the senior man sits at the pole ‘above’; others are ‘below’ him, males in general being above females.” In this manner, “the seating arrangements and the conduct of the meal are a concrete realization of hierarchical relations within the domestic group” (2005: 51, italics added). Through the habitual positioning of male and female bodies along the spatial axis, hierarchy is both practically enacted and transmitted, without the need for “explicit teachings” transmitted through language. This involves the dynamic construction of an analogical mapping linking spatial locations, rank, and (gendered) bodies, which then becomes culturally conventional.

The same system is used to materialize and communicate hierarchical relationships based on village rank among men during the Kava drinking ritual. The drinking of Kava, which is associated “with ancestral mana and the power of God…is always hedged about by ceremony” (2005: 55). Thus, Toren points out that “however informal the occasion, the highest status persons present must sit ‘above’ the central serving bowl.”

Because hierarchy is structured and encoded in material space, it seldom fails to signify: “on the axis of social space, one is always ‘above’ or ‘below’ others, according to one’s position relative to the top, central position.” The explicit axis of hierarchy changes according to the occasion and the composition of the group of assembled persons (i.e. age, gender, rank, etc.). Accordingly, “the image of an ordered and stratified society exemplified in people’s positions relative to one another around the kava bowl is one encountered virtually every day in the village of Sawaieke.” In addition, the schemes that are used to materially produce hierarchy are not only productive of action; they also bias perception. This is shown by the fact that, as Toren notes, the arrangement of sitting positions in The Last Supper (ubiquitous in most Fijian households because of missionary activity and conversion to Christianity) is interpreted according to the same above/below axis.

Why Culture Is Not Purely ‘Symbolic’

The limitations of the usual “symbolic” approach to the study of culture and ritual are seen most clearly in Toren’s study of the lay categories with which children conceptualize gender and status hierarchy in Fiji. According to Toren, “we should give up the lingering notion that to understand ritual is to analyze its meaning [purely] as relation between metaphors.” Instead, Toren argues that the specifically “symbolic” aspect of culture and ritual is something that emerges from a “process of cognitive construction in persons over time.”

For young children, “ritual is not symbolic in the conventional anthropological sense” (2005: 87). Instead, “young children take ritualized behavior for granted as part of the day-to-day material reality of their existence” (italics added). Fijian children, rather than taking ritual practices as representational, take them quite literally: “the ritualized drinking of kava is, for children, merely what people do when drinking kava. The activity is of the same material and cognitive order as…house-building.” For Toren, even the claim that it is only for adults that ritual comes to have a “symbolic” aspect in the strict sense (i.e. ritual practices as “referring” to non-empirical meanings) is only half true. Instead, “it is only when we understand the process through which ‘the symbolic’ is cognitively constructed” on top of an embodied basis, “that we can also understand the coercive power of ritual” (2005: 87).

Toren asked a sample of Fijian children ranging from five to eleven years old to examine a prepared drawing and identify the unlabeled figures sitting around a table during the kava drinking ritual and during meals in the household, and to provide their own drawings identifying where different persons (mother, father, chief, etc.) would be seated in similar circumstances. Toren finds (2005: 88-90) that by the age of six, Fijian children can reproduce the structural correspondence between gender and rank hierarchy and the above/below spatial axis discussed above, although younger children produce fewer ranking gradations than do older children. Toren concludes from these data that “an understanding of above/below in terms of its polar extremes occurs just before school age” (2005: 94). For these children, the position of mother below “is the anchor for situations within the household…for prepared drawings of meals, all children chose the figure below to be mother…By contrast, the figure said to be above was either father, father’s elder brother, father’s father, mother’s brother or a ‘guest’.” Toren asks:

But how does this merging of status with spatial categories come about? Piaget has always emphasized that a child’s early cognitions are tied to concrete referents, a point also made by Bourdieu (1977). This is as much the case for my own data concerning a so-called ‘symbolic’ construct as it is for the so-called ‘logical’ constructs investigated by Piaget and his co-workers. What emerges most forcefully from the children’s data is the crucial importance of the spatial axis given by above/below as this is made manifest in concrete form in houses, churches, at meals and in kava-drinking (2005: 94).

The danger of taking adults’ linguistic and conceptual elaborations (and justifications) of cultural practices at face value is exemplified in Toren’s account. When asked about the reason for the hierarchical seating arrangement of persons in the kava-drinking ceremonies, the adults’ discursive elaboration is in effect a reversal of that of the children. While children provide explicitly tautological responses to the question of why the chief is the person who sits above, adults provide elaborate accounts of the superior mana of different persons, and in particular of the chief. Thus, “adults’ notions include [in addition to the notion of mana] ideas of…legitimacy, personal achievement, the significance of mythical relations of ancestors of clans…and so on” (2005: 95). This speaks to the fundamental difference in both format and phenomenology between culture as acquired in embedded and embodied form and the subsequent articulation of embodied personal culture into explicit public cultural forms.

References

Berger, Peter L., and Thomas Luckmann. 1966. The Social Construction of Reality: A Treatise in the Sociology of Knowledge. Anchor Books. New York: Doubleday.

Bloch, Maurice. 1991. “Language, Anthropology and Cognitive Science.” Man 26 (2): 183–98.

Bourdieu, Pierre. 1977. Outline of a Theory of Practice. Cambridge: Cambridge University Press.

———. 1990. The Logic of Practice. Stanford University Press.

Corsaro, William A., and Thomas A. Rizzo. 1988. “Discussione and Friendship: Socialization Processes in the Peer Culture of Italian Nursery School Children.” American Sociological Review 53 (6): 879–94.

Fararo, Thomas J., and John Skvoretz. 1984. “Institutions as Production Systems.” The Journal of Mathematical Sociology 10 (2): 117–82.

Lizardo, Omar. 2004. “The Cognitive Origins of Bourdieu’s Habitus.” Journal for the Theory of Social Behaviour 34 (4): 375–401.

Parsons, Talcott. 1951. The Social System. Glencoe, Illinois: The Free Press.

Pugh, Allison J. 2009. Longing and Belonging: Parents, Children, and Consumer Culture. University of California Press.

Schubert, Thomas W. 2005. “Your Highness: Vertical Positions as Perceptual Symbols of Power.” Journal of Personality and Social Psychology 89 (1): 1–21.

Schwartz, Barry. 1981. Vertical Classification: A Study in Structuralism and the Sociology of Knowledge. University of Chicago Press.

Tomasello, Michael. 2005. Constructing a Language. Harvard University Press.

Toren, Christina. 1993. “Making History: The Significance of Childhood Cognition for a Comparative Anthropology of Mind.” Man, 461–78.

———. 2005. Mind, Materiality, and History: Explorations in Fijian Ethnography. Routledge.

What’s Cultural About Analogical Mapping?

Analogical mapping is a cognitive process whereby a particular target is understood by analogizing from a particular source. For example, Lakoff and Johnson (1999) have observed that people often reason about love metaphorically as a journey. In a previous post I discussed some experimental evidence supporting the claim that activating a particular metaphor over another may be consequential for reasoning by encouraging certain outcomes over others (for an excellent review of this literature, see Thibodeau et al. (2017)). For a cultural sociologist, these findings may well be interesting but may seem somewhat esoteric. In this post, I make the case that analogical mapping (this term is used interchangeably with “conceptual metaphor”) is an inherently cultural phenomenon relevant for cultural analysis.

Analogical mapping is cultural in at least two senses. First, analogical mapping is cultural because knowledge of sources is learned. While many sources may be universal or near-universal because they are learned through universal experiences, others may be more idiosyncratic. For example, in this clip from Cloudy With A Chance of Meatballs, sardine fisherman Tim Lockwood tries to comfort his young son with a fishing metaphor, with poor results.

The uneven distribution of source domain knowledge opens important questions for cultural analysis. For example, how do analogical mappings from rare or privileged sources affect the formation, perpetuation, or dissolution of interpersonal ties? Does analogical mapping sometimes facilitate group solidarity and boundary-making? In the sitcom Brooklyn 99, for example, the police captain Raymond Holt becomes familiar with the sitcom Sex and the City in order to quickly win the trust of a certain aficionado of the series. When meeting this person, Holt casually discloses, “I’m such a Samantha,” conveying a wealth of information about himself to his interlocutor and instantly creating rapport. In such cases, metaphorical usage may convey worlds of meaning because the chosen source domain suggests certain background experiences.


Cultural analysts might also investigate whether and how the uneven distribution of source domain knowledge contributes to inequality. It is possible, for example, that there are certain metaphors whose meaning is clear among certain classes because of a shared familiarity with the source domain, but which might be obscure to those in other classes. If these metaphors are located at crucial points, they could be consequential for the meting out of rewards.

Second, analogical mapping is cultural because the mapping of a particular source to a particular target is learned, such that a person may be predisposed to a particular source-target mapping over another when a particular situation arises. These metaphorical predispositions can have far-reaching consequences. For example, Johnson (1987) argues that a medical revolution was brought about by changing the metaphor used for thinking about the body. The old metaphor, which he calls THE BODY IS A MACHINE, structured medical diagnosis and practice through its various entailments. If the body is a machine, then “the body consists of distinct, though interconnected parts… breakdowns occur at specific points or junctures in the mechanism… diagnosis requires that we locate these malfunctioning units” and “repair (treatment) may involve replacement, mending, alteration of parts, and so forth.” Johnson elaborates: “The key point in all of this is that the BODY AS MACHINE metaphor was not merely an isolated belief; rather, it was a massive experiential structuring that involved values, interests, goals, practices, and theorizing. What we see is that such metaphorical structurings of experience have very definite systematically related entailments” (p. 130).

The key cultural revolution in medical practice entailed developing a new metaphor, which Johnson calls THE BODY IS A HOMEOSTATIC ORGANISM. The medical researcher Hans Selye developed this new metaphor in response to the machine model’s inability to explain why different stressors triggered the same bodily reaction. Following the old model, symptoms were specific and traceable to particular breakdowns, and treatment entailed localized repairing of the faulty part(s). Within the HOMEOSTATIC metaphor, however, disease was understood as “not just suffering, but a fight to maintain the homeostatic balance of our tissues, despite damage” (p. 134). For more examples of shared mappings and their consequences, see Shore (1996) on foundational schemas.
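Johnson’s two-part structure (a source-to-target mapping plus the source-domain entailments it carries over) can be made concrete in a toy sketch. Everything below, including the role mappings and the entailment strings, is a hypothetical illustration of the idea of systematic entailment transfer, not an implementation of conceptual metaphor theory:

```python
# Toy representation of a conceptual metaphor: a mapping from source-domain
# roles to target-domain terms, plus entailments stated in source terms.
BODY_IS_A_MACHINE = {
    "mapping": {"breakdown": "disease", "part": "organ", "repair": "treatment"},
    "entailments": [
        "a breakdown is localized to a specific part",
        "repair means mending or replacing the faulty part",
    ],
}

BODY_IS_A_HOMEOSTATIC_ORGANISM = {
    "mapping": {"imbalance": "disease", "rebalancing": "treatment",
                "organism": "body"},
    "entailments": [
        "imbalance affects the whole organism",
        "rebalancing supports the whole organism",
    ],
}

def project(metaphor):
    """Carry each source-domain entailment over to the target domain by
    substituting mapped terms (crude string substitution, for illustration)."""
    projected = []
    for entailment in metaphor["entailments"]:
        for source_term, target_term in metaphor["mapping"].items():
            entailment = entailment.replace(source_term, target_term)
        projected.append(entailment)
    return projected
```

Running `project` on each metaphor yields different pictures of the same target: the machine metaphor localizes disease to an organ to be mended, while the homeostatic metaphor spreads disease and treatment across the whole body, mirroring Johnson’s point that the choice of mapping systematically channels reasoning.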

Recognition of these two cultural dimensions of analogical mapping leads to an important theoretical observation: cultural variation can result from mapping universal building blocks (i.e. universally shared knowledge of sources) differentially to particular targets. There is a difference between not being able to understand a metaphor because you are not familiar with the source, and finding a novel metaphor surprising or unusual, but perfectly understandable. Much of what may count as cultural variation in conceptual thought may result from different mappings from the same universal stock of sources (i.e. image schemas), rather than differential mapping rooted in idiosyncratic, group-specific sources. It is an empirical question, but we need not assume that because people are using different sources, they are indecipherable to one another.

In sum, analogical mapping is not just a cognitive process; it is inescapably cultural. Source knowledge and source-target mappings are socially learned, and because of this, we have reason to believe that in at least some cases, analogical mapping is consequential for the organization of social life.

References

Johnson, Mark. 1987. The Body in the Mind: The Bodily Basis of Meaning, Imagination, and Reason. University of Chicago Press.

Lakoff, George and Mark Johnson. 1999. Philosophy In The Flesh. Basic Books.

Shore, Bradd. 1996. Culture in Mind: Cognition, Culture, and the Problem of Meaning. Oxford University Press.

Thibodeau, Paul H., Rose K. Hendricks, and Lera Boroditsky. 2017. “How Linguistic Metaphor Scaffolds Reasoning.” Trends in Cognitive Sciences 21(11):852–63.

Habits in a Dynamic(al) System

In this post I try to show that the theory of action implied in Swidler (2001) is an inherently dynamic theory that is unfortunately couched in terms of comparative statics. Here I unpack Swidler’s action theory by re-translating the relevant terms into the language of dynamical systems theory. I show that, properly understood, the distinction between settledness and unsettledness, and the description of the different associations between culture and action in those two states, actually refer to the difference between social action that occurs within dynamic equilibria and that which occurs when equilibria break down and a sharp transition between states emerges.

A key problem in the theory of action concerns the conditions under which we should expect to observe behavioral stability versus those under which we should expect to observe change. Theories departing from a conceptualization of action as practice tend to presume that there is a tendency towards stability in human action. To put it simply, the basic claim is that most persons tend to work very hard to bring a semblance of order and predictability to their lives. This is what Swidler has referred to as the tendency for persons to fall into settled lives.

While the notion of “settled lives” may bring to mind a tendency towards stasis and lack of change, actually the opposite is the case: persons must work very hard to sustain settledness; as such the attainment of a settled existence is an active accomplishment on the part of persons, who invest a lot of time and energy fighting against entropy-inducing environmental conditions pushing their lives towards unsettledness. In that respect, we may think of the observation that persons are able to (within limits) approach the idea of living a settled life as implying that on the whole, settledness emerges as a dynamic equilibrium or as an attractor state in social behavior.

Swidler notes that under a settled existence, persons are able to draw on their existing “toolkit” of behavioral routines and habits to get by. There is thus an implicit linkage between action and motivation here, a linkage that deserves to be made explicit. We can begin by proposing that persons are motivated to choose those states that allow them to maximize performance given their already existing capacities. People avoid those environments and situations that call for skills that are different from those they already possess and which would thus bring unsettledness to their lives. This active avoidance of environments in which there is a mismatch between existing competences and called-for performances and the active seeking of environments calling for competences that persons already possess, lead to forms of positive feedback increasing the deployment of these same behavioral dispositions in the future.

These forms of positive feedback between persons, situations, and competences are very common. A paradigmatic case is the relationship between the fluency and effectiveness of a given skill and the frequency with which that skill is “practiced”: by regularly deploying a given set of competences and skills, persons get better at them, which means they are more likely to deploy them in the future. Conversely, skills that stop being called upon fall into disrepair and, subsequently, into disuse. Another source of positive feedback is the relationship between current skills and potentially novel skills not yet acquired. In honing their existing set of skills, persons miss the opportunity to acquire new ones (this is the standard notion of opportunity costs as applied to skill acquisition). Thus, the more persons enact their competences, the more likely they are to stick to those competences and the less likely they are to abandon them in order to acquire new ones.
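The use-refinement feedback described above has the same formal structure as a Pólya urn: every deployment of a skill raises the probability of its future deployment. Here is a minimal sketch; the two skill names and all numbers are hypothetical:

```python
import random

def practice_history(steps=1000, seed=7):
    """Polya-urn sketch of use-refinement feedback: each deployment of a
    skill adds to its accumulated fluency, which in turn raises the
    probability that the same skill is deployed again."""
    rng = random.Random(seed)
    fluency = {"skill_a": 1.0, "skill_b": 1.0}  # equally fluent at the start
    for _ in range(steps):
        total = fluency["skill_a"] + fluency["skill_b"]
        chosen = "skill_a" if rng.random() < fluency["skill_a"] / total else "skill_b"
        fluency[chosen] += 1.0  # practice refines the deployed skill
    return fluency
```

Across runs with different seeds, the process typically locks in: small early asymmetries in which skill happens to be deployed compound into large, stable differences in fluency, which is the dynamic signature of positive feedback.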

The existence of positive feedback between use and refinement of dispositions, however, may result in the creation of conditions in which alternative “settled lives” exist for the same set of dispositions. If this is the case, it is possible that gradual changes in external conditions, especially changes that make it harder for persons to deploy their existing competences, may move them closer to a critical regime shift towards “unsettledness” in which the resolution of these unstable unsettled states is achieved not by returning to the old settled life but by moving to a radically different (but also settled) existence.
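This regime-shift logic can be illustrated with a standard dynamical-systems toy model (my illustration, not anything Swidler proposes): an actor’s state x relaxes toward an attractor of a double-well system while an external parameter c drifts slowly. The state tracks its attractor smoothly until that attractor vanishes at a critical value (here c ≈ 0.385), at which point behavior jumps abruptly to the other “settled” basin:

```python
def relax(x, c, dt=0.01, steps=6000):
    """Euler-integrate dx/dt = -(x**3 - x + c) until x settles near an attractor."""
    for _ in range(steps):
        x += -(x**3 - x + c) * dt
    return x

trajectory = []
x = 1.0  # start in one "settled" basin (an attractor near x = 1)
for i in range(40):
    c = i * 0.02  # gradual drift in external conditions
    x = relax(x, c)
    trajectory.append((c, x))
```

Plotting the trajectory shows small, smooth adjustments followed by a single abrupt transition: the same underlying dynamics produce both the long stable regime and the sudden “conversion,” which is the point of re-describing settledness as a dynamic equilibrium.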

In the standard approach to action theory, thinking of social action as being governed by habit is usually thought to constitute a sufficient explanation for behavioral stability. The implication is that a theory of action that claims that most action is habitual is ill-equipped to explain sudden or radical transformations in behavior, thought and action. This leads analysts to suppose that habit-based theories of action need to be supplemented with some other way of conceptualizing action (e.g. a “non-habitual,” reflexive, or purposive addendum to the habit-based theory) if we are to explain radical behavioral change.

This stance is misguided. Instead, I would argue that a habit-based theory of action implies a conceptualization of stable action as (relatively) temporary equilibria or attractors in a dynamical system. This means that it is precisely because action is by its very nature habitual that the opportunity for radical qualitative transformations exists. These transformations are the result of regime-shifts and stand as evidence that the same set of incorporated habits can be the drivers of action in qualitatively distinct action regimes.

Thus, “conversions” do not necessarily imply retooling; that is, distinct behavioral regimes and sudden transitions from one to the other are, as a rule, premised on continuity of the underlying habitual dispositions and competences. When looked at in terms of the switch from one regime of action to another, this phenomenon can be mistaken for a gradual “transposition” of schemes, such that there is continuity in change. Instead, what has happened is a global reorganization of behavior around the same set of underlying capacities productive of action.

References

Swidler, Ann. 2001. Talk of Love: How Culture Matters. Chicago: University of Chicago Press.

Exaption: Alternatives to the Modular Brain, Part II

Scientists discovered the part of the brain responsible for…

In my last post, I discussed one alternative to the modular theory of the mind/brain relationship: connectionism. Such a model is antithetical to modularity in that it posits only distributed networks of neurons in the brain, not special-purpose processors.

One strength of the modular approach, however, is that it maps quite well to our folk psychology. And, much of the popular discourse surrounding research in neuroscience involves the celebrated “discovery” of the part of the brain responsible for X. A major theme of the previous posts is that the social sciences should be skeptical of the baggage of our folk psychology. But, is there not some truth to the idea that certain regions of the brain are regularly implicated in certain cognitive processes?

The earliest attempts at localization relied on an association between some diagnosed syndrome—such as the aphasia discussed in the previous posts—and abnormalities of the brain’s structure (i.e. lesions) identified in post-mortem examinations. For example, Paul Broca, discussed in my previous post, noticed lesions on a particular part of the brain in patients with difficulty producing speech. This part of the brain became known as Broca’s area, but researchers have only a loose consensus as to the boundaries of the area (Lindenberg, Fangerau, and Seitz 2007).

Furthermore, the relationship between lesions in this area and aphasia is partial at best. A century later, Nina Dronkers, the Director of the Center for Aphasia and Related Disorders, states (2000:60):

After several years of collecting data on chronic aphasic patients, we find that only 85% of patients with chronic Broca’s aphasia have lesions in Broca’s area, and only 50–60% of patients with lesions in Broca’s area have a persisting Broca’s aphasia.

More difficult for the modularity thesis is that those with damage to Broca’s area who also have Broca’s aphasia usually have other syndromes. This implies that the area is multi-purpose, and thus not a single-purpose language production module (for a book-length discussion, see Grodzinsky and Amunts 2006). One reason I focus on Broca’s area (apart from my interest in linguistics) is that it is considered the exemplary case for the modular theory that remains dominant (if implicit) in much neuroscientific research (Viola and Zanin 2017).

Part of the difficulty with assessing even weak modularity hypotheses, however, is that neuroanatomical research continues to revise the “parcellation” of the brain. The first such attempt was by Korbinian Brodmann, published in German in 1909  as “Comparative Localization Studies in the Brain Cortex, its Fundamentals Represented on the Basis of its Cellular Architecture.” He divided the cerebral cortex (the outermost “layer” of the brain) into 52 regions based on the structure of cells (cytoarchitecture) sampled from different sections of brains taken from 64 different mammalian species, including humans (see Figure 1). Although Brodmann’s studies were purely anatomical, he wrote: “my ultimate goal was the advancement of a theory of function and its pathological deviations.” Nevertheless, he rejected what he saw as naive attempts at functional localization:

[Dressing] up the individual layers with terms borrowed from physiology or psychology…and all similar expressions that one encounters repeatedly today, especially in the psychiatric and neurological literature, are utterly devoid of any factual basis; they are purely arbitrary fictions and only destined to cause confusion in uncertain minds.

Figure 1. Brodmann’s hand-drawn parcellation of the human brain.

Over a century later, many researchers continue to refer to “Brodmann’s area” numbers as general orientation markers. More recently (see Figure 2), using data from the Human Connectome Project and supervised machine learning techniques, a team of researchers characterized 180 areas in each hemisphere — 97 new areas and 83 areas identified in previous work (Glasser et al. 2016). This study used a “multi-modal” technique which included cytoarchitecture, like Brodmann, but also connectivity, topography and function. For the latter, the study used data from “task functional MRI (tfMRI) contrasts,” wherein resting state measures are compared with measures taken during seven different tasks.

Figure 2. The multi-modal parcellation of the human cerebral cortex from Glasser et al. (2016).

One of these tasks was language processing, using a procedure developed by Binder et al. (2011) wherein participants read a short fable and then are asked a forced-choice question. Glasser et al. found reasonable evidence associating this language task with previously identified components of the “language network” (for recent overviews of the quest to localize the language network, see Friederici 2017 and Fitch 2018, both largely within the generative tradition). Specifically, these are Broca’s area (roughly area 44) and Wernicke’s area (roughly area PSL); they also identified an additional area, which they call 55b. Their findings also agreed with previous work going back to Broca on the “left-lateralization” of the language network—which means not that language is only in the left hemisphere (as some folk theories purport), but simply that the left areas show more activity in response to the language task than do homologous areas in the right hemisphere (an early finding which inspired Jaynes’ Bicameral Mind hypothesis).

Does this mean we have discovered the “language module” theorized by Fodor, Chomsky, and others? Not quite, for three reasons. First, Glasser et al. found that if they removed the functional task data, their classifier was nearly as accurate at identifying parcels. Second, the parcels were averaged over a couple hundred brains, and yet the classifier was still able to identify parcels in atypical brains (whether this translated into changes in functionality was outside the scope of the study).

Third, and most important for our purposes, this work does not—and the researchers do not attempt to—determine whether parcels are uniquely specialized (or encapsulated, in Fodor’s terms). That is, while we can roughly identify a language network implicating relatively consistent areas across different brains, this does not demonstrate that such structures are necessary and sufficient for human language, and solely used for this purpose. Indeed, language may be a “repurposing” of brain parcels used for (evolutionarily or developmentally older) processes. This is precisely the thesis of neural “exaption.”

What is Exaption?

In the last few decades several new frameworks—under labels like neural reuse, neuronal recycling, neural exploitation, or massive redeployment—attempt to offer a bridge between the modularity assumptions which undergird most neuroanatomical research, on one hand, and the connectionist assumptions which spurred advancements in artificial intelligence research and anthropology, on the other. Such frameworks also attempt to account for the fact that there is some consistency in activation across individuals, which does look a little bit like modularity.

The basic idea is exaption (also called exaptation): some biological tendencies or anatomical constraints may predispose certain areas of the brain to be implicated in certain cognitive functions, but these same areas may be recycled, repurposed, or reused for other functions. Exemplars of this approach are Stanislas Dehaene’s Reading in the Brain and Michael Anderson’s After Phrenology.

Perhaps the easiest way to give a sense of what this entails is to consider cases of neurodiversity, specifically the anthropologist Greg Downey’s essay on the use of echolocation by the visually impaired. While folk understandings may suggest that hearing becomes “better” in those with limited sight, this is not quite the case. Rather, one study finds that when listening to “a recording [which] had echoes, parts of the brain associated with visual perception in sighted individuals became extremely active.” In other words, the brain repurposed the visual cortex as a result of the individual’s practices. While most humans have limited echolocation abilities and the potential to develop this skill, only some will put in the requisite practice.

Another strand of research supporting neural exaption falls under the heading of “conceptual metaphor theory” (itself a subfield of cognitive linguistics). The basic argument from this literature is that people tend to reason about (target) domains they have had little direct experience with by analogy to (source) domains with which they have had much direct experience (e.g. the nation is a family). As argued in Lakoff and Johnson’s famous Metaphors We Live By, this metaphorical mapping is not just figurative or linguistic; rather, it consists of pre-linguistic conceptual mappings and is an—if not the—essential part of all cognition (Hofstadter and Sander 2013). Therefore, thinking or talking about even very abstract concepts re-activates a coalition of neural associations, many of which are fundamentally adapted to the mundane sensorimotor task of navigating our bodies through space. As we discuss in our forthcoming paper, “Schemas and Frames” (Wood et al. 2018), because talking and thinking recruit areas of our neural system often deployed in other activities—and at time-scales faster than conscious awareness can adequately attend to—our biography of embodiment channels our reasoning in ways that seem intuitive and yet are constrained by the pragmatic patterns of those source domains. This is fully compatible with the dispositional theory of the mental that Omar discusses.

What does this mean for sociology? I think there are numerous implications and we are just beginning to see how generative these insights are for our field. Here, I will limit myself to discussing just two, specifically related to how we tend to think about the role of language in our work. First, for an actor, knowing what text or talk means involves an actual embodied simulation of the practices it implies, very often (but not necessarily) in service of those practices in the moment (Binder and Desai 2011). Therefore, language should not be understood as an autonomous realm wherein meanings are produced by the internal interplay of contrastive differences within an always deferred linguistic system. Rather, following the later Wittgenstein in the Philosophical Investigations, “in most cases, the meaning of a word is its use.” Furthermore, as our embodiment is largely (but certainly not completely) shared across very different peoples (for example, most of us experience gravity all the time), there is a significant amount of shared semantics across diverse peoples (Wierzbicka 1996)—indeed without this, translation would likely be impossible.

Second, the repurposing of vocabulary commonly used in one context into a new context will often involve the analogical transfer of traces of the old context. This is because invoking such language activates a simulation of practices from the old context while one is in the new context (although this depends on the accrued biographies of the individuals involved). This suggests that our language can be constraining in predictable ways, but not because the language itself has a structure or code rendering certain possibilities unthinkable. Rather, it is that language is the manifestation of a habit inextricably involved in a cascade of other habits, making some actions or thoughts easier to execute, and therefore more probable, than others. For example, as Barry Schwartz argued in his (criminally under-appreciated) Vertical Classification, it is nearly universal that UP is associated with power and the morally good as a result of (near-universal) practices we encounter as babies and children. This helps explain the persistence of the “height premium” in the labor market (e.g., Lundborg, Nystedt, and Rooth 2014).

 

References

Binder, Jeffrey R. et al. 2011. “Mapping Anterior Temporal Lobe Language Areas with fMRI: A Multicenter Normative Study.” NeuroImage 54(2):1465–75.

Binder, Jeffrey R. and Rutvik H. Desai. 2011. “The Neurobiology of Semantic Memory.” Trends in Cognitive Sciences 15(11):527–36.

Dronkers, N. F. 2000. “The Pursuit of Brain–language Relationships.” Brain and Language. Retrieved (http://www.ebire.org/aphasia/dronkers/the_pursuit.pdf).

Fitch, W. Tecumseh. 2018. “The Biology and Evolution of Speech: A Comparative Analysis.” Annual Review of Linguistics 4(1):255–79.

Friederici, Angela D. 2017. Language in Our Brain: The Origins of a Uniquely Human Capacity. MIT Press.

Glasser, Matthew F. et al. 2016. “A Multi-Modal Parcellation of Human Cerebral Cortex.” Nature 536(7615):171–78.

Grodzinsky, Yosef and Katrin Amunts. 2006. Broca’s Region. Oxford University Press, USA.

Hofstadter, Douglas and Emmanuel Sander. 2013. Surfaces and Essences: Analogy as the Fuel and Fire of Thinking. Basic Books.

Lindenberg, Robert, Heiner Fangerau, and Rüdiger J. Seitz. 2007. “‘Broca’s Area’ as a Collective Term?” Brain and Language 102(1):22–29.

Lundborg, Petter, Paul Nystedt, and Dan-Olof Rooth. 2014. “Height and Earnings: The Role of Cognitive and Noncognitive Skills.” The Journal of Human Resources 49(1):141–66.

Viola, Marco and Elia Zanin. 2017. “The Standard Ontological Framework of Cognitive Neuroscience: Some Lessons from Broca’s Area.” Philosophical Psychology 30(7):945–69.

Wierzbicka, Anna. 1996. Semantics: Primes and Universals. Oxford University Press, UK.

Wood, Michael Lee, Dustin S. Stoltz, Justin Van Ness, and Marshall A. Taylor. 2018. “Schemas and Frames.” Retrieved (https://osf.io/preprints/socarxiv/b3u48/).

 

To Feel or Not to Feel? That is No Longer the Question

It is highly likely that most readers recall learning about Phineas Gage, a railroad worker who, in 1848, had the misfortune of being impaled by a tamping iron roughly 3 feet 7 inches long, weighing more than 13 pounds, with a diameter of 1 ¼ inches. The rod entered point-first below his left cheekbone, passed behind his left eye, and exited through the top of his skull. What was exceptional in all of this was that the rod passed clean through his head, yet Gage did not die from the injury; he lived another 12 years. Considering the state of medical knowledge and technique at the time, this was a rather incredible and improbable survival, and I would bet that is what most people remember about his story.

Yet, for a theorist and sociologist, there is much, much more to this anecdote than the sensational. His memory, for instance, was discernibly unaffected, but the injury, by the accounts of both former employers and professionals “trained” in the “psychology” of yore, had somehow peeled back the protective human layers of socialization. That is, he was described as vacillating between his “intellectual faculties” and “animal propensities”; his behavior and language could be “coarse,” “vulgar,” and offensive to any “decent” people he might encounter. In spite of this, he spent seven of the 12 years left of his life in Chile, working as a long-distance stagecoach driver, which, in 1852, would have demanded considerable cognitive skill given the temporal, physical, and social demands of the job. He was clearly successful.

What can we learn from this case? On the surface, probably not much. A debate among contemporary neuroscientists centers on how much we can infer from imaging of Gage’s skull when no direct evidence about his brain survives. Gage’s former employers may have maligned his reputation to protect their financial interests; doctors of the day were rarely scientific in their orientation or beholden to a professional association backed by the force of legislation; and psychology was barely in its infancy. Nonetheless, it is not incorrect to say that damaging the brain, in most cases, leads to changes in behavior and personality.

But what does Gage have to do with sociology and cognition? His case, and others that would follow in the early 20th century, inspired a body of research examining brain lesions, particularly in the prefrontal cortex, a region heavily implicated in rational decision-making. For instance, in one of many experiments, Bechara, Damasio, Damasio, and Anderson (1994) provided “normals” and lesion patients with a $2000 loan, four decks of cards, and some basic instructions: don’t lose money, but make money if possible. Turning a card in deck A or B rewarded $100, while C and D rewarded only $50. The catch: some cards in A and B, unbeknownst to the player, demanded a sudden high payment (e.g., $1250), while C and D, on occasion, asked only for small payments (e.g., $100). Normals began by sampling all the decks, showing a preference for A and B at first, but gradually learning that C and D were the better bets. Those with damaged brains, however, started the same way but never switched to C and D, no matter how many times they went bankrupt.
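
The arithmetic behind the deck design can be made explicit with a short sketch. The payoff schedules below are illustrative assumptions modeled loosely on the published task (the exact amounts and penalty frequencies varied), but they preserve its key property: the “bad” decks pay more per card while losing money in expectation, and the “good” decks do the reverse.

```python
# Illustrative payoff schedules modeled loosely on Bechara et al.'s (1994)
# gambling task; the exact amounts and penalty frequencies are assumptions.
# Each deck is a 10-card cycle of (reward, penalty) pairs.
DECKS = {
    "A": [(100, 0)] * 5 + [(100, 250)] * 5,   # high reward, frequent penalties
    "B": [(100, 0)] * 9 + [(100, 1250)],      # high reward, one large penalty
    "C": [(50, 0)] * 5 + [(50, 50)] * 5,      # modest reward, frequent small penalties
    "D": [(50, 0)] * 9 + [(50, 250)],         # modest reward, one moderate penalty
}

def net_per_cycle(deck):
    """Net winnings over one full 10-card cycle of a deck."""
    return sum(reward - penalty for reward, penalty in deck)

for name, deck in DECKS.items():
    print(f"Deck {name}: {net_per_cycle(deck):+d} per 10 cards")
```

Under this schedule, A and B each net −$250 per ten cards while C and D each net +$250, which is why players who learn the contingencies migrate to C and D despite the smaller per-card rewards.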

From a series of follow-up experiments meant to tease out specific hypotheses about rewards and punishments, and from his own clinical work with lesion patients, Antonio Damasio (1995) cogently posited what was, at the time, a revolutionary thesis: reasoning and rationality are inextricably entwined with emotions. The classic Cartesian model of mind v. body that undergirds false (but commonly, if often unconsciously, accepted) dichotomies like rationality v. irrationality and cognition v. emotion collapses under the weight of the empirical evidence.

This seems eminently sensible. Marketers draw on psychology to appeal not only to our cool rationality, but to our feelings and sentiments. We choose Crest or Colgate, Ford or Toyota, and so forth based on emotions no matter how much “instrumentality” we employ in the decision-making process (see, for example, Camerer 2007). These, of course, are mundane, arbitrary decisions; imagine if we extend this thesis to much more complex decisions, like choosing a partner, a reciprocal gift, or to make amends. It seems true that we can only make big decisions when our brain’s neural systems are linked up and our emotion centers are communicating with various other aspects of our brain (LeDoux 2000).

So, for instance, as information enters the brain it is routed to the hippocampus, where it is converted into memories and indexed as either semantic or episodic. The former are general “facts” about things, people, events, and so forth that escape temporality, whereas the latter are person-specific memories with time-stamps. Our self, then, is rooted in memories that are both generalized and specific. At the same time, this information is fed into the amygdala and tagged with an affective valence and level of intensity, making it more or less relevant to one’s self; that is, more intensely tagged memories are easier and more likely to be recalled. And, if the most self-relevant information comes from interactions with significant others, then the most basic unit of social organization – the human relationship – is anchored in affective moorings (Lawler et al. 2008; Cozolino 2014).

In particular, knowledge about the social self (semantic autobiographical knowledge), formed in episodes, tagged with powerful affect, and confirmed or activated frequently in encounters, comes to be generalized too. But it is differentiated from the other two types of memory in that it activates regions of the brain normally distinct from those they do; that is, it remains rooted in the emotion centers. This is what makes our global sense of self feel stable and consistent over longer durations and, moreover, drenches our appraisals of our own actions, as well as those of others, in affect (Turner 2007). This also means, in more familiar sociological terms, that goal setting, strategizing, habit, decision-making, selfing, and minding are all saturated with emotions (Franks 2006).

Memory works because of emotions; our senses work because of emotions; the construction, maintenance, alteration, and destruction of self, depend on our brain’s emotional neuroarchitecture as much as on the social environment’s input. Thus, if we are to take cognitive science seriously, as sociologists, then we must also take seriously the role emotions play in action and organization.

What are Dispositions?

A recurrent theme in previous posts is that social scientists have a lot to gain by replacing belief-desire psychology as an explanatory framework with a dispositional theory of the mental. As I argued before, it is something we already do, and it has a good pedigree in social theory.

The notion of disposition has had a somewhat checkered history in sociological theory. It was central to Bourdieu’s scheme, anchoring one of his core concepts (he defined habitus as a “system” of dispositions). Yet American sociologists seldom use the notion in a generative way. I want to propose here that it should be a (if not the) central notion in any coherent theory of action.

Dispositional explanations of action are not philosophically neutral, because they make strong assumptions about the linkage between the capacities presumed to be embodied in agents and our ability to make sense of their actions. This is a good thing, since many action theories are not explicit about their commitments. For instance, a dispositional account has to presuppose that the fact that we can make sense of other people’s actions (e.g., when skillfully playing the role of folk psychologists) is itself the manifestation of a disposition, one which may or may not manifest itself (sometimes we make sense of other people’s actions by taking stances that are not folk psychological). In this respect, a dispositional account of action is one that must refer to an unobserved process that is only available via its overt manifestations. Because of this, dispositional explanations must deal with some unique conceptual and philosophical challenges (Turner 2007).

I outline some of these in what follows.

First, a dispositional explanation of action is consistent with a realist, capacities-based account of causation and causal powers. One useful such account has been referred to as “dispositional realism.” According to Borghini and Williams (2008, 23), dispositional realism “refers to any theory of dispositions that claims an object has a disposition in virtue of some state or property of the object.”

In addition, the fact that dispositions are properties possessed by their bearers entails that observing an overt manifestation of a disposition suffices to conclude that the agent possesses that disposition. The reverse, however, is not the case: dispositions may fail to manifest themselves even when their conditions for manifestation obtain (Fara 2005: 42). So not observing an action, or a profession of belief, does not indicate that the agent lacks the disposition to do X or to believe X.

For sociologists, whose “objects” are people, this last statement entails, for instance, that we can ascribe dispositions to people in the absence of any overt manifestation (e.g., dispositions to believe), and that this ascription is therefore partially independent of any single present (or past) situation in which we may have observed the agent. For instance, we may be familiar with the causal history experienced by a person, know that certain causal histories result in the acquisition of certain dispositions, and thus ascribe dispositions based on our familiarity with that person’s causal history before we see any of their manifestations.

Second, dispositions are causally relevant to their manifestations (Fara 2005: 44). In most settings, to say that an agent has a disposition (D) to take a given intentional stance (belief, desire) toward propositional content Y, or to engage in action W in context C, is to say that D suffices to produce that intentional stance or that action in that context.

Third, dispositions are properties of the person, not properties of “the situation” or of some external environmental feature. This is not to say that situations don’t have properties. It is to say, however, that in order for a situational property to figure in the explanation of action, we must presume that the agent has a disposition to react in such-and-such a way to that situational property. Environments and situations have no free-standing causal powers in determining action; any environmental effect has to be mediated by the dispositions to act and react that the agent is taken to possess (Cervone 1997).

This also entails that dispositions have bases, but dispositions are not reducible to some non-dispositional substrate. Dispositional properties are irreducibly dispositional. Dispositions are not holistic glosses over behavior that could be realized over a “wildly disjunctive” set of underlying substrates. Instead, a dispositional ascription is an inherently ontological claim: something exists (the disposition) whose causal power is responsible for the overt behavioral manifestation in question.

Do dispositions entail conditionals? A popular philosophical view defines dispositions as those properties of objects or persons that entail a conditional statement. For instance, the disposition “fragile,” ascribed to a cup, entails the conditional “would break if struck by a sufficiently rigid object.” Here I follow Fara (2005) in noting that the conditional account of dispositions fails for a variety of reasons. We can consider something to be a disposition without referring to what would occur in a possible world or mental space. Instead of conditionals, dispositional ascriptions entail habituals (Fara 2005: 63); thus a dispositional explanation of action is consistent with a habit-based theory of action.

Accordingly, fourth, dispositional realism entails a rejection of the conditional (e.g., counterfactual) definition of causation for explaining action (Martin 2011). The reason is that under conditional accounts the causal potency of dispositions takes a backseat to talk about “the laws of nature, possible worlds, abstract realms, or what have you” (Borghini and Williams 2008: 24). This penchant for substituting talk about fake or possible worlds for talk about this world is the source of various pathological understandings of causality in social science (Martin 2011).

Fifth, when we say an agent has a certain disposition to do Y, we say that the agent does Y because of something inherent in his or her nature. Note that this account is perfectly compatible with the idea that this nature is “acquired.” The notion of something behaving like a natural property of an agent is separable from the question of how that something became part of the agent’s nature (e.g., learning or genetics). Sociologists are sometimes allergic to talk of properties inherent in agents, lest they be accused of “essentialism.” But once acquired and locked in via habituation, dispositions can function as “second nature,” in which case the provisional and qualified use of so-called “essentialist” language is not misleading.

This view of dispositions, as noted earlier, entails that there are no purely situational or derived properties, such as “relational properties,” floating around unmoored in the ontological ether. This is not to say there are no relational properties. It is to say, instead, that relational properties depend on dispositional properties, but not the reverse: the capacity of the agent to enter into relations with properties and entities in the environment itself requires dispositions. This is why the joining of a habitual account of action (which trades in dispositional talk) with a field theory (which trades in relational talk) is not arbitrary but required to deal with the sorts of questions sociologists are disposed to ask (Merriman and Martin 2015).

Sixth, the relationship between a disposition and its overt manifestations is normally one-to-many: a single disposition may manifest itself as distinct forms of overt behavior or experience depending on context (Borghini and Williams 2008: 24).

Finally, dispositions may organize themselves into systems of dispositions. Bourdieu thought this was the natural tendency. However, a dispositional explanation of action does not require the assumption of overall systematicity. In fact, the weaker assumption of loose coupling until proven otherwise is more likely to be empirically accurate.

In a follow-up post, I will outline other consequences of adopting a dispositional ontology at the level of the actor.

References

Borghini, Andrea, and Neil E. Williams. 2008. “A Dispositional Theory of Possibility.” Dialectica 62(1): 21–41.

Cervone, Daniel. 1997. “Social-Cognitive Mechanisms and Personality Coherence: Self-Knowledge, Situational Beliefs, and Cross-Situational Coherence in Perceived Self-Efficacy.” Psychological Science 8(1): 43–50.

Fara, Michael. 2005. “Dispositions and Habituals.” Noûs 39(1): 43–82.

Martin, John Levi. 2011. The Explanation of Social Action. Oxford University Press.

Merriman, Ben, and John Levi Martin. 2015. “A Social Aesthetics as a General Cultural Sociology?” In Routledge International Handbook of the Sociology of Art and Culture, 152–210. Routledge.

Turner, Stephen P. 2007. “Practice Then and Now.” Human Affairs 17 (2): 375.

Folk Psychology and Legal Responsibility

If folk psychology is false, is legal responsibility dead?

If legal responsibility is dead, is everything permitted?

Maybe not, but such questions have received growing attention in the legal field, as the field confronts the prospect of an emergent “neuro-law.” Neuroscience challenges the unacknowledged background of commitments to theories of action that underwrite the law. In this post, I want to argue that it does so in a way that could have particular significance for a similar neuroscientific challenge to sociology. At least for some legal scholars, what drives the issue is nothing less than the explanatory merit of folk psychology itself: “The law will be fundamentally challenged only if neuroscience or any other science can conclusively demonstrate that the law’s psychology is wrong and that we are not the type of creatures for whom mental states are causally effective” (Morse 2015: 262).

Here, I want to argue that as these debates unfold in legal fields, they translate in some interesting ways to similar debates in sociology, not least because the primary definition of action found in sociology comes from Max Weber, i.e., “the lawyer as social thinker” (Turner and Factor 1994). Turner and Factor reveal that the predominant action-accounting scheme in sociology (the Weberian one) was essentially created by Weber’s “transformation of the categories of legal science into the basic categories of sociology” (Turner and Factor 1994: 1). Weber, of course, passed the German Referendar examination, and both his dissertation and Habilitationsschrift were on the history of law (medieval commercial law and Roman agrarian law, to be exact). It makes sense, then, that his approach to sociological categories (like action) should be situated against the “significant prehistory in the legal writing of his own time,” in particular the work of the legal philosopher Rudolf von Ihering (see Lizardo and Stoltz 2018).

Much of Weber’s theoretical legacy repurposed the conceptual frameworks of legal science in order to strip sociology of any strong “social” theory, removing emphasis from collectivities, social forces, developmental principles, and social evolution. In their place, he relied on a probabilistic causality contingent on subjective meaning, and on ideal-typical concept formation that involved “redefinition and substitution.” The goal was to devise an approach and categorical framework that would “eliminate questions that require an ‘ultimate cause.’” All of this situates Weber’s category of action in a legal tradition, because it is marked as a category targeting the causal responsibility carried by individual actors, assigned in an attributivist manner not unlike what happens in a courtroom, where legal practitioners reconstruct a line of action and attribute responsibility using the “language of the lawyer” (Turner and Factor 1994: 5).

As Omar and I argued (Strand and Lizardo 2015), Weber’s “basic sociological categories” approach to action remains the genealogical seed of present-day theories of action in sociology, which differ only at the margins, whether interpretivist, rational choice, or the DBO model. If Weber himself was essentially doing legal philosophy when he defined his category of action, then it seems worthwhile to examine what neuroscience means for the law, and whether any lessons can be learned from a (weak) comparison between sociology and the law as loosely allied fields. Nowhere does this seem more true than in the treatment of subjective meaning as the best way of attributing causal responsibility (as Weber himself advocated).

While modern legal systems vary a great deal in their traditional practices (e.g., Napoleonic versus common-law traditions), the basic concepts are surprisingly general and span different legal systems, as they all revolve around the attribution of responsibility and liability (Hart 2008). For our purposes, the most important facet of the law is that it features (as it did for Weber) an “act requirement,” which basically means that the only things that can count as illegal are actions. Legal codes are effectively long lists of “illegal” actions (about 7000 of them in the US federal legal code). Even a non-action must be made to resemble an action (by featuring a cognitive state at the very least) in order to be illegal.

How does the law define action? For the law, the most important requirement is that an action have an agent, and the only class of agents that counts as legitimate for the law is human agents (not “autonomous agents,” non-human animals, acts of God, etc.; Hage 2017). This is important because legally defining human agency requires applying a framework that gives access to the mental states making person X the agent causally responsible for illegal act Y. While the act itself is illegal apart from this agent, it is only this agent that makes this action an action within a worked-out legal frame. For an illegal action to count as an action, then, three main mental states must be attributable to the responsible agent, serving as legally acceptable “dispositions” that do the crucial job of linking him or her to the illegal act: “motivational states of desire, wish or purpose; cognitive states of belief; conative states of intending or willing” (Moore 1993: 3).

The focus on linking an agent to an action is the most important and (for my argument) most theoretically interesting part of how modern legal frameworks attribute agency. The fundamental, and seemingly unconventional, term underwriting this is, of course, “causal responsibility.” As Hart (2008 [1968]) notes, the law recognizes different types of responsibility and liability: in addition to causal responsibility, there is role-responsibility, liability-responsibility, and capacity-responsibility. Yet in every case, liability and responsibility can be assigned only if an “act occurred.” And in order for an act to have occurred, there must be an agent linked to it, and that agent must have had a motivational state, a cognitive state, and a conative state in order to be linked to it.

So what does neuroscience mean for the law? It potentially means that the conventional and critically important link between agents and actions (an analogue of Weberian subjective meaning) in modern law is scrambled beyond recognition. Any skepticism about folk psychological states creates problems if, as the argument goes, acts (not agents) are illegal, but those acts require an agent-linkage that works only through attributed folk psychological states. Neuroscience potentially makes “acts” legally unrecognizable by jeopardizing the logical connection between agents and actions that takes standard form in a belief/desire deduction. Since this connection is established in the “grammar of the law” (Boltanski 2014), it means that criminal responsibility can no longer be adequately assigned by attributing states of mind.

And that is exactly the problem as Morse (2008) sees it, because neuroscience promotes a sort of “no action thesis” in response. Namely, “the truth of [neural] determinism is consistent with the existence or non-existence of agency, with the causal role or non-causal role of mental states in explaining behavior. Responsibility depends on agency, on the causal role of mental states, and the new discoveries arguably deny the possibility of agency as it is traditionally conceived.” Thus, neuroscientific correlates can make it seem as if the act did not occur because of an agent. The basic problem is that the legal “reality of the act” is now independent from what the law can recognize as the “agent of the act.”

Ultimately Morse defers to something like Daniel Dennett’s “intentional stance” (1987) as a deflationary move, which foregrounds the sheer pragmatic value of attributive styles that are (now) mainly conventional by comparison. This is a safe solution and, for him, it is the most likely solution, even if neuro-law is here to stay. A revolutionary displacement in law will not occur, at least not anytime soon, for reasons not the least of which have to do with the heavy weight of judicial precedent. Legal traditions consistently outrun the introduction of new explanatory frames.

It still seems reasonable to ask whether a tour down this rabbit hole has any bearing on the way sociologists explain action, the historical Weber connection notwithstanding. Turner and Factor (1994) argue that there is a significant difference in at least one critical respect: lawyers are constrained by the “dogmatic framework of the law” in attributing responsibility for illegal acts. The sociologist does not have exactly the same burden, but tries to satisfy instead a “conceptual framework of the audience … a shifting, ‘eternally young’ framework” (e.g. what Weber called the “language of life”). In a very conventional sense, this runs up against determinism of a different sort (“social determinism”). But this could actually leave sociology positioned to translate neuroscience into action accounting schemes that embrace a “no action thesis” and do not try to work around it by conventionalizing certain frameworks.

Stephen Turner’s new book (2018) introduces the catchy phrase “verstehen bubble.” One application of it could very well be to entire fields that become trapped within the circular limits of categories that enable both communication and introspection, making phenomena unrecognizable or incommunicable except as tokens of self-reinforcing or looping types. Presumably, not many fields feature both classes of categories, but as the above discussion suggests, a verstehen bubble seems to characterize the law, while sociology is arguably less prone by comparison. Its categories of communication (at least) can find the language of intention, belief, and desire problematic, though counter-vocabularies are either very carefully partialized (see Kurzman 1991) or even combined with introspective categories (becoming a paranoid style [Boltanski 2014]). Nevertheless, sociology’s unique position as both introspective and communicative would, perhaps by that fact alone, make it a venue for producing “surrogates” (in Turner’s terms) that reach toward an explanatory domain located somewhere beyond the verstehen bubble.

 

References

Boltanski, Luc. (2014). Mysteries and Conspiracies. London: Polity.

Dennett, Daniel. (1987). The Intentional Stance. MIT Press.

Hage, Jaap. (2017). “Theoretical Foundations for the Responsibility of Autonomous Agents.” Artificial Intelligence and the Law 25: 255-271.

Hart, HLA. (2008). Punishment and Responsibility: Essays in the Philosophy of Law. Oxford UP.

Kurzman, Charles. (1991). “Convincing Sociologists: Values and Interests in the Sociology of Knowledge” pp. 250-271 in Ethnography Unbound. UC Press.

Lizardo, Omar and Dustin Stoltz (2018). “Max Weber’s Ideal versus Material Interests Revisited.” European Journal of Social Theory 21: 3-21.

Morse, Stephen. (2008). “Determinism and the Death of Folk Psychology: Two Challenges for Responsibility from Neuroscience.” Minnesota Journal of Law, Science and Technology 9: 1-36.

Morse, Stephen. (2015). “Neuroscience, Free Will and Criminal Responsibility.” in Free Will and the Brain. Cambridge UP.

Strand, Michael and Omar Lizardo. (2015). “Beyond World Images: Belief as Embodied Action in the World.” Sociological Theory 33: 44-70.

Turner, Stephen. (2018). Cognitive Science and the Social: A Primer. Routledge.

Turner, Stephen and Regis Factor. (1994). Max Weber: The Lawyer as Social Thinker. London: Routledge.

 

 

Bourdieu as a (Hetero)phenomenologist

Toward the beginning of Pierre Bourdieu’s newly published 1999–2000 lectures at the Collège de France (Manet: A Symbolic Revolution), a perplexed art student asks Bourdieu the following question, one that the verbatim lectures show to have worried and preoccupied the great theorist of practice:

Your recent lectures have astonished me. For when you speak of developing a theoretical approach to art, taking note not of the intentions of the artist but of his dispositions, this implies that the artist cannot be the author of a theoretical work, as Kandinsky was, because he cannot be conscious of the dispositions that he has acquired unconsciously. As an artist, I find this point of view difficult to accept, for I see in it a risk of alienation that is heavy with consequences and, in this case, must I conclude that I am unable to write a thesis? (Bourdieu 2017: 60)

This question comes after Bourdieu makes claims like the 19th century French artist Edouard Manet included “explosive” qualities in his work “without necessarily being aware of it” (27), that if we were to ask about any Manet painting “Did Manet consciously want to do that and did he premeditate what he put into that painting?” the answer must be an unambiguous “no” (45), and finally, as Bourdieu claims, “What I am describing here is … not the conscious mind of the painter … [but] a painter who finds himself practically engaged with the creation of his picture. He has, then, a practical intention, which is not at all his conscious, premeditated intention” (49-50). No wonder the student was perplexed.

The Manet that appears in Bourdieu’s lectures is not a great conversationalist and really has no unique insight into the nature of his craft. Truly as dumb as a painter, as the old saying goes. If, on a given day, you were to ask Manet “what are you doing?” he would very likely give a most honest reply: “I’m painting a picture.” As Bourdieu continues, “He might even have said a little more, for example: ‘I want to paint something that’ll be a work of art. I want to show that you can make something modern out of a classical model.’” Bourdieu believes that a similar boring answer would be given by Zola or [fill in the blank] writer of renown, or really any one of us as we engage in any activity (I am writing a blog post. I’m not quite sure what it is about. I believe something about Bourdieu). “But that does not make [Manet or Zola] or us either totally unconscious automata, or perfectly lucid subjects” (2017: 49).

What is so interesting about these lectures, and about Bourdieu’s other late work on art (1996), is that here we see practice theory applied for phenomenological purposes, seeking a reconstruction of direct experience, but in a way that, I want to claim, fits the profile of a true heterophenomenology (Dennett 1991; 2003), which I’ll explain. Bourdieu wants to put himself and his readers in the artist’s shoes. He wants to know what it was like to be in the world from the “artist’s point of view,” to confront the pragmata they confronted, their problems and their solutions to those problems, which he ultimately claims is the legacy they leave behind for other practitioners in a field to recapitulate (e.g., “Beethoven’s solution” or “Manet’s solution”). An opus infinitum, as Bourdieu confesses, and on several occasions in the lectures he admits his embarrassment at what he does not know and never will. Yet there is merit in the task, as he explains, especially when the topic is art, or “pure practice without theory” (as Durkheim said).

Heterophenomenology is a coinage of the philosopher Daniel Dennett, and my argument is that it is particularly useful for understanding exactly what Bourdieu is up to in this work, as an especially relevant application of practice theory. As Dennett claims, heterophenomenology is, quite basically, the “phenomenology of another not oneself” (2003: 19). It is therefore a third-person accounting scheme (e.g., “Manet does”), but one that takes the “first person point of view [e.g., “I did”] as seriously as it can be taken.” What this means is that a heterophenomenologist (a mouthful; hereafter, HPist) understands that answers to questions that seemingly give access to dimensions of first-person experience (“What were you thinking when you painted X?”) are actually third-person investigations that invoke a person’s verbal capacity to link an internal state to a proposition (“I was thinking that …”). An HPist does not dismiss that kind of data, but neither does she take it at face value.

Rather, the job of the HPist is comprehensive. It involves generating a “fictional world … populated with all the images, events, sounds, smells, hunches, presentiments and feelings that the subject sincerely believes to exist in his or her (or its) stream of consciousness. Maximally extended, it is a neutral portrayal of exactly what it is like to be that subject” (Dennett 1991: 99).

The catch is that even though the HPist trusts the person, and “maintains a constructive and sympathetic neutrality” toward what people say about their experience and their action, this does not make those people the authority on the truth of the experience. What the world is like to them is at best an “uncertain guide” to what is going on for them as they experience and act. But neither does this mean that they are zombies (Dennett) or unconscious automata (Bourdieu). The point is that the HPist tries to compile a definitive description of the world according to subjects, but uses everything in that description as primary data, part of a “fictional world,” that is just the beginning of the analysis. It is what must be explained.

Dennett (2003) demonstrates this using an example from a famous experiment by Roger Shepard and Jacqueline Metzler (1971). Shepard and Metzler asked subjects whether the two drawings below (Figure 1) were of different objects or just different views of the same object. Nearly every subject reported that they solved the puzzle by rotating one of the figures in their “mind’s eye” to make a comparison. But were they really doing this mental rotation? If we were to ask a hundred people (including me!), they would probably all say that, yes, they mentally rotated the object on the right in order to answer that these are indeed two different views of the same object. The phenomenologist would be satisfied with this. The HPist, however, is more intrigued by “mental rotation” as a belief about conscious experience. She willfully submits that what this experience is like for subjects could have nothing to do with what is going on for subjects as they have this experience. But we first need to know that “mental rotation” is what the experience is like for them (see Pylyshyn 2002; Foglia and O’Regan 2016 for literature on mental imagery).

 

Figure 1. The two perspective drawings from Shepard and Metzler (1971)

Dennett’s HPist strategy bears a none too faint resemblance to the method Bourdieu uses in answering the art student and in his remarkable analysis (68-72) of Manet’s painting Luncheon on the Grass (1863). Here, he attempts to put himself and his audience in Manet’s “historical place … a kind of imaginary reconstruction [of] Manet’s point of view as he was engaged in producing the Luncheon” (2017: 69). What Manet does here, on Bourdieu’s reading, is essentially create a tableau vivant, one of those staged scenes of groups of people arranged in poses that were so fashionable under Napoleon III. But he does something more than this. Bourdieu’s analysis is too detailed to reproduce in its entirety (even though he declares his shame at lacking the “necessary competence to do it”). I’ll just point to two examples that resemble a heterophenomenology, ones that relate specifically to mental imagery.

First, Manet’s “symbolic revolution” is found in microcosm in the Luncheon because he had placed himself in an impossible situation in relation to his models: “he got [them] to adopt a classical pose, but through [their] clothing and his pictorial manner, he gave [them] modern connotations” (2017: 70). Did he intend to do this? Bourdieu says no. And to build that case, he argues that Manet effectively “pictured in his mind’s eye” a work he had seen at the Louvre (Marcantonio Raimondi’s The Judgment of Paris, specifically the group of figures in the bottom right corner).

Second, the woman in the background is a mystery. The kind of wallpaper effect she creates suggests Japanese art as a model. But Bourdieu argues that Manet likely had a work of “[Jean-Antoine] Watteau in mind” (perhaps The Feast of Love, the woman in the right-hand background), a painter he admired a great deal. He puts this woman in as a “nod to Watteau … [that] closes the triangle of the three figures … She is treated differently: the brushstrokes are light, as in the style of Watteau” (71).

Luncheon on the Grass, Manet, 1863

The Judgment of Paris, Raimondi, ca. 1510-1520

The Feast (or Festival) of Love, Watteau, 1718-19

As Bourdieu finishes this “very strange exercise,” he again confesses his shame at doing it, because he does not have the necessary competence. The problem is that those who do have the competence (art historians) don’t think of doing it and remain instead in the role of “interpreter, observer and analyst.” The question here is not a matter of “influence.” It is rather the HPist’s recognition that what painting the Luncheon was like for Manet gives no special insight into what was going on for him as he painted.

According to Bourdieu, what was going on for Manet was practice. This is particularly true because Manet was never one to serve as a Cartesian observer of his own interior processes, and he usually gave boring, trite, clichéd answers about his revolutionary work, including to Zola while painting his portrait. He seemed to be a zombie when he spoke about painting, though of course he was anything but. That in itself is data to the HPist, as it suggests a dimension of experience that runs far beyond what we can access simply by recording verbalized beliefs about that experience.

At the end of this long examination, motivated by the art student who is despairing at the thought that his first-person experience may not have complete authority, Bourdieu finally gets around to answering him in the best way he can, with a kind of practical charity principle:

[Manet] was not entirely sure of what he wanted to do: he had a vague plan … And it was in the process of doing it that he found out what he wanted to do, as we do ourselves when we do something … To reply once again to the question put to me, which has caused me to slow down and fill out my argument: one can be very intelligent with one’s body, without using language, of course, when one is a dancer or pianist. Having said this, there is the particular problem of the executant as opposed to the person who produces his own text, or his own work of art. So there is an intelligence of the body … Painters know how to adopt towards painting a viewpoint that is a practical understanding; they have a practical perception which is based on know-how … they understand a savoir-faire as savoir-faire, and don’t write lectures on it. (73)

 

References

Bourdieu, Pierre. (2017). Manet: A Symbolic Revolution. London: Polity.

Bourdieu, Pierre. (1996). The Rules of Art. Stanford: Stanford UP.

Dennett, Daniel. (1991). Consciousness Explained. Boston: Back Bay Books.

Dennett, Daniel. (2003). “Who’s on First? Heterophenomenology Explained.” Journal of Consciousness Studies 10: 19-30.

Foglia, Lucia and Kevin O’Regan. (2016). “A New Imagery Debate: Enactive and Sensorimotor Accounts.” Review of Philosophy and Psychology 7: 181-196.

Pylyshyn, Zenon W. (2002). “Mental Imagery: In Search of a Theory.” Behavioral and Brain Sciences 25: 157-182.

Shepard, Roger and Jacqueline Metzler. (1971). “Mental Rotation of Three-Dimensional Objects.” Science 171: 701-703.