The Evocation Model of Framing

In a forthcoming article, my coauthors and I outline what we call an “evocation model” of framing, in which a frame, understood as a situated assemblage of material objects and settings (i.e., a form of public culture), activates schemas, understood as flexible, multimodal memory structures (i.e., a form of personal culture), evoking embodied responses (Wood et al. 2018). In this post, I will discuss several empirical examples from conceptual metaphor research that are consistent with our model and expand it in promising ways.

Conceptual Metaphor Theory

Conceptual Metaphor Theory (CMT) asserts that much of our reasoning about abstract concepts is based on analogical mapping, whereby some more familiar source is used to understand and make inferences about some less familiar target. For example, Lakoff (2008:383) argues that people typically understand anger metaphorically as a hot fluid in a container. Following this metaphorical mapping, the body is understood as a container for emotions, emotions are understood as substances, and anger itself is understood as a heated substance. This metaphorical mapping is manifest linguistically in phrases such as “you got my blood boiling,” “she was fuming,” and “he’s really steamed up.” It is also often manifest visually in similar ways. Consider, for example, the depiction of anger in the movie Inside Out: Anger is red, boxlike, and at times literally explodes with fire. This particular conceptual metaphor is extremely common across different cultures (Talebi-Dastenaei 2015).

Metaphors and Schemas

The mapping of a source onto a target in CMT may also be described as the activation of a particular schema in relation to a task. In some cases, such as describing the experience of anger, there is one dominant schematic network guiding meaning construction. In other cases, there are multiple accessible schemas (multiple sources) that could easily be activated for a specified task. In our paper, we argue that framing is the process by which a frame (understood as an assemblage of material objects that may include anything perceptible, such as text, sounds, visible objects, smells, etc.) activates schemas, and this activation evokes a particular response. A quickly expanding field of experimental research on CMT supports this model.

Schematic Activation and Reasoning about Crime

Thibodeau and Boroditsky (2011) find that activating particular schemas over others when describing a social problem affects the kinds of solutions people propose to address it. In one experiment, they told two groups of participants about increasing crime rates in the fictional city of Addison, gave relevant statistics, and asked for possible solutions. They described crime in Addison metaphorically as a beast preying on the city to the first group, and as a virus infecting the city to the second group. Remarkably, despite having the same crime statistics, individuals in the different groups clustered around different solutions: “When crime was framed metaphorically as a virus, participants proposed investigating the root causes and treating the problem by enacting social reform to inoculate the community, with emphasis on eradicating poverty and improving education. When crime was framed metaphorically as a beast, participants proposed catching and jailing criminals and enacting harsher enforcement laws.” Additionally, Thibodeau and Boroditsky found that participants were unaware of the role of the metaphorical framing in their own thinking–both groups believed their solutions were rooted solely in the available data–suggesting that the framing effect was covert.

These findings suggest that when multiple schemas may be fittingly activated to support reasoning, the schema that is activated may in large part determine the conclusions people reach. Recent sociological work on schematic understandings of poverty reaches a similar conclusion (Homan, Valentino, and Weed 2017).

Schematic Activation and Creative Thinking

In some cases, schematic activation influences cognitive performance rather than predisposing someone to one outcome over another. For example, Leung et al. (2012) identify several metaphors that express creative thinking–considering a problem “on one hand, and then the other,” “thinking outside the box,” and “putting two and two together”–and ask whether physically embodying these metaphors actually makes people more creative. In a series of studies, Leung et al. had participants perform different tasks measuring convergent thinking (“the search for the best answer or the most creative solution to a problem”) or divergent thinking (“the generation of many ideas about and alternative solutions to a problem”) in either a control or an experimental condition. Participants in the experimental condition for the “thinking on one hand, and then the other” metaphor were asked to generate ideas for using a campus building while holding out one hand and pointing to a wall, then switching hands, pointing to the opposite wall, and generating more ideas. Leung et al. found that participants in the experimental condition generated more ideas (evidence of higher divergent thinking) than those in the control conditions.

To test the “thinking outside the box” metaphor, participants were assigned to perform a convergent thinking task in one of three conditions: sitting inside a 5×5-foot box constructed from PVC pipe and cardboard, sitting outside the box, or sitting in a room without a box present. Leung et al. found that participants in the outside-the-box condition generated more correct answers than those in either of the other two conditions–literally thinking outside the box seems to have helped them think outside the box metaphorically. In a related study, they had other participants perform a divergent thinking task while either walking freely, walking in a fixed rectangular path, or not walking at all. Here they found that participants who walked freely generated more new ideas.

Together, these findings highlight the subtle influence of one’s environment on cognitive performance. While framing in sociology is typically understood as influencing what people think, it may be beneficial to also consider how certain frames facilitate or inhibit particular cognitive tasks.

Schematic Matching and Evaluating Drug Effectiveness

Keefer et al. (2014) demonstrate an extension of our framework with their theory of “metaphoric fit.” The authors argue that when people evaluate the effectiveness of an abstract solution to an abstract problem, they are more likely to evaluate the solution positively if the problem and the solution are understood via the same metaphors (i.e., the same schemas are activated in relation to both). They test this with a series of experiments about a fictional drug proposed to treat depression. In one experiment, they described the drug “Liftix” (the solution) to participants with vertical metaphors (e.g., “has been shown to lift mood”; “patients everywhere have reported feeling uplifted”). Two groups were given this same description of Liftix, but each group received a different description of depression (the problem). In one condition, participants were given a description of depression that activated the same verticality schema (“Depressed individuals feel that while other people’s lives have both ups and downs, their life has considerably more downs”). In the other condition, depression was described more literally (“Depressed individuals feel that while other people’s lives have both positive and negative periods, their life has considerably more negative periods”). Both groups then rated how effective they thought Liftix would be. Participants in the metaphor-matching condition were more likely to give Liftix a higher rating. The authors replicated the experiment by activating LIGHT/DARK rather than UP/DOWN schemas and found the same results. They also replicated the experiment by activating these schemas visually rather than linguistically, and again saw the same outcomes.

This study suggests that the evocation of a particular response may be the result not of activating a particular schema alone, but of the interrelations among activated schemas. As such, it offers an intriguing expansion of our model and suggests that a more relational schematic analysis may sometimes be necessary.

Conclusion

A growing body of experimental research supports the core of our evocation model of framing. In various ways, the physical environment may be manipulated to activate particular schemas or combinations of schemas, and this activation evokes particular responses. In some cases, this activation may affect what people think, and in other cases, how well they think.

Although each of the studies I cited here is experimental, I note that the analysis of schemas, frames, and framing need not be limited to experiments. For example, a researcher might wish to know the variety of ways people schematically understand a concept before constructing an experiment, as Homan et al. (2017) do in their study of poverty. Alternatively, a researcher may lean on established experimental results to make inferences about the consequences of observed frames “in the wild.” Beyond this, research may also focus on the development and diffusion of particular models of frames, as we discuss in the forthcoming paper. The bottom line is that experimental work has been helpful in providing empirical support for the basic theoretical framework, but researchers should consider experimental research as just one piece of a larger puzzle.

References

Gibbs, Raymond W., Jr. 2017. Metaphor Wars: Conceptual Metaphors in Human Life. Cambridge: Cambridge University Press.

Homan, Patricia, Lauren Valentino, and Emi Weed. 2017. “Being and Becoming Poor: How Cultural Schemas Shape Beliefs About Poverty.” Social Forces 95(3):1023–48.

Keefer, Lucas A., Mark J. Landau, Daniel Sullivan, and Zachary K. Rothschild. 2014. “Embodied Metaphor and Abstract Problem Solving: Testing a Metaphoric Fit Hypothesis in the Health Domain.” Journal of Experimental Social Psychology 55:12–20.

Lakoff, George. 2008. Women, Fire, and Dangerous Things. University of Chicago Press.

Leung, Angela K. Y. et al. 2012. “Embodied Metaphors and Creative ‘Acts.’” Psychological Science 23(5):502–9.

Talebi-Dastenaei, Mahnaz. 2015. “Ecometaphor: The Effect of Ecology and Environment on Shaping Anger Metaphors in Different Cultures.” Retrieved (http://ecolinguistics-association.org/download/i/mark_dl/u/4010223502/4625423432/TalebiEcology_and_anger_metaphorsFINAL.pdf).

Thibodeau, Paul H. and Lera Boroditsky. 2011. “Metaphors We Think With: The Role of Metaphor in Reasoning.” PLoS ONE 6(2):e16782.

Wood, Michael Lee, Dustin S. Stoltz, Justin Van Ness, and Marshall A. Taylor. 2018. “Schemas and Frames.” Retrieved (https://osf.io/preprints/socarxiv/b3u48/).

Connectionism: Alternatives to the Modular Brain, Part I

In my previous post, I introduced the task of cognitive neuroscience, which is (largely) to locate processes we associate with the mind in the structures of the brain and nervous system (Tressoldi et al. 2012). I also discussed the classical and commonsensical approach, which conceptualizes the brain–mind relationship by analogy to computer hardware and software: distinct physical modules in the brain run operations on a limited set of innate codes (not unlike binary code) to produce outputs. One problem I discussed with this approach is theoretical: the grounding problem.

Another objection is empirical. If there is a strict relationship between functional modularity and structural modularity, then researchers using brain imaging technology should be able to identify these modules in neural architecture with some consistency across persons. However, researchers do not find such clear evidence (Genon et al. 2018). For example, some of the researchers who pioneered brain imaging techniques, specifically positron emission tomography (PET), attempted to locate three components of the “reading system” (orthography, phonology, and semantics) (e.g., Petersen et al. 1989). A decade later, researchers continued to disagree as to where the “reading system” is located (Coltheart 2004).

Part of the problem may be methodological: the technology remains rudimentary, and advances come with tradeoffs (Turner 2016; Ugurbil 2016). fMRI is the most common technique used in research, and high-resolution machines can measure blood flow in voxels (3-dimensional pixels) that are about 1 cubic millimeter in size. With an average of 86 billion neurons in the human brain (Azevedo et al. 2009), there are on the order of 100,000 neurons in one voxel (although neurons vary widely in size and structure—see NeuroMorpho.org for a database of about 90,000 digitally reconstructed human and nonhuman neurons), and each neuron has hundreds to thousands of synapses connecting it (with varying strengths) to neighboring neurons. To interpret fMRI data, researchers average neuronal activity within each voxel, using the kinds of statistical techniques familiar to many sociologists, and must extract signal from noise. It is therefore important to bear in mind that, as with all inferential analyses, findings are provisional.
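
To get a feel for the scale mismatch, here is a back-of-the-envelope sketch in Python. The neuron count is the Azevedo et al. (2009) figure; the whole-brain volume and the synapses-per-neuron midpoint are round numbers I am assuming for illustration.

```python
# Rough arithmetic for what a single 1 mm^3 fMRI voxel averages over.
total_neurons = 86e9        # average human brain (Azevedo et al. 2009)
brain_volume_mm3 = 1.2e6    # assumed whole-brain volume, ~1,200 cm^3
voxel_volume_mm3 = 1.0      # high-resolution fMRI voxel

neurons_per_voxel = total_neurons / brain_volume_mm3 * voxel_volume_mm3
print(f"~{neurons_per_voxel:,.0f} neurons per voxel")        # ~71,667

synapses_per_neuron = 1_000  # assumed midpoint of "hundreds to thousands"
synapses_per_voxel = neurons_per_voxel * synapses_per_neuron
print(f"~{synapses_per_voxel:,.0f} synapses per voxel")      # ~71.7 million
```

Whatever the exact numbers, a single voxel’s signal averages over tens of thousands of neurons, which is why inferring function from fMRI data is an exercise in extracting signal from noise.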

Connectionism in Linguistics and Artificial Intelligence

Even if non-invasive imaging resolution were extended to the neuronal level in real time, it may be that there are no special-purpose brain modules to be discovered. That is, it may be that cognitive functions are distributed across the brain and nervous system, in perhaps highly variable ways. Such an alternative relies on a network perspective and has many potential forebears, including Aristotle, Hume, Berkeley, Herbert Spencer, and William James (Medler 1998).

Take, for example, Paul Broca and Carl Wernicke’s work on aphasia in the late 19th century. Noting the varieties of aphasia, or the loss of the ability to produce and/or understand speech or writing, Lichtheim (1885) concluded, following the work of Wernicke and Broca, that different aspects of language (i.e., speaking, hearing speech, understanding speech, reading, writing, interpreting visual language) are associated with different areas of the brain, but connected via a neural network. Interruption along any one of these pathways can account for observations of the many kinds of aphasia.

Figure from Lichtheim (1885:436), demonstrating the pathways connecting concepts (B) to “auditory images” (A) and “motor images” (M), each of which might be disrupted, causing a specific kind of aphasia.

If language were produced by a discrete module, one would predict global language impairment, not the piecemeal deficits observed. Thus, this work developed the notion that so-called psychological “faculties” like language were distributed across areas of the brain. Following the logic of such evidence, an alternative perspective, later referred to as connectionism, argues that the brain has no discrete functional regions and does not operate on symbols in a sequential process like a computer, but rather is a distributed neural network that operates in parallel.

The connectionist approach (also called parallel distributed processing or PDP) coalesced primarily around the PDP Research Group, led by David Rumelhart and James McClelland at the Institute for Cognitive Science at UC-San Diego, as an alternative to the generative grammar approach to modeling brain activity. In particular, the publication of Parallel Distributed Processing in 1986 marked the beginning of the contemporary connectionist perspective.

A key difference from prior computational approaches is that connectionist theories dispense with the analogy of mind as software and brain as hardware. Mental processes are not encoded in some language of thought or translated into neural architecture; they are the neural networks. Furthermore, unlike Chomsky’s generative grammar, a connectionist approach to language can better account for geographical and/or sociological variation—dialects, accents, vocabulary, syntax—within what is commonly considered the “same” language. This is because learning (from a connectionist perspective) plays a key role in both language use and form, and thus is easily coupled with, for example, practice-theoretic approaches that reconceptualize folk concepts, like beliefs, as a species of habit.
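
To make the contrast concrete, here is a minimal sketch in Python of a PDP-style network. The task and the numbers are invented for illustration; the point is that everything the network “knows” lives in a matrix of graded connection weights shaped by learning, with no stored symbols or rules.

```python
import numpy as np

rng = np.random.default_rng(0)

# Four input patterns and their target outputs (an arbitrary toy task).
X = np.array([[1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 1, 0, 0],
              [0, 0, 1, 1]], dtype=float)
T = np.array([[1, 0],
              [0, 1],
              [1, 1],
              [0, 0]], dtype=float)

W = rng.normal(scale=0.1, size=(4, 2))   # connection weights: the "knowledge"
lr = 0.1                                 # learning rate

for _ in range(500):
    Y = 1 / (1 + np.exp(-X @ W))         # all units activate in parallel
    W += lr * X.T @ (T - Y)              # delta rule: nudge weights by error

print(np.round(1 / (1 + np.exp(-X @ W)), 2))  # outputs now approximate T
```

Nothing in `W` corresponds to an explicit rule; the same weights participate in every mapping, which is what “distributed” means in this tradition.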

Take, for example, Basil Bernstein’s pioneering work on linguistic variation across class in England (1960). He demonstrated that, independent of non-verbal measures of intelligence, those in the middle class would use a broader range of vocabulary (and therefore would score higher on verbal measures of intelligence) because elaborating one’s thoughts (and talking about oneself) was an important practice (and therefore habit) for the middle class, but not for the working class. As Bernstein summarized, “The different vocabulary scores obtained by the two social groups may simply be one index, among many, which discriminates between two dominant modes of utilizing speech” (1960:276).

Connectionism and Cognitive Anthropology

By the 1960s, cognitive anthropologists were beginning to see problems with modeling culture using techniques like componential analysis (a technique borrowed from linguistics; see Goodenough 1956), which followed a decision-tree, or “checklist,” logic. It is here that a small theory group in cognitive anthropology—the “cultural models” school surrounding Roy d’Andrade, first at Stanford in the 1960s and then at UC-San Diego in the 1970s—informally circulated a working paper written by the linguist Charles Fillmore (while at Stanford) in which he outlined “semantic frames” as an alternative to checklist approaches to word meanings. In another informally circulated paper, “Semantics, Schemata, and Kinship,” referred to colloquially as “the yellow paper” (Quinn 2011:36), the anthropologist Hugh Gladwin (also at Stanford) made a similar argument: rather than explain the meaning of familial words like “uncle” in minimalist terms, anthropologists should consider how children acquire a “gestalt-like household schema” within which “uncle” fits.
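
A toy sketch may help illustrate the “checklist” logic being rejected. The feature set below is a textbook-style simplification invented for illustration, not Goodenough’s (1956) actual analysis.

```python
# Componential analysis as a checklist: a kinship term denotes whatever
# satisfies a fixed conjunction of discrete semantic features.
def is_uncle(relative):
    return (relative["sex"] == "male"
            and relative["generation"] == 1    # one generation above ego
            and not relative["lineal"])        # collateral, not a parent

print(is_uncle({"sex": "male", "generation": 1, "lineal": False}))  # True
print(is_uncle({"sex": "male", "generation": 1, "lineal": True}))   # False
```

The cultural-models objection is that a close family friend whom a child calls “uncle” fails every such checklist yet fits comfortably within the gestalt household schema; meaning tracks the schema, not the feature list.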

However, it wasn’t until these cognitive anthropologists paired this new concept of cultural schemas with connectionism that, according to Roy d’Andrade (1995) and Naomi Quinn (2011), a paradigm shift occurred in cognitive anthropology in the 1980s and 1990s. Quinn recalls that the chapter “Schemata and Sequential Thought Processes in PDP Models,” in the second volume of Rumelhart et al.’s 1986 book, gave the schema a “new and more neurally convincing realization as a cluster of strong neural associations” (Quinn 2011:38).
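
The flavor of that realization can be conveyed with a minimal Hopfield-style sketch in Python: a pattern stored as a cluster of strong pairwise associations can be reinstated from a partial cue. This is an illustrative toy under my own assumptions, not a model from Rumelhart et al. (1986).

```python
import numpy as np

pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])  # the stored "schema"
W = np.outer(pattern, pattern)                    # Hebbian associations
np.fill_diagonal(W, 0)                            # no self-connections

cue = pattern.astype(float)
cue[:3] = 0                                       # degrade the cue

state = cue
for _ in range(5):                                # settle into the attractor
    state = np.sign(W @ state)

print(bool((state == pattern).all()))             # True: the schema fills in
```

The “schema” here is nothing over and above the weight matrix: activate part of the cluster and the strong associations complete the rest.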

Beyond d’Andrade and his students and collaborators like Quinn and Claudia Strauss at Stanford, Edwin Hutchins, who also worked closely with Rumelhart and McClelland’s PDP Research Group, was instrumental in extending connectionism from the individual brain to the social group with his concept of “distributed cognition.” Independently of this US West Coast cognitive revolution, the British anthropologist Maurice Bloch was one of the first to recognize the importance of connectionism for anthropology, beginning with his essay “Language, Anthropology and Cognitive Science” (1991), in which he criticized his discipline for relying on an overly linguistic conceptualization of culture (a criticism which applies with full force to contemporary cultural sociology).

In a follow-up post, I will consider more recent advances in understanding the brain-mind relationship, specifically the concept of “neural reuse,” and assess the connectionist model in light of this work.

References

d’Andrade, Roy G. 1995. The Development of Cognitive Anthropology. Cambridge University Press.

Azevedo, Frederico A. C. et al. 2009. “Equal Numbers of Neuronal and Nonneuronal Cells Make the Human Brain an Isometrically Scaled-up Primate Brain.” The Journal of Comparative Neurology 513(5):532–41.

Bernstein, Basil. 1960. “Language and Social Class.” The British Journal of Sociology 11(3):271–76.

Bloch, Maurice. 1991. “Language, Anthropology and Cognitive Science.” Man 26(2):183–98.

Coltheart, Max. 2004. “Brain Imaging, Connectionism, and Cognitive Neuropsychology.” Cognitive Neuropsychology 21(1):21–25.

Genon, Sarah, Andrew Reid, Robert Langner, Katrin Amunts, and Simon B. Eickhoff. 2018. “How to Characterize the Function of a Brain Region.” Trends in Cognitive Sciences.

Goodenough, Ward H. 1956. “Componential Analysis and the Study of Meaning.” Language 32(1):195–216.

Lichtheim, Ludwig. 1885. “On Aphasia.” Brain 7:433–84.

Medler, David A. 1998. “A Brief History of Connectionism.” Neural Computing Surveys 1:18–72.

Petersen, S. E., P. T. Fox, M. I. Posner, M. Mintun, and M. E. Raichle. 1989. “Positron Emission Tomographic Studies of the Processing of Single Words.” Journal of Cognitive Neuroscience 1(2):153–70.

Quinn, Naomi. 2011. “The History of the Cultural Models School Reconsidered: A Paradigm Shift in Cognitive Anthropology.” Pp. 30–46 in A Companion to Cognitive Anthropology.

Rumelhart, David E., James L. McClelland, and the PDP Research Group. 1986. Parallel Distributed Processing. Cambridge, MA: MIT Press.

Tressoldi, Patrizio E., Francesco Sella, Max Coltheart, and Carlo Umiltà. 2012. “Using Functional Neuroimaging to Test Theories of Cognition: A Selective Survey of Studies from 2007 to 2011 as a Contribution to the Decade of the Mind Initiative.” Cortex 48(9):1247–50.

Turner, Robert. 2016. “Uses, Misuses, New Uses and Fundamental Limitations of Magnetic Resonance Imaging in Cognitive Science.” Philosophical Transactions of the Royal Society of London. 371(1705).

Ugurbil, Kamil. 2016. “What Is Feasible with Imaging Human Brain Function and Connectivity Using Functional Magnetic Resonance Imaging.” Philosophical Transactions of the Royal Society of London. 371(1705).

The Decision to Believe

As noted in a previous post, there are analytic advantages to reconceptualizing the traditional denizens of the folk-psychological vocabulary from the point of view of habit theory. So far, however, the argument has been negative and high-level; thinking of belief as habit, for instance, allows us to sidestep a bunch of antinomies and contradictions brought about by the picture theory. In this post, I would like to outline some positive implications of recasting beliefs as a species of habit. However, I will begin by discussing other overlooked implications of the picture theory and then (promise) move on to some clear substantive implications of the habit conception.

As noted before, the picture theory of belief is part of a more general set of folk (and even technical) conceptions of how beliefs work. I have already noted one of these: the postulate of incorrigibility. If somebody assents to believing p, then we presume that they have privileged first-person knowledge of that belief. It would be nonsensical (and socially uncouth) for a second person to say to them, “I know better than you on this one; I don’t think you believe p.” Folk Cartesianism thus operates as a philosophical set of tenets (e.g., the idea that we have privileged introspective and maybe even non-inferential access to personal beliefs) and as a set of ethnomethods to coordinate social interaction (accepting people’s claims that they believe something when they tell us so, without raising a fuss).

I want to point to another, less obvious premise of both folk and technical Cartesianism. This is the notion (which became historically decisive in the Christian West after the Protestant Reformation) that you get to choose what you believe. Just like before, this doubles as a philosophical precept and as an ethnomethod used to organize social relations in doxa-centric societies (Mahmood 2011). If you get to choose what you believe, and if your belief is obnoxious or harmful, then you are responsible for your belief and can be blamed, punished, burned at the stake, and so on. As the sociologist David Smilde has also noted, there is a positive version of this implication of folk Cartesianism: if the belief is good for you (e.g., brings with it new friends, behaviors, resources), then we should expect you (under the auspices of charitable ascription) to choose to believe it. However, the weird prospect of people believing something not because they find its truth or validity compelling but for instrumental reasons rears its ugly head in this case (Smilde 2007:3ff., 100ff.).

The idea of choosing to believe is not as crazy as it sounds. At least the negative of it, the idea that we could bring up a consideration (let’s say a standard proposition) and withhold belief from it until we had scrutinized its validity, was central to the technical Cartesian method of doubt. Obviously, this requires that we have some reflective control over our decision to believe something or not while we consider it, so in this respect technical and folk Cartesianism coincide.

As Mike and I discuss in the 2015 paper, rejecting the picture theory (and the associated technical/folk Cartesianism) of belief makes hash of the notion of “choosing to believe” as a plausible belief-formation story. Here the strict analogy to prototypical habits helps. Consider a well-honed habit; when exactly did you choose to acquire it? Even if you made a “decision” to start a new training regimen (e.g., yoga), at what point did it go from a decision to a habit? Did that involve an act of assent on your part? Now consider a traditional belief stated as an explicit linguistic proposition you claim to believe (e.g., “The U.S. is the land of opportunity”). When did you choose to believe that? We suggest that even a fairly informal bit of phenomenology will lead to the conclusion that you do not have credible autobiographical memories of having “chosen” any of the things you claim to believe. It’s as if, as Smilde points out, the original memory of decision is “erased” once the conviction to believe takes hold.

We suggest that the apparatus of erased memories and decisions that may or may not have taken place is an unnecessary outgrowth of the picture theory. Just like habits, beliefs are acquired gradually. The problem is that we take trivial (in the strict sense of trivia) encyclopedic statements (e.g., “Bahrain is a country in the Middle East”) as prototypical cases of belief. Because these can be acquired via fast memory binding after a single exposure, they seem to be the opposite of the way habits are acquired. However, these linguistic-assent-to-trivia beliefs are analytically worthless: if anything like belief plays a role in action, it is unlikely to take the form of linguistic trivia. That we believe (no pun intended) that these types of propositions are “in control” of action is itself an unnecessary analytic burden produced by the picture theory.

Instead, as noted before, a lot of our action-implicated beliefs are clusters of dispositions, not passive acts of private assent to linguistic statements. However, trivia-style beliefs capable of being acquired via a single exposure are the main stock in trade of both the folk idea of belief and the intellectualist strand of philosophical discussion on the topic. Thus, they are important to deal with conceptually, even if, from the point of view of the habit theory, they represent a degenerate case, since from this perspective repetition, habituation, and perseverance are the hallmarks of belief (Smith and Thelen 2003).

That said, what if I told you that the folk-Cartesian notion of deciding to believe is inapplicable even in the case of trivia-style, one-shot belief? This is the key conclusion of what is now the most empirically successful program on belief formation in cognitive psychology. The classic paper here is Gilbert (1991), who traces the idea back to Spinoza, although the subject has been revived in the recent efflorescence of work in the philosophy of belief; see in particular Mandelbaum (2014) and Rott (2017). The latter notes that this was also a central part of the habit-theoretic notion of belief shared by the American pragmatists.

When it comes to one-shot propositions, people are natural-born believers. In contrast to the idea that conceptions are first considered while belief is withheld (as in the Cartesian model), the evidence shows that mere exposure to or consideration of a proposition leads people to treat it as a standing belief in future action and thinking. Thus, people seem incapable of not believing what they bring to mind. While this may seem like a “bug” rather than a feature of our cognitive architecture, it is perfectly compatible with both a habit-theoretic notion of belief and a wider pragmatist conception of mentality, of the sort championed by James, Dewey, and in particular the avowed anti-Cartesian C. S. Peirce. Just as every action could be the first in a long line that will fix a belief or a habit, the very act of considering something makes it relevant for us without the intervention of some effortful mental act of acceptance.

So just like you don’t know where your habits come from, you don’t know where your “beliefs” (in the one-shot trivia sense) come from either. The reason for this is that they got in there without having to get an invitation from you. In the same way, an implication of the Spinozist belief-formation process is that the thing that requires effort and controlled intervention is the withdrawal of belief (which is difficult and resource demanding). This links up the Spinozist belief-formation story with dual process models of thinking and action (Lizardo et al. 2016).

This is also in strict analogy with habit: While lots of habits are relatively easy to form (whether or not desirable) kicking a habit is hard. Even the habits that seem to us “hard” to form (e.g. going to the gym regularly) are not hard to form because they are habits; they are hard to form because they have to contend with the existence of even stronger competing habits (lounging at home) that will not go away without putting up a fight. It is the dissolution of the old habit and not the making of the new one that’s difficult.

So with belief. Beliefs are hard to undo. Once again, because we mistakenly take the trivia one-shot version of belief as the prototype, this seems like an exaggeration. So if you believed “Bahrain is a country in Africa” and somebody told you, “no, actually it’s in the Persian Gulf,” it would take some mental energy to give up the old belief and form the new one, but not that much; most people would be successful.

But as noted in a previous entry, most beliefs are clusters of habitual dispositions, not singleton spectatorial propositions toward which we go yea or nay. So (easily!) developing these dispositional complexes in the context of, let’s say, a misogynistic society like the United States means that “unbelieving” the dispositional cluster glossed by the sentential proposition “women can’t make as good leaders as men” is not a trivial matter. For some, to completely unbelieve this may be close to impossible. This is something that our best social-scientific theories (whether “critical” or not) have yet to handle properly, because their conception of “ideology” is still trapped in the picture theory (a matter for future posts).

Beliefs, as Mike and I noted in a companion paper (Strand and Lizardo 2017), have an inertia (which Bourdieu referred to as “hysteresis”) that makes them hang around even after a third-person observer can diagnose them as “out of phase” or “outmoded.” This is the double-edged nature of their status as habits: easy to form (when no competing beliefs are around) and easy to use (once fixed via repetition), but hard to drop.

References

Gilbert, Daniel T. 1991. “How Mental Systems Believe.” American Psychologist 46(2):107–19.

Lizardo, Omar, Robert Mowry, Brandon Sepulvado, Dustin S. Stoltz, Marshall A. Taylor, Justin Van Ness, and Michael Wood. 2016. “What Are Dual Process Models? Implications for Cultural Analysis in Sociology.” Sociological Theory 34(4):287–310.

Mahmood, Saba. 2011. Politics of Piety: The Islamic Revival and the Feminist Subject. Princeton University Press.

Mandelbaum, Eric. 2014. “Thinking Is Believing.” Inquiry 57(1):55–96.

Rott, Hans. 2017. “Negative Doxastic Voluntarism and the Concept of Belief.” Synthese 194(8):2695–2720.

Smilde, David. 2007. Reason to Believe: Cultural Agency in Latin American Evangelicalism. Berkeley: University of California Press.

Smith, Linda B. and Esther Thelen. 2003. “Development as a Dynamic System.” Trends in Cognitive Sciences 7(8):343–48.

Strand, Michael and Omar Lizardo. 2017. “The Hysteresis Effect: Theorizing Mismatch in Action.” Journal for the Theory of Social Behaviour 47(2):164–94.

Is The Brain a Modular Computer?

As discussed in the inaugural post, cognitive science encompasses numerous sub-disciplines, one of which is neuroscience. Broadly defined, neuroscience is the study of the nervous system: of how behavioral (e.g., walking), biological (e.g., digesting), or cognitive (e.g., believing) processes are realized in the physical nervous systems of biological organisms.

Cognitive neuroscience, then, asks: how does the brain produce the mind?

As a starting point, this subfield takes two positions vis-à-vis two kinds of dualism. First is the rejection of Descartes’ “substance dualism,” which posits that the mind is a nonphysical “ideal” substance. Second is the assumption, a form of “property dualism,” that so-called cognitive processes are somehow distinct from simple behavioral or biological processes. That is, processes we tend to label “cognitive”—imagining, calculating, desiring, intending, wishing, believing, etc.—are distinct yet “localizable” in the physical structures of the brain and nervous system. As Philosophy Bro summarizes:

…substance dualism says “Oh no you’ve got the wrong thing entirely, stupid” and property dualism says “yeah, no, go on, keep looking at the brain, we’ll get it eventually.”

Broadening the scope to the entire cognitive sciences, including the philosophy of mind, one would be hard-pressed to find a contemporary scholar who takes substance dualism seriously. Thus, whatever the relationship between the mental and the neural, it cannot be that the mental is a nonphysical ideal substance which cannot be studied in empirical ways.

The current debate, rather, is between various kinds of property dualist positions and those who argue against even property dualism. Without diving into these philosophical debates, however, it is helpful to get a handle on what different trends in cognitive neuroscience contend the relationship between brains and minds is. Here I will briefly review what is considered the classical and commonsensical view, which is a quintessentially property dualist approach.

An 1883 phrenology diagram, from the People’s Cyclopedia of Universal Knowledge (Wikimedia Commons).

The Modular Computer Theory of Mind

The classic approach to localization suggests that the brain is composed of discrete, special-purpose “modules.” In many ways, this aligns with our folk psychology: the amygdala is the “fear center,” the visual cortex is the “vision center,” and so on. This approach is most often traced back to Franz Gall and his pseudo-scientific (and racist) “organology” and “cranioscopy,” later referred to as “phrenology.” He argued that there were 27 psychological “faculties,” each of which had a respective “sub-organ” in the brain.

While most of the work associated with Gall was discarded, the idea that cognitive processes could be located in discrete modules continued, most forcefully in the work of the philosopher Jerry Fodor, specifically The Modularity of Mind (1983). Fodor’s approach builds on Noam Chomsky’s generative grammar. Struck by the observation that young children quickly learn to speak grammatically “correct” sentences, Chomsky argued that language acquisition cannot proceed through imitation and trial and error. Instead, he proposed that human minds have innate (and universal) structures which denote the basic set of rules for organizing language. The environment simply activates different combinations, resulting in the variation across groups. With a finite set of rules, humans can learn to create an infinite number of combinations, but no amount of experience or learning will alter the rules. (I will save the evaluation of Chomsky’s approach to language acquisition for later, but it doesn’t fare well.)

Fodor took this one step further and argued that the fundamental contents of “thought” are language-like in this combinatorial sense, or what has come to be known as “mentalese.” In The Language of Thought (1975), Fodor proposed that in order to learn anything in the traditional sense, humans must already have some kind of language-like mental contents to work with. As Stephen Turner (2018:45) summarizes in his excellent new Cognitive Science and the Social: A Primer:

If one begins with this problem, one wants a model of the brain as “language ready.” But why stop there? Why think that only grammatical rules are innate? One can expand this notion to the idea of the “culture-ready” brain, one that is poised and equipped to acquire a culture. The picture here is this: cultures and languages consist of rules, which follow a template but which vary in content, to a limited extent; the values and parameters need to be plugged into the template, at which point the culture or language can be rapidly acquired, mutual understanding is possible, and social life can proceed.

Such a thesis rests on the so-called “Computational Theory of Mind,” which, by analogy to computers, presumes that mental contents are symbols (à la “binary codes”) combined through the application of basic principles to produce more complex thought. Perception is therefore “represented” in the mind by being associated with “symbols” in the mind, and it is through the organization of perception into symbolic formations that experience becomes meaningful. Different kinds of perceptions can be organized by different modules, but again, the basic symbols and principles unique to each module remain unmodified by use or experience.

Despite the fact that such a symbol-computation approach to thinking is “anti-learning,” this view is often implicit in (non-cognitive) anthropology and (cultural) sociology. For example, Robert Wuthnow ([1987] 1989), Clifford Geertz (1966), and Jeffrey Alexander with Philip Smith (1993) were each inspired by the philosopher Susanne Langer’s Philosophy in a New Key, in which she argues for the central role of “symbols” in human life. She claims that “the use of signs is the very first manifestation of mind” ([1942] 2009:29), thus “material furnished by the senses is constantly wrought into symbols, which are our elementary ideas” ([1942] 2009:45), and approvingly cites Arthur Ritchie’s The Natural History of Mind: “As far as thought is concerned, and at all levels of thought, it is a symbolic process…The essential act of thought is symbolization” (1936:278–9).

Conceptualizing “thinking” as involving the (computational) translation of perceptual experience into a private, world-independent symbolic language, however, makes it difficult to account for “meaning” at all. This is commonly called the “grounding problem” (which Omar discussed in his 2016 paper, “Cultural Symbols and Cultural Power”), and it grapples with the following question (Harnad 1990:335): “How can the meanings of the meaningless symbol tokens, manipulated solely on the basis of their (arbitrary) shapes [or principles of composition], be grounded in anything but other meaningless symbols?”
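
To make the picture concrete, here is a toy sketch in Python of the kind of rule-governed symbol manipulation the Computational Theory of Mind envisions. The “mentalese” tokens and the composition rule are invented for illustration.

```python
# Atomic symbols: their "shapes" (names) carry no intrinsic meaning.
SYMBOLS = {"AGENT_1", "ACTION_7", "OBJECT_3"}

# A composition rule: any (agent, action, object) triple of tokens
# forms a well-formed complex "thought".
def compose(agent, action, obj):
    assert {agent, action, obj} <= SYMBOLS
    return ("THOUGHT", agent, action, obj)

print(compose("AGENT_1", "ACTION_7", "OBJECT_3"))
# ('THOUGHT', 'AGENT_1', 'ACTION_7', 'OBJECT_3')
```

The grounding problem is visible in the sketch itself: nothing in the program connects “OBJECT_3” to anything in the world; each token is defined only by its combinatorial relations to other tokens. Searle’s thought experiment, discussed next, makes the same point vividly.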

The problem is compounded when the mind is conceived as composed of multiple computational “modules,” each of which is independent of the others. The most famous thought experiment demonstrating the problem with this approach is Searle’s (1980) “Chinese Room Argument.” To summarize, Searle posits a variation on the Turing Test in which both sides of the electronically mediated conversation are human (as opposed to one human and one machine), but the two sides speak different languages:

Suppose that I’m locked in a room and given a large batch of Chinese writing . . . . To me, Chinese writing is just so many meaningless squiggles. Now suppose further that after this first batch of Chinese writing I am given a second batch of Chinese script together with a set of rules . . . The rules are in English, and I understand these rules . . . and these rules instruct me how to give back certain Chinese symbols with certain sorts of shapes in response . . . . Suppose also that after a while I get so good at following the instructions for manipulating the Chinese symbols . . . my answers to the questions are absolutely indistinguishable from those of native Chinese speakers. (Searle 1980:350–1)

Despite his acquired proficiency at symbol manipulation, locked in the room, he does not understand Chinese, nor does the content of his responses have any meaning to him. Therefore, Searle concludes, thinking cannot be fundamentally computational in this sense.

There are viable alternatives to this modular computer theory of mind, many of which run counter to folk understandings but square better with the evidence. More importantly, these alternatives (which will be covered extensively in this blog) would likely be considered more “sociological,” as they invite (and often require) a role for both learning and context in explaining cognitive processes.

References

Alexander, Jeffrey C. and Philip Smith. 1993. “The Discourse of American Civil Society: A New Proposal for Cultural Studies.” Theory and Society 22(2):151–207.

Fodor, Jerry A. 1975. The Language of Thought. Harvard University Press.

Fodor, Jerry A. 1983. The Modularity of Mind. MIT Press.

Geertz, Clifford. 1966. “Religion as a Cultural System.” Pp. 1–46 in Anthropological Approaches to the Study of Religion, edited by M. Banton.

Harnad, Stevan. 1990. “The Symbol Grounding Problem.” Physica D: Nonlinear Phenomena 42(1–3):335–46.

Langer, Susanne K. [1942] 2009. Philosophy in a New Key: A Study in the Symbolism of Reason, Rite, and Art. Harvard University Press.

Lizardo, Omar. 2016. “Cultural Symbols and Cultural Power.” Qualitative Sociology 39(2):199–204.

Ritchie, Arthur D. 1936. The Natural History of Mind. Longmans, Green and co.

Searle, John R. 1980. “Minds, Brains, and Programs.” Behavioral and Brain Sciences 3(3):417–24.

Turner, Stephen P. 2018. Cognitive Science and the Social: A Primer. Routledge.

Wuthnow, Robert. [1987] 1989. Meaning and Moral Order: Explorations in Cultural Analysis. Berkeley: University of California Press.

Making Ontology Practical

Questions of ontology have gathered an audience in sociology over the past decade, particularly as galvanized (pro or con) by the critical realist movement (Gorski 2013). Such an influence is to be welcomed: attention to ontology constitutes an improvement in the way that traditional issues are discussed and debated in the field. In this post, I will critique a general problem in these discussions and then sketch out a different way of approaching ontology that draws it together with action.

The problem with many discussions of ontology is that they have the tendency to engage in what Charles Taylor (1995) and others call ontologizing. The fallacy here is not fundamentally different from identifying a rational procedure of thought and then reading this into the very constitution of the mind (a la Descartes or rational choice). Ontologizing means that questions of ontology, definitions of what there is, are resolved first. Ontological commitments are made prior to research activity, which then constrain both choice of method and the range of legitimate knowledge claims (Lizardo 2010). The ontologizing tendency thus “[runs] the question of ‘what there is’ together with the question of ‘what properly explains’” (Tsilipakos 2012).

It could be argued that not resolving ontological questions in such prioristic fashion means that we will ultimately be unable to distinguish between what “something is” and what it is “for us.” The inclination will then be toward deflationary claims or toward various species of antirealism, skepticism, and relativism. However, this fear only arises because we still have not cleared away a mediational (inside/outside, transitive/intransitive) picture of our grasp of the world.

In this post, I will argue that sociologists can make ontological claims without making ontological commitments a priori. Ontologizing can be avoided, but to do so requires that we take account of certain ontological arguments that have been neglected in these conversations, ones that reframe questions of ontology around motor functioning, action, and “being-in” a world.

The first approach is drawn from neuroscience. For Vittorio Gallese and Thomas Metzinger (2003), it is the motor system that constructs goals, actions, and intending selves. It self-organizes these distinguishable ontological parts in alignment with the requirements of motor function. This serves as the building block for a representation of the intentionality-relation, which organizes higher-level forms of social cognition and the first-person perspective. The surprise is that all of this is rooted in “an ‘agent-free’ type of subpersonal self-organization.”

What this means, in other words, is that traits or predicates (goal, self, intention, action) that are often treated as irreducibly “personal … have to be avoided on all subpersonal levels of description,” because they are only one way that the “subpersonal functional module” that is the brain can interpret a world in terms of a functional ontology. As Gallese and Metzinger continue, this involves “explicit and implicit assumptions about the structure of reality, which at the same time shape the causal profile of [our] motor output and the representational deep structure of the conscious mind arising from it (its ‘phenomenal output’)” (2003:366).

Fundamentally, it is our motor system that gives us this phenomenal content; the brain’s interpretation of the world is not an epistemic task in which, presumptively, a “little man in the head interprets quasi-linguistic representations” (Metzinger and Gallese 2003:557). There is no conscious agent or transcendental subject that is more basic, in the sense of preceding motor function. Rather, it is the “dynamical, complex and self-organizing physical system” involved in moving our body that feeds directly into the higher-level phenomenal experience that we and others are selves with goals, who act intentionally in a world, and that this all “actually belongs to the basic constituents of the world” (Gallese and Metzinger 2003:366).

Gallese and Metzinger call this the brain’s “action ontology.” What I want to argue is that this perspective on ontology as the “brain interpreting a world” through recourse to motor function, not representation, aligns with an ontology that emphasizes “being-in” a world as giving the best insight into its social constitution. The key linkage is the association between ontology and action.

The second, parallel approach is philosophical. For Martin Heidegger, the problem starts with his mentor and rival Edmund Husserl, whose famous insistence on “phenomenology” indicated his concern with how things appear to consciousness rather than with things-in-themselves that lie hidden behind appearance. This introduces a profound dualism into the phenomenal realm, because Husserl rejects the basic empiricist claim that what something is is simply a bundle of qualities. Rather, for Husserl, objects are always intentional objects that are never perfectly identical with the qualities through which they are represented.

Heidegger’s concern with ontology emerges from his break with Husserl on this very point. For Heidegger, the way in which we deal with things in the world is not by holding them in our consciousness but by taking them for granted as items of everyday use. This means that these entities are not Husserl-style phenomena that are lucid to our view, but instead hidden and withdrawn realities that perform their labors for us unnoticed. This is why, whenever we turn our attention to these hidden entities, they are always surrounded by a vast landscape of other things still taken for granted.

Heidegger calls this fundamental ontology, and it effectively means that any ontology must start from the reference point of “being-in” a world (Heidegger 1996[1927]:49–59). This gives a lot of latitude to ontology because, as Heidegger concludes further, the history of philosophy is constantly guilty of reducing reality to some one form of presence, what some call an “ontotheology” in which one privileged entity serves as the explanation for all others: forms, God, monads, res cogitans, power, subjectivity, or deep structures, for example. To single out one entity as the explanation of all others amounts to treating that entity as an incarnation of all being, which it cannot be, because entities are only encountered in our practice, as something “that we [have] to take account of in our everyday coping” (Dreyfus and Taylor 2015:144). For Heidegger, we must not predefine a relevant ontology and thereby omit any appreciation for how reality is hidden and withdrawn and never fully manifest to our view, though we rely upon it in our action.

Bourdieu (1996 especially) is the one who best grasps the transition from Husserl to Heidegger as a move toward ontology, because he makes no specific ontological commitments a priori while still making ontological claims nonetheless. He does not, however, subscribe to a metaphysics of presence, with the notable exception of embodied agency (recapitulating the same move from Mauss to Merleau-Ponty). A field, then, is a device of “methodological structuralism” (Lizardo 2010) that allows an analyst to recover ontology through its association with action, in a way that parallels subpersonal self-organization in action ontology and “being-in” in fundamental ontology. By focusing on agents’ lines of action, the construction of a field is the analyst’s practical activity that brings to light the landscape of real things whose otherwise hidden labors enable the action in question. Field theorists in sociology draw attention to bundles of relations as the hidden and withdrawn reality relied upon for action (Martin 2011).

The difference between this claim and Metzinger and Gallese’s action ontology is that the “dynamic, complex, self-organizing” system that morphogenetically appears in a field does not have to assume the phenomenal properties of selves with goals who act intentionally in a world, even if that intentionality-relation is folk theory. Rather, action ontology (and “being-in”) means that being is only in a world, meaning that it is integrated and interindividual, and its emergent forms vary as much as the world varies. Field theory is a powerful tool for capturing that variance by making social ontology matter without, however, committing to an ontologizing project.

In a follow-up post I will discuss field, apparatus and totality as different methodological structuralisms that capture the variability of worlds.

References

Bourdieu, Pierre. (1996). The Rules of Art. Stanford: Stanford UP.

Dreyfus, Hubert and Charles Taylor. (2015). Retrieving Realism. Cambridge: Harvard UP.

Gallese, Vittorio and Thomas Metzinger. (2003). “Motor ontology: the representational reality of goals, actions and selves.” Philosophical Psychology 16: 355-388.

Gorski, Philip. (2013). “What is Critical Realism? Why Should You Care?” Contemporary Sociology 42: 658-670.

Heidegger, Martin. (1996[1927]). Being and Time. Translated by Joan Stambaugh. Albany, NY: SUNY Press.

Lizardo, Omar. (2010). “Beyond the Antinomies of Structure: Levi-Strauss, Bourdieu, Giddens and Sewell.” Theory and Society 39: 651-688.

Martin, John Levi (2011). The Explanation of Social Action. New York: Oxford UP.

Metzinger, Thomas and Vittorio Gallese (2003). “The emergence of a shared action ontology: Building blocks for a theory.” Consciousness and Cognition 12: 549-571.

Taylor, Charles. (1995). Philosophical Arguments. Cambridge: Harvard UP.

Tsilipakos, Leonidas. (2012). “The Poverty of Ontological Reasoning.” Journal for the Theory of Social Behaviour 42: 201-219.

Are the Folk Natural Ryleans?

Folk psychology and its belief-desire accounting system have been formative in cognitive science because of the claim, mainly put forth by philosophers, that this system forms the fundamental framework via which everybody (philosopher and non-philosopher alike) understands human action as meaningful. Both proponents of some version of the argument for the ineliminable character of the folk psychological vocabulary (Davidson, 1963; Fodor, 1987) and critics who cannot wait for its elimination by a mature neuroscience as an outmoded theory (Churchland, 1981) accept the basic premise; namely, that when it comes to action understanding, folk psychology is preferred by the folk. The job of philosophy is to systematize and lay bare the “theoretical” structure of the folk system (to save it or disparage it).

In a fascinating new article forthcoming in Philosophical Psychology, Devin Sanchez Curry challenges this crucial bit of philosophical common wisdom, which he refers to as “Davidson’s Dogma” (Sanchez Curry acknowledges that this might not be exegetically true of Davidson’s writings, although it is true in terms of third-party reception and influence). In particular, Sanchez Curry homes in on the claim that the folk use a “theory” of causation to account for action using beliefs: essentially, the idea that beliefs are inner causes (the cogs in the internal machinery) that produce action when they interact with other beliefs and desires. This is the subject of a previous post.

Rather than staying at the level of purely exegetical or conceptual analysis, Sanchez Curry turns to the empirical literature in psychology on lay belief attribution to shed light on this issue. There he notes something surprising: there is little empirical evidence that the folk resort to a belief-desire vocabulary or to a theory of beliefs as inner causes (cogs and wheels in the internal machinery) of action. Going through the literature on the development and functioning of “mindreading” abilities, Sanchez Curry shows that the primary conclusion of this line of work is that the explicit attribution of representational (e.g., “pictures in the head”) versions of belief is the exception, not the rule.

Instead, the literature has converged (like many other subfields in social and cognitive psychology) on a dual systems/process view, in which the bulk of everyday mindreading is done by high-capacity, high-efficiency automatic systems that do not traffic in the explicit language of representations. Instead, these systems are attuned to the routine behavioral dispositions of others and do the job of inferring and filling in other people’s behavior patterns by drawing on well-honed schemata trained by the pervasive experience of watching conspecifics make their way through the world. Explicit representational belief attribution practices emerge when the routine System I processes encounter trouble and require either observers or other people to “justify” what they have done using a more explicit accounting.

As Sanchez Curry notes, the evidence here is consistent with the idea (which I alluded to in a previous post) that persons may be “natural Ryleans,” but the Rylean (dispositional) action-accounting system is so routinized as to lack the flashy linguistic bells and whistles of the folk psychological one. This creates the illusion that there is only one accounting system (the belief-desire one), when in fact there are two. It is just that the one that does most of the work is nondeclarative (Lizardo, 2017), while the declarative one gets most of the attention, even though it is actually the “emergency” action-accounting system, not the everyday workhorse.

As Sanchez Curry also notes, evidence provided by “new wave” (post-Heider) attribution theorists shows that the explicit (and actual) folk psychological accounting system, even when activated, seldom posits beliefs as “inner causes” of behavior. Instead, when people enter the folk-psychological mode to explain puzzling behavior that cannot be handled by System I practical mindreading, they look for reasons, not causes. These reasons are holistic, situational, and even “institutional” (in the sociological sense). They are “justifications” that will make the action meaningful while saving the rationality of the actor, given the context. They seldom refer to internal machineries or producing causes. We look for justifications to establish blame, to “make sense” (e.g., “explain”), or to “save face,” not to establish the inner wellsprings of action. So even in this case the folk are natural Ryleans, focusing on the observables of the situation and not the inner wellsprings. This means that the “theory” of folk psychology is a purely iatrogenic construction of a philosophical discourse on action that plays little role in the actual attributional practices of the folk: folk psychology in the Davidsonian/Fodorian sense turns out to be the specialized construction of an expert community.

One advantage of this account is that it solves what I previously referred to as the “frame problem” faced by all accounts that treat “pictures in the head” as causal drivers of action. The problem is that the observer has to pick one of a myriad of possible pictures as the “primary” cause of the action, but there is no way to make this selection in a non-arbitrary way if we are stuck with the “inner cause” conception. In the Rylean conception, the “reason” we attribute will depend on the pragmatics and goals of the reason request. Are we seeking to establish blame? Make sense of a puzzle? Save the agent’s face? Make it seem like they are devious?

These arguments have several important implications. The most important one is that, for the most part, nobody is imputing little world pictures to other people to explain their actions, empathize with them, or even predict or make inferences as to what they will do next. Dedicated, highly trained automatic systems do the job when people are behaving in “predictable” ways; no representations are required there (Hutto, 2004). When this action-tracking system fails, we resort to more explicit action accountings, or, more accurately, we resort to placing strange or puzzling action in a less puzzling context. Even here, this is less about getting at occult or inner wellsprings than about trying to construct a “reason” why somebody might have acted this way that makes the action less puzzling.

References

Churchland, P. M. (1981). Eliminative Materialism and the Propositional Attitudes. The Journal of Philosophy, 78(2), 67–90.

Davidson, D. (1963). Actions, Reasons, and Causes. The Journal of Philosophy, 60(23), 685–700.

Fodor, J. A. (1987). Psychosemantics: The Problem of Meaning in the Philosophy of Mind. MIT Press.

Hutto, D. D. (2004). The Limits of Spectatorial Folk Psychology. Mind & Language, 19(5), 548–573.

Lizardo, O. (2017). Improving Cultural Analysis: Considering Personal Culture in its Declarative and Nondeclarative Modes. American Sociological Review, 82(1), 88–115.

The Ascription of Dispositions

“It’s what you do” is the title of a wildly successful advertising campaign by the American insurance company GEICO. In each spot, we see either a “type” (people in a horror movie, a camel, a fisherman, a cat, a mom, a golf commentator) or people familiar enough to the intended middle-aged audience of insurance buyers to be considered types (mainly 80s and 90s musical acts like Europe, Boyz II Men, or Salt-N-Pepa) doing things they “typically” do. These things are out of place, annoying, rude, or irrational, and thus funny within the context of the “frame” (an office, a restaurant, etc.) in which they are presented.

For instance, in a viral spot, Peter Pan shows up at the 50th-anniversary reunion to remind everybody else of how young he is (and how old they are). The voiceover reads: “If you are Peter Pan you stay young forever. It’s what you do.” In another one, a poor guy slowly sinks to his death in quicksand, while imploring a nearby cat to get help. The cat of course just licks her paws without looking at him: “If you are a cat you ignore people. It’s what you do.”

The commercials are of course funny due to the specificity of each setup. I want to suggest, however, that they may carry a more general lesson. Perhaps they strike us as noticeable (and thus humorous) because they use an action accounting system that is inveterately familiar but that we usually keep in abeyance. In fact, it is so familiar that it takes the odd situations in the GEICO commercials to make it stand out. This action accounting system, rather than relying on “belief-desire” ascriptions, points to typicalities in behavior patterns as their own justification. Thus the template “If you are X, you do Y; it’s what you do” may hold the key for prying ourselves loose from belief-desire talk.

In a previous post, I argued that the belief-desire accounting system commits us to a model in which action is driven by “little pictures in the head.” An entire tradition of explaining action by recourse to the “ideas” that “drive” it is based on such a strategy (Parsons, 1938). This is not as innocent a move as it may seem. Pictures in the head are entities assumed to have specific properties (e.g. representational, content-ful, and causally power-ful) that ultimately need to be cashed out in any scientific account of action. This may not be possible (Hutto & Myin, 2013).

In a follow-up post, I noted that, even if we take an ontology-neutral stance (Dennett, 1989), the ascription of belief from a third-person perspective is not an unproblematic practice either. Sometimes, some pieces of evidence (e.g. what people claim to believe) clash with other pieces of evidence (what people do), making belief ascription a problematic affair. The point there was that sometimes, even in our routine ascription behavior, we do not treat beliefs purely as pictures. Actions matter too, and sometimes we may conclude that what people really believe has nothing to do with the pictures (e.g. propositions) that they claim to have in their head.

So maybe our ascription practices and our action accounting systems can go beyond the usual belief-desire combo of folk psychology. This is important because one of the reasons the claim that belief is a kind of habit might be problematic to some is that it does not seem to fit any intuitive picture of the way we keep track of and explain other people’s actions (or our own). Here I will build some intuition for the claim that there are other ways of “explaining” action that do not require the ascription of picture-like constructs that drive action, and that are compatible with the idea that beliefs are a kind of habit. Moreover, these are ascription practices that we already follow in our everyday accountings; it’s just that they are too boring to be noticeable.

The most obvious way in which we sometimes explain action without using the language of belief is to talk about somebody’s tendencies, propensities, inclinations, etc. Just like in the GEICO commercials, instead of ascribing beliefs and desires we simply point to the action as being “typical” of that doer. In the philosophy of action, at least since Ryle (2002), this is usually referred to as using a “dispositional” language. Just like ideas, dispositions are sufficient “causes” of the action they help account for. So, going back to the example of Sam the fridge opener: Instead of saying that Sam opened the fridge because they believed there was a sandwich inside, we can say, “Sam tends to open the fridge when they are hungry. It’s what they do.” This is a way of accounting for the action that does not resort to the ascription of world pictures. Instead, it points to a regularity or tendency in Sam’s action that is noted to occur under certain (usually typical) conditions.

These kinds of dispositional ascriptions are fairly common. In fact, they are so common they are kind of boring. Maybe they stand out less than the usual belief-desire combo of folk psychology because they are seldom used for action justification, rationality ascription, or storytelling. A serial killer who attempted to mount a defense based on the claim that “killing is just what I do” would be the subject of a short trial. In this sense, dispositional ascriptions are gray and drab (in spite of their strict accuracy), while the trafficking in (and sometimes the clash between) beliefs and desires just tells a more interesting story (in spite of its inherently speculative nature). But the pragmatics of belief-desire language, or its mnemonic advantage, should not dictate its use in social-scientific explanatory projects. Dispositions have an advantage here because they commit us to a less inflationary ontology, one compatible with the naturalistic commitments of cognitive neuroscience.

As Schwitzgebel (2010) has argued, the dispositional approach can be extended to account for our ascription of the usual “attitudes,” whether propositional (like beliefs and desires) or not. This also points to a solution to the ascription problems that arise when sayings (or phenomenological experience) do not match up with action. In contrast to pro-judgment views (which favor subjective certainties and verbal reports) or anti-judgment views (which favor action), the idea is to think of the global entity (e.g. the “belief” or the “desire”) as a cluster of dispositions. So rather than any one member (the saying or the doing) being decisive in our ascription, they all count (although we may weigh some more than others). This means that sometimes the matter of whether somebody “believes” P will be undecidable (the cases of implicit/explicit dissociation) because different dispositions point in different directions.

The bigger point, however, is that all dispositional ascriptions have the structure of “habituals” (Fara, 2005). So when we say Sam “believes” P, what we are really saying is that Sam is predisposed to agree that P under a certain broad range of circumstances. But we are also saying that Sam is likely to act as if P is true, to have certain subjective experiences consistent with the truth of P, and so on. In this respect, the “belief” that P is just a cluster of cognitive, phenomenological, verbal, and behavioral dispositions. This cashes in on the insight that “habit” (or disposition) is the superordinate category in mental life and that the other terms of the mental vocabulary fall out as special cases. This also reinforces the point Mike and I made in the original paper (see in particular pp. 56–57): The issue is not the elimination of the language of belief and desire (or the other folk mental concepts), but their proper re-specification within a habit-theoretic framework.

Another nice feature of the dispositional ascription approach is that when we ascribe a belief, we no longer have to commit ourselves to the existence or causal efficacy of problematic entities (e.g. world pictures), but can point instead to the usual set of things clear in experience: Actions, linguistic declarations, comportments, moods, etc. Usually these hang together and point in the same direction; sometimes they do not. When they do not, however, the result no longer has to be a contest between heterogeneous entities (e.g. sayings versus doings), but between different species of the same dispositional genus.

Note, however, that picking one disposition in the cluster as the decisive element in an act of ascription is a conclusion that cannot be reached by virtue of an a priori methodological policy (such as one privileging doings over sayings or vice versa). Instead, we need to commit ourselves to an ascription standard combining inference to the best explanation with a coherentist approach: Attitude ascriptions should maximize harmony across the entire dispositional profile. So it would be a mistake, for instance, to select a single disposition (or phenomenal experience, or verbal report) as the criterion for attitude ascription when there is an entire panoply of other dispositions pointing in a different direction.
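For intuition, here is a minimal sketch, in Python, of what such a coherentist ascription standard might look like. The disposition types, weights, and harmony threshold are all illustrative assumptions of mine, not a formalization proposed by Schwitzgebel or in our paper:

```python
from dataclasses import dataclass

@dataclass
class Disposition:
    kind: str          # e.g. "verbal", "behavioral", "phenomenal" (assumed labels)
    supports_p: bool   # does this disposition point toward P?
    weight: float      # how heavily the ascriber counts it (assumed)

def ascribe(profile: list[Disposition], threshold: float = 0.75) -> str:
    """Ascribe an attitude by maximizing harmony across the whole
    dispositional profile, rather than letting any single saying
    or doing decide."""
    total = sum(d.weight for d in profile)
    pro = sum(d.weight for d in profile if d.supports_p)
    share = pro / total
    if share >= threshold:
        return "believes P"
    if share <= 1 - threshold:
        return "believes not-P"
    return "undecidable (dissociated profile)"

# A dissociated profile: sayings point toward P, doings away from it.
profile = [
    Disposition("verbal", supports_p=True, weight=1.0),       # professes P
    Disposition("behavioral", supports_p=False, weight=1.0),  # acts as if not-P
    Disposition("phenomenal", supports_p=False, weight=0.5),  # feels as if not-P
]
print(ascribe(profile))  # -> undecidable (dissociated profile)
```

The point of the sketch is simply structural: No single disposition gets veto power, the ascription is a function of the whole profile, and a sufficiently mixed profile yields no determinate answer.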

So the issue is not whether there is a contest between “sayings” and “doings” (Jerolmack & Khan, 2014). Rather, the best tack is to take a tally of the entire dispositional panoply, which may involve lots of tendencies to say, do, and experience. Here some sayings might clash with other sayings, and some doings with other doings. Whether people strive for consistency across their dispositional profile may be as much a sociocultural matter (as argued by Max Weber) as an a priori analytic issue. In all cases, however, what we are confronting are dispositions clashing (or harmonizing) with other dispositions, so the analytical task becomes tractable from within a single action vocabulary.

References

Dennett, D. C. (1989). The Intentional Stance. MIT Press.

Fara, M. (2005). Dispositions and Habituals. Noûs, 39(1), 43–82.

Hutto, D. D., & Myin, E. (2013). Radicalizing Enactivism: Basic Minds Without Content. MIT Press.

Jerolmack, C., & Khan, S. (2014). Talk Is Cheap: Ethnography and the Attitudinal Fallacy. Sociological Methods & Research. https://doi.org/10.1177/0049124114523396

Parsons, T. (1938). The Role of Ideas in Social Action. American Sociological Review, 3(5), 652–664.

Ryle, G. (2002 [1949]). The Concept of Mind. Chicago: The University of Chicago Press. With an introduction by Daniel C. Dennett.

Schwitzgebel, E. (2010). Acting contrary to our professed beliefs or the gulf between occurrent judgment and dispositional belief. Pacific Philosophical Quarterly, 91(4), 531–553.

Beyond the Framework Model

Most work in cultural analysis in sociology is committed to a “framework” model of culture and language. According to the framework model, persons need culture because without culture (which usually takes the form of global templates that the person is not aware of possessing) they would not be able to “make sense” of their “raw” perceptual experience. Under this model, culture serves to “organize” the world into predictable categories. Cognition thus reduces to the “typing” of concrete particulars (experientially available via perception) into culturally constituted generalities.

The basic model of cognition here is thus sequential: First the world is made available in raw (particular) form, then it is “filtered” through (culturally acquired) lenses, and then it emerges as a “sensible,” categorically ordered world. This model accounts for the historical and spatial diversity of culture even while acknowledging that at the level of “raw” experience we all inhabit the same world. The only problem is, as Kant understood and as post-Kantians always despaired, that this “raw” universal world does not make sense to anybody! The only world that makes sense is the culturally constituted world. In this sense, the price we pay for a world that “makes sense” is the donning of conceptual glasses through which we must filter the world; the cost of making sense of the world is not being aware of the cultural means through which that sense is made.

The framework model is pervasive in cultural analysis. However, a consideration of work in the modern cognitive science of perception leads us to question its core tenets.

One major weakness is that the framework model has to rely on a theoretical construction with a shaky scientific status: The counterfactual existence of “raw” (pre-cultural, pre-cognitive) experience. It is hard to find any conceivable time-scale at which we could say that “raw” experience exists for anybody.

In contrast to the “sequential” model, an alternative is to think of experience qua experience as inherently specified and thus meaningful. That is, when persons experience the world, that world is always already a world for them, and therefore directly meaningful. It is true that, at slower time scales, after a person experiences a world that is for them, they may also activate conventional representations in which other “meanings” (namely, semantic information on objects, events, settings, and persons activated from so-called “long-term” memory) may show up in time to modify their initial meaningful uptake of the world. But none of these meanings are necessary to “constitute” the world of objects, persons, and events as meaningful, if by meaningful we (minimally) mean capable of being understood and integrated into our everyday practical projects (Gallese & Metzinger, 2003).

The framework model erred because it built its account on a high-level cognitive task (namely classification, or, in Berger and Luckmann’s mid-twentieth-century phenomenological language, “typification”) that is not the right kind of task to explain how the world of perception becomes meaningful to us. Classification is just too slow a task; perception happens much faster than that (Noë, 2004). Because of this, classification is far too flimsy a foundation on which to build the required model of how persons make a meaningful world. In this respect, cultural analysis in sociology has been hampered by a piece of conceptual metaphor working behind the back of the theorist: The (unconscious) inference that comes from mapping the experiential affordances of the usual things that serve as frameworks or lenses (which include durability and solidity) onto the abstract target domain of perception and experience.

Work in the psychology of classification shows that, as hard as we may try to search for them, the “hard” lenses and classificatory “structures” dreamed up by contemporary cultural analysis do not exist (Barsalou, 1987). Instead, most classification turns out to be (mystifyingly, from the perspective of framework models) fluid and context-sensitive, with classifications shifting even if we change the most minute and seemingly irrelevant thing about the classificatory context (Barsalou, 2005). Thus, at the level of experience, culture surely cannot take the form of (conscious or unconscious) “frameworks,” because these frameworks are just nowhere to be found (Turner, 1994).

How can we think of perception if we are not to use the framework model? Here is one alternative. Perception, at its most basic level, is simply identification, and identification is specification. And specification is the production of a relation. That is, a world opens up for an organism when the organism is able to specify, and thus make “contact” with, that world in relation to itself. This kind of specification is an inherently organism-centric activity. A world is always a world for somebody. In this respect, this analysis is less “generic” than traditional cultural analysis, which tends to speak of meaningful worlds in relation to abstract, representative (shall we say “collective”?) agents. But meaning is always personal and organism-centered.

This insight implies not the impossibility of impersonal or even collective meaning, but its complexity and difficulty. Modern cultural analysis, by essentially taking the products of collective meaning-making as its starting point (and taking for granted the mechanisms that produce their status as shared), actually sidesteps some of the hardest questions in favor of relatively easy ones (the interpretation of collective symbols for generic subjects). But most symbols are symbols for concrete, embodied subjects who have nothing generic about them. Surprisingly enough, the first lesson that the emerging sciences of meaning construction have for contemporary cultural analysis is that the basic way in which cultural analysts go about “analyzing” meaning is actually too abstract and not quite as concrete (or “personal”) as one would wish.

References

Barsalou, L. W. (1987). The instability of graded structure: Implications for the nature of concepts. Concepts and Conceptual Development: Ecological and Intellectual Factors in Categorization, 101–140. Retrieved from https://pdfs.semanticscholar.org/b14d/961c846075ca67ec11cf60ea7b0bc6ea17cd.pdf

Barsalou, L. W. (2005). Situated conceptualization. Handbook of Categorization in Cognitive Science, 619–650.

Gallese, V., & Metzinger, T. (2003). Motor ontology: the representational reality of goals, actions and selves. Philosophical Psychology, 16(3), 365–388.

Noë, A. (2004). Action in Perception. Cambridge, MA: MIT Press.

Turner, S. P. (1994). The Social Theory of Practices: Tradition, Tacit Knowledge, and Presuppositions. University of Chicago Press.

Ascription Practices: The Very Idea

How do we know what others believe? The answer to this question may seem clear, but as we will see, it has some interesting hidden complexities. Some of these bear directly on established policies in social-scientific method.

One obvious answer is that if we want to know what others believe, and these others are language-using creatures like ourselves, we ask them: “Do you believe P?” If the person verbally reports believing P, then we are on safe ground in ascribing the belief that P.

So far so good. But let us say a person assents to P but acts in ways that seem to run counter to the content of that belief. What to do then? Cases of this sort have become popular grist for reflection in recent work in the philosophy of belief. Spurred by a series of papers by Tamar Gendler (2008a, 2008b), a lively literature has developed on what to ascribe when belief “sayings” (usually referred to as “judgments”) come apart from “doings” (for a sampling see Albahari, 2014; Kriegel, 2012; Mandelbaum, 2013; Schwitzgebel, 2010; Zimmerman, 2007). Some cases are stock in trade and involve people who verbally commit to a belief or attitude but act in contrary ways.

Of most interest to social and behavioral scientists are cases of what are called dissociations between “explicit,” or more accurately direct, measures of a construct (such as a belief or an attitude), which usually rely on self-report, and so-called “implicit,” or more accurately indirect, measures of the same construct (Gawronski, Peters, & LeBel, 2008). While the “implicit association test” is the most familiar indirect measure, there is an entire family composed of dozens of distinct indirect measurement strategies (Nosek, Hawkins, & Frazier, 2011). The key point, however, is that indirect measures usually rely not on verbal reports but on observations of rapid-fire actions or behavioral responses that are assumed not to be under voluntary control.

Dissociations between direct and indirect measures are usually good examples of “sayings” and “doings” coming apart. If these are as common as the literature suggests, then belief (or attitude) ascription problems are also more common than we realize.
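To make the idea of a dissociation concrete, here is a toy sketch in Python. The scales, the scoring rule, and the cutoff are illustrative assumptions of mine, not any standard scoring procedure from the implicit-measures literature:

```python
# Toy check for "dissociation" between a direct (self-report) and an
# indirect (response-latency) measure of the same attitude.

def zscore(x: float, mean: float, sd: float) -> float:
    return (x - mean) / sd

def dissociated(self_report: float, latency_bias_ms: float,
                cutoff: float = 1.0) -> bool:
    """Flag a case where sayings and doings point in opposite directions.

    self_report: endorsement of P on a 1-7 scale (4 = neutral, assumed).
    latency_bias_ms: positive values mean faster responses consistent
        with not-P (an indirect sign of not-P, assumed convention).
    """
    direct = zscore(self_report, mean=4.0, sd=1.5)           # + favors P
    indirect = -zscore(latency_bias_ms, mean=0.0, sd=50.0)   # + favors P
    # Dissociation: both signals are strong but point in opposite directions.
    return direct * indirect < 0 and min(abs(direct), abs(indirect)) > cutoff

print(dissociated(self_report=7.0, latency_bias_ms=120.0))  # -> True
```

The only point is structural: A dissociation is two strong signals about the same construct pointing in opposite directions.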

So let’s say we observe someone who fits the profile of “Chris the implicit racist.” Chris is a white high school teacher who professes the belief(s) that black people in the United States are no more or less intelligent, violent, or hardworking than white people. Yet systematically in their unguarded behavior (observed via ethnographic classroom observation or via indirect measures of implicit bias collected in the lab) Chris shows a preference for white people (e.g. they discipline black students more harshly for similar offenses; they are more likely to call on white students, etc.) and implicitly associates people with dark skin with a host of negative concepts, including lack of intelligence, proneness to violence, and a weaker work ethic.

What does Chris believe? Consideration of dissociative cases like this leads in two interesting directions. The first is that actions, practices, and behaviors have a non-negligible weight in our belief ascription practices. So judgments aren’t everything. This is important because, in polite company, our everyday practices of belief ascription follow folk cartesianism: That is, belief ascription is fixed by explicit reports of what people say they believe, and people are taken to have incorrigible knowledge about those personal judgments. We cannot ascribe to a person a belief they profess not to have (alternatively: people have veto power over second-person ascription practices).

In philosophy, this is called the “pro-judgment” view of ascription. This stance holds that we can only ascribe the belief that P if a person reports that they believe P. The actions inconsistent with P (e.g. for Chris, the automatic association of black people with violence, or their penchant for sending black students to detention for minor offenses) are acknowledged to exist, but they are just not a kind of belief. They are something else. Gendler (2008a) has proposed that, given the recalcitrant existence of these types of counter-doxastic behaviors, we should add a new mental category to our lexicon: “aliefs.” So we can say Chris “believes” blacks are no more violent than whites (as given by their self-report), but “alieves” they are more violent (as given by their performance on indirect attitude measures and their avoidance of walking in certain predominantly black neighborhoods).

An alternative view, called the “anti-judgment” view, says actions speak louder than words. Or, as indicated by the title of a recent entry in a book review symposium dedicated to Arlie Hochschild’s Strangers in Their Own Land, anti-judgment people say “who cares what they think?” (Shapira, 2017). If Chris walks like P and quacks like P, then Chris believes P.

I should note that this belief ascription strategy is not that bizarre, and that it exists as a “second option” in our commonsense arsenal, even if our first option is folk cartesianism. This should alert you that ascription practices may be as much a matter of socio-cultural tradition and regulation as they are a matter of the usual canons of rationality.

This matters if these same ascription practices are followed mindlessly in social science research, for that would be a case of folk practices dictating what should be a matter of social scientific consideration. In this sense, social scientists should care very much whether they are pro-judgment folk cartesians, anti-judgment, or something else, as this will bear directly on their conclusions. But we are getting ahead of ourselves. The point is that “trumping” a person’s cartesian incorrigibility by pointing to their inconsistent actions is an available move (both in social science and everyday life), but it is also one that should not be undertaken lightly.

Note that the pro-judgment and anti-judgment views are not the only ones available. To fill out the space of options: We may ascribe both the P and ~P beliefs (the contradictory belief view). Or we may ascribe a mixture of the inconsistent beliefs (the “in-between” belief view). Or we may say Chris believes neither P nor ~P, but vacillates inconsistently between the two depending on circumstances (the shifting view) (Albahari, 2014).
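The option space just listed is small and discrete enough to enumerate. Here is a minimal sketch in Python; the view names follow the post (and Albahari, 2014), but the decision rule attached to each view is my own illustrative assumption:

```python
from enum import Enum

class View(Enum):
    PRO_JUDGMENT = "go with what the person says"
    ANTI_JUDGMENT = "go with what the person does"
    CONTRADICTORY = "ascribe both P and not-P"
    IN_BETWEEN = "ascribe a partial, in-between belief"
    SHIFTING = "ascribe vacillation across circumstances"

def ascription(view: View, says_p: bool, does_p: bool) -> str:
    """What each view ascribes when sayings and doings dissociate."""
    if says_p == does_p:                      # no dissociation: easy case
        return "believes P" if says_p else "believes not-P"
    return {
        View.PRO_JUDGMENT: "believes P" if says_p else "believes not-P",
        View.ANTI_JUDGMENT: "believes P" if does_p else "believes not-P",
        View.CONTRADICTORY: "believes P and believes not-P",
        View.IN_BETWEEN: "holds an in-between belief regarding P",
        View.SHIFTING: "vacillates between P and not-P",
    }[view]

# Chris professes P ("no racial differences") but acts contrary to it:
for v in View:
    print(v.name, "->", ascription(v, says_p=True, does_p=False))
```

Note that the views diverge only on the dissociated cases; when sayings and doings agree, they all hand back the same ascription.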

And this is the second thing that inconsistency between sayings and doings highlights. Rather than focusing on goings-on trapped within each person’s cartesian theater, we can now see that beliefs are very much a matter of actions and practices, both the believer’s and the ascriber’s. In this respect, two considerations come to the fore.

First, there are not only the “belief proclamation” practices we are used to (e.g. people verbally saying they believe thus and so), but also the myriad behaviors and actions that other people monitor and use to ascribe beliefs to others. It is these belief ascription practices I wanted to highlight in this post. As noted, they are both a matter of everyday interpersonal interaction and, for my purposes, a standard but seldom commented upon aspect of all social scientific practice. After all, social scientists (especially those who do qualitative work) constantly ask people what they believe about a host of things (e.g. Edin & Kefalas, 2011; Young, 2006), and are thus confronted with self-reports of people claiming to believe things. These same social scientists may also sometimes have the opportunity to observe these people in ecologically natural settings, which allows them to compare self-reports to doings (Jerolmack & Khan, 2014).

In this post, I will not try to adjudicate or argue for what the best ascription strategy is; I will leave that for a future post. Here I note two things. First, it is clear certain ascription practices have elective affinities with certain conceptions of what beliefs are. For instance, “pro-judgment” views have an affinity with the “pictures in the head” conception of belief. As such, pro-judgment views assume all the things that such a conception assumes, such as representationalism, the relevance of the truth/falsity criterion, and so on. “Anti-judgment” views, focusing on action, may be said to be more consonant with some forms of practice theory, as may some of the others (e.g. the “in-between” or “contradictory” views).

Second, belief ascription practices have the familiar duality of being both possible “topics” and possible “resources” for sociological analysis, a duality that fascinated early ethnomethodology. We can be “neutral” about their import for sociological research and study belief ascription practices as a topic. We may ask questions such as: Under what circumstances do people default to folk cartesianism, when do they prefer anti-judgment views, when do they go “in-between” or “contradictory,” and so on.

Alternatively, we may examine the issue by considering the role of belief ascription practices as a resource for sociological explanation. Are pro-judgment views always effective? Should we go anti-judgment and ignore what people say in favor of their behavior? These are some issues I hope to tackle in future posts.

References

Albahari, M. (2014). Alief or belief? A contextual approach to belief ascription. Philosophical Studies, 167(3), 701–720.

Edin, K., & Kefalas, M. (2011). Promises I Can Keep: Why Poor Women Put Motherhood before Marriage. University of California Press.

Gawronski, B., Peters, K. R., & LeBel, E. P. (2008). What Makes Mental Associations Personal or Extra-Personal? Conceptual Issues in the Methodological Debate about Implicit Attitude Measures. Social and Personality Psychology Compass, 2(2), 1002–1023.

Gendler, T. S. (2008a). Alief and Belief. The Journal of Philosophy, 105(10), 634–663.

Gendler, T. S. (2008b). Alief in Action (and Reaction). Mind & Language, 23(5), 552–585.

Jerolmack, C., & Khan, S. (2014). Talk Is Cheap: Ethnography and the Attitudinal Fallacy. Sociological Methods & Research. https://doi.org/10.1177/0049124114523396

Kriegel, U. (2012). Moral Motivation, Moral Phenomenology, And The Alief/Belief Distinction. Australasian Journal of Philosophy, 90(3), 469–486.

Mandelbaum, E. (2013). Against alief. Philosophical Studies, 165(1), 197–211.

Nosek, B. A., Hawkins, C. B., & Frazier, R. S. (2011). Implicit social cognition: from measures to mechanisms. Trends in Cognitive Sciences, 15(4), 152–159.

Schwitzgebel, E. (2010). Acting contrary to our professed beliefs or the gulf between occurrent judgment and dispositional belief. Pacific Philosophical Quarterly, 91(4), 531–553.

Shapira, H. (2017). Who Cares What They Think? Going About the Right the Wrong Way. Contemporary Sociology, 46(5), 512–517.

Young, A. A. (2006). The Minds of Marginalized Black Men: Making Sense of Mobility, Opportunity, and Future Life Chances. Princeton University Press.

Zimmerman, A. (2007). The Nature of Belief. Journal of Consciousness Studies, 14(11), 61–82.

Are Beliefs Pictures in the Head?

In a recently published piece (Strand & Lizardo, 2015), Mike and I argued that the notion of “belief,” if it is to do a more adequate job as a category of analysis in social-scientific research, can best be thought of as a species of habit. I refer the interested reader to the paper for the more detailed “exegetical” argumentation excavating the origins of this notion in American pragmatism (mostly in the work of Peirce and Dewey) and European practice theory (mostly in the work of Bourdieu). Here I would like to explore some of the reasons this proposal may seem so counterintuitive given our traditional conceptions of belief.

The fear, to some well-founded, is that substituting the habit notion for the usual notion would cause a net loss, and thus an inability to adequately account for things that we would like to account for (e.g. patterns of action that are driven by ideas or thoughts in the head).

What is the standard notion of belief that the “habit” notion displaces (if not replaces)? The easiest way to think of it is as one in which beliefs are little “pictures” in the head that people carry around. But what are beliefs pictures of? After all, pictures (even in modern art) usually depict something, however faintly. The answer is that they are supposed to be pictures of the world, which the person somehow uses to get by.

Because beliefs are “pictures” (in cognitive science the word representation is sometimes used in this context), they have the representational properties usual pictures have. For instance, they portray the world in a certain way (e.g. under a particular description). In addition, because they are pictures, beliefs have content. That is, a belief is always about something (in some philosophical quarters, the word “intentionality” is usually brought up here (Searle, 1983)). In this way, some beliefs may be directed at the same state of affairs in the world but “picture it” in different ways (Hutto, 2013). Finally, building on this last distinction, just as pictures claim to depict the world as it is (or at least to resemble it), beliefs can be true (if they portray the world as it is) or false (if the description does not match the world). This truth/falsity relation between the pictures in the head and the world turns out to be crucial for their indispensable job in “explaining” action.

For instance, if somebody opens a refrigerator, grabs a sandwich from it, and eats it, an outside observer can “explain” the pattern by ascribing a belief to the person: The person opened the fridge door because they thought (believed) there was a sandwich there. We usually complete this belief-based explanation by adding some kind of motive or desire as a jointly sufficient cause (“they believed there was a sandwich in the fridge and they were hungry”).

But suppose we were to see the same person open the fridge, look around, and then go back to the couch empty-handed. This is a different behavioral pattern from before. However, note that we can also “explain” this behavior using the same “sandwich” belief mechanism as before. The trick is simply to ascribe a false belief to the person: The imputed picture in the head does not match the actual state of the world. So we can still say, “Sam opened the fridge because they believed there was a sandwich in there and they were hungry.” We just attach one more disclaimer: “But Sam was wrong; there was no sandwich.”

This flexibility makes belief-based explanations fairly powerful (they can account for a wide range of behavioral patterns). However, flexibility is also a double-edged sword: Become too flexible and you risk vacuity, explaining everything and thus nothing (see Strand & Lizardo, 2015, pp. 47–48).

Because the belief-desire combo is so flexible (and so pervasive, even in our “folk” accounting of each other’s actions), some people have argued that it is inevitable; so inevitable that it may be the only game in town for explaining action. This would make the “pictures in the head” version of the notion of belief an essentially non-negotiable part of our explanatory vocabulary. One of the main goals of our paper was to argue that there are other options, even if they seem weird at first sight.

The alternative we championed was to think of belief as a species of habit. This requires both a revision of our implicit classification of mental concepts and a revision of what we mean by “belief.” In terms of the first aspect, the usual way to think of belief and habit is to see them as distinct categories in our mental vocabulary. A habit is a “thoughtless” activity, while an action driven by belief requires “thought” to be involved. On this view they are two mental categories as distinct as a frog is from a zebra (even if both are species of animal). In our proposal, however, the overarching category in mental life (for both human and nonhuman animals) is habit, and belief is a subcategory of habit. This does violence to the standard classification, so it may take time to get used to.

In this respect, note that the “picture” theory of belief seems to be important in how people differentiate belief from habit. Both can be involved in action, but when action is driven by belief, the picture inside the head is in the driver’s seat and is thus an important (but always presumed) component of the action. In fact, the picture is such an important component that we attribute causal force to it: Sam got up from the couch and walked to the fridge because they thought there was a sandwich in there (and they were hungry).

One last observation about the pictures-in-the-head account of action. When the observer imputes the belief “sandwich in the fridge” to Sam and selects this belief as the “cause” of the action, by what criteria is this selection made? I bring this up only to note that there are actually a bunch of other “beliefs” that the observer could have imputed to Sam, and which could be argued to be implicated in the action, but somehow didn’t impute. For instance, the observer could have said that one of the beliefs accounting for Sam’s action is that “there was a fridge in the room.” Or that “the fridge was plugged in,” or that “the floor could sustain their weight,” and so on.

This is not just a trivial “philosophical” issue. We could impute an infinity of little world pictures to Sam; in fact, as many as there are “states of affairs” about the world that make it possible for Sam to get up and check the fridge, inclusive of purely hypothetical or even “negative” pictures (e.g. the belief that “there’s not a bomb in the fridge which will be triggered to detonate when the door is opened”). Yet we do not (sometimes this is referred to as the “frame problem” (Dennett, 2006) in artificial intelligence circles). This means that belief imputation practices following the picture version are necessarily selective, but the criteria for selection remain obscure. This kind of obscurity should be suspect to those who want to recruit these types of explanations as scientific accounts of action.
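A toy sketch makes the worry vivid. The candidate generator below is a deliberately silly assumption of mine; the point is only that nothing in the action itself picks out “sandwich in the fridge” as the causal belief from an open-ended stream of equally implicated candidates:

```python
from itertools import islice
from typing import Iterator

def candidate_beliefs() -> Iterator[str]:
    """Yield an open-ended stream of beliefs equally 'implicated'
    in Sam's trip to the fridge."""
    yield "there is a sandwich in the fridge"
    yield "there is a fridge in the room"
    yield "the fridge is plugged in"
    yield "the floor can sustain Sam's weight"
    n = 0
    while True:  # ...plus arbitrarily many 'negative' preconditions
        n += 1
        yield f"opening the door will not trigger bomb number {n}"

# Any imputation is necessarily selective; here are the first six candidates:
for belief in islice(candidate_beliefs(), 6):
    print("Sam believes that", belief)
```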

But we are getting ahead of ourselves. The main point of this post is simply to warm you up to the intuition that maybe the pictures-in-the-head version of belief is not as intuitive as you may have thought, nor as unproblematic or non-negotiable as it is sometimes depicted. In a future post I will introduce the alternative conception of belief as habit and see whether it escapes these issues.

References

Dennett, D. C. (2006). Cognitive Wheels: The frame problem of AI. In J. L. Bermudez (Ed.), Philosophy of Psychology: Contemporary Readings (pp. 433–454). New York: Routledge.

Hutto, D. D. (2013). Why Believe in Contentless Beliefs? In N. Nottelmann (Ed.), New Essays on Belief: Constitution, Content and Structure (pp. 55–74). London: Palgrave Macmillan UK.

Searle, J. R. (1983). Intentionality: An essay in the philosophy of mind. New York: Cambridge University Press.

Strand, M., & Lizardo, O. (2015). Beyond World Images: Belief as Embodied Action in the World. Sociological Theory, 33(1), 44–70.