The Decision to Believe

As noted in a previous post, there are analytic advantages to reconceptualizing the traditional denizens of the folk-psychological vocabulary from the point of view of habit theory. So far, however, the argument has been negative and high-level; thinking of belief as habit, for instance, allows us to sidestep a host of antinomies and contradictions brought about by the picture theory. In this post, I would like to outline some positive implications of recasting beliefs as a species of habit. I will begin, however, by discussing other overlooked implications of the picture theory and then (promise) move on to some clear, substantive implications of the habit conception.

As noted before, the picture theory of belief is part of a more general set of folk (and even technical) conceptions of how beliefs work. I have already noted one of these: the postulate of incorrigibility. If somebody assents to believing p, then we presume that they have privileged first-person knowledge on the matter. It would be nonsensical (and socially uncouth) for a second person to say to them “I know better than you on this one; I don’t think you believe p.” Folk Cartesianism thus operates both as a set of philosophical tenets (e.g. the idea that we have privileged introspective, and maybe even non-inferential, access to our own beliefs) and as a set of ethnomethods used to coordinate social interaction (accepting people’s claims to believe something, when they tell us so, without raising a fuss).

I want to point to another, less obvious premise of both folk and technical Cartesianism. This is the notion (which became historically decisive in the Christian West after the Protestant Reformation) that you get to choose what you believe. Just like before, this doubles as a philosophical precept and as an ethnomethod used to organize social relations in doxa-centric societies (Mahmood 2011). If you get to choose what you believe, and if your belief is obnoxious or harmful, then you are responsible for your belief and can be blamed, punished, burned at the stake, and so on. As the sociologist David Smilde has also noted, there is a positive version of this implication of folk Cartesianism: if the belief is good for you (e.g. brings with it new friends, behaviors, resources), then we should expect you (under the auspices of charitable ascription) to choose to believe it. However, the weird prospect of people believing something not because they find its truth or validity compelling but for instrumental reasons rears its ugly head in this case (Smilde 2007, 3ff; 100ff).

The idea of choosing to believe is not as crazy as it sounds. At least its negative counterpart, the idea that we could bring up a consideration (say, a standard proposition) and withhold belief from it until we had scrutinized its validity, was central to the technical Cartesian method of doubt. Obviously, this requires that we have some reflective control over our decision to believe something or not while we consider it, so in this respect technical and folk Cartesianism coincide.

As Mike and I discuss in the 2015 paper, rejecting the picture theory of belief (and the associated technical/folk Cartesianism) makes hash of the notion of “choosing to believe” as a plausible belief-formation story. Here the strict analogy to prototypical habits helps. Consider a well-honed habit; when exactly did you choose to acquire it? Even if you made a “decision” to start a new training regimen (e.g. yoga), at what point did it go from a decision to a habit? Did that involve an act of assent on your part? Now consider a traditional belief stated as an explicit linguistic proposition you claim to believe (e.g. “The U.S. is the land of opportunity”). When did you choose to believe that? We suggest that even a fairly informal bit of phenomenology will lead to the conclusion that you do not have credible autobiographical memories of having “chosen” any of the things you claim to believe. It is as if, as Smilde points out, the original memory of decision is “erased” once the conviction to believe takes hold.

We suggest that the apparatus of erased memories and decisions that may or may not have taken place is an unnecessary outgrowth of the picture theory. Just like habits, beliefs are acquired gradually. The problem is that we take trivial (in the strict sense of trivia) encyclopedic statements (e.g. “Bahrain is a country in the Middle East”) as prototypical cases of belief. Because these could be acquired via fast memory binding after a single exposure, they seem to be the opposite of the way habits are acquired. However, these linguistic-assent-to-trivia beliefs are analytically worthless: if there is anything like belief that plays a role in action, it is unlikely to take the form of linguistic trivia. That we believe (no pun intended) that these types of propositions are “in control” of action is itself an unnecessary analytic burden produced by the picture theory.

Instead, as noted before, a lot of our action-implicated beliefs are clusters of dispositions, not passive acts of private assent to linguistic statements. However, trivia-style beliefs capable of being acquired via a single exposure are the main stock in trade of both the folk idea of belief and the intellectualist strand of philosophical discussion on the topic. Thus, they are important to deal with conceptually, even if, from the point of view of the habit theory, they represent a degenerate case, since from this perspective repetition, habituation, and perseverance are the hallmarks of belief (Smith and Thelen 2003).

That said, what if I told you that the folk-Cartesian notion of deciding to believe is inapplicable even in the case of trivia-style, one-shot belief? This is the key conclusion of what is now the most empirically successful program on belief formation in cognitive psychology. The classic paper here is Gilbert (1991), who traces the idea back to Spinoza, although the subject has been revived in the recent efflorescence of work in the philosophy of belief. See in particular Mandelbaum (2014) and Rott (2017). The last of these notes that this was also a central part of the habit-theoretic notion of belief shared by the American pragmatists.

When it comes to one-shot propositions, people are natural-born believers. In contrast to the idea that conceptions are first considered while belief is withheld (as in the Cartesian model), the evidence shows that mere exposure to, or consideration of, a proposition leads people to treat it as a standing belief in future action and thinking. Thus, people seem incapable of not believing what they bring to mind. While this may seem like a “bug” rather than a feature of a cognitive architecture, it is perfectly compatible with both a habit-theoretic notion of belief and a wider pragmatist conception of mentality, of the sort championed by James, Dewey, and in particular the avowed anti-Cartesian C. S. Peirce. Just as every action could be the first in a long line that will fix a belief or a habit, the very act of considering something makes it relevant for us without the intervention of some effortful mental act of acceptance.

So just like you don’t know where your habits come from, you don’t know where your “beliefs” (in the one-shot trivia sense) come from either. The reason is that they got in there without having to get an invitation from you. By the same token, an implication of the Spinozist belief-formation process is that it is the withdrawal of belief that requires effort and controlled intervention (which is difficult and resource-demanding). This links up the Spinozist belief-formation story with dual-process models of thinking and action (Lizardo et al. 2016).
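The Spinozan asymmetry described above (acceptance comes free with mere consideration; withdrawal requires scarce controlled effort) can be caricatured in a few lines of code. This is a toy sketch only, not a claim about any actual cognitive model; the class and parameter names (`SpinozanAgent`, `effort_budget`) are my own illustrative inventions.

```python
# Toy sketch of the Spinozan belief-formation asymmetry:
# merely considering a proposition installs it as a belief (automatic, free),
# while rejecting it is a second, effortful step that can fail under load.

class SpinozanAgent:
    def __init__(self, effort_budget=2):
        self.beliefs = set()
        self.effort_budget = effort_budget  # scarce controlled-processing resource

    def consider(self, proposition):
        # Comprehension and acceptance are one and the same step.
        self.beliefs.add(proposition)

    def reject(self, proposition):
        # Unbelieving is resource-demanding; under cognitive load
        # (budget exhausted) the default acceptance simply stands.
        if self.effort_budget > 0:
            self.effort_budget -= 1
            self.beliefs.discard(proposition)
            return True
        return False

agent = SpinozanAgent(effort_budget=1)
agent.consider("Bahrain is a country in Africa")
agent.reject("Bahrain is a country in Africa")   # succeeds, budget now spent
agent.consider("The moon is made of cheese")
agent.reject("The moon is made of cheese")       # fails: no effort left, belief sticks
```

The point of the caricature is that there is no `decide_to_believe` step anywhere: belief is the default output of consideration, and only its removal is an act.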

This is also in strict analogy with habit: While lots of habits are relatively easy to form (desirable or not), kicking a habit is hard. Even the habits that seem to us “hard” to form (e.g. going to the gym regularly) are not hard to form because they are habits; they are hard to form because they have to contend with the existence of even stronger competing habits (lounging at home) that will not go away without putting up a fight. It is the dissolution of the old habit and not the making of the new one that’s difficult.

So it is with belief. Beliefs are hard to undo. Once again, because we mistakenly take the trivia, one-shot version of belief as the prototype, this seems like an exaggeration. If you believed that “Bahrain is a country in Africa” and somebody told you “no, actually, it’s in the Persian Gulf,” it would take some mental energy to give up the old belief and form the new one, but not that much; most people would succeed.

But as noted in a previous entry, most beliefs are clusters of habitual dispositions, not singleton spectatorial propositions toward which we go yea or nay. So developing these dispositional complexes (easily!) in the context of, let’s say, a misogynistic society like the United States means that “unbelieving” the dispositional cluster glossed by the sentential proposition “women can’t make as good leaders as men” is not a trivial matter. For some, to completely unbelieve this may be close to impossible. This is something that our best social-scientific theories (whether “critical” or not) have yet to handle properly, because their conception of “ideology” is still trapped in the picture theory (this is a matter for future posts).

Beliefs, as Mike and I noted in a companion paper (Strand and Lizardo 2017), have an inertia (which Bourdieu referred to as “hysteresis”) that makes them hang around even after a third-person observer can diagnose them as “out of phase” or “outmoded.” This is the double-edged nature of their status as habits: easy to form (when no competing beliefs are around) and easy to use (once fixed via repetition), but hard to drop.

References

Gilbert, Daniel T. 1991. “How Mental Systems Believe.” American Psychologist 46 (2): 107–19.

Lizardo, Omar, Robert Mowry, Brandon Sepulvado, Dustin S. Stoltz, Marshall A. Taylor, Justin Van Ness, and Michael Wood. 2016. “What Are Dual Process Models? Implications for Cultural Analysis in Sociology.” Sociological Theory 34 (4): 287–310.

Mahmood, Saba. 2011. Politics of Piety: The Islamic Revival and the Feminist Subject. Princeton University Press.

Mandelbaum, Eric. 2014. “Thinking Is Believing.” Inquiry 57 (1): 55–96.

Rott, Hans. 2017. “Negative Doxastic Voluntarism and the Concept of Belief.” Synthese 194 (8): 2695–2720.

Smilde, David. 2007. Reason to Believe: Cultural Agency in Latin American Evangelicalism. University of California Press.

Smith, Linda B., and Esther Thelen. 2003. “Development as a Dynamic System.” Trends in Cognitive Sciences 7 (8): 343–48.

Strand, Michael, and Omar Lizardo. 2017. “The Hysteresis Effect: Theorizing Mismatch in Action.” Journal for the Theory of Social Behaviour 47 (2): 164–94.

Are the Folk Natural Ryleans?

Folk psychology and the belief-desire accounting system have been formative in cognitive science because of the claim, mainly put forth by philosophers, that they form the fundamental framework via which everybody (philosopher and non-philosopher alike) understands human action as meaningful. Both proponents of some version of the argument for the ineliminable character of the folk-psychological vocabulary (Davidson, 1963; Fodor, 1987) and critics who cannot wait for its elimination by a mature neuroscience as an outmoded theory (Churchland, 1981) accept the basic premise: namely, that when it comes to action understanding, folk psychology is preferred by the folk. The job of philosophy is to systematize and lay bare the “theoretical” structure of the folk system (to save it or disparage it).

In a fascinating new article forthcoming in Philosophical Psychology, Devin Sanchez Curry tries to challenge this crucial bit of philosophical common wisdom, which he refers to as “Davidson’s Dogma” (Sanchez Curry acknowledges that this might not be exegetically strictly true of Davidson’s writings, although it is true in terms of third-party reception and influence). In particular, Sanchez Curry homes in on the claim that the folk use a “theory” of causation to account for action using beliefs: essentially, the idea that beliefs are inner causes (the cogs in the internal machinery) that produce action when they interact with other beliefs and desires. This is the subject of a previous post.

Sanchez Curry, rather than staying at the purely exegetical or conceptual-analysis level, turns to the empirical literature in psychology on lay belief attribution to shed light on this issue. There he notes something surprising. There is little empirical evidence that the folk resort to a belief-desire vocabulary or to a theory of beliefs as inner causes (cogs and wheels in the internal machinery) of action. Going through the literature on the development and functioning of “mindreading” abilities, Sanchez Curry shows that the primary conclusion of this line of work is that the explicit attribution of representational (e.g. “pictures in the head”) versions of belief is the exception, not the rule.

Instead, the literature has converged (like many other subfields in social and cognitive psychology) on a dual systems/process view, in which the bulk of everyday mindreading is done by high-capacity, high-efficiency automatic systems that do not traffic in the explicit language of representations. Instead, these systems are attuned to the routine behavioral dispositions of others and engage in the job of inference and filling-in of other people’s behavior patterns by drawing on well-honed schemata trained by the pervasive experience of watching conspecifics make their way through the world. Explicit representational belief-attribution practices emerge when the routine System I processes encounter trouble and require either observers or other people to “justify” what has been done using a more explicit accounting.

As Sanchez Curry notes, the evidence here is consistent with the idea (which I alluded to in a previous post) that persons may be “natural Ryleans,” but that the Rylean (dispositional) action-accounting system is so routinized as to lack the flashy linguistic bells and whistles of the folk-psychological one. This creates the illusion that there is only one accounting system (the belief-desire one), when in fact there are two; it is just that the one that does most of the work is nondeclarative (Lizardo, 2017), while the declarative one gets most of the attention, even though it is actually the “emergency” action-accounting system, not the everyday workhorse.

As Sanchez Curry also notes, evidence provided by “new wave” (post-Heider) attribution theorists shows that the explicit (and actual) folk-psychological accounting system, even when activated, seldom posits beliefs as “inner causes” of behavior. Instead, when people enter the folk-psychological mode to explain puzzling behavior that cannot be handled by System I practical mindreading, they look for reasons, not causes. These reasons are holistic, situational, and even “institutional” (in the sociological sense). They are “justifications” that make the action meaningful while saving the rationality of the actor, given the context. They seldom refer to internal machineries or producing causes. We look for justifications to establish blame, to “make sense” (e.g. “explain”), or to “save face,” not to establish the inner wellsprings of action. So even in this case the folk are natural Ryleans and focus on the observables of the situation, not the inner wellsprings. This means that the “theory” of folk psychology is a purely iatrogenic construction of a philosophical discourse on action that plays little role in the actual attributional practices of the folk: folk psychology in the Davidsonian/Fodorian sense turns out to be the specialized construction of an expert community.

One advantage of this account is that it solves what I previously referred to as the “frame problem” faced by all “pictures in the head” as causal drivers of action. The problem is that the observer has to pick one of a myriad of possible pictures as the “primary” cause for the action. But there is no way to make this selection in a non-arbitrary way if we are stuck with the “inner cause” conception. In the Rylean conception, the “reason” we attribute will depend on the pragmatics and goals of the reason request. Are we seeking to establish blame? Make sense of a puzzle? Save the agent’s face? Make it seem like they are devious?

These arguments have several important implications. The most important one is that, for the most part, nobody is imputing little world pictures to other people to explain their action, empathize, or even predict or make inferences as to what they will do next. Dedicated, highly trained automatic systems do the job when people are behaving in “predictable” ways. No representations required there (Hutto, 2004). When this action-tracking system fails, we resort to more explicit action accountings, or, more accurately, we resort to placing strange or puzzling action in a less puzzling context. Even here, this is less about getting at occult or inner wellsprings than about trying to construct a “reason” why somebody might have acted this way that makes the action less puzzling.

References

Churchland, P. M. (1981). Eliminative Materialism and the Propositional Attitudes. The Journal of Philosophy, 78(2), 67–90.

Davidson, D. (1963). Actions, Reasons, and Causes. The Journal of Philosophy, 60(23), 685–700.

Fodor, J. A. (1987). Psychosemantics: The Problem of Meaning in the Philosophy of Mind. MIT Press.

Hutto, D. D. (2004). The Limits of Spectatorial Folk Psychology. Mind and Language, 19(5), 548–573.

Lizardo, O. (2017). Improving Cultural Analysis: Considering Personal Culture in its Declarative and Nondeclarative Modes. American Sociological Review, 82(1), 88–115.

The Ascription of Dispositions

“It’s what you do” is the title of a wildly successful advertising campaign by the American insurance company GEICO. In each spot, we see either a “type” (people in a horror movie, a camel, a fisherman, a cat, a Mom, a golf commentator) or people familiar enough to the intended middle-aged audience of insurance buyers to be considered types (mainly ’80s and ’90s musical acts like Europe, Boyz II Men, or Salt-N-Pepa) doing things they “typically” do. These things are either out of place, annoying, rude, or irrational, and thus funny within the context of the “frame” (an office, a restaurant, etc.) in which they are presented.

For instance, in a viral spot, Peter Pan shows up at the 50th-anniversary reunion to remind everybody else of how young he is (and how old they are). The voiceover reads: “If you are Peter Pan you stay young forever. It’s what you do.” In another one, a poor guy slowly sinks to his death in quicksand, while imploring a nearby cat to get help. The cat of course just licks her paws without looking at him: “If you are a cat you ignore people. It’s what you do.”

The commercials are of course funny due to the specificity of each setup. I want to suggest, however, that they may carry a more general lesson. Perhaps they strike us as noticeable (and thus humorous) because they use an action-accounting system that is thoroughly familiar but that we usually keep in abeyance. In fact, it is so familiar that it requires the odd situations in the GEICO commercials to make it stand out. This action-accounting system, rather than relying on “belief-desire” ascriptions, points to typicalities in behavior patterns as their own justification. Thus the template “If you are X, you do Y; it’s what you do” may hold the key to prying ourselves loose of belief-desire talk.

In a previous post, I argued that the belief-desire accounting system commits us to a model in which action is driven by “little pictures in the head.” An entire tradition of explaining action by making recourse to the “ideas” that “drive” it is based on such a strategy (Parsons, 1938). This is not as innocent a move as it may seem. Pictures in the head are entities assumed to have specific properties (e.g. representational, content-ful, and causally power-ful) that ultimately need to be cashed in in any scientific account of action. This may not be possible (Hutto & Myin, 2013).

In a follow-up post, I noted that, even if we take an ontology-neutral stance (Dennett, 1989), the ascription of belief from a third-person perspective is not an unproblematic practice either. Sometimes, different pieces of evidence (e.g. what people claim to believe) clash with other pieces of evidence (what people do), making belief ascription a problematic affair. The point there was that sometimes, even in our routine ascription behavior, we don’t treat beliefs as pure pictures. Actions matter too, and sometimes we may conclude that what people really believe has nothing to do with the pictures (e.g. propositions) that they claim to have in their head.

So maybe our ascription practices and our action-accounting systems can go beyond the usual belief-desire combo of folk psychology. This is important because one of the reasons why the claim that belief is a kind of habit might be problematic to some is that it doesn’t seem to fit any intuitive picture of the way we keep track of and explain other people’s actions (or our own). Here I will build some intuition for the claim that there are other ways of “explaining” action that don’t require the ascription of picture-like constructs that drive action. These are also compatible with the idea that beliefs are a kind of habit. Moreover, these are already ascription practices that we follow in our everyday accountings; it’s just that they are too boring to be noticeable.

The most obvious way in which we sometimes explain action without using the language of belief is to talk about somebody’s tendencies, propensities, inclinations, etc. Just like in the GEICO commercials, instead of ascribing beliefs and desires we simply point to the action as being “typical” of that doer. In the philosophy of action, at least since Ryle (2002), this is usually referred to as using a “dispositional” language. Just like ideas, dispositions are sufficient “causes” of the action they help account for. So, going back to the example of Sam the fridge opener: instead of saying that Sam opened the fridge because they believed there was a sandwich inside, we can say, “Sam tends to open the fridge when they are hungry. It’s what they do.” This is a way of accounting for the action that does not resort to the ascription of world pictures. Instead, it points to a regularity or tendency in Sam’s action that is noted to occur under certain (usually typical) conditions.

These kinds of dispositional ascriptions are fairly common. In fact, they are so common that they are kind of boring. Maybe they stand out less than the usual belief-desire combo of folk psychology because they are seldom used for action justification, rationality ascription, or storytelling. A serial killer who attempted to mount a defense based on the claim that “killing is just what I do” would be the subject of a short trial. In this sense, dispositional ascriptions are gray and drab (in spite of their strict accuracy), while the trafficking in (and sometimes the clash between) beliefs and desires just tells a more interesting story (in spite of its inherently speculative nature). But the pragmatics of belief-desire language, or its mnemonic advantage, should not dictate its use in social-scientific explanatory projects. Dispositions have an advantage here because they commit us to a less inflationary ontology, one compatible with the naturalistic commitments of cognitive neuroscience.

As Schwitzgebel (2010) has argued, the dispositional approach can be extended to account for our ascription of the usual “attitudes,” whether propositional (like beliefs and desires) or not. This also points to a solution to the ascription problems that arise when sayings (or phenomenological experience) do not match up with action. In contrast to pro-judgment views (which favor subjective certainties and verbal reports) or anti-judgment views (which favor action), the idea is to think of the global entity (e.g. the “belief” or the “desire”) as a cluster of dispositions. So rather than any one member (the saying or the doing) being decisive in our ascription, they all count (although we may weigh some more than others). This means that sometimes the matter of whether somebody “believes” P will be undecidable (the cases of implicit/explicit dissociation) because different dispositions point in different directions.

The bigger point, however, is that all dispositional ascriptions have the structure of “habituals” (Fara, 2005). So when we say Sam “believes” P, what we are really saying is that Sam is predisposed to agree that P under a certain broad range of circumstances. But we are also saying that Sam is likely to act as if P is true, to have certain subjective experiences consistent with the truth of P, and so on. In this respect, the “belief” that P is just a cluster of cognitive, phenomenological, verbal, and behavioral dispositions. This cashes in on the insight that “habit” (or disposition) is the superordinate category in mental life and that the other terms of the mental vocabulary fall out as special cases. This also reinforces the point that Mike and I made in the original paper (see in particular pp. 56–57): the issue is not the elimination of the language of belief and desire (or the other folk mental concepts), but their proper re-specification within a habit-theoretic framework.

Another nice feature of the dispositional ascription approach is that when we ascribe a belief, we no longer have to commit ourselves to the existence or causal efficacy of problematic entities (e.g. world pictures); we instead point to the usual set of things clear in experience (actions, linguistic declarations, comportments, moods, etc.). Usually these hang together and point in the same direction; sometimes they do not. However, when they fail to hang together, the result no longer has to be a contest between heterogeneous entities (e.g. sayings versus doings) but between different species of the same dispositional genus.

Note, however, that picking one disposition in the cluster as the decisive element in an act of ascription is a conclusion that cannot be reached by virtue of an a priori methodological policy (such as one privileging doings over sayings or vice versa). Instead, we need to commit ourselves to an ascription standard combining inference to the best explanation with a coherentist approach: attitude ascriptions should maximize harmony across the entire dispositional profile. So it would be a mistake, for instance, to select a single disposition (or phenomenal experience, or verbal report) as the criterion for attitude ascription when there is an entire panoply of other dispositions pointing in a different direction.
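The coherentist ascription standard just described can be given a toy formalization: tally the whole dispositional profile and ascribe the attitude only when the profile leans clearly one way. Everything here (the function name, the weights, the margin threshold) is a hypothetical illustration of the idea, not a proposal for an actual measurement model.

```python
# Toy sketch of coherentist attitude ascription: a "belief" is ascribed by
# tallying an entire profile of dispositions (sayings, doings, experiences),
# never by privileging any single one. Weights and margin are arbitrary.

def ascribe_belief(dispositions, margin=0.5):
    """dispositions: list of (kind, points_to_P, weight) tuples.
    Returns 'believes P', 'does not believe P', or 'undecidable'."""
    score = sum(w if points_to_p else -w
                for _kind, points_to_p, w in dispositions)
    if score > margin:
        return "believes P"
    if score < -margin:
        return "does not believe P"
    return "undecidable"  # the implicit/explicit dissociation cases

# A profile whose members point in different directions:
profile = [
    ("verbal report",         True,  1.0),  # says they believe P
    ("behavioral tendency",   False, 1.0),  # acts as if not-P
    ("phenomenal experience", False, 0.5),
]
ascribe_belief(profile)  # → "undecidable"
```

Note that the undecidable outcome is a feature, not a bug: it is exactly the verdict the cluster view recommends when sayings and doings genuinely diverge.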

So the issue is not whether there is a contest between “sayings” and “doings” (Jerolmack & Khan, 2014). Rather, the best tack is to take a tally of the entire dispositional panoply, which may involve lots of tendencies to say, do, and experience. Here some sayings might clash with other sayings and some doings with other doings. Whether people strive for consistency across their dispositional profile may be as much a sociocultural matter (as argued by Max Weber) as an a priori analytic issue. In all, however, what we are confronting are dispositions clashing (or harmonizing) with other dispositions, so in this sense the analytical task becomes tractable from within a single action vocabulary.

References

Dennett, D. C. (1989). The Intentional Stance. MIT Press.

Fara, M. (2005). Dispositions and Habituals. Noûs, 39(1), 43–82.

Hutto, D. D., & Myin, E. (2013). Radicalizing Enactivism: Basic Minds Without Content. MIT Press.

Jerolmack, C., & Khan, S. (2014). Talk Is Cheap: Ethnography and the Attitudinal Fallacy. Sociological Methods & Research. https://doi.org/10.1177/0049124114523396

Parsons, T. (1938). The Role of Ideas in Social Action. American Sociological Review, 3(5), 652–664.

Ryle, G. (2002 [1949]). The Concept of Mind. Chicago: The University of Chicago Press. With an introduction by Daniel C. Dennett.

Schwitzgebel, E. (2010). Acting Contrary to Our Professed Beliefs or the Gulf Between Occurrent Judgment and Dispositional Belief. Pacific Philosophical Quarterly, 91(4), 531–553.