The Decision to Believe

As noted in a previous post, there are analytic advantages to reconceptualizing the traditional denizens of the folk-psychological vocabulary from the point of view of habit theory. So far, however, the argument has been negative and high-level; thinking of belief as habit, for instance, allows us to sidestep a bunch of antinomies and contradictions brought about by the picture theory. In this post, I would like to outline some positive implications of recasting beliefs as a species of habit. However, I will begin by discussing other overlooked implications of the picture theory and then (I promise) move on to some clear substantive implications of the habit conception.

As noted before, the picture theory of belief is part of a more general set of folk (and even technical) conceptions of how beliefs work. I have already noted one of them, namely the postulate of incorrigibility: if somebody assents to believing p, then we presume that they have privileged first-person knowledge on the matter. It would be nonsensical (and socially uncouth) for a second person to say to them “I know better than you on this one; I don’t think you believe p.” Folk Cartesianism thus operates both as a set of philosophical tenets (e.g. the idea that we have privileged introspective and maybe even non-inferential access to our own beliefs) and as a set of ethnomethods for coordinating social interaction (accepting people’s claims that they believe something when they tell us so, without raising a fuss).

I want to point to another, less obvious premise of both folk and technical Cartesianism. This is the notion (which became historically decisive in the Christian West after the Protestant Reformation) that you get to choose what you believe. Just like before, this doubles as a philosophical precept and as an ethnomethod used to organize social relations in doxa-centric societies (Mahmood 2011). If you get to choose what you believe, and if your belief is obnoxious or harmful, then you are responsible for your belief and can be blamed, punished, burned at the stake, and so on. As the sociologist David Smilde has also noted, there is a positive version of this implication of folk Cartesianism: if the belief is good for you (e.g. brings with it new friends, behaviors, and resources), then we should expect you (under the auspices of charitable ascription) to choose to believe it. However, in this case the weird prospect of people believing something not because they find its truth or validity compelling but for instrumental reasons raises its ugly head (Smilde 2007, 3ff; 100ff).

The idea of choosing to believe is not as crazy as it sounds. At least the negative version of it, the idea that we could entertain a consideration (say, a standard proposition) and withhold belief from it until we had scrutinized its validity, was central to the technical Cartesian method of doubt. Obviously, this requires that we have some reflective control over the decision to believe something or not while we consider it, so in this respect technical and folk Cartesianism coincide.

As Mike and I discuss in the 2015 paper, rejecting the picture theory of belief (and the associated technical/folk Cartesianism) makes hash of the notion of “choosing to believe” as a plausible belief-formation story. Here the strict analogy to prototypical habits helps. Consider a well-honed habit: when exactly did you choose to acquire it? Even if you made a “decision” to start a new training regimen (e.g. yoga), at what point did it go from a decision to a habit? Did that involve an act of assent on your part? Now consider a traditional belief stated as an explicit linguistic proposition you claim to believe (e.g. “The U.S. is the land of opportunity”). When did you choose to believe that? We suggest that even a fairly informal bit of phenomenology will lead to the conclusion that you do not have credible autobiographical memories of having “chosen” any of the things you claim to believe. It’s as if, as Smilde points out, the original memory of decision is “erased” once the conviction to believe takes hold.

We suggest that the apparatus of erased memories and decisions that may or may not have taken place is an unnecessary outgrowth of the picture theory. Just like habits, beliefs are acquired gradually. The problem is that we take trivial (in the strict sense of trivia) encyclopedic statements (e.g. “Bahrain is a country in the Middle East”) as prototypical cases of belief. Because these can be acquired via fast memory binding after a single exposure, they seem to be the opposite of the way habits are acquired. However, these linguistic-assent-to-trivia beliefs are analytically worthless, because if there is anything like belief that plays a role in action, it is unlikely to take the form of linguistic trivia. That we believe (no pun intended) that these types of propositions are “in control” of action is itself an unnecessary analytic burden produced by the picture theory.

Instead, as noted before, a lot of our action-implicated beliefs are clusters of dispositions and not passive acts of private assent to linguistic statements. However, trivia-style beliefs capable of being acquired via a single exposure are the main stock in trade of both the folk idea of belief and the intellectualist strand of philosophical discussion on the topic. Thus, they are important to deal with conceptually, even if, from the point of view of habit theory, they represent a degenerate case, since from this perspective repetition, habituation, and perseverance are the hallmarks of belief (Smith and Thelen 2003).

That said, what if I told you that the folk-Cartesian notion of deciding to believe is inapplicable even in the case of trivia-style, one-shot belief? This is the key conclusion of what is now the most empirically successful program on belief formation in cognitive psychology. The classic paper here is Gilbert (1991), who traces the idea back to Spinoza, although the subject has been revived in the recent efflorescence of work in the philosophy of belief; see in particular Mandelbaum (2014) and Rott (2017). The last of these notes that this was also a central part of the habit-theoretic notion of belief shared by the American pragmatists.

When it comes to one-shot propositions, people are natural-born believers. In contrast to the idea that conceptions are first considered while belief is withheld (as in the Cartesian model), what the evidence shows is that mere exposure to, or consideration of, a proposition leads people to treat it as a standing belief in future action and thinking. Thus, people seem incapable of not believing what they bring to mind. While this may seem like a “bug” rather than a feature of our cognitive architecture, it is perfectly compatible with both a habit-theoretic notion of belief and a wider pragmatist conception of mentality, of the sort championed by James, Dewey, and in particular the avowed anti-Cartesian C. S. Peirce. Just as every action could be the first in a long line that will fix a belief or a habit, the very act of considering something makes it relevant for us without the intervention of some effortful mental act of acceptance.

So just as you don’t know where your habits come from, you don’t know where your “beliefs” (in the one-shot trivia sense) come from either. The reason is that they got in there without having to get an invitation from you. By the same token, an implication of the Spinozist belief-formation process is that the thing that requires effort and controlled intervention is the withdrawal of belief, which is difficult and resource-demanding. This links up the Spinozist belief-formation story with dual-process models of thinking and action (Lizardo et al. 2016).
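To make the contrast concrete, here is a minimal toy sketch, purely illustrative and not drawn from Gilbert (1991) or any of the other cited papers; the function names and the “cognitive load” threshold are hypothetical devices for showing that, on the Spinozan story, acceptance comes for free and it is rejection that takes effort and can fail when resources are scarce.

```python
# A purely illustrative toy contrast, not a model from the cited literature.
# Function names and the load threshold (0.5) are hypothetical.

def cartesian_fixation(proposition, looks_true):
    """Consider first, decide later: nothing is believed until it passes evaluation."""
    return {proposition} if looks_true(proposition) else set()

def spinozan_fixation(proposition, looks_true, cognitive_load=0.0):
    """Comprehension is acceptance; rejection is a separate, effortful step."""
    beliefs = {proposition}                      # merely entertaining it fixes it
    if not looks_true(proposition) and cognitive_load < 0.5:
        beliefs.discard(proposition)             # effortful withdrawal; fails under load
    return beliefs

reject_it = lambda p: False                      # an evaluator that would reject the claim

print(cartesian_fixation("Bahrain is a country in Africa", reject_it))        # set()
print(spinozan_fixation("Bahrain is a country in Africa", reject_it))         # set(): withdrawal succeeds
print(spinozan_fixation("Bahrain is a country in Africa", reject_it, 0.9))    # the belief sticks
```

The only point of the toy is the asymmetry: in the Spinozan function, getting the proposition out of the belief set is the step that consumes controlled resources, which is exactly what connects the Spinozist story to the dual-process framework mentioned above.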

This is also in strict analogy with habit: while lots of habits are relatively easy to form (whether desirable or not), kicking a habit is hard. Even the habits that seem to us “hard” to form (e.g. going to the gym regularly) are not hard to form because they are habits; they are hard to form because they have to contend with even stronger competing habits (lounging at home) that will not go away without putting up a fight. It is the dissolution of the old habit, and not the making of the new one, that is difficult.

So with belief. Beliefs are hard to undo. Once again, because we mistakenly take the one-shot trivia version of belief as the prototype, this seems like an exaggeration. If you believed “Bahrain is a country in Africa” and somebody told you “no, actually, it’s in the Persian Gulf,” it would take some mental energy to give up the old belief and form the new one, but not that much; most people would succeed.

But as noted in a previous entry, most beliefs are clusters of habitual dispositions, not singleton spectatorial propositions toward which we go yea or nay. So (easily!) developing these dispositional complexes in the context of, let’s say, a misogynistic society like the United States would mean that “unbelieving” the dispositional cluster glossed by the sentential proposition “women can’t make as good leaders as men” is not a trivial matter. For some, to completely unbelieve this may be close to impossible. This is something that our best social-scientific theories (whether “critical” or not) have yet to handle properly, because their conception of “ideology” is still trapped in the picture theory (this is a matter for future posts).

Beliefs, as Mike and I noted in a companion paper (Strand and Lizardo 2017), have an inertia (which Bourdieu referred to as “hysteresis”) that makes them hang around even after a third-person observer could diagnose them as “out of phase” or “outmoded.” This is the double-edged nature of their status as habits: easy to form (when no competing beliefs are around) and easy to use (once fixed via repetition), but hard to drop.

References

Gilbert, Daniel T. 1991. “How Mental Systems Believe.” American Psychologist 46 (2): 107–19.

Lizardo, Omar, Robert Mowry, Brandon Sepulvado, Dustin S. Stoltz, Marshall A. Taylor, Justin Van Ness, and Michael Wood. 2016. “What Are Dual Process Models? Implications for Cultural Analysis in Sociology.” Sociological Theory 34 (4): 287–310.

Mahmood, Saba. 2011. Politics of Piety: The Islamic Revival and the Feminist Subject. Princeton University Press.

Mandelbaum, Eric. 2014. “Thinking Is Believing.” Inquiry: An Interdisciplinary Journal of Philosophy 57 (1): 55–96.

Rott, Hans. 2017. “Negative Doxastic Voluntarism and the Concept of Belief.” Synthese 194 (8): 2695–2720.

Smilde, David. 2007. Reason to Believe: Cultural Agency in Latin American Evangelicalism. University of California Press.

Smith, Linda B., and Esther Thelen. 2003. “Development as a Dynamic System.” Trends in Cognitive Sciences 7 (8): 343–48.

Strand, Michael, and Omar Lizardo. 2017. “The Hysteresis Effect: Theorizing Mismatch in Action.” Journal for the Theory of Social Behaviour 47 (2): 164–94.
