Thick and Thin Belief

Knowledge and Belief

A (propositional) knowledge (that) ascription logically entails a belief ascription, right? I mean if I think that Sam knows that Joe Biden is the president of the United States, I don’t need to do further research into Sam’s state of mind or behavioral manifestations to conclude that they also believe that Joe Biden is president of the United States. For any proposition or piece of “knowledge-that,” if I state that an agent X knows that q, I am entitled to conclude by virtue of logic alone that X believes that q.

This, as summarized, has been the standard position in analytic epistemology and philosophy of mind. The entailment of belief from knowledge has been considered so obvious that nobody thinks it needs to be argued for or defended (treated as falling closer to the “analytic” end of the Quinean continuum). Most of the work on belief by epistemologists has therefore focused on the conditions under which belief can be justified, not on whether an attribution of knowledge necessarily entails an attribution of belief to an agent.

Of course, analytic philosophers are inventive folk, and there have been attempts (starting around the 1960s), via the thought-experiment route, to come up with hypothetical cases in which the attribution of belief from knowledge didn’t come so easily. But most people protested against these made-up cases, denying that they in fact showed that one could attribute knowledge without attributing belief. Some of the debate, as with many philosophical ones, ultimately turned on philosophical method itself; perhaps the inability of professional philosophers to imagine non-contrived cases in which we can attribute knowledge without belief rests on the very rarefied air that philosophers breathe and the correspondingly restricted set of examples that they can imagine.

Myers-Schulz & Schwitzgebel (2013) thus follow a recent trend of “experimental philosophy,” in which philosophers burst out of the philosophical bubble and simply confront the folk with various examples, asking them whether those examples merit attributions of knowledge without belief. One of these examples (modified from the original ones proposed from the armchair) has us encountering a nervous student who memorizes the answers for a test but who, when it comes time to actually answer, gets nervous at the last minute, blanks out, and just guesses the answer to the last question on the test, which they also happen to get right. When regular old folks are asked whether this “unconfident examinee” knew the answer to this last question, 87% say yes. But if they are instead asked (in a between-subjects setup) whether the unconfident examinee believed the answer to the last question, only 37% say yes (Myers-Schulz & Schwitzgebel, 2013, p. 378).

Interestingly, the same folk dissociation between knowledge and belief ascriptions can be observed when people are exposed to scenarios of discordance between explicit and implicit attitudes, or of dissociation between rational beliefs that everyone would hold and irrational, fantastic beliefs induced in the moment by watching a horror movie. In the “prejudiced professor” case, we have a professor who reflectively holds unprejudiced attitudes and is committed to egalitarian values, but who in their everyday micro-behavior systematically treats student-athletes as if they were less capable. In the “freaked-out movie watcher” case, we have a person who just watched a horror movie in which a flood of alien larvae comes out of faucets and who, after watching the movie, freaks out when their friend opens the (real-world) faucet. In both cases, the great majority of the folk attribute knowledge (that student-athletes are as capable as other students, and that only water would come out of the faucet), but only relatively small minorities attribute belief. Other cases have been concocted (e.g., a politician who claims to hold a certain set of values but who, when it comes to acting on those values by, for instance, advocating for policies that would further them, fails to act), and these cases also generate the dissociation between knowledge and belief ascription among the folk.

Solving the Puzzle

What’s going on here? Some argue that it comes down to a difference between so-called dispositional and occurrent belief. These are terms of art in analytic philosophy, but the difference boils down to this: a belief that you hold but are not currently entertaining (though you could under the right circumstances) versus one that you are currently entertaining. The former is a dispositional belief and the latter is an occurrent belief. When you are sleeping, you dispositionally believe everything that you believe when professing wide-awake beliefs. So maybe the folk deny that, in all of the cases above, people who know that x also occurrently believe that x, while not denying that they dispositionally do so. Rose & Schaffer (2013) find support for this hypothesis.

Unfortunately for Rose & Schaffer, a subsequent series of experiments (Murray et al., 2013) shows that knowledge/belief dissociations among the folk are pervasive, applying more generally than originally thought, in ways that cannot easily be saved by the dispositional/occurrent distinction. For instance, when asked whether God knows or believes a proposition that comes closest to the “analytic” end of Quine’s continuum (e.g., 2 + 2 = 4), virtually everyone (93%) is comfortable attributing knowledge to God, but only 66% say God believes the trivial arithmetical proposition. Murray et al. also show that people are much more comfortable attributing knowledge, compared to belief, to dogs trained to answer math questions and to cash registers. Finally, Murray et al. (2013, p. 94) have the folk consider the case of a physics student who gets perfect scores on astronomy tests but who had been homeschooled by rabid Aristotelian parents who taught them that the earth stood at the center of the universe, teachings to which the student never gave up allegiance. They find that, for regular people, this homeschooled geocentrist college freshman, who also gets an A+ on their Astronomy 101 test, knows the earth revolves around the sun but doesn’t believe it.

So something else must be going on. In a more recent paper, Buckwalter et al. (2015) propose a compelling solution. Their argument is that the (folk) conception of belief is not unitary, and that the contrast with professional epistemologists arises because this latter group does hold a unitary conception of belief. More specifically, Buckwalter et al. argue that professional philosophy’s concept of belief is thin:

A thin belief is a bare cognitive pro-attitude. To have a thin belief that P, it suffices that you represent that P is true, regard it as true, or take it to be true. Put another way, thinly believing P involves representing and storing P as information. It requires nothing more. In particular, it doesn’t require you to like it that P is true, to emotionally endorse the truth of P, to explicitly avow or assent to the truth of P, or to actively promote an agenda that makes sense given P (749).

But the folk, in addition to countenancing the idea of thin belief, can also imagine the notion of thick belief (on thin and thick concepts more generally, see Abend, 2019). Thick belief contrasts with thin belief along all the dimensions mentioned. Rather than being a purely dispassionate or intellectual holding of a piece of information considered as true, a thick belief “also involves emotion and conation” (749, italics in the original). In addition to merely representing that P, thick believers in a proposition will also be motivated to want P to be true, will endorse P as true, will defend the truth of P against skeptics, will try to convince others that P is true, will explicitly avow or assent to P‘s truth, and the like. Buckwalter et al. propose that thick and thin beliefs are two separate categories in folk psychology, that thick belief is the default (folk) understanding, and that the various knowledge/belief dissociation observations can therefore be made sense of by cueing this distinction. In a series of experiments, they show that this is precisely the case. Returning to (some of) the cases discussed above, they show that belief ascriptions rise (most of the time to match knowledge ascriptions) when people are given extra information or a prompt indicating thin belief on the part of the believing agent.

Thin and Thick Belief in the Social Sciences

Interestingly, the distinction between thin and thick belief dovetails with a number of distinctions that have been made by sociologists and anthropologists interested in the link between culture and cognition. These discussions have to do with distinctions in the way people internalize culture (for more discussion on this, see here). For instance, the sociologist Ann Swidler (2001) distinguishes between two ways people internalize beliefs (knowledge-that), but uses a metaphor of “depth” rather than thickness and thinness (on the idea of cultural depth, see here). For Swidler, people can and do often internalize beliefs and understandings in the form of “faith, commitment, and ideological conviction” (Swidler, 2001, p. 7); those definitely sound like thick beliefs. However, people also internalize much culture “superficially,” as familiarity with general beliefs, norms, and cultural practices that do not elicit deeply held personal commitment (although they may elicit public acts of behavioral conformity); those definitely sound like thin beliefs. Because deeply internalizing culture is hard and superficially internalizing it is easy, the amount of culture internalized in the superficial way likely outweighs the amount internalized in the “deep” way. In this respect, “[p]eople vary in the ‘stance’ they take toward culture—how seriously versus lightly they hold it.” Some people are thick (serious) believers, but most people’s stance toward a lot of the culture they have internalized is more likely to range from ritualistic adherence (in the form of repeated expression of platitudes and clichés taken to be “common sense”) to indifference, cynicism, and even insincere affirmation (Swidler, 2001, pp. 43–44).

In cognitive anthropology (see Quinn et al., 2018a, 2018b; Strauss, 2018), an influential model of the way people internalize beliefs, due to Melford Spiro, also proposes a gradation of belief internalization that matches Buckwalter et al.’s distinction between thin and thick belief, and Swidler’s deep/superficial distinction (without necessarily using either metaphor). According to D’Andrade’s summary of Spiro’s model (1995: 228ff), people can be simply “acquainted with some part of the cultural system of representations without assenting to its descriptive or normative claims. The individual may be indifferent to, or even reject these claims.” Obviously, this (level 1) internalization does not count as belief, not even of the thin kind (Buckwalter et al., 2015). However, at internalization level 2, we get something closer. Here “cultural representations are acquired as cliches; the individual honors their descriptive or normative claims more in the breach than in the observance.” This comes closest to Buckwalter et al.’s idea of thin belief (and Swidler’s notion of “superficially internalized” culture), though some might not think this is a full-blown belief. We get there at internalization level 3. Here, “individuals hold their beliefs to be true, correct, or right…[beliefs] structure the behavioral environment of actors and guide their actions.” This seems closer to the notion of belief held by professional philosophers, and it is likely the default version of a belief on its way to thickening: not just a piece of information represented by the actor and held as true on occasion (as in level 2), but one that systematically guides action. Finally, Spiro’s level 4 is the prototypical thick belief in Buckwalter et al.’s sense. Here “cultural representations…[are] highly salient,” capable of motivating and instigating action. Level 4 beliefs are invested with emotion, which is a core marker of thick belief (Buckwalter et al., 2015, p. 750ff).

Implications

Interestingly, insofar as some influential theories of the internalization of knowledge-that in cultural anthropology and sociology make the thick/thin belief distinction, which, as the research reviewed above shows, is also respected by the folk, it may be an idiosyncrasy of the philosophical profession to hold a unitary (or non-graded) notion of belief. Both sociologists and anthropologists have endeavored to produce analytic distinctions in the way people internalize belief-like representations from the larger cultural environment that more closely match the folk’s. This would indicate that many “problems” conceiving of cases of contradictory or in-between beliefs (Gendler, 2008; Schwitzgebel, 2001) may have been as much iatrogenic as conceptual.

As also noted by Buckwalter et al., the thin/thick belief distinction might be relevant for debates raging in contemporary epistemology and psychological science over the most accurate way to conceive of people’s typical belief-formation mechanism. Is it “Cartesian” or “Spinozan”? The Cartesian picture conforms to the usual philosophical model: before believing anything, I reflectively consider it, weigh the evidence for and against, and if it meets other rational considerations (e.g., consistency with my other beliefs), then I believe it. The Spinozan belief-formation mechanism proposes an initially counterintuitive picture, in which people automatically believe every piece of information they are exposed to without reflective consideration; only un-believing something requires conscious effort and consideration.

The Descartes/Spinoza debate on belief formation dovetails with a debate in the sociology of culture over whether culture is structured or fragmented (Quinn, 2018). The short version of this debate is that sociologists like Swidler think that (most) culture is internalized in a superficial way and that it therefore operates as fragmented bits and pieces that are brought into coherence via external mechanisms (Swidler, 2001). Cognitive anthropologists, on the other hand, adduce strong evidence in favor of the idea that people internalize culture in a more structured manner. There is definitely a problem of talking past one another in this debate: it seems like Swidler is talking about beliefs proper while Quinn is talking about other forms of non-doxastic knowledge. This latter kind can no longer be considered propositional knowledge-that but comes closer to (conceptual) knowledge-what.

Regardless, it is clear that if the Spinozan story is true, then beliefs cannot be internalized as a logically coherent web and therefore cannot exert an effect on action as such. Instead, the mind (and the beliefs therein) is fragmented (Egan, 2008). DiMaggio (1997), in a classic paper in culture and cognition studies, drew that test implication from Daniel Gilbert’s research program, which shows that people seem to internalize (some) beliefs via Spinozan mechanisms. For DiMaggio, this supported the sociological version of the fragmentation-of-culture thesis, because if beliefs are internalized as fragmented, disorganized, barely considered bits of information, then whatever coherence they have must come from the outside (e.g., via institutional or other high-level structures), just as Swidler suggests (DiMaggio, 1997, p. 274).

But if Buckwalter et al.’s distinction tracks an interesting difference in kinds of belief (as suggested by Spiro’s degrees-of-internalization story), then it is likely that the fragmentation argument only applies to thin beliefs. Thick beliefs, on the other hand (the ones that people are most motivated to defend, that are imbued with emotion, that people are least likely to give up, and that are most likely to guide people’s actions), are unlikely to be internalized as incoherent information bits that people just “coldly” represent or consider.

References

Abend, G. (2019). Thick Concepts and Sociological Research. Sociological Theory, 37(3), 209–233.

Buckwalter, W., Rose, D., & Turri, J. (2015). Belief through thick and thin. Noûs, 49(4), 748–775.

D’Andrade, R. G. (1995). The Development of Cognitive Anthropology. Cambridge University Press.

DiMaggio, P. J. (1997). Culture and Cognition. Annual Review of Sociology, 23, 263–287.

Egan, A. (2008). Seeing and believing: perception, belief formation and the divided mind. Philosophical Studies, 140(1), 47–63.

Gendler, T. S. (2008). Alief and Belief. The Journal of Philosophy, 105(10), 634–663.

Murray, D., Sytsma, J., & Livengood, J. (2013). God knows (but does God believe?). Philosophical Studies, 166(1), 83–107.

Myers-Schulz, B., & Schwitzgebel, E. (2013). Knowing that P without believing that P. Noûs, 47(2), 371–384.

Quinn, N. (2018). An anthropologist’s view of American marriage: limitations of the tool kit theory of culture. In N. Quinn (Ed.), Advances in Culture Theory from Psychological Anthropology (pp. 139–184). Springer.

Quinn, N., Sirota, K. G., & Stromberg, P. G. (2018a). Conclusion: Some Advances in Culture Theory. In N. Quinn (Ed.), Advances in Culture Theory from Psychological Anthropology (pp. 285–327). Palgrave Macmillan.

Quinn, N., Sirota, K. G., & Stromberg, P. G. (2018b). Introduction: How This Volume Imagines Itself. In N. Quinn (Ed.), Advances in Culture Theory from Psychological Anthropology (pp. 1–19). Springer International Publishing.

Rose, D., & Schaffer, J. (2013). Knowledge entails dispositional belief. Philosophical Studies, 166(S1), 19–50.

Schwitzgebel, E. (2001). In-between Believing. The Philosophical Quarterly, 51(202), 76–82.

Strauss, C. (2018). The Complexity of Culture in Persons. In N. Quinn (Ed.), Advances in Culture Theory from Psychological Anthropology (pp. 109–138). Springer International Publishing.

Swidler, A. (2001). Talk of Love: How Culture Matters. University of Chicago Press.

Are Beliefs Pictures in the Head?

In a recently published piece (Strand & Lizardo, 2015), Mike and I argued that the notion of “belief,” if it is to do a more adequate job as a category of analysis in social-scientific research, can best be thought of as a species of habit. I refer the interested reader to the paper for the more detailed “exegetical” argumentation excavating the origins of this notion in American pragmatism (mostly in the work of Peirce and Dewey) and European practice theory (mostly in the work of Bourdieu). Here I would like to explore some reasons this proposal may seem so counterintuitive given our traditional conceptions of belief.

The fear, to some well-founded, is that substituting the habit notion for the usual one would cause a net loss, and thus an inability to adequately account for things that we would like to account for (e.g., patterns of action that are driven by ideas or thoughts in the head).

What is the standard notion of belief that the “habit” notion displaces (if not replaces)? The easiest way to think of it is as one in which beliefs are thought to be little “pictures” in the head that people carry around. But what are beliefs pictures of? After all, pictures (even in modern art) usually depict something, however faint. The answer is that they are supposed to be pictures of the world that somehow the person uses to get by.

Because beliefs are “pictures” (in cognitive science, the word representation is sometimes used in this context), they have the representational properties usual pictures have. For instance, they portray the world in a certain way (e.g., under a particular description). In addition, because they are pictures, beliefs have content. That is, a belief is always about something (in some philosophical circles, the word “intentionality” is usually brought up here (Searle, 1983)). In this way, some beliefs may be directed at the same state of affairs in the world but “picture it” in different ways (Hutto, 2013). Finally, and building on this last distinction, just as pictures claim to depict the world as it is (or at least bear a resemblance to it), beliefs can be true (if they portray the world as it is) or false (if the description does not match the world). This truth/falsity relation between the pictures in the head and the world turns out to be crucial for their indispensable job in “explaining” action.

For instance, if somebody opens a refrigerator, grabs a sandwich from it, and eats it, an outside observer can “explain” the pattern by ascribing a belief to the person: the person opened the fridge door because they thought (believed) there was a sandwich there. We usually complete this belief-based explanation by adding some kind of motive or desire as a jointly sufficient cause (“they believed there was a sandwich in the fridge and they were hungry”).

But suppose we were to see the same person open the fridge, look around, and then go back to the couch empty-handed. This is a different behavioral pattern from before. However, note that we can also “explain” this behavior using the same “sandwich” belief mechanism. The trick is simply to ascribe a false belief to the person: the imputed picture in the head does not match the actual state of the world. So we can now say, “Sam opened the fridge because they believed there was a sandwich in there and they were hungry.” We attach one more disclaimer: “But Sam was wrong; there was no sandwich.”

This flexibility makes belief-based explanations fairly powerful (they can account for a wide range of behavioral patterns). However, flexibility is also a double-edged sword: Become too flexible and you risk vacuity, explaining everything and thus nothing (see Strand & Lizardo, 2015, pp. 47–48).
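The flexibility worry can be made concrete with a toy sketch. This is my illustration, not anything from Strand & Lizardo (2015): a trivial function that "explains" either fridge behavior by pairing the same imputed desire with a true or false version of the same imputed belief.

```python
def explain(behavior: str) -> str:
    """Impute a belief-desire pair that rationalizes the observed behavior."""
    desire = "they were hungry"  # imputed motive
    belief = "they believed there was a sandwich in the fridge"  # imputed picture

    if behavior == "grabs sandwich and eats it":
        # The imputed belief is treated as true.
        return f"Sam opened the fridge because {belief} and {desire}."
    if behavior == "returns to the couch empty-handed":
        # The very same belief-desire pair, rescued by marking the belief false.
        return (f"Sam opened the fridge because {belief} and {desire} "
                "(but Sam was wrong; there was no sandwich).")
    raise ValueError("behavior not modeled in this toy example")

print(explain("grabs sandwich and eats it"))
print(explain("returns to the couch empty-handed"))
```

Nothing in the observed behavior constrains which belief gets imputed, and the false-belief clause can rescue the schema from any disconfirming pattern; that is the double-edged sword.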

Because the belief-desire combo is so flexible (and so pervasive even in our “folk” accounting of each other’s action) some people have argued that it is inevitable. So inevitable it may be the only game in town for explaining action. This would make the “pictures in the head” version of the notion of belief essentially a non-negotiable part of our explanatory vocabulary. One of the main goals of our paper was to argue that there are other options even if they seem weird at first sight.

The alternative we championed was to think of belief as a species of habit. This requires both a revision of our implicit classification of mental concepts and a revision of what we mean by “belief.” In terms of the first aspect, the usual way to think of belief and habit is to see them as distinct categories in our mental vocabulary. A habit is a “thoughtless” activity, while an action driven by belief requires “thought” to be involved. So they are two distinct mental categories, as distinct as a frog is from a zebra (even if both are species of animal). In our proposal, however, the overarching category in mental life (for both human and nonhuman animals) is habit, and belief is a subcategory of habit. This does violence to the standard classification, so it may take time to get used to.

In this respect, note that the “picture” theory of belief seems to be important in how people differentiate belief from habit. Both can be involved in action, but when action is driven by belief, the picture inside the head is in the driver’s seat and is thus an important (if always presumed) component of the action. In fact, the picture is such an important component that we attribute causal force to it. Sam got up from the couch and walked to the fridge because they thought there was a sandwich in there (and they were hungry).

One last observation about the pictures-in-the-head account of action. When the observer imputes the belief “sandwich in the fridge” to Sam and selects this belief as the “cause” of the action, by what criteria is this selection made? I bring this up only to note that there are actually a bunch of other “beliefs” that the observer could have imputed to Sam, which could equally be argued to be implicated in the action, but which somehow were not selected. For instance, the observer could have said that one of the beliefs accounting for Sam’s action was that “there was a fridge in the room.” Or that “the fridge was plugged in,” or that “the floor could sustain their weight,” and so on.

This is not just a trivial “philosophical” issue. We could impute an infinity of little world-pictures in the head to Sam—in fact, as many as there are “states of affairs” about the world that make it possible for Sam to get up and check the fridge, inclusive of purely hypothetical or even “negative” pictures (e.g., the belief that “there’s not a bomb in the fridge that will detonate when the door is opened”). Yet we do not (sometimes this is referred to as the “frame problem” (Dennett, 2006) in artificial intelligence circles). This means that belief-imputation practices following the picture version are necessarily selective, but the criteria for selection remain obscure. This kind of obscurity should make such explanations suspect for those who want to recruit them as scientific accounts of action.

But we are getting ahead of ourselves. The main point of this post is simply to warm you up to the intuition that maybe the pictures-in-the-head version of belief is not as intuitive as you may have thought, nor as unproblematic or non-negotiable as it is sometimes depicted. In a future post I will introduce the alternative conception of belief as habit and ask whether it avoids these issues.

References

Dennett, D. C. (2006). Cognitive Wheels: The frame problem of AI. In J. L. Bermudez (Ed.), Philosophy of Psychology: Contemporary Readings (Vol. 433, pp. 433–454). New York: Routledge.

Hutto, D. D. (2013). Why Believe in Contentless Beliefs? In N. Nottelmann (Ed.), New Essays on Belief: Constitution, Content and Structure (pp. 55–74). London: Palgrave Macmillan UK.

Searle, J. R. (1983). Intentionality: An essay in the philosophy of mind. New York: Cambridge University Press.

Strand, M., & Lizardo, O. (2015). Beyond World Images: Belief as Embodied Action in the World. Sociological Theory, 33(1), 44–70.