The Ascription of Dispositions

“It’s what you do” is the title of a wildly successful advertising campaign by the American insurance company GEICO. In each spot, we see either a “type” (people in a horror movie, a camel, a fisherman, a cat, a mom, a golf commentator) or people familiar enough to the intended middle-aged audience of insurance buyers to be considered types (mainly 80s and 90s musical acts like Europe, Boyz II Men, or Salt-N-Pepa) doing things they “typically” do. These things are either out of place, annoying, rude, or irrational and thus funny within the context of the “frame” (an office, a restaurant, etc.) in which they are presented.

For instance, in a viral spot, Peter Pan shows up at the 50th-anniversary reunion to remind everybody else of how young he is (and how old they are). The voiceover reads: “If you’re Peter Pan, you stay young forever. It’s what you do.” In another one, a poor guy slowly sinks to his death in quicksand while imploring a nearby cat to get help. The cat of course just licks her paws without looking at him: “If you’re a cat, you ignore people. It’s what you do.”

The commercials are of course funny due to the specificity of each setup. I want to suggest, however, that they may carry a more general lesson. Perhaps they strike us as noticeable (and thus humorous) because they use an action accounting system that is inveterately familiar but that we usually keep in abeyance. In fact, it is so familiar that it requires the odd situations in the GEICO commercials to make it stand out. This action accounting system, rather than relying on “belief-desire” ascriptions, points to typicalities in behavior patterns as their own justification. Thus the template “If you are X, you do Y, it’s what you do” may hold the key for prying ourselves loose of belief-desire talk.

In a previous post, I argued that the belief-desire accounting system commits us to a model in which action is driven by “little pictures in the head.” An entire tradition of explaining action by making recourse to the “ideas” that “drive” it is based on such a strategy (Parsons, 1938). This is not as innocent a move as it may seem. Pictures in the head are entities assumed to have specific properties (e.g. representational, contentful, and causally powerful) that ultimately need to be cashed out in any scientific account of action. This may not be possible (Hutto & Myin, 2013).

In a follow-up post, I noted that, even when we take an ontology-neutral stance (Dennett, 1989), the ascription of belief from a third-person perspective is not an unproblematic practice either. Sometimes, different pieces of evidence (e.g. what people claim to believe) clash with other pieces of evidence (what people do), making belief ascription a problematic affair. The point there was that sometimes, even in our routine ascription behavior, we don’t treat beliefs as purely pictures. Actions matter too, and sometimes we may conclude that what people really believe has nothing to do with the pictures (e.g. propositions) that they claim to have in their head.

So maybe our ascription practices and our action accounting systems can go beyond the usual belief-desire combo of folk psychology. This is important because one of the reasons why the claim that belief is a kind of habit might be problematic to some is that it doesn’t seem to fit any intuitive picture of the way we keep track of and explain other people’s actions (or our own). Here I will build some intuition for the claim that there are other ways of “explaining” action that don’t require the ascription of picture-like constructs that drive action. These are also compatible with the idea that beliefs are a kind of habit. Moreover, these are already ascription practices that we follow in our everyday accountings; it’s just that they are too boring to be noticeable.

The most obvious way in which we sometimes explain action without using the language of belief is to talk about somebody’s tendencies, propensities, inclinations, etc. Just like in the GEICO commercials, instead of ascribing beliefs and desires we simply point to the action as being “typical” of that doer. In the philosophy of action, at least since Ryle (2002), this is usually referred to as using a “dispositional” language. Just like ideas, dispositions are sufficient “causes” of the action they help account for. So going back to the example of Sam the fridge opener: Instead of saying that Sam opened the fridge because they believed there was a sandwich inside, we can say: “Sam tends to open the fridge when they are hungry. It’s what they do.” This is a way of accounting for the action that does not resort to the ascription of world pictures. Instead, it points to a regularity or a tendency in Sam’s action that is noted to occur under certain (usually typical) conditions.

These kinds of dispositional ascriptions are fairly common. In fact, they are so common they are kind of boring. Maybe they stand out less than the usual belief-desire combo of folk psychology because they are seldom used for action justification, rationality ascription, or storytelling. A serial killer who attempted to mount a defense based on the claim that “killing is just what I do” would be the subject of a short trial. In this sense, dispositional ascriptions are gray and drab (in spite of their strict accuracy), while the trafficking in (and sometimes the clash between) beliefs and desires just tells a more interesting story (in spite of its inherently speculative nature). But the pragmatics of belief-desire language use or its mnemonic advantage should not dictate its use in social-scientific explanatory projects. Dispositions have an advantage here because they commit us to a less inflationary ontology compatible with the naturalistic commitments of cognitive neuroscience.

As Schwitzgebel (2010) has argued, the dispositional approach can be extended to account for our ascription of the usual “attitudes,” whether propositional (like beliefs and desires) or not. This also points to a solution to the ascription problems that arise when sayings (or phenomenological experience) do not match up with action. In contrast to pro-judgment views (which favor subjective certainties and verbal reports) or anti-judgment views (which favor action), the idea is to think of the global entity (e.g. the “belief” or the “desire”) as a cluster of dispositions. So rather than any one member (the saying or the doing) being decisive in our ascription, they all count (although we may weigh some more than others). This means that sometimes, the matter of whether somebody “believes” P will be undecidable (the cases of implicit/explicit dissociation) because different dispositions point in different directions.

The bigger point, however, is that all dispositional ascriptions have the structure of “habituals” (Fara, 2005). So when we say Sam “believes” P, what we are really saying is that Sam is predisposed to agree that P under a certain broad range of circumstances. But we also say that Sam is likely to act as if P is true, to have certain subjective experiences consistent with the truth of P, and so on. In this respect, the “belief” that P is just a cluster of cognitive, phenomenological, verbal, and behavioral dispositions. This cashes in on the insight that “habit” (or disposition) is the superordinate category in mental life and that the other terms of the mental vocabulary fall out as special cases. This also reinforces the point which Mike and I made in the original paper (see in particular pp. 56–57), that the issue is not the elimination of the language of belief and desire (or the other folk mental concepts), but their proper re-specification within a habit-theoretic framework.

Another nice feature of the dispositional ascription approach is that when we ascribe a belief, we no longer have to commit ourselves to the existence or causal efficacy of problematic entities (e.g. world pictures) but can point to the usual set of things clear in experience: actions, linguistic declarations, comportments, moods, etc. Usually, these hang together and point in the same direction; sometimes they do not. However, when they do not, the disagreement no longer has to be a contest between heterogeneous entities (e.g. sayings versus doings) but between different species of the same dispositional genus.

Note, however, that picking one disposition in the cluster as the decisive element in an act of ascription is a conclusion that cannot be reached by virtue of a priori methodological policy (such as those privileging doings over sayings or vice versa). Instead, we need to commit ourselves to an ascription standard combining inference to the best explanation with a coherentist approach: Attitude ascriptions should maximize harmony across the entire dispositional profile. So it would be a mistake, for instance, to select a single disposition (or phenomenal experience, or verbal report) as the criterion for attitude ascription when there’s an entire panoply of other dispositions pointing in a different direction.

So the issue is not whether there’s a contest between “sayings” and “doings” (Jerolmack & Khan, 2014). Rather, the best tack is taking a tally of the entire dispositional panoply, taking into account what may be lots of tendencies to say, do, and experience. Here some sayings might clash against other sayings and some doings against other doings. Whether people strive for consistency across their dispositional profile may be as much a sociocultural matter (as argued by Max Weber) as an a priori analytic issue. In all, however, what we are confronting are dispositions clashing (or harmonizing) with other dispositions, so in this sense, the analytical task becomes tractable from within a single action vocabulary.

References

Dennett, D. C. (1989). The Intentional Stance. MIT Press.

Fara, M. (2005). Dispositions and Habituals. Noûs, 39(1), 43–82.

Hutto, D. D., & Myin, E. (2013). Radicalizing Enactivism: Basic Minds Without Content. MIT Press.

Jerolmack, C., & Khan, S. (2014). Talk Is Cheap: Ethnography and the Attitudinal Fallacy. Sociological Methods & Research. https://doi.org/10.1177/0049124114523396

Parsons, T. (1938). The Role of Ideas in Social Action. American Sociological Review, 3(5), 652–664.

Ryle, G. (2002 [1949]). The Concept of Mind. Chicago: The University of Chicago Press. With an Introduction by Daniel C. Dennett.

Schwitzgebel, E. (2010). Acting contrary to our professed beliefs or the gulf between occurrent judgment and dispositional belief. Pacific Philosophical Quarterly, 91(4), 531–553.

Ascription Practices: The Very Idea

How do we know what others believe? The answer to this question may seem clear, but as we will see, it has some interesting hidden complexities. Some of these bear directly on some established policies in social-scientific method.

One obvious answer is that if we want to know what others believe, and these others are language-using creatures like ourselves, we ask them: “Do you believe P?” If the person verbally reports believing P, then we are on safe ground in ascribing belief P.

So far so good. But let us say a person assents to P but acts in other ways that seem counter to the content of that belief. What to do then? Cases of this sort have become popular grist for reflection in recent work in the philosophy of belief. Spurred by a series of papers by Tamar Gendler (Gendler, 2008a, 2008b), a lively literature has developed on what to ascribe when belief “sayings” (usually referred to as “judgments”) come apart from “doings” (for a sampling see Albahari, 2014; Kriegel, 2012; Mandelbaum, 2013; Schwitzgebel, 2010; Zimmerman, 2007). Some cases are stock in trade and involve people who verbally commit to a belief or attitude but act in contrary ways.

Of most interest for social and behavioral scientists are cases of what are called dissociations between “explicit,” or more accurately, direct, measures of a construct (such as a belief or an attitude), which usually rely on self-report, and so-called “implicit,” or more accurately, indirect, measures of the same construct (Gawronski, Peters, & LeBel, 2008). While the so-called “Implicit Association Test” is the most familiar indirect measure, there is an entire family composed of dozens of distinct indirect measurement strategies (Nosek, Hawkins, & Frazier, 2011). The key point is, however, that indirect measures usually rely not on verbal reports but on observations of rapid-fire action or behavioral responses that are assumed not to be under voluntary control.

Dissociations between direct and indirect measures are usually good examples of “sayings” and “doings” coming apart. If these are as common as the literature suggests, then belief (or attitude) ascription problems are also more common than we realize.

So let’s say we observe someone who fits the profile of “Chris the implicit racist.” Chris is a white high school teacher who professes the belief(s) that black people in the United States are no more or less intelligent, violent, or hardworking than white people. Yet systematically in their unguarded behavior (observed via ethnographic classroom observation or via indirect measures of implicit bias collected in the lab) Chris shows a preference for white people (e.g. they discipline black students more harshly for similar offenses; they are more likely to call on white students, etc.) and implicitly associates people with dark skin with a host of negative concepts, including lack of intelligence, proneness to violence, and a weaker work ethic.

What does Chris believe? Consideration of dissociative cases like this leads in two interesting directions. The first is that actions, practices, and behaviors have a non-negligible weight in our belief ascription practices. So judgments aren’t everything. This is important because, in polite company, our everyday practices of belief ascription follow folk cartesianism: That is, belief ascription is fixed by explicit reports of what people say they believe, and people have incorrigible knowledge about those personal judgments. We cannot ascribe to a person a belief they profess not to have (alternatively, people have veto power over second-person ascription practices).

In philosophy, this is called the “pro-judgment” view on ascription. This stance holds that we can only ascribe the belief that P if a person reports they believe P. The actions inconsistent with P (e.g. for Chris, the automatic association of Black people with violence, or their penchant for sending black students to detention for minor offenses) are acknowledged to exist, but they are just not a kind of belief. They are something else. Gendler (2008a) has proposed that given the recalcitrant existence of these types of counter-doxastic behaviors, we should add a new mental category to our lexicon: “aliefs.” So we can say Chris “believes” blacks are no more violent than whites (as given by their self-report), but “alieves” they are more violent (as given by their performance on indirect attitude measures and their avoiding walking in certain predominantly black neighborhoods).

An alternative view, the “anti-judgment” view, says actions speak louder than words. Or, as indicated by the title of a recent entry in a book review symposium dedicated to Arlie Hochschild’s Strangers in Their Own Land, anti-judgment people say “who cares what they think?” (Shapira, 2017). If Chris walks like P, and quacks like P, then Chris believes P.

I should note that this belief ascription strategy is not that bizarre and that it exists as a “second option” in our commonsense arsenal, even if our first option is folk cartesianism. This should alert you that ascription practices may be as much a matter of socio-cultural tradition and regulation as they are a matter of the usual canons of rationality.

This matters if these same ascription practices are followed mindlessly in social science research. This would be a case of folk practices dictating what should be a matter of social scientific consideration. In this sense, social scientists should care very much whether they are pro-judgment folk cartesians, anti-judgment, or something else, as this will bear directly on their conclusions. But we are getting ahead of ourselves. The point is that “trumping” a person’s cartesian incorrigibility by pointing to their inconsistent actions is an available move (both in social science and everyday life), but it is also one that should not be undertaken lightly.

Note that the pro-judgment and anti-judgment views are not the only available ones. To fill out the space of options: We may ascribe both P and ~P beliefs (the contradictory belief view). Or we may ascribe a mixture of the inconsistent beliefs (the “in-between” belief view). Or we may say Chris believes neither P nor ~P, but that they vacillate inconsistently between the two depending on circumstances (the shifting view) (Albahari, 2014).

And this is the second thing that inconsistency between sayings and doings highlights. Rather than focusing on goings-on trapped within each person’s cartesian theater, we can now see that beliefs are very much a matter of both actions and practices, both in terms of the believer and the ascriber. In this respect, two considerations come to the fore.

First, there are the “belief proclamation” practices we are used to (e.g. people verbally saying they believe thus and so), but also the myriad behaviors and actions that other people monitor and that they use to ascribe beliefs to others. It is these belief ascription practices I wanted to highlight in this post. As noted, they are both a matter of everyday interpersonal interaction and, for my purposes, a standard, but seldom commented upon, aspect of every social scientific practice. After all, social scientists (especially those who do qualitative work) constantly ask people what they believe about a host of things (e.g. Edin & Kefalas, 2011; Young, 2006), and are thus confronted with self-reports of people claiming to believe things. These same social scientists may also sometimes have the opportunity to observe these people in ecologically natural settings, which allows them to compare self-reports to doings (Jerolmack & Khan, 2014).

In this post, I will not try to adjudicate or argue for what the best ascription strategy is. I will leave that for a future post. Here I note two things. First, it is clear that certain ascription practices have elective affinities with certain conceptions of what beliefs are. For instance, “pro-judgment” views have an affinity with the “pictures in the head” conception of belief. As such, pro-judgment views assume all of the things that such a conception assumes, such as representationalism, the relevance of the truth/falsity criterion, and so on. “Anti-judgment” views, focusing on action, may be said to be more consonant with some forms of practice theory, as can some other views (e.g. “in-between” or “contradictory” views).

Second, belief ascription practices have the familiar duality of being both possible “topic” and “resource” for sociological analysis that fascinated early ethnomethodology. We can be “neutral” about their import for sociological research and study belief ascription practices as a topic. We may ask questions such as under what circumstances people default to folk cartesianism, when they prefer anti-judgment views, when they go “in between” or “contradictory,” and so on.

Alternatively we may examine the issue by considering the role of belief ascription practices as a resource for sociological explanation. Are pro-judgment views always effective? Should we go anti-judgment and ignore what people say in favor of their behavior? These are some issues I hope to tackle in future posts.

References

Albahari, M. (2014). Alief or belief? A contextual approach to belief ascription. Philosophical Studies, 167(3), 701–720.

Edin, K., & Kefalas, M. (2011). Promises I Can Keep: Why Poor Women Put Motherhood before Marriage. University of California Press.

Gawronski, B., Peters, K. R., & LeBel, E. P. (2008). What Makes Mental Associations Personal or Extra-Personal? Conceptual Issues in the Methodological Debate about Implicit Attitude Measures. Social and Personality Psychology Compass, 2(2), 1002–1023.

Gendler, T. S. (2008a). Alief and Belief. The Journal of Philosophy, 105(10), 634–663.

Gendler, T. S. (2008b). Alief in Action (and Reaction). Mind & Language, 23(5), 552–585.

Jerolmack, C., & Khan, S. (2014). Talk Is Cheap: Ethnography and the Attitudinal Fallacy. Sociological Methods & Research. https://doi.org/10.1177/0049124114523396

Kriegel, U. (2012). Moral Motivation, Moral Phenomenology, And The Alief/Belief Distinction. Australasian Journal of Philosophy, 90(3), 469–486.

Mandelbaum, E. (2013). Against alief. Philosophical Studies, 165(1), 197–211.

Nosek, B. A., Hawkins, C. B., & Frazier, R. S. (2011). Implicit social cognition: from measures to mechanisms. Trends in Cognitive Sciences, 15(4), 152–159.

Schwitzgebel, E. (2010). Acting contrary to our professed beliefs or the gulf between occurrent judgment and dispositional belief. Pacific Philosophical Quarterly, 91(4), 531–553.

Shapira, H. (2017). Who Cares What They Think? Going About the Right the Wrong Way. Contemporary Sociology, 46(5), 512–517.

Young, A. A. (2006). The Minds of Marginalized Black Men: Making Sense of Mobility, Opportunity, and Future Life Chances. Princeton University Press.

Zimmerman, A. (2007). The Nature of Belief. Journal of Consciousness Studies, 14(11), 61–82.

Are Beliefs Pictures in the Head?

In a recently published piece (Strand & Lizardo, 2015), Mike and I argued that the notion of “belief,” if it is to do a more adequate job as a category of analysis in social-scientific research, can best be thought of as a species of habit. I refer the interested reader to the paper for the more detailed “exegetical” argumentation excavating the origins of this notion in American pragmatism (mostly in the work of Peirce and Dewey) and European practice theory (mostly in the work of Bourdieu). Here I would like to explore some reasons why this proposal may seem so counterintuitive given our traditional conceptions of belief.

The fear, to some well-founded, is that substituting the usual notion for the habit notion would cause a net loss, and thus an inability to adequately account for things that we would like to account for (e.g. patterns of action that are driven by ideas or thoughts in the head).

What is the standard notion of belief that the “habit” notion displaces (if not replaces)? The easiest way to think of it is as one in which beliefs are thought to be little “pictures” in the head that people carry around. But what are beliefs pictures of? After all, pictures (even in modern art) usually depict something, however faint. The answer is that they are supposed to be pictures of the world that somehow the person uses to get by.

Because beliefs are “pictures” (in cognitive science sometimes the word representation is used in this context), they have the representational properties usual pictures have. For instance, they portray the world in a certain way (e.g. under a particular description). In addition, because they are pictures, beliefs have content. That is, a belief is always about something (in some philosophical circles, the word “intentionality” is usually brought up here (Searle, 1983)). In this way, some beliefs may be directed at the same state of affairs in the world, but “picture it” in different ways (Hutto, 2013). Finally, and building on this last distinction, just like pictures claim to depict the world as it is (or at least have a resemblance to it), beliefs can be true (if they portray the world as it is) or they can be false (if the description does not match the world). This truth/falsity relation between the pictures in the head and the world turns out to be crucial for their indispensable job in “explaining” action.

For instance, if somebody opens a refrigerator, grabs a sandwich from it, and eats it, an outside observer can “explain” the pattern by ascribing a belief to the person. So the person opened the fridge door because they thought (believed) there was a sandwich there. We usually complete this belief-based explanation by adding some kind of motive or desire as a jointly sufficient cause (“they believed there was a sandwich in the fridge and they were hungry”).

But suppose we were to see the same person open the fridge, look around, and then go back to the couch empty-handed. This is a different behavioral pattern from the one before. However, note we can also “explain” this behavior using the same “sandwich” belief mechanism as before. The trick is simply to ascribe a false belief to the person: The imputed picture in the head does not match the actual state of the world. So we can now say, “Sam opened the fridge because they believed there was a sandwich in there and they were hungry.” We attach one more disclaimer: “But Sam was wrong, there was no sandwich.”

This flexibility makes belief-based explanations fairly powerful (they can account for a wide range of behavioral patterns). However, flexibility is also a double-edged sword: Become too flexible and you risk vacuity, explaining everything and thus nothing (see Strand & Lizardo, 2015, pp. 47–48).

Because the belief-desire combo is so flexible (and so pervasive even in our “folk” accounting of each other’s action) some people have argued that it is inevitable. So inevitable it may be the only game in town for explaining action. This would make the “pictures in the head” version of the notion of belief essentially a non-negotiable part of our explanatory vocabulary. One of the main goals of our paper was to argue that there are other options even if they seem weird at first sight.

The alternative we championed was to think of belief as a species of habit. This requires both a revision of our implicit classification of mental concepts and a revision of what we mean by “belief.” In terms of the first aspect, the usual way to think of belief and habit is to see them as distinct categories in our mental vocabulary. A habit is a “thoughtless” activity, while an action driven by belief requires “thought” to be involved. So they are two mental categories as distinct as a frog is from a zebra (even if both are species of animal). In our proposal, however, the overarching category in mental life (for both human and nonhuman animals) is habit, and belief is a subcategory of habit. This does violence to the standard classification, so it may take time to get used to.

In this respect, note that the “picture” theory of belief seems to be important in how people differentiate belief from habit. Both can be involved in action, but when action is driven by belief, the picture inside the head is in the driver’s seat and is thus an important (but always presumed) component of the action. In fact, the picture is such an important component that we attribute causal force to it. Sam got up from the couch and walked to the fridge because they thought there was a sandwich in there (and they were hungry).

One last observation about the pictures in the head account of action. When the observer imputes the belief “sandwich in the fridge” to Sam and selects this belief as the “cause” of the action, by what criteria is this selection made? I bring this up only to note that there are actually a bunch of other “beliefs” that the observer could have imputed to Sam, and which could be argued to be implicated in the action, but somehow didn’t. For instance, the observer could have said one of the beliefs accounting for Sam’s action is that “there was a fridge in the room.” Or that “the fridge was plugged in” or that “the floor could sustain their weight,” and so on.

This is not just a trivial “philosophical” issue. We could impute an infinity of little world pictures in the head to Sam. In fact, as many as there are “states of affairs” about the world that make it possible for Sam to get up and check the fridge, inclusive of purely hypothetical or even “negative” pictures (e.g. the belief that “there’s not a bomb in the fridge which will be triggered to detonate when the door is opened”). Yet, we do not (sometimes this is referred to as the “frame problem” (Dennett, 2006) in artificial intelligence circles). This means that belief imputation practices following the picture version are necessarily selective, but the criteria for selection remain obscure. This kind of obscurity should arouse suspicion in those who want to recruit these types of explanations as scientific accounts of action.

But we are getting ahead of ourselves. The main point of this post is simply to warm you up to the intuition that maybe the pictures in the head version of belief is not as intuitive as you may have thought, nor as unproblematic or non-negotiable as it is sometimes depicted. In a future post I will introduce the alternative conception of belief as habit and see whether it avoids these issues.

References

Dennett, D. C. (2006). Cognitive Wheels: The frame problem of AI. In J. L. Bermudez (Ed.), Philosophy of Psychology: Contemporary Readings (pp. 433–454). New York: Routledge.

Hutto, D. D. (2013). Why Believe in Contentless Beliefs? In N. Nottelmann (Ed.), New Essays on Belief: Constitution, Content and Structure (pp. 55–74). London: Palgrave Macmillan UK.

Searle, J. R. (1983). Intentionality: An essay in the philosophy of mind. New York: Cambridge University Press.

Strand, M., & Lizardo, O. (2015). Beyond World Images: Belief as Embodied Action in the World. Sociological Theory, 33(1), 44–70.