Beyond the Framework Model

Most work in cultural analysis in sociology is committed to a “framework” model of culture and language. According to the framework model, persons need culture because without it (which usually takes the form of global templates that the person is not aware of possessing) they would not be able to “make sense” of their “raw” perceptual experience. Under this model, culture serves to “organize” the world into predictable categories. Cognition thus reduces to the “typing” of concrete particulars (experientially available via perception) into culturally constituted generalities.

The basic model of cognition here is thus sequential: first the world is made available in raw (particular) form, then it is “filtered” through the (culturally acquired) lenses, and only then does it emerge as a “sensible,” categorically ordered world. This model accounts for the historical and spatial diversity of culture while acknowledging that at the level of “raw” experience we all inhabit the same world. The only problem, as Kant understood and as post-Kantians always despaired, is that this “raw” universal world does not make sense to anybody! The only world that makes sense is the culturally constituted world. In this sense, the price we pay for a world that “makes sense” is the donning of conceptual glasses through which we must filter the world; the cost of making sense of the world is not being aware of the cultural means through which that sense is made.

The framework model is pervasive in cultural analysis. However, a consideration of work in the modern cognitive science of perception leads us to question its core tenets.

One major weakness is that the framework model has to rely on a theoretical construct with a shaky scientific status: the counterfactual existence of “raw” (pre-cultural, pre-cognitive) experience. It is hard to find any conceivable time-scale at which we could say that “raw” experience exists for anybody.

In contrast to the “sequential” model, an alternative is to think of experience qua experience as inherently specified and thus meaningful. That is, when persons experience the world, that world is always already a world for them, and therefore directly meaningful. It is true that at slower time scales, after a person experiences a world that is for them, they may also activate conventional representations in which other “meanings” (namely, semantic information on objects, events, settings, and persons activated from so-called “long-term” memory) may come online in time to modify their initial meaningful uptake of the world. But none of these meanings are necessary to “constitute” the world of objects, persons, and events as meaningful, if by meaningful we (minimally) mean capable of being understood and integrated into our everyday practical projects (Gallese & Metzinger, 2003).

The framework model erred because it took a high-level cognitive task (namely classification, or in Berger and Luckmann’s mid-twentieth-century phenomenological language, “typification”) as its model for how the world of perception becomes meaningful to us. Classification is just too slow a task; perception happens much faster than that (Noë, 2004). Because of this, classification is far too flimsy a foundation on which to build the required model of how persons make a meaningful world. In this respect, cultural analysis in sociology has been hampered by a conceptual metaphor working behind the back of the theorist: the (unconscious) inference that comes from mapping the experiential affordances of the usual things that serve as frameworks or lenses (which include durability and solidity) onto the abstract target domain of perception and experience.

Work in the psychology of classification shows that as hard as we may try to search for them, the “hard” lenses and classificatory “structures” dreamed up by contemporary cultural analysis do not exist (Barsalou, 1987). Instead, most classification is shown to be (mystifyingly from the perspective of framework models) fluid and context-sensitive, with the classification shifting even if we change the most minute and seemingly irrelevant thing about the classificatory context (Barsalou, 2005). Thus, at the level of experience, culture surely cannot take the form of (conscious or unconscious) “frameworks” because these frameworks are just nowhere to be found (Turner, 1994).

How can we think of perception if we are not to use the framework model? Here is one alternative. Perception, at its most basic level, is simply identification, and identification is specification. And specification is the production of a relation. That is, a world opens up for an organism when the organism is able to specify, and thus make “contact” with, that world in relation to itself. This kind of specification is an inherently organism-centric activity. A world is always a world for somebody. In this respect, this analysis is less “generic” than traditional cultural analysis, which tends to speak of meaningful worlds in relation to abstract representative (shall we say “collective”?) agents. But meaning is always personal and organism-centered.

This insight implies not the impossibility of impersonal or even collective meaning, but its complexity and difficulty. Modern cultural analysis, by essentially taking the products of collective meaning-making as its starting point (and taking the mechanisms that produce their status as shared for granted), actually sidesteps some of the hardest questions in favor of relatively easy ones (the interpretation of collective symbols for generic subjects). But most symbols are symbols for concrete, embodied subjects who have nothing generic about them. Surprisingly enough, the first lesson that the emerging sciences of meaning construction have for contemporary cultural analysis is that the basic way in which cultural analysts go about “analyzing” meaning is actually too abstract and not quite as concrete (or “personal”) as one would wish.

References

Barsalou, L. W. (1987). The instability of graded structure: Implications for the nature of concepts. In U. Neisser (Ed.), Concepts and Conceptual Development: Ecological and Intellectual Factors in Categorization (pp. 101–139). Retrieved from https://pdfs.semanticscholar.org/b14d/961c846075ca67ec11cf60ea7b0bc6ea17cd.pdf

Barsalou, L. W. (2005). Situated conceptualization. Handbook of Categorization in Cognitive Science (pp. 619–650).

Gallese, V., & Metzinger, T. (2003). Motor ontology: the representational reality of goals, actions and selves. Philosophical Psychology, 16(3), 365–388.

Noë, A. (2004). Action in Perception. Cambridge, MA: MIT Press.

Turner, S. P. (1994). The Social Theory of Practices: Tradition, Tacit Knowledge, and Presuppositions. University of Chicago Press.

Ascription Practices: The Very Idea

How do we know what others believe? The answer to this question may seem clear, but as we will see, it has some interesting hidden complexities. Some of these bear directly on established policies in social-scientific method.

One obvious answer is that if we want to know what others believe, and these others are language-using creatures like ourselves, we simply ask them: “Do you believe P?” If the person verbally reports believing P, then we are on safe ground in ascribing the belief P.

So far so good. But let us say a person assents to P but acts in other ways that seem counter to the content of that belief. What to do then? Cases of this sort have become popular grist for reflection in recent work in the philosophy of belief. Spurred by a series of papers by Tamar Gendler (Gendler, 2008a, 2008b), a lively literature has developed on what to ascribe when belief “sayings” (usually referred to as “judgments”) come apart from “doings” (for a sampling see Albahari, 2014; Kriegel, 2012; Mandelbaum, 2013; Schwitzgebel, 2010; Zimmerman, 2007). Some cases are stock in trade and involve people who verbally commit to a belief or attitude but act in contrary ways.

Of most interest for social and behavioral scientists are cases of what are called dissociations between “explicit,” or more accurately direct, measures of a construct (such as a belief or an attitude), which usually rely on self-report, and so-called “implicit,” or more accurately indirect, measures of the same construct (Gawronski, Peters, & LeBel, 2008). While the “implicit association test” is the most familiar indirect measure, there is an entire family composed of dozens of distinct indirect measurement strategies (Nosek, Hawkins, & Frazier, 2011). The key point, however, is that indirect measures usually rely not on verbal reports but on observations of rapid-fire action or behavioral responses that are assumed not to be under voluntary control.

Dissociations between direct and indirect measures are usually good examples of “sayings” and “doings” coming apart. If these are as common as the literature suggests, then belief (or attitude) ascription problems are also more common than we realize.

So let’s say we observe someone who fits the profile of “Chris the implicit racist.” Chris is a white high school teacher who professes the belief that black people in the United States are no more or less intelligent, violent, or hardworking than white people. Yet, systematically, in their unguarded behavior (observed via ethnographic classroom observation or via indirect measures of implicit bias collected in the lab), Chris shows a preference for white people (e.g. they discipline black students more harshly for similar offenses; they are more likely to call on white students) and implicitly associates people with dark skin with a host of negative concepts, including lack of intelligence, proneness to violence, and a weaker work ethic.

What does Chris believe? Consideration of dissociative cases like this leads in two interesting directions. The first is that actions, practices, and behaviors have a non-negligible weight in our belief ascription practices. So judgments aren’t everything. This is important because, in polite company, our everyday practices of belief ascription follow folk cartesianism: belief ascription is fixed by explicit reports of what people say they believe, and people have incorrigible knowledge of those personal judgments. We cannot ascribe to a person a belief they profess not to have (alternatively, people have veto power over second-person ascription practices).

In philosophy, this is called the “pro-judgment” view on ascription. This stance holds that we can ascribe the belief that P only if a person reports that they believe P. The actions inconsistent with P (e.g. for Chris, the automatic association of black people with violence, or their penchant for sending black students to detention for minor offenses) are acknowledged to exist, but they are just not a kind of belief. They are something else. Gendler (2008a) has proposed that, given the recalcitrant existence of these types of counter-doxastic behaviors, we should add a new mental category to our lexicon: “aliefs.” So we can say Chris “believes” blacks are no more violent than whites (as given by their self-report), but “alieves” that they are more violent (as given by their performance on indirect attitude measures and their avoidance of walking in certain predominantly black neighborhoods).

An alternative view, called the “anti-judgment” view, says that actions speak louder than words. Or, as indicated by the title of a recent entry in a book review symposium dedicated to Arlie Hochschild’s Strangers in Their Own Land, anti-judgment people say “who cares what they think?” (Shapira, 2017). If Chris walks like P and quacks like P, then Chris believes P.

I should note that this belief ascription strategy is not that bizarre, and that it exists as a “second option” in our commonsense arsenal, even if our first option is folk cartesianism. This should alert you that ascription practices may be as much a matter of socio-cultural tradition and regulation as they are a matter of the usual canons of rationality.

This matters if these same ascription practices are followed mindlessly in social science research. That would be a case of folk practices dictating what should be a matter of social-scientific consideration. In this sense, social scientists should care very much whether they are pro-judgment folk cartesians, anti-judgment, or something else, as this will bear directly on their conclusions. But we are getting ahead of ourselves. The point is that “trumping” a person’s cartesian incorrigibility by pointing to their inconsistent actions is an available move (both in social science and everyday life), but it is also one that should not be undertaken lightly.

Note that the pro-judgment and anti-judgment views are not the only available ones. To fill out the space of options: We may ascribe both P and ~P beliefs (the contradictory belief view). Or we may ascribe a mixture of the inconsistent beliefs (the “in-between” belief view). Or we may say Chris believes neither P nor ~P, but that they vacillate inconsistently between the two depending on circumstances (the shifting view) (Albahari, 2014).
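The space of options can be made vivid with a toy formalization. The sketch below is my own illustration, not anything drawn from the philosophical literature: the strategy names follow the views just listed, but the two-valued “evidence” (does the subject assent to P, does their unguarded behavior fit P) is a deliberate simplification.

```python
# Toy model of the belief-ascription strategies discussed above.
# Evidence is deliberately crude: does the subject assent to P ("saying"),
# and does their unguarded behavior fit P ("doing")?

def ascribe(says_p: bool, does_p: bool, strategy: str) -> str:
    """Return the belief ascribed to the subject under a given strategy."""
    if says_p == does_p:
        return "P" if says_p else "not-P"  # no dissociation: all views agree
    # Dissociative case: sayings and doings come apart.
    if strategy == "pro-judgment":
        return "P" if says_p else "not-P"      # the verbal report wins
    if strategy == "anti-judgment":
        return "P" if does_p else "not-P"      # actions speak louder
    if strategy == "contradictory":
        return "P and not-P"                   # ascribe both
    if strategy == "in-between":
        return "partial belief in P"           # a mixture of the two
    if strategy == "shifting":
        return "P or not-P, depending on context"  # vacillation
    raise ValueError(f"unknown strategy: {strategy}")

# Chris the implicit racist: denies P ("blacks are more violent") verbally
# but behaves as if P.
for s in ["pro-judgment", "anti-judgment", "contradictory",
          "in-between", "shifting"]:
    print(f"{s:>14}: {ascribe(says_p=False, does_p=True, strategy=s)}")
```

Note that the strategies only come apart in the dissociative case; when sayings and doings agree, every view delivers the same verdict, which is why dissociations are the philosophically interesting cases.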

And this is the second thing that inconsistency between sayings and doings highlights. Rather than focusing on goings-on trapped within each person’s cartesian theater, we can now see that beliefs are very much a matter of both actions and practices, in terms of both the believer and the ascriber. In this respect, two considerations come to the fore.

First, there are the “belief proclamation” practices we are used to (e.g. people verbally saying they believe thus and so), but also the myriad behaviors and actions that other people monitor and use to ascribe beliefs to others. It is these belief ascription practices I wanted to highlight in this post. As noted, they are both a matter of everyday interpersonal interaction and, for my purposes, a standard, but seldom commented upon, aspect of social-scientific practice. After all, social scientists (especially those who do qualitative work) constantly ask people what they believe about a host of things (e.g. Edin & Kefalas, 2011; Young, 2006), and are thus confronted with self-reports of people claiming to believe things. These same social scientists may also sometimes have the opportunity to observe these people in ecologically natural settings, which allows them to compare self-reports to doings (Jerolmack & Khan, 2014).

In this post, I will not try to adjudicate or argue for the best ascription strategy. I will leave that for a future post. Here I note two things. First, it is clear that certain ascription practices have elective affinities with certain conceptions of what beliefs are. For instance, “pro-judgment” views have an affinity with the “pictures in the head” conception of belief. As such, pro-judgment views assume all of the things that such a conception assumes, such as representationalism, the relevance of the truth/falsity criterion, and so on. “Anti-judgment” views, focusing on action, may be said to be more consonant with some forms of practice theory, as may some of the other views (e.g. the “in-between” or “contradictory” views).

Second, belief ascription practices have the familiar duality of being both possible “topic” and “resource” for sociological analysis that fascinated early ethnomethodology. We can be “neutral” about their import for sociological research and study belief ascription practices as a topic. We may ask questions such as: under what circumstances do people default to folk cartesianism, when do they prefer anti-judgment views, when do they go “in between” or “contradictory,” and so on.

Alternatively we may examine the issue by considering the role of belief ascription practices as a resource for sociological explanation. Are pro-judgment views always effective? Should we go anti-judgment and ignore what people say in favor of their behavior? These are some issues I hope to tackle in future posts.

References

Albahari, M. (2014). Alief or belief? A contextual approach to belief ascription. Philosophical Studies, 167(3), 701–720.

Edin, K., & Kefalas, M. (2011). Promises I Can Keep: Why Poor Women Put Motherhood before Marriage. University of California Press.

Gawronski, B., Peters, K. R., & LeBel, E. P. (2008). What Makes Mental Associations Personal or Extra-Personal? Conceptual Issues in the Methodological Debate about Implicit Attitude Measures. Social and Personality Psychology Compass, 2(2), 1002–1023.

Gendler, T. S. (2008a). Alief and Belief. The Journal of Philosophy, 105(10), 634–663.

Gendler, T. S. (2008b). Alief in Action (and Reaction). Mind & Language, 23(5), 552–585.

Jerolmack, C., & Khan, S. (2014). Talk Is Cheap: Ethnography and the Attitudinal Fallacy. Sociological Methods & Research. https://doi.org/10.1177/0049124114523396

Kriegel, U. (2012). Moral Motivation, Moral Phenomenology, And The Alief/Belief Distinction. Australasian Journal of Philosophy, 90(3), 469–486.

Mandelbaum, E. (2013). Against alief. Philosophical Studies, 165(1), 197–211.

Nosek, B. A., Hawkins, C. B., & Frazier, R. S. (2011). Implicit social cognition: from measures to mechanisms. Trends in Cognitive Sciences, 15(4), 152–159.

Schwitzgebel, E. (2010). Acting contrary to our professed beliefs or the gulf between occurrent judgment and dispositional belief. Pacific Philosophical Quarterly, 91(4), 531–553.

Shapira, H. (2017). Who Cares What They Think? Going About the Right the Wrong Way. Contemporary Sociology, 46(5), 512–517.

Young, A. A. (2006). The Minds of Marginalized Black Men: Making Sense of Mobility, Opportunity, and Future Life Chances. Princeton University Press.

Zimmerman, A. (2007). The Nature of Belief. Journal of Consciousness Studies, 14(11), 61–82.

Are Beliefs Pictures in the Head?

In a recently published piece (Strand & Lizardo, 2015), Mike and I argued that the notion of “belief,” if it is to do a more adequate job as a category of analysis in social-scientific research, can best be thought of as a species of habit. I refer the interested reader to the paper for the more detailed “exegetical” argumentation excavating the origins of this notion in American pragmatism (mostly in the work of Peirce and Dewey) and European practice theory (mostly in the work of Bourdieu). Here I would like to explore some reasons why this proposal may seem so counterintuitive given our traditional conceptions of belief.

The fear, to some well-founded, is that substituting the habit notion for the usual notion would cause a net loss, and thus an inability to adequately account for things that we would like to account for (e.g. patterns of action that are driven by ideas or thoughts in the head).

What is the standard notion of belief that the “habit” notion displaces (if not replaces)? The easiest way to think of it is as one in which beliefs are thought to be little “pictures” in the head that people carry around. But what are beliefs pictures of? After all, pictures (even in modern art) usually depict something, however faint. The answer is that they are supposed to be pictures of the world that somehow the person uses to get by.

Because beliefs are “pictures” (in cognitive science the word representation is sometimes used in this context), they have the representational properties usual pictures have. For instance, they portray the world in a certain way (e.g. under a particular description). In addition, because they are pictures, beliefs have content. That is, a belief is always about something (in some philosophical circles, the word “intentionality” is usually brought up here (Searle, 1983)). In this way, some beliefs may be directed at the same state of affairs in the world but “picture it” in different ways (Hutto, 2013). Finally, and building on this last distinction, just as pictures claim to depict the world as it is (or at least bear a resemblance to it), beliefs can be true (if they portray the world as it is) or false (if the description does not match the world). This truth/falsity relation between the pictures in the head and the world turns out to be crucial for their indispensable job in “explaining” action.

For instance, if somebody opens a refrigerator, grabs a sandwich from it, and eats it, an outside observer can “explain” the pattern by ascribing a belief to the person. So the person opened the fridge door because they thought (believed) there was a sandwich there. We usually complete this belief-based explanation by adding some kind of motive or desire as a jointly sufficient cause (“they believed there was a sandwich in the fridge and they were hungry”).

But suppose we were to see the same person open the fridge, look around, and then go back to the couch empty-handed. This is a different behavioral pattern than before. However, note that we can also “explain” this behavior using the same “sandwich” belief mechanism as before. The trick is simply to ascribe a false belief to the person: the imputed picture in the head does not match the actual state of the world. So we can now say, “Sam opened the fridge because they believed there was a sandwich in there and they were hungry.” We attach one more disclaimer: “But Sam was wrong; there was no sandwich.”

This flexibility makes belief-based explanations fairly powerful (they can account for a wide range of behavioral patterns). However, flexibility is also a double-edged sword: Become too flexible and you risk vacuity, explaining everything and thus nothing (see Strand & Lizardo, 2015, pp. 47–48).
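This double-edged flexibility can be made concrete with a toy sketch (my own illustration, not anything from the paper): whatever Sam does, the observer’s explanation scheme accommodates it after the fact by toggling the truth of the imputed belief.

```python
# Toy illustration of the flexibility (and risk of vacuity) of
# belief-desire explanation: any observed outcome can be "explained"
# post hoc by adjusting the imputed picture in the head.

def explain(behavior: str) -> str:
    """Produce a belief-desire explanation for whatever Sam did."""
    belief = "there is a sandwich in the fridge"
    desire = "Sam was hungry"
    if behavior == "grabs sandwich and eats it":
        return f"Sam believed {belief} and {desire}."
    if behavior == "returns to couch empty-handed":
        # Same mechanism, now with a false belief imputed.
        return (f"Sam believed {belief} and {desire} -- "
                "but Sam was wrong: there was no sandwich.")
    # The scheme never fails to deliver an account.
    return f"Sam believed {belief} and {desire} (details adjusted to fit)."

print(explain("grabs sandwich and eats it"))
print(explain("returns to couch empty-handed"))
```

The point of the sketch is that `explain` has no failure mode: there is no behavior for which it returns “the belief-desire scheme does not apply,” which is exactly the vacuity worry.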

Because the belief-desire combo is so flexible (and so pervasive even in our “folk” accounting of each other’s action) some people have argued that it is inevitable. So inevitable it may be the only game in town for explaining action. This would make the “pictures in the head” version of the notion of belief essentially a non-negotiable part of our explanatory vocabulary. One of the main goals of our paper was to argue that there are other options even if they seem weird at first sight.

The alternative we championed was to think of belief as a species of habit. This requires both a revision of our implicit classification of mental concepts and a revision of what we mean by “belief.” In terms of the first aspect, the usual way to think of belief and habit is to see them as distinct categories in our mental vocabulary. A habit is a “thoughtless” activity, while an action driven by belief requires “thought” to be involved. So they are two mental categories, but they are as distinct as a frog is from a zebra (even if both are species of animal). In our proposal, however, the overarching category in mental life (for both human and nonhuman animals) is habit, and belief is a subcategory of habit. This does violence to the standard classification, so it may take time to get used to.

In this respect, note that the “picture” theory of belief seems to be important in how people differentiate belief from habit. Both can be involved in action, but when action is driven by belief, the picture inside the head is in the driver’s seat and is thus an important (but always presumed) component of the action. In fact, the picture is such an important component that we attribute causal force to it. Sam got up from the couch and walked to the fridge because they thought there was a sandwich in there (and they were hungry).

One last observation about the pictures-in-the-head account of action. When the observer imputes the belief “sandwich in the fridge” to Sam and selects this belief as the “cause” of the action, by what criteria is this selection made? I bring this up only to note that there are actually a bunch of other “beliefs” that the observer could have imputed to Sam, and which could be argued to be implicated in the action, but somehow didn’t. For instance, the observer could have said that one of the beliefs accounting for Sam’s action is that “there was a fridge in the room.” Or that “the fridge was plugged in,” or that “the floor could sustain their weight,” and so on.

This is not just a trivial “philosophical” issue. We could impute to Sam an infinity of little world pictures in the head. In fact, as many as there are “states of affairs” about the world that make it possible for Sam to get up and check the fridge, inclusive of purely hypothetical or even “negative” pictures (e.g. the belief that “there’s not a bomb in the fridge which will be triggered to detonate when the door is opened”). Yet we do not (sometimes this is referred to as the “frame problem” (Dennett, 2006) in artificial intelligence circles). This means that belief imputation practices following the picture version are necessarily selective, but the criteria for selection remain obscure. This kind of obscurity should be suspect to those who want to recruit these types of explanations as scientific accounts of action.
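The unboundedness of imputable beliefs can itself be sketched (again, a toy of my own, not a claim about any AI system): the candidate “pictures in the head” form an open-ended stream, and the observer silently picks one while discarding infinitely many others.

```python
# Toy illustration of the frame problem: the candidate beliefs one could
# impute to Sam are effectively unbounded, yet we select just one.
import itertools

# A few of the background beliefs mentioned in the text.
background = [
    "there is a fridge in the room",
    "the fridge is plugged in",
    "the floor can sustain Sam's weight",
]

def candidate_beliefs():
    """Yield an open-ended stream of beliefs imputable to Sam."""
    yield "there is a sandwich in the fridge"   # the one we actually pick
    yield from background
    # ...and "negative"/hypothetical beliefs can be generated without end:
    for n in itertools.count(1):
        yield f"there are not {n} bombs in the fridge"

# In practice we ascribe only the first candidate; by what criterion we
# discard the (infinite) rest is exactly what remains obscure.
picked = next(candidate_beliefs())
print(picked)
```

The generator never terminates, which is the sense in which the selection problem cannot be solved by exhaustive enumeration.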

But we are getting ahead of ourselves. The main point of this post is simply to warm you up to the intuition that maybe the pictures-in-the-head version of belief is not as intuitive as you may have thought, nor as unproblematic or non-negotiable as it is sometimes depicted. In a future post I will introduce the alternative conception of belief as habit and see whether it avoids these issues.

References

Dennett, D. C. (2006). Cognitive wheels: The frame problem of AI. In J. L. Bermudez (Ed.), Philosophy of Psychology: Contemporary Readings (pp. 433–454). New York: Routledge.

Hutto, D. D. (2013). Why Believe in Contentless Beliefs? In N. Nottelmann (Ed.), New Essays on Belief: Constitution, Content and Structure (pp. 55–74). London: Palgrave Macmillan UK.

Searle, J. R. (1983). Intentionality: An essay in the philosophy of mind. New York: Cambridge University Press.

Strand, M., & Lizardo, O. (2015). Beyond World Images: Belief as Embodied Action in the World. Sociological Theory, 33(1), 44–70.