Thick and Thin Belief

Knowledge and Belief

A (propositional) knowledge (that) ascription logically entails a belief ascription, right? I mean if I think that Sam knows that Joe Biden is the president of the United States, I don’t need to do further research into Sam’s state of mind or behavioral manifestations to conclude that they also believe that Joe Biden is president of the United States. For any proposition or piece of “knowledge-that,” if I state that an agent X knows that q, I am entitled to conclude by virtue of logic alone that X believes that q.

This, as summarized, has been the standard position in analytic epistemology and philosophy of mind. The entailment of belief from knowledge has been considered so obvious that nobody thinks it needs to be argued for or defended (treated as falling closer to the “analytic” end of the Quinean continuum). Most of the work on belief by epistemologists has therefore focused on the conditions under which belief can be justified, not on whether an attribution of knowledge necessarily entails an attribution of belief to an agent.

Of course, analytic philosophers are inventive folk, and there have been attempts (starting around the 1960s), via the thought-experiment route, to come up with hypothetical cases in which the attribution of belief from knowledge didn’t come so easily. But most philosophers protested against these made-up cases, denying that they in fact showed that one could attribute knowledge without attributing belief. Some of the debate, as with many philosophical debates, ultimately turned on philosophical method itself; perhaps the inability of professional philosophers to imagine non-contrived cases in which we can attribute knowledge without belief rests on the rarefied air that philosophers breathe and the correspondingly restricted set of examples that they can imagine.

Myers-Schulz & Schwitzgebel (2013) thus follow a recent trend of “experimental philosophy,” in which philosophers burst out of the philosophical bubble and simply confront the folk with various examples, asking them whether those examples merit attributions of knowledge without belief. One of these examples (modified from the original ones proposed from the armchair) has us encountering a nervous student who memorizes the answers to tests but who, when it comes time to actually answer, gets nervous at the last minute, blanks out, and just guesses the answer to the last question on the test, which they also happen to get right. When regular old folks are asked whether this “unconfident examinee” knew the answer to this last question, 87% say yes. But if they are instead asked (in a between-subjects setup) whether the unconfident examinee believed the answer to the last question, only 37% say yes (Myers-Schulz & Schwitzgebel, 2013, p. 378).

Interestingly, the same folk dissociation between knowledge and belief ascriptions can be observed when people are exposed to scenarios of discordance between explicit and implicit attitudes, or of dissociation between rational beliefs that everyone would hold and irrational fantastic beliefs induced in the moment by watching a horror movie. In the “prejudiced professor” case, we have a professor who reflectively holds unprejudiced attitudes and is committed to egalitarian values but who, in their everyday micro-behavior, systematically treats student-athletes as if they are less capable. In the “freaked out movie watcher” case, we have a person who just watched a horror movie in which a flood of alien larvae comes out of faucets and who, after watching the movie, freaks out when their friend opens the (real-world) faucet. In both cases, the great majority of the folk attribute knowledge (that student-athletes are as capable as other students, and that only water would come out of the faucet), but only relatively small minorities attribute belief. Other cases have been concocted (e.g., a politician who claims to hold a certain set of values but who, when it comes to acting on those values by, for instance, advocating for policies that would further them, fails to act), and these cases also generate the dissociation between knowledge and belief ascription among the folk.

Solving the Puzzle

What’s going on here? Some argue that it comes down to a difference between so-called dispositional and occurrent belief. These are terms of art in analytic philosophy, but they boil down to the difference between a belief that you hold but are not currently entertaining (though you could under the right circumstances) and one that you are currently entertaining. The former is a dispositional belief; the latter is an occurrent belief. When you are sleeping, you dispositionally believe everything that you believe when you are professing wide-awake beliefs. So maybe the folk deny that, in all of the cases above, people who know that x also occurrently believe that x, but they don’t deny that they dispositionally do so. Rose & Schaffer (2013) find support for this hypothesis.

Unfortunately for Rose & Schaffer, a subsequent series of experiments (Murray et al., 2013) shows that knowledge/belief dissociations among the folk are pervasive, applying more generally than originally thought, in ways that cannot easily be saved by the dispositional/occurrent distinction. For instance, when asked whether God knows or believes a proposition that comes closest to the “analytic” end of Quine’s continuum (e.g., 2 + 2 = 4), virtually everyone (93%) is comfortable attributing knowledge to God, but only 66% say God believes the trivial arithmetical proposition. Murray et al. also show that people are much more comfortable attributing knowledge, compared to belief, to dogs trained to answer math questions, and even to cash registers. Finally, Murray et al. (2013, p. 94) have the folk consider the case of a physics student who gets perfect scores on astronomy tests but who had been homeschooled by rabid Aristotelian parents who taught them that the earth stood at the center of the universe, and who never gave up allegiance to the teachings of their parents. They find that, for regular people, this homeschooled geocentrist college freshman, who also gets an A+ on their Astronomy 101 test, knows the earth revolves around the sun but doesn’t believe it.

So something else must be going on. In a more recent paper, Buckwalter et al. (2015) propose a compelling solution. Their argument is that the folk conception of belief is not unitary, whereas professional epistemologists do hold a unitary conception of belief. More specifically, Buckwalter et al. argue that professional philosophy’s concept of belief is thin:

A thin belief is a bare cognitive pro-attitude. To have a thin belief that P, it suffices that you represent that P is true, regard it as true, or take it to be true. Put another way, thinly believing P involves representing and storing P as information. It requires nothing more. In particular, it doesn’t require you to like it that P is true, to emotionally endorse the truth of P, to explicitly avow or assent to the truth of P, or to actively promote an agenda that makes sense given P (749).

But the folk, in addition to countenancing the idea of thin belief, can also imagine the notion of thick belief (on thin and thick concepts more generally, see Abend, 2019). Thick belief contrasts with thin belief along all the dimensions mentioned. Rather than being a purely dispassionate or intellectual holding of a piece of information considered as true, a thick belief “also involves emotion and conation” (749, italics in the original). In addition to merely representing that P, thick believers in a proposition will also be motivated to want P to be true, will endorse P as true, will defend the truth of P against skeptics, will try to convince others that P is true, will explicitly avow or assent to P’s truth, and the like. Buckwalter et al. propose that thick and thin beliefs are two separate categories in folk psychology, that thick belief is the default (folk) understanding, and that the various knowledge/belief dissociation observations can therefore be made sense of by cueing this distinction. In a series of experiments, they show that this is precisely the case. Returning to (some of) the cases discussed above, they show that belief ascriptions rise (most of the time to match knowledge ascriptions) when people are given extra information or a prompt indicating thin belief on the part of the believing agent.

Thin and Thick Belief in the Social Sciences

Interestingly, the distinction between thin and thick belief dovetails with a number of distinctions made by sociologists and anthropologists interested in the link between culture and cognition. These discussions have to do with distinctions in the way people internalize culture (for more discussion, see here). For instance, the sociologist Ann Swidler (2001) distinguishes between two ways people internalize beliefs (knowledge-that) but uses a metaphor of “depth” rather than thickness and thinness (on the idea of cultural depth, see here). For Swidler, people can and often do internalize beliefs and understandings in the form of “faith, commitment, and ideological conviction” (Swidler, 2001, p. 7); those definitely sound like thick beliefs. However, people also internalize much culture “superficially,” as familiarity with general beliefs, norms, and cultural practices that do not elicit deeply held personal commitment (although they may elicit public acts of behavioral conformity); those definitely sound like thin beliefs. Because deeply internalizing culture is hard and superficially internalizing it is easy, the amount of culture internalized in the superficial way likely outweighs the culture internalized in the “deep” way. In this respect, “[p]eople vary in the ‘stance’ they take toward culture—how seriously versus lightly they hold it.” Some people are thick (serious) believers, but most people’s stance toward a lot of the culture they have internalized is more likely to range from ritualistic adherence (in the form of repeated expression of platitudes and clichés taken to be “common sense”) to indifference, cynicism, and even insincere affirmation (Swidler, 2001, pp. 43–44).

In cognitive anthropology (see Quinn et al., 2018a, 2018b; Strauss, 2018), an influential model of the way people internalize beliefs, due to Melford Spiro, also proposes a gradation of belief internalization that matches Buckwalter et al.’s distinction between thin and thick belief and Swidler’s deep/superficial distinction (without necessarily using either metaphor). According to D’Andrade’s summary of Spiro’s model (1995: 228ff), people can go from simply being “acquainted with some part of the cultural system of representations without assenting to its descriptive or normative claims. The individual may be indifferent to, or even reject these claims.” Obviously, this (level 1) internalization does not count as belief, not even of the thin kind (Buckwalter et al., 2015). However, at internalization level 2, we get something closer. Here “cultural representations are acquired as cliches; the individual honors their descriptive or normative claims more in the breach than in the observance.” This comes closest to Buckwalter et al.’s idea of thin belief (and Swidler’s notion of “superficially internalized” culture), but it is likely that some people would not consider this a full-blown belief. We get there at internalization level 3. Here, “individuals hold their beliefs to be true, correct, or right…[beliefs] structure the behavioral environment of actors and guide their actions.” This seems closer to the notion of belief held by professional philosophers, and it is likely the default version of a belief on its way to thickening: not just a piece of information represented by the actor and held as true on occasion (as in level 2) but one that systematically guides action. Finally, Spiro’s level 4 is the prototypical thick belief in Buckwalter et al.’s sense. Here “cultural representations…[are] highly salient,” being capable of motivating and instigating action. Level 4 beliefs are invested with emotion, which is a core marker of thick belief (Buckwalter et al., 2015, p. 750ff).

Implications

Interestingly, insofar as influential theories of the internalization of knowledge-that in cultural anthropology and sociology make the thick belief/thin belief distinction, which, as shown by the research reviewed above, is also respected by the folk, it may be an idiosyncrasy of the philosophical profession to hold a unitary (or non-graded) notion of belief. Both sociologists and anthropologists have endeavored to produce analytic distinctions in the ways people internalize belief-like representations from the larger cultural environment that more closely match the folk’s. This would indicate that many “problems” in conceiving of cases of contradictory or in-between beliefs (Gendler, 2008; Schwitzgebel, 2001) may have been as much iatrogenic as conceptual.

As also noted by Buckwalter et al., the thin/thick belief distinction might be relevant for debates raging in contemporary epistemology and psychological science over the most accurate way to conceive of people’s typical belief-formation mechanism. Is it “Cartesian” or “Spinozan”? The Cartesian picture conforms to the usual philosophical model. Before believing anything, I reflectively consider it, weigh the evidence for and against, and, if it meets other rational considerations (e.g., consistency with my other beliefs), then I believe it. The Spinozan belief-formation mechanism proposes an initially counter-intuitive picture, in which people automatically believe every piece of information they are exposed to without reflective consideration; only un-believing something requires conscious effort and consideration.

The Descartes/Spinoza debate on belief formation dovetails with a debate in the sociology of culture over whether culture is structured or fragmented (Quinn, 2018). The short version of this debate is that sociologists like Swidler think that (most) culture is internalized in a superficial way and therefore operates as fragmented bits and pieces that are brought into coherence via external mechanisms (Swidler, 2001). Cognitive anthropologists, on the other hand, adduce strong evidence in favor of the idea that people internalize culture in a more structured manner. There is definitely a problem of talking past one another in this debate: Swidler seems to be talking about beliefs proper, while Quinn is talking about other forms of non-doxastic knowledge. This last kind can no longer be considered propositional knowledge-that but comes closer to (conceptual) knowledge-what.

Regardless, it is clear that if the Spinozan story is true, then beliefs cannot be internalized as a logically coherent web and therefore cannot exert an effect on action as such. Instead, the mind (and the beliefs therein) is fragmented (Egan, 2008). DiMaggio (1997), in a classic paper in culture and cognition studies, drew that test implication from Daniel Gilbert’s research program, which showed that people seem to internalize (some) beliefs via Spinozan mechanisms. For DiMaggio, this supported the sociological version of the fragmentation of culture, because if beliefs are internalized as fragmented, disorganized, barely considered bits of information, then whatever coherence they have must come from the outside (e.g., via institutional or other high-level structures), just as Swidler suggests (DiMaggio, 1997, p. 274).

But if Buckwalter et al.’s distinction tracks an interesting difference in kinds of belief (as suggested by Spiro’s degrees-of-internalization story), then it is likely that the fragmentation argument applies only to thin beliefs. Thick beliefs, on the other hand (the ones people are most motivated to defend, that are imbued with emotion, that people are least likely to give up, and that are most likely to guide people’s actions) are unlikely to be internalized as incoherent information bits that people just “coldly” represent or consider.

References

Abend, G. (2019). Thick Concepts and Sociological Research. Sociological Theory, 37(3), 209–233.

Buckwalter, W., Rose, D., & Turri, J. (2015). Belief through thick and thin. Noûs, 49(4), 748–775.

DiMaggio, P. J. (1997). Culture and Cognition. Annual Review of Sociology, 23, 263–287.

Egan, A. (2008). Seeing and believing: perception, belief formation and the divided mind. Philosophical Studies, 140(1), 47–63.

Gendler, T. S. (2008). Alief and Belief. The Journal of Philosophy, 105(10), 634–663.

Murray, D., Sytsma, J., & Livengood, J. (2013). God knows (but does God believe?). Philosophical Studies, 166(1), 83–107.

Myers-Schulz, B., & Schwitzgebel, E. (2013). Knowing that P without believing that P. Noûs, 47(2), 371–384.

Quinn, N. (2018). An anthropologist’s view of American marriage: limitations of the tool kit theory of culture. In Advances in Culture Theory from Psychological Anthropology (pp. 139–184). Springer.

Quinn, N., Sirota, K. G., & Stromberg, P. G. (2018a). Conclusion: Some Advances in Culture Theory. In N. Quinn (Ed.), Advances in Culture Theory from Psychological Anthropology (pp. 285–327). Palgrave Macmillan.

Quinn, N., Sirota, K. G., & Stromberg, P. G. (2018b). Introduction: How This Volume Imagines Itself. In N. Quinn (Ed.), Advances in Culture Theory from Psychological Anthropology (pp. 1–19). Springer International Publishing.

Rose, D., & Schaffer, J. (2013). Knowledge entails dispositional belief. Philosophical Studies, 166(S1), 19–50.

Schwitzgebel, E. (2001). In-between Believing. The Philosophical Quarterly, 51(202), 76–82.

Strauss, C. (2018). The Complexity of Culture in Persons. In N. Quinn (Ed.), Advances in Culture Theory from Psychological Anthropology (pp. 109–138). Springer International Publishing.

The Relation(s) Between People and Cultural Kinds

How do people relate to cultural kinds? This is a big topic that will be the subject of future posts. For now, I will say that the discussion has been muddled mostly because, in the history of cultural theory, some cultural kinds have been given excessive powers relative to persons. For instance, in some accounts, people’s natures, essential properties, and so on have been seen as entirely constituted by cultural kinds, especially the “mixed” cultural kinds (binding cultural-cognitive and artifactual aspects) associated with linguistic symbols (Berger & Luckmann, 1966; Geertz, 1973). The basic idea is usually posed as a counterfactual, presumably aimed at getting at something deep about “human nature” (or the lack thereof): “if people didn’t have language [or symbols, etc.], then they’d be no different from (non-human) animals.” This is an idea with a very long history in German Romantic thinking (Joas, 1996), one revived in twentieth-century thought by the turn to various “philosophical anthropologies,” most influentially the work of Arnold Gehlen, who conceptualized the “human-animal” as fundamentally incomplete, needing cultural input, in particular language, symbols, and institutions, to become fully whole (Joas & Knobl, 2011).

I argue that these types of theories (showing up in a variety of thinkers, from Berger and Luckmann, directly influenced by Gehlen, to Clifford Geertz) have led theorists to fudge what should be the proper relationship between people and cultural kinds in a way that does not respect the ontological distinctness of culture and persons. What we need is a way to think about how persons (as their own natural kind) relate to cultural kinds (and even come to depend on them in fairly strong ways) without dissolving persons (as ontologically distinct kinds) into cultural kinds (Archer, 1996; Smith, 2010), or, as in some brands of rational actor theory, seeing people as overpowered, detached manipulators of a restricted set of cultural kinds (usually beliefs) that they can pick up and drop willy-nilly without being much affected by them. Whatever relations we propose, they need to respect the ontological distinctiveness of the two relata (people and cultural kinds), while also acknowledging the sometimes strong forms of interdependence between people and culture that we observe. This eliminates hyper-strong relations like “constitution” from the outset.

Possession

What are the options? I suggest that there are actually several. For cultural kinds endowed with representational properties (e.g., beliefs, attitudes, values), Abelson’s (1986) idea that they are like possessions is a good one. Thus, we can say that people “have” a belief, a value, or an attitude. For persons, “having” these cultural-cognitive kinds can be seen as the end state of a process that has gone by the name of “internalization” in cultural theory. Note that this possession version of the relation between people and culture works even for the cultural-cognitive kinds that have been called “implicit” in recent work (Gawronski et al., 2006; Krickel, 2018; Piccinini, 2011); thus, if a person displays evidence of conforming to an implicit belief, attitude, etc., we can still say that they “have” it (even if the person disagrees!). This practice has sufficient analytic precision while respecting the folk ascription practices visible in the linguistic evidence pointing to the pervasiveness of the conceptual metaphor of possession for belief-like states (Abelson, 1986). The possession relation also respects the ontological distinctiveness of people and culture, since possessing something doesn’t imply a melding of the identities of possessor and possessed.

As a bonus, the possession relation is not substantively empty. As Abelson has noted, if beliefs are like possessions, then the relationship should also be subject to a variety of phenomena that have been observed between persons and their literal possessions. People can become attached to their beliefs (and thus refuse to let go of them even when exposed to countervailing evidence), experience loss aversion for the beliefs they already have, or experience their “selves” as extended toward the beliefs they hold (Belk 1988). People may even become “addicted” to their beliefs, experiencing “withdrawal” once they don’t have them anymore (Simi et al. 2017).

Reliance

What about ability-based cultural-cognitive kinds? Here things get a bit more complicated; we can always go with “possession,” and this works for most cases, especially when talking about dispositional skills and abilities (e.g., abilities we impute to people “in stasis” when they are not exercising them). Thus, we can always say that somebody can play the piano, write a lecture, or fix a car even when that ability is not being exercised at the moment; in that respect, abilities are also “like possessions” (Abelson, 1986).

However, possession doesn’t work for “occurrent” cultural kinds exercised in practice. It would be weird to refer to the relation between a person and a practice they are currently engaged in as one of possession; instead, here we must “move up” a bit on the ladder of abstraction and get a sense of what the “end in view” is (Whitford, 2002). Once we do that, it is easy to see that the relationship between people and the non-conceptual skills they exercise is one of reliance (Dreyfus, 1996). People rely on their abilities to get something (the end in view) done or simply to “cope” with the world (Rouse, 2000). The reliance relation for non-representational abilities has the same desirable properties as the possession relation for representational cultural-cognitive kinds; it is consistent with folk usage and respects the ontological distinctiveness between persons as natural kinds and the abilities they possess. A person can gain an ability (and thus be augmented as a person), and they can also lose an ability (e.g., because they age or have a stroke), and still count as a person.

Parity and Externality

Finally, what about the relation between people and public cultural kinds such as artifacts? First, it is important to consider that, in some cases, artifacts mimic the functional role played by cultural-cognitive kinds. So when we use a notepad to keep track of our to-do list, the notepad plays the role of an “exogram” that is the functional analog of biological memory (Sutton, 2010). In the same way, when we use a calculator to compute a sum, the calculator plays the same functional role (embodying an ability) that would have been played by our internalized ability to do sums in our head. In that case, it would not be disallowed to use the same relational descriptors we use for the relationship between people and cultural-cognitive kinds, regardless of location (internalized by the person or located in the world). So we would say that Otto possesses the belief that he should pick up butter from the store regardless of whether he committed it to “regular” (intracranial) memory (an “engram”) or to a notebook (an “exogram”).

This “parity principle,” first proposed by Clark and Chalmers (1998) in their famous paper on the “extended mind,” can thus easily be transferred to the case of beliefs, norms, and values “stored” in the world (acknowledging that this does violence to traditional folk-Cartesian usages of concepts such as belief). The same goes for the (lack of) difference between exercising abilities acquired via repetition and training, and thus ultimately embodied and internalized, and abilities exercised by relying on artifacts that enable people to exercise them (so we would say that you rely on the calculator to compute the sum). In both cases, people use the ability (embodied or externalized) to get something done.

Usage/Dependence/Scaffolding

This last point can be generalized, once we realize that most artifactual cultural kinds (inclusive of those made up of “systems” of mixed—e.g., symbolic–kinds) have a “tool-like” nature. So we say people use language to express meanings or use tools to get something done. Even the most intellectualist understanding of language as a set of spectatorial symbolic representations acknowledges this usage relation. For instance, when theorists say that people “need” (e.g., use) linguistic symbols “to think” (Lizardo, 2016) (a pre-cognitive science exaggeration, based on a folk model of thinking as covert self-talk; most “thinking” is non-linguistic (Lakoff & Johnson, 1999), and a lot of it is unconscious (Dijksterhuis & Nordgren, 2006)).

The general relation between people and artifactual kinds is thus analogous to the relationship between people and the skills they possess; for the most part, people use or depend on public artifactual kinds to get stuff done (another way of saying this is that artifactual cultural kinds enable the pursuit of many ends in view for people). Once again, note that the use or dependence relation is what we want; public cultural kinds do not “constitute,” otherwise generate, or “interpellate” people as a result of their impersonal functioning (as in older structuralist models of language). Instead, people use public artifactual culture as a “scaffold” that allows them to augment internalized abilities and skills, engaging in actions and pursuing goals that would otherwise not be possible (alone or in concert with others).

In sum, we can conceive of the relationship between people and cultural kinds in many ways. Some (like constitution) are too strong because they dissolve or eliminate the ontological integrity of one of the entities in the relation (usually, people). But there are other options. For representational cultural-cognitive kinds, the relation of possession fits the bill; people can have (and lose) beliefs, norms, values, and the like. For non-conceptual abilities, the relation of reliance works. Finally, for externalized artifacts and other “tool-like” public kinds, the relations of usage, and more strongly dependence and scaffolding, can do the analytic job.

References

Abelson, R. P. (1986). Beliefs Are Like Possessions. Journal for the Theory of Social Behaviour, 16(3), 223–250.

Belk, R. W. (1988). Possessions and the Extended Self. The Journal of Consumer Research, 15(2), 139–168.

Archer, M. S. (1996). Culture and Agency: The Place of Culture in Social Theory. Cambridge University Press.

Berger, P. L., & Luckmann, T. (1966). The Social Construction of Reality: A Treatise in the Sociology of Knowledge. Doubleday.

Clark, A., & Chalmers, D. (1998). The Extended Mind. Analysis, 58(1), 7–19.

Dijksterhuis, A., & Nordgren, L. F. (2006). A Theory of Unconscious Thought. Perspectives on Psychological Science: A Journal of the Association for Psychological Science, 1(2), 95–109.

Dreyfus, H. L. (1996). The current relevance of Merleau-Ponty’s phenomenology of embodiment. The Electronic Journal of Analytic Philosophy, 4(4), 1–16.

Gawronski, B., Hofmann, W., & Wilbur, C. J. (2006). Are “implicit” attitudes unconscious? Consciousness and Cognition, 15(3), 485–499.

Geertz, C. (1973). The interpretation of cultures: Selected essays. Basic books.

Joas, H. (1996). The Creativity of Action. University of Chicago Press.

Joas, H., & Knobl, W. (2011). Social theory: twenty introductory lectures. Cambridge University Press.

Krickel, B. (2018). Are the states underlying implicit biases unconscious? – A Neo-Freudian answer. Philosophical Psychology, 31(7), 1007–1026.

Lakoff, G., & Johnson, M. (1999). Philosophy in the Flesh: The Embodied Mind and Its Challenge to Western Thought. Basic Books.

Lizardo, O. (2016). Cultural symbols and cultural power. Qualitative Sociology. https://link.springer.com/content/pdf/10.1007/s11133-016-9329-4.pdf

Piccinini, G. (2011). Two Kinds of Concept: Implicit and Explicit. Dialogue: Canadian Philosophical Review / Revue Canadienne de Philosophie, 50(1), 179–193.

Rouse, J. (2000). Coping and its contrasts. Heidegger, Coping, and Cognitive Science.


Cognition and Cultural Kinds (Continued)

Culture and Cognition: Rethinking the Terms of the Debate

As noted in the previous post, very few sociologists today doubt that insights from cognitive science are relevant for the study of cultural phenomena. In that respect, DiMaggio’s (1997) call to consider the implications of cognition for cultural analysis has not gone unheeded. Today, questions center on the particular ways cognitive processes may be relevant for cultural explanation and in what (empirical, explanatory, substantive) contexts they are more or less relevant. Some have even begun to speak of a “cognitive” (or “neuro-cognitive”) wing of cultural sociology as being in (productive) tension with other (presumably non-cognitive, e.g., “systems”) ways of thinking about culture (Norton, 2019).

At the same time, a now well-established line of work in cognitive science emphasizing the embodied, embedded, enacted, and extended nature of cognition is making analysts rethink traditional conceptions of the cognitive, beyond “brain bound” or “skull bound” conceptions of cognition as internal computation over symbolic representations. The “four E” (embodied, embedded, enacted, and extended) paradigm in cognitive science views cognition as an environmentally situated and world-involving affair, in which internal neural processes and representations are seen as just one of many players involved in the constitution and realization of cognitive activity, on a par with, and complemented by, external bodily pragmatics, material artifacts, environmental structures, technologies, and the concerted action of other agents (Clark, 2008; Clark & Chalmers, 1998; Rowlands, 2009; Wheeler, 2015). For these reasons, as concluded in the last post, it is a good time to revisit the terms of the relationship between the “cognitive” and the “cultural.”

The cultural sociologist Matt Norton (2020), in a recently published paper, has made an insightful attempt to tackle this issue. A critical insight of Norton’s is that, ultimately, how we settle the question of what the exact link between culture and cognition is (or should be) depends not only on what we think “culture” is (as has traditionally been supposed), but, even more importantly, on what we believe cognition is. As such, recent upheavals in cognitive science attempting to redraw the boundaries of the cognitive (by, e.g., incorporating bodies, artifacts, and the situated activity of others in an extended mind framework) have implications for how the terms of engagement between the cognitive and the cultural in sociology, and the cognitive social sciences more generally, are understood in theory and prosecuted in practice.

Smallism: Cutting the Cognitive Down to Size

One (traditional) approach, and one that was still endorsed by DiMaggio (1997), is simply to follow conventional disciplinary boundaries: Psychologists (or increasingly today cognitive neuroscientists) study the cognitive, and sociologists investigate the socio-cultural. The borrowing and trafficking of concepts and methods happen across disciplinary lines, respecting the corresponding “levels of analysis” that have been traditionally associated with each discipline (e.g., individuals for psychologists and supra-individual analytic levels for cultural sociology). Cultural theorists in sociology can thus help themselves to the panoply of processes and neuro-cognitive mechanisms investigated by the cognitive sciences, but only insofar as these are ensconced at the lowest level of analysis usually considered, such as people and their intra-cranial cogitations.

As Norton notes, this “traditional” arrangement also comes with an equally “traditional” conception of what cognition is: internal computation over mental representations in the standard information-processing picture (or neural computation over brain-bound neural representations in the more recent neuroscientific picture). For Norton, one way to read the emergence of the latest version of cognitive sociology is as the elaboration and incorporation of a variety of individual (or even infra-individual, subpersonal (Lizardo et al., 2019)) mechanisms underlying higher-level cultural processes. There is, however, one big problem with the traditional (brain or individual-bound) version of the cognitive (presumably uncritically adopted in the new cognitive cultural sociology) and the associated explanatory division of labor that it implies: It is “notably narrow,” because “the individual brain and its functionality (or dysfunctionality) dominates the slate of mechanisms that cognitive cultural sociology has proposed for understanding the culture and cognition intersection…”

Norton is correct in noting that there is a conceptual link between “narrow” (e.g., internal, brain-bound) understandings of cognition and the traditional debate in the social sciences as to whether “higher level” explanations must “bottom-out” at the level of individuals and their interactions. Norton (2020: 46ff) even uses the language of “micro-foundations” taken from the debate over methodological individualism in the social sciences to refer to these underlying cognitive processes.

The philosopher R. A. Wilson (2004) refers to this overarching (and seldom questioned) metaphysical tendency across the social, cognitive, psychological, and neurosciences as “smallism,” or (explanatory) “discrimination in favor of the small, and so against the not-so-small. Small things and their properties are seen to be ontologically prior to the larger things that they constitute, and this metaphysics drives both explanatory ideal and methodological perspective” (italics added). The smallist explanatory ideal is “to discover the basic causal powers of particular small things, and the methodological perspective is that of some form of reductionism” (Wilson, 2004, p. 22).

Norton’s (2020) critique of contemporary “cultural cognitive sociology” is best understood in this light. For Norton (2019), cognitive smallism accounts for the deep divide between a “cognitive” conception of culture (e.g., culture as the distribution of cultural cognitive kinds, such as beliefs, located in people) and “system” conceptions emphasizing the properties of systematicity and sharedness among public performances, representations, and symbols found in the world. Coupled with (implicit or explicit) smallism, however, Norton sees the danger of not considering these two versions of culture as having equal explanatory weight. Instead, cultural cognitive processes, presumably individual or brain-bound, are seen as smaller, and thus micro-foundational, forming the metaphysical “rock bottom” from which higher-level cultural properties derive.

For Norton, and despite their protestations to the contrary, the new cultural cognitive sociologists are thus guilty of this tendency, precisely because they retain a “smallist” (biased) conception of cognition in which the cognitive is smaller (in some versions, such as “infra-individualism,” smaller than even the individual!) and therefore, by metaphysical implication, more fundamental and foundational. In contrast, “cultural” things, being “not so small,” are seen as merely supervening on, and thus having their properties and causal powers constrained by, the more basic (because small) cognitive mechanisms and processes imported from psychology and the cognitive neurosciences:

[I]n the hunt for theoretical integration it is helpful to relax the idea—rarely expressed in cultural sociological research but easy to slip into due to the mystery, smallness, and contemporary cultural appeal of cognitive neuroscience derived explanatory mechanisms—that the brain is the ultimate microfoundational unit for cultural analysis; it is likewise helpful to relax the related ideas that…cognition is what culture ultimately is, that the skull is a reasonable limit on the bounds of cognitive inquiry, and that the brain is the exclusive, or even a necessarily privileged, site of cognition (Norton 2020: 47).

When cultural theorists fall prey to cognitive smallism, they cannot resist the temptation to think of the more external, extended, public, and intersubjective aspects of culture as epiphenomenal, because “less small” and thus undergirded by the more foundational (because small) cognitive kinds. This would be a raw deal for the “cultural” side of the exchange, because it would get eaten up (from the bottom) by the cognitive. This is explanatorily dangerous in that it has

…the potential to transform the pre-existing divide in cultural sociological theory between individual and intersubjective understandings of culture into a vertical arrangement with the individual-level factors forming the more scientifically real, deeper layer of microfoundational mechanisms and intersubjective, public manifestations transformed into culture’s amalgamated macro froth, a residual thrown up by an underlying neuro-cognitive reality (2020: 49).

The main implication is that “widening” our understanding of cognition (Clark, 2008; Wilson, 2004) should have profound implications for how the cognitive links to, or overlaps with, the cultural.

Extension and Distribution: Cutting the Cognitive Up to Size

Norton thus recommends that one way to cut cognition down to size is, ironically, by “supersizing” it (Clark, 2008), thus ensuring a more even and less biased (toward small things) exchange across the boundaries of the cultural and the cognitive. That is, by considering heterodox (but increasingly less so) emerging approaches that see cognitive processes as partially realized and constituted by bodily processes and artifactually scaffolded activities taking place in the world (Clark, 2008; Menary, 2010; Rowlands, 2010), or, even more strongly, following the work of the anthropologist Edwin Hutchins (1995), as distributed across heterogeneous networks of people, artifacts, settings, and activities, we can see that cognition may be as “wide” and as external as the cultural processes traditionally studied in the socio-cultural sciences (Wilson, 2004). Making cognition “big” (in relation to cultural processes) changes the terms of the exchange and reconfigures the usual boundaries, because now the cognitive, and even the notion of what a “cognitive system” is, can be as wide and as “big” as culture, and thus there is no longer a predetermined answer to the question of which counts as more fundamental.

Cultural Kinds and the Supersized Cognitive

There are various ways in which the approach recommended by Norton is consistent with our recent discussions on the nature and variety of cultural kinds. First, an ontic conception of culture as exclusively composed of “underlying” cultural cognitive kinds is too restrictive. Instead, cultural kinds should be seen as “motley” and promiscuous concerning location and physical structure, along with the other clusters of properties they may possess. Approaches that see culture as exclusively composed of cultural cognitive sub-kinds are as tendentious and counterproductive as Geertzian takes defining culture purely in terms of overt performances and activities. As we saw before, heterogeneity in location emerges from the fact that some cultural kinds can be internalized by people, but some are not. As such, pluralism regarding physical composition and structure, as well as locational agnosticism, is the most coherent approach to theorizing cultural kinds.

In this respect, debates as to whether public culture must necessarily be seen as having “systemic” properties or as occupying an “intersubjective” (shared) space, or even whether sub-kind pluralism necessarily entails a confrontation between “culture concepts,” such as the “system” versus “cognitive” conceptions (Norton, 2019), emerge as less pressing issues. The reason is that, as we have seen, “culture concepts” are actually best thought of as bundles of ontic claims about cultural kinds (including locational, compositional, etiological, etc.), defining possible taxonomies of such kinds. As such, it is unlikely that there are, in fact, “two” (or three or four) versions of what culture is (e.g., “system” versus “cognitive”). Instead, there will be as many culture concepts as there are coherent (or maybe not so coherent, but at least defensible) combinations of ontic claims we make about culture. This, I think, is even more reason to move away from (always contested) culture concepts and to focus the analysis on cultural kinds, in all their motleyness, varieties, and interconnections.

It is here that Norton’s (2020) consideration of the role of “extended” and “distributed” approaches to cognition may have some radical implications for the way we usually draw (or, presumably, deconstruct) the boundaries between culture and cognition and consider the interrelation between the two domains. By cutting cognition “up to size,” Norton seeks to level the playing field between the two domains, to avoid smallism and the bias toward thinking that the cognitive “underlies” and contains the basic properties driving an epiphenomenal “cultural froth” located at higher levels. But both the extended and distributed cognition perspectives may have an even more surprising implication: a reversal of our usual conceptualization of the relative scaling relations holding between the cultural and the cognitive.

Flipping the Script

In the traditional “narrow” version that Norton persuasively argues against, the cognitive is small because it is individual and brain-bound, in relation to (traditional conceptions of) culture as located at a “higher” (shared, intersubjective, public) level. In Norton’s (2020) approach, the cognitive is “cut up to size” so that it meets the cultural on equal terms (so that neither is smaller than the other). We can find cognition in the world, and even (in the distributed case) between individuals, or in larger socio-ecological settings where human activity takes place; the cognitive is not an infrastructure underlying the cultural, but can be found empirically in heterogeneous assemblages of actors, their interactions and relationships, artifacts, and ecological settings.

However, if we follow the logic of the extended and distributed conceptions all the way through, especially the idea of redefining the concept of a cognitive system to include more than a brain (or even an embodied brain), namely, every worldly or environmental process contributing to the cognitive task (which, in Hutchins’s approach, includes other people and their activities), then it is easy to see that, in the modal case, the cognitive is usually bigger than the cultural. That is, in most examples of cognition (taking, for the sake of argument, the ideas of extended and distributed cognition as non-controversial), the cognitive system represents the whole, and cultural kinds (whether artifactual or cultural cognitive) the parts.

This means that, in the widest sense, cognition is the process that the whole cognitive system performs, and cultural cognitive kinds are the vehicles via which it happens. This includes the “vanilla” cases of individualized cognitive extension, e.g., the transactive memory of Otto and his notebook (Clark & Chalmers, 1998), or the completion of a hard multiplication problem by partially offloading computation to pen and paper (Norton, 2020: 52). In these cases, cultural cognitive kinds internalized by people (procedures that allow for the manipulation of numbers and arithmetic operations in the “head”), properly coupled to artifactual kinds located in the world (paper, pencil) and linked to cultural cognitive kinds internalized as skills (reading, writing), help realize the cognitive system in question, in what can be called extended cultural cognition.

Here cognition is the whole of what the cognitive system does, and cultural kinds (whether artifactual or cultural cognitive) are the (smaller) materials, vehicles, circuits, and mechanisms (whether in people or the world, or both) making successful cognition possible. Note that this argument for size reversal, if intuitive for the standard case of cognitive extension for a single individual offloading activity to artifacts and the environment, applies with a vengeance to Hutchins-style distributed cognitive systems.

In this last case, as Norton notes, a whole panoply of individuals and artifacts in an ecological setting is the entire cognitive system in question. It is clear that, in this case, cultural cognitive kinds and their various couplings and interactions are smaller than the “cognition” enacted by the system as a whole. Thus, if in the “narrow,” “neurocognitive” version of the culture-cognition link the “micro-foundations” of culture are cognitive, it is easy to see that in most real-world ecological settings, as noted by distributed cognition theorists, the micro-foundations of cognition (if we still wanted to use this term in a non-smallist way) are cultural, because they are realized via the causal coupling and interplay of underlying cultural kinds distributed across people, their activities, and the world.

Relative Size Agnosticism and the Cultural-Cognitive Boundary

There is, of course, no need to go all the way to unilateral advocacy of a complete reversal (smaller cultural kinds underlying big cognitive processes) to appreciate the force of the argument. When considered together, the decomposition of the traditional “culture concepts” into motley cultural kinds endowed with distinct clusters of properties, and the “supersizing” of the notion of cognition to include cases of cognitive extension and distribution (where the “size” of the relevant cognitive system is left to empirical specification rather than being restricted to individuals by metaphysical fiat), jointly imply that the issue of “relative size” between the cultural and cognitive domains (which one is bigger and which one is smaller) should also not be prejudged.

Just like we should be agnostic with respect to location claims about cultural kinds, we should be agnostic with respect to both the absolute “size” of cognitive systems (an ontic claim with respect to the cognitive) and, by implication, the relative size of cognition with respect to the cultural. There are three ideal-typical possibilities in this respect:

  • In some cases (a lot of them covered in DiMaggio’s (1997) original essay, such as pluralistic ignorance or intergroup bias), the cognitive “underlies” the more macro-cultural process (see also Sperber, 2011 for other examples). These cases, although taken as paradigmatic in some brands of work in culture and cognition (as Norton persuasively argues), may actually be more conceptually peripheral than previously presumed. This means that the traditional way of arranging the cognitive with respect to the cultural, where the cognitive is small and underlies the bigger cultural processes, is, as argued by Norton, also less substantively relevant than previously thought.
  • Another arrangement is one where the cultural and the (distributed) cognitive are blown up to (more or less) equal sizes, and thus partake cooperatively in orchestrating the structure, functioning, and organization of cultural cognitive systems. As Norton (2020: 55) notes, “in distributed cognition systems, culture…play[s] a centrally influential role in the cognitive process. Indeed, we can say that culture in distributed cognition is constitutive of the cognitive architecture of the system, central to cognition rather than layered on top of or subject to it.”
  • Finally, there is the size reversal option, in which cultural kinds underlie the functioning of cognitive systems broadly construed, so that cognition is the “bigger” process happening in the system, and cultural kinds are the underlying entities partially contributing to the realization of that process. This possibility, although rarely considered or taken seriously as a route to theory building (due mainly to tendentious definitions of culture), may be more empirically pervasive and explanatorily decisive in most real-world ecological settings, and is thus deserving of further theoretical reflection and development.

References

Clark, A. (2008). Supersizing the Mind: Embodiment, Action, and Cognitive Extension. Oxford University Press.

Clark, A., & Chalmers, D. (1998). The Extended Mind. Analysis, 58(1), 7–19.

DiMaggio, P. (1997). Culture and Cognition. Annual Review of Sociology, 23, 263–287.

Hutchins, E. (1995). Cognition in the Wild. MIT Press.

Leschziner, V. (2015). At the Chef’s Table: Culinary Creativity in Elite Restaurants. Stanford University Press.

Lizardo, O., Sepulvado, B., Stoltz, D. S., & Taylor, M. A. (2019). What can cognitive neuroscience do for cultural sociology? American Journal of Cultural Sociology, 1–26.

Menary, R. (2010). Introduction: The extended mind in focus. https://psycnet.apa.org/record/2009-23655-001

Norton, M. (2019). Meaning on the move: synthesizing cognitive and systems concepts of culture. American Journal of Cultural Sociology, 7(1), 1–28. https://doi.org/10.1057/s41290-017-0055-5

Norton, M. (2020). Cultural sociology meets the cognitive wild: advantages of the distributed cognition framework for analyzing the intersection of culture and cognition. American Journal of Cultural Sociology, 8(1), 45–62.

Rowlands, M. (2009). Extended cognition and the mark of the cognitive. Philosophical Psychology, 22(1), 1–19.

Rowlands, M. (2010). The New Science of the Mind: From Extended Mind to Embodied Phenomenology. MIT Press.

Sperber, D. (2011). A naturalistic ontology for mechanistic explanations in the social sciences. In P. Demeulenaere (Ed.), Analytical sociology and social mechanisms (pp. 64–77). Cambridge University Press.

Wheeler, M. (2015). A tale of two dilemmas: cognitive kinds and the extended mind. http://dspace.stir.ac.uk/handle/1893/23589

Wilson, R. A. (2004). Boundaries of the Mind: The Individual in the Fragile Sciences – Cognition. Cambridge University Press.

An Argument for False Consciousness

Philosophers generally discuss belief-formation in one of two ways: internalist and externalist. Both are concerned with the justification of the beliefs that a given agent purports to have. Internalists and externalists dispute the kinds of justification that can be given for a belief, in order to lend or withhold epistemic justification for the belief in question. For the internalist, a belief is justified if the grounds for it come from something internal to the believer herself, which she can control. For the externalist, a belief can be justified without such internal support. We can still be justified in believing something even if there are no grounds for belief that we can individually control. Between the internalist and the externalist, “justifiability” concerns whether a belief can be present or whether what looks like belief is really something else (e.g., an “unfounded hunch,” “dogmatism,” “false consciousness”).

Is such a dispute relevant for sociology? The answer, I argue, must be an unqualified yes: such a dispute is very relevant for sociology, but to see why requires a significant change in what it means to justify a belief. As a simple causal statement, sociology seems to support a belief externalism. After all, sociologists are in the business of describing beliefs that find presumably external sources in things like culture, meaning structures, and ideology. Yet, as a matter of action, sociologists seem more inclined toward belief internalism. The beliefs that drive agency are ones that agents themselves seem to control, as internal mental states, at least to the degree that they have a motivation to act and are not “cultural dopes” simply going through the motions. 

This is not a contradiction, it seems, because sociologists do not claim to be in the business of evaluating whether belief is justifiably present or not. In most cases, belief is unproblematically present as a matter of course. Sociologists are far more concerned with belief as an empirical process and beliefs as empirical things that can be used to explain other things. When confronted with questions about the “evaluation” or “justification” of beliefs, sociologists tend to think in terms of “value-neutrality.” The discipline can explain beliefs with even the most objectionable content without evaluating whether they are good or bad in a moral sense, or true or false in an epistemic sense. As some have suggested, not being committed to value-neutrality about beliefs would change our questions entirely and make for a very different discipline (see Abend 2008). 

I want to claim that there is a different way in which sociologists do evaluate beliefs (quite radically in fact) for the simple fact that they commit to belief externalism. This carries significant stakes for sociology as it touches upon a way in which the discipline recognizes and legitimates the presence of belief and by doing so countervails efforts not to recognize it or recognize it in a different way.

Consider a few vignettes (adapted from Srinivasan 2019a):

RACIST DINNER TABLE: A young black woman is invited to dinner at her white friend’s house. Her host’s father seems polite and welcoming, but over the course of the dinner the guest develops the belief that her friend’s father is racist. Should the guest be pressed on the sources of this belief, she says she simply “knows” that her friend’s father is racist. In fact, her friend’s father is racist though his own family does not know it.

CLASSIST COLLEGE: A working class student attends a highly selective college that prides itself on its commitment to social justice. She is assured by her advisor that while much of the student body comes from the richest 10%, she will feel right at home. Over the course of the first month of her attendance, however, the student experiences several instances where her class background becomes an explicit point of attention, ridicule and exclusion. She comes to believe that the university is not meant for those who come from her background. She tells this to her advisor who tells her in turn that, perhaps, she is being too sensitive. No one is trying to shun her.

DOMESTIC VIOLENCE: A woman in a poor rural village is regularly beaten and abused by her husband. Her husband expresses regret for the abuse, but explains to his wife that she “deserves” it based on her not being dutifully attentive to him. The woman believes that she only has herself to blame, an opinion echoed by her family and friends. She has never heard a contrary opinion.

Any sociologist who, having read these vignettes, is then asked “Are beliefs present?” would very likely say “of course beliefs are present.” In fact, the question would probably be the furthest thing from their minds. A sociologist would probably find such a question annoying and of dubious validity. There are far more pressing matters in these vignettes. Here is my wager: in saying that belief is present, sociologists actually make a radical evaluation of these beliefs, because they commit to belief externalism. In other words, they commit to the view that belief can be present even if the believer does not have grounds for belief that they can individually control.

To consider the significance of this, consider some arguments in the philosophy of mind that are specifically meant to discredit belief externalism. As Srinivasan explains, the three cases above seem directly analogous to three famous thought experiments that each have the purpose of showing how belief cannot be present under the circumstances found in each of the vignettes (though the third is slightly tricky). A relevant disanalogy will help show why sociology’s commitment to belief externalism is significant and radical. 

RACIST DINNER TABLE corresponds to the CLAIRVOYANT experiment (BonJour 1980), in which an individual believes he completely understands a certain subject matter under normal circumstances simply because he does not possess evidence, reasons, or counterarguments of any kind against the possibility of his having a clairvoyant cognitive power. “One day [the clairvoyant] comes to believe that the President is in New York City, though he has no evidence either for or against this belief. In fact the belief is true and results from his clairvoyant power, under circumstances in which it is completely reliable.” To say the belief is justified in this instance is absurd, and this seems to prove the necessity to “reflect critically upon one’s beliefs … [in order to] preclude believing things to which one has, to one’s knowledge, no reliable means of epistemic access” (BonJour 1980: 63). To have a reliable means of epistemic access (e.g., this is why I believe this) is to have internalist grounds for belief that one can control. Without them, we don’t have beliefs but “unfounded hunches.”

CLASSIST COLLEGE corresponds to the DOGMATIST experiment (Lasonen-Aarnio 2010) in which someone in an art museum forms a belief about a given sculpture as being red, though she is later told by a museum staff member that when the museum visitor saw the sculpture it had been illuminated by a hidden light that momentarily made it seem like it was red when in fact it is white. Even when the museum patron is told this, however, she persists in her belief that the sculpture is red. In this case, such a belief would not be justified because the internalist grounds that would have made it justifiable no longer apply. To justifiably believe that the sculpture is red, the museum patron could not have witnessed the sculpture in its white state and/or could not have been told by the museum staff member why her belief is inaccurate. She is a dogmatist because, while the second condition does apply, her belief persists nevertheless.

DOMESTIC VIOLENCE corresponds to the famous BRAIN-IN-A-VAT experiment, in which someone forms beliefs while trapped (Matrix-style) in a vat of liquid goo that feeds electrochemical signals directly to their nervous system. For some internalists, belief is justifiably present in such circumstances, based on the internalist criterion that the person in the vat will have “every reason to believe [that] perception is a reliable process. [The] mere fact unbeknown to [them that] it is not reliable should not affect the justification” (Cohen 1984: 81-82).

In all three cases, there are analogous circumstances between the vignettes and the thought experiments. The question is why it seems unproblematic to ascribe beliefs in the vignettes while it seems far more problematic to ascribe them in the thought experiments. The answer comes in a relevant disanalogy: the vignettes account for belief-formation by referencing a relational process, of some kind, that an internalist simply cannot recognize and the externalist in these cases only latently recognizes. 

As suggested above, for a sociologist to say that “yes beliefs are present” in such circumstances as RACIST DINNER TABLE, CLASSIST COLLEGE, and DOMESTIC VIOLENCE is unproblematic to the point of absurdity. Yet, if the thought experiments reveal anything, they reveal why attributing belief in these circumstances is really saying something. And it says something without having to rely on CLAIRVOYANT, DOGMATIST or BRAIN-IN-A-VAT kinds of fallacies. This is because sociologists have a very important thing in their back-pocket, something deeply familiar to them: the ability to account for belief-formation, again, in “terms of structural notions rather than individualist ones.” 

This may all seem obvious enough, but it actually opens a large and important horizon of which Omar and I (Strand and Lizardo 2015; Strand 2015) have just barely scratched the surface. Belief-formation (and desire-formation) is a primary sociological problem because accounting for the presence of belief is a very good way of sorting out distinctively social effects of various theoretically important kinds that also happen to be inextricably cognitive. But let’s take this one step further. The internalist critique of externalism revolves around the fact that externalists can only describe the presence of belief under such and such circumstances. Externalism is not a normative theory that can be “action-guiding [and] operational under conditions of uncertainty and ignorance” (Srinivasan 2019a). Those who have internalist grounds for belief can presumably apply them in conditions of uncertainty and ignorance. Hence, on this view, belief should be formed on the grounds of internal criteria and the subject’s individual perspective.

But consider what externalism might look like as a normative theory. What would it mean for beliefs formed without internal criteria, and only through relationships with others, to carry a greater or equivalent epistemic good compared to beliefs formed through internal criteria that otherwise seem far more respectable, ethically speaking (insofar as they allow us to attribute blame and responsibility)? As the comparison between BRAIN-IN-A-VAT and DOMESTIC VIOLENCE suggests, internalist criteria can obviously mislead the attribution of belief in circumstances where they do not apply and where the recognition of externalist grounds for belief can reveal false consciousness. More specifically, the RACIST DINNER TABLE/CLAIRVOYANT and CLASSIST COLLEGE/DOGMATIST examples suggest that the externalist belief-formation evidenced in these circumstances carries a distinct epistemic good. None of this should be unfamiliar to sociologists. Sociologists are often the ones who recognize, defend, and legitimate the presence of belief in these circumstances, despite all countervailing forces.

All of this rests on a certain genealogical anxiety, however, as Srinivasan (2019b) appreciates. As a field, cognitive science massively contributes to this anxiety. For externalism of this sort, of the sociological sort, makes a radical claim to the degree that it radically departs from folk-psychological familiarity, and its overlap with ethical respectability, at least if we take it to its logical conclusion. We must conclude that our beliefs, even our good ones, even our “action-guiding” ones, result from some kind of “lucky” or “unlucky” inheritance. They must be genealogical, in other words, and cannot result from some internalist criterion that remains indelibly ours, under our control, and which reflects kindly upon us (or poorly, depending on how lucky we are). I will save discussion of these implications for another post.

 

References

Abend, Gabriel. (2008). “Two Main Problems in the Sociology of Morality.” Theory and Society 37: 87-125.

BonJour, Laurence. (1980). “Externalist Theories of Empirical Knowledge.” Midwest Studies in Philosophy 5: 53-73.

Cohen, Stewart. (1984). “Justification and Truth.” Philosophical Studies 46: 279-296.

Lasonen-Aarnio, Maria. (2010). “Unreasonable Knowledge”. Philosophical Perspectives 24: 1-21.

Strand, Michael. (2015). “The Genesis and Structure of Moral Universalism: Social Justice in Victorian Britain, 1834-1901.” Theory and Society 44: 537-573.

Strand, Michael and Omar Lizardo. (2015). “Beyond World Images: Belief as Embodied Action in the World.” Sociological Theory 33: 44-70.

Srinivasan, Amia. (2019a). “Radical Externalism.” Philosophical Review

_____. (2019b). “Genealogy, Epistemology, and Worldmaking.” Proceedings of the Aristotelian Society 119: 127-156.

When is Consciousness Learned?


Continuing with the theme of innateness and durability from my last post, consider the question: are humans born with consciousness? In a ground-breaking (and highly contested) work, the psychologist Julian Jaynes argued that since only humans have consciousness, it must have emerged at some point in human history. In other words, consciousness is a socially and culturally acquired skill (Williams 2011).

To summarize his argument: Jaynes holds that until as recently as the Bronze Age (the third millennium BCE), humans were not, strictly speaking, conscious. Rather, humans experienced life in a proto-conscious state he refers to as “bicameralism.” Roughly around the “Axial Age” (cf. Mullins et al. 2018), bicameral humans declined and conscious, “unicameral” humans emerged.

One piece of evidence he deploys in support of his thesis is that the content of the Homeric poem the Iliad is substantially different from that of the later Odyssey. The former, he argues, is devoid of references to introspection, while the latter is full of them. Jaynes argues a similar pattern emerges between earlier and later books of the Christian Bible. In a recent attempt (see also Raskovsky et al. 2010) to test this specific hypothesis quantitatively, Diuk et al. (2012) use Latent Semantic Analysis to calculate the semantic distances between the reference word “introspection” and all other words in a text. Remarkably, their findings are consistent with Jaynes’ argument (see also: http://www.julianjaynes.org/evidence_summary.php).
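To give a sense of the method (a toy sketch, not Diuk et al.’s actual pipeline or corpus; the mini-texts and the reference word “thoughts” below are invented for illustration), LSA builds low-dimensional word vectors by truncated SVD of a term-document matrix and then measures cosine distances from a reference word to every other word:

```python
import numpy as np

# Toy corpus standing in for successive sections of a text; passages,
# vocabulary, and reference word are all illustrative assumptions.
docs = [
    "i remember my thoughts and feelings",
    "he seized the sword and struck",
    "she reflects on her own mind and thoughts",
    "the army marched and the walls fell",
]
vocab = sorted({w for d in docs for w in d.split()})
idx = {w: i for i, w in enumerate(vocab)}

# Term-document count matrix (rows = terms, columns = documents)
X = np.zeros((len(vocab), len(docs)))
for j, d in enumerate(docs):
    for w in d.split():
        X[idx[w], j] += 1

# LSA: a truncated SVD gives each term a low-dimensional vector (rows of U * S)
U, S, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
term_vecs = U[:, :k] * S[:k]

def cosine_distance(a, b):
    return 1 - (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Semantic distance from the reference word to every word in the vocabulary
ref = term_vecs[idx["thoughts"]]
dists = {w: cosine_distance(ref, term_vecs[idx[w]]) for w in vocab}
```

Diuk et al. apply this kind of distance measure across the sequentially ordered books of a corpus, tracking how close each section sits to introspection-related vocabulary.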

From Diuk et al. (2012): “Introspection in the cultural record of the Judeo-Christian tradition. The New Testament as a single document shows a significant increase over the Old Testament, while the writings of St. Augustine of Hippo are even more introspective. Inset: regardless of the actual dating, both the Old and New Testaments show a marked structure along the canonical organization of the books, and a significant positive increase in introspection.”

Is Consciousness Learned in Childhood?

If consciousness, as Jaynes argued, is a product of social and cultural development, does this also mean that we each must “learn” to be conscious? Some contemporary research suggests something like this might be the case.

To begin we need a simple definition: consciousness is our “awareness of our awareness” (sometimes called metacognition). A problem with considering the extent of our conscious awareness is the normative baggage associated with “not being conscious.” For the folk, it is somewhat insulting to say people are “mindlessly” doing something, and we tend to value “self-reflection.” Certainly this is a generalization, but let’s bracket the notion that non-conscious experience is somehow less good than being conscious. The bulk of what the brain does is below the level of our awareness. For starters, when we are asleep, under general anesthesia, or even in a coma, the brain continues to be quite active. Moving to our waking lives, the kinds of skills and habits that Giddens (1979) confusingly calls “practical consciousness” are deployed at a speed that outstrips our ability to be aware of them until after the fact. The kind of skillful execution associated with athletes and artists, for instance, is often associated with Csikszentmihalyi’s “flow” precisely because there is a “letting go” and letting the situation take over. All this is to say we are conscious far less than we probably think. Indeed, asking us when we are not conscious (Jaynes 1976:23):

…is like asking a flashlight in a dark room to search around for something that does not have any light shining upon it. The flashlight, since there is light in whatever direction it turns, would have to conclude that there is light everywhere. And so consciousness can seem to pervade all mentality when actually it does not.

A second major confusion is the assumption that consciousness is how humans learn ideas or form concepts. As we discuss elsewhere (Lizardo et al. 2016), memory systems are multiple, and while we learn via conscious processes, the bulk of what we learn is via non-conscious processes in “nondeclarative” memory systems (Lizardo 2017). This is especially the case for the most basic concepts we learn from infancy onward. In fact, Durkheim’s argument that it is through ritual—embodied experience—that so-called “primitive” groups learned the “basic categories of the understanding” more or less prefigures this point (Rawls 2001).

Rather than the experience-near cognition associated with everyday life, consciousness involves the introspection and “time traveling” associated both with reconstructing our own biographies from memory and with imagining possible (and impossible) futures. A recent school of thought in cognitive science—referred to as “enactivism”—takes a rather radical approach in arguing that the vast majority of human cognition is not, strictly speaking, contentful (Hutto and Myin 2012, 2017). Indeed, a lot of “remembering” does “not require representing any specific past happening or happenings… remembering is a matter of reenactment that does not involve representation” (Hutto and Myin 2017:205). But what about the autobiographical remembering involved in introspection and self-reflection, which we might consider the hallmark of consciousness?

To answer this — within the broader enactivist project — they draw on a group of scholars who argue that autobiographical memory is “a product of innumerable social experiences in cultural space that provide for the developmental differentiation of the sense of a unique self from that of undifferentiated personal experience” (Nelson and Fivush 2004:507). These scholars find that “a specific kind of memory emerges at the end of pre-school period” (Nelson 2009:185). Such a theory offers a plausible explanation for “infantile amnesia” — the inability to recall events prior to about three or four — an explanation much less ridiculous than Freud’s contention that these memories were repressed so as to “screen from each one the beginnings of one’s own sex life.”

These theorists go on to argue that “a new form of social skill” is associated with this “new type of memory” (Hoerl 2007:630). This skill is “narrating” one’s experience. Parents’ reminiscing with children plays a central role in the acquisition of this skill (Nelson and Fivush 2004:500):

…parental narratives make an important contribution to the young child’s concept of the personal past. Talking about experienced events with parents who incorporate the child’s fragments into narratives of the past not only provides a way of organizing memory for future recall but also provides the scaffold for understanding the order and specific locations of personal time, the essential basis for autobiographical memory.

Returning to Jaynes, we find a remarkably analogous description of the emergence of consciousness as the “development on the basis of linguistic metaphors of an operation of space in which an ‘I’ could narratize out alternative actions to their consequences” (Jaynes 1976:236). That is, we could assert, consciousness is this social skill, emerging from the (embodied and social) practice of reminiscing with parents and classmates (or the like) when we are around three years old.

References

Diuk, Carlos G., D. Fernandez Slezak, I. Raskovsky, M. Sigman, and G. A. Cecchi. 2012. “A Quantitative Philology of Introspection.” Frontiers in Integrative Neuroscience 6:80.

Giddens, A. (1979). Central problems in social theory. Berkeley: University of California press.

Hoerl, C. 2007. “Episodic Memory, Autobiographical Memory, Narrative: On Three Key Notions in Current Approaches to Memory Development.” Philosophical Psychology.

Hutto, Daniel D. and Erik Myin. 2012. Radicalizing Enactivism: Basic Minds without Content. MIT Press.

Hutto, Daniel D. and Erik Myin. 2017. Evolving Enactivism: Basic Minds Meet Content. MIT Press.

Jaynes, Julian. 1976. The Origin of Consciousness in the Breakdown of the Bicameral Mind.

Lizardo, Omar. 2017. “Improving Cultural Analysis: Considering Personal Culture in Its Declarative and Nondeclarative Modes.” American Sociological Review 82(1):88–115.

Lizardo, Omar, Robert Mowry, Brandon Sepulvado, Dustin S. Stoltz, Marshall A. Taylor, Justin Van Ness, and Michael Wood. 2016. “What Are Dual Process Models? Implications for Cultural Analysis in Sociology.” Sociological Theory 34(4):287–310.

Mullins, Daniel Austin, Daniel Hoyer, Christina Collins, Thomas Currie, Kevin Feeney, Pieter François, Patrick E. Savage, Harvey Whitehouse, and Peter Turchin. 2018. “A Systematic Assessment of ‘Axial Age’ Proposals Using Global Comparative Historical Evidence.” American Sociological Review 83(3):596–626.

Nelson, Katherine. 2009. Young Minds in Social Worlds: Experience, Meaning, and Memory. Harvard University Press.

Nelson, Katherine and Robyn Fivush. 2004. “The Emergence of Autobiographical Memory: A Social Cultural Developmental Theory.” Psychological Review 111(2):486–511.

Raskovsky, I., D. Fernández Slezak, C. G. Diuk, and G. A. Cecchi. 2010. “The Emergence of the Modern Concept of Introspection: A Quantitative Linguistic Analysis.” Pp. 68–75 in Proceedings of the NAACL HLT 2010 Young Investigators Workshop on Computational Approaches to Languages of the Americas, YIWCALA ’10. Stroudsburg, PA, USA: Association for Computational Linguistics.

Rawls, A. W. (2001). Durkheim’s treatment of practice: concrete practice vs representations as the foundation of reason. Journal of Classical Sociology, 1(1), 33-68.

Williams, Gary. 2011. “What Is It like to Be Nonconscious? A Defense of Julian Jaynes.” Phenomenology and the Cognitive Sciences 10(2):217–39.

Exaption: Alternatives to the Modular Brain, Part II

Scientists discovered the part of the brain responsible for…

In my last post, I discussed one alternative to the modular theory of the mind/brain relationship: connectionism. Such a model is antithetical to modularity in that it posits only distributed networks of neurons in the brain, not special-purpose processors.

One strength of the modular approach, however, is that it maps quite well to our folk psychology. And, much of the popular discourse surrounding research in neuroscience involves the celebrated “discovery” of the part of the brain responsible for X. A major theme of the previous posts is that the social sciences should be skeptical of the baggage of our folk psychology. But, is there not some truth to the idea that certain regions of the brain are regularly implicated in certain cognitive processes?

The earliest attempts at localization relied on an association between some diagnosed syndrome—such as the aphasia discussed in the previous posts—and abnormalities of the brain’s structure (i.e., lesions) identified in post-mortem examinations. For example, Paul Broca, discussed in my previous post, noticed lesions on a particular part of the brain in patients with difficulty producing speech. This part of the brain became known as Broca’s area, but researchers have only a loose consensus as to the boundaries of the area (Lindenberg, Fangerau, and Seitz 2007).

Furthermore, the relationship between lesions in this area and aphasia is partial at best. A century later, Nina Dronkers, the Director of the Center for Aphasia and Related Disorders, states (2000:60):

After several years of collecting data on chronic aphasic patients, we find that only 85% of patients with chronic Broca’s aphasia have lesions in Broca’s area, and only 50–60% of patients with lesions in Broca’s area have a persisting Broca’s aphasia.

More difficult for the modularity thesis, those with damage to Broca’s area who also have Broca’s aphasia usually have other syndromes as well. This implies that the area is multi-purpose, and thus not a single-purpose language production module (see the book-length discussion in Grodzinsky and Amunts 2006). One reason I focus on Broca’s area (apart from my interest in linguistics) is that it is considered the exemplary case for the modular theory that remains quite dominant (if implicit) in much neuroscientific research (Viola and Zanin 2017).

Part of the difficulty with assessing even weak modularity hypotheses, however, is that neuroanatomical research continues to revise the “parcellation” of the brain. The first such attempt, by Korbinian Brodmann, was published in German in 1909 as “Comparative Localization Studies in the Brain Cortex, its Fundamentals Represented on the Basis of its Cellular Architecture.” He divided the cerebral cortex (the outermost “layer” of the brain) into 52 regions based on the structure of cells (cytoarchitecture) sampled from different sections of brains taken from 64 different mammalian species, including humans (see Figure 1). Although Brodmann’s studies were purely anatomical, he wrote: “my ultimate goal was the advancement of a theory of function and its pathological deviations.” Nevertheless, he rejected what he saw as naive attempts at functional localization:

[Dressing] up the individual layers with terms borrowed from physiology or psychology…and all similar expressions that one encounters repeatedly today, especially in the psychiatric and neurological literature, are utterly devoid of any factual basis; they are purely arbitrary fictions and only destined to cause confusion in uncertain minds.

Figure 1. Brodmann’s hand-drawn parcellation of the human brain.

Over a century later, many researchers continue to refer to “Brodmann’s area” numbers as general orientation markers. More recently (see Figure 2), using data from the Human Connectome Project and supervised machine learning techniques, a team of researchers characterized 180 areas in each hemisphere — 97 new areas and 83 areas identified in previous work (Glasser et al. 2016). This study used a “multi-modal” technique which included cytoarchitecture, like Brodmann, but also connectivity, topography and function. For the latter, the study used data from “task functional MRI (tfMRI) contrasts,” wherein resting state measures are compared with measures taken during seven different tasks.
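The classifier-based logic can be made concrete with a deliberately toy sketch (entirely synthetic data; the actual study trained a neural-network classifier on real multi-modal MRI features, and the variable names below are my assumptions): treat each cortical location as a feature vector and parcellation as assigning each location to the parcel whose learned “fingerprint” it most resembles.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins: each cortical vertex gets a feature vector (think
# myelin content, cortical thickness, connectivity, task response); parcel
# fingerprints would be learned from labeled training brains.
n_parcels, n_features, n_vertices = 3, 4, 200
fingerprints = rng.normal(size=(n_parcels, n_features))

# A "new brain": its vertices are noisy copies of their parcel's fingerprint
true_labels = rng.integers(0, n_parcels, n_vertices)
vertices = fingerprints[true_labels] + rng.normal(
    scale=0.1, size=(n_vertices, n_features)
)

# Nearest-fingerprint classification: assign each vertex to the closest parcel
dists = np.linalg.norm(vertices[:, None, :] - fingerprints[None, :, :], axis=2)
labels = dists.argmin(axis=1)
```

The point of the sketch is only that “multi-modal” means the fingerprint combines several kinds of measurement, so a parcel can still be identified even when one modality (e.g., the task data) is removed.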


One of these tasks was language processing, using a procedure developed by Binder et al. (2011) wherein participants read a short fable and then are asked a forced-choice question. Glasser et al. found reasonable evidence associating this language task with previously identified components of the “language network” (for recent overviews of the quest to localize the language network, see Friederici 2017 and Fitch 2018, both largely within the generative tradition). Specifically, these are Broca’s area (roughly area 44) and Wernicke’s area (roughly area PSL); they also identified an additional area, which they call 55b. Their findings also agreed with previous work going back to Broca on the “left-lateralization” of the language network—which means not that language is only in the left hemisphere (as some folk theories purport), but simply that the left areas show more activity in response to the language task than homologous areas in the right hemisphere (an early finding which inspired Jaynes’ Bicameral Mind hypothesis).

Does this mean we have discovered the “language module” theorized by Fodor, Chomsky, and others? Not quite, for three reasons. First, Glasser et al. found that if they removed the functional task data, their classifier was nearly as accurate at identifying parcels. Second, although the parcels were averaged over a couple hundred brains, the classifier was still able to identify parcels in atypical brains (whether this translated into changes in functionality was outside the scope of the study).

Third, and most important for our purposes, this work does not—and the researchers do not attempt to—determine whether parcels are uniquely specialized (or encapsulated, in Fodor’s terms). That is, while we can roughly identify a language network implicating relatively consistent areas across different brains, this does not demonstrate that such structures are necessary and sufficient for human language, and solely used for this purpose. Indeed, language may be a “repurposing” of brain parcels used for (evolutionarily or developmentally older) processes. This is precisely the thesis of neural “exaption.”

What is Exaption?

In the last few decades several new frameworks—under labels like neural reuse, neuronal recycling, neural exploitation, or massive redeployment—have attempted to offer a bridge between the modularity assumptions which undergird most neuroanatomical research, on one hand, and the connectionist assumptions which spurred advancements in artificial intelligence research and anthropology, on the other. Such frameworks also attempt to account for the fact that there is some consistency in activation across individuals, which does look a little bit like modularity.

The basic idea is exaption (also called exaptation): some biological tendencies or anatomical constraints may predispose certain areas of the brain to be implicated in certain cognitive functions, but these same areas may be recycled, repurposed, or reused for other functions. Exemplars of this approach are Stanislas Dehaene’s Reading in the Brain and Michael Anderson’s After Phrenology.

Perhaps the easiest way to give a sense of what this entails is to consider cases of neurodiversity, specifically the anthropologist Greg Downey’s essay on the use of echolocation by the visually impaired. While folk understandings may suggest that hearing becomes “better” in those with limited sight, this is not quite the case. Rather, one study finds that when listening to “a recording [which] had echoes, parts of the brain associated with visual perception in sighted individuals became extremely active.” In other words, the brain repurposed the visual cortex as a result of the individual’s practices. While most humans have limited echolocation abilities and the potential to develop this skill, only some will put in the requisite practice.

Another strand of research supporting neural exaption falls under the heading of “conceptual metaphor theory” (itself a subfield of cognitive linguistics). The basic argument from this literature is that people tend to reason about (target) domains with which they have had little direct experience by analogy to (source) domains with which they have had much direct experience (e.g., the nation is a family). As argued in Lakoff and Johnson’s famous Metaphors We Live By, this metaphorical mapping is not just figurative or linguistic, but rather a pre-linguistic conceptual mapping, and an—if not the—essential part of all cognition (Hofstadter and Sander 2013). Therefore, thinking or talking about even very abstract concepts re-activates a coalition of neural associations, many of which are fundamentally adapted to the mundane sensorimotor task of navigating our bodies through space. As we discuss in our forthcoming paper, “Schemas and Frames” (Wood et al. 2018), because talking and thinking recruit areas of our neural system often deployed in other activities—and at time-scales faster than conscious awareness can adequately attend to—our biography of embodiment channels our reasoning in ways that seem intuitive and yet are constrained by the pragmatic patterns of those source domains. This is fully compatible with the dispositional theory of the mental that Omar discusses.

What does this mean for sociology? I think there are numerous implications and we are just beginning to see how generative these insights are for our field. Here, I will limit myself to discussing just two, specifically related to how we tend to think about the role of language in our work. First, for an actor, knowing what text or talk means involves an actual embodied simulation of the practices it implies, very often (but not necessarily) in service of those practices in the moment (Binder and Desai 2011). Therefore, language should not be understood as an autonomous realm wherein meanings are produced by the internal interplay of contrastive differences within an always deferred linguistic system. Rather, following the later Wittgenstein in the Philosophical Investigations, “in most cases, the meaning of a word is its use.” Furthermore, as our embodiment is largely (but certainly not completely) shared across very different peoples (for example, most of us experience gravity all the time), there is a significant amount of shared semantics across diverse peoples (Wierzbicka 1996)—indeed without this, translation would likely be impossible.

Second, the repurposing of vocabulary commonly used in one context into a new context will often involve the analogical transfer of traces of the old context. This is because invoking such language activates a simulation of practices from the old context while one is in the new context (although this is dependent upon the accrued biographies of the individuals involved). This suggests that our language can be constraining in predictable ways, but not because the language itself has a structure or code rendering certain possibilities unthinkable. Rather, it is that language is the manifestation of a habit inextricably involved in a cascade of other habits, making it easier to execute (and therefore more probable) some actions or thoughts over others. For example, as Barry Schwartz argued in his (criminally under-appreciated) Vertical Classification, it is nearly universal that UP is associated with power and with the morally good as a result of (near-universal) practices we encounter as babies and children. This helps explain the persistence of the “height premium” in the labor market (e.g., Lundborg, Nystedt, and Rooth 2014).

 

References

Binder, Jeffrey R. et al. 2011. “Mapping Anterior Temporal Lobe Language Areas with fMRI: A Multicenter Normative Study.” NeuroImage 54(2):1465–75.

Binder, Jeffrey R. and Rutvik H. Desai. 2011. “The Neurobiology of Semantic Memory.” Trends in Cognitive Sciences 15(11):527–36.

Dronkers, N. F. 2000. “The Pursuit of Brain–language Relationships.” Brain and Language. Retrieved (http://www.ebire.org/aphasia/dronkers/the_pursuit.pdf).

Fitch, W. Tecumseh. 2018. “The Biology and Evolution of Speech: A Comparative Analysis.” Annual Review of Linguistics 4(1):255–79.

Friederici, Angela D. 2017. Language in Our Brain: The Origins of a Uniquely Human Capacity. MIT Press.

Glasser, Matthew F. et al. 2016. “A Multi-Modal Parcellation of Human Cerebral Cortex.” Nature 536(7615):171–78.

Grodzinsky, Yosef and Katrin Amunts. 2006. Broca’s Region. Oxford University Press, USA.

Hofstadter, Douglas and Emmanuel Sander. 2013. Surfaces and Essences: Analogy as the Fuel and Fire of Thinking. Basic Books.

Lindenberg, Robert, Heiner Fangerau, and Rüdiger J. Seitz. 2007. “‘Broca’s Area’ as a Collective Term?” Brain and Language 102(1):22–29.

Lundborg, Petter, Paul Nystedt, and Dan-Olof Rooth. 2014. “Height and Earnings: The Role of Cognitive and Noncognitive Skills.” The Journal of Human Resources 49(1):141–66.

Viola, Marco and Elia Zanin. 2017. “The Standard Ontological Framework of Cognitive Neuroscience: Some Lessons from Broca’s Area.” Philosophical Psychology 30(7):945–69.

Wierzbicka, Anna. 1996. Semantics: Primes and Universals. Oxford University Press, UK.

Wood, Michael Lee, Dustin S. Stoltz, Justin Van Ness, and Marshall A. Taylor. 2018. “Schemas and Frames.” Retrieved (https://osf.io/preprints/socarxiv/b3u48/).

 

Connectionism: Alternatives to the Modular Brain, Part I

In my previous post, I introduced the task of cognitive neuroscience, which is (largely) to locate processes we associate with the mind in the structures of the brain and nervous system (Tressoldi et al. 2012). I also discussed the classical and commonsensical approach, which conceptualizes the brain and mind relationship by analogy to computer hardware and software: distinct physical modules in the brain run operations on a limited set of innate codes (not unlike binary code) to produce outputs. One problem with this approach, as I discussed, is theoretical: the grounding problem.

Another objection is empirical. If there is a strict relationship between functional modularity and structural modularity, then researchers using brain imaging technology should be able to identify these modules in neural architecture with some consistency across persons. However, researchers do not find such obvious evidence (Genon et al. 2018). For example, some of the researchers who pioneered brain imaging techniques, specifically positron emission tomography (PET), attempted to find three components of the “reading system” (orthography, phonology, and semantics) (e.g., Petersen, Fox, Posner, & Mintun 1989). A decade later, researchers continued to disagree as to where the “reading system” is located (Coltheart 2004).

Part of the problem may be methodological: the technology remains rudimentary and advances come with tradeoffs (Turner 2016; Ugurbil 2016). fMRI is the most common technique used in research, and high-resolution machines can measure blood flow in voxels (3-dimensional pixels) that are about 1 cubic millimeter in size. With an average of 86 billion neurons in the human brain (Azevedo et al. 2009), there are on average 100,000 neurons in one voxel (although neurons vary widely in size and structure—see NeuroMorpho.org for a database of about 90,000 digitally reconstructed human and nonhuman neurons), and each neuron has hundreds to thousands of synapses connecting it (with varying strengths) to neighboring neurons. To interpret fMRI data, neuronal activity within each voxel is averaged, and signal must be extracted from noise using the kinds of statistical techniques familiar to many sociologists. Therefore, it is important to bear in mind that, like all inferential analyses, the findings are provisional.
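As a stylized illustration of that inferential step (toy numbers throughout, not real BOLD data; the ~3% signal change and noise level are assumptions), a single voxel’s task-versus-rest contrast is essentially a two-sample comparison of noisy, already-averaged signals:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy BOLD time series for one voxel: 50 "rest" and 50 "task" volumes.
# Means and noise scale are illustrative, not empirical values.
rest = rng.normal(loc=100.0, scale=5.0, size=50)
task = rng.normal(loc=103.0, scale=5.0, size=50)

# Each voxel value already averages over roughly 100,000 neurons; the
# inferential question is whether "task" differs from "rest" beyond noise.
diff = task.mean() - rest.mean()
se = np.sqrt(task.var(ddof=1) / len(task) + rest.var(ddof=1) / len(rest))
t_stat = diff / se  # a Welch-style two-sample t statistic
```

A real analysis repeats something like this for every voxel, which is why correction for massive multiple testing, and the provisional character of the resulting maps, matters so much.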

Connectionism in Linguistics and Artificial Intelligence

Even if non-invasive imaging resolution were extended to the neuronal level in real time, it may be that there are no special-purpose brain modules to be discovered. That is, it may be that cognitive functions are distributed across the brain and nervous system, in perhaps highly variable ways. Such an alternative relies on a network perspective and comes with many potential forebears, such as Aristotle, Hume, Berkeley, Herbert Spencer, and William James (Medler 1998).

Take for example Paul Broca and Carl Wernicke’s work on aphasia in the late 19th century. Noting the varieties of aphasia, or the loss of the ability to produce and/or understand speech or writing, Lichtheim (1885) concludes, following the work of Wernicke and Broca, that different aspects of language (i.e., speaking, hearing speech, understanding speech, reading, writing, interpreting visual language) are associated with different areas of the brain, but connected via a neural network. Interruption along any one of these pathways can account for observations of the many kinds of aphasia.

Figure from Lichtheim (1885:436), demonstrating the pathways connecting concepts (B) to “auditory images” (A) and “motor images” (M), each of which might be disrupted, causing a specific kind of aphasia.

If language were produced by a discrete module, one would predict global language impairment, not piecemeal impairments. Thus, this work developed the notion that so-called psychological “faculties” like language are distributed across areas of the brain. Following the logic of such evidence, an alternative perspective, later referred to as connectionism, argues that the brain has no discrete functional regions and does not operate on symbols in a sequential process like a computer, but rather is a distributed neural network which operates in parallel.

The connectionist approach (also called parallel distributed processing or PDP) coalesced primarily around the PDP Research Group, led by David Rumelhart and James McClelland at the Institute for Cognitive Science at UC-San Diego, as an alternative to the generative grammar approach to modeling brain activity. In particular, the publication of Parallel Distributed Processing in 1986 marked the beginning of the contemporary connectionist perspective.

A key difference with prior computational approaches is that connectionist theories dispense with the analogy of mind as software and brain as hardware. Mental processes are not encoded in some language of thought or translated into neural architecture, they are the neural networks. Furthermore, unlike Chomsky’s generative grammar, a connectionist approach to language can better account for geographical and/or sociological variation—dialects, accents, vocabulary, syntax—within what is commonly considered the “same” language. This is because learning (from a connectionist perspective) plays a key role in both language use and form, and thus is easily coupled with, for example, practice theoretic approaches which reconceptualize folk concepts, like beliefs, into a species of habit.
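A minimal sketch can make the PDP idea concrete (an illustrative toy network of my own, not a model from the 1986 volume): knowledge lives in continuous connection weights adjusted by learning, and all units update in parallel rather than executing symbolic rules in sequence. Here a tiny two-layer network learns the XOR mapping, the classic case a single linear unit cannot solve, by gradient descent:

```python
import numpy as np

rng = np.random.default_rng(42)

# XOR: the network must learn it purely by adjusting connection weights
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4))   # input -> hidden connection weights
b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1))   # hidden -> output connection weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

lr = 1.0
for _ in range(5000):
    # Forward pass: activation spreads in parallel across all units
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: error adjusts every connection strength (learning)
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)
```

Nothing in the trained network is a stored rule or symbol; the “knowledge” of XOR is nothing over and above the pattern of weights, which is the connectionist claim about mental processes generally.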

Take, for example, Basil Bernstein’s pioneering work on linguistic variation across class in England (1960). He demonstrated that, independent of non-verbal measures of intelligence, those in the middle class would use a broader range of vocabulary (and therefore would score higher on verbal measures of intelligence) because elaborating one’s thoughts (and talking about oneself) was an important practice (and therefore habit) for the middle class, but not for the working class. As Bernstein summarized, “The different vocabulary scores obtained by the two social groups may simply be one index, among many, which discriminates between two dominant modes of utilizing speech” (1960:276).

Connectionism and Cognitive Anthropology

By the 1960s, cognitive anthropology was beginning to see problems with modeling culture using techniques like componential analysis (a technique borrowed from linguistics; see Goodenough 1956), which followed a decision-tree, or “checklist,” logic. It is here that a small theory-group in cognitive anthropology—the “cultural models” school surrounding Roy d’Andrade, first at Stanford in the 1960s and then at UC-San Diego in the 1970s—informally circulated a working paper written by the linguist Charles Fillmore (while at Stanford) in which he outlined “semantic frames” as an alternative to checklist approaches to word meanings. In another paper circulated informally, “Semantics, Schemata, and Kinship,” referred to colloquially as “the yellow paper” (Quinn 2011:36), the anthropologist Hugh Gladwin (also at Stanford) made a similar argument. Rather than explain the meaning of familial words like “uncle” in minimalist terms, anthropologists should consider how children acquire a “gestalt-like household schema,” and how “uncle” fits within this larger cognitive structure.

However, it wasn’t until these cognitive anthropologists paired this new concept of cultural schemas with connectionism that, according to Roy d’Andrade (1995) and Naomi Quinn (2011), a paradigm shift occurred in cognitive anthropology in the 1980s and 1990s. Quinn recalls that the second chapter of Rumelhart et al.’s 1986 book, “Schemata and Sequential Thought Processes in PDP Models,” gave the schema a “new and more neurally convincing realization as a cluster of strong neural associations” (Quinn 2011:38).

Beyond d’Andrade and his students and collaborators like Quinn and Claudia Strauss at Stanford, Edwin Hutchins, who also worked closely with Rumelhart and McClelland’s PDP Research Group, was instrumental in extending connectionism from the individual brain to a social group with his concept of “distributed cognition.” Independently of this US West Coast cognitive revolution, the British anthropologist Maurice Bloch was one of the first to recognize the importance of connectionism for anthropology, beginning with his essay “Language, Anthropology and Cognitive Science,” in which he criticized his discipline for relying on an overly linguistic conceptualization of culture (a criticism that applies with full force to contemporary cultural sociology).

In a follow-up post, I will consider more recent advances in understanding the brain-mind relationship, specifically the concept of “neural reuse,” and assess the connectionist model in light of this work.

References

d’Andrade, Roy G. 1995. The Development of Cognitive Anthropology. Cambridge University Press.

Azevedo, Frederico A. C. et al. 2009. “Equal Numbers of Neuronal and Nonneuronal Cells Make the Human Brain an Isometrically Scaled-up Primate Brain.” The Journal of Comparative Neurology 513(5):532–41.

Bernstein, Basil. 1960. “Language and Social Class.” The British Journal of Sociology 11(3):271–76.

Bloch, Maurice. 1991. “Language, Anthropology and Cognitive Science.” Man 26(2):183–98.

Coltheart, Max. 2004. “Brain Imaging, Connectionism, and Cognitive Neuropsychology.” Cognitive Neuropsychology 21(1):21–25.

Genon, Sarah, Andrew Reid, Robert Langner, Katrin Amunts, and Simon B. Eickhoff. 2018. “How to Characterize the Function of a Brain Region.” Trends in Cognitive Sciences.

Goodenough, Ward H. 1956. “Componential Analysis and the Study of Meaning.” Language 32(1):195–216.

Lichtheim, Ludwig. 1885. “On Aphasia.” Brain 7:433–84.

Medler, David A. 1998. “A Brief History of Connectionism.” Neural Computing Surveys 1:18–72.

Petersen, S. E., P. T. Fox, M. I. Posner, M. Mintun, and M. E. Raichle. 1989. “Positron Emission Tomographic Studies of the Processing of Single Words.” Journal of Cognitive Neuroscience 1(2):153–70.

Quinn, Naomi. 2011. “The History of the Cultural Models School Reconsidered: A Paradigm Shift in Cognitive Anthropology.” Pp. 30–46 in A Companion to Cognitive Anthropology.

Rumelhart, David E., James L. McClelland, and the PDP Research Group. 1986. Parallel Distributed Processing. Cambridge, MA: MIT Press.

Tressoldi, Patrizio E., Francesco Sella, Max Coltheart, and Carlo Umiltà. 2012. “Using Functional Neuroimaging to Test Theories of Cognition: A Selective Survey of Studies from 2007 to 2011 as a Contribution to the Decade of the Mind Initiative.” Cortex 48(9):1247–50.

Turner, Robert. 2016. “Uses, Misuses, New Uses and Fundamental Limitations of Magnetic Resonance Imaging in Cognitive Science.” Philosophical Transactions of the Royal Society of London. 371(1705).

Ugurbil, Kamil. 2016. “What Is Feasible with Imaging Human Brain Function and Connectivity Using Functional Magnetic Resonance Imaging.” Philosophical Transactions of the Royal Society of London. 371(1705).

 

The Decision to Believe

As noted in a previous post, there are analytic advantages to reconceptualizing the traditional denizens of the folk-psychological vocabulary from the point of view of habit theory. So far, however, the argument has been negative and high-level; thinking of belief as habit, for instance, allows us to sidestep a bunch of antinomies and contradictions brought about by the picture theory. In this post, I would like to outline some positive implications of recasting beliefs as a species of habit. However, I will begin by discussing other overlooked implications of the picture theory and then (promise) move on to some clear substantive implications of the habit conception.

As noted before, the picture theory of belief is part of a more general set of folk (and even technical) conceptions of how beliefs work. I have already noted one of them, and that is the postulate of incorrigibility: if somebody assents to believing p, then we presume that they have privileged first-person knowledge on this point. It would be nonsensical (and socially uncouth) for a second person to say to them “I know better than you on this one; I don’t think you believe p.” Folk Cartesianism thus operates as a philosophical set of tenets (e.g. the idea that we have privileged introspective and maybe even non-inferential access to personal beliefs), and as a set of ethnomethods to coordinate social interaction (accepting people’s claims that they believe something when they tell us so without raising a fuss).

I want to point to another, less obvious premise of both folk and technical Cartesianism. This is the notion (which became historically decisive in the Christian West after the Protestant Reformation) that you get to choose what you believe. Just like before, this doubles as a philosophical precept and as an ethnomethod used to organize social relations in doxa-centric societies (Mahmood 2011). If you get to choose what you believe, and if your belief is obnoxious or harmful, then you are responsible for your belief and can be blamed, punished, burned at the stake and so on. As the sociologist David Smilde has also noted, there is a positive version of this implication of folk Cartesianism: if the belief is good for you (e.g. brings with it new friends, behaviors, resources) then we should expect you (under the auspices of charitable ascription) to choose to believe it. However, the weird prospect of people believing something not because they find its truth or validity compelling but because of instrumental reasons raises its ugly head in this case (Smilde 2007, 3ff; 100ff).

The idea of choosing to believe is not as crazy as it sounds. At least its negative version, the idea that we could bring up a consideration (let’s say a standard proposition) and withhold belief from it until we had scrutinized its validity, was central to the technical Cartesian method of doubt. Obviously, this requires that we have some reflective control over our decision to believe something or not while we consider it, so in this respect technical and folk Cartesianism coincide.

As Mike and I discuss in the 2015 paper, rejecting the picture theory (and associated technical/folk Cartesianism) of belief makes hash of the notion of “choosing to believe” as a plausible belief-formation story. Here the strict analogy to prototypical habits helps. Consider a well-honed habit; when exactly did you choose to acquire it? Even if you made a “decision” to start a new training regimen (e.g. yoga), at what point did it go from a decision to a habit? Did that involve an act of assent on your part? Now consider a traditional belief stated as an explicit linguistic proposition you claim to believe (e.g. “The U.S. is the land of opportunity”). When did you choose to believe that? We suggest that even a fairly informal bit of phenomenology will lead to the conclusion that you do not have credible autobiographical memories of having “chosen” any of the things you claim to believe. It’s as if, as Smilde points out, the original memory of decision is “erased” once the conviction to believe takes hold.

We suggest that the apparatus of erased memories and decisions that may or may not have taken place is an unnecessary outgrowth of the picture theory. Just like habits, beliefs are acquired gradually. The problem is that we take trivial (in the strict sense of trivia) encyclopedic statements (e.g. Bahrain is a country in the Middle East) as prototypical cases of belief. Because these could be acquired via fast memory binding after a single exposure, they seem to be the opposite of the way habits are acquired. However, these linguistic-assent-to-trivia beliefs are analytically worthless, because if there is anything like belief that plays a role in action, it is unlikely to take the form of linguistic trivia. That we believe (no pun intended) that these types of propositions are “in control” of action is itself also an unnecessary analytic burden produced by the picture theory.

Instead, as noted before, a lot of our action-implicated beliefs are clusters of dispositions and not passive acts of private assent to linguistic statements. However, trivia-style beliefs capable of being acquired via a single exposure are the main stock in trade of both the folk idea of belief and the intellectualist strand of philosophical discussion on the topic. Thus, they are important to deal with conceptually, even if, from the point of view of the habit theory, they represent a degenerate case, since from this perspective repetition, habituation, and perseverance are the hallmarks of belief (Smith and Thelen 2003).

That said, what if I told you that the folk-Cartesian notion of deciding to believe is inapplicable even in the case of trivia-style, one-shot belief? This is the key conclusion of what is now the most empirically successful program on belief formation in cognitive psychology. The classic paper here is Gilbert (1991), who traces the idea back to Spinoza, although the subject has been revived in the recent efflorescence of work in the philosophy of belief. See in particular Mandelbaum (2014) and Rott (2017). The latter notes that this was also a central part of the habit-theoretic notion of belief shared by the American pragmatists.

When it comes to one-shot propositions, people are natural-born believers. In contrast to the idea that conceptions are first considered while belief is withheld (as in the Cartesian model), what the evidence shows is that mere exposure to or consideration of a proposition leads people to treat it as a standing belief in future action and thinking. Thus, people seem incapable of not believing what they bring to mind. While this may seem like a “bug” rather than a feature of our cognitive architecture, it is perfectly compatible with both a habit-theoretic notion of belief and a wider pragmatist conception of mentality, of the sort championed by James, Dewey, and in particular the avowed anti-Cartesian C. S. Peirce. Just as every action could be the first in a long line that will fix a belief or a habit, the very act of considering something makes it relevant for us without the intervention of some effortful mental act of acceptance.

So just like you don’t know where your habits come from, you don’t know where your “beliefs” (in the one-shot trivia sense) come from either. The reason for this is that they got in there without having to get an invitation from you. In the same way, an implication of the Spinozist belief-formation process is that the thing that requires effort and controlled intervention is the withdrawal of belief (which is difficult and resource-demanding). This links up the Spinozist belief-formation story with dual-process models of thinking and action (Lizardo et al. 2016).

This is also in strict analogy with habit: while lots of habits are relatively easy to form (whether or not desirable), kicking a habit is hard. Even the habits that seem to us “hard” to form (e.g. going to the gym regularly) are not hard to form because they are habits; they are hard to form because they have to contend with even stronger competing habits (lounging at home) that will not go away without putting up a fight. It is the dissolution of the old habit, and not the making of the new one, that is difficult.

So with belief. Beliefs are hard to undo. Once again, because we mistakenly take the trivia, one-shot version of belief as the prototype, this seems like an exaggeration. If you believed “Bahrain is a country in Africa” and somebody told you “no, actually, it’s in the Persian Gulf,” it would take some mental energy to give up the old belief and form the new one, but not that much; most people would be successful.

But as noted in a previous entry, most beliefs are clusters of habitual dispositions, not singleton spectatorial propositions toward which we go yea or nay. So (easily!) developing these dispositional complexes in the context of, let’s say, a misogynistic society like the United States would mean that “unbelieving” the dispositional cluster glossed by the sentential proposition “women can’t make as good leaders as men” is not a trivial matter. For some, to completely unbelieve this may be close to impossible. This is something that our best social-scientific theories (whether “critical” or not) have yet to handle properly, because their conception of “ideology” is still trapped in the picture theory (this is a matter for future posts).

Beliefs, as Mike and I noted in a companion paper (Strand and Lizardo 2017), have an inertia (which Bourdieu referred to as “hysteresis”) that makes them hang around even after a third-person observer can diagnose them as “out of phase” or “outmoded.” This is the double-edged nature of their status as habits: easy to form (when no competing beliefs are around) and easy to use (once fixed via repetition), but hard to drop.

References

Gilbert, Daniel T. 1991. “How Mental Systems Believe.” American Psychologist 46 (2): 107–19.

Lizardo, Omar, Robert Mowry, Brandon Sepulvado, Dustin S. Stoltz, Marshall A. Taylor, Justin Van Ness, and Michael Wood. 2016. “What Are Dual Process Models? Implications for Cultural Analysis in Sociology.” Sociological Theory 34 (4): 287–310.

Mahmood, Saba. 2011. Politics of Piety: The Islamic Revival and the Feminist Subject. Princeton University Press.

Mandelbaum, Eric. 2014. “Thinking Is Believing.” Inquiry: An Interdisciplinary Journal of Philosophy 57 (1): 55–96.

Rott, Hans. 2017. “Negative Doxastic Voluntarism and the Concept of Belief.” Synthese 194 (8): 2695–2720.

Smilde, David. 2007. Reason to Believe: Cultural Agency in Latin American Evangelicalism. The Anthropology of Christianity. Berkeley: University of California Press.

Smith, Linda B., and Esther Thelen. 2003. “Development as a Dynamic System.” Trends in Cognitive Sciences 7 (8): 343–48.

Strand, Michael, and Omar Lizardo. 2017. “The Hysteresis Effect: Theorizing Mismatch in Action.” Journal for the Theory of Social Behaviour 47 (2): 164–94.

Is The Brain a Modular Computer?

As discussed in the inaugural post, cognitive science encompasses numerous sub-disciplines, one of which is neuroscience. Broadly defined, neuroscience is the study of the nervous system or how behavioral (e.g. walking), biological (e.g. digesting), or cognitive processes (e.g. believing) are realized in the (physical) nervous system of biological organisms.

Cognitive neuroscience, then, asks: how does the brain produce the mind?

As a starting point, this subfield takes two positions vis-à-vis two kinds of dualism. First is the rejection of Descartes’ “substance dualism,” which posits that the mind is a nonphysical “ideal” substance. Second is the assumption, referred to as “property dualism,” that so-called cognitive processes are somehow distinct from simple behavioral or biological processes. That is, processes we tend to label “cognitive”—imagining, calculating, desiring, intending, wishing, believing, etc.—are distinct yet “localizable” in the physical structures of the brain and nervous system. As Philosophy Bro summarizes:

…substance dualism says “Oh no you’ve got the wrong thing entirely, stupid” and property dualism says “yeah, no, go on, keep looking at the brain, we’ll get it eventually.”

Broadening the scope to the entire cognitive sciences, including the philosophy of mind, one would be hard-pressed to find a contemporary scholar who takes substance dualism seriously. Thus, whatever the relationship between the mental and the neural, it cannot be that the mental is a nonphysical ideal substance which cannot be studied in empirical ways.

The current debate, rather, is between various kinds of property dualist positions and those who argue against even property dualism. However, without diving into these philosophical debates, it is helpful to get a handle on what different trends in cognitive neuroscience take the relationship between brains and minds to be. Here I will briefly review what is considered the classical and commonsensical view, which is a quintessential property dualist approach.

An 1883 phrenology diagram, from the People’s Cyclopedia of Universal Knowledge (Wikimedia Commons)

The Modular Computer Theory of Mind

The classic approach to localization suggests that the brain is composed of discrete, special-purpose “modules.” In many ways, this is aligned with our folk psychology: the amygdala is the “fear center,” the visual cortex is the “vision center,” and so on. This approach is most often traced back to Franz Gall and his pseudo-scientific (and racist) “organology” and “cranioscopy,” later referred to as “phrenology.” He argued that there were 27 psychological “faculties,” each of which had a respective “sub-organ” in the brain.

While most of the work associated with Gall was discarded, the idea that cognitive processes could be located in discrete modules continued, most forcefully in the work of the philosopher Jerry Fodor, specifically The Modularity of Mind (1983). Fodor’s approach builds on Noam Chomsky’s generative grammar. Struck by his observation that young children quickly learn to speak grammatically “correct” sentences, Chomsky argued that the acquisition of language cannot occur through imitation and trial-and-error. Instead, he proposed that human minds have innate (and universal) structures which denote the basic set of rules for organizing language. The environment simply activates different combinations, resulting in the variation across groups. With a finite set of rules, humans can learn to create an infinite number of combinations, but no amount of experience or learning will alter the rules. (I will save the evaluation of Chomsky’s approach to language acquisition for later, but it doesn’t fare well.)
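The “finite rules, infinite combinations” point can be demonstrated with a toy rewrite grammar. The rules and vocabulary below are invented for illustration (this is not a claim about universal grammar): a single recursive rule is enough to make the set of derivable sentences grow without bound as deeper derivations are allowed:

```python
import itertools

# A handful of rewrite rules; the recursive NP rule re-enters VP,
# so a finite rule set generates unboundedly many sentences.
GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"], ["the", "N", "that", "VP"]],  # recursion here
    "VP": [["V", "NP"], ["V"]],
    "N":  [["dog"], ["cat"]],
    "V":  [["saw"], ["chased"]],
}

def expand(symbol, depth):
    """Yield every word sequence derivable from `symbol` within `depth` rewrites."""
    if symbol not in GRAMMAR:       # terminal word
        yield [symbol]
        return
    if depth == 0:                  # recursion budget exhausted
        return
    for rule in GRAMMAR[symbol]:
        expansions = [list(expand(s, depth - 1)) for s in rule]
        for parts in itertools.product(*expansions):
            yield [word for part in parts for word in part]

shallow = {" ".join(s) for s in expand("S", 4)}
deeper = {" ".join(s) for s in expand("S", 6)}
print(len(shallow), len(deeper))  # the deeper bound admits strictly more sentences
```

No amount of use modifies the rules themselves; the variation comes entirely from how they are combined, which is the structural point Fodor takes over from Chomsky.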

Fodor took this one step further and argued that the fundamental contents of “thought” are language-like in this combinatorial sense, or what has come to be known as “mentalese.” In The Language of Thought (1975), Fodor proposed that in order to learn anything in the traditional sense, humans must already have some kind of language-like mental contents to work with. As Stephen Turner (2018:45) summarizes in his excellent new Cognitive Science and the Social: A Primer:

If one begins with this problem, one wants a model of the brain as “language ready.” But why stop there? Why think that only grammatical rules are innate? One can expand this notion to the idea of the “culture-ready” brain, one that is poised and equipped to acquire a culture. The picture here is this: cultures and languages consist of rules, which follow a template but which vary in content, to a limited extent; the values and parameters need to be plugged into the template, at which point the culture or language can be rapidly acquired, mutual understanding is possible, and social life can proceed.

Such a thesis rests on the so-called “Computational Theory of Mind,” which, by analogy to computers, presumes that mental contents are symbols (à la “binary codes”) which are combined through the application of basic principles, producing more complex thought. Perception is, therefore, “represented” in the mind by being associated with “symbols” in the mind, and it is through the organization of perception into symbolic formations that experience becomes meaningful. Different kinds of perceptions can be organized by different modules, but again, the basic symbols and principles unique to each module remain unmodified by use or experience.

Despite the fact that such a symbol-computation approach to thinking is “anti-learning,” this view is often implicit in (non-cognitive) anthropology and (cultural) sociology. For example, Robert Wuthnow ([1987] 1989), Clifford Geertz (1966), and Jeffrey Alexander with Philip Smith (1993) were each inspired by the philosopher Susanne Langer’s Philosophy in a New Key, in which she argues for the central role of “symbols” in human life. She claims “the use of signs is the very first manifestation of mind” ([1942] 2009:29), thus “material furnished by the senses is constantly wrought into symbols, which are our elementary ideas” ([1942] 2009:45), and approvingly cites Arthur Ritchie’s The Natural History of Mind: “As far as thought is concerned, and at all levels of thought, it is a symbolic process…The essential act of thought is symbolization” (1936:278–9).

Conceptualizing “thinking” as involving the (computational) translation of perceptual experience into a private, world-independent, symbolic language, however, makes it difficult to account for “meaning” at all. This is commonly called the “grounding problem” (which Omar discussed in his 2016 paper, “Cultural symbols and cultural power”), which grapples with the following question (Harnad 1990:335): “How can the meanings of the meaningless symbol tokens, manipulated solely on the basis of their (arbitrary) shapes [or principles of composition], be grounded in anything but other meaningless symbols?”

The problem is compounded when the mind is conceived as composed of multiple computational “modules,” each of which is independent of the others. The most famous thought-experiment demonstrating the problem with this approach is Searle’s (1980) “Chinese Room Argument.” To summarize, Searle posits a variation on the Turing Test in which both sides of the electronically-mediated conversation are human (as opposed to one human and one artificial interlocutor); however, they speak different languages:

Suppose that I’m locked in a room and given a large batch of Chinese writing . . . . To me, Chinese writing is just so many meaningless squiggles. Now suppose further that after this first batch of Chinese writing I am given a second batch of Chinese script together with a set of rules . . . The rules are in English, and I understand these rules . . . and these rules instruct me how to give back certain Chinese symbols with certain sorts of shapes in response . . . . Suppose also that after a while I get so good at following the instructions for manipulating the Chinese symbols . . . my answers to the questions are absolutely indistinguishable from those of native Chinese speakers. (Searle 1980:350–1)

Despite his acquired proficiency at symbol manipulation, locked in the room, he does not understand Chinese, nor does the content of his responses have any meaning to him. Therefore, Searle concludes, thinking cannot be fundamentally computational in this sense.
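Searle’s setup can be caricatured in a few lines of code. The “rulebook” below is an invented stand-in, but it makes the structure of his point visible: the procedure is pure shape-matching over symbol strings, and nothing in it touches what the symbols mean:

```python
# A miniature "Chinese Room": responses are produced by looking up the
# shape of the input string in a rulebook. The entries are invented;
# an operator following them needs no idea what any string means.
RULEBOOK = {
    "你好吗": "我很好",          # ("How are you?" -> "I am fine")
    "你叫什么名字": "我叫约翰",    # ("What is your name?" -> "My name is John")
}

def operator(squiggles: str) -> str:
    """Return whatever symbol string the rulebook pairs with this input shape."""
    return RULEBOOK.get(squiggles, "对不起")  # a default string for unmatched shapes

# Fluent-looking output, produced with zero understanding:
print(operator("你好吗"))
```

On the Computational Theory of Mind, the worry is that any purely syntactic engine, however sophisticated, is in the operator’s position: symbol manipulation alone never grounds meaning.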

There are viable alternatives to this modular computer theory of the mind, many of which may run counter to folk understandings, but which square better with evidence. More importantly, these alternatives (which will be covered extensively in this blog) would likely be considered more “sociological,” as they invite (and often require) a role for both learning and context in explaining cognitive processes.

References

Alexander, Jeffrey C. and Philip Smith. 1993. “The Discourse of American Civil Society: A New Proposal for Cultural Studies.” Theory and Society 22(2):151–207.

Fodor, Jerry A. 1975. The Language of Thought. Harvard University Press.

Fodor, Jerry A. 1983. The Modularity of Mind. MIT Press.

Geertz, Clifford. 1966. “Religion as a Cultural System.” Pp. 1–46 in Anthropological Approaches to the Study of Religion, edited by M. Banton.

Harnad, Stevan. 1990. “The Symbol Grounding Problem.” Physica D: Nonlinear Phenomena 42(1–3):335–46.

Langer, Susanne K. [1942] 2009. Philosophy in a New Key: A Study in the Symbolism of Reason, Rite, and Art. Harvard University Press.

Lizardo, Omar. 2016. “Cultural Symbols and Cultural Power.” Qualitative Sociology 39(2):199–204.

Ritchie, Arthur D. 1936. The Natural History of Mind. Longmans, Green and co.

Searle, John R. 1980. “Minds, Brains, and Programs.” Behavioral and Brain Sciences 3(3):417–24.

Turner, Stephen P. 2018. Cognitive Science and the Social: A Primer. Routledge.

Wuthnow, Robert. [1987] 1989. Meaning and Moral Order: Explorations in Cultural Analysis. Berkeley: University of California Press.

Are the Folk Natural Ryleans?

Folk psychology and the belief-desire accounting system have been formative in cognitive science because of the claim, mainly put forth by philosophers, that they form the fundamental framework via which everybody (philosopher and non-philosopher alike) understands human action as meaningful. Both proponents of some version of the argument for the ineliminable character of the folk-psychological vocabulary (Davidson, 1963; Fodor, 1987) and critics who cannot wait for its elimination by a mature neuroscience as an outmoded theory (Churchland, 1981) accept the basic premise; namely, that when it comes to action understanding, folk psychology is preferred by the folk. The job of philosophy is to systematize and lay bare the “theoretical” structure of the folk system (to save it or disparage it).

In a fascinating new article forthcoming in Philosophical Psychology, Devin Sanchez Curry tries to challenge this crucial bit of philosophical common wisdom, which he refers to as “Davidson’s Dogma” (Sanchez Curry acknowledges that this might not be exegetically strictly true of Davidson’s writings, although it is true in terms of third-party reception and influence). In particular, Sanchez Curry homes in on the claim that the folk use a “theory” of causation to account for action using beliefs: essentially the idea that beliefs are inner causes (the cogs in the internal machinery) that produce action when they interact with other beliefs and desires. This is the subject of a previous post.

Sanchez Curry, rather than staying at the purely exegetical or conceptual-analysis level, turns to the empirical literature in psychology on lay belief attribution to shed light on this issue. There he notes something surprising: there is little empirical evidence that the folk resort to a belief-desire vocabulary or to a theory of these as inner causes (cogs and wheels in the internal machinery) of action. Going through the literature on the development and functioning of “mindreading” abilities, Sanchez Curry shows that the primary conclusion of this line of work is that the explicit attribution of representational (e.g. “pictures in the head”) versions of belief is the exception, not the rule.

Instead, the literature has converged (like many other subfields in social and cognitive psychology) on a dual systems/process view, in which the bulk of everyday mindreading is done by high-capacity, high-efficiency automatic systems that do not traffic in the explicit language of representations. Instead, these systems are attuned to the routine behavioral dispositions of others and do the job of inferring and filling in other people’s behavior patterns by drawing on well-honed schemata trained by the pervasive experience of watching conspecifics make their way through the world. Explicit representational belief-attribution practices emerge when the routine System I processes encounter trouble and require either observers or other people to “justify” what they have done using a more explicit accounting.

As Sanchez Curry notes, the evidence here is consistent with the idea (which I alluded to in a previous post) that persons may be “natural Ryleans,” but that the Rylean (dispositional) action-accounting system is so routinized as to not have the flashy linguistic bells and whistles of the folk-psychological one. This creates the illusion that there is only one accounting system (the belief-desire one), when in fact there are two; it is just that the one that does most of the work is nondeclarative (Lizardo, 2017), while the declarative one gets most of the attention, even though it is actually the “emergency” action-accounting system, not the everyday workhorse.

As Sanchez Curry also notes, evidence provided by “new wave” (post-Heider) attribution theorists shows that the explicit (and actual) folk-psychological accounting system, even when activated, seldom posits beliefs as “inner causes” of behavior. Instead, when people enter the folk-psychological mode to explain puzzling behavior that cannot be handled by System I practical mindreading, they look for reasons, not causes. These reasons are holistic, situational, and even “institutional” (in the sociological sense). They are “justifications” that will make the action meaningful while saving the rationality of the actor, given the context. They seldom refer to internal machineries or producing causes. We look for justifications to establish blame, to “make sense” (e.g. “explain”), or to “save face,” not to establish the inner wellsprings of action. So even in this case the folk are natural Ryleans and focus on the observables of the situation and not the inner wellsprings. This means that the “theory” of folk psychology is a purely iatrogenic construction of a philosophical discourse on action that plays little role in the actual attributional practices of the folk: folk psychology in the Davidsonian/Fodorian sense turns out to be the specialized construction of an expert community.

One advantage of this account is that it solves what I previously referred to as the “frame problem” faced by all “pictures in the head” as causal drivers of action. The problem is that the observer has to pick one of a myriad of possible pictures as the “primary” cause for the action. But there is no way to make this selection in a non-arbitrary way if we are stuck with the “inner cause” conception. In the Rylean conception, the “reason” we attribute will depend on the pragmatics and goals of the reason request. Are we seeking to establish blame? Make sense of a puzzle? Save the agent’s face? Make it seem like they are devious?

These arguments have several important implications. The most important one is that, mostly, nobody is imputing little world-pictures to other people to explain their action, empathize, or even predict or make inferences as to what they will do next. Dedicated, highly trained automatic systems do the job when people are behaving in “predictable” ways. No representations required there (Hutto, 2004). When this action-tracking system fails, we resort to more explicit action accountings, or, more accurately, we resort to placing strange or puzzling action in a less puzzling context. Even here, this is less about getting at occult or inner wellsprings than about trying to construct a “reason” why somebody might have acted this way that makes the action less puzzling.

References

Churchland, P. M. (1981). Eliminative Materialism and the Propositional Attitudes. The Journal of Philosophy, 78(2), 67–90.

Davidson, D. (1963). Actions, Reasons, and Causes. The Journal of Philosophy, 60(23), 685–700.

Fodor, J. A. (1987). Psychosemantics: The Problem of Meaning in the Philosophy of Mind. MIT Press.

Hutto, D. D. (2004). The Limits of Spectatorial Folk Psychology. Mind and Language, 19(5), 548–573.

Lizardo, O. (2017). Improving Cultural Analysis: Considering Personal Culture in Its Declarative and Nondeclarative Modes. American Sociological Review, 82(1), 88–115.