What are conscious representations?

In a previous post, we discussed the concept of “unconscious representations.” Now, we’ll delve into the related topic of “conscious representations.” This is a complex matter because it’s not typical to describe a mental or neural representation as “conscious.” Instead, consciousness is usually seen as a property of an entire organism. For instance, we might say, “John was in a coma for four years but then regained consciousness,” or ponder whether our pet cats and dogs possess consciousness like humans do. In the first—“organismic”—sense, consciousness denotes a specific physiological state in which a person is awake, can move, speak, report on their subjective experiences, and so on. This state contrasts with other physiological states in which the person lacks these abilities, such as when they are asleep, knocked out by a blow to the head, under the influence of a tranquilizing drug, in a coma, and so on. The second sense refers to consciousness as an overarching quality of those experiences that can be subjectively reported. It involves a personal, subjective quality, like being aware of the redness of a rose, the sharpness of a pin, or the loudness of an ambulance passing by. In the literature, this is sometimes called “phenomenal consciousness.”

Our focus here is not on consciousness as a general organismic state or phenomenal property of experience but on consciousness as a property of the mental representations that a person entertains, or that are active in their mind, at a given moment. When discussing consciousness as a property of representations, we will take for granted the understanding of consciousness in the organismic sense, for only an alert and awake person can entertain conscious mental representations. We will also take for granted that every conscious representation has a phenomenal or subjective “feel,” so that there is something that “it is like” to hold a belief, experience a pain, see a painting, and so forth (that is, in addition to the usual perceptual and affective phenomenology, there is also a cognitive phenomenology). So, what does it mean for a mental representation to be conscious?

A mental representation is conscious if a person can report on its content when the representation is active at a given time. To illustrate, consider a person saying to themselves: “I believe that the President is responsible for the price of gas.” This is a standard sentential belief expressed in propositional form. The global belief-like representation, which may be a cluster of lower-level (and likely unconscious, as defined in the previous post) phonological, linguistic, imagistic, and other representations, is conscious because the person is aware of its content when it is active. Similarly, if a stranger were to ask them, “Who do you think is responsible for how expensive gas is?” the person can respond: “I think the President is responsible.” This is because the questioner activated the conscious representation with the relevant content, “the President is responsible for the price of gas,” as the most plausible response; the person checked for its content and reported on their belief. The same analysis applies to other non-propositional conscious representations, like perceptual or interoceptive representations (e.g., representations about the state of the body). Thus, people can report that they are seeing a red rose right now, that they are currently tired, that they have a headache, and the like.

Philosophers sometimes fret about distinguishing the types of content applicable to belief-like states and those more applicable to perceptual, interoceptive, or affective experiences. In the first case, it is clear that the only way a person can report the content of a propositional belief is by commanding the underlying concepts constituting the sentential belief. In our previous example, a person cannot report on the content of the conscious representation in question without having knowledge of the concepts “President,” “Responsibility,” “Price,” and “Gas.” Thus, we say that that particular conscious representation has conceptual content. Now, when it comes to a particular perceptual experience, this constraint is not necessary. Yes, we can say that a person who sees a red rose can report on the experience by using their knowledge of the concept of “Red.” But this is not a necessity. A person can encounter a rose in a shade of red or pink that they have never seen before, and still report on their experience using a demonstrative: “this shade of red.”

While some philosophers—who love concepts and even think that without concepts we wouldn’t have any experiences—would still see this report as relying on a special type of “demonstrative” concept, a better solution is simply to say that perceptual, interoceptive, or affective representations are conscious representations with non-conceptual content. People can tell you that they are sad or have a stomach ache; this does not imply that they wouldn’t be able to report or have these experiences without commanding the concepts of “Sadness” or “Stomach Ache.” Instead, as in the usual “there’s a German word for that” phenomenon, people can experience all kinds of feelings they don’t have concepts for (or linguistic labels, which are not the same thing). The same goes for the manifold perceptual experiences that people can have conscious representations of, which very much exceed whatever visual, auditory, olfactory, or tactile toolkit of concepts we may command.

Note that the property of being “conscious” bifurcates into two variants when applied to representations, only one of which is our main concern here. We can say that a representation is conscious when activated at a given moment. Thus, if a sighted person stands before a big green wall, their visual system will produce a conscious representation of one big green patch. This is an occurrent conscious representation of greenness. However, conscious representations (of all the different shades of green) can also be dispositional. That is, they are not conscious right now but have the potential to become conscious when the occasion is suitable. For instance, given the constitution of the human visual system, sighted people have the ability to experience all kinds of representations of greenness, from the wall mentioned above to the leaves of a tree or the color of the Boston Celtics uniforms. In this way, the entire range of activation states of the visual system, or the range of possible conscious beliefs expressible in linguistic format, forms a (possibly open-ended) set of potentially active conscious representations a person could have.

We do not want to use the term “conscious” for representations that could be conscious but are currently not. Instead, we will refer to potentially conscious representations as “p-conscious” (for potentially-conscious). Representations can be p-conscious either because, while not currently active, a person has the general capacity to experience them as conscious representations, or because they are activated but not intensely enough for the person to report on their content. In the first, passively dispositional sense, any one person’s brain has the dispositional capacity to activate a virtual infinity of conscious representations in (among other things) perceptual, imagistic, linguistic, and interoceptive formats, each with its phenomenal “feel.” In the second, weakly activated sense, we can say that a p-conscious representation is “pre-conscious” because it could become conscious if its activation level increases or the person’s focus is diverted in the right direction. For instance, at this very moment, all kinds of weakly activated pre-conscious representations (e.g., of how your elbow feels against your chair) could become conscious if you divert your attention to them.

The occurrent conscious representations currently active for any given person and the ones that could potentially be active share a cluster of properties in common: their ability to be explicitly entertained by the person and to be globally accessible to their cognitive system (including the subsystems that command language, speech, and action), allowing the person to report on their currently conscious representational states and to take them into account in modulating their action. Thus, the idea of conscious representation is explicitly and tightly tied to an operational criterion: No conscious representation without the ability to report (typically, but not necessarily, using words) the representation’s content, even if that content is minimal or non-conceptual (e.g., “that tree’s leaves are a shade of green”). This capacity to report on the content of a currently active conscious representation is sometimes called “access-consciousness.” Overall, the basic idea is that all conscious representations should enjoy access consciousness.

Note that in this way, conscious representation is directly tied to the criterion of “tellability,” which has been connected to the distinction between tacit and explicit knowledge at least since Polanyi (1958). That is, when Polanyi noted that “we know more than we can tell,” he was, in principle, saying that knowledge consists of more than conscious representations, since these just cover the stuff we can tell about. In the same way, the distinction between Declarative and Non-Declarative culture (Lizardo, 2017) also rides on a reportability criterion: Declarative Culture is that aspect of personal culture that people could report on (in an interview, survey, or focus group). Declarative Culture must be composed of either p-conscious representations (in their internalized but not yet active form) or occurrently conscious representations (when people report on their currently active beliefs, experiences, ideological commitments, and the like).

References

Lizardo, O. (2017). Improving Cultural Analysis: Considering Personal Culture in its Declarative and Nondeclarative Modes. American Sociological Review, 82(1), 88–115.

Polanyi, M. (1958). Personal Knowledge: Towards a Post-Critical Philosophy. Chicago, IL: University of Chicago Press.

 

Are We Cognitively Susceptible to Tests?

In one of the clearest statements about the difference it makes to emphasize cognition in the study of culture and, more generally, for the social sciences as a whole, the anthropologist Maurice Bloch (2012) writes that, if we consider closely every time we use the word “meaning” in social science, then “a moment’s reflection will reveal that ‘meaning’ can only signify ‘meaning for people’. To talk of, for example, ‘the meaning of cultural symbols’, as though this could be separated from what these symbols mean, for one or a number of individuals, can never be legitimate. This being so, an absolute distinction between public symbols and private thought becomes unsustainable” (4).

As a critique of Geertzian and neo-Diltheyan arguments for “public meaning” and “cultural order” sui generis, Bloch’s point is fundamental, as it reveals a core problem with arguments built on those foundations once they have been untethered from “meaning for people” and can almost entirely be given over to “meaning for analysts.” Yet, as Bloch makes a point of emphasizing, such critiques can only get us so far in attempting to change practices: even if “a moment’s reflection” like this may lead some to agree with Bloch’s claim, without an alternative, these models will persist more or less unchanged. If “meaning for people” stands as some equivalent for a tethering to cognitive science, as recommended by theorists like Stephen Turner (2007), then what is needed is a programmatic way of doing social theory without “minimizing the cognitive” by attempting, instead, to bridge social theory and cognitive neuroscience in the design of concepts.

In fairness to Geertz, one of his more overlooked essays proposes a culture concept that seems to want to avoid the very problem that Bloch identifies. In “The Growth of Culture and the Evolution of Mind,” Geertz (1973) draws a connection between culture and “man’s nervous system,” emphasizing in particular the interaction of culture and the (evolved) mind in the following terms: “Like a frightened animal, a frightened man may run, hide, bluster, dissemble, placate or, desperate with panic, attack; but in his case the precise patterning of such overt acts is guided predominantly by cultural rather than genetic templates.” Here the problem of relating the cultural to the cognitive seems clearly resolved, as the latter is reduced to “genetic templates.” Yet, contrary to Sewell’s (2005) positive estimation of this aspect of Geertz’s thought as “materialist,” we should be wary of taking lessons from Geertz if by “materialist” Sewell means a culture concept that does justice to the evolved, embodied, and finite organisms we all are. Nonetheless, in many respects, the Geertzian move still prevails in contemporary cultural sociology which, likewise, features an admission of the relevance of the cognitive to the cultural, but retains a similar bracketing as the de facto strategy for figuring out the thorny culture + cognition relation.

For instance, recently Mast (2020) has emphasized that “representation” (qua the proverbial turtle) works all the way down, even in the most neurocognitive of dimensions, and so we cannot jettison culture even if we want to include a focus on cognition because we need cultural theory to account for representation. Likewise, Norton (2018) makes a similar claim by drawing a distributed cognition framework into sociology, but making “semiotics” the ingredient for which we need a designated form of cultural theory (in this case, his take on Peircean “semeiotics”) to understand. Kurkian (2020), meanwhile, argues that unless we admit distinguishably cultural ingredients like these, attempting any sort of marriage of culture + cognition will fail, because cognition will be about something that does not tread on culture’s terrain, like “information” for instance.

Each of these is a worthwhile effort, yet in some manner they misunderstand the task at hand in attempting a culture + cognition framework, recapitulating what Geertz did in 1973. This is because any such framework must rest on new concept-formation rather than what amounts to a defense of established concepts. This would admit that cultural theories of the past cannot be so straightforwardly repurposed without amendments. What we tend to see, rather, are associations of culture concepts (semiotics, representation) and cognitive concepts (distributed cognition, mirror neurons) by drawing essentially arbitrary analogies and parallels between concepts that otherwise remain unchanged. In most cases, such a bracketed application replicates the disciplinary division of labor in thought because the onus is never placed on revision, despite the dialectical encounter and the possibilities that each bank of concepts presents to the deficiencies and arbitrariness of the other. We either hold firm to our cultural theories of choice, or we engage in elaborate mimicry of a STEM-like distant relation. 

Following Deleuze (1995), we should appreciate that to “form concepts” is at the very least “to do something,” like, for instance, making it wrong to answer the question “what is justice?” by pointing out a particular instance of justice that happened to me last weekend. Deleuze adds insight in saying that concepts attempt to find “singularities” from within a “continuous flow.” The insight is apt to the degree that culture + cognition thinking seems rooted in the sense that there is a “flow” here and that, maybe, the concepts we’ve inherited, most of them formed over the last 80 years, that make culture and cognition “singular” are simply not helpful anymore. Yet to rehash settled, unrevised cultural theories and bring them into relation with emerging cognitive theories (also unchanged) is essentially to “do” something else with our concepts: affirm a thick boundary between sociologists’ jurisdiction and cognitive science’s jurisdiction, forbidding anything that looks like culture + cognition. In all likelihood, this creates only an awkward, fraught, short-lived marriage between the two, which, despite the best of intentions, will continue to “minimize mentalistic content” and carefully limit the role that “psychologically realistic mechanisms” can play in concept-formation. In retrospect, it will probably only produce a brand of social theory that seems hopelessly antique to sociologists looking back from the vantage of a future state of the field, one possibly even more removed from present-time concerns with “cognitive entanglements.”

The task should instead be something akin to what Bourdieu (1991) once called “dual reference” in his attempt to account for the strange verbiage littered throughout Heidegger’s philosophy (Dasein, Sorge, etc.). For Bourdieu, Heidegger’s work remains incomprehensible to us if we reference only the philosophical field in which he worked, and likewise incomprehensible if we reference only the Weimar-era political field in which he was firmly implanted. Instead, Heidegger’s philosophy, in particular these keywords, consists of position-takings in both fields simultaneously, which for Bourdieu goes some way toward explaining Heidegger’s strange and tortured reception from then to the present day (with Being and Time something of a bestseller in Germany when published in 1927, and still canonical in pop-philosophy pursuits today).

Thus, in forming concepts, the goal should not be to posit an order of influence (culture → cognition, cognition → culture), nor to bracket the two (culture / cognition) and state triumphantly that this is where culture concepts can be brought to bear and this where cognitive ones can be, leaving both unchanged. Norton is right: Peirce has much bearing on contemporary cognitive science (see Menary 2015). But to say this and not amend an understanding of semeiotics (which, it seems, Peirce would probably advocate were he alive today, as he considered his semeiotics a branch of the “natural science” he always pursued) is a non-starter.

My argument is that concept-formation of the culture + cognition kind should yield dual reference concepts rather than bracketing concepts or order of influence concepts. The proposal will be that the concept of “test” exemplifies such a dual reference concept. We cannot account for the apparent ubiquity of tests, why they are meaningful, and how they are meaningful without reference to both a cognitive mechanism and a sociohistorical configuration that combines with, appropriates, and evokes it. The analysis here involves genealogy, institutional practice, site-specificity, and social relations.

Elsewhere (Strand 2020) I have advocated a culture + cognition styled approach as the production of “extraordinary discourse” and, relatedly, as concept-formation that can be adequate for “empirical cognition,” a neglected, minor tradition since the time of Kant (Strand 2021; though one with a healthy presence in classical theory). More recently, Omar and I have attempted concept-formation that more or less looks like this in recommending a probabilistic revision of basic tenets of the theory of action (forthcoming, forthcoming). To put it starkly: we need new concepts if we want something like culture + cognition. To work under the heading of “cognitive social science” is akin to a compass-like designation in a new direction. And rather as Omar (2014) has said, if theorists—so often these days casting about for a new conversation to be part of now that “cultural theory” is largely exhausted and we can only play with the pieces—want a model for this kind of work, they might study the role that philosophers have come to play in cognitive science, as engaged in what very much seems like a project of concept-formation.

In this post, I will attempt something similar, more generally as a version of deciphering “meaning for people” by asking a simple question: Why are tests so meaningful and seemingly ubiquitous in social life (Marres and Stark 2020; Ronnell 2005; Pinch 1993; Potthast 2017)? I will consider a potential “susceptibility” to tests and why this might explain why we find them featured so fundamentally in areas as varied as education, science, interpersonal relationships, medicine, morality, technology, and religion, as a short list, and how they can be given a truly generalized significance if we conceptualize test as trial (Latour 1988). More generally, the new(ish) “French pragmatist sociology” has made the épreuve (the French word that covers both “test” and “trial”) a core concept as a way of “appreciating the endemic uncertainty of social life” (Lemieux 2008), though without implying too much about what a cognitive-heavy phrase like “endemic uncertainty” might mean. The French pragmatists [1] might be on to something: test or trial may qualify as a “total social phenomena” in the tradition of Mauss (1966), less because we can single out one test as “at once a religious, economic, political, family, phenomena” and more because each of these orders depends, in some manner, on tests. This is more fitting with a cognitive susceptibility perspective, as I will articulate further below.

Provisionally, I will define a test as the creation of uncertainty, a suspension of possibilities, a way of “inviting chance in,” for the purpose of then resettling those possibilities and resolving that uncertainty by singling out a specific performance. After a duration of time has elapsed, the performance is complete. The state of affairs found at the end is what we can call an “outcome,” and it carries a certain kind of “objective” status to the extent that the initial uncertainty or open possibility is different now, less apparent than it was before, and “final” in some distinguishable way. 

If testing appears ubiquitous and “total,” this is not because tests necessarily work better than other potential alternatives as ways of handling “endemic uncertainty.” It is also not because testing features as part of some larger cultural process in motion (like “modernity’s fascination with breaking known limitations” [Ronnell 2005]). Rather, I want to claim that if tests are ubiquitous, this indicates a cognitive susceptibility to tests, thus revealing latent “dispositions,” such that people like us cannot help but find tests meaningful. Some potential reasons why are suggested by referencing a basic predictive processing mechanism:

According to [predictive processing], brains do not sit back and receive information from the world, form truth evaluable representations of it, and only then work out and implement action plans. Instead brains, tirelessly and proactively, are forever trying to look ahead in order to ensure that we have an adequate practical grip on the world in the here and now. Focused primarily on action and intervention, their basic work is to make the best possible predictions about what the world is throwing at us. The job of brains is to aid the organisms they inhabit, in ways that are sensitive to the regularities of the situations organisms inhabit (Hutto 2018).

Thus, in this rendering, we cannot help but notice “sensory perturbations” as those elements of our sensory profile that defy our expectation (or, in more “contentful” terms, our predictions). These errors stand out as what we perceive, and we attend to them by either adjusting ourselves to fit with the error (like sitting up a little more comfortably in our chair) or by acting to change those errors, so that we do not notice them anymore. In basic terms, then, the predictive processing “disposition” involves an enactive engagement with the world that seeks some circumstance in which nothing is perceived, because, we might say, everything is “meaningful” (i.e. expected). If we define “meaning” as something akin to “whatever subjectively defined qualities of one’s life make active persistence appealing,” then this adaptation of the test concept might be a way of accounting for meaning without a “minimum of mentalistic content” while incorporating a “psychologically realistic mechanism” (Turner 2007).

In what follows I will examine whether there is some alignment between this disposition and tests as a ubiquitous social process. If so, then it may be worthwhile to build on the foundation laid by the French pragmatists for concept-formation of the culture + cognition kind.

 

On cognitive susceptibility

The notion of cognitive “susceptibility” is drawn from Dan Sperber (1985) and the idea that, rather than operating through dispositions that create a direct link between cognition and cultural forms, that link may more frequently operate as a susceptibility.

Dispositions have been positively selected in the process of biological evolution; susceptibilities are side-effects of dispositions. Susceptibilities which have strong adverse effects on adaptation get eliminated with the susceptible organisms. Susceptibilities which have strong positive effects may, over time, be positively selected and become, therefore, indistinguishable from dispositions. Most susceptibilities, though, have only marginal effects on adaptation; they owe their existence to the selective pressure that has weighed, not on them, but on the disposition of which they are a side-effect (80-81).

Sperber uses the example of religion. “Meta-representation” is an evolved cognitive disposition to create mental representations that do not have to pass the rigorous tests that apply to everyday knowledge. It enables representations not just of environmental and somatic phenomena, but even of “information that is not fully understood” (83). Because it has these capabilities, the meta-representational disposition creates “remarkable susceptibilities. The obvious function served by the ability to entertain half-understood concepts and ideas is to provide intermediate steps towards their full understanding. It also creates, however, the possibility for conceptual mysteries, which no amount of processing could ever clarify, to invade human minds” (84). Thus, Sperber concludes that “unlike everyday empirical knowledge, religious beliefs develop not because of a disposition, but because of a susceptibility” (85).

The disposition/susceptibility distinction can be quite helpful in navigating the murky waters around Bloch’s trope of “meaning for people,” because we do not necessarily have to give cultural forms over directly to dispositions. Rather, those cultural forms can arise as susceptibilities, which offer far more bandwidth to capture the cognitive dimensions of cultural forms as instances of “meaning for people.”

Thus, when God “tests the faith” of Abraham by ordering him to sacrifice his child Isaac, a space of chances is opened, and depending on how the test goes, something about Abraham will become definitive, at least for a while. A perceived lack of faith becomes equivalent to a noticeable error here, and it can be resolved by absorbing this uncertainty through some process that generates an outcome to that effect. Even though Abraham does not end up sacrificing Isaac in the story, he was prepared to do so, and thus he “proves” his faith. Some equivalent to this “sacrifice” remains integral to tests of faith of all sorts (Daly 1977).

I hypothesize that there must be a (cognitive) reason why this test, and the whole host of others we might come across in fields and pursuits far removed from Abrahamic religion, is found in moments like these and in situations that mimic (even vaguely) God’s “test” of Abraham. The role of tests in this religious tradition, and potentially as a total social phenomenon, indicates something about “susceptibility” (in Sperber’s sense) to them. “Disposition” in this case concerns the predictive processing disposition to eliminate prediction error by either adapting a generative model to the error or by acting to change the source of the error; either way, our expectations change and we do not notice what stood out for us before. For tests, the construction of uncertainty, of more possibilities than will ultimately be realized, is a kind of susceptibility that corresponds to the predictive disposition. More specifically, this means that tests allow something to be known to us by enabling us to expect things of it.

 

Tests: scientific, technological, moral

What is remarkable about this is the range of circumstances in which we turn to tests to construct our expectations. Consider Latour’s description of Pasteur’s experimental technique:

How does Pasteur’s own account of the first drama of his text modify the common sense understanding of fabrication? Let us say that in his laboratory in Lille Pasteur is designing an actor. How does he do this? One now traditional way to account for this feat is to say that Pasteur designs trials for the actor to show its mettle. Why is an actor defined through trials? Because there is no other way to define an actor but through its actions, and there is no other way to define an action but by asking what other actors are modified, transformed, perturbed or created by the character that is the focus of attention … Something else is necessary to grant an x an essence, to make it into an actor: the series of laboratory trials through which the object x proves its mettle … We do not know what it is, but we know what it does from the trials conducted in the lab. A series of performances precedes the definition of the competence that will later be made the sole cause of these performances (1999: 122, 119).

Here the test (or “trial”) design works in an experimental fashion by exposing a given yeast ferment to different substances, under various conditions just to see what it would do. By figuring this out, Pasteur “designs an actor,” which we can rephrase as knowing an object by now being able to hold expectations of it, being able to make predictions about it, and therefore no longer needing to fear what it might do or even have to notice it.

Latour is far from alone in putting such emphasis on testing for the purposes of science. Karl Popper (1997), for instance, insists on the centrality of the test and its trial function: “Instead of discussing the ‘probability’ of a hypothesis we should try to assess what tests, what trials, it has withstood; that is, we should try to assess how far it has been able to prove its fitness to survive by standing up to tests. In brief, we should try to assess how far it has been ‘corroborated.’” To put a hypothesis on trial is, then, to imperil its existence, as an act of humility. Furthermore, it is to relinquish one’s own claim over the hypothesis. If a “test of survival” is the metric of scientific worth, then one scientist cannot single-handedly claim control: hypotheses need “corroboration,” a word which Popper prefers over “confirmation” because corroboration suggests something collective.

When Popper delineates the nuances of the scientific test, he also seems to establish tests for membership in a scientific community, based on this sort of collective orientation, which requires individual humility, and which, from the individual scientist’s standpoint, means “inviting chance in” relative to their own hypothesis, making them subject to more possibilities than what the scientist might individually intend, including the possibility that they could be completely wrong.

Meanwhile, in Pinch’s approach, which focuses specifically on technology, tests work through “projection”:  

If a scale model of a Boeing 747 airfoil performs satisfactorily in a wind tunnel, we can project that the wing of a Boeing 747 will perform satisfactorily in actual flight … It is the assumption of this similarity relationship that enables the projection to be made and that enables engineers warrantably to use the test results as grounds that they have found out something about the actual working of the technology (1993: 29).

The connection with a predictive mechanism is clear here, as projection entails not being surprised when we move into the new context of the “actual world,” having specified certain relationships in the “test world.” The projection/predictive aspect is stated almost verbatim here: “In order to say two things are similar, we bracket, or place in abeyance, all the things that make for possible differences. In other words, we select from myriad possibilities the relevant properties whereby we judge two things to be similar … [The] outcome of the tests can be taken to be either a success or a failure, depending upon the sorts of similarity and difference judgments made” (32).

Thus, a generative model is made in the testing environment and then applied in the actual-world environment on the understanding that we will not need to correct for prediction error when we do, as the testing environment is similar enough to the actual world that those errors will already have been resolved. As Pinch concludes, “The analysis of testing developed here is, I suggest, completely generalizable. The notion of projection and the similarity relationships that it entails are present in all situations in which we would want to talk about testing” (37). And this particular use of testing does seem to find analogues far and wide, including in the laboratory testing that is Latour’s focus and, more generally, in educational or vocational testing, where, likewise, the test depends on a similarity relationship that minimizes the difference between two contexts (a difference we can understand in terms of the presence, or hopefully the absence, of prediction error). But what if we try to apply the test concept to something more remote from science and technology, like morality?
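Before turning to that question, the projection logic can be rendered as a toy computational sketch. Everything here is my own illustration, not anything from Pinch or Latour: a “generative model” is fit in a test world, reused in the actual world, and projection succeeds when prediction error stays within tolerance.

```python
# Illustrative sketch only (my own construction, not from the cited authors):
# a "generative model" fit in a test environment is reused in the actual
# environment; projection succeeds when prediction error stays small.

def fit_model(observations):
    """A trivial generative model: predict the mean of what was observed."""
    return sum(observations) / len(observations)

def prediction_errors(model, observations):
    """Prediction error as the mismatch between prediction and observation."""
    return [abs(obs - model) for obs in observations]

# "Test world": controlled trials (e.g., a wind tunnel)
test_world = [1.0, 1.1, 0.9, 1.0]
model = fit_model(test_world)

# "Actual world": assumed similar enough that the similarity relationship holds
actual_world = [1.05, 0.95, 1.0]
errors = prediction_errors(model, actual_world)

# Projection: we expect not to be surprised, i.e., errors below tolerance
assert all(e < 0.1 for e in errors)
```

The only point of the sketch is that “similar enough” has a precise analogue: whatever difference remains between the two contexts is exactly what shows up as prediction error.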

On this front, we can find statements like the following, from Boltanski and Thevenot:

A universe reduced to a common world would be a universe of definite worths in which a test, always conclusive (and thus finally useless), could absorb the commotion and silence it. Such an Eden-like universe in which ‘nothing ever happens by chance’ is maintained by a kind of sorcery that exhausts all the contingencies … An accident becomes a deficiency … Disturbed situations are often the ones that lead to uncertainties about worth and require recourse to a test in order to be resolved. The situation is then purified … In a true test, deception is unveiled: the pea under the mattress discloses the real princess. The masks fall; each participant finds his or her place. By the ordering that it presupposes, a peak moment distributes the beings in presence, and the true worth of each is tested (2006: 136-138).

In this rendering, tests are quite explicitly meant to make “accidents” stand out, in addition to fraud and fakery. The goal is the construction of a situation stripped of all contingencies, in which, likewise, we do not notice anything, because the test has put everything in its proper order. When we do notice certain things (e.g., “the same people win all the same tests,” “they are singled out unfairly,” “they never got the opportunity”), these are prediction errors based on some predictive ordering of the world that creates expectation. Simultaneously, they are meaningful (for people) as forms of injustice.

Boltanski and Thevenot dovetail, on this point, with something that became clear to at least one figure in the tradition of probability theory, namely Blaise Pascal (see Daston 1988: 15ff). For Pascal, the expectations formed by playing a game of chance could themselves be the source of noticing the equivalent of “error,” for instance, when some player wins far too often while another never wins. A test is the source of an order “without contingency” where “nothing ever happens by chance,” which in this case means a test is the rules of the game that allow for possibilities (all can win) while resolving those possibilities into a result (only one will win). This creates expectations, and Boltanski and Thevenot extrapolate from this (citing sports contests as epitomizing their theory) to identify “worlds” as different versions of this predictive ordering. Injustice is officially revealed at a second level of testing, then, as the test that creates this order can itself be tested (see Potthast 2017). Prediction errors can be noticed; likewise, they can be resolved through the adaptation of a generative model, which would seem to demand a reformative (or revolutionary) change of the test in a manner that would subsequently allow it to meet expectations.

 

A genealogy of testing

What is interesting about these examples is that, abstracted from history as they are, they demonstrate parallel wings of a tradition that Foucault traces to the decline of the “ordeal” and the birth of the “inquiry.” Both of these fit the profile of the test, though only the latter gives the outcome the kind of official status or legitimacy of the laboratory test, the technological test, or the moral test. The ordeal involves a sheer confrontation that can occur at any time, and which creates expectations strictly in relation to some other specific thing, whether this be another person, something inanimate and possibly dangerous (like fire), or a practice of some kind (like writing a book). One can always test oneself against this again, and to move beyond known limitations one must, if one is to do anything like revise a generative model by encountering different prediction errors.

Foucault’s larger point here recommends a more general argument, rooted in a kind of genealogy, that justice requires a carceral; that the only form of justice is one that rests on illegality. On the contrary, in his earlier work Foucault recommends a different approach to justice, one that renders any necessary association of justice with “the carceral archipelago” mistaken, as the latter would consist only of a relatively recent, though impactful, appropriation of justice. Thus, the argument Foucault presents is less nominal than it may seem at first, particularly when we consider the following:

What characterizes the act of justice is not resort to a court and to judges; it is not the intervention of magistrates (even if they had to be simple mediators or arbitrators). What characterizes the juridical act, the process or the procedure in the broad sense, is the regulated development of a dispute. And the intervention of judges, their opinion or decision, is only ever an episode in this development. What defines the juridical order is the way in which one confronts one another, the way in which one struggles. The rule and the struggle, the rule in the struggle, this is the juridical (Foucault 2019: 116).

Here the meaning of justice is expanded to refer to the “regulated development of a dispute,” which may or may not involve judges, may or may not take place in a court, and may or may not culminate in some sort of definitive decision or “judgment.” All of these are features added to the basic dispute.

Elsewhere Foucault expands on this by changing his language in a significant way: from “dispute,” justice shifts to “trial,” to which he gives an expansive meaning by drawing a distinction within the category of trial itself, between the epreuve and the inquiry. There is a historical tension in the distinction: inquiries will come to replace epreuves (or “ordeals”) in a Eurocentric history. This division is apparent as early as the ancient Greeks who, in a Homeric version, would create justice through the rule-governed dispute, with the responsibility for deciding (not who spoke the truth, but who was right) entrusted to the fight, the challenge, and “the risk that each one would run.” Contrary to this stands the Oedipus Rex form, as exemplified by Sophocles’ great play. Here, in order to resolve a dispute of apparent patricide, we find one of the emblems of Athenian democracy: “the people took possession of the right to judge, of the right to tell the truth, to set the truth against their own masters, to judge those who governed them” (Foucault 2000: 32-33).

This division would be replicated in the later distinction between Roman law, as rooted in inquiry, and Germanic law, as rooted in something more resembling the contest or epreuve, with disputes conducted through either means. Yet with the collapse of the Carolingian Empire in the tenth century, “Germanic law triumphed, and Roman law fell into oblivion for several centuries.” Thus, feudal justice consisted of “disputes settled by the system of the test,” whether this be a “test of the individual’s social standing,” a test of verbal demonstration in formulaically presenting the grievance or denouncing one another, a test of an oath in which “the accused would be asked to take an oath and if he declined or hesitated he would lose the case,” or finally “the famous corporal, physical tests called ordeals, which consisted of subjecting a person to a sort of game, a struggle with his own body, to find out whether he would pass or fail.”

As the trajectory of justice moves, then, the role and place of the epreuve ascends to prominence; testing becomes justice, in other words, as the means of resolving a dispute comes to center on the ordeal and its outcome, more generally as a way of letting God’s voice speak. In one general account, the trial by “cold water” involved “dunking the accused in a pond or a cistern; if the person sank, he or she was pronounced innocent, and if the person floated, he or she was found guilty and either maimed or killed.” In the trial by “hot iron,” the accused would “carry a hot iron a number of paces, after which the resulting wound was bandaged. If the wound showed signs of healing after three days, the accused was declared innocent, but if the wound appeared to be infected, a guilty verdict ensued” (Kerr, Forsyth, and Plyley 1992).

The epreuve, in this case, remains a trial of force or between forces, which may be codified and regulated, as when water or iron would be blessed before the ordeal and thereby made to speak the word of God. More generally, to decline the test was to admit guilt in this binary structure, and this carried over into the practice of one disputant challenging another to a contest. Thus, justice ended in a victory or a defeat, which appeared definitive, and this worked in an almost “automatic” way, because it required no third party in the form of one who judges.

Across this genealogy, we find something equivalent to the creation of uncertainty, in some cases deliberately made, in other cases not, and then its resolution by some means into an outcome after a given duration of time. This outcome may have an institutional sanction (as “justice”) or it could have something more like the sanction of a fight, and presumably the certainty of what would happen should a fight happen again. In these different ways, predictions are made and expectations settled. An “error” stands out as noticeable in a variety of forms: as someone with whom one has a dispute, as an action taken or event that happened but was not expected, whether according to explicitly defined rules or not, or in the case of the democratic link suggested by Foucault, the pressing question of who should rule and whether such rule can be legitimate (see Mouffe 2000). 

Some equivalent to the test (whether as inquiry or ordeal) is involved in all of these cases, and in the genealogy at least, we can glimpse how consequential it might be for a new test form to come on the scene, or to win out over another, as a way of, in a sense, appropriating cognitive susceptibilities that must be activated should “testing” make any difference for predictive dispositions.

 

Conclusion

The larger point is that the concept of test is substantive, here, because we can bridge its properties to properties of cognition. The task is to say that the predictive dispositions that are cognitive create a susceptibility to tests: more specifically, we are likely to find tests meaningful because of our predictive dispositions. If tests are drawn upon across all of these different areas, specifically in cases of uncertainty (whether as dispute, as experiment, as how to design a technology) or what we have established in general terms as “situations in which we are presently engaged with prediction error that we cannot help but notice a lot,” then it would follow that we are susceptible to tests as what allows us to absorb this uncertainty, a process we cannot understand or even fully recognize without reference to “real features of real brains” (Turner 2007). This, I want to propose, is how we can approach “test” as a dual reference concept, and its applicability in areas as varied as religion, politics, science, morality, and technology.

Tests are “meaningful for people” when they absorb uncertainty and generate expectation. They are also meaningful for people when they create uncertainty and enable critique. We could not identify something like a “test” if tests did not have these kinds of cognitive effects, and we cannot understand those cognitive effects without finding a distinguishably cognitive process (e.g. “psychologically real” with lots of “mentalistic content” extending even to neurons). In this case, the parallel of testing and uncertainty and predictive processing and prediction error is not a distant analogy, as is often the case with bracketing concepts. To understand testing’s absorption of uncertainty we need predictive processing, but to understand how predictive processing might matter for the things sociologists care about we need testing.

I’ll conclude with the suggestion that if “test” can qualify as this sort of dual reference concept then we should favor it over other potential concepts that can account for meaning (e.g. “categories,” “worldview,” “interpretation”) but, arguably, cannot be dual reference.

 

Something that looks like endnotes

[1] The French “pragmatists” are, in centering “test” in their concept-formation, not to be received as illegitimate appropriators of that title. Peirce (1992) himself encouraged a focus on the study of “potential” as referring to something “indeterminate yet capable of determination in any special case.” This could very well serve as a clarified restatement of the definition of test. Dewey (1998) makes the connection more explicit in his thorough conceptualization of the test: “The conjunction of problematic and determinate characters in nature renders every existence, as well as every idea and human act, an experiment in fact, even though not in design. To be intelligently experimental is but to be conscious of this intersection of natural conditions so as to profit by it instead of being at its mercy. The Christian idea of this world and this life as a probation is a kind of distorted recognition of the situation; distorted because it applied wholesale to one stretch of existence in contrast with another, regarded as original and final. But in truth anything which can exist at any place and at any time occurs subject to tests imposed upon it by surroundings, which are only in part compatible and reinforcing. These surroundings test its strength and measure its endurance … That stablest thing we can speak of is not free from conditions set to it by other things … A thing may endure secula seculorum and yet not be everlasting; it will crumble before the gnawing truth of time, as it exceeds a certain measure. Every existence is an event.”

 

References

Bloch, Maurice (2012). Anthropology and the Cognitive Challenge. Cambridge UP.

Boltanski, Luc and Laurent Thevenot. (2006). On Justification. Princeton UP.

Bourdieu, Pierre. (1991). The Political Ontology of Martin Heidegger. Stanford UP.

Daston, Lorraine. (1988). Classical Probability in the Enlightenment. Princeton UP.

Daly, Robert. (1977). “The Soteriological Significance of the Sacrifice of Isaac.” The Catholic Biblical Quarterly 39: 45-71.

Deleuze, Gilles and Guattari, Felix. (1995). What is Philosophy? Columbia UP.

Foucault, Michel. (2019). Penal Theories and Institutions: Lectures at the College de France, 1971-72, edited by Bernard Harcourt. Palgrave.

Foucault, Michel. (2000). “Truth and Juridical Forms” in Power: The Essential Works of Michel Foucault, 1954-1984, edited by James D. Faubion. The New Press.

Geertz, Clifford. (1973). “The Growth of Culture and the Evolution of Mind” in Interpretation of Cultures.

Hutto, Daniel. (2018). “Getting into predictive processing’s great guessing game: Bootstrap heaven or hell?” Synthese 195: 2445-2458.

Kerr, Margaret, Richard Forsyth, and Michael Plyley. (1992). “Cold Water and Hot Iron: Trial by Ordeal in England.” Journal of Interdisciplinary History 22: 573-595.

Kurakin, Dmitry. (2020). “Culture and Cognition: The Durkheimian Principle of Sui Generis Synthesis vs. Cognitive-Based Models of Culture.” American Journal of Cultural Sociology 8: 63-89.

Latour, Bruno. (1988). The Pasteurization of France. Harvard UP.

Latour, Bruno. (1999). Pandora’s Hope. Harvard UP.

Lemieux, Cyril. (2008). “Scene Change in French Sociology?” L’oeil Sociologique.

Lizardo, Omar. (2014). “Beyond the Comtean Schema: The Sociology of Culture and Cognition Versus Cognitive Social Science.” Sociological Forum 29: 983-989.

Marres, Noortje and David Stark. (2020). “Put to the Test: For a New Sociology of Testing.” British Journal of Sociology 71: 423-443.

Mast, Jason. (2020). “Representationalism and Cognitive Culturalism: Riders on Elephants on Turtles All the Way Down.” American Journal of Cultural Sociology 8: 90-123.

Mauss, Marcel. (1966). The Gift. Something UP.

Menary, Richard. (2015). “Pragmatism and the Pragmatic Turn in Cognitive Science” in The Pragmatic Turn: Toward Action-Oriented Views in Cognitive Science. MIT Press. 

Mouffe, Chantal. (2000). The Democratic Paradox. Verso.

Norton, Matthew. (2018). “Meaning on the Move: Synthesizing Cognitive and Systems Concepts of Culture.” American Journal of Cultural Sociology 7: 1-28.

Pinch, Trevor. (1993). “‘Testing—One, Two, Three… Testing!’: Toward a Sociology of Testing.” Science, Technology, & Human Values 18: 25-41.

Potthast, Jorg. (2017). “The Sociology of Conventions and Testing” in Social Theory Now, edited by Claudio Benzecry, Monika Krause, and Isaac Ariail Reed. University of Chicago Press: 337-361.

Popper, Karl. (1997). The Logic of Scientific Discovery. Routledge.

Ronell, Avital. (2005). The Test Drive. University of Illinois Press.

Sewell, William. (2005). “History, Synchrony, and Culture: Reflections on the Work of Clifford Geertz” in Logics of History.

Sperber, Dan. (1985). “Anthropology and Psychology: Towards an Epidemiology of Representations.” Man 20: 73-89.

Strand, Michael. (2020). “Sociology and Philosophy in the United States since the Sixties: Death and Resurrection of a Folk Action Obstacle.” Theory and Society 49: 101-150.

Strand, Michael (2021). “Cognition, Practice and Learning in the Discourse of the Human Sciences” in Handbook in Classical Sociological Theory. Springer.

Strand, Michael and Omar Lizardo. (forthcoming). “For a Probabilistic Sociology: A History of Concept-Formation with Pierre Bourdieu.” Theory and Society.

Strand, Michael and Omar Lizardo. (forthcoming). “Chance, Orientation and Interpretation: Max Weber’s Neglected Probabilism and the Future of Social Theory.” Sociological Theory.

Turner, Stephen. (2007). “Social Theory as Cognitive Neuroscience.” European Journal of Social Theory 10: 357-374.

What is an intuition?

Steve Vaisey’s 2009 American Journal of Sociology paper is, deservedly, one of the most (if not the most) influential pieces in contemporary work on culture and cognition in sociology. It is single-handedly responsible for the efflorescence of interest in the study of cognitive processes by sociologists in general, and more specifically it introduced work on dual-process models and dual-process theorizing to the field (see Leschziner, 2019 for a recent review of this work).

Yet, like many broadly influential pieces in science, there’s an odd disconnect between the initial theoretical innovations (and inspirations) of the original piece and the way that the article figures in contemporary citation practices by sociologists. There are also some key misrepresentations of the original argument that have become baked into sociological lore. One of the most common ones is the idea that Vaisey introduced the dual-process model to sociologists or the “sociological dual-process model” (see Leschziner, 2019).

However, as my co-authors and I pointed out in a 2016 piece in Sociological Theory, the use of the singular to refer to dual-process models in social and cognitive psychology is a mistake. From the beginning, dual-process theorizing has consisted of a family of models and theories designed to explain a wide variety of phenomena, from stereotyping to persuasion, biases in reasoning, problem-solving and decision-making, categorization and impression-formation, individual differences in personality, trust, and so forth. As the titles of the two most influential collections on the subject indicate (Chaiken & Trope, 1999 and its sequel, Sherman, Gawronski & Trope, 2014), social psychologists refer to “dual process theories” and comment on their variety and compatibility with one another. In the paper, we proposed seeing dual-process theorizing as united by a broad meta-theoretical grammar (which we called the “dual-process framework”) from which specific dual-process models can be built. In fact, Vanina Leschziner follows this practice in the aforementioned piece and refers to “dual-process models” in sociology.

We also noted that another generator of variety among dual-process theories is the particular aspect of cognition they focus on. Thus, there are dual-process models of learning, memory, action, and so forth, and these need to be kept analytically distinct from one another so that their interconnections (or lack thereof) can be properly theorized. Although all dual-process models share a family resemblance, they have different emphases and propose different mechanisms and core imageries, depending on what aspect of cognition they aim to make sense of.

As we pointed out in the Sociological Theory piece, this means that the particular dual-process model Vaisey used as inspiration in his original piece becomes relevant. This was Jonathan Haidt’s (2001) “social intuitionist” model of moral judgment. Vaisey (correctly) framed his paper as a contribution to the “culture in action” debate in cultural sociology inaugurated by Ann Swidler (1986) in her own classic paper. Yet, the dual-process model that served as inspiration was really about judgment (what we called culture in “thinking”) and not action (although you can make a non-controversial proposal that judgment impacts action). Moreover, Haidt’s model was not about judgment in general, but about judgment in a restricted domain: Morality. Regardless, the key point to keep in mind is that the core construct in Haidt’s social intuitionist model was intuition, not action. Haidt’s basic point is that most judgments of right and wrong result from an intuitive and not a reflective “reasoning” process, and that post hoc moral “reasoning” emerges after the fact to justify and make sense of our intuitively derived judgments.

Oddly, and perhaps due to the fact that Vaisey’s paper has mostly been interpreted with regard to action theory and research in sociology, the fact that it built on a key construct in social and cognitive psychology, namely, intuition, has essentially dropped out of the picture for sociologists today. For instance, despite its wide influence, Vaisey’s piece has not resulted in sociologists thinking about or theorizing intuition in judgment and decision-making, developing a sociological approach to intuition (or a “sociology of intuition”), or even thinking seriously about what intuition is and about the theoretical and empirical implications of the fact that a lot of the time we reason via intuition. This is despite the fact that intuition is a going concern across a wide range of fields (Epstein, 2010).

Here I argue that this is something that needs to be corrected. Intuition is a rich and fascinating topic, cutting across a variety of areas of concern in the cognitive, social, and behavioral sciences (see Hodgkinson et al. 2008), and one that could benefit from more concerted sociological attention and theorizing both inside and outside the moral domain. But this means going back to Vaisey’s article (or Jonathan Haidt’s 2001 piece, for that matter) and re-reading it in a different theoretical context, one focused on the very idea of what intuition is in the first place and on the theoretical implications of the fact that a good chunk of our judgments and beliefs come to us via intuition, while revisiting the question of where intuitions come from in the first place.

What are Intuitions?

So, what are intuitions? The basic idea is deceptively simple, but as we will see, the devil is in the details. First, as already noted, “intuition” is best thought of as a quality or property of certain judgments or reasoning processes (Dewey, 1925, p. 300), although sometimes people use “intuition” as a noun, to refer to the product of such an intuitive reasoning process (e.g., “an intuition”). In what follows I stick to the process conception, with the caveat that usually we are dealing with a process/product couplet.

So, we say a given judgment is “intuitive” instead of what? The usual complement is something like “reasoned” or “analytic.” That is, when trying to solve a problem or come up with a judgment, it seems like we can go through the problem step by step in some kind of logical, effortful, or reasoned way, or we can just let the solution “come to us” without experiencing any phenomenological signature of having gone through a reasoned process. This last is an intuition.

Thus, according to the social and cognitive neuroscientist Matthew Lieberman (2000, p. 109), “phenomenologically, intuition seems to lack the logical structure of information processing. When one relies on intuition, one has no sense of alternatives being weighted algebraically or a cost-benefit analysis being undertaken.” Jerome Bruner provides a similar formulation, noting that intuition is “…the intellectual technique of arriving at plausible but tentative conclusions without going through the analytic steps by which such formulations would be found to be valid or invalid conclusions” (1960, p. 13). When applied to beliefs, the quality of being intuitive is thus connected to the fact that judgments regarding their truth or falsity are arrived at “automatically,” without going through a long deductive chain of reasoning from first principles (Baumard & Boyer, 2013; Sperber, 1997). In the original case of moral reasoning (Haidt, 2001), these are beliefs that particular practices or actions are just “wrong,” but where the actor cannot quite tell you where the judgment of wrongness comes from.

Notably, there appears to be a convergence among various dual-process theorists that “intuition” could be the best global descriptor of what would otherwise be referred to with the uninformative label of “Type I cognition.” For instance, the cognitive psychologist Steve Sloman (2014), in an update to a classic dual-process theory piece on “two systems of reasoning” (Sloman, 1996), complains about the proliferation of terms that emerged in the interim to refer to the ideal-typical types of cognition in dual-process models (e.g., “…associative-rule based, tacit-explicit thought implicit-explicit, experiential-rational, intuitive-analytical…” 2014, p. 70), while also rejecting the usefulness of the numerical labels proposed by Stanovich and West, as these lack descriptive power. To solve the problem, Sloman recommends abandoning his previous (1996) distinction between “associative” and “rule-based” processing in favor of the distinction between intuition and deliberation. These folk terms are apposite, according to Sloman, because they provide a minimal set of theoretical commitments for the dual-process theorist, centered on the idea that “…in English, an intuition is a thought whose source one is not conscious of, and deliberation involves sequential consideration of symbolic strings in some form” (ibid, p. 170).

These definitions should already give a sense that intuition is a rich and multifaceted phenomenon, which makes it even more of a shame that no sociological approach to intuitive judgment, intuitive reasoning, or even intuitive belief (as it exists, for instance, in the cognitive science of religion) has been developed in the field in the wake of Vaisey’s influential article. One exception to this, noted in a previous post, is Gordon Brett’s and Andrew Miles’s call to study socially contextualized variation in “thinking dispositions.” Clearly, reliance on intuition to solve problems, make judgments, and arrive at decisions is something that varies systematically across people, such that an intuitive disposition is one such individual attribute worthy of sociological consideration.

In what remains, I will comment on one core issue related to intuition, ripe for future consideration in culture and cognition studies in sociology, that follows naturally from the idea that people exercise intuitive judgment relatively frequently across a wide variety of arenas and domains: namely, the question of the origins of intuitions.

Intuition and Implicit Learning

Where do intuitions come from in the first place? Perhaps surprisingly, there is now a well-developed consensus that intuitions develop over the life course as a result of implicit learning (Epstein, 2010; Lieberman, 2000). This is a substantive theoretical linkage between two sets of dual-process models developed for two distinct aspects of cognition (reasoning and learning). In our 2016 Sociological Theory piece, we made the point that different flavors of the dual-process model result from which of four distinct aspects of cognition one focuses on (learning, memory, thinking, or action). However, this work shows that there is a systematic linkage between intuitive reasoning and implicit learning (see Reber, 1993), such that we reason intuitively about domains for which we have acquired experience via implicit learning mechanisms. The linkage between intuition and implicit learning in recent work (e.g., Epstein, 2010) thus speaks to the advantages of distinguishing the different flavors of dual-process theories rather than putting them all into an undifferentiated clump.

What is implicit learning? The modern theory of implicit learning was developed by the psychologist Arthur Reber (1993), who connects it to Michael Polanyi’s (1966) reflections on tacit and explicit knowledge as well as to the work of American pragmatists like William James. Reber defines implicit learning as “the acquisition of knowledge that takes place largely independently of conscious attempts to learn and largely in the absence of explicit knowledge about what was acquired” (1993, p. 5). Essentially, implicit learning leads to the acquisition of tacit knowledge, which operates differently from the explicit knowledge acquired via traditional learning mechanisms. Importantly, implicit learning is involved in the extraction of “rule-like” patterns that are encoded in environmental regularities. As Vaisey (2009) noted in his original paper, this is precisely the sort of learning mechanism required by habitus-type theories like Bourdieu’s (1990), where rule-like patterns are acquired from enculturation processes keyed to experience, without the internalization of explicit rules.
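The extraction of rule-like patterns without the internalization of explicit rules can be given a minimal sketch in the spirit of Reber’s artificial-grammar experiments (the strings, function names, and scoring below are my own illustration, not Reber’s actual procedure): a learner tallies letter-pair frequencies across training strings and then judges new strings by familiarity, without ever representing the rule that generated them.

```python
# Minimal illustrative sketch (my own, not Reber's procedure): implicit
# learning as the extraction of "rule-like" patterns from environmental
# regularities. The learner only tallies adjacent-letter pairs; it never
# represents an explicit rule.

from collections import Counter

def learn(strings):
    """Tally adjacent-letter pairs (bigrams) across the training strings."""
    bigrams = Counter()
    for s in strings:
        for a, b in zip(s, s[1:]):
            bigrams[a + b] += 1
    return bigrams

def familiarity(bigrams, s):
    """Average bigram familiarity of a new string: the stand-in here for an
    'intuitive' judgment of well-formedness."""
    pairs = [a + b for a, b in zip(s, s[1:])]
    return sum(bigrams[p] for p in pairs) / len(pairs)

# Training strings generated by an (unstated) rule: T and S alternate after X
model = learn(["XTSTS", "XTST", "XTSTST"])

# A rule-conforming string registers as more familiar than a rule-violating
# one, even though no rule was ever stated explicitly
assert familiarity(model, "XTS") > familiarity(model, "XSS")
```

The structural point of the implicit-learning story is captured in the final comparison: a rule-conforming string “feels” familiar to this learner even though it holds no explicit representation of the rule.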

In this way, the connection between implicit learning and intuition links naturally with recent work in culture and cognition studies dealing with socialization, internalization, and enculturation (see Lizardo, 2021). This also clarifies an aspect of Vaisey’s (2009) argument that remained somewhat fuzzy, especially when making the link between Haidt’s social intuitionist approach and the work of Bourdieu and Giddens. In the original piece, Vaisey noted that Bourdieu’s habitus could be a sociological equivalent of the “intuitive mind” described in terms of the dual-process framework (and contrasted with the conscious or reflective mind in charge of “justifications”). The intuitive mind was usually in charge, while the reflective mind provided conscious confabulations that made it look like it was in charge. In this respect, the link Vaisey made between Bourdieu and cognitive science was with respect to content: The contents of the intuitive mind described by social and cognitive psychologists were equivalent to the “unconscious dispositions” that Bourdieu thought made up the habitus.

But in linking implicit learning to intuition, we can make a more substantive linkage between the process through which habitus develops and the penchant to engage particular life domains via intuition. This is closer to the dynamic enculturation model of habitus that Vaisey noted was developed by the anthropologists Claudia Strauss and Naomi Quinn, who explicitly linked “habitus to the set of unconscious schemas that people develop through life experience” (Vaisey, 2009, p. 1685).

Thus, intuitions (in the product conception), as (one of the) contemporaneous contents of the “implicit mind,” have their origin in an implicit learning process that abstracts consistent patterns from the regularities of experience (social and otherwise). As Hodgkinson et al. note, “[i]mplicit learning and implicit knowledge contribute to the knowledge structures upon which individuals draw when making intuitive judgments” (2008, p. 2). If you think this is an unwarranted or forced conceptual linkage, note that the equation of implicit learning and intuition was made by Reber himself in the original statement of the modern theory of implicit learning and tacit knowledge. According to Hodgkinson et al. (2008, p. 6; paraphrasing Reber, 1989, p. 232):

Intuition may be the direct result of implicit, unconscious learning: through the gradual process of implicit learning, tacit implicit representations emerge that capture environmental regularities and are used in direct coping with the world (without the involvement of any introspective process). Intuition is the end product of this process of unconscious and bottom-up learning, to engage in particular classes of action.

Note that an implication of this is that we cannot have “intuitions” about domains for which we have not had consistent histories of implicit learning. Absent such a history, we will tend to default to producing judgments and decisions via explicit reasoning mechanisms (“type 2 cognition”). This means that experts in a given domain will likely have more intuitions about that domain than non-experts (Hodgkinson et al., 2008).

Overall, the implications of Vaisey’s original argument for the study of the link between enculturation processes and down-the-line outcomes and group differences in thinking and action are one thread that sociologists would do well to pick up again. All of this also speaks to the value of keeping different flavors of dual-process theorizing analytically distinct so that we can theorize their interconnections.

References

Baumard, N., & Boyer, P. (2013). Religious beliefs as reflective elaborations on intuitions: A modified dual-process model. Current Directions in Psychological Science, 22(4), 295–300.

Bruner, J. S. (1960). The Process of Education. Vintage Books.

Chaiken, S., & Trope, Y. (1999). Dual-process Theories in Social Psychology. Guilford Press.

Dewey, J. (1925). Experience and Nature. Open Court.

Haidt, J. (2001). The emotional dog and its rational tail: a social intuitionist approach to moral judgment. Psychological Review, 108(4), 814–834.

Hodgkinson, G. P., Langan‐Fox, J., & Sadler‐Smith, E. (2008). Intuition: A fundamental bridging construct in the behavioural sciences. British Journal of Psychology, 99(1), 1–27.

Leschziner, V. (2019). Dual-Process Models in Sociology. In W. Brekhus & G. Ignatow (Eds.), The Oxford Handbook of Cognitive Sociology. Oxford University Press.

Lieberman, M. D. (2000). Intuition: a social cognitive neuroscience approach. Psychological Bulletin, 126(1), 109–137.

Lizardo, O. (2021). Culture, cognition, and internalization. Sociological Forum, 36, 1177–1206.

Lizardo, O., Mowry, R., Sepulvado, B., Stoltz, D. S., Taylor, M. A., Van Ness, J., & Wood, M. (2016). What are dual process models? Implications for cultural analysis in sociology. Sociological Theory, 34(4), 287–310.

Polanyi, M. (1966). The Tacit Dimension. Peter Smith.

Reber, A. S. (1989). Implicit learning and tacit knowledge. Journal of Experimental Psychology. General, 118(3), 219–235.

Reber, A. S. (1993). Implicit Learning and Tacit Knowledge: An Essay on the Cognitive Unconscious. Oxford University Press.

Sherman, J. W., Gawronski, B., & Trope, Y. (Eds.). (2014). Dual-process theories of the social mind. Guilford Publications.

Sloman, S. A. (2014). Two systems of reasoning: An update. In J. W. Sherman, B. Gawronski, & Y. Trope (Eds.), Dual-process theories of the social mind (pp. 69–79). Guilford Press.

Sloman, S. A. (1996). The empirical case for two systems of reasoning. Psychological Bulletin, 119(1), 3–22.

Sperber, D. (1997). Intuitive and reflective beliefs. Mind & Language, 12(1), 67–83.

Swidler, A. (1986). Culture in Action: Symbols and Strategies. American Sociological Review, 51(2), 273–286.

Cognitive Artifacts, Affordances, and External Representations: Implications for Cognitive Sociology

We use all kinds of artifacts in our everyday life to accomplish different types of cognitive tasks. We write scientific articles and blog posts by using word-processing programs. We prepare to-do lists to organize work tasks, and those of us who engage in statistical or computational analysis of data use computer programs to perform complex calculations that would be impossible to perform without them.

In this post, I argue that cognitive sociologists should pay more attention to cognitive artifacts and their affordances since many cognitive processes in our everyday lives cannot be properly understood and explained without taking them into account. I will proceed by first characterizing the concepts of cognitive artifact, affordance, and external representation. Then I will briefly discuss my recent paper which analyzes college and university rankings by utilizing these three concepts and the conceptual theory of metaphor.

Cognitive Artifacts

Donald Norman coined the concept of cognitive artifact in the early 1990s. According to his definition, a cognitive artifact is “an artificial device designed to maintain, display, or operate upon information in order to serve a representational function” (Norman 1991: 17). Richard Heersmink (2013) has more recently proposed a taxonomy of cognitive artifacts that includes non-representational cognitive artifacts in addition to representational ones. Here, I will rely on Norman’s definition and focus exclusively on representational cognitive artifacts.

Norman (1991) emphasized that the use of cognitive artifacts changes the nature of the cognitive tasks that a person performs, rather than merely amplifying the person’s brain-based cognitive abilities, and thereby enhances the overall performance of the integrated system composed of the person and her artifact. For example, consider organizing your daily work by means of a to-do list, which transforms the cognitive task of remembering and planning your work tasks into the following cognitive tasks:

  1. writing a list of the relevant work tasks that may be ordered according to their relative priority or some other principle
  2. remembering to consult the list during the workday
  3. reading and interpreting the items written on the list one by one.

To-do lists enhance one’s overall work performance during the workday, for example, by eliminating the moments in which the person must stop to think about what to do next.

From a cultural-historical and developmental viewpoint, it can also be argued that the uses of cognitive artifacts and technologies have transformed our cognitive lives in profound ways. Norman (1991; 1993) and many others (e.g., Donald 1991; Tomasello 1999) have emphasized that one of the distinctive features of our species is our ability to modify our environments by creating new artifacts, refining the artifacts that our ancestors have invented, and transmitting these artifacts to subsequent generations. Here is a relatively random list of some important types of cognitive artifacts that our species has invented: cave paintings, bookkeeping documents, handwritten texts, maps, calendars, clocks, compasses, printed texts, diagrams, thermometers, physical scale models, computers, computational models, GPS devices, and social media messages.

This list illuminates at least two facts. The first is that cognitive artifacts are not a recent innovation in human history: the earliest cave paintings date back more than 30,000 years, and the earliest writing systems were developed more than five millennia ago. The second is that most of these artifacts have developed gradually over many generations. Many researchers have also emphasized how new cognitive artifacts, tools, and technologies transform the embodied cognitive processes and capacities of people when they become integral parts of their everyday environments and cultural practices, including those pertaining to cognitive development (e.g., Clark 1997; 2003; Donald 1991; Hutchins 1995; 2008; Malafouris & Renfrew 2010; Menary & Gillett 2022; Kirsh 2010; Vygotsky 1978). Hence, cognitive artifacts and technologies are important for understanding historical and cultural variation in human cognition.

Affordances

The concept of affordance provides a useful tool for analyzing the properties of cognitive artifacts in the contexts where they are used. James J. Gibson (1979) introduced the notion of affordance as part of his ecological theory of visual perception. Gibson writes that “[t]he affordances of the environment are what it offers the animal, what it provides or furnishes, either for good or ill” (p. 127). Gibson’s theory addressed the question of how living organisms perceive their immediate natural environments and emphasized the action-relatedness of perceptual processes. Norman (1993: 106) extended the concept of affordance to the domain of human-made artifacts and technologies by arguing that “[d]ifferent technologies afford different operations” for their users, thereby making “some things easy to do, others difficult or impossible”. It is important to understand that the affordances of a particular technology or cognitive artifact depend not only on its intrinsic properties but also on the user’s particular bodily and cognitive features, abilities, and skills. For example, a geographical map provides cognitive affordances for navigation only to those who can read cartographic symbols and compass points. In this sense, affordances are relational.

External Representations

Since cognitive artifacts serve representational functions, the notion of external representation can be used to analyze how the affordances of a cognitive artifact shape how its users process information. According to David Kirsh (2010: 441), external representations that are maintained, displayed, or operated by cognitive artifacts may transform our cognitive capacities in at least seven ways:

They change the cost structure of the inferential landscape; they provide a structure that can serve as a shareable object of thought; they create persistent referents; they facilitate re-representation; they are often a more natural representation of structure than mental representations; they facilitate the computation of more explicit encoding of information; they enable the construction of arbitrarily complex structure; and they lower the cost of controlling thought – they help coordinate thought.

Although not all cognitive artifacts do all these things, Kirsh’s list and his examples clarify that cognitive artifacts are not just external aids to internal cognitive processes. Instead, they tend to alter the cognitive processes of their users by enabling them to outsource cognitive tasks that they would otherwise have to (attempt to) perform internally and, in some cases, enable them to accomplish new cognitive tasks that would be impossible without using the cognitive artifact. In their recent article, Richard Menary and Alexander Gillett (2022) also emphasize that cognitive tools (or cognitive artifacts in my terminology) function as tools for enculturation, thereby transforming the embodied cognitive capacities of their users who participate in culturally specific cognitive practices (see also Hutchins 1995; 2008).

Implications: Explaining the Paradox of University Rankings

In my recent article (Kaidesoja 2022), I used Wendy Nelson Espeland and Michael Sauder’s (e.g., 2016) case study of the U.S. News and World Report (hereafter USN) magazine’s law school ranking as a springboard for developing a theoretical framework to explain the paradox of university rankings: the impact of global and national university rankings has increased even as a growing number of researchers have documented their methodological flaws and counterproductive consequences for university-based research and education (Kaidesoja 2022: 129–130). One aspect of the framework was my suggestion that the published league tables of university rankings can be understood as cognitive artifacts that provide specific affordances for their audiences to perform cognitive tasks. For example, the latest USN league table of law schools (see here) provides at least the following affordances to the decision-making of prospective law students who, it is plausible to assume, are all literate and numerate:

  • Affords them to perceive a hierarchical and transitive order represented by the spatial relations among the names of law schools such that highly ranked law schools are at the top;
  • Affords them to make unequivocal, quick, and easy comparisons between any two law schools in terms of their rank;
  • Affords them to coordinate information about the rank, location, tuition, and enrollment for each school;
  • Affords them to compare the rank of a university to its ranks in the previously published tables;
  • Affords them to share the ranking results with others (e.g., through social media);
  • Provides them with a stable object that affords joint attention and references in conversations (either in web-mediated or face-to-face communication) (Kaidesoja 2022: 144–145).

These affordances relate both to the visual features of the league tables and their functional properties as parts of the socially distributed cognitive processes that involve more than one actor. An example of the latter could be a situation where a prospective student justifies her decision to apply to Yale University to her parents by showing them that it is the best law school in the league table.

However, my argument was not that the USN ranking of law schools is the only factor affecting the decision-making of prospective students; other things obviously also influence this process, such as a law school’s distance from home, the financial resources of the student’s parents, career plans, and LSAT scores. Still, there is evidence that the USN ranking of law schools is an important factor in how many prospective students make their choices among law schools (see Espeland & Sauder 2016: chapter 3). One reason for this, it seemed to me, is that the published league tables afford perceptions, comparisons, and communications that would be difficult or impossible without them. Hence, I hypothesized that the affordances of these cognitive artifacts are part of the explanation of why and how many prospective law students outsource part of their decision-making to the USN rankings.

I also argued that we must consider the embodied cognitive processes through which prospective law students interpret the ranking results, since these processes motivate them to integrate the USN rankings into their decision-making. Relying on Lakoff and Johnson’s (e.g., 2003) conceptual theory of metaphor, I proposed that prospective law students use the league tables of team sports as a source system for a metaphorical analogy guiding their understanding of the law school rankings (which are also published in league-table format by the USN). My hypothesis was that this league-table metaphor leads many prospective students to assume that – just like the competition between teams in a sports league – the competition between law schools for ranking scores is a zero-sum game, in which excellent quality is a scarce resource and in which quality is objectively measured by the ranking scores that determine a law school’s ranking position (Kaidesoja 2022: 141–142). Although these assumptions give prospective students a way of making sense of the ranking results, they are quite problematic given the methodological problems and biases involved in the USN rankings, which overlook contextual differences between law schools, overemphasize competitive relations between them, and embed arbitrary value judgments concerning the quality of legal education (Espeland & Sauder 2016: chapter 1; Kaidesoja 2022: 143).

Moving Forward

In a recent paper on two traditions of cognitive sociology, co-authored with Mikko Hyyryläinen and Ronny Puustinen (2021), we argued, among other things, that interdisciplinary cognitive sociologists, who emphasize the importance of integrating cognitive-scientific perspectives into cultural sociology, have not yet systematically addressed cognitive artifacts and their affordances. Rather, most of them have focused on how culture influences the intracranial cognition of individuals. Without denying the importance of this project, we argued that there are good reasons to also consider the extracranial elements of cognitive mechanisms and to begin developing new theoretical and methodological approaches for studying the role of cognitive artifacts and technologies in social action and cognitive development (cf. Norton 2020; Lizardo 2022; Turner 2018). I hope that my paper on university rankings provides some ideas about how one could develop mechanistic explanations that include both extracranial and intracranial cognitive elements.

References

Clark, A. (1997). Being There: Putting Brain, Body, and World Together Again. MIT Press.

Clark, A. (2003) Natural-Born Cyborgs: Minds, Technologies, and the Future of Intelligence. Oxford University Press.

Donald, M. (1991). Origins of the Modern Mind: Three Stages in the Evolution of Culture and Cognition. Harvard University Press.

Espeland, W.N. & Sauder M. (2016) Engines of Anxiety: Academic Rankings, Reputation, and Accountability. Russell Sage Foundation.

Gibson, J.J. (1979) The Ecological Approach to Visual Perception. Houghton Mifflin Harcourt.

Heersmink, R. (2013). A Taxonomy of Cognitive Artifacts: Function, Information, and Categories. Review of Philosophy and Psychology, 4(3), 465–481. https://link.springer.com/article/10.1007/s13164-013-0148-1

Hutchins, E. (1995) Cognition in the Wild. The MIT Press.

Hutchins, E. (2008) The Role of Cultural Practices in the Emergence of Modern Human Intelligence. Philosophical Transactions of the Royal Society B: Biological Sciences 363 (1499): 2011–2019.

Kaidesoja, T. (2022) A Theoretical Framework for Explaining the Paradox of University Rankings. Social Science Information, 61(1), 128–153. https://journals.sagepub.com/doi/full/10.1177/05390184221079470

Kaidesoja, T., Hyyryläinen, M. & Puustinen, R. (2021) Two Traditions of Cognitive Sociology: An Analysis and Assessment of Their Cognitive and Methodological Assumptions. Journal for the Theory of Social Behavior. https://onlinelibrary.wiley.com/doi/full/10.1111/jtsb.12341

Kirsh, D. (2010) Thinking with External Representations. AI & Society 25: 441–454.

Lakoff, G. & Johnson, M. (2003) Metaphors We Live by (With a New Afterword). The University of Chicago Press.

Lizardo, O. (2022). What is Implicit Culture? Journal for the Theory of Social Behavior. https://onlinelibrary.wiley.com/doi/10.1111/jtsb.12333

Malafouris, L., & Renfrew, C. (Eds.). (2010). The Cognitive Life of Things: Recasting the Boundaries of the Mind. McDonald Institute Monographs.

Menary, R. & Gillett, A. (2022) The Tools of Enculturation. Topics in Cognitive Science: 1–25. https://onlinelibrary.wiley.com/doi/10.1111/tops.12604

Norman, D.A. (1991) Cognitive Artifacts. In: Carroll, J.M. (ed.) Designing Interaction. Cambridge University Press, pp.17–38.

Norman, D.A. (1993) Things That Make Us Smart: Defending Human Attributes in the Age of the Machine. Addison–Wesley.

Norton, M. (2020). Cultural Sociology Meets the Cognitive Wild: Advantages of the Distributed Cognition Framework for Analyzing the Intersection of Culture and Cognition. American Journal of Cultural Sociology, 8, 45–62. https://doi.org/10.1057/s41290-019-00075-w

Tomasello, M. (1999). The Cultural Origins of Human Cognition. Harvard University Press.

Turner, S. P. (2018). Cognitive Science and the Social. Routledge.

Vygotsky, L. S. (1978). Mind in Society: The Development of Higher Psychological Processes. Harvard University Press.

 

From Dual-Process Theories to Cognitive-Process Taxonomies

Although dual-process ideas have a history as old as the social and behavioral sciences (and, for some, as old as philosophical reflection on the mind itself), formal dual-process models of cognition have been with us for only a bit over two decades, becoming established in cognitive and social psychology in the late 1990s (see Sloman, 1996 and Smith and DeCoster, 2000 for foundational reviews). The implicit measurement revolution provided the “data” side to the theoretical and computational modeling side, fomenting further theoretical and conceptual development (Strack & Deutsch, 2004; Gawronski & Bodenhausen, 2006). Although not without its critics, the dual-process approach has now blossomed into an interdisciplinary framework useful for studying learning, perception, thinking, and action (Lizardo et al., 2016). In sociology, dual-process ideas were introduced, by way of Jonathan Haidt’s (2001) dual-process model of moral reasoning, in Steve Vaisey’s (2009) now classic and still heavily cited paper. Sociological applications of the dual-process framework to specific research problems now abound, with developments on both the substantive and measurement sides (Miles, 2015; Miles et al., 2019; Melamed et al., 2019; Srivastava & Banaji, 2011).

The dual-process framework revolves around the ideal-typical distinction between two “modes” or “styles” of cognition (Brett & Miles, 2021). These are by now very familiar. One is the effortful, usually conscious, deliberate processing of serially presented information, potentially available for verbal report (as when reasoning through a deductive chain or doing a hard math problem in your head). The other is the seemingly effortless, automatic, usually unconscious, associative processing of information (as when a solution to a problem just “comes” to you, or when you just “know” something without seemingly having gone through steps to reach the solution). The latter is usually referred to as intuitive, automatic, or associative “Type 1” cognition, and the former as effortful, deliberate, or non-automatic “Type 2” cognition.

As with many hard and fast distinctions, there is the virtue of simplification and analytic power, but there is also the limitation, evident to all, that the differentiation between Type 1 and Type 2 cognition occludes as much as it reveals. For instance, people wonder about the existence of “mixed” types of cognition, about iterative cycles between the two modes, or about the capacity of one mode (usually Type 2) to override the outputs of the other (usually Type 1). The answer to all these wonders seems to be a general “yes.” We can define a construct like “automaticity” to admit various “in-between” types (Moors & De Houwer, 2006), suggesting that a pure dichotomy is too simple (Melnikoff & Bargh, 2018). And yes, the two types of cognition interact and cycle (Cunningham & Zelazo, 2007). The interactive perspective is even built into some measurement strategies, which rely on overloading or temporarily overwhelming the deliberate system to force people to respond with intuitive Type 1 cognition (as in so-called “cognitive load” techniques; see Miles, 2015 for a sociological application).

Another sort of wonder revolves around whether these are the only types of cognition that exist. Are there more? Accordingly, some analysts speak of “tri” or “quad” process models and the like (Stanovich, 2009). It seems, therefore, that the field is moving toward a taxonomic approach to the study of cognitive processes, although the criteria or “dimensions” around which such taxonomies are to be constructed remain in flux. As I noted in a previous post, moving toward a taxonomic approach is generally a good thing, and the field of memory research is a good model for how to build taxonomic theory in cognitive social science (CSS), especially since the “kinds” typically studied in CSS are usually “motley” (natural kinds that decompose into fuzzy subkinds). When studying motley kinds and organizing them into fruitful taxonomies, it is essential to focus on the analytic dimensions and let the chips fall where they may. This is different from thinking up “new types” of cognition from the armchair in unprincipled ways, where the dimensions that define the types are ill-defined (as with previous attempts to talk about tri-process models of cognition and the like). Moreover, the dimensional approach leaves things open to the discovery of surprising “subkinds” that combine properties in counter-intuitive ways.

Accordingly, an upshot of everyone now accepting (even begrudgingly) some version of the dual-process theory is that we also agree that the cognitive-scientific kind “cognition” is itself motley! That is, whatever it is, cognition is not a single kind of thing. Right now, we kind of agree that it is at least two things (as I said, an insight that is as old as the Freudian distinction between primary and secondary process), but it is likely that it could be more than two. In this post, I’d like to propose one attempt to define the possible dimensional space from which a more differentiated typology of cognitive processes can be constructed.

Taxonomizing Cognition

So if we needed to choose dimensions along which to taxonomize cognition, where would we begin? A suitable starting point is to pick two closely aligned dimensions of cognition that were once thought to be fused or highly correlated but are now seen as partially orthogonal. For example, in a previous post on the varieties of “implicitness” (which is arguably the core dimension defining the central distinction in dual-process models), I noted that social and cognitive psychologists differentiate between two criteria for deeming something “implicit.” First, a-implicitness uses an “automaticity” criterion: cognition is implicit if it is automatic and explicit if it is deliberate or effortful. Second, u-implicitness uses a(n) (un)consciousness criterion: cognition is implicit if it occurs outside of consciousness and explicit if it is conscious.

I implied (but did not explicitly argue) in that post that these two dimensions of implicitness might come apart. If they can, they seem like pretty good criteria for building a taxonomy of cognitive process kinds that goes beyond two! This is precisely what the philosophers Nicholas Shea and Chris Frith did in a paper published in 2016 in Neuroscience of Consciousness. Cross-classifying the type of processing (deliberate v. automatic) against the type of representations over which the processing occurs (conscious v. unconscious) yields a new “type” of cognition, which they refer to as “Type 0 cognition.”

In Shea and Frith’s taxonomy, our old friend Type 1 cognition refers to the automatic processing of initially conscious representations, typically resulting in conscious outputs. In their words, “[t]ype 1 cognition is characterized by automatic, load-insensitive processing of consciously represented inputs; outputs are typically also conscious.” (p.4). This definition is consistent with Evans’s (2019) more recent specification of Type 1 cognition as working-memory independent cognition that still uses working memory to “deposit” the output of associative processing. In Evans’s words,

While Type 1 processes do not require the resources of working memory or controlled attention for their operation (or they would be Type 2) they do post their products into working memory in a way that many autonomous processes of the brain do not. Specifically, they bring to mind judgements or candidate responses of some kind accompanied by a feeling of confidence or rightness in that judgement (p. 384).

For Shea and Frith (2016), on the other hand, our other good friend, Type 2 cognition, refers to the deliberate, effortful processing of conscious representations. In their words,

Type 2 cognition is characterized by deliberate, non-automatic processing of conscious representations. It is sensitive to cognitive load: type 2 processes interfere with one another. Type 2 cognition operates on conscious representations, typically in series, over a longer timescale than type 1 cognition. It can overcome some of the computational limitations of type 1 cognition, piecemeal, while retaining the advantage of being able to integrate information from previously unconnected domains. It is computation-heavy and learning-light: with its extended processing time, type 2 cognition can compute the correct answer or generate optimal actions without the benefit of extensive prior experience in a domain (p. 5).

By way of contrast with these familiar faces, our new friend Type 0 cognition refers to the automatic processing of non-conscious representations. Shea and Frith see isolating Type 0 cognition as a separate cognitive-process subkind as their primary contribution. Previous work, in their view, has run Type 0 and Type 1 cognition together, to their analytic detriment. Notably, they argue for the greater (domain-specific) efficiency and accuracy of Type 0 cognition over Type 1. They note that various deficiencies of Type 1 cognition identified in such research programs as the “heuristics and biases” literature come from the fact that, in Type 1 cognition, there is a mismatch between process and representation because automatic/associative processes are recruited to deal with conscious representational inputs.

For instance, Type 1 cognition is at work when Haidt asks people whether they would wear Hitler’s t-shirt, and they say “ew, no way!” but are unable to come up with a morally reasoned justification (or make up an implausible one on the spot). Type 1 moral cognition “misfires” here because the associative (“moral intuition”) system was recruited to process conscious inputs, relied on an associative/heuristic process to generate an answer (in this case, based on implicit contact, purity, and contagion considerations), and produced a conscious output whose origins subjects are completely unaware of (and are thus forced to retrospectively confabulate using Type 2 cognition). The same goes for judgment and decision-making tasks in which people commit the base-rate fallacy, rely on the representativeness heuristic, and the like (Kahneman, 2011).

The types of cognition for which there is a match made in heaven between process and representation (Type 2 and their Type 0) result in adaptive cognitive processes that “get the right answer.” Type 2 cognition is suited to domain-general problems requiring information integration and the careful weighing of alternatives. Type 0 cognition is suited to domain-specific problems requiring fast, adaptive cognitive processing and action control, where consciousness (if it were to rear its ugly head) would spoil the fun and impair the effectiveness of the cognitive system at doing what it is supposed to do, as with athletes who “choke” when they become conscious of what they are doing (see Beilock, 2011).

So, what is Type 0 cognition good for? Shea and Frith point to things like the implicit learning of probabilistic action/reward contingencies after many exposures (e.g., reinforcement learning), where neither the probabilities nor the learning process is consciously represented, and the learning happens via associative steps. As they note, “model-free reinforcement learning can generate optimal decisions when making choices for rewards, and feedback control can compute optimal action trajectories…non-conscious representation goes hand-in-hand with correct performance” (p. 3). In the same way, “Type 0 cognition is likely to play a large role in several other domains, for example in the rich inferences which occur automatically and without consciousness in the course of perception, language comprehension and language production” (ibid).
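For readers unfamiliar with how “model-free” learning of action/reward contingencies works, here is a minimal, hypothetical sketch (not Shea and Frith’s own example; the function name and parameters are illustrative). A two-armed bandit learner’s only “representation” is a pair of running value estimates, each nudged toward observed rewards by an associative delta-rule update; the reward probabilities themselves are never explicitly represented anywhere in the system:

```python
import random

def run_bandit(reward_probs, steps=5000, alpha=0.1, epsilon=0.1, seed=42):
    """Model-free (associative) value learning on a multi-armed bandit.

    Each action's value estimate is nudged toward the observed reward
    (a delta-rule / Rescorla-Wagner-style update); no explicit model of
    the reward probabilities is ever built or consulted.
    """
    rng = random.Random(seed)
    values = [0.0] * len(reward_probs)
    for _ in range(steps):
        # Mostly exploit the current value estimates, occasionally explore.
        if rng.random() < epsilon:
            action = rng.randrange(len(values))
        else:
            action = max(range(len(values)), key=lambda a: values[a])
        reward = 1.0 if rng.random() < reward_probs[action] else 0.0
        # Associative step: move the estimate a little toward the outcome.
        values[action] += alpha * (reward - values[action])
    return values

# After many exposures, the estimates track the true contingencies
# (0.8 vs. 0.2) and the learner reliably prefers the better action.
estimates = run_bandit([0.8, 0.2])
```

The point of the sketch is that each update is a local associative step over quantities the system never “reports”: optimal choice behavior emerges without anything resembling a conscious representation of the probabilities being learned.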

Organizing the Types

So, where does Shea and Frith’s taxonomy of cognitive process kinds leave us? Well, maybe something like the dimensional typology shown in Figure 1. It seems like at least three different cognitive process kinds are well-defined, especially if you are convinced that we should distinguish Type 0 from Type 1 cognition (and I think I am).

Figure 1.

However, as I argued earlier, a key advantage of beginning with dimensions in any taxonomical exercise is that we may end up with a surprise. Here, it is the fact that a fourth potential type of cognition now appears in the lower-right quadrant, one that no one has given much thought to before. Type ??? cognition: the deliberate processing of unconscious representations. Can this even be a thing? Shea and Frith note this implication of their taxonomic exercise but think nothing actually fits there. They even point out that it may be a positive contribution of their approach to have discovered this “empty” slot in cognitive-process-kind space. In their words, “[w]hat of the fourth box? This would be the home of deliberate processes acting on non-conscious representations. It seems to us that there may well be no type of cognition that fits in this box. If so, that is an important discovery about the nature of consciousness” (p. 7).

Nevertheless, are things so simple? Maybe not. The Brains Blog dedicated a symposium to the paper in 2017, in which three authors provided commentaries. Not surprisingly, some of the commenters did not buy the “empty slot” argument. In his comment, Jacob Berger points to some plausible candidates for Shea and Frith’s Type ??? cognition (which he refers to as “Type 0.5 cognition”). These include the (somewhat controversial) work of Dijksterhuis, Aarts, and collaborators (e.g., Dijksterhuis & Nordgren, 2006; Dijksterhuis & Aarts, 2010) on “unconscious thought theory” (UTT) (see Bargh, 2011 for a friendly review). In the UTT paradigm, participants are asked to make seemingly deliberate choices between alternatives, with a “right” answer that maximizes a set of quality dimensions, while conscious thinking is impaired via cognitive load. The key result is that participants who engage in this “unconscious thinking” end up making choices that are as optimal as those of people who think about the problem reflectively. So, this seems to be a case of a deliberate thinking process operating over unconscious representations.

Berger does anticipate an objection to UT’s candidacy for Type ??? cognition, one that itself brings up an issue with critical taxonomic ramifications:

S&F might reply that such [UT] cases are not genuinely unconscious because, like examples of type-1 cognition, they involve conscious inputs and outputs. But if this processing is not type 0.5, then it is hard to see where S&F’s taxonomy accommodates it. The cognition does not seem automatic, akin to the processing of type 0 or type 1 of which one is unaware (it seems, for example, rather domain general); nor does it seem to be a case of type-2 cognition, since one is totally unaware of the processing that results in conscious outputs. Perhaps what is needed is an additional distinction between the inputs/outputs of a process’ being conscious and the consciousness of states in the intervening processing. In type-1 cognition, the inputs/outputs are conscious, but the states involved in the automatic processing are not; in type-2, both are conscious. We might therefore regard Dijksterhuis’ work as an instance of ‘type-1.5’ cognition: conscious inputs/outputs, but deliberative unconscious processing.

Thus, Berger proposes not only to dissociate conscious/unconscious representations from deliberate/automatic processing but also to distinguish whether the inputs and outputs of a cognitive process, and its intervening steps, are themselves conscious or unconscious. Berger’s implied taxonomy can thus be represented as in Figure 2.

Figure 2.

Figure 2 clarifies that the actual mystery type is not one connecting conscious inputs and outputs with deliberate unconscious processing (UT), but one linking unconscious inputs and outputs with deliberate unconscious processing (the new Type ???). The figure also makes clear that the proper empty slot is a type of cognition conjoining unconscious inputs and outputs with deliberate conscious processing; this bizarre and implausible combination can indeed be ruled out on a priori grounds. Note, in contrast, that if there is such a thing as deliberate unconscious processing (and the jury is still out on that), there is no reason to rule out the new Type ??? cognition shown in Figure 2 on a priori grounds (as Shea and Frith tried to do with Berger’s Type 1.5). For instance, Bargh (2011) argues that unconscious goal pursuit (a type of unconscious thought) can be triggered outside of awareness (unconscious input) and also has behavioral consequences (e.g., trying hard on a task) of which subjects may be unaware (unconscious output). In this sense, Bargh’s unconscious goal pursuit would qualify as a candidate for Type ??? cognition. So, following Berger’s recommendation, we end up with five (I know, an ugly prime) candidate cognition types.

So, What?

Is all we are getting after all of this a more elaborate typology? Well, yes. And that is good! However, I think the more differentiated approach to carving the cognitive-process world also leads to some substantive insight. I refer in particular to Shea and Frith’s introduction of the Type 0/Type 1 distinction. For instance, in a recent review (and critique) of dual-process models of social cognition, Amodio proposes an “interactive memory systems” account of attitudes and impression formation (“Social Cognition 2.0”) that attempts to go beyond the limitations of the traditional dual-process model (“Social Cognition 1.0”).

Amodio’s argument is wide-ranging, but his central point is that there are multiple memory systems, so that a conception of Type 1 cognition as a single network of implicit concept/concept associations over which unconscious cognition operates is incomplete. In addition to concept/concept associations, Amodio points to other types of associative learning, including Pavlovian (affective) and instrumental (reinforcement) learning. The upshot is that something like an “implicit attitude,” insofar as it recruits multiple but distinct (and dissociable) forms of memory and learning subserved by different neural substrates, is not a single kind of thing (a taxonomical exercise for the future!). This dovetails nicely with the current effort to taxonomize cognitive processes. Thus, a standard conceptual association between categories of people and valenced traits operates via Type 1 cognition. However, it is likely that behavioral approach/avoid tendencies toward the same types of people, being the product of instrumental/reinforcement learning mechanisms, operate via Shea and Frith’s Type 0 cognition.

References

Bargh, J. A. (2011). Unconscious Thought Theory and Its Discontents: A Critique of the Critiques. Social Cognition, 29(6), 629–647.

Beilock, S. L. (2011). Choke. The secret of performing under pressure. London: Constable.

Brett, G., & Miles, A. (2021). Who Thinks How? Social Patterns in Reliance on Automatic and Deliberate Cognition. Sociological Science, 8, 96–118.

Cunningham, W. A., & Zelazo, P. D. (2007). Attitudes and evaluations: a social cognitive neuroscience perspective. Trends in Cognitive Sciences, 11(3), 97–104.

Dijksterhuis, A., & Aarts, H. (2010). Goals, attention, and (un)consciousness. Annual Review of Psychology, 61, 467–490.

Evans, J. S. B. T. (2019). Reflections on reflection: the nature and function of type 2 processes in dual-process theories of reasoning. Thinking & Reasoning, 25(4), 383–415.

Gawronski, B., & Bodenhausen, G. V. (2006). Associative and propositional processes in evaluation: an integrative review of implicit and explicit attitude change. Psychological Bulletin, 132(5), 692–731.

Haidt, J. (2001). The emotional dog and its rational tail: a social intuitionist approach to moral judgment. Psychological Review, 108(4), 814–834.

Kahneman, D. (2011). Thinking, Fast and Slow. Farrar, Straus and Giroux.

Lizardo, O., Mowry, R., Sepulvado, B., Stoltz, D. S., Taylor, M. A., Van Ness, J., & Wood, M. (2016). What are dual process models? Implications for cultural analysis in sociology. Sociological Theory, 34(4), 287–310.

Melamed, D., Munn, C. W., Barry, L., Montgomery, B., & Okuwobi, O. F. (2019). Status Characteristics, Implicit Bias, and the Production of Racial Inequality. American Sociological Review, 84(6), 1013–1036.

Melnikoff, D. E., & Bargh, J. A. (2018). The mythical number two. Trends in Cognitive Sciences, 22(4), 280–293.

Miles, A. (2015). The (Re)genesis of Values: Examining the Importance of Values for Action. American Sociological Review, 80(4), 680–704.

Miles, A., Charron-Chénier, R., & Schleifer, C. (2019). Measuring Automatic Cognition: Advancing Dual-Process Research in Sociology. American Sociological Review, 84(2), 308–333.

Moors, A., & De Houwer, J. (2006). Automaticity: a theoretical and conceptual analysis. Psychological Bulletin, 132(2), 297–326.

Sloman, S. A. (1996). The empirical case for two systems of reasoning. Psychological Bulletin, 119(1), 3–22.

Smith, E. R., & DeCoster, J. (2000). Dual-Process Models in Social and Cognitive Psychology: Conceptual Integration and Links to Underlying Memory Systems. Personality and Social Psychology Review: An Official Journal of the Society for Personality and Social Psychology, Inc, 4(2), 108–131.

Srivastava, S. B., & Banaji, M. R. (2011). Culture, Cognition, and Collaborative Networks in Organizations. American Sociological Review, 76(2), 207–233.

Stanovich, K. E. (2009). Distinguishing the reflective, algorithmic, and autonomous minds: Is it time for a tri-process theory? In J. S. B. T. Evans (Ed.), In two minds: Dual processes and beyond (pp. 55–88). Oxford University Press.

Strack, F., & Deutsch, R. (2004). Reflective and impulsive determinants of social behavior. Personality and Social Psychology Review: An Official Journal of the Society for Personality and Social Psychology, Inc, 8(3), 220–247.

Vaisey, S. (2009). Motivation and Justification: A Dual-Process Model of Culture in Action. American Journal of Sociology, 114(6), 1675–1715.

Consciousness and Schema Transposition

In a recent paper published in American Sociological Review, Andrei Boutyline and Laura Soter bring much-needed conceptual clarification to the sociological appropriation of the notion of schemas while also providing valuable and welcome guidance on future uses of the concept for practical research purposes. The paper is a tour de force, and all of you should read it (carefully, perhaps multiple times), so this post will not summarize their detailed argument. Instead, I want to focus on a subsidiary but no less important set of conclusions towards the end, mainly having to do with the relationship between declarative and nondeclarative cognition and an old idea in sociological action theory due to Bourdieu (1980/1990) that was further popularized in the highly cited article by Sewell (1992) on the duality of structure. I refer to the notion of schematic transposition.

In what follows, I will first outline Bourdieu’s and Sewell’s use of the notion and then go over how Boutyline and Soter raise a critical technical point about it, pointing to what is perhaps a consequential theoretical error. Finally, I will close by pointing to some lines of evidence in cognitive neuroscience that seem to buttress Boutyline and Soter’s position.

The idea of schematic transposition is related to an older idea, due to Piaget, of schema transfer. The basic proposal is that we can learn to engage in a set of concrete activities (let’s say “seriation,” or putting things in rows or lines) in one particular practical context (putting multiple pebbles or marbles in a line). Then, after many repetitions, we develop a schema for it. Later, when learning about things in another context, let’s say “the number line” in basic arithmetic, we understand (assimilate) operations in this domain in terms of the previous seriation schema. Presumably, analogies and conceptual metaphors also depend on this schema transfer mechanism. In The Logic of Practice, Bourdieu built this dynamic capacity for schema transfer into the definition of habitus that everyone loves to hate, noting that the habitus can be thought of as “[s]ystems of durable, transposable dispositions, structured structures predisposed to function as structuring structures…” and so forth (p. 53).

This idea of transposability ends up being essential for a habit theory like Bourdieu’s because it adds much-needed flexibility and creativity to how we conceive of the social agent going about their lives (Joas, 1996). This is because thinking of action as driven by habitus does not entail that people are stuck with “one-track,” inflexible, or mechanical dispositions. Instead, via their capacity to transpose classificatory or practical habits learned in one domain to others, their internalized practical culture functions in a more “multi-track” way, being thus adaptive and creative. In an old paper on the notion of habitus (2004), I noted something similar, pointing out that “it is precisely this idea of flexible operations that allows for the habitus to not be tied to any particular content…instead, the habitus is an abstract, non-context specific, transposable matrix” (pp. 391–392). Thus, there is something about transposability that seems necessary in a theory of action so that it does not come off as overly deterministic or mechanical.

In his famous 1992 paper, Sewell went even further, putting transposability at the very center of his conception of social change and agency. Departing from a critique of Bourdieu, Sewell noted two things. First (p. 16), any society contains a multiplicity of “structures” (today, we’d probably use the term “field,” “sphere,” or “domain”). Secondly (p. 17), this means people need to navigate across them somehow. Single-track theories of habit and cognition cannot explain how this navigation is possible. This navigation is made possible, according to Sewell, only by theorizing “the transposability of schemas.” As Sewell notes:

…[T]he schemas to which actors have access can be applied across a wide range of circumstances…Schemas were defined above as generalizable or transposable procedures applied in the enactment of social life. The term “generalizable” is taken from Giddens; the term “transposable,” which I prefer, is taken from Bourdieu…To say that schemas are transposable, in other words, is to say that they can be applied to a wide and not fully predictable range of cases outside the context in which they are initially learned…Knowledge of a rule or a schema by definition means the ability to transpose or extend it-that is, to apply it creatively. If this is so, then agency, which I would define as entailing the capacity to transpose and extend schemas to new contexts, is inherent in the knowledge of cultural schemas that characterizes all minimally competent members of society (p. 17-18).

Thus, in Sewell, the very concept of agency becomes defined by the actor’s capacity to transpose schemas across contexts and domains!

Nevertheless, is the link between the idea of schema and that of schematic transposition cogent? Boutyline and Soter (2021) incisively point out that it may not be. To see this, it is important to reiterate their “functional” definition of schemas as “socially shared representations deployable in automatic cognition” (735). The key here is “automatic cognition.” As I noted in an earlier post on “implicit culture,” a common theoretical error in cultural theory consists of taking the properties of forms of “explicit” representations we are familiar with and then postulating that there are “implicit” forms of representation having the same properties, except that they happen to be unconscious, tacit, implicit and the like. The problem is that representations operating at the tacit level need not (and usually cannot) share the same properties as those operating at the explicit level.

Boutyline and Soter note a similar tension in ascribing the property “transposable” to a tacit or nondeclarative form of culture like a schema, which generally operates in Type I cognition. In their words,

A…correlate of Type I cognition is domain-specificity. Type II knowledge can be context-independent and abstract—qualities enabled in part via the powerful expressive characteristics of language—and tied to general-purpose intelligence and logical or hypothetical reasoning…In contrast, Type I knowledge is often domain-specific—thoroughly tied to, and specifically functioning within, contexts closely resembling the one in which it was learned…Type II knowledge (e.g., mathematical or rhetorical tools) can be transposed with relative ease across diverse contexts, but the principles that underlie Type I inferences may not be transferrable to other domains without the help of Type II processes.

So, it seems like both Bourdieu and Sewell (drawing on Bourdieu) made a crucial property conjunction error, bestowing a magical power (transposability) on implicit (personal) culture. This type of personal culture cannot display the transposability property precisely because it is implicit (previously, I argued that people do this with a version of symbolic representational status). Boutyline and Soter (p. 742) revisit Sewell’s example of the “commodity schema,” convincingly demonstrating that, to the extent that this schema ends up being “deep” because it is transposable, specific episodes of transposition cannot themselves operate on automatic autopilot. Instead, “novel instance[s] of commodification” must be “consciously and intentionally devised” (ibid). Thus, to the extent that they are automatically deployable, schemas are non-transposable. Transposability requires that schemas be “representationally redescribed” (in Karmiloff-Smith’s [1995] terms) into more flexible explicit formats. Tying this insight to recent work on the sociological dual-process model, Boutyline and Soter conclude that the application of existing knowledge to new domains is best “understood as a feature of effortful, controlled cognition” (750).

Boutyline and Soter’s compelling argument does pose a dilemma and a puzzle. The dilemma is that a really attractive theoretical property of schemas (for Bourdieu, Sewell, and the many, many people who have used their insights and been influenced by their formulations) was transposability. Without it, schemas become a much diminished and less helpful concept. The puzzle is that there are many historical and contemporary examples of empirical instances of what looks like schematic transposition. How does this happen?

Here, Boutyline and Soter provide a very elegant theoretical solution, drawing on recent work suggesting that culture can “travel” within persons across the declarative/nondeclarative divide via redescription processes and across the public/personal one via internalization/externalization processes. They note that because schemas are representational, they can be externalized (or representationally redescribed) into explicit formats (from nondeclarative to declarative). People can also internalize them from the public domain when they interact in the world (from public to personal/nondeclarative; see Arseniev-Koehler and Foster, 2020). As Boutyline and Soter note, representational redescription,

…could make the representational contents of a cultural schema available to effortful conscious cognition, which we suspect may be generally necessary to translate these representations to novel domains. After they are transformed to encompass new settings, the representational contents could then travel the reverse pathway, becoming routinized through repeated application into automatic cognition. The end product of this process would be a cultural schema that largely resembles the original schema but now applies to a broader set of domains. Representational redescription may thus be key to social reproduction, wherein familiar social arrangements backed by widely shared cultural schemas…are adapted so they may continue under new circumstances (751).

Does cognitive neuroscience’s current state of the art support the idea that consciousness is required to integrate elements from multiple experiential and cultural domains? The answer seems to be a qualified “yes,” with the strongest proponents suggesting that the very function of consciousness and explicit processing is cross-domain information integration (Tononi, 2008). A more plausible weaker hypothesis is that consciousness greatly facilitates such integration. Without it, the task would be challenging, and for complex settings such as the socio-cultural domains of interest to sociologists, perhaps impossible. As noted by the philosophers Nicholas Shea and Chris Frith,

The role of consciousness in facilitating information integration can be seen in several paradigms in which local regularities are registered unconsciously, but global regularities are only detected when stimuli are consciously represented…consciousness makes representations available to a wider range of processing, and processing that occurs over conscious representations takes a potentially wider range of representations as input (2016, p. 4).

This account supports Boutyline and Soter’s insightful observation that it was a mistake to link the property of transposability to schemas, especially in the initial formulation by Bourdieu, where schemas were seen as part of the habitus (Vaisey, 2009) and thus as residing in the implicit mind, operating as automatic Type I cognition (Sewell was more ambiguous in this last respect). Work in cognitive psychology and the cognitive neuroscience of consciousness supports the idea that transposition requires information integration across domains. For complex domains, conscious representation and deliberate processing may be necessary for the initial stages of transposition (Shea & Frith, 2016). Of course, as Boutyline and Soter note, once institutional entrepreneurs have engaged in the first bout of transposition mediated by explicit representations, the new schema-domain linkage can be learned by others via proceduralization and enskilment, becoming part of implicit personal culture operating as Type I cognition.

Finally, a corollary of the preceding is that we may not want to follow Sewell in completely collapsing the general concept of agency into the more restricted idea of schematic transposition, as this would have the untoward consequence of reducing agency to conscious representations and Type II processing over them, precisely the thing that practice and habit theories were designed to prevent.

References

Arseniev-Koehler, A., & Foster, J. G. (2020). Machine learning as a model for cultural learning: Teaching an algorithm what it means to be fat. Preprint. https://doi.org/10.31235/osf.io/c9yj3

Bourdieu, P. (1990). The logic of practice (R. Nice, trans.). Stanford University Press. (Original work published 1980)

Boutyline, A., & Soter, L. K. (2021). Cultural Schemas: What They Are, How to Find Them, and What to Do Once You’ve Caught One. American Sociological Review, 86(4), 728–758.

Joas, H. (1996). The Creativity of Action. University of Chicago Press.

Karmiloff-Smith, A. (1995). Beyond Modularity: A Developmental Perspective on Cognitive Science. MIT Press.

Lizardo, O. (2004). The Cognitive Origins of Bourdieu’s Habitus. Journal for the Theory of Social Behavior, 34(4), 375–401.

Sewell, W. H., Jr. (1992). A Theory of Structure: Duality, Agency, and Transformation. The American Journal of Sociology, 98(1), 1–29.

Shea, N., & Frith, C. D. (2016). Dual-process theories and consciousness: the case for ‘Type Zero’ cognition. Neuroscience of Consciousness, 2016(1).

Tononi, G. (2008). Consciousness as integrated information: a provisional manifesto. The Biological Bulletin215(3), 216-242.

Vaisey, S. (2009). Motivation and Justification: A Dual-Process Model of Culture in Action. American Journal of Sociology, 114(6), 1675–1715.

A Sociology of “Thinking Dispositions”

In a recent interview about his life and career, the Nobel Prize-winning psychologist and economist Daniel Kahneman said two particularly interesting things. First, he said much of his current work is focused on individual differences in what he refers to as “System 1” and “System 2” thinking. He discussed his fascination with the Cognitive Reflection Test (CRT), which includes the famous “bat and ball problem”:

A bat and a ball cost $1.10 in total. The bat costs $1.00 more than the ball. How much does the ball cost? _____ cents.

What makes this a great question is that it has an intuitive (but wrong) answer that immediately comes to mind (10 cents) and a correct answer (5 cents) that requires you to override that initial intuition and think deliberately. Some people read this question and simply “go with their gut,” while others take time and think more carefully about it. Kahneman says that what makes this so interesting is that people who are certainly intelligent enough to obtain the correct answer (like students at Harvard) get it wrong all the time, and that performance on it predicts important things, including belief in conspiracy theories and receptivity to pseudo-profound “bullshit” (see Pennycook et al., 2015; Rizeq et al., 2020).
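For readers who want to see why 5 cents is correct, let $b$ be the price of the ball in dollars; the two stated conditions then reduce to a single linear equation:

```latex
\underbrace{b}_{\text{ball}} + \underbrace{(b + 1.00)}_{\text{bat}} = 1.10
\quad\Rightarrow\quad 2b = 0.10
\quad\Rightarrow\quad b = 0.05
```

So the ball costs 5 cents and the bat $1.05. The intuitive answer fails the check: a 10-cent ball would make the bat $1.10 and the total $1.20.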

As Shane Frederick (a post-doctoral student of Kahneman’s who developed the measure) proposed, the CRT measures “cognitive reflection”—“the ability or disposition to resist reporting the response that first comes to mind” (2005: 35). The CRT is one of several measures of what psychologists refer to as “thinking dispositions” or “cognitive styles”: general differences in the propensity to use Type 2 processing to regulate responses primed by Type 1 processing. People with more reflective or analytical thinking dispositions are more careful, thorough, and effortful thinkers, while those with more intuitive or experiential thinking dispositions are more likely to “go with their gut” and trust their initial responses (Cacioppo et al. 1996; Epstein et al. 1996; Pennycook et al. 2012; Stanovich 2009, 2011).

The second interesting thing Kahneman discussed was his omission of the work of the late psychologist Seymour Epstein. In the early 1970s, when Kahneman and Amos Tversky started publishing their work on heuristics and biases, Epstein was developing his “cognitive-experiential self theory”: a dual-process theory proposing that people process information through either a rational-analytical system or an intuitive-experiential system. Apparently, Epstein was upset that Kahneman had failed to recognize his work, even in his popular book Thinking, Fast and Slow (2011). Kahneman said that he regretted not engaging with Epstein’s ideas because they were directly relevant to his work on System 1 and System 2 thinking.

Individual Differences in Thinking Dispositions

What neither Kahneman nor the interviewer seemed to recognize is that Kahneman’s recent interest in individual differences in dual-process cognition and his omission of Epstein’s work are in some ways interrelated. Arguably, Kahneman is quite late to the “individual-differences” party. Psychologists have been using measures of thinking dispositions for many years; they have already been established as a workhorse for research in social and cognitive psychology and proven invaluable for explaining pressing issues, including the susceptibility to fake news, the acceptance of scientific evidence, and beliefs and behaviors around COVID-19 (Erceg et al., 2020; Fuhrer and Cova, 2020; Pennycook et al., 2020; Pennycook and Rand, 2019). However, if he had followed Epstein’s work more closely, he likely would have gotten to these individual differences much sooner in his career. Almost a decade before the validation of the CRT, Epstein and his colleagues (1996) developed the popular Rational-Experiential Inventory (REI), a self-report measure of differences in intuitive and analytical thinking.

If Kahneman is late to the party, sociologists do not even seem to know or care about it. Cultural sociologists have been engaging with dual-process models for years, and this scholarship has been highly generative (e.g., DiMaggio, 1997; Lizardo et al., 2016; Vaisey, 2009). However, this work is almost always accompanied by claims about how cognition operates in general. For example, in DiMaggio’s (1997) agenda-setting “Culture and Cognition,” he asserted that due to its inefficiency, deliberate cognition is “necessarily rare” (1997: 271). Similarly, Vaisey argued that “practical consciousness” is “usually in charge” (2009: 1683). Conversely, those who argue against these works draw on “social psychologically oriented models that assume greater reflexivity on the part of social actors” (Hitlin and Kirkpatrick-Johnson, 2015: 1434) or suggest that “findings from cognitive neuroscience suggest that this model places too much emphasis on the effects of subconscious systems on decision-making” (Vila-Henninger, 2015: 247). These claims presuppose a general, “one-size-fits-all” model of social actors and the workings of human cognition.

At some level, the lack of consideration for individual differences in sociological work on dual-process cognition is entirely understandable. The term “individual differences,” closely associated with psychological research on intelligence and personality, certainly sounds “non-sociological.” Accordingly, it is not likely to inspire much faith or curiosity from sociologists, similar to the way they might turn their nose up at psychological research about “choice” and “decision-making” (Vaisey and Valentino, 2018). However, these individual differences exist, and therefore sociological models of culture, cognition, and action may be missing something important by not accounting for this individual variability. Furthermore, there is good reason to think that these “individual” differences are actually socially patterned.   

Thinking Dispositions in Sociological Work

We can go back to the classics to find concepts that approximate thinking dispositions and propositions about how and why they are socially patterned. Georg Simmel argued that the psychological conditions of the metropolis (e.g., constant sensory stimulation, the money economy) produce citizens that (dispositionally and habitually) react “with [their] head instead of [their] heart” (2012[1905]: 25) – a more conscious, intellectual, rational, and calculating mode of thought. Relatedly, John Dewey (2002[1922], 1933) wrote about a “habit of reflection” or a “reflective disposition” born out of education and social customs. 

We can also find this line of thinking in more contemporary works. Pierre Bourdieu (2000) argued that the conditions of the skholè foster a “scholastic disposition” characterized by scholastic reasoning or hypothetical thinking. Annette Lareau’s (2011) account of “concerted cultivation” found that wealthier families aimed to stimulate and encourage their children’s rational thinking and deliberate information processing to develop their “cognitive skills.” Critical realists aiming to hybridize habitus and reflexivity have argued that certain conditions (e.g., late modernity, socialization that emphasizes contemplation) produce a habitus in which reflexivity itself becomes dispositional – a reflexive habitus (Adkins, 2003; Mouzelis, 2009; Sweetman, 2003). All of these accounts broadly suggest that people in different social locations are exposed to different types of social and cultural influences, which lead them to develop different thinking dispositions.

Socially Locating Thinking Dispositions

In a recent paper with Andrew Miles, I put these considerations to the empirical test by comprehensively establishing the social patterning of thinking dispositions (Brett and Miles, 2021). We quickly found that some psychologists had indeed tested this, particularly using Epstein et al.’s (1996) Rational–Experiential Inventory (REI). However, this research was limited in several respects: these studies measured differences (usually by age, education, and gender) with little theoretical explanation for why such differences should exist and little analytic justification for testing them. Furthermore, they typically used bivariate analyses and convenience samples, and, taken together, they offered conflicting findings on whether these variables actually matter. As such, we first performed a meta-analysis of 63 psychological studies that used the REI to measure differences in thinking dispositions by age, education, and gender, followed by an original analysis of nationally representative data. Overall, we found strong evidence that thinking dispositions vary by age, education, and gender, and weaker evidence that they vary by income, marital status, and religion.

While this covers some social patterns of thinking dispositions as an object of study, sociologists would do well to establish their causes and consequences. The thinkers above suggest a variety of mechanisms that may promote thinking dispositions, including specific child-rearing practices and forms of socialization, heightened sensory stimulation, and having the time and space for imaginative, contemplative, or experimental thought – all of which could be tested empirically. But perhaps more importantly, thinking dispositions likely hold significant consequences for culture, cognition, and action that ought to be explored. 

For example, in a recent paper with Vanina Leschziner (Leschziner and Brett, 2019), I used the notion of thinking dispositions to help explain patterns of culinary creativity. We found that chefs who were more invested in innovative styles of cooking tended to be more analytical in their approach, while chefs invested in more traditional styles took a more heuristic approach. Notably, this was not simply the result of exogenous pressures to create novel dishes; instead, these chefs developed an inclination and excitement for these modes of thought during the creative process that became dispositional over time. While culture and cognition scholars would typically ascribe these differences to the type of restaurant chefs worked in or the style of food they produced, doing so misses the distinct link between cognitive styles and culinary styles. As this illustrates, thinking dispositions may hold important but (as of now) largely untapped explanatory value for sociologists.

References

Adkins, Lisa. 2003. “Reflexivity: Freedom or Habit of Gender?” Theory, Culture & Society 20(6):21-42.

Bourdieu, Pierre. 2000. Pascalian Meditations. Stanford, CA: Stanford University Press.

Brett, Gordon, and Andrew Miles. 2021. “Who Thinks How? Social Patterns in Reliance on Automatic and Deliberate Cognition.” Sociological Science 8: 96-118.

Cacioppo, John T., Richard E. Petty, Jeffrey A. Feinstein, and W. Blair G. Jarvis. 1996. “Dispositional Differences in Cognitive Motivation: The Life and Times of Individuals Varying in Need for Cognition.” Psychological Bulletin 119(2):197–253.

Dewey, John. 1933. How We Think: A Restatement of the Relation of Reflective Thinking to the Educative Process. New York: D.C. Heath.

Dewey, John. [1922] 2002. Human Nature and Conduct. Amherst, NY: Prometheus Books.

DiMaggio, Paul. 1997. “Culture and Cognition.” Annual Review of Sociology 23(1):263–87.

Epstein, Seymour, Rosemary Pacini, Veronika Denes-Raj, and Harriet Heier. 1996. “Individual Differences in Intuitive–Experiential and Analytical–Rational Thinking Styles.” Journal of Personality and Social Psychology 71(2):390–405.

Erceg, Nikola, Mitja Ružojčić, and Zvonimir Galić. 2020. “Misbehaving in the Corona Crisis: The Role of Anxiety and Unfounded Beliefs.” Current Psychology: 1-10.

Fuhrer, Joffrey, and Florian Cova. 2020. “‘Quick and Dirty’: Intuitive Cognitive Style Predicts Trust in Didier Raoult and his Hydroxychloroquine-based Treatment Against COVID-19.” Judgment and Decision Making 15(6):889–908.

Hitlin, Steven, and Monica Kirkpatrick Johnson. 2015. “Reconceptualizing Agency Within the Life Course: The Power of Looking Ahead.” American Journal of Sociology 120(5):1429-1472.

Kahneman, Daniel. 2011. Thinking, Fast and Slow. New York: Penguin.

Lareau, Annette. 2011. Unequal Childhoods: Class, Race, and Family Life. Berkeley: University of California Press.

Leschziner, Vanina, and Gordon Brett. 2019. “Beyond Two Minds: Cognitive, Embodied, and Evaluative Processes in Creativity.” Social Psychology Quarterly 82(4):340-366.

Lizardo, Omar, Robert Mowry, Brandon Sepulvado, Dustin S. Stoltz, Marshall A. Taylor, Justin Van Ness, and Michael Wood. 2016. “What Are Dual Process Models? Implications for Cultural Analysis in Sociology.” Sociological Theory 34(4):287–310.

Mouzelis, Nicos. 2009. “Habitus and Reflexivity.” In Modern and Postmodern Social Theorizing: Bridging the Divide. Cambridge, UK: Cambridge University Press.

Pennycook, Gordon, James Allan Cheyne, Nathaniel Barr, Derek J. Koehler, and Jonathan A. Fugelsang. 2015. “On the Reception and Detection of Pseudo-Profound Bullshit.” Judgment and Decision Making 10(6):549-563.

Pennycook, Gordon, James Allan Cheyne, Derek Koehler, and Jonathan Albert Fugelsang. 2020. “On the Belief that Beliefs Should Change According to Evidence: Implications for Conspiratorial, Moral, Paranormal, Political, Religious, and Science Beliefs.” Judgment and Decision Making 15 (4):476–498.

Pennycook, Gordon, James Allan Cheyne, Paul Seli, Derek J. Koehler, and Jonathan A. Fugelsang. 2012. “Analytic Cognitive Style Predicts Religious and Paranormal Belief.” Cognition 123(3):335–46.

Pennycook, Gordon, and David G. Rand. 2019. “Lazy, not Biased: Susceptibility to Partisan Fake News is Better Explained by Lack of Reasoning than by Motivated Reasoning.” Cognition 188:39-50.

Rizeq, Jala, David B. Flora, and Maggie E. Toplak. 2020. “An Examination of the Underlying Dimensional Structure of Three Domains of Contaminated Mindware: Paranormal Beliefs, Conspiracy Beliefs, and Anti-Science Attitudes.” Thinking & Reasoning 27(2):187-211.

Simmel, Georg. 1964 [1902]. “The Metropolis and Mental Life.” Pp. 409–24 in The Sociology of Georg Simmel, edited and translated by K. H. Wolff. New York: Free Press.

Stanovich, Keith E. 2009. What Intelligence Tests Miss: The Psychology of Rational Thought. New Haven, CT: Yale University Press.

Stanovich, Keith E. 2011. Rationality and the Reflective Mind. Oxford: Oxford University Press.

Sweetman, Paul. 2003. “Twenty-First Century Dis-Ease? Habitual Reflexivity or the Reflexive Habitus.” Sociological Review 51:528–49.

Vaisey, Stephen. 2009. “Motivation and Justification: A Dual-Process Model of Culture in Action.” American Journal of Sociology 114(6):1675–715.

Vaisey, Stephen, and Lauren Valentino. 2018. “Culture and Choice: Toward Integrating Cultural Sociology with the Judgment and Decision-Making Sciences.” Poetics 68: 131-143.

Vila‐Henninger, Luis Antonio. 2015. “Toward Defining the Causal Role of Consciousness: Using Models of Memory and Moral Judgment from Cognitive Neuroscience to Expand the Sociological Dual‐Process Model.” Journal for the Theory of Social Behaviour 45(2): 238-260.

On Looping Effects

In this post, I sketch out some preliminary ideas for introducing repetition into theories of social formation and for situating cognition at their base. The major principle for this endeavor is what I (unoriginally) propose to call loops. More originally, I argue that loops take at least three different forms and that not all looping effects are created equal. Loops are equal, however, in putting the onus on repetition, rather than generality, as the source of scaled orders and formations (e.g., “structures,” “enclosures,” “molds,” “modulations”).

A loop, quite simply, is a generative process with one necessary condition: a non-identity between two parts that nonetheless repeat in their connection. A repeating loop makes a scaled order particular rather than general, because it is fundamentally a sequence. Loops might be broadly instantiated, but they cannot find equivalences everywhere. As repetitions, they need the same ingredients. As connections and cycles, they remain distinctive rather than ordinary.

Douglas (1986: 100-02) suggests that a looping effect need not directly involve a human agent at all; such effects can be mostly natural. Knowledge of microbes leads to medications; when those medications are applied to microbes, the microbes adapt. This describes a clear feedback relation, but it is not a loop: microbes change because of knowledge about them, but the change needs no repetition. It is adaptive instead.

The thing about a loop is that it must repeat (again and again). A loop must generate its own momentum. This link between repetition and loops has been neglected to date. Such neglect means that looping effects are discussed separately from the prevalence of scaled orders and formations (for instance, those that are “disciplinary” versus those that are “control”), leaving both terms of fuzzy meaning and provenance, used interchangeably. When we concentrate on looping effects as repetitions, it becomes clear that orders and formations of scale (i.e., not presumed to be general; unable to find equivalences everywhere) rest largely on an edifice of cognition that, in practice, does not need to remain implicit in order to be effective.


Hacking loops (enclosures, molds)

For Hacking (1995, 2006), loops come from the processes of classification and categorization that feed a dynamic nominalism. Classifications are made by people about people, as an index of the traits and properties displayed by the latter. In a Hacking loop, classifications made by people about people loop back into their target and alter it. The classifiers “create certain kinds of people that in a certain sense did not exist before.” The onus rests on the name given to these traits, which collects them together. Hacking loops therefore represent a form of nominalism: they need not become entangled with real kinds or make any difference to them. What matters more is the name, its legitimation by expertise, its elaboration by institutions, its officialization by bureaucracy, all of which reinforce its external, public, and legitimate presence.

As Hacking puts it: 

In 1955 “multiple personalities” was not a way to be a person, people did not experience themselves in this way, they did not interact with their friends, their families, their employers, their counsellors, in this way; but in 1985 this was a way to be a person, to experience oneself, to live in society (2006).

The intervening 30 years saw the formulation of this name, the proliferation of knowledge, and the accumulation of references under its heading, all giving it a stable external presence by indexing various evident things (e.g., “this is that”) in a distinguishable way. While the traits of multiple personality (or manic depression, anxiety disorder, etc.) might have preceded the name, this is not Hacking’s point: as a “way to be a person,” multiple personality needed a name that could index these traits and thereafter be a way of indexing oneself (e.g., “that is me”). Such a sequence (this is that → that is me) becomes fully reversible (that is me → this is that). A name, increasingly standardized, rationalized, and externalized (e.g., “discourse”), makes up and stabilizes people by feeding back into their identification (this is that ←→ that is me).

To be a certain type of person, to live in society as that person, to be interacted with as that person, and, most importantly, to experience oneself as that person occurs through a Hacking loop. Importantly, all of these effects require only an external process rather than a change of “inner life.” A name is “deep” in a contingent sense: the loop rests upon the identification of those who are indexed. It makes no substantive difference for the traits that are indexed that they now receive a name. In fact, Hacking loops seem to require this indifference, given the proliferation of names and of entire professions and forms of expertise dedicated to classification. It does, however, make a difference for those who are classified and named. Hacking loops create an enclosure or mold, from which there might be no escape so long as the name is externally maintained.


Mutually sustaining relations (structures)

A different loop is proposed by Sewell in what we might call his principle of mutually sustaining relations. The terminology here concentrates on a “mutually sustaining” loop between two different kinds of things (schemas and resources), as captured in the following influential formula, (famously) applied to a theory of structure:

Structures … are constituted by mutually sustaining cultural schemas and sets of resources that empower and constrain social action and tend to be reproduced by that action. Agents are empowered by structures, both by the knowledge of cultural schemas that enables them to mobilize resources and by the access to resources that enables them to enact schemas (27).

As Sewell implies, schemas and resources mutually sustain each other through a repeating connection. A schema remains an effect of resources, just as resources are the effect of schemas.

When the priest transforms the host and wine into the body and blood of Christ and administers the host to communicants, the communicants are suffused by a sense of spiritual well-being. Communion therefore demonstrates to the communicants the reality and power of the rule of apostolic succession that made the priest a priest. In short, if resources are instantiations or embodiments of schemas, they therefore inculcate and justify the schemas as well (13).

Unlike a Hacking loop, Sewell’s “mutually sustaining” loop does suggest a deep effect, as the very constitution of a set of properties as “resources” is schema-dependent, just as the constitution of mental categories as “schemas” is resource-dependent. A resource is equivalent to the traits that a Hacking loop collects under the heading of a name, but a schema does not “name” them. Sewell instead characterizes the mutually sustaining link as “reading” or “interpreting.” A resource needs to be read as a resource in order to be a resource. A schema does the reading. A schema, presumably, is not a schema if it does not read or interpret resources. The loop can be initiated from either end: resource accumulation to a schema (resource → schema) or schema accumulation to a resource (schema → resource). A loop becomes difficult to sustain in cases that allow for too much agency (e.g., transposition of schemas), which prevents an unambiguous rendering of resources.

In cases where there are limited schemas for “reading” and “interpreting” a resource, and this is in turn “sustained” by limited resources for other possible schemas, a “structure” will result. A structure is distinguishable from a “mold” or “enclosure” in a Hacking loop. Structure, by contrast, suggests not only a potential source of resistance but also the limits of meaning. This entails the “depth” of structure as opposed to the externality of a mold. Structure refers to inner life, on the shaping and altering of which it substantially depends. The surface-level chaos of capitalism, for instance, only signals the depth of a schema ←→ resource loop: the schematic and repeating transformation of use- to exchange-value is a necessary condition for “resource” in this context; resources, meanwhile, accumulate to “schemas” that involve a use-to-exchange transformation.

We should expect structures to change through disruption to an established loop, via the interchangeability and replacement of both parts of structural loops (schemas and resources). This creates demands on inner life through transpositions that likely appear “impractical” in their interpretations and reading of things. The chance of resource accumulation keeps the possibility of structural change open.


Expectations-chances loops (modulations)

Hacking loops and Sewell’s mutually sustaining loops are both well enough known by this point as to render the above discussion boring by comparison. To finish this post, I want to make two proposals: first, that Hacking’s “molds” and “castings” and Sewell’s “structures” are both loops found within a disciplinary order. This suggests a relative limit on their generality, though equally they remain contingent on repetition (as loops). Second, I want to understand a disciplinary order as distinct from a control order based on a different loop, one that engages cognition differently than naming, reading, or interpreting (Deleuze 1992). This is an expectations-chances loop that works according to (objective) prediction and (subjective) guessing (see Bourdieu 1973: 64).

In one version of this loop, the tale is told indicatively as follows:

Acrimonious debates about the calculative abilities of individuals and the limits of human rationality have given way to an empirical matter-of-factness about measuring action in real life, and indeed in real time. The computers won, but not because we were able to build abstract models and complex simulations of human reasoning. They bypassed the problem of the agent’s inner life altogether. The new machines do not need to be able to think; they just need to be able to learn. Correspondingly, ideas about action have changed (Fourcade and Healy 2017: 24). 

Hence, a proposal for non-intentional action becomes applicable to data-gathering mechanisms, but the “index” is different in this scenario, as it no longer includes “inner life.” “Culture” is an association rather than an internalized pattern generator. It does not have effects, but rather stands for a history of traces:

When people are presumptively rational, behavioral failure comes primarily from the lack of sufficient information, from noise, poor signaling or limited information-processing abilities. But when information is plentiful, and the focus is on behavior, all that is left are concrete, practical actions, often recast as good or bad ‘choices’ by the agentic perspective dominant in common sense and economic discourse. The vast amounts of concrete data about actual ‘decisions’ people make offer many possibilities of judgment, especially when the end product is an individual score or rating. Outcomes are thus likely to be experienced as morally deserved positions, based on one’s prior good actions and good taste.

A theory of action remains, then, even in the absence of inner life, because data simply is action. Data can modulate action through a “herding” or directing effect, creating futures based on past performance and subsequent encoding. Since there is no inner life, classification is based on information collected at junctures that create possible futures. The causes of action are not of interest (only that action happens), though there are consequences to action. This can exercise a disciplinary effect through anticipation, as facilitated by the rationalization of trials. Since there is no ideal model (no name gathering characteristics a priori), however, this is not integral to control. There is only the fact that one must have been through certain trials and then out of them.

Predictions made through data protocols interface with predictions made in action. Trials introduce uncertainties that meet with anticipations; a certain future is achievable when possibilities are presented algorithmically and displace an otherwise “wild” cognition. Control becomes an algorithmic modulation of future possibilities rather than a generative modulation of guesses.

The systematic production of “good matches” is based on controls exercised on the means of prediction from both ends: the expropriation of the means of prediction and the controlled distribution of what they predict. This keeps the loop closed between the (objective) provision of possibilities and (subjective) anticipations or guesses, making “this matching feel all the more natural because it comes from within—from cues about ourselves that we volunteered, or erratically left behind, or that were extracted from us in various parts of the digital infrastructure” (Fourcade and Healy 2017: 17).

Modulation takes place through cognitive loops, constructing a “self-deforming cast that will continually change from one moment to the other, or a sieve whose mesh will transmute from point to point” (Deleuze 1992: 4). Conventionally, the connection between “schema” and power is content-laden and substantive: it provides a way to “read” resources (Sewell 1992: 13). An expectations-chances loop finds no equivalent to “reading” (or interpreting or naming); the key process is guessing instead. A non-individual recorder or record-keeper (qua technology) can guess even if it cannot read, and it can adapt and improve its guesses. Here looping is incompatible with “molding” or “casting”; “structure” is static by comparison. After all, you can know when you leave the “cast” and its standard no longer applies.

The theory of power embedded in a schema-resources loop puts the onus on schemas that “read” resources; this is where we find agency. In a disciplinary context, an ideal or standard (a telos) is enforced and sought after. In control contexts, such a standard goes missing. Trials are not examinations. A model is volunteered rather than enforced. An individual is a record, though there is no record-keeping individual (“examiner” or “recorder”). Rather than being incorporated into a structure (through schemas), agents are made precise as a code or classification. They do not exercise effects (structural or otherwise) but are given possible futures. They are not shoehorned into the fixed parameters of a schema. They bootstrap themselves into sequences that look increasingly like their own good matches. 


Conclusion 

We should therefore expect the genesis and transposition of expectations just as we do those of schemas or names, in looping connection with chances, as a way of inviting chance in or taming it. But there is a catch. The consequence of a “controlled” expectations-chances loop can be similar to the amnesiac returning to memory after several long years: “My God! What did I do in all those years?” (Bourdieu [1995] quoting Deleuze 1993). Consider, along similar lines, a “coming to” after diving down an algorithmically modulated rabbit hole. The explanation must be cognitive because this occurs through repeating loops. Disciplinary formations can achieve (reflexive) “consciousness” and nothing will change; the same is not true for control formations.


References

Bourdieu, Pierre. (1973). “Three forms of theoretical knowledge.” Social Science Information 12: 53-80.

Bourdieu, Pierre. (1995). The State Nobility. Stanford University Press.

Deleuze, Gilles. (1992). “Postscript on societies of control.” October 59: 3-7.

Deleuze, Gilles. (1993). The Fold: Leibniz and the Baroque. University of Minnesota Press.

Douglas, Mary. (1986). How Institutions Think. Syracuse University Press.

Fourcade, Marion and Kieran Healy. (2017). “Seeing like a market.” Socio-Economic Review 15: 9-29.

Hacking, Ian. (1995). “The looping effects of human kinds.” Pp. 351-394 in Causal Cognition: A Multidisciplinary Debate, edited by D. Sperber, D. Premack, and A. J. Premack. Oxford: Clarendon Press.

Hacking, Ian. (2006). “Making up people.” London Review of Books 28.

Sewell, William. (1992). “A theory of structure: duality, agency and transformation.” American Journal of Sociology 98: 1-29.

Habit as Prediction

In a previous post, Mike Strand points to the significant rise of the “predictive turn” in the sciences of action and cognition under the banner of “predictive processing” (Clark, 2015; Wiese & Metzinger, 2017). This turn is consequential, according to Mike, because it turns prediction from something that analysts, forecasters, and (increasingly) automated algorithms do into something that everyone does as a result of routine activity and everyday coping with worldly affairs. According to Mike:

To put it simply, predictive processing makes prediction the primary function of the brain. The brain evolved to allow for the optimal form of engagement with a contingent and probabilistic environment that is never in a steady state. Given that our grey matter is locked away inside a thick layer of protective bone (i.e., the skull), it has no direct way of perceiving or “understanding” what is coming at it from the outside world. What it does have are the senses, which themselves evolved to gather information about that environment. Predictive processing says, in essence, that the brain can have “knowledge” of its environment by building the equivalent of a model and using it to constantly generate predictions about what the incoming sensory information could be. This works in a continuous way, both at the level of the neuron and synapse, and at the level of the whole organism. The brain does not “represent” what it is dealing with, then, but it uses associations, co-occurrences, tendencies and rhythms to predict what it is dealing with.

In this post, I would like to continue the conversation on the central role of prediction in the explanation of action and cognition that Mike started by linking it to some previous discussions on the nature and role of habit in action and in the explanation of action (see here, here, and here). The essential point I wish to make is that there is a close link between habit and prediction. This claim may sound counterintuitive at first. The reason is that the primary way that habit and practice have been incorporated into contemporary action theory is by making habit, in its “repetitive” or “iterative” aspect, a phase or facet of action that looks mainly backward to the past (e.g., Emirbayer & Mische, 1998). Because prediction is necessarily future-oriented, most analysts think of it as also necessarily non-habitual and thus point to other, non-habit-like processes, such as Schutzian “projection,” which imply a break with habitual iteration. These analysts presume that there is a natural antithesis between habit and iteration (which at best may bring the past into the present) and the anticipation of forthcoming futures.

Rethinking Habit for Prediction

The idea that habit is antithetical to prediction makes sense, as far as it goes, but only because it hews closely to a conception of habit that accentuates the “iterative” or repetitive side. But there are more encompassing conceptions of the role of habit in action that emphasize not only an iterative side to habit but also an adaptive, even “anticipatory,” side. Here I focus on one such intellectual legacy of thinking about habit, which remains mostly unknown in contemporary action theory in sociology. It was developed by a cadre of thinkers, mainly in France, beginning in the early nineteenth century and extending into the early twentieth century. This approach to the notion of habit characteristically combined elements of Aristotelian, Roman-Stoic, scholastic, British-empiricist, Scottish-commonsense, French-rationalist, and German-idealist philosophy with then-novel developments in neurophysiology such as the work of Xavier Bichat. Its two leading exponents were Pierre Maine de Biran (1970) and the largely neglected (but see Carlisle (2010) and Sinclair (2019)) Félix Ravaisson (2008). These thinkers exercised a broad influence on the way habit was conceptualized in the French tradition, extending into the work of the philosophers Albert Lemoine, Henri Bergson, and, most notably, Maurice Merleau-Ponty (Sinclair, 2018).

The Double Law of Habit

The primary contribution of these two thinkers, especially Ravaisson, was the development of the double law of habit. This was the proposal that habit (conceptualized as behavioral or environmental repetition) has “contradictory” effects on the “passive” (sensory, feeling) and the active (skill, action) faculties: “sensation, continued or repeated, fades, is gradually obscured and ends by disappearing without leaving a trace. Repeated movement [on the other hand] gradually becomes more precise, more prompt, and easier” (de Biran, 1970, p. 219).

In other words, repetition in the realm of perception leads to “habituation,” meaning that experience becomes less capable of capturing attention. We become inured to the sensory flow; or, in the case of experiences that generate feelings (e.g., of pleasure, disgust, and so forth), the feelings “fade” in intensity (think of the difference between a first-year medical student and an experienced surgeon in the presence of a corpse). This argument was deployed by Simmel to explain the “deadening” effect of urbanism on sensory discrimination and emotional reaction, generative of what he called the “blasé attitude” in his classic essay on the “Metropolis and the Life of the Spirit.”

When it comes to action, on the other hand, habituation via repetition leads to the opposite of passivity; namely, facilitation of the activity (becoming faster, more precise, more self-assured) and the creation of an automatic disposition (e.g., triggered in partial or complete independence from a feeling of “willing” the action) equipped with its own inertia and bound to continue to its consummation unless interrupted. Habituated action “becomes more of a tendency, an inclination” (Ravaisson 2008: 51). This is the double face (or “law”) of habit.

Prediction as Attenuation

Trying to puzzle out these apparently contradictory effects of habituation led to a lot of head-scratching (and creative theorizing) on the part of de Biran and Ravaisson and of subsequent epigones like Bergson, Heidegger, Merleau-Ponty, and Ricoeur. Nevertheless, a solution to the “double-law” puzzle emerges when the predictive dimension of both perception and action is brought to the fore. The case of “perceptual attenuation,” for instance, provides the mechanism for the “fading” of the vibrancy of experience: it occurs whenever we become proficient at canceling out the error produced by those experiences via top-down predictions (Hohwy, 2013). Here the “top” consists of hierarchical generative models instantiated across different layers of the cortex, and the “bottom” is incoming sensory stimulation from the world (where the job of the model is to infer the hidden causes of such stimulation).

That is, as experience is repeated and the distributed, hierarchical generative models tune their parameters to effectively figure out what’s coming before it comes, we begin to preemptively cancel out prediction error. Cancelation of prediction error leads to subsequent perceptual attenuation, such that incoming sensory information no longer commands (or requires) attention. The result is that attention is freed to concentrate on other more pressing things (e.g., the parts of the experience that are still producing precise error and thus demand it). In this respect, sensory and feeling attenuation is the price we pay for becoming good at predicting what the world offers. Prediction is at the basis of “passive” habituation (the first face of the double law).
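The logic of perceptual attenuation can be caricatured numerically. The following is only my own toy sketch (the function name, learning rate, and stimulus values are all hypothetical, and a single scalar stands in for an entire hierarchical generative model): a prediction is repeatedly corrected toward a recurring stimulus, and the residual prediction error, the signal that would otherwise command attention, shrinks with each repetition.

```python
# Toy sketch (hypothetical values): a one-parameter "generative model"
# learns to predict a repeated sensory input. As the prediction
# improves, the residual prediction error (the attention-grabbing
# signal) attenuates, mimicking perceptual habituation.

def habituate(stimulus, trials, lr=0.2):
    prediction = 0.0                    # the model's initial expectation
    errors = []
    for _ in range(trials):
        error = stimulus - prediction   # bottom-up prediction error
        errors.append(abs(error))
        prediction += lr * error        # top-down model update
    return errors

errors = habituate(stimulus=1.0, trials=10)
# The error decays geometrically across repetitions: the "fading"
# of sensation described by the first face of the double law.
```

The point of the caricature is only that nothing about the stimulus changes; what changes is how well it is predicted, and attention-worthy error is what remains after prediction.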

Prediction as Facilitation

But what about the facilitation side? Here prediction, in the form of what is known as active inference, is also at play. However, this time, instead of prediction in the service of canceling out error from exteroceptive signals, the acquisition of skill turns on our capacity to cancel out prediction error emanating from our action in the world, for instance, via proprioceptive signals that track the sensory consequences of our activity. Repeated activity leads us to form increasingly accurate generative models of our action (the dynamic motor trajectory of our bodies and their effectors) in a particular environment. This means that we can anticipate what we are going to do before we do it, leading to the loss (via the mechanism described above) of the feeling of “effort” or even “willing” at the point of action initiation (Wegner, 2002), which is a phenomenological signature of habitual activity.

This is consistent with the idea that Parsonian “effort,” rather than being the sine qua non of truly “free” action partially unmoored from its “conditions” (as the Kantian legacy led Parsons to implicitly assume), actually points to poorly performed (because badly predicted) action; in other words, action driven by generative models that are not very good at anticipating our next move. Such action is at war with the environment not because it is “independent” of it, but because, due to a lack of habituation and attunement to its objective structure of probabilities, it is disconnected from its offerings (Silver, 2011).

The connection between habit and prediction thus becomes clear. On the one hand, repetition results in the attenuation of sensory input. While this was usually referred to as the “passive” side of the double law, we can now see, drawing on recent work on predictive processing, that this passivity is only apparent. At the subpersonal level, attenuation happens via the successful operation of well-honed generative models of the environmental causes of the input, working continuously to cancel out those incoming signals that they successfully predict. These models are one set of “habitual tracks” laid down by our exposure to consistent patterns in experience.

On the “active” side, which is more clearly recognized as “habit,” proficiency in action execution also comes via prediction, but this time, instead of predicting the distal structure of the world, we predict the world that we ourselves “self-fulfill” as we act. Moving in the world feels like something to us (proprioception), and as we repeat activities, we become proficient in predicting the very sensory stimulation that we generate via our actions. The two sides of the double law, which show up in contemporary predictive cognitive science as the difference between “perceptual” and “active” inference (Pezzulo et al., 2015; Wiese & Metzinger, 2017), are thus built on the predictive capacities of habits. Ravaisson anticipated this when he observed the emergence of

[a] sort of obscure activity that increasingly anticipates both the impression of external objects in sensibility and the will in activity. In activity this reproduces the action itself; in sensibility it does not reproduce the sensation, the passion…but calls for it, invokes it; in a certain sense it implores the sensation (Ravaisson 2008: 51).

Habit is thus the confluence of what has been called perceptual inference (predicting incoming signals by tuning a generative model of their causes) and active inference (self-fulfilling incoming signals via action so that they conform to the model that already exists). In other words, prediction, insofar as it facilitates our engaged coping with the world, is the nature of habit. More accurately, to the extent that we can predict the world, we do so via habit.

References

Carlisle, C. (2010). Between Freedom and Necessity: Félix Ravaisson on Habit and the Moral Life. Inquiry, 53(2), 123–145.

Clark, A. (2015). Surfing Uncertainty: Prediction, Action, and the Embodied Mind. Oxford University Press.

de Biran, P. M. (1970). The Influence of Habit on the Faculty of Thinking. Greenwood.

Emirbayer, M., & Mische, A. (1998). What is agency? The American Journal of Sociology, 103(4), 962–1023.

Hohwy, J. (2013). The Predictive Mind. Oxford University Press.

Pezzulo, G., Rigoli, F., & Friston, K. (2015). Active Inference, homeostatic regulation and adaptive behavioural control. Progress in Neurobiology, 134, 17–35.

Ravaisson, F. (2008). Of Habit. Bloomsbury Publishing.

Silver, D. (2011). The moodiness of action. Sociological Theory, 29(3), 199–222.

Sinclair, M. (2018). Habit and time in nineteenth-century French philosophy: Albert Lemoine between Bergson and Ravaisson. British Journal for the History of Philosophy, 26(1), 131–153.

Sinclair, M. (2019). Being Inclined: Félix Ravaisson’s Philosophy of Habit. Oxford University Press.

Wegner, D. M. (2002). The Illusion of Conscious Will. MIT Press.

Wiese, W., & Metzinger, T. (2017). Vanilla PP for Philosophers: A Primer on Predictive Processing. In T. Metzinger & W. Wiese (Eds.), Philosophy and Predictive Processing. Frankfurt am Main: MIND Group.

Explaining social phenomena by multilevel mechanisms

Four questions about multilevel mechanisms

In our previous post, we discussed mechanistic philosophy of science and its contribution to the cognitive social sciences. In this blog post, we will discuss three case studies of research programs at the interface of the cognitive sciences and the social sciences. In each case, we apply mechanistic philosophy of science to make sense of the epistemological, ontological, and methodological aspects of the cognitive social sciences. Our case studies deal with the phenomena of social coordination, transactive memory, and ethnicity.

In our work, we have drawn on Stuart Glennan’s minimal account of mechanisms, according to which a mechanism for a phenomenon “consists of entities (or parts) whose activities and interactions are organized so as to be responsible for the phenomenon” (Glennan 2017: 17). We understand entities and activities liberally so as to accommodate the highly diverse sets of entities that are studied in the cognitive social sciences, from physically grounded mental representations to material artifacts and entire social systems. In our article, we make use of the following four questions drawn from William Bechtel’s (2009) work to assess the adequacy and comprehensiveness of mechanistic explanations:

  1. What is the phenomenon to be explained (‘looking at’)?
  2. What are the relevant entities and their activities (‘looking down’)?
  3. What are the organization and interactions of these entities and activities through which they contribute to the phenomenon (‘looking around’)?
  4. What is the environment in which the mechanism is situated, and how does it affect its functioning (‘looking up’)?

The visual metaphors of looking at the phenomenon to be explained, looking down at the entities and activities that underlie the phenomenon, looking around at the ways in which these entities and activities are organized, and looking up at the environment in which the mechanism operates, are intended to emphasize that mechanistic explanations are not strongly reductive or “bottom-up” explanations. Rather, multilevel mechanistic explanations can bring together more “bottom-up” perspectives from the cognitive sciences with more “top-down” perspectives from the social sciences in order to provide integrated explanations of complex social phenomena. In the following, we will illustrate how we have used mechanistic philosophy of science in our case studies and what we have learned from them.

Social Coordination

Interpersonal social coordination has been studied during recent decades in many different scientific disciplines, from developmental psychology (e.g., Carpenter & Svetlova 2016) to evolutionary anthropology (e.g., Tomasello et al. 2005) and cognitive science (e.g., Knoblich et al. 2011). However, despite their shared interests, there has so far been relatively limited interaction between different disciplinary research programs studying social coordination. In this case study, we argued that mechanistic philosophy of science can ground a feasible division of labor between researchers in different scientific disciplines studying social coordination.

In evolutionary anthropology and developmental psychology, one of the most important ideas that has gained considerable empirical support during recent decades is that humans and their nearest primate relatives differ fundamentally in their dispositions to social coordination and cooperation: for example, chimpanzees rarely act together instrumentally in natural settings, and they are not motivated to engage in the types of social games and joint attention that human infants already find intrinsically rewarding at an early age (Warneken et al. 2006). Importantly, this does not seem to be due to a deficit in general intelligence, since chimpanzees score as well as young human infants on tests of quantitative, spatial, and causal cognition (Herrmann et al. 2007). According to the shared intentionality hypothesis of evolutionary anthropologist Michael Tomasello, this is because “human beings, and only human beings, are biologically adapted for participating in collaborative activities involving shared goals and socially coordinated action plans (joint intentions)” (Tomasello et al. 2005).

Given a basic capacity to engage in social coordination, one can raise the question of what types of cognitive mechanisms enable individuals to share mental states and act together with other individuals. To answer this question, we made use of the distinction between emergent and planned forms of coordination put forth by cognitive scientist Günther Knoblich and his collaborators. According to Knoblich et al. (2011: 62), in emergent coordination, “coordinated behavior occurs due to perception-action couplings that make multiple individuals act in similar ways… independent of any joint plans or common knowledge”. In planned coordination, “agents’ behavior is driven by representations that specify the desired outcomes of joint action and the agent’s own part in achieving these outcomes.” Knoblich et al. (2011) discuss four different mechanisms for emergent coordination: entrainment, common object affordances, action simulation, and perception-action matching. While emergent coordination is explained primarily by sub-intentional mechanisms of action control (which space does not allow us to discuss in more detail here), planned coordination is explained by reference to explicit mental representations of a common goal, the other individuals in joint action, and/or the division of tasks between the participants.
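
Of the emergent mechanisms listed above, entrainment is the one with a standard dynamical sketch: two rhythmic behaviors with different natural tempos fall into step once they are weakly coupled. The toy simulation below illustrates this with two coupled phase oscillators; the frequencies, coupling strength, and Kuramoto-style update are illustrative assumptions of ours, not a model drawn from Knoblich et al. (2011):

```python
import math

# Toy entrainment: two phase oscillators (e.g., two people rocking in
# chairs at slightly different natural tempos) become coordinated once
# a weak perceptual coupling between them is switched on.

def entrain(w1: float, w2: float, coupling: float, steps: int, dt: float = 0.01):
    """Integrate two coupled oscillators; return the final phase difference."""
    p1, p2 = 0.0, math.pi / 2  # start out of phase
    for _ in range(steps):
        p1 += dt * (w1 + coupling * math.sin(p2 - p1))  # each adjusts toward
        p2 += dt * (w2 + coupling * math.sin(p1 - p2))  # the other's phase
    return p2 - p1

drift_uncoupled = entrain(1.0, 1.2, coupling=0.0, steps=5000)
drift_coupled = entrain(1.0, 1.2, coupling=1.0, steps=5000)
# Without coupling the phases drift apart; with coupling they lock
# into a small, stable phase difference.
assert abs(drift_coupled) < abs(drift_uncoupled)
```

The point of the sketch is that no joint plan or common knowledge enters the update rule, which is exactly what makes entrainment an *emergent* rather than a *planned* coordination mechanism.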

In our article, we argued that cognitive scientists and social scientists answer different questions (see above) about mechanisms that bring about and sustain social coordination in different environments and over time. Thus they are in a position to make mutually interlocking yet irreducible contributions to a unified mechanistic theory of social coordination, although they may also sometimes reach results that challenge assumptions that are deeply ingrained in the other group of disciplines. For a more detailed discussion of how cognitive and social scientists can collaborate in explaining social coordination, we refer the reader to our article (Sarkia et al. 2020: 8-11).

Transactive Memory

Our second case study concerned the phenomenon of transactive memory, which has been studied in the fields of cognitive, organizational, and social psychology as well as in communication studies, information science, and management. The social psychologist Daniel Wegner and his colleagues (Wegner et al. 1985: 256) define transactive memory in terms of the following two components:

  1. An organized store of knowledge that is contained entirely in the individual memory systems of the group members, and
  2. A set of knowledge-relevant transactive processes that occur among group members.

They attribute transactive memory systems to organized groups insofar as these groups perform roles in group-level information processing that are functionally equivalent to those that individual memory mechanisms perform in individual cognition, i.e., (transactive) encoding, (transactive) storing, and (transactive) retrieving of information. For example, Wegner et al. (1985) found that close romantic couples responded to factual and opinion questions by using integrative strategies, such as interactive cueing in memory retrieval. Subsequent research on transactive memory systems has addressed small interaction groups, work teams, and organizations in addition to intimate couples (e.g., Ren & Argote 2011; Peltokorpi 2008). What is crucial for the development of a transactive memory system is that the group members have at least partially different domains of expertise and that the group members have learned about each other’s domains of expertise. If these two conditions are met, each group member can utilize the other group members’ domain-specific information in group-related cognitive tasks and transcend the limitations of their own internal memories.
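
The two conditions just named, differentiated expertise plus mutual knowledge of who knows what, can be sketched as a simple directory-lookup structure. This is a hypothetical illustration of ours, not an implementation from the transactive memory literature; the member names and domains are invented:

```python
# Toy transactive memory system: each member keeps an individual store
# for their own domain, while a shared "directory" records who is
# expert in what. Encoding and retrieval are routed via the directory.

class Member:
    def __init__(self, name: str, domain: str):
        self.name, self.domain = name, domain
        self.store = {}  # individual memory: item -> fact

class TransactiveMemory:
    def __init__(self, members):
        # mutual knowledge of expertise: every domain maps to its expert
        self.directory = {m.domain: m for m in members}

    def encode(self, domain: str, item: str, fact: str):
        self.directory[domain].store[item] = fact  # transactive encoding

    def retrieve(self, domain: str, item: str):
        return self.directory[domain].store.get(item)  # transactive retrieval

group = TransactiveMemory([Member("A", "finance"), Member("B", "engineering")])
group.encode("finance", "Q3 budget", "approved")
# Any member can reach the fact through the directory without
# storing it in their own internal memory.
assert group.retrieve("finance", "Q3 budget") == "approved"
```

The group-level system thus encodes, stores, and retrieves information even though no single member’s store contains everything, which is the functional equivalence with individual memory that Wegner and colleagues point to.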

In our article, we made use of the theory of transactive memory systems to argue that some cognitive mechanisms transcend the brains and bodies of individuals to the social and material environments that they inhabit. For example, in addition to brain-based memories, individual group members may also utilize material artifacts, such as notebooks, archives, and data files, as their memory stores. In addition, other members’ internal and external memory stores may in an extended sense be understood as part of the focal member’s external memory stores as long as she knows their domains of expertise and can communicate with them. Thus the theory of transactive memory can be understood as describing a socially distributed and extended cognitive system that goes beyond intra-cranial cognition (Hutchins 1995; Sutton et al. 2010). For a more detailed discussion of this thesis and its implications for interdisciplinary memory studies, we refer the reader to our article (Sarkia et al. 2020: 11-15).

Ethnicity

The sociologist Rogers Brubaker and his collaborators (Brubaker et al. 2004) have made use of theories in cognitive psychology and anthropology to challenge traditional approaches to ethnicity, nationhood, and race that view them as substantial groups or entities with clear boundaries, interests, and agency. Rather, they treat them as different ways of seeing the world, based on universal cognitive mechanisms, such as categorizing the world into ‘us’ and ‘them.’ Brubaker et al. (2004) also make use of the notions of cognitive schema and stereotype, defining stereotypes as “cognitive structures that contain knowledge, beliefs, and expectations about social groups” and schemas as “representations of knowledge and information-processing mechanisms” (DiMaggio 1997). For example, Brubaker et al. (2004, 44) discuss the process of ethnicization, where “ethnic schemas become hyper-accessible and… crowd out other interpretive schemas.”

In our article, we made use of Brubaker’s approach to ethnicity to illustrate how cognitive accounts of social phenomena need to be supplemented by traditional social scientific research methods, such as ethnographic and survey methods, when we seek to understand the broader social and cultural environment in which cognitive mechanisms operate. For example, in their case study of Cluj, a Romanian town with a significant Hungarian minority, Brubaker et al. (2006) found that while public discourse was filled with ethnic rhetoric, ethnic tension was surprisingly scarce in everyday life. By collecting data with interviews, participant observation, and group discussions, they were able to identify cues in various situations that turned a unique person into a representative of an ethnic group. Importantly, this result could not be achieved simply by studying the universal cognitive mechanisms of stereotypes, schemas, and categorization, since these mechanisms serve merely as the vehicles of ethnic representations, and they do not teach us about the culture-specific contents that these vehicles carry. We refer the reader to our article for more discussion of the complementarity of social scientific and cognitive approaches to ethnicity (Sarkia et al. 2020: 15-17).

References

Bechtel W (2009) “Looking down, around, and up: mechanistic explanation in psychology.” Philosophical Psychology 22(5): 543–564.

Brubaker R, Loveman M and Stamatov P (2004) “Ethnicity as cognition.” Theory and Society 33(1): 31–64.

Brubaker R, Feischmidt M, Fox J, Grancea L (2006) Nationalist Politics and Everyday Ethnicity in a Transylvanian Town. Princeton: Princeton University Press.

Carpenter M, Svetlova M (2016) “Social development.” In: Hopkins B, Geangu E, Linkenauer S (eds) Cambridge Encyclopedia of Child Development. Cambridge: Cambridge University Press, 415–423.

DiMaggio P (1997) “Culture and cognition.” Annual Review of Sociology 23: 263-287.

Herrmann E, Call J, Hernandez-Lloreda MV, Hare B and Tomasello M (2007) “Humans have evolved specialized skills of social cognition: the cultural intelligence hypothesis.” Science 317: 1360–1366.

Hutchins E (1995) Cognition in the wild. Cambridge (MA): MIT Press.

Peltokorpi V (2008) “Transactive memory systems.” Review of General Psychology 12(4): 378–394.

Ren Y and Argote L (2011) “Transactive memory systems 1985–2010: An integrative framework of key dimensions, antecedents, and consequences.” The Academy of Management Annals 5(1): 189–229.

Sarkia M, Kaidesoja T, and Hyyryläinen (2020). “Mechanistic explanations in the cognitive social sciences: lessons from three case studies.” Social Science Information. Online first (open access). https://doi.org/10.1177%2F0539018420968742

Sutton J, Harris CB, Keil PG and Barnier AJ (2010) “The psychology of memory, extended cognition and socially distributed remembering.” Phenomenology and the Cognitive Sciences 9(4): 521–560.

Tomasello M, Carpenter M, Call J, et al. (2005) “Understanding and sharing intentions: The origins of cultural cognition.” Behavioral and Brain Sciences 28: 675–691.

Warneken F, Chen F, Tomasello M (2006) “Cooperative activities in young children and chimpanzees.” Child Development 77(3): 640–663.

Wegner DM, Giuliano T and Hertel P (1985) “Cognitive interdependence in close relationships.” In: Ickes WJ (ed) Compatible and Incompatible Relationships. New York: Springer, pp. 253–276.