Gabriel Tarde on Belief and Desire

How did the late 19th and early 20th century French sociologist Gabriel Tarde understand belief and desire? He did so with much novelty, I want to argue, because he did not associate either belief or desire with “content” (see Hutto 2013). Neither did he associate them with what we might today call qualitative data. Consider the following statement from his most thorough treatment of the topic: “attention is the desire to clarify the nascent sensation, this amounts to saying that it is the desire for an increase in current belief” (1880: 152). Here, attention is a type of desire linked with belief: we pay attention in order to increase belief. Moreover, sensation itself is achieved on the basis of belief, which here names a quantity (of something) associated with the clarity of sensation.

Lots of moving parts here! And the arrangement of these parts, and the relations between them, are novel (and, as I will claim below, perhaps quite contemporary), particularly if we are used to hearing belief and desire linked in the customary, deductive folk-psychological model.

Tarde gives the example of a child questioning whether to walk underneath a large rock teetering on a ledge as he makes his way down a narrow path. The child thinks “if this rock falls it will crush me.” The image of the falling movement of the rock “presents itself to the child’s mind; and his mind … establishes no link of positive or negative faith between the two ideas,” either the rock falling or it not falling. But “he desires, he needs to believe, to affirm or deny. This desire, which has a future belief as its object, is questioning” (153). The premise here seems to be that to walk under the rock, the child needs to establish a positive faith that it will not fall. 

So desire and belief are linked, in Tarde’s view, and both are associated with a theory of action. But if both are also quantitative, then the operative question is less what you believe or what your desire is than how much you believe and desire, a question we cannot answer without attending to perception. Thus, “belief, no more than desire, is neither logically nor psychologically posterior to sensations; that far from arising from the aggregation of these, it is indispensable to their formation, as well as their grouping” (152).

As Tarde explains, his thesis is that “belief and desire are quantities,” and that “sensation is not in itself a quantity” (161). Thus, it makes no sense to say that we sense something with greater or lesser intensity, though it does make sense to say that we believe or desire with greater or lesser intensity. 

Suppose we are running for public office and it is election night. With each new bit of news and gossip our hopes and fears rise and fall. As Tarde puts it, “the calculation of probabilities plays no role in this. But what is quite clear is the very marked quantitative character of these hopes and fears” (172). Our being elected to office, as a particular event in the world, becomes more or less likely, though not because we actually calculate a statistical percentage (e.g. a 90% chance it will happen). Contrary to the present-day association of the quantitative with the frequency of occurrence of many events, Tarde associates the quantitative with particularity. Thus, the only way a frequency of occurrence helps us is by increasing our quantity of belief, though to do this it would, seemingly paradoxically, have to make whatever is counted appear more particular and less diffuse.

Suppose, further, that we see someone at a distance who might be our friend. Initially, we have little belief as all we can see is a mere speck on the horizon. However, “our faith in the reality of [our friend’s] presence” steadily grows as they approach us (172). As Tarde explains, “here again no application of the calculus of probabilities is possible or imaginable. However, these are, I believe, quantitative variations in the same way as the rise or fall of temperature.”

Tarde draws a simple but paradoxical lesson from these examples: quantification increases alongside individualization or particularization. In other words, as the indefinite speck comes into view as our friend Bob, we have quantitatively more belief; meanwhile, as the gossip winds blow strongly against us, we quantitatively lose hope. In both cases, the “clarity of sensation” entails a future object, which implies that Tarde’s statements are of the order of prediction.

Thus, as Tarde himself suggests, probability is integral to belief and desire. As he puts it, “if the calculation of probabilities has a real basis, if it is not a false calculation,” then probability cannot have a mathematical meaning. For belief to directly track “mathematical reasons for believing,” our “increases and decreases of belief” would need to be “proportioned exactly to the increases and decreases of what one might call the mathematical reasons for believing.” Yet the “reasons for believing” are not reflective of “intrinsic characteristics of things.” If they were, that would “restore an objective meaning to probability.” Rather, as Tarde claims:

These are entirely subjective reasons themselves, which consist in the knowledge we have, not of the causes of an expected and unknown event, but of the limits of the field [du champ] outside of which we are sure they will not occur, and of the division of this domain into two unequal portions, one called favorable chances, the other adverse chances, the inequality of which can be quantified (170).

For Tarde, then, belief and desire are not only quantitative, and characterized by increase and decrease, but also probabilistic, and characterized by prediction. (Notably, Tarde was an admirer of the obscure 19th century French philosopher Antoine Augustin Cournot, who associated probability with a “higher faculty whereby we comprehend the order and rationality of things,” and which he in turn associated with “those unshakeable beliefs we call common sense.”) Belief and desire reflect an expansion and contraction in the field of possibility. Outside the field, there is no possibility. Within the field, possibility is dictated by a relative weighing of chances for and chances against. Quantification, again, traces the clarity of our sensation of something particular: the more particular our object, the clearer our sensation. Our prediction of future events is based on our perceptual information and the kind of weight it appears to lend to a field of possibility.

If belief and desire are “psychological” properties in Tarde’s view, that does not mean they are discrete and “contentful.” Thus, they are not “qualitative,” nor can they be treated as qualitative data. They are intentional attitudes, however, as belief and desire are about something that exists in some state of extension relative to us (e.g. distance in space or in time). When we believe and desire we exhibit a directedness toward something; but we cannot dissect our belief or desire into a propositional attitude. They do not, in other words, have a representational content that makes a proposal about the world (e.g. “I believe that” or “I desire that”). Instead, they function more as continua of greater or lesser.

The image Tarde provides is comparable to what I have elsewhere theorized as a loop. Tarde makes it clear that we feel “belief” and “desire”; they are affectual, intensive states, but they appear to be moderated by probability. Tarde does not endorse the separation of belief and desire from perception: “All the force of belief and desire at our disposal and which flows, not without loss, into our conduct and our thoughts, is produced, in fact, or rather provoked by the continual experiences of our senses” (174). As Tarde would say: “belief (la croyance), desire (le désir) and sensation (la sensation)” are les seuls éléments de l’âme (the only elements of the soul) (153). Perception (sensation) provokes belief and desire, though if belief equals “stabilization” and desire equals “extension,” then the “double power” that Tarde alludes to has something of a contradictory task.

If any of this is accurate, then Tarde would seem to anticipate, quite closely, the basic mechanism introduced by the predictive processing paradigm according to which, likewise, the brain is not a representation engine but a prediction engine that also features a kind of double power (see Reeder, Sala and van Leeuwen 2024). Here, perception is achieved through a balanced weighing of predictions based on prior knowledge (top-down) and sensory information from the environment (bottom-up). As Tarde puts it, belief and desire are of the order of transcendental conditions for sensation: 

 … it is time to enter into the very heart of our subject, – belief and desire are, in our opinion, like space and time, quantities which, serving as a link and support for qualities, make them participate in their quantitative character; they are, in other words, constant identities which, far from preventing the heterogeneity of the things embedded within them, enhance them, penetrate them entirely without, however, constituting them, unite them without confusing them, and subsist unalterably in their midst despite the close intimacy of this union. … My thesis, as we see, implies two: 1. belief and desire are quantities; 2. there are no others in psychology, or there are only derivatives of these; which amounts to saying that sensation is not in itself a quantity (161).

In an optimal perceptual system, priors (expectations) will be deployed prior to perception, so that the least amount of energy is spent updating one’s internal model as sensory information accumulates. “Prediction error” refers to the difference between predicted and actual sensory input, and it is inherent to top-down mechanisms of perception. What Tarde argues about sensation and its lack of quantity makes a parallel point. Belief and desire secure sensation by moving toward clarity in a nascent sensation.
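The error-minimizing update at the core of this paradigm can be sketched in a few lines. This is a toy illustration only; the function name, learning rate, and signal values are my own assumptions, drawn neither from Tarde nor from any particular predictive processing model:

```python
# Toy sketch of prediction-error minimization. Illustrative only:
# names and numbers are assumptions, not a published model.

def update_prediction(prediction: float, sensory_input: float,
                      learning_rate: float = 0.3) -> tuple[float, float]:
    """Compute the prediction error and nudge the internal model toward it."""
    error = sensory_input - prediction      # "prediction error"
    prediction += learning_rate * error     # top-down model update
    return prediction, error

# A steady signal: the error shrinks as the prior comes to anticipate input,
# i.e. less "energy" is spent on updating once expectation matches sensation.
prediction = 0.0
for signal in [1.0, 1.0, 1.0, 1.0]:
    prediction, error = update_prediction(prediction, signal)
```

The point of the sketch is only the direction of travel: with each pass the residual error diminishes, so that an unchanging world eventually provokes no update at all.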

On these grounds, Tarde asks whether it is legitimate to “totalize” belief and desire, which seems to mean whether he can talk about them as taking on a collective form. His argument is that sensations differ between individuals, but “if believing and desiring also differed, tradition would be nothing but an empty phrase; nothing human could be transmitted unaltered from one generation to the next” (175). Tarde’s point, again, only makes sense if we do not assume that belief and desire have “content.” As he continues, “only through belief, only through desire, do we collaborate, we fight; only through this, therefore, do we resemble each other.” For Tarde, “the totalization of the quantities of belief or desire of distinct individuals is legitimate” (176). We present such a totalization when we track “variations in the market value of things, statistical figures …” However, to explain these (really existing) “totalities” as ones of content (e.g. beliefs that or values that) would be deeply misleading.

Weber distinguished his own probabilistic approach from Tarde’s on two occasions (2019/1921-22: 100-01; 1981/1913: 167-68). Both times he pivots from Tarde’s principle of “imitation” to a consideration of objective probability. If there is “consensus,” Weber argues, it means that “action oriented toward expectations about the behavior of others has an empirically realistic chance of seeing these expectations fulfilled because of the objective probability that these others will, in reality, treat those expectations as meaningfully ‘valid’ for their behavior, despite the absence of an explicit agreement” (1981/1913: 168). But as Weber continues, “the objectively ‘valid’ consensus—in the sense of calculable probabilities—is naturally not to be confused with the individual actor’s reliance that others will treat his expectations as valid.” It is instead a matter of the relation of “adequate causation between the average objective validity of the probability and the currently average subjective expectation” (168). This would imply that the source of consensus is likewise not to be found in content, and that it too arises as a probabilistic alignment of individuals.

Weber, thus, might have more in common with Tarde than he imagined. But I’ll save that for another post.

 

References

Hutto, Daniel (2013). “Why Believe in Contentless Beliefs?” Pp. 55-74 in New Essays on Belief. Springer.

Reeder, Reshanne, Giovanni Sala, and Tessa M. van Leeuwen (2024). “A Novel Model of Divergent Predictive Perception.” Neuroscience of Consciousness 2024, no. 1: niae006.

Tarde, Gabriel (1880). “La Croyance et le Désir: La Possibilité de leur Mesure.” Revue Philosophique de la France et de l’Étranger (July-December): 150-80.

Weber, Max (2019/1921-22). Economy and Society: A New Translation. Harvard University Press.

_____. (1981/1913). “Some Categories of Interpretive Sociology.” Sociological Quarterly 22, no. 2: 151-80.

Are We Cognitively Susceptible to Tests?

In one of the clearest statements about the difference it makes to emphasize cognition in the study of culture and, more generally, for the social sciences as a whole, the anthropologist Maurice Bloch (2012) writes that, if we consider closely every time we use the word “meaning” in social science, then “a moment’s reflection will reveal that ‘meaning’ can only signify ‘meaning for people’. To talk of, for example, ‘the meaning of cultural symbols’, as though this could be separated from what these symbols mean, for one or a number of individuals, can never be legitimate. This being so, an absolute distinction between public symbols and private thought becomes unsustainable” (4).

As a critique of Geertzian and neo-Diltheyan arguments for “public meaning” and “cultural order” sui generis, Bloch’s point is fundamental, as it reveals a core problem with arguments built on those foundations once they have been untethered from “meaning for people” and given over, almost entirely, to “meaning for analysts.” Yet, as Bloch makes it a point to emphasize, such critiques can only get us so far in attempting to change practices: even if “a moment’s reflection” like this leads some to agree with Bloch’s claim, without an alternative these models will persist more or less unchanged. If “meaning for people” stands for something like a tethering to cognitive science, as recommended by theorists like Stephen Turner (2007), then what is needed is a programmatic way of doing social theory that does not “minimize the cognitive” but instead bridges social theory and cognitive neuroscience in the design of concepts.

In fairness to Geertz, one of his more overlooked essays proposes a culture concept that seems to want to avoid the very problem that Bloch identifies. In “The Growth of Culture and the Evolution of Mind,” Geertz (1973) draws a connection between culture and “man’s nervous system,” emphasizing in particular the interaction of culture and the (evolved) mind in the following terms: “Like a frightened animal, a frightened man may run, hide, bluster, dissemble, placate or, desperate with panic, attack; but in his case the precise patterning of such overt acts is guided predominantly by cultural rather than genetic templates.” Here the problem of relating the cultural to the cognitive seems cleanly resolved, as the latter is reduced to “genetic templates.” Yet, contrary to Sewell’s (2005) positive estimation of this aspect of Geertz’s thought as “materialist,” we should be wary of taking lessons from Geertz if by “materialist” Sewell means a culture concept that does justice to the evolved, embodied, and finite organisms we all are. Nonetheless, in many respects the Geertzian move still prevails in contemporary cultural sociology, which likewise features an admission of the relevance of the cognitive to the cultural but retains a similar bracketing as the de facto way of figuring out the thorny culture + cognition relation.

For instance, Mast (2020) has recently emphasized that “representation” (qua the proverbial turtle) works all the way down, even in the most neurocognitive of dimensions, and so we cannot jettison culture even if we want to include a focus on cognition, because we need cultural theory to account for representation. Likewise, Norton (2018) makes a similar claim by drawing a distributed cognition framework into sociology, but making “semiotics” the ingredient that requires a designated form of cultural theory (in this case, his take on Peircean “semeiotics”) to understand. Kurakin (2020), meanwhile, argues that unless we admit distinguishably cultural ingredients like these, attempting any sort of marriage of culture + cognition will fail, because cognition will be about something that does not tread on culture’s terrain, like “information” for instance.

Each of these is a worthwhile effort, yet each in some manner misunderstands the task at hand in attempting a culture + cognition framework, recapitulating what Geertz did in 1973. This is because any such framework must rest on new concept-formation rather than on what amounts to a defense of established concepts. This would admit that cultural theories of the past cannot be so straightforwardly repurposed without amendment. What we tend to see, rather, are associations of culture concepts (semiotics, representation) and cognitive concepts (distributed cognition, mirror neurons) made by drawing essentially arbitrary analogies and parallels between concepts that otherwise remain unchanged. In most cases, such a bracketed application replicates the disciplinary division of labor in thought, because the onus is never placed on revision, despite the dialectical encounter and the possibilities that each bank of concepts presents to the deficiencies and arbitrariness of the other. We either hold firm to our cultural theories of choice, or we engage in elaborate mimicry of a STEM-like distant relation.

Following Deleuze (1995), we should appreciate that to “form concepts” is at the very least “to do something,” like, for instance, making it wrong to answer the question “what is justice?” by pointing to a particular instance of justice that happened to me last weekend. Deleuze adds insight in saying that concepts attempt to find “singularities” within a “continuous flow.” The insight is apt to the degree that culture + cognition thinking seems rooted in the sense that there is a “flow” here and that, maybe, the concepts we have inherited, most of them formed over the last 80 years, that make culture and cognition “singular” are simply not helpful anymore. Yet to rehash settled, unrevised cultural theories and bring them into relation with emerging cognitive theories (also unchanged) is essentially to “do” something else with our concepts: affirm a thick boundary between sociologists’ jurisdiction and cognitive science’s jurisdiction, forbidding anything that looks like culture + cognition. In all likelihood, this creates only an awkward, fraught, short-lived marriage between the two, which, despite the best of intentions, will continue to “minimize mentalistic content,” carefully limit the role that “psychologically realistic mechanisms” can play in concept-formation, and, in retrospect, probably only produce a brand of social theory that will seem hopelessly antique to sociologists looking back from the vantage of a future state of the field, one possibly even more removed from present-day concerns with “cognitive entanglements.”

The task should instead be something akin to what Bourdieu (1991) once called “dual reference” in his attempt to account for the strange verbiage littered throughout Heidegger’s philosophy (Dasein, Sorge, etc.). For Bourdieu, Heidegger’s work remains incomprehensible to us if we reference only the philosophical field in which he worked, and likewise incomprehensible if we reference only the Weimar-era political field in which he was firmly implanted. Instead, Heidegger’s philosophy, in particular these keywords, consists of position-takings in both fields simultaneously, which for Bourdieu goes some way toward explaining the strange and tortured reception of Heidegger from his own day (with Being and Time something of a bestseller in Germany when published in 1927) to the present, where it remains canonical in pop philosophy pursuits.

Thus, in forming concepts, the goal should not be to posit an order of influence (culture → cognition, cognition → culture), nor to bracket the two (culture / cognition) and state triumphantly that this is where culture concepts can be brought to bear and this where cognitive ones can be, leaving both unchanged. Norton is right: Peirce has a great deal of bearing on contemporary cognitive science (see Menary 2015). But to say this and not amend an understanding of semeiotics (an amendment, it seems, Peirce himself would probably advocate were he alive today, as he considered his semeiotics a branch of the “natural science” he always pursued) is a non-starter.

My argument is that concept-formation of the culture + cognition kind should yield dual reference concepts rather than bracketing concepts or order-of-influence concepts. My proposal will be that the concept of “test” exemplifies such a dual reference concept. We cannot account for the apparent ubiquity of tests, why they are meaningful, and how they are meaningful without reference both to a cognitive mechanism and to a sociohistorical configuration that combines with, appropriates, and evokes it. The analysis here involves genealogy, institutional practice, site-specificity, and social relations.

Elsewhere (Strand 2020) I have advocated a culture + cognition styled approach as the production of “extraordinary discourse” and, relatedly, as concept-formation that can be adequate to “empirical cognition,” a neglected, minor tradition since the time of Kant (Strand 2021; though one with a healthy presence in classical theory). More recently, Omar and I have attempted concept-formation that more or less looks like this in recommending a probabilistic revision of basic tenets of the theory of action (forthcoming, forthcoming). To put it starkly: we need new concepts if we want something like culture + cognition. To work under the heading of “cognitive social science” is akin to a compass-like designation of a new direction. And as Omar (2014) has said, if theorists, so often these days casting about for a new conversation to be part of now that “cultural theory” is largely exhausted and we can only play with the pieces, want a model for this kind of work, they might study the role that philosophers have come to play in cognitive science, engaged as they are in what very much seems like a project of concept-formation.

In this post, I will attempt something similar, more generally as a version of deciphering “meaning for people,” by asking a simple question: Why are tests so meaningful and seemingly ubiquitous in social life (Marres and Stark 2020; Ronell 2005; Pinch 1993; Potthast 2017)? I will consider a potential “susceptibility” to tests and why this might explain why we find them featured so fundamentally in areas as varied as education, science, interpersonal relationships, medicine, morality, technology, and religion, to give a short list, and how they can be given a truly generalized significance if we conceptualize the test as a trial (Latour 1988). More generally, the new(ish) “French pragmatist sociology” has made the épreuve (the French word into which both “test” and “trial” translate) a core concept, as a way of “appreciating the endemic uncertainty of social life” (Lemieux 2008), though without implying too much about what a cognition-heavy phrase like “endemic uncertainty” might mean. The French pragmatists [1] might be on to something: test or trial may qualify as a “total social phenomenon” in the tradition of Mauss (1966), less because we can single out one test as “at once a religious, economic, political, family” phenomenon and more because each of these orders depends, in some manner, on tests. This is more fitting with a cognitive susceptibility perspective, as I will articulate further below.

Provisionally, I will define a test as the creation of uncertainty, a suspension of possibilities, a way of “inviting chance in,” for the purpose of then resettling those possibilities and resolving that uncertainty by singling out a specific performance. After a duration of time has elapsed, the performance is complete. The state of affairs found at the end is what we can call an “outcome,” and it carries a certain kind of “objective” status to the extent that the initial uncertainty or open possibility is different now, less apparent than it was before, and “final” in some distinguishable way. 

If testing appears ubiquitous and “total,” this is not because tests necessarily work better than other potential alternatives as ways of handling “endemic uncertainty.” Nor is it because testing features as part of some larger cultural process in motion (like “modernity’s fascination with breaking known limitations” [Ronell 2005]). Rather, I want to claim that if tests are ubiquitous, this indicates a cognitive susceptibility to tests, thus revealing latent “dispositions,” such that we could not help but find tests “meaningful for people” like us. Some potential reasons why are suggested by referencing a basic predictive processing mechanism:

According to [predictive processing], brains do not sit back and receive information from the world, form truth evaluable representations of it, and only then work out and implement action plans. Instead brains, tirelessly and proactively, are forever trying to look ahead in order to ensure that we have an adequate practical grip on the world in the here and now. Focused primarily on action and intervention, their basic work is to make the best possible predictions about what the world is throwing at us. The job of brains is to aid the organisms they inhabit, in ways that are sensitive to the regularities of the situations organisms inhabit (Hutto 2018).

Thus, in this rendering, we cannot help but notice “sensory perturbations,” those elements of our sensory profile that defy our expectations (or, in more “contentful” terms, our predictions). These errors stand out as what we perceive, and we attend to them either by adjusting ourselves to fit with the error (like sitting up a little more comfortably in our chair) or by acting to change its source, so that we do not notice it anymore. In basic terms, then, the predictive processing “disposition” involves an enactive engagement with the world that seeks some circumstance in which nothing is perceived, because, we might say, everything is “meaningful” (i.e. expected). If we define “meaning” as something akin to “whatever subjectively defined qualities of one’s life make active persistence appealing,” then this adaptation of the test concept might be a way of accounting for meaning with a minimum of “mentalistic content” while incorporating a “psychologically realistic mechanism” (Turner 2007).
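The two routes for quieting an error just described, adjusting ourselves versus acting on the world, can be put in the form of a toy decision rule. Everything here (the names, the 0.5 threshold, the update rules) is an illustrative assumption of mine, not a model drawn from the predictive processing literature:

```python
# Toy sketch of the two error-resolving routes described above:
# revise the internal model (perceptual update) or act on the world
# so that it matches the model. Names, threshold, and update rules
# are illustrative assumptions only.

def resolve_error(model: float, world: float,
                  act_threshold: float = 0.5) -> tuple[float, float, str]:
    """Large errors are acted away; small ones are absorbed into the model."""
    error = world - model
    if abs(error) > act_threshold:
        world = model        # act: change the world to fit expectation
        return model, world, "acted"
    model += error           # perceive: update the model to fit the world
    return model, world, "updated model"

# A slightly-off chair position gets absorbed into expectation;
# a loud perturbation gets acted upon instead.
print(resolve_error(model=0.0, world=0.2))
print(resolve_error(model=0.0, world=2.0))
```

Either branch ends in the same place: model and world agree, the perturbation vanishes, and nothing stands out to be perceived.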

In what follows I will examine whether there is some alignment between this disposition and tests as a ubiquitous social process. If so, then it may be worthwhile to build on the foundation laid by the French pragmatists for concept-formation of the culture + cognition kind.

 

On cognitive susceptibility

The notion of cognitive “susceptibility” is drawn from Dan Sperber (1985) and the idea that, rather than a disposition creating a direct link between cognition and cultural forms, that link may more frequently operate as a susceptibility.

Dispositions have been positively selected in the process of biological evolution; susceptibilities are side-effects of dispositions. Susceptibilities which have strong adverse effects on adaptation get eliminated with the susceptible organisms. Susceptibilities which have strong positive effects may, over time, be positively selected and become, therefore, indistinguishable from dispositions. Most susceptibilities, though, have only marginal effects on adaptation; they owe their existence to the selective pressure that has weighed, not on them, but on the disposition of which they are a side-effect (80-81).

Sperber uses the example of religion. “Meta-representation” is an evolved cognitive disposition to create mental representations that do not have to pass the rigorous tests that apply to everyday knowledge. It enables representations not just of environmental and somatic phenomena, but even of “information that is not fully understood” (83). Because it has these capabilities, the meta-representational disposition creates “remarkable susceptibilities. The obvious function served by the ability to entertain half-understood concepts and ideas is to provide intermediate steps towards their full understanding. It also creates, however, the possibility for conceptual mysteries, which no amount of processing could ever clarify, to invade human minds” (84). Thus, Sperber concludes that “unlike everyday empirical knowledge, religious beliefs develop not because of a disposition, but because of a susceptibility” (85).

The disposition/susceptibility distinction can be quite helpful in navigating the murky waters around Bloch’s trope of “meaning for people,” because we do not necessarily have to give cultural forms over directly to dispositions. Rather, those cultural forms can arise as susceptibilities, which offer far more bandwidth to capture the cognitive dimensions of cultural forms as instances of “meaning for people.”

Thus, when God “tests the faith” of Abraham by ordering him to sacrifice his child Isaac, a space of chances is opened, and depending on how the test goes, something about Abraham will become definitive, at least for a while. A perceived lack of faith becomes equivalent to a noticeable error here, and it can be resolved by absorbing this uncertainty through some process that generates an outcome to that effect. Even though Abraham does not end up sacrificing Isaac in the story, he was prepared to do so, and thus he “proves” his faith. Some equivalent to this “sacrifice” remains integral to tests of faith of all sorts (Daly 1977).

I hypothesize that there must be a (cognitive) reason why this test, and the whole host of others we might come across in fields and pursuits far removed from Abrahamic religion, appears in moments like these and in situations that mimic (even vaguely) God’s “test” of Abraham. The role of tests in this religious tradition, and potentially as a total social phenomenon, indicates something about our “susceptibility” (in Sperber’s sense) to them. “Disposition” in this case concerns the predictive processing disposition to eliminate prediction error either by adapting a generative model to the error or by acting to change the source of the error; either way, our expectations change and we no longer notice what stood out for us before. For tests, the construction of uncertainty, of more possibilities than will ultimately be realized, is a kind of susceptibility that corresponds to the predictive disposition. More specifically, this means that tests allow something to be known to us by enabling us to expect things of it.

 

Tests: scientific, technological, moral

What is remarkable about this is the range of circumstances in which we turn to tests to construct our expectations. Consider Latour’s description of Pasteur’s experimental technique:

How does Pasteur’s own account of the first drama of his text modify the common sense understanding of fabrication? Let us say that in his laboratory in Lille Pasteur is designing an actor. How does he do this? One now traditional way to account for this feat is to say that Pasteur designs trials for the actor to show its mettle. Why is an actor defined through trials? Because there is no other way to define an actor but through its actions, and there is no other way to define an action but by asking what other actors are modified, transformed, perturbed or created by the character that is the focus of attention … Something else is necessary to grant an x an essence, to make it into an actor: the series of laboratory trials through which the object x proves its mettle … We do not know what it is, but we know what it does from the trials conducted in the lab. A series of performances precedes the definition of the competence that will later be made the sole cause of these performances (1999: 122, 119).

Here the test (or “trial”) design works in an experimental fashion by exposing a given yeast ferment to different substances, under various conditions just to see what it would do. By figuring this out, Pasteur “designs an actor,” which we can rephrase as knowing an object by now being able to hold expectations of it, being able to make predictions about it, and therefore no longer needing to fear what it might do or even have to notice it.

Latour is far from alone in putting such emphasis on testing for the purposes of science. Karl Popper (1997), for instance, insists on the centrality of the test and its trial function: “Instead of discussing the ‘probability’ of a hypothesis we should try to assess what tests, what trials, it has withstood; that is, we should try to assess how far it has been able to prove its fitness to survive by standing up to tests. In brief, we should try to assess how far it has been ‘corroborated.’” To put a hypothesis on trial is, then, to imperil its existence, as an act of humility. Furthermore, it is to relinquish one’s own claim over the hypothesis. If a “test of survival” is the metric of scientific worth, then one scientist cannot single-handedly claim control: hypotheses need “corroboration,” a word which Popper prefers over “confirmation” because corroboration suggests something collective.

When Popper delineates the nuances of the scientific test, he also seems to establish tests for membership in a scientific community, based on this sort of collective orientation. That orientation requires individual humility: from the individual scientist’s standpoint, it means “inviting chance in” relative to their own hypothesis, subjecting them to more possibilities than they might individually intend, including the possibility that they could be completely wrong.

Meanwhile, in Pinch’s approach, which focuses specifically on technology, tests work through “projection”:  

If a scale model of a Boeing 747 airfoil performs satisfactorily in a wind tunnel, we can project that the wing of a Boeing 747 will perform satisfactorily in actual flight … It is the assumption of this similarity relationship that enables the projection to be made and that enables engineers warrantably to use the test results as grounds that they have found out something about the actual working of the technology (1993: 29).

The connection with a predictive mechanism is clear here, as projection entails not being surprised when we move into the new context of the “actual world” having specified certain relationships in the “test world.” The projection/prediction parallel is stated almost verbatim here: “In order to say two things are similar, we bracket, or place in abeyance, all the things that make for possible differences. In other words, we select from myriad possibilities the relevant properties whereby we judge two things to be similar … [The] outcome of the tests can be taken to be either a success or a failure, depending upon the sorts of similarity and difference judgments made” (32).

Thus, a generative model is made in the testing environment and then applied in the actual-world environment on the understanding that we will not need to identify prediction error when we do this, because the testing environment is similar enough to the actual world that we will have already resolved those errors. As Pinch concludes, “The analysis of testing developed here is, I suggest, completely generalizable. The notion of projection and the similarity relationships that it entails are present in all situations in which we would want to talk about testing” (37). And this particular use of testing does seem to find analogues far and wide, including the laboratory testing that is Latour’s focus and, more generally, educational or vocational testing, where a similarity relationship likewise depends on a test that can minimize the difference between two contexts (a difference we can understand according to the presence, or hopefully absence, of prediction error). But what if we try to apply the test concept to something more remote from science and technology, like morality?

On this front, we can find statements like the following, from Boltanski and Thevenot:

A universe reduced to a common world would be a universe of definite worths in which a test, always conclusive (and thus finally useless), could absorb the commotion and silence it. Such an Eden-like universe in which ‘nothing ever happens by chance’ is maintained by a kind of sorcery that exhausts all the contingencies … An accident becomes a deficiency … Disturbed situations are often the ones that lead to uncertainties about worth and require recourse to a test in order to be resolved. The situation is then purified … In a true test, deception is unveiled: the pea under the mattress discloses the real princess. The masks fall; each participant finds his or her place. By the ordering that it presupposes, a peak moment distributes the beings in presence, and the true worth of each is tested (2006: 136-138).

In this rendering, tests are quite explicitly meant to make “accidents” stand out, in addition to fraud and fakery. The goal is the construction of a situation purged of all contingencies, in which, likewise, we do not notice anything because the test has put everything in its proper order. When we do notice certain things (e.g. “the same people win all the same tests,” “they are singled out unfairly,” “they never got the opportunity”), these are prediction errors based on some predictive ordering of the world that creates expectation. Simultaneously, they are meaningful (for people) as forms of injustice.

Boltanski and Thevenot dovetail, on this point, with something that became clear for at least one figure in the tradition of probability theory, namely Blaise Pascal (see Daston 1988: 15ff). For Pascal, the expectations formed by playing a game of chance could themselves be the source of noticing the equivalent of “error,” for instance when some player wins far too often while another never wins. A test is the source of an order “without contingency” where “nothing ever happens by chance,” which in this case means a test is the rules of the game that allow for possibilities (all can win) while resolving those possibilities into a result (only one will win). This creates expectations, and Boltanski and Thevenot extrapolate from this (citing sports contests as epitomizing their theory) to identify “worlds” as different versions of this predictive ordering. Injustice is officially revealed at a second level of testing, then, as the test that creates this order can itself be tested (see Potthast 2017). Prediction errors can be noticed and, likewise, resolved through the adaptation of a generative model, which would seem to demand a reformative (or revolutionary) change of the test in a manner that would subsequently allow it to meet expectations.

 

A genealogy of testing

What is interesting about these examples is that, abstracted from history as they are, they demonstrate parallel wings of a tradition that Foucault traces to the decline of the “ordeal” and the birth of the “inquiry.” Both fit the profile of the test, though only the latter gives the outcome the kind of official status or legitimacy of the laboratory test, the technological test, or the moral test. The ordeal involves a sheer confrontation that can occur at any time, and which creates expectations strictly in relation to some other specific thing, whether this be another person, something inanimate and possibly dangerous (like fire), or a practice of some kind (like writing a book). One can always test oneself against this again, and to move beyond known limitations, one must test oneself in order to do anything like revise a generative model by encountering different prediction errors.

Foucault’s larger point here recommends a more general argument, rooted in a kind of genealogy, that justice requires a carceral system; that the only form of justice is the one that rests in illegality. On the contrary, in his earlier work Foucault recommends a different approach to justice, one that renders any necessary association of justice and “the carceral archipelago” mistaken, as it would only consist of a relatively recent, though impactful, appropriation of justice. Thus, the argument Foucault presents is less nominal than it may seem at first, particularly when we consider the following:

What characterizes the act of justice is not resort to a court and to judges; it is not the intervention of magistrates (even if they had to be simple mediators or arbitrators). What characterizes the juridical act, the process or the procedure in the broad sense, is the regulated development of a dispute. And the intervention of judges, their opinion or decision, is only ever an episode in this development. What defines the juridical order is the way in which one confronts one another, the way in which one struggles. The rule and the struggle, the rule in the struggle, this is the juridical (Foucault 2019: 116).

Here the meaning of justice is expanded to refer to the “regulated development of a dispute,” which may or may not involve judges, may or may not take place in a court, and may or may not culminate in some sort of definitive decision or “judgment.” All of these are features added to the basic dispute.

Elsewhere Foucault expands on this by changing his language in a significant way: from “dispute,” justice shifts to “trial,” to which he gives an expansive meaning by drawing a distinction within the category of trial itself, between the epreuve and the inquiry. There is a historical tension in the distinction: inquiries will come to replace epreuves (or “ordeals”) in a Eurocentric history. This division is apparent as early as the ancient Greeks who, in a Homeric version, would create justice through the rule-governed dispute, with the responsibility for deciding—not who spoke the truth, but who was right—entrusted to the fight, the challenge, and “the risk that each one would run.” Contrary to this stands the Oedipus Rex form, exemplified by Sophocles’ great play. Here, in order to resolve a dispute of apparent patricide, we find one of the emblems of Athenian democracy: “the people took possession of the right to judge, of the right to tell the truth, to set the truth against their own masters, to judge those who governed them” (Foucault 2000: 32-33).

This division would be replicated in the later distinctions of Roman law, rooted in the inquiry, and Germanic law, rooted in something more resembling the contest or epreuve, with disputes conducted through either means. Yet with the collapse of the Carolingian Empire in the tenth century, “Germanic law triumphed, and Roman law fell into oblivion for several centuries.” Thus, feudal justice consisted of “disputes settled by the system of the test,” whether this be a “test of the individual’s social standing,” a test of verbal demonstration in formulaically presenting the grievance or denouncing one another, a test of an oath in which “the accused would be asked to take an oath and if he declined or hesitated he would lose the case,” or finally “the famous corporal, physical tests called ordeals, which consisted of subjecting a person to a sort of game, a struggle with his own body, to find out whether he would pass or fail.”

As the trajectory of justice moves, then, the role and place of the epreuve ascends to prominence; testing becomes justice, in other words, as the means to resolve a dispute centers on the ordeal and its outcome, more generally as a way of letting God’s voice speak. In one general account, the trial by “cold water” involved “dunking the accused in a pond or a cistern; if the person sank, he or she was pronounced innocent, and if the person floated, he or she was found guilty and either maimed or killed.” In the trial by “hot iron,” the accused would “carry a hot iron a number of paces, after which the resulting wound was bandaged. If the wound showed signs of healing after three days, the accused was declared innocent, but if the wound appeared to be infected, a guilty verdict ensued” (Kerr, Forsyth and Plyley 1992).

The epreuve, in this case, remains a trial of force or between forces, which may be codified and regulated, as when water or iron would be blessed before the ordeal and therefore made to speak the word of God. More generally, to decline the test was to admit guilt in this binary structure, and this carried over into the challenge, issued by another party in a dispute, to a contest. Thus, justice ended in a victory or a defeat, which appeared definitive, and this worked in an almost “automatic” way because it required no third party in the form of one who judges.

Across this genealogy, we find something equivalent to the creation of uncertainty, in some cases deliberately made, in other cases not, and then its resolution by some means into an outcome after a given duration of time. This outcome may have an institutional sanction (as “justice”) or it could have something more like the sanction of a fight, and presumably the certainty of what would happen should a fight happen again. In these different ways, predictions are made and expectations settled. An “error” stands out as noticeable in a variety of forms: as someone with whom one has a dispute, as an action taken or event that happened but was not expected, whether according to explicitly defined rules or not, or in the case of the democratic link suggested by Foucault, the pressing question of who should rule and whether such rule can be legitimate (see Mouffe 2000). 

Some equivalent to the test (whether as inquiry or ordeal) is involved in all of these cases, and in the genealogy at least, we can glimpse how consequential it might be for a new test form to come on the scene, or to win out over another, as a way of, in a sense, appropriating cognitive susceptibilities that must be activated should “testing” make any difference for predictive dispositions.

 

Conclusion

The larger point is that the concept of test is substantive, here, because we can bridge its properties to properties of cognition. The task is to say that the predictive dispositions that are cognitive create a susceptibility to tests: more specifically, we are likely to find tests meaningful because of our predictive dispositions. If tests are drawn upon across all of these different areas, specifically in cases of uncertainty (whether as dispute, as experiment, as how to design a technology) or what we have established in general terms as “situations in which we are presently engaged with prediction error that we cannot help but notice a lot,” then it would follow that we are susceptible to tests as what allows us to absorb this uncertainty, a process we cannot understand or even fully recognize without reference to “real features of real brains” (Turner 2007). This, I want to propose, is how we can approach “test” as a dual reference concept, and its applicability in areas as varied as religion, politics, science, morality, and technology.

Tests are “meaningful for people” when they absorb uncertainty and generate expectation. They are also meaningful for people when they create uncertainty and enable critique. We could not identify something like a “test” if tests did not have these kinds of cognitive effects, and we cannot understand those cognitive effects without finding a distinguishably cognitive process (e.g. “psychologically real” with lots of “mentalistic content” extending even to neurons). In this case, the parallel of testing and uncertainty and predictive processing and prediction error is not a distant analogy, as is often the case with bracketing concepts. To understand testing’s absorption of uncertainty we need predictive processing, but to understand how predictive processing might matter for the things sociologists care about we need testing.

I’ll conclude with the suggestion that if “test” can qualify as this sort of dual reference concept then we should favor it over other potential concepts that can account for meaning (e.g. “categories,” “worldview,” “interpretation”) but, arguably, cannot be dual reference.

 

Something that looks like endnotes

[1] The French “pragmatists” are, in centering “test” in their concept-formation, not to be received as illegitimate appropriators of that title. Peirce (1992) himself encouraged a focus on the study of “potential” as referring to something “indeterminate yet capable of determination in any special case.” This could very well serve as a clarified restatement of the definition of test. Dewey (1998) makes the connection more explicit in his thorough conceptualization of test: “The conjunction of problematic and determinate characters in nature renders every existence, as well as every idea and human act, an experiment in fact, even though not in design. To be intelligently experimental is but to be conscious of this intersection of natural conditions so as to profit by it instead of being at its mercy. The Christian idea of this world and this life as a probation is a kind of distorted recognition of the situation; distorted because it applied wholesale to one stretch of existence in contrast with another, regarded as original and final. But in truth anything which can exist at any place and at any time occurs subject to tests imposed upon it by surroundings, which are only in part compatible and reinforcing. These surroundings test its strength and measure its endurance … The stablest thing we can speak of is not free from conditions set to it by other things … A thing may endure secula seculorum and yet not be everlasting; it will crumble before the gnawing tooth of time, as it exceeds a certain measure. Every existence is an event.”

 

References

Bloch, Maurice (2012). Anthropology and the Cognitive Challenge. Cambridge UP.

Boltanski, Luc and Laurent Thevenot. (2006). On Justification. Princeton UP.

Bourdieu, Pierre. (1991). The Political Ontology of Martin Heidegger. Stanford UP.

Daston, Lorraine. (1988). Classical Probability in the Enlightenment. Princeton UP.

Daly, Robert. (1977). “The Soteriological Significance of the Sacrifice of Isaac.” The Catholic Biblical Quarterly 39: 45-71.

Deleuze, Gilles and Guattari, Felix. (1995). What is Philosophy? Columbia UP.

Foucault, Michel. (2019). Penal Theories and Institutions: Lectures at the College de France, 1971-72, edited by Bernard Harcourt. Palgrave.

Foucault, Michel. (2000). “Truth and Juridical Forms” in Power: The Essential Works of Michel Foucault, 1954-1984, edited by James D. Faubion. The New Press.

Geertz, Clifford. (1973). “The Growth of Culture and the Evolution of Mind” in Interpretation of Cultures.

Hutto, Daniel. (2018). “Getting into predictive processing’s great guessing game: Bootstrap heaven or hell?” Synthese 195: 2445-2458.

Kerr, Margaret, Forsyth, Richard, and Michael Plyley. (1992). “Cold Water and Hot Iron: Trial by Ordeal in England.” Journal of Interdisciplinary History 22: 573-595.

Kurakin, Dmitry. (2020). “Culture and Cognition: the Durkheimian Principle of Sui Generis Synthesis vs. Cognitive-Based Models of Culture.” American Journal of Cultural Sociology 8: 63-89.

Latour, Bruno. (1988). The Pasteurization of France. Harvard UP.

Latour, Bruno. (1999). Pandora’s Hope. Harvard UP.

Lemieux, Cyril. (2008). “Scene change in French sociology?” L’oeil Sociologique.

Lizardo, Omar. (2014). “Beyond the Comtean Schema: The Sociology of Culture and Cognition Versus Cognitive Social Science.” Sociological Forum 29: 983-989.

Marres, Noortje and David Stark. (2020). “Put to the Test: For a New Sociology of Testing.” British Journal of Sociology 71: 423-443.

Mast, Jason. (2020). “Representationalism and Cognitive Culturalism: Riders on Elephants on Turtles All the Way Down.” American Journal of Cultural Sociology 8: 90-123.

Mauss, Marcel. (1966). The Gift. Something UP.

Menary, Richard. (2015). “Pragmatism and the Pragmatic Turn in Cognitive Science” in The Pragmatic Turn: Toward Action-Oriented Views in Cognitive Science. MIT Press. 

Mouffe, Chantal. (2000). The Democratic Paradox. Verso.

Norton, Matthew. (2018). “Meaning on the Move: Synthesizing Cognitive and Systems Concepts of Culture.” American Journal of Cultural Sociology 7: 1-28.

Pinch, Trevor. (1993). “‘Testing—One, Two, Three… Testing!’: Toward a Sociology of Testing.” Science, Technology, & Human Values 18: 25-41.

Potthast, Jorg. (2017). “The Sociology of Conventions and Testing” in Social Theory Now, edited by Claudio Benzecry, Monika Krause, and Isaac Ariail Reed. University of Chicago Press, 337-361.

Popper, Karl. (1997). The Logic of Scientific Discovery. Routledge.

Ronell, Avital. (2005). The Test Drive. University of Illinois Press.

Sewell, William. (2005). “History, Synchrony, and Culture: Reflections on the Work of Clifford Geertz” in Logics of History.

Sperber, Dan. (1985). “Anthropology and Psychology: Towards an Epidemiology of Representations.” Man 20: 73-89.

Strand, Michael. (2020). “Sociology and Philosophy in the United States since the Sixties: Death and Resurrection of a Folk Action Obstacle.” Theory and Society 49: 101-150.

Strand, Michael (2021). “Cognition, Practice and Learning in the Discourse of the Human Sciences” in Handbook in Classical Sociological Theory. Springer.

Strand, Michael and Omar Lizardo. (forthcoming). “For a Probabilistic Sociology: A History of Concept-Formation with Pierre Bourdieu.” Theory and Society.

Strand, Michael and Omar Lizardo. (forthcoming). “Chance, Orientation and Interpretation: Max Weber’s Neglected Probabilism and the Future of Social Theory.” Sociological Theory.

Turner, Stephen. (2007). “Social Theory as Cognitive Neuroscience.” European Journal of Social Theory 10: 357-374.

Toward a Theory of Habitus Breakage

In the synthesis of theories of practice and predictive processing (here and here), it becomes clear that what concepts like habitus and agency mean cannot be separated from what prediction and objective probability mean. Habitus formation is just another word for learning probabilities and making predictions accordingly. This implies that the more exposure we have to certain objective probabilities, as in a field configuration, the more objective our predictions become, the more we do what is expected, and the better we anticipate what those similarly ensconced in the field can recognize as, appropriately, a “next thing to do.” But as our action becomes, in this sense, almost entirely a form of social action, it also loses its purely subjective imprint, which in probabilistic terms can be redefined as its randomness, its unpredictability, and its surprise.

Rephrased, this becomes the now classical question of whether a habitus can be broken, or how, once acquired, habitus does not simply become a determinative, “reproductive” mechanism. In probabilistic terms, learned probabilities become limits of action and perception, which is not to say that action is fully predictable, only that we can find boundaries across which it will not go, perceptions which it will not perceive. For some, the fact that surprise is still possible negates the very presence of habitus (King 2000). For others, including Bourdieu (2000: 234) himself, the “relative autonomy of the symbolic order” is available to prevent the reproductive tendencies of habitus by providing moments when objective probabilities themselves break and learned expectations no longer matter. This mimics objective situations (“critical moments”) when the loop really does break, and symbolic orders that have no specific grip on the world become just as likely as anything else to loop into the future. But we should not expect that this symbolic escape can be sustained as a replacement (Bourdieu 1988: chap. 5).

In this post, I want to suggest something different. In spaces of highly developed objective probability, “saturated with history” to such an extent that what can be done shows very little deviation from what has been done, an engagement with chance can allow for a form of habitus breakage. The space in question will be the field of art which, so saturated with history, is rife with reproduction and cliche. The interesting thing is that, in the examples below, some artists take note of this and engage chance in order to prevent themselves from being fully “objective” and therefore cliched. Such a standpoint on art-making can be described as follows, in a statement that can easily be extrapolated toward a probabilistic theory of agency more generally:

If we consider a canvas before the painter begins working, all the places on it seem to be equivalent; they are all equally “probable.” And if they are not equivalent, it is because the canvas is a well-defined surface, with limits and a center. But even more so, it depends on what the painter wants to do, and what he has in his head: this or that place becomes privileged in relation to this or that project. The painter has a more or less precise idea of what he wants to do, and this pre-pictorial idea is enough to make the probabilities unequal. There is thus an entire order of equal and unequal probabilities on the canvas. And it is when the unequal probability becomes almost a certitude that I can begin to paint. But at that very moment, once I have begun, how do I proceed so that what I paint does not become a cliche? (Deleuze 2003: 93).

In the predictions that shape action, the artist will predict only what is already objective, which is to say, what is historical and has already been done. They will do this even as they try to do what is “improbable” as this is associated with artistic “worth” as the performance of creativity. As this suggests, to recognize an action as artistic requires some link to expectation: what one does is what is expected relative to those who are, likewise, oriented toward certain objective possibilities. More specifically, if we perceive only what we cannot predict, and if the experienced artist can predict most things about their own work, this means that they perceive less, which is a phenomenon that seems to follow from the acquisition of any form of expertise (Dreyfus 2004). Hence, art “experts” cannot help but fall into cliche and objective rationalization.

So how to break out of this? Consider the following description of what the art historian Yve-Alain Bois calls “compositional art.” In this instance, the facts of composition are clear and speak directly to a specifically designed presence:

Composition is an intended, ordered relationship of discrete parts, a relationship that suggests-that at once builds and needs-an interiority, a solid, plotted depth that fills both the artist as intentional actor and the visual field, however flat, that underpins the painting: one is an analogue for the other … Composition names the pictorial relationship of discrete parts across a field, parts arranged according to a visual order that both underlies the whole, and of which it-the painting as a whole-is an individual instance, proof of laws and orders … The composed object, the structured or designed one, appears right and it appears necessary and specific-because the ordered relationship between the parts of the object structure a relationship between the object and the viewer, and more, between vision as conception and the world (Singerman 2003: 131). 

Thus, what is composed is “flat,” “ordered,” “right” and “necessary.” It fulfills expectations. It can serve as proof of what art or a kind of art (a “style” or “genre”) should be. But all of this is exactly what non-composition attempts to work against, prevent and break. “Non-compositional art” is that which attempts to work against composition by finding its origin in what Bois calls the “motivation of the arbitrary.” 

For Bois, in the most prominent examples of this kind of art-making, the technique used serves as a direct analogue to inviting chance in, because it is precisely the inclusion of non-compositional, chance mechanisms (invited in) that make it possible to find this very peculiar kind of motivation. An example of this is the approach taken by the artist Ellsworth Kelly and his effort, in the 1950s, to “escape the weight of Picasso.”

What is there left to do in the wake of someone who has invented everything? In this respect, at least, Kelly remains an American in Paris. Like Pollock and an entire generation along with him, he knows that, if he wants to accomplish anything, his first task is to escape the weight of Picasso. And since the latter cannot be outdone, the trick is not to try to outdo him. How? By eliminating the human figure, to start with, which means not just refraining from producing effigies ex nihilo, but also not engendering them by displacement or condensation, since even at this little game, Picasso is unbeatable (he has a magician’s touch: not only does he have an infinite array of images in reserve, but he also has the ability to make something out of nothing) … But getting away from Picasso also entails renouncing all choice, all composition (Bois 1992: 16).

Thus, Kelly will set about evading Picasso by essentially trying to undermine his own compositional skills, or what we can appreciate as Kelly’s attempt to break his habitus, as it can only reproduce objective possibilities that Picasso has done the most to define. Kelly first attempts a method of transfer, what he calls the “transformed already-made,” in which he simply sketches what is already there: “we are dealing here with a simple transfer—along the same lines as a photograph or an imprint … The raw material remains untransformed, but above all … it is almost untransformable: just try to make a human shape out of a grid!” (Bois 1992: 16). This includes sketching the lines of windows, of seaweed, of objects seen in churches, of tennis courts, of street patterns, and most notably perhaps, of the awnings of a hotel (Awnings, Avenue Matignon). Thus, in his answer to Picasso, Kelly can now say: “No need to play sorcerer’s apprentice; the figurative intention is not a necessary condition of esthetic transubstantiation; art can be made without transforming anything, without having to re-baptise; anything goes.” The goal, as Rosalind Krauss (1977) put it, is to produce an “index” without a “referent” as the epitome of the figurative opacity of the modernist movement.

Awnings, Avenue Matignon, Ellsworth Kelly, 1950
Automatic Drawing: Pine Branches VI, Ellsworth Kelly, 1950

Yet everything is still too compositional for Kelly, and so his habitus breakage engages a second stage. This involves a more forthright practice with chance mechanisms. He begins to sketch under conditions of sensory deprivation (with his eyes closed, not looking at the paper), a clear effort not to allow himself to predict his own perceptions (Automatic Drawing, Pine Branches VI). Yet the problem he encounters is that the drawings come out as “too perfect, testifying to an exemplary motor coordination, or they are illegible failures that Kelly is not willing to accept as such, first because they look more like straightforward clumsiness than the product of chance, and also because, a priori, there is nothing to prevent one from detecting ‘unconscious’ images in them, and thus falling back—the old surrealist saw—into figuration” (Bois 25). What he realizes is that the “aleatory as such does not so easily give itself up to the eye … for the strictest order of chance to manifest itself, it has to be opposed to the strictest possible order …” And so Kelly makes a third attempt at habitus breakage, now more systematic. In a series of collages, Kelly would use chance mechanisms for their composition: he would cut a drawing into identically-shaped squares and then glue them in an order that tries to reproduce the original composition.

Cité, Ellsworth Kelly, 1951

As Bois puts it, "it is impossible not to notice the unpredictable character of the joints … the geometrical distribution ends up being ever so slightly disrupted …" (25). In arguably his most famous efforts in this direction, Kelly starts with a modular grid and then, pulling numbers from a box and darkening the corresponding blocks in the grid, produces a representation of light flickering on the surface of the Seine River. In another engagement with chance, Kelly uses a stock of gummed papers he happened to discover in a stationery shop, and, taking the small squares of color, arranges them intuitively within a modular grid, "using no system or scientific method except to proceed progressively from the grid's lateral sides toward the center." In still other collages he would number the grids randomly, number the colors, and then put the colored squares into their corresponding boxes.

Study for Seine, Ellsworth Kelly, 1951
Seine, Ellsworth Kelly, 1951
Spectrum of Colors Arranged by Chance II, Ellsworth Kelly, 1951

Thus, in these works, "the system presiding over [them] is in every case rigorous, but also conceived as a pure receptacle of chance. The notion of a systematic art flowed from the necessity of suppressing the arbitrariness of composition, but the arbitrariness presiding over the choice of system remained. Kelly's extraordinary insight was to counterbalance that of the system with the still greater arbitrariness of chance, so as best to eliminate all subjective determination" (Bois 1999: 26). While this marks an attempt to get beyond and exterior to "subjectivity" in compositional decision-making, it is not an engagement with "end of author" approaches that would otherwise, in familiar fashion, emphasize structure or signification over hermeneutics and depth. Rather, as Bois emphasizes, non-composition appears in technique, as practice, and in this capacity directly engages with chance as the antithesis to "subjective decision-making" and its compositional effects.
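The box-drawing procedure behind Seine can be caricatured in a few lines of code. This is only a toy illustration: the function name, grid size, and number of draws are my own assumptions, not Kelly's actual protocol, but it shows how a rigorous system can serve as "a pure receptacle of chance":

```python
import random

def chance_grid(rows, cols, n_dark, seed=None):
    """Simulate a Kelly-style chance procedure: draw cell numbers
    'from a box' (without replacement) and darken those cells."""
    rng = random.Random(seed)
    # the 'box' of numbered slips, one slip per grid cell
    box = list(range(rows * cols))
    drawn = rng.sample(box, n_dark)           # pull n_dark slips at random
    grid = [['.' for _ in range(cols)] for _ in range(rows)]
    for n in drawn:
        grid[n // cols][n % cols] = '#'       # darken the drawn cell
    return grid

# An 8x8 grid with 20 cells darkened purely by chance:
for row in chance_grid(8, 8, 20, seed=1):
    print(' '.join(row))
```

The system (a fixed grid, a fixed number of draws) is entirely rigorous; which cells darken is entirely left to chance, which is the counterbalance Bois describes.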

Three Studies for a Portrait of George Dyer (on Light Ground), Francis Bacon, 1964

For other painters like Francis Bacon, the engagement with chance comes in the form of making "free marks" on the canvas, "so as to destroy the nascent figuration in it and to give the Figure a chance, which is the improbable itself. These marks are accidental, 'by chance'; but clearly the same word, 'chance,' no longer designates probabilities, but now designates a type of choice or action without probability" (Deleuze 94). In the case of Three Studies for a Portrait of George Dyer (on Light Ground), the white mark drawn to shape Dyer's face and strewn across it is exactly this kind of "free mark" that seems to prevent the painting, based on a photograph, from being figurative. In Painting, Bacon describes how he first tried to draw a bird in a yard, but it turned out to be a man with an umbrella: "Well, one of the pictures I did in 1946, the one like a butcher's shop, came to me as an accident. I was attempting to make a bird alighting on a field. … suddenly the lines that I'd drawn suggested something totally different, and out of this suggestion arose this picture. I had no intention to do this picture; I never thought of it in that way. It was like one continuous accident mounting on top of another. … I don't think the bird suggested the umbrella; it suddenly suggested this whole image." In this case, the marks that Bacon made turn out to be free marks, as they seem to elude the initial predictions he was making of his own work, such that he could not produce what he had set out to. And so he instead fills the canvas with free marks.

Painting, Francis Bacon, 1946

What is notable about each of these "non-compositions" is that, as examples of inviting chance in, they also demonstrate the evident connection between probability and action. Merleau-Ponty (1964) describes the painter Henri Matisse, who, captured painting by a slow-motion camera, was amazed at how deliberate his brushstrokes appeared on film, when from his own vantage they were anything but deliberate. Through this engagement with time, it became clear that, while viewing the recorded brushstrokes, the viewer (and Matisse himself) could witness an "infinite number of data … an infinite number of options" as possible. "Matisse's hand did hesitate," as Merleau-Ponty put it, choosing among "twenty conditions which were unformulated or even informulable for anyone but Matisse, since they were only defined and imposed by the intention of executing this painting which did not yet exist" (46).

It becomes evident that “art” or “painting” or “modernist painting” is not a structure of signification, though it still remains something that we can call objective. In this case, it is not Matisse’s creation; he, instead, works with it. But what he works with are probabilities, or historically-developed chances, ones that his hand transforms into something actual, that in some sense looks like it should, as he puts brush to canvas. This entire process, as Merleau-Ponty emphasizes, is not different from what happens when we write or speak. What we do not do with language is “dwell in already elaborated signs and in an already speaking world” and simply “reorganize our significations according to the indications of the signs.” In this rendering, there is no space for probability, because all possibilities have already been determined. In Bourdieu’s distinction, this is more accurately described as an apparatus that requires no motivation on the part of actors in order to use it, because objective rationalization so completely displaces all uncertainty.

This is a false picture, however, because a structure tends to ignore certain minor facts, like what happens when we speak: as we do, we engage not with pre-established signs whose meaning is removed from probability. As Merleau-Ponty puts it, "to speak is not to put a word under each thought; if it were, nothing would ever be said." Instead, language constitutes a space of objective probabilities that our action (speaking, writing) can be oriented to and which, in this case, acquires its meaning at least partially through the words we use to evoke our thought and make it actual (just as Matisse uses art to make his painting actual). Thus, the meaning of what we speak and write is at least partially determined by how probable or improbable our words are, and by what else we could have said given the objective probabilities of language at a given historical time, plus an interlocutor who has likewise learned those probabilities.

Both language and art are objectively constituted by probability, and when we engage with them, we therefore "take chances" and engage with uncertainty. That uncertainty can vary widely, both subjectively and objectively. When we have not (yet) formed expectations, or learned objective probabilities, we might experience lots of (subjective) uncertainty, even if what we are doing is not improbable (objectively speaking). By the same token, an objective uncertainty can apply to what we do because what we do is improbable (at least at the current moment). If we return to non-compositional art, we can better appreciate what it means to invite chance in as, in this case, a technique to interrupt a kind of self-feeding loop that occurs when expectations so perfectly match probabilities that the result can only be, as Bois puts it, "the communication of form." To engage in non-compositional technique, then, breaks habitus by making chance decide what will happen next, where the hand will go next, what the next result will be, rather than a finely-attuned set of expectations that can, essentially, do nothing unexpected in relation to what is objectively probable.

 

References

Bois, Yve-Alain. (1992). "Ellsworth Kelly in France: Anti-Composition in its Many Guises" in Ellsworth Kelly: The Years in France, 1948-1954. National Gallery of Art.

_____. (1999). “Kelly’s Trouvailles: Findings in France” in Ellsworth Kelly: The Early Drawings, 1948-1955. Harvard Art Museum.

Bourdieu, Pierre. (1988). Homo Academicus. Stanford UP.

_____. (2000). Pascalian Meditations. Stanford UP.

Deleuze, Gilles. (2003). Francis Bacon: The Logic of Sensation. Continuum.

Dreyfus, Stuart. (2004). “The Five-Stage Model of Adult Skill Acquisition.” Bulletin of Science, Technology & Society 24: 177-181.

King, Anthony. (2000). "Thinking with Bourdieu Against Bourdieu: A 'Practical' Critique of the Habitus." Sociological Theory 18: 417-433.

Krauss, Rosalind. (1977). “Notes on the Index: Seventies Art in America. Part 2.” October 4: 58-67.

Merleau-Ponty, Maurice. (1964). “Indirect Language and the Voices of Silence” in Signs. Northwestern UP.

Singerman, Howard. (2003). "Non-Compositional Effects, or the Process of Painting in 1970." Oxford Art Journal 26: 125-150.

On Looping Effects

In this post, I sketch out some preliminary ideas for introducing repetition into theories of social formation and for situating cognition at their base. The major principle for this endeavor is what I (unoriginally) propose as loops. More originally, I argue that loops take at least three different forms and that not all looping effects are created equal. Loops are equal, however, in putting the onus on repetition as a source of scaled orders and formations (e.g. “structures,” “enclosures,” “molds,” “modulations”) rather than on generality.

A loop, quite simply, is a generative process with one necessary condition: a non-identity between two parts, but two parts that repeat in their connection. A repeating loop makes a scaled order particular rather than general, because it is fundamentally a sequence.  Loops might be broadly instantiated, but they cannot find equivalences everywhere. As repetitions, they need the same ingredients. As connections and cycles, they remain distinctive rather than ordinary. 

Douglas (1986: 100-02) suggests that a looping effect need not directly involve a human agent at all; it can be mostly natural. Knowledge of microbes leads to medications; when those medications are applied to microbes, the microbes adapt. This describes a clear feedback relation, but it is not a loop: microbes change because of knowledge about them, but the change needs no repetition. It is adaptive instead.

The thing about a loop is that it must repeat (again and again). A loop must generate its own momentum. This link between repetition and loops has been neglected to date. Such neglect means that looping effects are discussed separately from the prevalence of scaled orders and formations, for instance those that are "disciplinary" versus those that are "control," leaving both with fuzzy meaning and provenance, used interchangeably. When we concentrate on looping effects as repetitions, it becomes clear that orders and formations of scale (e.g. those not presumed to be general, that cannot find equivalences everywhere) rest largely on an edifice of cognition that, in practice, does not need to remain implicit in order to be effective.

 

Hacking loops (enclosures, molds)

For Hacking (1995; 2006), loops come from the process of classification and categorization that feeds a dynamic nominalism. Classifications are made by people about people, as an index of the traits and properties displayed by the latter. In a Hacking loop, classifications made by people about people loop into their target and alter it. The classifiers "create certain kinds of people that in a certain sense did not exist before." The onus rests on the name given to these traits that collects them together. Hacking loops therefore represent a form of nominalism: they need not become entangled with real kinds or make any difference for them. What matters more is the name, its legitimation by expertise, its elaboration by institutions, its officialization by bureaucracy, all of which reinforce its external, public, and legitimate presence.

As Hacking puts it: 

In 1955 “multiple personalities” was not a way to be a person, people did not experience themselves in this way, they did not interact with their friends, their families, their employers, their counsellors, in this way; but in 1985 this was a way to be a person, to experience oneself, to live in society (2006).

The intervening 30 years meant the formulation of this name, the proliferation of knowledge and the accumulation of references under its heading, all giving it a stable external presence by indexing various evident things (e.g. "this is that") in a distinguishable way. While the traits of multiple personality (or manic depression, anxiety disorder, etc.) might have preceded the name, this is not Hacking's point: as a "way to be a person," multiple personality needed a name that could index these traits and thereafter be a way of indexing oneself (e.g. "that is me"). Such a sequence (this is that → that is me) becomes fully reversible (that is me → this is that). A name, increasingly standardized, rationalized and externalized (e.g. as "discourse"), makes up and stabilizes people by feeding back into their identification (this is that ←→ that is me).

To be a certain type of person, to live in society as that person, to be interacted with as that person, and most importantly to experience oneself as that person occurs through a Hacking loop. Importantly, all of these effects require only an external process rather than a change of “inner life.” A name is “deep” in a contingent sense: the loop rests upon the identification of those who are indexed. It makes no substantive difference for the traits that are indexed that they now receive a name. In fact, Hacking loops seem to require this given the proliferation of names and entire professions and forms of expertise dedicated to classification. However, it does make a difference for those who are classified and named. Hacking loops create an enclosure or mold, from which there might be no escape so long as the name is externally maintained.

 

Mutually sustaining relations (structures)

A different loop is proposed by Sewell in what we might call his principle of mutually sustaining relations. The terminology here concentrates on a “mutually sustaining” loop between two different kinds of things (schemas and resources), as mentioned in the following influential formula and applied (famously) to a theory of structure:

Structures … are constituted by mutually sustaining cultural schemas and sets of resources that empower and constrain social action and tend to be reproduced by that action. Agents are empowered by structures, both by the knowledge of cultural schemas that enables them to mobilize resources and by the access to resources that enables them to enact schemas (27).

As Sewell implies, schemas and resources mutually sustain each other through a repeating connection. A schema remains an effect of resources, just as resources are the effect of schemas.

When the priest transforms the host and wine into the body and blood of Christ and administers the host to communicants, the communicants are suffused by a sense of spiritual well-being. Communion therefore demonstrates to the communicants the reality and power of the rule of apostolic succession that made the priest a priest. In short, if resources are instantiations or embodiments of schemas, they therefore inculcate and justify the schemas as well (13)

Unlike a Hacking loop, Sewell's "mutually sustaining" loop does suggest a deep effect, as the very constitution of a set of properties as "resources" is schema-dependent, just as the constitution of mental categories as "schemas" is resource-dependent. A resource is equivalent to the traits that a Hacking loop collects under the heading of a name, but a schema does not "name" them. Sewell characterizes the mutually sustaining link instead as "reading" or "interpreting." A resource needs to be read as a resource in order to be a resource. A schema does the reading. A schema, presumably, is not a schema if it does not read or interpret resources. The loop can be initiated through either end: resource accumulation to a schema (resource → schema) or schema accumulation to a resource (schema → resource). A loop becomes difficult to sustain in cases that allow for too much agency (e.g. transposition of schemas), which prevents an unambiguous rendering of resources.

In cases where there are limited schemas for "reading" and "interpreting" a resource, and this is in turn "sustained" by limited resources for other possible schemas, a "structure" will result. A structure is distinguishable from a "mold" or "enclosure" in a Hacking loop. Structure, by contrast, suggests not only a potential source of resistance but also the limits of meaning. This entails the "depth" of structure as opposed to the externality of a mold. Structure refers to inner life, which it substantially depends on shaping and altering. The surface-level chaos of capitalism, for instance, only signals the depth of a schema ←→ resource loop: the schematic and repeating transformation of use- to exchange-value is a necessary condition for "resource" in this context; resources, meanwhile, accumulate to "schemas" that involve a use-to-exchange transformation.

We should expect structures to change through disruption to an established loop, via the interchangeability and replacement of both parts of structural loops (schemas and resources). This creates demands on inner life through transpositions that likely appear “impractical” in their interpretations and reading of things. The chance of resource accumulation keeps the possibility of structural change open.

 

Expectations-chances loops (modulations)

Hacking loops and Sewell’s mutually sustaining loops are both known well enough by this point as to render the above discussion boring by comparison. To finish this post, I want to make two proposals: first, that Hacking’s “molds” and “castings” and Sewell’s “structures” are both loops found within a disciplinary order. This suggests a relative limit on their generality, though equally they remain contingent on repetition (as loops). Second, I want to understand a disciplinary order as distinct from a control order based on a different loop, one that engages cognition differently than naming, reading, or interpreting (Deleuze 1992). This is an expectations-chances loop that works according to (objective) prediction and (subjective) guessing (see Bourdieu 1973: 64).

In one version of this loop, the tale is told indicatively as follows:

Acrimonious debates about the calculative abilities of individuals and the limits of human rationality have given way to an empirical matter-of-factness about measuring action in real life, and indeed in real time. The computers won, but not because we were able to build abstract models and complex simulations of human reasoning. They bypassed the problem of the agent’s inner life altogether. The new machines do not need to be able to think; they just need to be able to learn. Correspondingly, ideas about action have changed (Fourcade and Healy 2017: 24). 

Hence, a proposal for non-intentional action becomes applicable to data-gathering mechanisms, but the “index” is different in this scenario, as it no longer includes “inner life.” “Culture” is an association rather than an internalized pattern generator. It does not have effects, but rather stands for a history of traces:

When people are presumptively rational, behavioral failure comes primarily from the lack of sufficient information, from noise, poor signaling or limited information-processing abilities. But when information is plentiful, and the focus is on behavior, all that is left are concrete, practical actions, often recast as good or bad ‘choices’ by the agentic perspective dominant in common sense and economic discourse. The vast amounts of concrete data about actual ‘decisions’ people make offer many possibilities of judgment, especially when the end product is an individual score or rating. Outcomes are thus likely to be experienced as morally deserved positions, based on one’s prior good actions and good taste.

A theory of action remains, then, even despite the absence of inner life, because data is simply action. Data can modulate action through a “herding” or directing effect, creating futures based on past performance and subsequent encoding. Since there is no inner life, classification is based on information collected at junctures that create possible futures. The causes of action are not of interest (only that action happens), though there are consequences to action. This can exercise a disciplinary effect through anticipation, as facilitated by the rationalization of trials. Since there is no ideal model (or name gathering characteristics a priori), however, this is not integral to control. There is only the fact that one must have been through certain trials and then out of them.

Predictions made through data protocols interface with predictions made in action. Trials introduce uncertainties that meet with anticipations; a certain future is achievable when possibilities are presented algorithmically and displace an otherwise “wild” cognition. Control becomes an algorithmic modulation of future possibilities rather than a generative modulation of guesses.

The systematic production of “good matches” is based on controls exercised on the means of prediction from both ends: the expropriation of the means of prediction and the controlled distribution of what they predict. This keeps the loop closed between the (objective) provision of possibilities and (subjective) anticipations or guesses, making “this matching feel all the more natural because it comes from within—from cues about ourselves that we volunteered, or erratically left behind, or that were extracted from us in various parts of the digital infrastructure” (Fourcade and Healy 17). 

Modulation takes place through cognitive loops, constructing a “self-deforming cast that will continually change from one moment to the other, or a sieve whose mesh will transmute from point to point” (Deleuze 4). Conventionally, the connection between “schema” and power is content-laden and substantive: it provides a way to “read” resources (Sewell 13). An expectations-chances loop finds no equivalent to “reading” (or interpreting or naming); the key process is guessing instead. A non-individual recorder or record-keeper (qua technology) can guess even if it cannot read, and it can adapt its guesses, improve them. Here looping is incompatible with “molding” or “casting”; “structure” is static by comparison. After all, you can know when you leave the “cast” and its standard no longer applies.

The theory of power embedded in a schema-resources loop puts the onus on schemas that “read” resources; this is where we find agency. In a disciplinary context, an ideal or standard (a telos) is enforced and sought after. In control contexts, such a standard goes missing. Trials are not examinations. A model is volunteered rather than enforced. An individual is a record, though there is no record-keeping individual (“examiner” or “recorder”). Rather than being incorporated into a structure (through schemas), agents are made precise as a code or classification. They do not exercise effects (structural or otherwise) but are given possible futures. They are not shoehorned into the fixed parameters of a schema. They bootstrap themselves into sequences that look increasingly like their own good matches. 

 

Conclusion 

We should therefore expect the genesis and transposition of expectations just as we do those of schemas or names, in looping connection with chances, as a way of inviting chance in or taming it. But there is a catch. The consequence of a “controlled” expectations-chances loop can be similar to the amnesiac returning to memory after several long years: “My God! What did I do in all those years?” (Bourdieu [1995] quoting Deleuze 1993). Consider, along similar lines, a “coming to” after diving down an algorithmically modulated rabbit hole. The explanation must be cognitive because this occurs through repeating loops. Disciplinary formations can achieve (reflexive) “consciousness” and nothing will change; the same is not true for control formations.

 

References

Bourdieu, Pierre. (1973). “Three forms of theoretical knowledge.” Social Science Information 12: 53-80.

Bourdieu, Pierre. (1995). The State Nobility. Stanford University Press.

Deleuze, Gilles. (1992). “Postscript on societies of control.” October 59: 3-7.

Deleuze, Gilles. (1993). The Fold: Leibniz and the Baroque. University of Minnesota Press.

Douglas, Mary. (1986). How Institutions Think. Syracuse University Press.

Fourcade, Marion and Kieran Healy. (2017). “Seeing like a market.” Socio-Economic Review 15: 9-29.

Hacking, Ian. (1995). “The looping effects of human kinds.” Pp. 351-394 in Causal Cognition: A Multidisciplinary Debate. Oxford: Clarendon Press.

Hacking, Ian. (2006). “Making up people.” London Review of Books 28.

Sewell, William. (1992). “A theory of structure: duality, agency and transformation.” American Journal of Sociology 98: 1-29.

Did John Dewey Put Prediction into Action?

Prediction does not appear, at first, to be something that a sociologist, or really any analyst of anything, can safely ascribe to those (or that) which they are studying without running afoul of about a thousand different stringent rules that define how probability can be used for the purposes of generating knowledge. If we follow the likes of Ian Hacking (1975) and Lorraine Daston (1988) (among others), then “modern fact-making” has a lot to do with ways of using probability, especially for the purposes of making predictions. To the degree that this transforms probability into prediction, as referring to the epistemic practices that analysts use to generate a knowledge claim, this usage actually places limits on what probability can mean, how prediction can be used, and where we might find it. If we don’t have certain epistemic practices (e.g. a nice regression analysis), then we can’t say that prediction is occurring anywhere we are not doing it ourselves.

As Hacking and Daston indicate, however, for probability to be limited almost entirely to epistemic practices in this sense would appear strange to those who can stake any sort of claim to having “discovered” probability, especially Blaise Pascal. He, for one, did not understand probability to be limited to efforts at making predictions for the purposes of knowledge. For Pascal, probability had direct analogues in lived experience (without calculation) in the form of senses of risk and high stakes, and the perceived fairness of outcome, particularly in games of chance.  If this seems unusual to us now, given the strictures we place on probability and prediction, these points are far less unusual for what is fast appearing as a major paradigm in cognitive science, namely predictive processing (see Clark 2013; Friston 2009; Wiese and Metzinger 2017; Williams 2018; Hohwy 2020).

To put it simply, predictive processing makes prediction the primary function of the brain. The brain evolved to allow for the optimal form of engagement with a contingent and probabilistic environment that is never in a steady state. Given that our grey matter is locked away inside a thick layer of protective bone (i.e. the skull), it has no direct way of perceiving or “understanding” what is coming at it from the outside world. What it does have are the senses, which themselves evolved to gather information about that environment. Predictive processing says, in essence, that the brain can have “knowledge” of its environment by building the equivalent of a model and using it to constantly generate predictions about what the incoming sensory information will be. This works in a continuous way, both at the level of the neuron and synapse, and at the level of the whole organism. The brain does not “represent” what it is dealing with, then, but uses associations, co-occurrences, tendencies and rhythms to predict what it is dealing with.

All of this is contingent on making the equivalent of constant, future-oriented but past-derived best guesses. When those guesses are wrong, this generates error, which forms the content of our perceptions. In other words, what we perceive and consciously attend to is the leftover error of our generative models and their predictions of our sensory input. When those guesses are right, by contrast, we don’t have perceptual content because there is no error. The generative models we build are themselves multi-tiered, and the predictions they make work at several different levels of composition. A full explanation of predictive processing far exceeds the limits of this post. But this multi-tiered character, in particular, is worth mentioning because it means that a generative model is not static or unchanging. Quite the contrary, generative models constantly change (at some compositional level) in order to better ensure prediction error minimization.
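The guess-error-update cycle described above can be caricatured in a few lines of code. This is a deliberately minimal sketch (a single-level model with one learning rate, all names my own), not a claim about the multi-tiered architecture predictive processing actually proposes:

```python
def predictive_loop(signal, lr=0.3):
    """Toy sketch of prediction-error minimization: the 'model' keeps a
    running prediction of incoming sensory values; what is 'perceived'
    at each step is only the residual error, and the model revises
    itself by a fraction (lr) of that error."""
    prediction = 0.0
    errors = []
    for s in signal:
        error = s - prediction       # perceptual content = surprise
        errors.append(error)
        prediction += lr * error     # update the generative model
    return errors

# A steady input: early guesses are wrong (large error, vivid 'percept');
# later guesses are right, and the error shrinks toward zero.
errs = predictive_loop([1.0] * 20)
print(abs(errs[0]), abs(errs[-1]))   # error shrinks as the model learns
```

On this caricature, a perfectly adapted model produces no error and hence, as the post puts it, no perceptual content at all.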

Some of these points will probably not sound that unusual. The relationship between minimized perceptual content and action is commonly referred to in discussions of embodiment and moral intuition, for example. What probably sounds very unusual, however, is the central role given here to prediction. 

As mentioned, prediction has been essentially cordoned off in the protected sphere of knowledge, to be used only by specialists wielding specialist tools and training. While it can be done by the folk, we (the analysts) love to point out how they do it poorly. On the off chance they happen to predict correctly (e.g. gambling on the long shot), this is celebrated as the exception that proves the rule. After all, the folk do not have our epistemic practices or training. All they have is their (subjective) experience and biases. In fact, deriding those presumably bad at predicting by means of increasingly sophisticated techniques for making predictions on increasingly large datasets has become par for the course in the era of “analytics” (Hofman, Sharma and Watts 2017), and this particular symbolic power is now wielded quite overtly in a variety of fields (like baseball). Thus, to take prediction away from action could have, all along, been just another way of saying that because we (the analysts) predict and they (the folk) do not predict, or do so poorly, they need us.

But is this the case? Predictive processing poses a serious question to this assumption and, with it, to the role that prediction plays in making sociological knowledge different from folk knowledge. There is also a bit of history worth mentioning. The assumption that prediction plays only a negligible part in action, while other things like values and beliefs play a big part, comes from Talcott Parsons, who explicitly set out to marginalize prediction (1937: 64). Sociologists have rightfully been in the mood of pooh-poohing Parsons for quite some time; but any proposal to put prediction into action remains just as heretical today as it was to Parsons in the 1930s. The presumption that prediction can play no direct or significant role in action, one of his major points about action, has still not been revisited, let alone revised.

The purpose of this post is simply to sketch out the suggestion that we can do exactly this (e.g. put prediction into action) without falling over our feet and retreating sheepishly to the safety of the domain that Parsons carved out for us should we ever wish to talk about “action” again. Far be it from me to attempt to do this on my own. So, for the purposes of illustration, a few pages from John Dewey’s Logic: The Theory of Inquiry (1938: 101-116) (and a few from Human Nature and Conduct [1922]) will be enlisted for the task. I will argue that in these pages, which are famous because in them Dewey gives specific proposals about the process or stages of inquiry, Dewey does put prediction into action, and he does so in a way that does not seem that controversial; though, by any legitimate contemporary meaning of “prediction,” these are heretical claims.

For Dewey, in contrast to Parsons, the action situation is not neatly parsed into the “objective state of affairs” that could be described with scientific precision by an external observer (and for which prediction is appropriate) and the “subjective point of view” of the actor (for which, by implication, prediction does not apply, lest we “squeeze out” the creative, voluntaristic element). Instead, the “state of affairs” is, according to Dewey (1922, p. 100ff), irreducibly composed of an entanglement of both objective and subjective elements. The very act of perception of a given state of affairs on the part of the actor introduces such a subjective element (for Parsons perception was not necessarily part of the subjective element of the action schema). 

Perception is not just purely spectatorial or contemplationist, then, but serves as the “initial stage” in a dynamic action cycle. Perception is for something, and this something is anticipation and prediction. Thus, “the terminal outcome when anticipated (as it is when a moving cause of affairs is perceived) becomes an end-in-view, an aim, purpose, a prediction usable as a plan in shaping the course of events” (Dewey 1922:101, italics added). In a stronger sense, for Dewey perceptions are predictions, which in their turn are ends-in-view. Perceptions are “projections of possible consequences; they are ends-in-view. The in-viewness of ends is as much conditioned by antecedent natural conditions as is perception of contemporary objects external to the organism, trees and stones or whatever” (102).

For Dewey (1938), this can extend even further into what arguably remains his most influential contribution to pragmatist thought: the process of inquiry, as it “enters into every aspect of every area” of life (101). Inquiry, as Dewey defines it, is the “controlled or directed transformation of an indeterminate situation into one that is so determinate in its constituent distinctions and relations as to convert the elements of the original situation into a unified whole” (104-105). This filters into all subsequent understandings of pragmatist problem-solving.

The “indeterminate situation” (105) that provides antecedent conditions for inquiry is constituted by doubt, but this is not a purely subjective state (“in us”). Doubt refers to our placement in a situation that is doubtful because we cannot respond to it as we are accustomed: “the particular quality of what pervades the given materials, constituting them a situation … is a unique doubtfulness which makes that situation to be just and only the situation it is” (105). Specifically this means that we cannot form ends-in-view with respect to the situation, though we can “[respond] to it … [in] blind and wild overt activities.” As Dewey stresses, “it is the situation that has these traits,” which means that we are simply a part of the situation in being doubtful; one part of the total configuration. To simply “change our mind” with respect to the doubtful situation is hardly enough to change it, though with any indeterminate situation, we might respond by carrying through a “withdrawal from reality.” The only thing that will really be effective, however, is what Dewey calls a “restoration of integration” in which the situation changes as our situation within it changes (e.g. as we change) (106).

Underlying Dewey’s proposals, then, is a kind of cognitive mechanism, which he does not label outright, but which, likewise, rests on prediction, and on which the stages of inquiry themselves appear to rest. For Dewey (107-108), it is possible to remain in the doubtful situation forever, particularly should you find an effective means of “withdrawing from reality.” The next stage in the process of inquiry will only occur through a change in “cognitive operations,” specifically what Dewey labels “the institution of the problem … The first result of evocation of inquiry is that the situation is taken, adjudged, to be problematic. To see that a situation requires inquiry is the initial step in inquiry” (107). But to take this step, as Dewey implies, requires a change in the manner of prediction, in a sense not dissimilar to a roughly equivalent mechanism identified by predictive processing.

If the indeterminate situation does not allow for perceptions as “ends-in-view,” then in the problematic situation the actor (i.e. the “interpretant”) changes because, in the situation, she is now characterized by an explicit representation: “without a problem, there is blind grasping in the dark.” This representation is needed as a change in cognition, but only as a mediating and not a permanent state. The constant in this process, which allows representation to appear explicitly now only to disappear later on, can only be successive forms of prediction that, in Dewey’s terms, are trying to obtain an end-in-view. In other words, the explicit representation of “problem” itself presupposes a prediction about error. More generally, we are part of a problematic situation because we predict that it should go one way and it does not, and then we anticipate what would be required to minimize that error, which then forms the basis for future action. In an almost directly analogous sense, predictive processing refers to this as “active inference.”
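
The error-driven loop just described can be caricatured in a few lines of code. This is only my illustrative sketch, not Dewey’s argument and not the formal apparatus of predictive processing (e.g. Friston 2009); the function name, quantities and learning rate are all invented for the toy.

```python
# A toy sketch of the loop described above: an agent predicts an
# observation, registers the mismatch as "prediction error," and then
# minimizes that error either by revising its model (perceptual
# inference) or by acting on the world so that the observation comes
# to match the prediction (active inference). All numbers are invented.

def active_inference(world_state, prediction, lr=0.5, tol=1e-3, max_steps=100):
    """Alternate model updates and world-changing actions until error is small."""
    for step in range(max_steps):
        error = world_state - prediction      # the "problem" registers as error
        if abs(error) < tol:                  # the situation is now "determinate"
            return prediction, world_state, step
        prediction += lr * error              # revise the model toward the world
        world_state -= lr * (world_state - prediction)  # act: nudge the world toward the model
    return prediction, world_state, max_steps

pred, world, steps = active_inference(world_state=10.0, prediction=2.0)
# model and world converge on a shared value well before max_steps
```

The point of the caricature is only that the same error term can be discharged from either direction, by changing the model or by changing the situation, which is the two-sidedness the paragraph above attributes to the problematic situation.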

Hence, what follows (“the determination of a problem-situation”) is characterized by the generation of “ideas” as part of the inherently progressive nature of inquiry along the lines of continuous prediction or forward-searching (e.g. guessing): “The statement of a problematic situation in terms of a problem has no meaning save as the problem instituted has, in the very terms of its statement, reference to a possible solution” (108). Put differently, the one (problem) never occurs without the other (solution); we actively infer solutions because we have problems. Dewey (110-111) uses this to critique all prior conceptions of “ideas” in the western philosophical tradition (empiricists, rationalists and Kantians) for not seeing how perceptions and ideas function correlatively rather than separately:

Observations of facts and suggested meanings or ideas arise and develop in correspondence with each other. The more the facts of the case come to light in consequence of their being subjected to observation, the clearer and more pertinent become the conceptions of the way the problem constituted by these facts is to be dealt with. On the other side, the clearer the idea, the more definite … become the operations of observation and of execution that must be performed in order to resolve the situation (109).

Ideas are not removed from the situation, or entirely defined by the situation. Rather, the most important thing about them is that they have a direction in relation to the situation. But this only works if they suggest a forward-facing (temporally speaking) cognitive mechanism, which again seems perfectly analogous to a predictive function that is trying (slowly) to minimize error. Dewey seeks to redeem the role of “suggestions” (which have “received scant courtesy in logical theory”) not by giving them the diminished importance of half-completed ideas, but by elevating them to “the primary stuff of logical ideas.” In this sense, suggestions demonstrate how “perceptual and conceptual materials are instituted in functional correlativity to each other in such a manner that the former locates and describes the problem which the latter represents a method of solution” (111; emphasis added).

To “reason,” then, means to examine the meaning of ideas according to their simultaneous statement of problem and solution (e.g. “relationally”). For Dewey, this process involves “operating with symbols (constituting propositions) … in the sense of ratiocination and rational discourse.” If a suggested meaning is “immediately accepted,” then the inquiry will end prematurely. Full reasoning consists of a kind of “check upon immediate acceptance [as] the examination of … the meaning in question” according to what it “implies in relation to other meanings in the system of which it is a member” (111). By “meaning” Dewey refers to symbols in a semiotic sense, or the connection of sign and object in a non-problematic or habitual way. This therefore opens those habitual associations up to transformation as the situation becomes more determinate. Dewey also emphasizes how symbols perform the semiotic function of “fact-meanings.” The process of inquiry subjects these connections to ideas that are “operational in that they instigate and direct further operations of observation; they are proposals and plans for acting upon existing conditions to bring new facts to light and to organize all the selected facts into a coherent whole” (112-113). The process remains forward-facing, which means that there can be “trial facts” taken on board with a certain provisionality: “they are tested and ‘proved’ with respect to their evidential function.” Ideas and facts, then, become “operative” in the process of inquiry (problem-solving) “to the degree in which they are connected with experiment” (114). Again, all of this forward momentum, or searching, appears to be fueled by advancing and constant prediction.

Thus, for Dewey, the transformation of the situation into a “determinate” one involves a change of “symbols” in the form of habitual associations (sign to object), which themselves always remain provisional and never fully determinate (114-115). This is what alters our “self” (interpretant) within the situation: we are no longer in a doubtful state but in what we might call a “confident” state, signifying a kind of assurance of action in relation to the situation.

Thus, having passed through the stages of inquiry, and with new habitual associations, we are now predicting the situation well within the continuous flow of action. In Dewey’s terms, problem and solution effectively merge at the end of inquiry, and the forward-facing search ends. But we can translate the folk terms that Dewey uses here almost directly into the more technical terms that form the basis of predictive processing: the problem or trial-situation ends with the erasure of prediction error by a change in the generative model, such that the tiered coding of sensory input will generate the perceptions that the generative model expects. X is now Y in a non-problematic way, which for Dewey means that it becomes a “symbol” as a connection that is now habitual (see also Peirce CP 2: 234). Inquiry in “common sense” and inquiry in science are not different, according to Dewey; they simply involve differences in problems. For common sense, problems appear from symbols as the habitual culture of groups (115-116).

This can lead us to make an even more radical claim: prediction in action and prediction in sociology are also not different; they simply involve differences in problems between those that occur in the continuous course of action, and those that are deliberately manufactured for the purposes of staging trials and leveraging them in order to make knowledge claims. Shared generative models also appear among actually-existing groups that make similar predictions, perceive similar things based on similar error, make similar active inferences, and therefore “solve problems” in ways that have a family resemblance. 

It seems, then, that without too much presumptuousness we can take Dewey’s original definition of inquiry and retranslate it into its implied cognitive terms:

The controlled or directed transformation of an indeterminate situation into one that is so determinate in its constituent distinctions and relations as to convert the elements of the original situation into a unified whole (Dewey 1938: 104-105).

We can translate this into a general statement about problem-solving as follows:

The higher order transformation of a situation with lots of prediction error into a generative model that is able to convert the elements of the original situation into a predictable whole.  
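
As a numerical gloss on this retranslated definition (my illustration only, with invented data): an “indeterminate situation” can be read as a stream of observations that a generative model predicts badly, and inquiry as the directed updating of that model until residual prediction error is small enough for the situation to hang together as a predictable whole.

```python
# A minimal sketch: the "generative model" here is just a single
# expected value, revised by error-driven updates until the stream of
# observations (the "situation") becomes predictable. Data are invented.

def inquiry(observations, guess=0.0, lr=0.2, passes=50):
    """Reduce prediction error by gradually revising the model."""
    model = guess
    for _ in range(passes):
        for obs in observations:
            model += lr * (obs - model)   # error-driven update
    return model

situation = [4.1, 3.9, 4.0, 4.2, 3.8]     # hypothetical sensory stream
model = inquiry(situation)
residuals = [abs(obs - model) for obs in situation]
# model settles near 4.0; residual errors are small and stable
```

The caricature is only meant to show that “determinateness” can be read as low, stable prediction error rather than as a qualitative state.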

A follow-up post will discuss the broader significance of this translation in relation to pragmatist theories of action.

References

Clark, Andy. 2013. “Whatever next? Predictive Brains, Situated Agents, and the Future of Cognitive Science.” The Behavioral and Brain Sciences 36(3):181–204.

Daston, Lorraine. 1988. Classical Probability in the Enlightenment. Princeton University Press.

Dewey, John. 1938. Logic: The Theory of Inquiry. New York: Holt, Rinehart and Winston.

Dewey, John. 1922. Human Nature and Conduct. New York: Henry Holt.

Friston, Karl. 2009. “The Free-Energy Principle: A Rough Guide to the Brain?” Trends in Cognitive Sciences 13:293–301.

Hacking, Ian. 1975. The Emergence of Probability: A Philosophical Study of Early Ideas about Probability, Induction and Statistical Inference. Cambridge University Press.

Hofman, Jake M., Amit Sharma, and Duncan J. Watts. 2017. “Prediction and Explanation in Social Systems.” Science 355(6324):486–88.

Hohwy, Jakob. 2020. “New Directions in Predictive Processing.” Mind and Language 35:209–23.

Parsons, Talcott. 1937. The Structure of Social Action. New York: Free Press.

Wiese, Wanja, and Thomas Metzinger. 2017. “Vanilla PP for Philosophers: A Primer on Predictive Processing.” In Philosophy and Predictive Processing. Frankfurt am Main: MIND Group.

Williams, Daniel. 2018. “Pragmatism and the Predictive Mind.” Phenomenology and the Cognitive Sciences 17:835–59.

The Cognitive Hesitation: or, CSS’s Sociological Predecessor

Simmel is widely considered the classical sociological tradition’s seminal figure for social network analysis. As certain principles and tools of network analysis have been transposed to empirical domains beyond their conventional home, Simmel has also become the classical predecessor for formal sociology, giving license to the effort and providing a host of formal techniques with which to pursue the work (Erikson 2013; Silver and Lee 2012). As Silver and Brocic (2019) argue, part of the appeal of Simmel’s “form” is its pragmatic utility and adaptability. Simmel demonstrates this in applying different versions of form to different empirical objects (e.g. “the stranger” versus “exchange”). This suggests that we need not make much headway on deciphering what “form” actually is in order to practice a formal sociology.

Though it may not seem like it, these recent efforts at formal sociology find their heritage in a sometimes rancorous debate etched deeply into Simmel’s cross-Atlantic translation into American sociology (and therefore not insignificant in shaping cross-field perceptions of sociology as “science”). Historically, this debate has set proponents of a middle-range application of form against those who appeal to a more diffuse concern with the status of form. The debate has proven contentious enough to include at least one occasion of translation/retranslation of terminology from Simmel’s work. Robert Merton retranslated the German term übersehbar to mean “visible to” (in the sentence from the “Nobility” [or “Aristocracy”] discussion in Soziologie: “If it is to be effective as a whole, the aristocratic group must be ‘visible to’ [übersehbar] every single member of it. Each element must be personally acquainted with every other”) instead of what Kurt Wolff had originally translated as “surveyable by.” For various reasons, “visible to” carried far less of a “phenomenological penumbra” and fit with Merton’s interest (e.g. disciplinary position-taking) in structure, but arguably did not match Simmel’s own interest in finding the “vital conditions of an aristocracy” (see Jaworski 1990).

More recently, a kind of détente has emerged between the two sides. To the degree that there is any concern for the status of “form” itself, formal sociology has taken on board what is arguably the most thoroughgoing defense of Simmel’s “phenomenology” to date: the philosopher Gary Backhaus’ 1999 argument for Simmel’s “eidetic social science.” Backhaus reads Simmel with the help of Edmund Husserl, the founder of phenomenology, and therefore against the grain of what the philosophically-minded had conventionally read as Simmel’s more straightforward neo-Kantianism. In part, this détente with phenomenology has come about because Backhaus made it easy. His reading does not require that formal sociology do anything that would deviate from network analysis’ own bracketing of the content of social ties from the formal pattern of social ties. His reading of Simmel also remains compatible with a pluralist/pragmatist application of form.

The purpose of this post is threefold: (1) to question whether the status of Simmel’s “form” is philosophical and therefore capable of being resolved into either a phenomenology or a neo-Kantianism; (2) to situate Simmel as part of a lost 19th century interscience (volkerpsychologie) that potentially makes “form” cognitive, rather than philosophical, in a surprisingly contemporary way; and (3) to perhaps in the process rejuvenate theoretical interest in the status of “form” separate from its application.

Backhaus (1999) argued that Simmel’s formal sociology has an “affinity with” the phenomenology of Husserl, in particular the intentional relationality of mental acts, or the structures of pure consciousness (eide) that, in Simmel’s case, apply to forms of association. Instead of identifying empirical patterns or correlations, formal sociology registers the “cognition of an eidetic structure” (e.g. of “competition,” “conflict,” or “marriage”) (Backhaus 273). Like Husserl’s phenomenology, Simmel identifies these structures as transcendent in relation to particular, sensible and empirical instantiations; but neither does he suggest that forms are “empirical universals,” invariant across and independent of their instantiations. If that were the case, then formal sociology would be an empirical science with a “body of collective positive content” that predetermines what can and cannot be present in a specific empirical setting and therefore what counts as having a “legitimate epistemic status” (such as the causes and effects of conflict). Simmel’s emphasis, by contrast, falls on the analysis of form as it exhibits a “necessary structure” and allows the empirical “given” to appear as it does (Backhaus 264).

More generally, Backhaus concludes as follows:

The attempt to fit Simmel’s a priori structures of the forms of association into a Kantian formal a priori is not possible. Both … interactional and cognitive structures characterize the objects of sociological observation and are not structures inherent to the subjective conditions of the observer (Backhaus 262).

Backhaus’ argument here has given a certain license to formal sociology to spread beyond the friendly confines of network analysis. That spread is contingent on finding forms “not constituted by transactions but instead [giving] form to transactions—because they posit discrete, pregiven, and fixed entities that exist outside of the material plane prior to their instantiation” (Erikson 2013: 225). To posit these entities does not require finding a cognitive structure for the purposes of meaningful synthesis (in Kantian pure cognition). Simmel refers to forms of sociation as instead “[residing] a priori in the elements themselves, through which they combine, in reality, into the synthesis, society” (1971: 38).

So here is the puzzle. If we follow Backhaus’ lead and do not read forms of sociation as Kantian categories, then we commit (eo ipso) to a priori elements as part of social relations, not simply in faculties of reason. How is that possible? Backhaus interprets this as being equivalent to the material a priori proposed by Husserl, in which forms of sociation are analogous to intentional objects (1999: 262). In principle, there is much to recommend this argument, not least that it resonates with Simmel’s methodological pluralism vis-a-vis form (Levine 1998). However, the best that Backhaus can do to support a Husserl/Simmel connection is to say that Simmel’s thought has an “affinity with” Husserl’s phenomenology. As he writes elsewhere:

Simmel was neither collaborator nor student of Husserl, and Simmel’s works appear earlier than the Husserlian influenced philosophers who were to become the first generation phenomenologists. Based on the supposition that Simmel’s later thought does parallel Husserl’s, can it be said that Simmel was coming to some of the same conclusions as Husserl, but yet did not recognize that what he was doing was unfolding an emergent philosophical orientation? An affirmative answer appears plausible. Yet, it is likely that Husserl was an influence on Simmel, without receiving public acknowledgement, since Simmel infrequently cites other thinkers within the body of his texts or within his limited use of footnotes (2003: 223-224).

And yet there is no available evidence (to date) that can document a direct influence of Husserl’s phenomenology on Simmel’s theory of forms (and/or vice versa). Beyond this, the timelines for such an influence do not exactly match, although Simmel and Husserl were contemporaries and, by all accounts, friends. While they did exchange letters, of the ones that survive there is (at least according to one interpretation) nothing of “philosophical value” in them (Staiti 2004: 173; though see Goodstein 2017: 18n9). Simmel’s concern with “psychology” long predates the publication of Husserl’s Logical Investigations in 1900-01. Simmel’s Philosophy of Money was published around the same time (1900) and marked his most extensive engagement with formal sociology to that point (as Simmel called it, “the first work … that is really my work”). Husserl, however, does not discuss the material a priori in Logical Investigations. In fact, the key source for Husserl’s claims about it does not appear until much later: his 1919-1927 Natur und Geist lectures (Staiti 2004: chap 5). While Husserl does discuss “eidetic ontologies” in the first volume of Ideas Pertaining to a Pure Phenomenology and to a Phenomenological Philosophy (1912), written during Husserl’s tenure at the University of Göttingen (1901-1916), it seems relevant that Simmel’s two key discussions of “form” (in the Levine reader: “How is Society Possible?” and “The Problem of Sociology”) are both found in Simmel’s 1908 Soziologie: Untersuchungen über die Formen der Vergesellschaftung and draw from material that appears much earlier (Goodstein 2017: 66).

None of this confirms, or definitively puts to rest, an influence of Husserl on Simmel that, as Backhaus suggests, goes uncited and cannot be traced through published work. These details of a connection still without an authoritative answer make Goodstein (2017: 18n9) propose tracking down the personal and intellectual relationship between Husserl and Simmel as a “good dissertation topic.” At the very least, this suggests that there might be more to the story apropos the status of “form” than we currently understand, which is enough to reopen a (seemingly) closed case on which formal sociology (at least partially) rests, making this about a lot more than just an obscure footnote in the boring annals of sociology. It also seems relevant to emphasize a possible different reason why Husserl’s eidos and Simmel’s forms seem so similar but in fact are not.

There is a definite parallel between Husserl and Simmel in that both took positions against the experimental psychology of the time. But to assume that they therefore took the same position (one that would be credited to Husserl, and directed against “psychologism” in toto) makes sense in retrospect only because the historical context has not yet been described thoroughly enough for us to see a different position available at the time, one whose content could be described (in the negative) as not experimental psychology, not phenomenology, and not descriptive psychology. On these terms, this remains effectively a non-position in the present-day disciplinary landscape, with experimental psychology, phenomenology and descriptive psychology (qua culture) all being more or less recognizable between then and now. This is only true, however, if we omit a nascent position (still) to be made now, possibly as cognitive social science (see Lizardo 2014), and which was available then as volkerpsychologie.

All of this suggests contextual reasons not to settle for reading Simmel as a phenomenologist. What I want to propose is that there are also further biographical reasons that have only recently come to light. Elizabeth Goodstein points in this direction with her insight that when Simmel uses the term “‘a priori … this usage … extends the notion of epistemological prerequisites to include their cultural-psychological and sociological formation [which] had its intellectual roots in Volkerpsychologie” (Goodstein 2017: 65; see also Frisby 1984). Goodstein here draws from the late German scholar Klaus Köhnke in what is arguably the most authoritative source on Simmel’s early influences: Köhnke’s untranslated Der junge Simmel in Theoriebeziehungen und sozialen Bewegungen (1996). Goodstein interprets this reading of Simmel’s a priori (both non-Kantian and non-phenomenological) as “[recognizing] the constructive role of culture and narrative framework in constituting and maintaining knowledge practices” (65). Even this is not completely satisfactory, however, as Köhnke (1990) himself suggests by observing the direct influence of volkerpsychologie on Simmel’s appropriation of two of its major themes—“condensation” and “apperception”—which can be categorized as “cultural” (in any contemporary meaning of the word) only very partially (see also Frisby 1984).

So where are we? Simmel’s a priori is essential to formal sociology, but it is not Kantian. We also have little reason to believe that it is phenomenological, though this currently provides its best defense. It also cannot be translated as cultural, at least not in a contemporary sense. What we are left with is the influence of volkerpsychologie as part of Simmel’s intellectual history.

We are helped in defining volkerpsychologie by the fact that it has recently become a topic of conversation among historians of science (see HOPOS Spring 2020). This interest has been piqued by a recognition of volkerpsychologie as a kind of interscientific space in the developing universe of the human sciences in the 19th century. Specifically, it was not experimental psychology (Wilhelm Wundt) and not descriptive psychology (Wilhelm Dilthey). In the latter sense, it was not an antidote to experimentalism and did not center around “understanding.” In the former sense, it promoted an explanatory framework, but outside of the laboratory. Officially, volkerpsychologie was initiated by the philosophers and philologists Moritz Lazarus and Heymann Steinthal in the mid-19th century. When Simmel entered Berlin University in 1876, his initial interest was history, studying with Theodor Mommsen. His interests soon shifted to psychology, however, and Lazarus became his main teacher.

The subsequent influence of Lazarus and Steinthal on Simmel is clear. Much of Simmel’s initial work in the early 1880s (including his rejected dissertation on music; Simmel [1882] 1968) was published in the journal that Lazarus and Steinthal founded and edited: Zeitschrift für Völkerpsychologie und Sprachwissenschaft (Reiners 2020). Simmel sent his essay (1892) on a nascent sociologie (“Das Problem der Sociologie”) to Lazarus on his seventieth birthday, adding a letter in which Simmel wrote that the essay constituted “the most recent result of lines of thought that you first awakened in me. For however divergent my subsequent development became, I shall nonetheless never forget that before all others, you directed me to the problem of the superindividual and its depths, whose investigation will probably fill out the productive time that remains to me” (quoted in Goodstein 2017: 65). In 1891, Steinthal directed readers of the journal’s successor (the Zeitschrift des Vereins für Volkskunde) to the “work of Georg Simmel” in order to see how volkerpsychologie and the nascent field of sociology both search for “the psychological processes of human society” (Kusch 2019: 264).

If Simmel was influenced by volkerpsychologie, he was far from alone (Klautke 2013). Durkheim was familiar with volkerpsychologie, particularly the work of Lazarus and Steinthal. In fact, he cites (1995/1912: 12n14) volkerpsychologie in the Elementary Forms as first advancing the hypothesis of “mental constitution [as depending] at least in part upon historical, hence social factors … Once this hypothesis is accepted, the problem of knowledge can be framed in new terms.” Durkheim references the Zeitschrift für Völkerpsychologie and mentions Steinthal in particular. Franz Boas (1904), meanwhile, gives “special mention” to volkerpsychologie as a major influence on the history of anthropology for proposing “psychic actions that take place in the individual as a social unit,” also referencing the work of Steinthal (520). For his part, Bronislaw Malinowski had studied with Wundt in Leipzig and started an (unfinished) dissertation in volkerpsychologie (Forster 2010: 204ff). Boas and Malinowski provide a direct link from Lazarus and Steinthal’s volkerpsychologie to the “culture concept” (see Stocking 1966; Kalmar 1987). Mikhail Bakhtin also mentions Lazarus and Steinthal’s volkerpsychologie as an influence on his definition of dialogics and speech genres, or “problems of types of speech,” in that volkerpsychologie anticipates “a comparable way of conceptualizing collective consciousness” (see Reitan 2010).

This historiography thus finds the influence of volkerpsychologie on a variety of recognized disciplines and influences that reach into the present. More recent efforts are able to distinguish that influence from the influence of descriptive psychology, which is well-documented. Volkerpsychologie constituted a space of possibility in human science that did not settle into the disciplinary arrangement of the research university that still persists largely unchanged into the present (Clark 2008). As Goodstein (2017) notes, Simmel himself mirrors this with an oeuvre that remains unrecognizable from any single disciplinary guise. If Simmel did not identify with volkerpsychologie when bureaucratic requirements compelled him to declare a scholarly identity, this was at least partially because of the association of volkerpsychologie with scholars of Jewish heritage (including Lazarus and Steinthal), combined with prevailing anti-Semitism, with which Simmel was all too familiar (Kusch 2019: 267ff). Volkerpsychologie itself would later be terminologically appropriated by the Nazified “volk,” which further contributed to the erasure of its 19th century history.

The purpose of recounting this history (obscure no doubt) is to perhaps rejuvenate interest in Simmel’s formal approach as more appropriately situated within a disciplinary space that anticipates cognitive social science. The ramifications of this are far beyond the scope of this post to draw out in sufficient detail. That will be saved for a later post (maybe). To close, I’ll just sketch one possible implication, using Omar’s recent distinction between “cognitive” and “cultural kinds.”

To make that distinction requires some way of distinguishing the cognitive from the cultural, i.e. giving it a “mark.” The philosopher Mark Rowlands (2013: 212) attempts this as follows: what marks the cognitive is “(1) the manipulation and transformation of information-bearing structures, where this (2) has the proper function of making available, either to the subject or to subsequent processing operations, information that was hitherto unavailable, where (3) this making available is achieved by way of the production, in the subject of the process, of a representational state and (4) the process belongs to a cognitive subject.” Rowlands subscribes to extended, enactive, embodied and embedded (4Es) cognition in making this argument, in which the key claim is not about “the mind” but about “mental phenomena.”

The proposal here is that a volkerpsychologie reading could more accurately situate “form” as bearing something like a “mark of the cognitive” rather than the material a priori. For his part, Backhaus (1999) is careful to bracket the level of eidos from what he calls psychological associations and empirical universals. Perhaps what would be identified as form could be empirically identified as carrying cognitive content as “information-bearing structures.” This suggests an alternate way of finding a priori conditions in social relations. The problem is that this would commit a far more egregious “reading into” Simmel than reading Husserl into him. Any such effort would erase the historicism that guides my critique of Backhaus.

However, to the degree that volkerpsychologie is situated in a disciplinary space similar to cognitive social science (akin to 4Es cognition), this might lessen the violation. One historical effort (Kusch 2019) reads much of the original German-language research, published alongside Simmel’s own, and finds general commitments to relativism and materialism, meaning that (following the “strong” version of Lazarus and Steinthal) volkerpsychologie finds apperceptions “compressed” in even unproblematic forms of consciousness and locates these in an “objective spirit” as language, institutions and tools. Stronger versions also took umbrage at a normative application of volkerpsychologie because this arbitrarily bracketed (to an empirical context) an explanatory focus that endorsed only a relativist metaphysics. Stronger versions even took a de facto Kantian critique a step further in attempting psychological explanations for what could be posited through logical inference (like freedom of the will). This did not mean resorting to cultural explanation, however. In fact, Dilthey distanced himself from volkerpsychologie because of its explanatory thrust. He developed his more “descriptive” approach (in part) in opposition to it. Strong versions of volkerpsychologie attempted generative explanations of intuitions derived from an original (empirical) context.

If there is any legitimate parallel between volkerpsychologie and formal sociology, then “form” could be given an entirely different treatment: conveying cognitive kinds that, among other things, allow for instances of particular cultural kinds.


References

Backhaus, Gary. (1999). “Georg Simmel as an Eidetic Social Scientist.” Sociological Theory 16: 260-281.

____. (2003). “Husserlian Affinities in Simmel’s Later Philosophy of History: The 1918 Essay.” Human Studies 26: 223-258.

Clark, William. (2008). Academic Charisma and the Origins of the Modern Research University. UChicago Press.

Erikson, Emily. (2013). “Formalist and Relationalist Theory in Social Network Analysis.” Sociological Theory 31: 219-242.

Frisby, David. (1984). “Georg Simmel and Social Psychology.” History of the Behavioral Sciences 20: 107-127.

Goodstein, Elizabeth. (2017). Georg Simmel and the Disciplinary Imagination. Stanford UP.

Hopos (Special Issue: Descriptive Psychology and Volkerpsychologie: In the Contexts of Historicism, Relativism and Naturalism). Spring 2020.

Jaworski, Gary. (1990). “Robert Merton’s Extension of Simmel’s Ubersehbar.” Sociological Theory 8: 99-105.

Kalmar, Ivan. (1987). “The Völkerpsychologie of Lazarus and Steinthal and the Modern Concept of Culture.” Journal of the History of Ideas 48: 671-690.

Klautke, Egbert. (2013). “The French reception of Völkerpsychologie and the origins of the social sciences.” Modern Intellectual History 10: 293-316.

Kohnke, Klaus. (1990). “Four Concepts of Social Science at Berlin University: Dilthey, Lazarus, Schmoller and Simmel.” in Georg Simmel and Contemporary Sociology.

Kusch, Martin (2019). “From Volkerpsychologie to the Sociology of Knowledge.” Hopos 9: 250-274.

Lizardo, Omar. (2014). “Beyond the Comtean Schema: The Sociology of Culture and Cognition Versus Cognitive Social Science.” Sociological Forum 29: 983-989.

Reiners, Stefan. (2020). “‘Our Science Must Establish Itself’: On the Scientific Status of Lazarus and Steinthal’s Völkerpsychologie.” Hopos 10: 234-253.

Rowlands, Mark. (2013). The New Science of the Mind: From Extended Mind to Embodied Phenomenology. MIT Press.

Silver, Daniel and Monica Lee. (2012). “Self-relations in Social Relations.” Sociological Theory 30: 207-237.

Silver, Daniel and Milos Brocic. (2019). “Three Concepts of Form in Simmel’s Sociology.” The Germanic Review 94: 114-124.

Staiti, Andrea. (2004). Husserl’s Transcendental Phenomenology: Nature, Spirit and Life. Cambridge University Press.

Stocking, George. (1966). “Franz Boas and the Culture Concept in Historical Perspective.” American Anthropologist 68: 867-882.

An Argument for False Consciousness

Philosophers generally discuss belief-formation in one of two ways: internalist and externalist. Both positions are concerned with the justification of the beliefs that a given agent purports to have. Internalists and externalists dispute the kinds of justification that can be given to a belief, in order to grant or withhold epistemic justification for the belief in question. For the internalist, a belief is justified if the grounds for it come from something internal to the believer herself which she can control. For the externalist, belief can be justified without such internal support. We can still be justified in believing something even if there are no grounds for belief that we can individually control. Between the internalist and externalist, “justifiability” concerns whether a belief can be present or whether what looks like belief is really something else (e.g. “unfounded hunch,” “dogmatism,” “false consciousness”).

Is such a dispute relevant for sociology? The answer, I argue, must be an unqualified yes: such a dispute is very relevant for sociology, but to see why requires a significant change in what it means to justify a belief. As a simple causal statement, sociology seems to support a belief externalism. After all, sociologists are in the business of describing beliefs that find presumably external sources in things like culture, meaning structures, and ideology. Yet, as a matter of action, sociologists seem more inclined toward belief internalism. The beliefs that drive agency are ones that agents themselves seem to control, as internal mental states, at least to the degree that they have a motivation to act and are not “cultural dopes” simply going through the motions. 

This is not a contradiction, it seems, because sociologists do not claim to be in the business of evaluating whether belief is justifiably present or not. In most cases, belief is unproblematically present as a matter of course. Sociologists are far more concerned with belief as an empirical process and beliefs as empirical things that can be used to explain other things. When confronted with questions about the “evaluation” or “justification” of beliefs, sociologists tend to think in terms of “value-neutrality.” The discipline can explain beliefs with even the most objectionable content without evaluating whether they are good or bad in a moral sense, or true or false in an epistemic sense. As some have suggested, not being committed to value-neutrality about beliefs would change our questions entirely and make for a very different discipline (see Abend 2008). 

I want to claim that there is a different way in which sociologists do evaluate beliefs (quite radically in fact) for the simple fact that they commit to belief externalism. This carries significant stakes for sociology as it touches upon a way in which the discipline recognizes and legitimates the presence of belief and by doing so countervails efforts not to recognize it or recognize it in a different way.

Consider a few vignettes (adapted from Srinivasan 2019a):

RACIST DINNER TABLE: A young black woman is invited to dinner at her white friend’s house. Her host’s father seems polite and welcoming, but over the course of the dinner the guest develops the belief that her friend’s father is racist. Should the guest be pressed on the sources of this belief, she says she simply “knows” that her friend’s father is racist. In fact, her friend’s father is racist though his own family does not know it.

CLASSIST COLLEGE: A working class student attends a highly selective college that prides itself on its commitment to social justice. She is assured by her advisor that while much of the student body comes from the richest 10%, she will feel right at home. Over the course of the first month of her attendance, however, the student experiences several instances where her class background becomes an explicit point of attention, ridicule and exclusion. She comes to believe that the university is not meant for those who come from her background. She tells this to her advisor who tells her in turn that, perhaps, she is being too sensitive. No one is trying to shun her.

DOMESTIC VIOLENCE: A woman in a poor rural village is regularly beaten and abused by her husband. Her husband expresses regret for the abuse, but explains to his wife that she “deserves” it based on her not being dutifully attentive to him. The woman believes that she only has herself to blame, an opinion echoed by her family and friends. She has never heard a contrary opinion.

Any sociologist who reads these vignettes and is then asked “Are beliefs present?” would very likely say “of course beliefs are present.” In fact, that question would probably be the furthest thing from their minds. A sociologist would probably find such a question annoying and of dubious validity. There are far more pressing matters in these vignettes. Here is my wager: in saying that belief is present, sociologists actually make a radical evaluation of these beliefs, because they commit to belief externalism. In other words, they commit to the view that belief can be present even if the believer does not have grounds for belief that they can individually control.

To consider the significance of this, consider some arguments in the philosophy of mind that are specifically meant to discredit belief externalism. As Srinivasan explains, the three cases above seem directly analogous to three famous thought experiments that each have the purpose of showing how belief cannot be present under the circumstances found in each of the vignettes (though the third is slightly tricky). A relevant disanalogy will help show why sociology’s commitment to belief externalism is significant and radical. 

RACIST DINNER TABLE corresponds to the CLAIRVOYANT experiment (Bonjour 1980) in which an individual believes he completely understands a certain subject matter under normal circumstances simply because he does not possess evidence, reasons or counterarguments of any kind against the possibility of his having a clairvoyant cognitive power. “One day [the clairvoyant] comes to believe that the President is in New York City, though he has no evidence either for or against this belief. In fact the belief is true and results from his clairvoyant power, under circumstances in which it is completely reliable.” To say the belief is justified in this instance is absurd, and this seems to prove the necessity to “reflect critically upon one’s beliefs … [in order to] preclude believing things to which one has, to one’s knowledge, no reliable means of epistemic access” (Bonjour 1980: 63). To have a reliable means of epistemic access (e.g. this is why I believe this) is to have an internalist grounds for belief that one can control. Without it, we don’t have beliefs but “unfounded hunches.”

CLASSIST COLLEGE corresponds to the DOGMATIST experiment (Lasonen-Aarnio 2010) in which a museum visitor forms the belief that a given sculpture is red, though she is later told by a museum staff member that when she saw the sculpture it had been illuminated by a hidden light that momentarily made it seem red when in fact it is white. Even when told this, however, she persists in her belief that the sculpture is red. Such a belief is not justified because the internalist grounds that would have made it justifiable no longer apply. To justifiably believe that the sculpture is red, the museum patron must not have witnessed the sculpture in its white state or been told by the museum staff member why her belief is inaccurate. She is a dogmatist because, although she has been told exactly this, her belief persists nevertheless.

DOMESTIC VIOLENCE corresponds to the famous BRAIN-IN-A-VAT experiment. Someone will form beliefs when they are trapped (Matrix-style) in a liquid goo vat that feeds electrochemical signals directly to their nervous system. For some internalists, belief is justifiably present in such circumstances based on the internalist criteria that the person in the vat will have “every reason to believe [that] perception is a reliable process. [The] mere fact unbeknown to [them that] it is not reliable should not affect the justification” (Cohen 1984: 81-82). 

In all three cases, there are analogous circumstances between the vignettes and the thought experiments. The question is why it seems unproblematic to ascribe beliefs in the vignettes while it seems far more problematic to ascribe them in the thought experiments. The answer comes in a relevant disanalogy: the vignettes account for belief-formation by referencing a relational process, of some kind, that an internalist simply cannot recognize and the externalist in these cases only latently recognizes. 

As suggested above, for a sociologist to say that “yes beliefs are present” in such circumstances as RACIST DINNER TABLE, CLASSIST COLLEGE, and DOMESTIC VIOLENCE is unproblematic to the point of absurdity. Yet, if the thought experiments reveal anything, they reveal why attributing belief in these circumstances is really saying something. And it says something without having to rely on CLAIRVOYANT, DOGMATIST or BRAIN-IN-A-VAT kinds of fallacies. This is because sociologists have a very important thing in their back pocket, something deeply familiar to them: the ability to account for belief-formation, again, in “terms of structural notions rather than individualist ones.”

This may all seem obvious enough, but it actually opens a large and important horizon of which Omar and I (Strand and Lizardo 2015; Strand 2015) have only scratched the surface. Belief-formation (and desire-formation) is a primary sociological problem because accounting for the presence of belief is a very good way of sorting out distinctively social effects of various theoretically important kinds that also happen to be inextricably cognitive. But let’s take this one step further. The internalist critique of externalism revolves around the fact that externalists can only describe the presence of belief under such and such circumstances. It is not a normative theory that can be “action-guiding [and] operational under conditions of uncertainty and ignorance” (Srinivasan 2019a). Those who have internalist grounds for belief can presumably apply them in conditions of uncertainty and ignorance. Hence, belief should be formed on grounds of internal criteria and the subject’s individual perspective.

But consider what externalism might look like as a normative theory. What would it mean for beliefs formed without internal criteria and only through relationships with others to carry an epistemic good greater than or equivalent to that of beliefs formed through internal criteria that otherwise seem far more respectable, ethically speaking (insofar as they allow us to attribute blame and responsibility)? As the comparison between BRAIN-IN-A-VAT and DOMESTIC VIOLENCE suggests, internalist criteria can obviously mislead the attribution of belief in circumstances where they do not apply and where the recognition of externalist grounds for belief can reveal false consciousness. More specifically, the RACIST DINNER TABLE/CLAIRVOYANT and CLASSIST COLLEGE/DOGMATIST examples suggest that the externalist belief-formation evidenced in these circumstances carries a distinct epistemic good. None of this should be unfamiliar to sociologists. Sociologists are often the ones who recognize, defend and legitimate the presence of belief in these circumstances, despite all countervailing forces.

All of this rests on a certain genealogical anxiety, however, as Srinivasan (2019b) appreciates. As a field, cognitive science massively contributes to this anxiety. For externalism of this sort, the sociological sort, makes a radical claim to the degree that it departs from folk-psychological familiarity, and its overlap with ethical respectability, at least should we try to take it to its logical conclusion. We must conclude that our beliefs—even our good ones, even our “action-guiding” ones—result from some kind of “lucky” or “unlucky” inheritance. They must be genealogical, in other words, and cannot result from some internalist criteria that remain indelibly ours, under our control, and which reflect kindly upon us (or poorly, depending on how lucky we are). I will save discussion of these implications for another post.


References

Abend, Gabriel. (2008). “Two Main Problems in the Sociology of Morality.” Theory and Society 37: 87-125.

BonJour, Laurence. (1980). “Externalist Theories of Empirical Knowledge.” Midwest Studies in Philosophy 5: 53-73.

Cohen, Stewart. (1984). “Justification and Truth.” Philosophical Studies 46: 279-296.

Lasonen-Aarnio, Maria. (2010). “Unreasonable Knowledge.” Philosophical Perspectives 24: 1-21.

Strand, Michael. (2015). “The Genesis and Structure of Moral Universalism: Social Justice in Victorian Britain, 1834-1901.” Theory and Society 44: 537-573.

Strand, Michael and Omar Lizardo. (2015). “Beyond World Images: Belief as Embodied Action in the World.” Sociological Theory 33: 44-70.

Srinivasan, Amia. (2019a). “Radical Externalism.” Philosophical Review.

_____. (2019b). “Genealogy, Epistemology, and Worldmaking.” Proceedings of the Aristotelian Society 119: 127-156.

Practice Theory versus Problem-Solving

In 2009, Neil Gross argued that the critique of action as a calculation of means to ends, which had been ongoing for at least the prior thirty years, had been successful. Not only that, the insistence that “action-theoretical assumptions necessarily factor into every account of social order and change and should therefore be fully specified” had also been successful. Together, these successes made “how to conceptualize action in terms of social practices?” the main question confronting theorists. Gross (2009) proposed his own answer: we can conceptualize action in terms of social practices by understanding social practice in terms of problem-solving.

Gross’s answer has in many ways been wildly influential, and for good reason. In no insignificant respect, it has settled the debate over the main question of how to conceptualize action as social practice. The empirical application of practice theory in sociology increasingly revolves around a productive focus on problem-solving. But is problem-solving the best way to fully specify the “action-theoretical assumptions [that] necessarily factor into every account of social order and change”?

I will argue maybe not, and I will critique this position in an effort to widen the horizon of relevant practice theories in the field. In a second post, I will make this argument on cognitive grounds through engagement with “predictive processing.” In this post, I will articulate the potential problems with problem-solving primarily through a dialogue between practice theorists and art history. Let’s start with what problem-solving means as a way of conceptualizing action as social practice.

Like all practice theories, problem-solving attempts to bridge the gap between observers and actors without introducing the same attributions that plague a means-ends frame. Bridging that gap is something that all practice theories claim to do. While the reasons for doing this are well-known in Giddens or Bourdieu for instance, consider the art historian Michael Baxandall’s argument for the virtues of a practice theory:

We describe the effect of the picture on us by telling of inferences we have made about the action or process that might have led the picture to being as it is … Awareness that the picture’s having an effect on us is the product of human action … when we attempt a historical explanation of a picture [we] try to develop this kind of thought (1985: 6)

Baxandall (who will play a significant role in the argument that follows) argues here that understanding something (a painting, a text, a state … anything in principle) as the mode of action that creates or generates it is the best understanding that we can obtain. A practical understanding is therefore different from and surpasses an interpretive understanding or a realist understanding of the same things. A similar argument, for instance, is proposed by Bourdieu (echoing Piaget) and his focus (1996) on “generative understanding.”

If we were to summarize (roughly) the four main characteristics of the problem-solving frame, they could include the following:

(1) Ends/means are endogenous to situations, not external to them (Whitford 2002)

(2) Proximate goals shape final goals; ends, ambitions, interests (etc) reflect the tools at hand rather than vice versa

(3) Action involves the recombination, transposition and modification of schemas, habits, tools, conventions, cultural objects (etc) to solve problems and allow for the continuation of action

(4) Problematic situations are a point of public access to a logic of practice that can be shared by observers and actors alike (Swidler 2005)

An example of this mode of argument is Richard Biernacki’s (1995) magisterial The Fabrication of Labor, in which he uses a historical comparison of wages based on “piece-rates” versus “the daily expenditure of time” as contrasting solutions to the problem of capitalist labor remuneration in Britain and Germany, respectively, between 1640 and 1914. Both of these formulas comprised “signifying practices incorporated into the concrete practices of work” (Biernacki 1995: 353). As Biernacki argues, attention to practice cannot be neglected, for such solutions were not engineered by capitalism itself.

The brute conditions of praxis in capitalist society, such as the need to compete in a market, did not provide the principles for organizing practices in forms that were stable and reproducible, for by themselves they did not supply a meaningful design for conduct. Rather, practices were given consistent shape by the particular specifications of labor as a commodity that depended, to be sure, upon the general conditions of praxis for their materials, but granted them social consequences according to an intelligible logic of their own (1995: 202)

The brute conditions of capitalist production created a problem situation that involved the commodity status of labor and its economic valuation. Piece-rates (or wages paid per unit of production) and time-wages (or wages paid according to time at work) both provide the “meaningful design for conduct” that solves this problem. Biernacki reveals the extent of these meaningful designs, and how they correspond to this key problem, as “anchors for culture” in both contexts: “through their experience of the symbolic instrumentalities of production … workers acquired their understanding of labor as a commodity” (1995: 383). From lived experience at the “point of production”—given meaningful form by piece-rates and time-wages as the practices of waged labor—“categories of the discourse of complaint,” investment strategies, even architectural designs were all derived. Practices therefore anchor a capitalist system by solving its key problem: how to apply a commodified valuation to human labor.

Biernacki (2005) would further this kind of argument in a later discussion of Baxandall’s analysis of Piero della Francesca’s painting of The Baptism of Christ. Baxandall, as an art historian, is known for not resorting to “meanings” in his non-interpretive approach to painting. In this case, Baxandall writes, “I do not … address the Baptism of Christ as a ‘text,’ either with one meaning or many. The enterprise is to address it as an object of historical explanation and this involves the identification of a selection of its causes” (1985: 31). For Biernacki, Baxandall’s approach proves highly generative. The latter’s analysis of The Baptism of Christ demonstrates “action as a problem-solving contrivance” instead of as a relation of means to ends. As Biernacki writes, “we discover agency in individuals’ creative construal of puzzles and in their unforeseen transposition and modification of schemas.”

In particular, Biernacki zeroes in on the mathematical “schema” commensurazione found in Baxandall’s recounting of Francesca’s painting. To solve the problem of proportionality in the painting (Figure 1), Francesca deploys commensurazione “to cope with [the] problem of crowding on his painting surface … what matters is how Piero resorted to it as a tool for the difficult job of fitting so many figures without congestion onto an exaggeratedly vertical plane” (Biernacki 2005). Or, this is what Biernacki reads from Baxandall’s account. All of this is in support of action as social practice qua problem-solving.

Figure 1: Piero della Francesca, The Baptism of Christ, c-1448-1450 

In many ways Biernacki is ahead of his time in making these arguments. This concern with problem-solving has subsequently come to define the empirical application of practice theory in American sociology. Analytic attention is given to “problem-situations” and the mechanisms of “problem-solving.” Relevant variation involves habits or conventions and actors that “[mobilize] more or less habitual responses” to solve emergent problems (Gross 2009). More broadly, “Cultural objects … are experienced as resonant because they solve problems better than the cognitive schema afforded by objects or habituated alternatives” (McDonnell, Bail and Tavory 2017). Innovation too is a matter of problem-solving. It occurs when “habitual responses to a given situation fail to yield adequate results.” This failure produces a problem situation. Once the problem situation has “emerged … novel ways of responding to it must be discovered through creative understanding, projection and experimentation” (Jansen 2017).

In all of these cases, action is conceived as practice, and practice is conceived as problem-solving. Given the prevailing wisdom, it is difficult to question a problem-solving frame if we seek to apply practice theory. But there has been a prior critique of problem-solving logic that I seek to build on here. This critique also comes from Baxandall (1985). He proposes that we take pause when making practice into a variation on problem-solving because this could run afoul of one of the main tenets of any practice theory: that it follow the actors themselves and not substitute the observer’s perspective for theirs.

For Baxandall (1985), there is a difference between what a problem means for an actor and what it means for an observer. As he puts it, “The actor thinks of ‘problem’ when he is addressing a difficult task and consciously he knows he must work out a way to do it. The observer thinks of ‘problem’ when he is watching someone’s purposeful behavior and wishes to understand: ‘problem-solving’ is a construction he puts on other people’s purposeful activity” (69).

For Baxandall, this makes problem-solving, as an analytic frame, misleadingly attractive. Karl Popper (1978), for instance, uses problem-solving to understand “third world” or objective knowledge, the type that can be reconstructed regardless of who does the observing. The problem-solving frame is enticing for the analytic capacity it seems to promise for making innovation or resonance, in addition to knowledge, meaningful and pragmatic. Innovation does not mean purposefully making a new idea ex nihilo. Resonance does not mean purposefully generating an appeal for something or a connection to it. In all cases, practice is the site of both the actor’s experience and the analyst’s observation. But it can only be this with a bridging concept like problem-solving. However, and again to quote Baxandall, to make problem-solving our analytic frame for understanding these phenomena still “puts a formal pattern on the object of [our] interest” (70).

To argue this point Baxandall uses the example of Pablo Picasso painting Les Demoiselles d’Avignon (1907) and the art collector Daniel-Henry Kahnweiler’s near-contemporary account of Picasso painting it, as told in Kahnweiler’s Der Weg zum Kubismus (1920[1915]). Baxandall accounts for Kahnweiler’s personal relation to Picasso as one of Picasso’s first art collectors and his close friend. Kahnweiler attempts to understand Picasso’s painting as a finished product, but also as something he observes when he visits Picasso’s studio, or when he sits for Picasso as the subject of the Kahnweiler portrait, one of the first and most recognizable attempts at Cubism. Kahnweiler is uniquely close to Picasso’s skill, and Baxandall observes how Kahnweiler applies a problem-solving frame to make sense of what he observes. Picasso, of course, can’t explain his own skill, though the way he refers to it demonstrates how the problem-solving frame is still a “formal pattern” even though it does not look like it.

An attention to ‘problems’ in the observer, then, is really a habit of analysis in terms of ends and means … Picasso went on as he did and ‘found’ conclusions, or pictures; Kahnweiler sought to understand his behavior by forming implicit ‘problems’ (70).

The point is to stress a subtle difference between Picasso painting (a practice) and Kahnweiler using problem-solving to explain Picasso painting. But here we find an important parallel: between Francesca painting the Baptism under the influence of commensurazione, and Picasso painting Les Demoiselles under the influence of non-Euclidean geometry. Picasso’s sketchbooks at the time of his painting Les Demoiselles show sketches of the highly faceted geometrical figures found in Esprit Jouffret’s Traité élémentaire de géométrie à quatre dimensions (1903). This book was an attempt to popularize four-dimensional geometry in the tradition of Poincaré, and it had been lent to Picasso by the mathematician Maurice Princet, i.e. “the mathematician of cubism” (Miller 2001).

Figure 2: Pablo Picasso, Les Demoiselles d’Avignon, c. 1907 and an illustration from Jouffret, Traité élémentaire de géométrie à quatre dimensions (1903)

A similar question then applies to this situation as to that of Francesca, commensurazione and the Baptism: did Picasso use these geometrical facets to solve a problem that allowed him to paint Les Demoiselles? Kahnweiler, for one, would seem to think so based on his (public) observation of Picasso painting and his rudimentary application of the problem-solving frame. But here is Baxandall’s important and contrasting point:

For Picasso the Brief [sic] and the grand problems might largely be embodied in his likes and dislikes about pictures, particularly his own: he need not formulate them out as problems. His active relation to each of his pictures was indeed always in the present moment, and at the level of process and on emerging derivative problems on which he spent his time. As he says, it would feel like finding rather than seeking (70)

Thus, we can categorize what Picasso was doing when he painted Les Demoiselles as problem-solving. Baxandall’s point, however, is that although “problem-solving” makes good analytic sense, it is falsely indicative of what is actually practical in this instance, or in any instance.

In just a few words: problem-solving implies a phenomenal experience characterized by “seeking.” For Picasso the experience was more like “finding,” as he himself confesses. If true, then Kahnweiler, or any observer who categorizes action as problem-solving, leaves a lot on the table, at least as far as comprehending action as social practice goes. Put as a question: how can one find a solution to a problem that one was not looking for?

Indeed (and pace Biernacki) Baxandall does not actually associate commensurazione with a solution to a problem in his analysis of Francesca’s The Baptism of Christ. He refers to commensurazione, rather, as a generalized “mathematics-based alertness in the total arrangement of a picture, in which what we call proportion and perspective are keenly felt as interdependent and interlocking” (Baxandall 1985: 113). In a remarkable sense, the argument here is similar to the one that Bourdieu ([1967] 2005) finds in the art historian Erwin Panofsky and Panofsky’s account of the connection between scholastic philosophy and Gothic architecture, which proved seminal to Bourdieu’s development of habitus. In the same manner, it might seem like scholastic principles were used by Gothic architects to solve problems in the design of buildings like Notre-Dame de Paris. For Bourdieu ([1967] 2005), this instead suggests a “habit-forming force” shared by both scholastic philosophers and Gothic architects alike that gave them a common modus operandi that they applied philosophically and architecturally, respectively.

What is that “habit-forming force”? Bourdieu calls it ambiguously the scholastic “school of thought,” but he adds more insight by drawing attention to the “copyist’s daily activity … defined by the interiorization of the principles of clarification and reconciliation of contraries” which characterized the routine activity of scholastic education and “is concretely actualized in the specific logic of [this] particular practice” (215).

Following Panofsky, he finds the “copyist’s daily activity” mirrored in both the diagonal rib (“ribbed vault”) structure characteristic of Gothic architecture and the highly deliberate movement from proposition to proposition, “keeping the progression of reasoning always present in mind,” evident in texts like St. Thomas Aquinas’ Summa Theologiae (1485) as the most exemplary demonstration of Scholasticism. Rather than problem-solving, this particular practice “guides and directs, unbeknownst to [them], their apparently most unique creative acts” (Bourdieu [1967] 2005: 226).

Figure 3: Index to St Thomas Aquinas’ Summa Theologiae and a diagonal ribbed structure

Whether or not we agree with these accounts (see Gross 2009: 367-68) should not obscure the fact that action here is conceptualized as social practice, but not in a way that is accessible to problem-solving. I argue that this alternative presents the problem-solving frame with a number of questions, which can be fairly summarized as follows:

(1) Ends/means endogenous to situations (e.g. “ends-in-view”)

Does something lead agents into situations by, for instance, making them care?

(2) Proximate goals shape final goals

But do proximate goals need to be oriented toward a predicted future?

(3) Action involves recombination, transposition, and modification

But how do we know when it “works”?

(4) Problematic situations can be objectively described

Do problems appear to those for whom they can be problems? What decides that?

As this blog endeavors to argue: it is impossible to conclude whether these are genuine problems for problem-solving absent some connection to a cognitive mechanism analogous to theorists’ efforts to conceptualize action as social practice. In the next post, I’ll draw that connection by linking “predictive processing” with practice theory.

 

References

Baxandall, Michael. 1985. Patterns of Intention: On the Historical Explanation of Pictures. UC Press.

Biernacki, Richard. 1995. The Fabrication of Labor: Germany and Britain, 1640-1914. Berkeley: UC Press.

_____. 2005. “Beyond the Classical Model of Conduct.” In Remaking Modernity. UChicago Press.

Bourdieu, Pierre. 1996. The Rules of Art. Stanford UP.

_____. [1967] 2005. “Postface to Erwin Panofsky, Gothic Architecture and Scholasticism.” In The Premodern Condition, by Bruce Holsinger. UChicago Press.

Gross, Neil. 2009. “A Pragmatist Theory of Social Mechanisms.” American Sociological Review 74: 358-379.

Jansen, Robert. 2017. Revolutionizing Repertoires: The Rise of Populist Mobilization in Peru. UChicago Press.

McDonnell, Terence, Christopher Bail and Iddo Tavory. 2017. “A Theory of Resonance.” Sociological Theory 35: 1-14.

Miller, Arthur. 2001. Einstein, Picasso: Space, Time and the Beauty that Causes Havoc. Basic Books.

Popper, Karl. 1978. “Three Worlds.” The Tanner Lectures on Human Values.

Swidler, Ann. 2005. “What Anchors Cultural Practices.” In The Practice Turn in Contemporary Theory. Routledge.

Whitford, Josh. 2002. “Pragmatism and the Untenable Dualism of Means and Ends: Why Rational Choice Theory Does Not Deserve Paradigmatic Privilege.” Theory and Society 31: 325-363.

Beyond Good Old-Fashioned Ideology Theory, Part Two

In part one, I examined two recent frameworks for understanding ideology (Jost and Martin) and explained how both serve as alternatives to good old-fashioned ideology theory (GOFIT). Ultimately, I concluded that Martin’s (2015) model has specific advantages over Jost’s (2006) model, though the connection between ideology and “practical mastery of ideologically-relevant social relations” needs to be fleshed out. This is particularly true because any strong concentration on social relations seems to preclude serious attention to cognition, and without that attention the argument is vulnerable to charges of reductionism.

In this post, I sketch a model of cognition that checks the boxes of GOFIT ideology: distorting, invested with power, supporting unequal social relations. But it is different for reasons I specify below. To do this, I use a famous line of research in neuroscience—Michael Gazzaniga’s “split-brain” experiments—and draw an analogy between it and a possible non-GOFIT ideology.

Galanter, Gerstenhaber … and Geertz

But before doing that, it seems reasonable to ask about the purpose of even attempting a non-GOFIT ideology. Is GOFIT a strawman? Why is it problematic? To answer these questions, and to indicate why a holistic revision of ideology away from GOFIT seems to be in order, consider Clifford Geertz and his essay (1973) “Ideology as a cultural system,” which presents what is to date arguably the most influential, non-Marxist approach to ideology in the social sciences. Geertz’s burden is to make ideology relevant by providing it with a “nonevaluative” form. And the way he does this, using modular or computational cognition, is what I want to focus on.

Ideology here is not tantamount to oversimplified, inaccurate, “fake news” style distortion, that which is, above all and categorically, what science is not. But if it isn’t to be censured like this, then for Geertz ideology must be a symbolic phenomenon that has something to do with how “symbolic systems” make meaning in the world, and in turn serve to guide action (e.g. “models of, models for”). To make this argument, he does, in fact, make ideology cognitive by drawing from a psychological model: Eugene Galanter and Murray Gerstenhaber’s (1956) “On Thought: The Extrinsic Theory.”

As Geertz summarizes:

thought consists of the construction and manipulation of symbol systems, which are employed as models of other systems, physical, organic, social, psychological, and so forth, in such a way that the structure of these other systems– and, in the favorable case, how they may therefore be expected to behave–is, as we say “understood.” Thinking, conceptualization, formulation, comprehension, understanding, or what-have-you, consists not of ghostly happenings in the head but of a matching of the states and processes of symbolic models against the states and processes of the wider world … (214)

Geertz returns to this same argument in arguably his most thorough approach to the culture concept (“The Growth of Culture and the Evolution of Mind”). Importantly, there too he does not conceive of culture or symbols absent a psychological referent, which he consistently draws from Galanter and Gerstenhaber.

Whatever their other differences, both so-called cognitive and so-called expressive symbols or symbol-systems have, then, at least one thing in common: they are extrinsic sources of information in terms of which human life can be patterned–extrapersonal mechanisms for the perception, understanding, judgment, and manipulation of the world. Culture patterns–religious, philosophical, aesthetic, scientific, ideological–are “programs”; they provide a template or blueprint for the organization of social and psychological processes, much as genetic systems provide such a template for the organization of organic processes (Geertz, 216)

How does this apply to ideology? It makes ideology a symbolic system for building an internal model. Geertz is distinctively not anti-psychological here but instead seems to double down on the “extrinsic theory of thought” to define culture as a symbol system through which agents construct models of and for some system out in the world, effectively programming their response to that system. Ideology refers to the symbol system that does this for the political system:

The function of ideology is to make an autonomous politics possible by providing the authoritative concepts that render it meaningful, the suasive images by means of which it can be sensibly grasped … Whatever else ideologies may be–projections of unacknowledged fears, disguises for ulterior motives, phatic expressions of group solidarity–they are, most distinctively, maps of problematic social reality and matrices for the creation of collective conscience (Geertz, 218, 220)

Geertz mentions the example of the Taft-Hartley Act (restricting labor unionizing), which carried the ideological label the “slave labor act.” Geertz emphasizes how ideology works according to how well or how poorly the model (“slave labor act”) “symbolically coerces … the discordant meanings [of its object] into a unitary conceptual framework” (210-211).

If GOFIT is a set of assumptions widely held about ideology, then we probably find little to disagree with in Geertz’s argument, at least at first glance. Much of it should ring true. If we object to anything it might be the heavy-handed language that Geertz uses that evokes modular or computational cognition (e.g. “programs”). But maybe Geertz himself is not responsible for this. His sources, Galanter and Gerstenhaber, were explicit in making these assumptions about cognition, and this, I want to argue, is important for a specific reason.

To Galanter and Gerstenhaber, “model” clearly meant the sort of three-dimensional scale models that scientists construct in order to understand large-scale physical phenomena. In this sense, they solved the “problem of human thinking” by defining it as a lesser version of idealized scientific thinking. And they were not alone in that pursuit. At least initially, cognition was presented as antithetical to behaviorism in psychology by allying itself with resources that were quite deliberate and quite reflexive: “[mid-century] cognitive scientists … looked for human nature by holding an image of what they were looking for in their [own] minds. The image they held was none other than their own self-image … ‘good academic thinking’ [became the] model of human thinking” (Cohen-Cole 2005).

This is not only the context for Geertz’s theory of ideology. His understanding of “symbol systems” writ large cannot be separated from this specific gloss on, and extension of, “good academic thinking.” For our purposes, this should raise the question of whether using symbol systems to form internal models of the external world, and manipulating and creatively construing those models as “symbolic action,” should be the template or basis for defining ideology on nonevaluative grounds, that is to say, for defining ideology in the way that Geertz himself does: as cognitive.

Ideology and the Split-Brain

What I will try to do now, after this long preamble, is sketch a different possible cognitive basis for a theory of ideology, one that I think is compatible with Martin’s (2015) field-theoretic approach to ideology discussed in part one of this post. It develops a cognitive interpretation of what “practical mastery of ideologically relevant social relations” might mean. It also situates Marx as the contrary of Geertz by making social relations a necessary condition for ideology as a cognitive phenomenon, not something that needs to be bracketed (or pigeonholed as “strain” or “interest”) for ideology to be cognitive.

This different basis is Gazzaniga’s research (1967; 1998; Gazzaniga and LeDoux 1978) on the split-brain and the process of confabulating meaning on the basis of incomplete visual input. It is important to mention that I use the split-brain as an analogue (in “good academic thinking” terms) to convey what ideology might mean as a cognitive phenomenon if it is not a symbol system. I do not imply that ideology requires a split-brain as a physical input.

For Gazzaniga, the two sides of the brain effectively constituted two separate spheres of consciousness, but this could only be truly appreciated when the corpus callosum was severed (once a surgical treatment for epileptic patients) and the two sides of the brain were rendered independent of each other. When this happened, the visual field was bisected: the two hemispheres stopped sharing the information arriving through the right and left visual fields (hereafter RVF and LVF). What was observable in the RVF was received independently from what was observable in the LVF. As Gazzaniga found, the brain is multi-modal, and the left hemisphere is the center of language about visual input. So when a word or image was flashed to the RVF and the information was received by the left hemisphere, the patient could provide an accurate report. When a word or image was flashed to the LVF, the patient could only confabulate, because the non-integrated brain could not combine the visual information with the language functions of the left hemisphere. The split-brain patient effectively “didn’t see anything,” even though she could still connect visual cues to related pictures on command.

When visual information is presented to a split-brain, the mystery is how the verbal left hemisphere attempts to make sense of what the non-verbal right hemisphere is doing. This is the recipe for confabulations or “false memories” as Gazzaniga (1998) puts it, because here we witness the effects of the “interpreter mechanism.”

Thus, when the LVF and RVF of a split-brain patient were shown pictures of a house in the snow and a chicken’s claw, respectively, and the patient was asked to point to relevant pictures based on these visual cues, she pointed to a snow shovel and a chicken head. Here is the interesting part:

the right hemisphere—that is, the left hand—correctly picked the shovel for the snowstorm; the right hand, controlled by the left hemisphere, correctly picked the chicken to go with the bird’s foot. Then we asked the patient why the left hand— or right hemisphere—was pointing to the shovel. Because only the left hemisphere retains the ability to talk, it answered. But because it could not know why the right hemisphere was doing what it was doing, it made up a story about what it could see—namely, the chicken. It said the right hemisphere chose the shovel to clean out a chicken shed (Gazzaniga 1998: 53; emphasis added).

“It made up a story” refers here to the verbal left hemisphere attempting to make sense of why the right hemisphere had been directed toward a shovel. The picture flashed to the right hemisphere engaged no narrative ability, and yet the split-brain patient could still point at a relevant image even though this did not “pass through” language.

The argument here is that this serves as a good analogue for a theory of ideology that does not make computational or modular commitments. The important point is that the confabulation is not just some made-up story, but what the split-brain patient believes, because her brain has filled in the blank (e.g. “I chose the shovel because I need to shovel out the chicken coop”). Ideology as a cognitive phenomenon does not, in this sense, mean programming the political system according to an extrinsic symbol system; in other words, it does not mean building an internal model (a three-dimensional one) of that system and drawing entailments from it, as any good scientist would do. To be “in ideology” means filling in the blank as the normal way to cognitively cope with disconnected inputs, some with a “phonological representation,” others that are “nonspeaking.”

The Split-Brain and Social Relations

We can theorize that practical mastery of social relations, in particular of social relations that are “ideologically-relevant,” becomes important because those relations generate an equivalent of the split-brain effect and its “interpreter mechanism.” In social relations arranged as fields, practical mastery consists of the “felt motivation of impulsion … to attach impulsion … to positions … [and have] the ethical or imperative nature of such motivations [be] akin to a social object, external and (locally) intersubjectively valid, that is, valid conditional on position and history” (Martin 2011: 312).

Fields refer to one type of social relation conducive to ideological effects, particularly if they are organized on quasi-Schmittian grounds of opponents and allies (Martin 2015). Marx is clear that other types of social relation (like capital) are specifically resistant to influence by any sort of cognitive mediation. Still, he achieves some understanding of those social relations by examining their “being thought … [through] abstractions” (see Marx 1973: 143). For instance, the commodity fetish can be seen as analogous to a split-brain effect: the “social relation between things” is a left-hemisphere interpretation, while the “social relation between people” is equivalent to an LVF input. A split-brain is an analogue of mental structures that correspond to these objective (social) structures.

Taking the split-brain (not the “extrinsic theory”) as the basis for ideology as a (non-GOFIT) cognitive phenomenon, then, we can speculate that only certain social relations (fields, capital) have an ideological effect. They have that effect because they generate a split-brain scenario with disconnected inputs. Agents are subject to social relations to which they do not have direct access (the equivalent of LVF inputs). They fill in the blank of the effect of those inputs through “abstractions,” i.e. explicit endorsements or propositional attitudes that take linguistic form, often mistaken on their own terms as ideology (the equivalent of left-hemisphere interpretations).

To be continued … [note: Zizek (2017: 119ff) also finds the split-brain useful for thinking about ideology, though his argument confounds and mystifies with Pokémon Go]

 

References

Cohen-Cole, Jamie. (2005). “The Reflexivity of Cognitive Science: The Scientist as a Model of Human Nature.” History of the Human Sciences 18: 107-139.

Galanter, Eugene and Murray Gerstenhaber. (1956). “On Thought: The Extrinsic Theory.” Psychological Review 63: 218-227.

Gazzaniga, Michael. (1967). “The Split-Brain in Man.” Scientific American 217: 24-29.

_____. (1998). “The Split-Brain Revisited.” Scientific American 279: 51-55.

Gazzaniga, Michael and Joseph LeDoux. (1978). The Integrated Mind. Plenum Press.

Geertz, Clifford. (1973). “Ideology as a Cultural System.” In The Interpretation of Cultures.

Jost, John. (2006). “The End of the End of Ideology.” American Psychologist 61: 651-670.

Martin, John Levi. (2015). “What is Ideology?” Sociologica 77: 9-31.

_____. (2011). The Explanation of Social Action. Oxford.

Marx, Karl. (1973). The Grundrisse. Penguin.

Zizek, Slavoj. (2017). Incontinence of the Void. MIT

 

Beyond Good Old-Fashioned Ideology Theory, Part One

The concept of ideology is surely one of the sacred cow concepts of sociology (and the social sciences more generally) and is one of the special few that circulates widely outside the ivory tower. It is also a concept that is arguably the most indebted of all to the presumption that cognition is a matter of representation, nothing more or less. Ideology has, from its French Revolution beginnings to the present, been associated closely with ideas and more specifically with ideas that project meaning over the world in relativistic and contentious ways. Almost universally ideology is characterized by representation; historically it has also been characterized by what we can call (unsatisfactorily) distortion. For ideologies to be representations they must be capable of generating reflexively clear meaning about the world. For ideologies to be distortions those representations must generate meaning in some way that concerns the exercise of power. Since ideologies are distorting they must consist of representations that either support or contend with some current configuration of power, by prescribing its direction. This means that people do not believe ideologies because ideologies are true. Instead, some combination of social factors and self-interest leads people to believe them.

This will have to do as a (quick/dirty) summary of the most common set of referents generally associated with ideology. Let’s call it good old-fashioned ideology theory (GOFIT) for short. Even a brief perusal of the recent news would probably suggest that the world (or at least the US) is becoming increasingly “ideological” on GOFIT terms, as ideology seems to be more and more important for more and more stuff that it had been irrelevant for as recently as a decade ago (e.g. restaurant attendance, college enrollment, cultural consumption). If these impressions are even partially correct, then an enormous weight is placed on ideology. It is a concept that we (sociologists included) need in order to make sense of the fractious, tribalizing times in which we live. But it is a fair question to ask whether GOFIT ideology is up to the challenge.

On the above terms, GOFIT ideology essentially consists of something like the “rule-based manipulation of symbols” type of meaning construction, unreconstructed from its heyday in the classical cognitive science of the 1950s and 60s. This should make us pause and take a second look at the concept. The goal of this post is to (not exhaustively) examine whether ideology can do without these commitments and whether the concept can be removed from GOFIT and placed on new cognitive ground. I argue that ideology can do without these commitments and that it already has been placed (or is being placed) on new cognitive ground, which makes it an important point of focus not only for substantive phenomena (all around us today) but because ideology is closely entangled with the wider theoretical stakes of relevance to this blog, and it has been since at least The German Ideology when Marx and Engels tried for a final push of idealism into the dustbin.

In this first post, I will compare two arguments that try to move beyond GOFIT. In a second post, I will sketch a different approach that tries to extend a non-GOFIT ideology even further.

Psychologists, it seems, have beaten everyone to the punch in providing key evidence attesting to the present-day significance of ideology. Here, we can point to the influential work of John Jost (2006) and the research program he develops against the mid-century “end of ideology” claims. Those arguments had largely eliminated ideology as a key conceptual variable, in one sense because large disagreements over how to organize society seemed to end sometime in the 1950s, at least in the US (“even conservatives support the welfare state,” as Seymour Martin Lipset famously quipped). But in a more important sense, the “end of ideology” also meant a paradigm in political psychology built around the presumption that “having an ideology” was a mystery and that only a small minority of people actually had one. Jost resurrects ideology by developing a new question in political psychology, one that at this point probably seems grossly redundant, but which summarizes a vast body of research inside and outside the academy, all of which asks some more or less complicated version of it: “why [do] specific individuals (or groups or societies) gravitate toward liberal or conservative ideas[?]” (2006: 654).

Jost here distances himself from the political scientist Philip Converse and his claim (esp. Converse 1964) that probably no more than ten percent of the population possesses anything resembling an ideology (e.g. a “political belief system”). For Converse, this meant that for the vast majority of political actions, especially voting behavior by a mass public, ideology is basically irrelevant. Jost argues that, on the contrary, even if the highly rationalized, systematic commitments of true ideologues are found only among a small minority, we cannot dismiss peoples’ attraction to conservative or liberal ideas. Relaxing the strong consistency claim, Jost finds placement on the conservatism-liberalism spectrum to be highly predictive of voting trends, and not only because where people self-identify on the ideological scale closely overlaps with their party affiliation. Ideas matter too, especially if we measure them as “resistance to change and attitudes toward equality” (2006: 660), which are (presumably) the source of the major ideological differences between the left and the right.

As Jost continues, these “core ideological beliefs concerning attitudes toward equality and traditionalism possess relatively enduring dispositional and situational antecedents, and they exert at least some degree of influence or constraint over the individual’s other thoughts, feelings, and behaviors” (2006: 660; my emphasis). Here Jost hits on a line of research with influence inside and, increasingly, outside the academy today. Research on the “dispositional and situational antecedents” of attraction to liberal or conservative ideas has become something of a cottage industry, as evidenced in popular works by luminaries like George Lakoff (2002) and Jonathan Haidt (2012), and in Jost’s own work (see 2006: 665) that finds, among other things, unobtrusive-style evidence (“bedroom cues”) that strongly correlates with placement on the liberalism-conservatism spectrum (like whether one has postage stamps lying around the house instead of art supplies). Even the arguments of Adorno et al. (1950) have been buoyed by this conversation as prescient and timely (see Jost 2006: 654) after they had been summarily dismissed by mid-century psychologists. “Right-wing authoritarianism” as a personality measure helps define antecedent conditions that lead people to be attracted to ideas (or to Trump) with different ideological content. Adorno thrives as the research winds have changed.

The key presumption of this research is that ideologies are information-lite and not complicated, at least not in a reflexive way, as Converse thought they must be (“complicated systems of relations between ideas”). But we might reasonably wonder whether, in their lack of complication, “ideological differences” in this literature do in fact count as differences of ideology and not something else. Jost himself does little to explain what it means to be “attracted” to liberal or conservative ideas (is this the same as believing them?), and what he calls “ideas” can only be distinguished from what he (confusingly) also calls “attitudes” if we presume that ideas involve some sort of deductive, rule-based manipulation (e.g. because I believe in equality, I will support politicians who promise to help the poor). On both fronts this makes his approach problematic. While Jost is successful at clearing many of the hurdles that stand in the way of making the concept of ideology relevant again, he retains some of the strongest presumptions of GOFIT.

If political psychology has largely been resurrected by making something significant of the widely held sense that “ideological differences” are of critical significance for politics today, there is at least one other alternative to GOFIT available, one with similar motivations but which does not make nearly the same commitments. John Levi Martin has developed an approach to ideology on the basis of redefining it as non-representational. Ideology does not consist of a representation of the world, in this view, but serves rather (more pragmatically) as “citizens’ way of comprehending the nature of the alliances in which they find themselves” (2015: 21). While he shares with Jost an engagement with GOFIT on the relationship between “social factors” and ideologies, in Martin’s case in particular this comes with a considerable twist: ideologies are not given autonomy as a kind of rule-like content that allows for deductive logic. As Martin argues, what appear to be ideologies are not reducible to an equation like values + beliefs = opinions. Rather, they are the means through which individuals comprehend “the alliances” in which they find themselves. What we can call ideological differences, in other words, map onto patterns of social relations, not onto differences that might be ascribed to the content of ideas.

If we take his example of whether people say they support a policy that will provide assistance to out-of-work, poor, and/or black people, “the classic [GOFIT] conception imagines a person beginning with the value of equality, adding the facts about discrimination (say) and producing support for the policy.” Jost would probably explain this as their attraction to some view of equality, whether fueled by a personality trait or some other dispositional antecedent (just as Lakoff and Haidt would, in different ways). In Martin’s alternative, the process is entirely different: “The rule is, simply put, ‘me and my friends are good’ and ‘those others are bad’ …  [The] actual calculus of opinion formation is sides + self-concept = opinion” (27). This is what Martin calls a political reasoning source of ideology formation. Whether one would support the above policy is dictated by what it signifies about one’s position in “webs of alliance and rivalry, friendship and enmity.” It is that positioning that makes it an ideological choice, not that it is driven by some sequence that begins (or ends) with a commitment to certain ideas.
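The contrast between the two calculi can be caricatured in a few lines of code. This is purely an illustrative toy, not anything proposed by Martin (2015): the function names, inputs, and decision rules are my own hypothetical stand-ins. The point is only that in the GOFIT route the content of values and facts does the work, while in the political-reasoning route content never enters at all; only alignment does.

```python
# Toy contrast between the two opinion calculi (purely illustrative;
# all names and values here are hypothetical, not drawn from Martin 2015).

def gofit_opinion(values, facts):
    """GOFIT caricature: values + beliefs (facts) -> opinion, by deduction.

    The opinion follows from the content of one's ideas: a commitment to
    equality plus a belief that discrimination exists yields support.
    """
    if "equality" in values and facts.get("discrimination_exists"):
        return "support"
    return "oppose"

def political_reasoning_opinion(my_side, policy_side):
    """Martin's caricatured alternative: sides + self-concept -> opinion.

    The content of the policy never enters the calculation; the only
    question is whether the policy is identified with my coalition.
    """
    return "support" if policy_side == my_side else "oppose"

# GOFIT route: the same person supports or opposes depending on ideas.
print(gofit_opinion({"equality"}, {"discrimination_exists": True}))   # support
print(gofit_opinion(set(), {"discrimination_exists": True}))          # oppose

# Political-reasoning route: only alliance membership matters.
print(political_reasoning_opinion("my coalition", "my coalition"))    # support
print(political_reasoning_opinion("my coalition", "rival coalition")) # oppose
```

Note that in the second function an agent can flip opinions on an identically worded policy if its coalitional sponsorship changes, which is exactly the kind of behavior that looks like "contradiction" from a GOFIT point of view.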

Martin provides a fleshier example to illustrate how political reasoning of this sort is “totally relational” and therefore endogenous to alliance/rivalry coalitions:

I once saw a pickup truck in my home town that had two bumper stickers on the rear. One had a representation of the American flag, and words next to it: “One nation, one flag, one language.” The other side had the Confederate flag. This is the flag used by the short-lived Southern confederation of states during the Civil War, when they tried to break away from the Union in order to preserve their “peculiar institution,” that is, slavery of Africans and their descendants. They wanted there to be two countries, and two flags (25)

Such infelicitous placement of the two bumper stickers would be a contradiction from a GOFIT point of view, which searches for the content of the ideas and for the logical deduction that organized the decision to display both stickers. For GOFIT, such behavior quickly becomes incomprehensible (as does the person). In fact, Martin argues, the two flags demonstrate this person’s practical mastery of the political landscape in the USA circa 2015ish: “Displaying the Confederate flag in the United States does not imply anti-black racism. However, it does imply a lack of concern with being ‘called out’ as a racist—it implies fearlessly embracing aspects of American political culture without apology … it does demonstrate anti-anti-racism” (26). The other bumper sticker (one nation, one flag, one language) demonstrates the person’s response “to certain political initiatives to ease the barriers to American citizens, residents, and possibly others who read (or speak) Spanish but not English.”

Together, the two bumper stickers make sense. But to see how, we first need to bracket whatever ideas they might seem to express and situate the stickers instead in the sets of social relations in which they become meaningful for this person. When we do this, we see that this person demonstrates a combination of social oppositions that together situate him/her against the “liberal coalition.” The placement of the bumper stickers is a political action, not as the expression of some commitment to underlying ideas, but as this person’s theorization of their politics: “it is their attempt to come up with an abstract representation of the political alliance system in which they are in, and the nature of their opponents” (26).

Pace Jost, then, Martin argues that patterns of ideological difference are not ultimately driven by absolute differences between conservative and liberal ideas, though this is not to say that ideas (or words) cannot themselves become points of ideological difference. So much is this true that political reasoning itself provides an ontology and can dictate the nature of reality in a way that is impervious to criticisms of ideological “distortion” and their presumption of a GOFIT mind-to-world relation mediated by something like a belief system. The nature of the world itself can come, and has come, to be an expression of oppositions and alliances with an ideological significance. Martin and Desmond (2010: 15), for instance, find that liberals and conservatives with high political information both significantly overestimate the extent of black poverty, and are much more likely to be wrong about it than are moderates, or than liberals and conservatives with less political information. This is an effect of political reasoning, they claim, and it anticipates a sort of post-truth scenario in which facts themselves become a means to theorize one’s political position. For high-information liberals and conservatives alike, “their knowledge is that-which-helps-us-know-what-we-want-to-fight-about” (Martin 2015: 28). In other words, they become more ideological as they become more ensconced in relations of alliance and rivalry, not as they internalize complicated belief systems.

Martin, then, reinterprets ideology as the way that people comprehend their situation in relations of alliance and opposition, using whatever means might seem to adequately express the accumulation of friends and the distinction from enemies. Martin surpasses the GOFIT assumptions more successfully than Jost largely because his approach to ideology does not rely on imputing a content to ideas that would make them “liberal” or “conservative.” In principle, any idea could be liberal or conservative in his framework (just as any bumper sticker could, or any fact about the world, or any political candidate), depending on whether people use it to map alliances and oppositions and comprehend the boundaries of coalitions of friends/enemies.

This, I argue, makes Martin’s approach more adequate, and historically relevant in a way that Jost’s approach cannot be, for understanding what seems to be the rapid proliferation of ideological differences today, or, more impressionistically, the increased presence of ideology today, presumably as people use more things to “theorize” their political position inside alliances/rivalries than had been used before, complicating those groupings (at least in the interim). Once again, this is much easier to understand if we do not attempt to situate individuals into fixed categories on the basis of antecedent dispositions that give them some fixed attraction to ideas with a certain content.

But this also suggests that Martin’s approach to ideology is non-GOFIT mainly because it is (or seems to be) non-cognitive. Martin succeeds because he takes ideology out of the mind and places it in social relations. Things (e.g. bumper stickers, art supplies, flags, welfare policies) become “ideological” when they symbolize relations of alliance and rivalry, as comprehended through them and (following Marx) never in their absence. We might ask, though, whether there is any relevant difference between using things to comprehend these relations and using things to construct them. Jost, by contrast, leaves ideology in the mind (in ideas), so it remains for him at least partially GOFIT, though he emphasizes that ideology is supplemented by non-cognitive factors like personality or situational ones (e.g. traumatic events, like 9/11, or private ones) that make ideas carry different degrees of attraction.

When something vaguely cognitive enters Martin’s framework, it usually comes under the heading of “political reasoning in practice,” which does appear to serve adequately as an alternative to a GOFIT conception of mind. In the next post, I attempt a definition of “practical mastery” of ideologically relevant relations as a cognitive trait, and show how this is absolutely required if we want to finally (once and for all) separate ideology from its GOFIT background.
References

Adorno, Theodor W., et al. (1950). The Authoritarian Personality. Studies in Prejudice, edited by Max Horkheimer and Samuel H. Flowerman. New York: W.W. Norton & Company.

Converse, Philip. (1964). “The Nature of Belief Systems in Mass Publics.” Critical Review 18: 1-74.

Haidt, Jonathan. (2012). The Righteous Mind: Why Good People Are Divided by Politics and Religion. New York: Pantheon Books.

Jost, John. (2006). “The End of the End of Ideology.” American Psychologist 61: 651-670.

Lakoff, George. (2002). Moral Politics: How Liberals and Conservatives Think. Chicago: University of Chicago Press.

Martin, John Levi. (2015). “What is Ideology?” Sociologia, Problemas e Práticas 77: 9-31.

Martin, John Levi and Matthew Desmond. (2010). “Political Position and Social Knowledge.” Sociological Forum 25: 1-26.