In one of the clearest statements about the difference it makes to emphasize cognition in the study of culture and, more generally, in the social sciences as a whole, the anthropologist Maurice Bloch (2012) writes that, if we consider closely every time we use the word “meaning” in social science, then “a moment’s reflection will reveal that ‘meaning’ can only signify ‘meaning for people’. To talk of, for example, ‘the meaning of cultural symbols’, as though this could be separated from what these symbols mean, for one or a number of individuals, can never be legitimate. This being so, an absolute distinction between public symbols and private thought becomes unsustainable” (4).
As a critique of Geertzian and neo-Diltheyan arguments for “public meaning” and “cultural order” sui generis, Bloch’s point is fundamental, as it reveals a core problem with arguments built on those foundations once they have been untethered from “meaning for people” and given over almost entirely to “meaning for analysts.” Yet, as Bloch makes it a point to emphasize, such critiques can only get us so far in attempting to change practices: even if “a moment’s reflection” leads some to agree with Bloch’s claim, without an alternative these models will persist more or less unchanged. If “meaning for people” stands as an equivalent to the tethering to cognitive science recommended by theorists like Stephen Turner (2007), then what is needed is a programmatic way of doing social theory that does not “minimize the cognitive” but instead bridges social theory and cognitive neuroscience in the design of concepts.
In fairness to Geertz, one of his more overlooked essays proposes a culture concept that seems to want to avoid the very problem that Bloch identifies. In “The Growth of Culture and the Evolution of Mind” Geertz (1973) draws a connection between culture and “man’s nervous system,” emphasizing in particular the interaction of culture and the (evolved) mind in the following terms: “Like a frightened animal, a frightened man may run, hide, bluster, dissemble, placate or, desperate with panic, attack; but in his case the precise patterning of such overt acts is guided predominantly by cultural rather than genetic templates.” Here the problem of relating the cultural to the cognitive seems cleanly resolved, as the latter is reduced to “genetic templates.” Yet, contrary to Sewell’s (2005) positive estimation of this aspect of Geertz’s thought as “materialist,” we should be wary of taking lessons from Geertz if by “materialist” Sewell means a culture concept that does justice to the evolved, embodied, and finite organisms we all are. Nonetheless, in many respects, the Geertzian move still prevails in contemporary cultural sociology, which likewise admits the relevance of the cognitive to the cultural but retains a similar de facto bracketing when it comes to the thorny culture + cognition relation.
For instance, Mast (2020) has recently emphasized that “representation” (qua the proverbial turtle) goes all the way down, even in the most neurocognitive of dimensions, and so we cannot jettison culture even if we want to include a focus on cognition, because we need cultural theory to account for representation. Likewise, Norton (2018) makes a similar claim by drawing a distributed cognition framework into sociology, but makes “semiotics” the ingredient that a designated form of cultural theory (in this case, his take on Peircean “semeiotics”) is needed to understand. Kurkian (2020), meanwhile, argues that unless we admit distinguishably cultural ingredients like these, any attempted marriage of culture + cognition will fail, because cognition will be about something that does not tread on culture’s terrain, like “information.”
Each of these efforts is worthwhile, yet in some manner each misunderstands the task at hand in attempting a culture + cognition framework, recapitulating what Geertz did in 1973. Any such framework must rest on new concept-formation rather than on what amounts to a defense of established concepts. This would admit that cultural theories of the past cannot be so straightforwardly repurposed without amendment. What we tend to see, rather, are associations of culture concepts (semiotics, representation) and cognitive concepts (distributed cognition, mirror neurons) drawn through essentially arbitrary analogies and parallels between concepts that otherwise remain unchanged. In most cases, such a bracketed application replicates the disciplinary division of labor in thought because the onus is never placed on revision, despite the dialectical encounter and the possibilities that each bank of concepts presents to the deficiencies and arbitrariness of the other. We either hold firm to our cultural theories of choice, or we engage in elaborate mimicry of a STEM-like distant relation.
Following Deleuze (1995), we should appreciate that to “form concepts” is at the very least “to do something,” like, for instance, making it wrong to answer the question “what is justice?” by pointing to a particular instance of justice that happened to me last weekend. Deleuze adds insight in saying that concepts attempt to find “singularities” from within a “continuous flow.” The insight is apt to the degree that culture + cognition thinking seems rooted in the sense that there is a “flow” here and that, maybe, the concepts we’ve inherited, most of them formed over the last 80 years, that make culture and cognition “singular” are simply not helpful anymore. Yet to rehash settled, unrevised cultural theories and bring them into relation with emerging cognitive theories (also unchanged) is essentially to “do” something with our concepts: affirm a thick boundary between sociologists’ jurisdiction and cognitive science’s jurisdiction, forbid anything that looks like culture + cognition, and, in all likelihood, create only an awkward, fraught, short-lived marriage between the two. Despite the best of intentions, such a marriage will continue to “minimize mentalistic content,” carefully limit the role that “psychologically realistic mechanisms” can play in concept-formation, and, in retrospect, probably produce only a brand of social theory that will seem hopelessly antique to sociologists looking back from the vantage of a future state of the field, one possibly even more removed from present-day concerns with “cognitive entanglements.”
The task should instead be something akin to what Bourdieu (1991) once called “dual reference” in his attempt to account for the strange verbiage littered throughout Heidegger’s philosophy (Dasein, Sorge, etc.). For Bourdieu, Heidegger’s work remains incomprehensible if we reference only the philosophical field in which he worked, and likewise incomprehensible if we reference only the Weimar-era political field in which he was firmly implanted. Instead, Heidegger’s philosophy, in particular these keywords, consists of position-takings in both fields simultaneously, which for Bourdieu goes some way toward explaining the strange and tortured reception of Heidegger from the publication of Being and Time (something of a bestseller in Germany in 1927) to the present day, when it remains canonical in pop philosophy pursuits.
Thus, in forming concepts, the goal should not be to posit an order of influence (culture → cognition, cognition → culture), nor to bracket the two (culture / cognition) and state triumphantly that this is where culture concepts can be brought to bear and this where cognitive ones can be, leaving both unchanged. Norton is right: Peirce has considerable bearing on contemporary cognitive science (see Menary 2015). But to say this without amending an understanding of semeiotics (an amendment, it seems, Peirce would probably advocate were he alive today, as he considered his semeiotics a branch of the “natural science” he pursued) is a non-starter.
My argument is that concept-formation of the culture + cognition kind should yield dual reference concepts rather than bracketing concepts or order-of-influence concepts. The proposal will be that the concept of “test” exemplifies such a dual reference concept. We cannot account for the apparent ubiquity of tests, why they are meaningful, and how they are meaningful without reference to both a cognitive mechanism and a sociohistorical configuration that combines with, appropriates, and evokes it. The analysis here involves genealogy, institutional practice, site-specificity, and social relations.
Elsewhere (Strand 2020) I have advocated a culture + cognition-styled approach as the production of “extraordinary discourse” and, relatedly, as concept-formation adequate to “empirical cognition,” a neglected, minor tradition since the time of Kant, though one with a healthy presence in classical theory (Strand 2021). More recently, Omar and I have attempted concept-formation that more or less looks like this in recommending a probabilistic revision of basic tenets of the theory of action (forthcoming, forthcoming). To put it starkly: we need new concepts if we want something like culture + cognition. To work under the heading of “cognitive social science” is akin to a compass-like designation of a new direction. And as Omar (2014) has said, if theorists, so often these days casting about for a new conversation to be part of now that “cultural theory” is largely exhausted and we can only play with the pieces, want a model for this kind of work, they might study the role that philosophers have come to play in cognitive science, engaged as they are in what very much looks like a project of concept-formation.
In this post, I will attempt something similar, more generally as a version of deciphering “meaning for people,” by asking a simple question: Why are tests so meaningful and seemingly ubiquitous in social life (Marres and Stark 2020; Ronnell 2005; Pinch 1993; Potthast 2017)? I will consider a potential “susceptibility” to tests and whether this might explain why we find them featured so fundamentally in areas as varied as education, science, interpersonal relationships, medicine, morality, technology, and religion (a short list), and how they can be given a truly generalized significance if we conceptualize test as trial (Latour 1988). More generally, the new(ish) “French pragmatist sociology” has made the epreuve (the French word that translates both “test” and “trial”) a core concept, as a way of “appreciating the endemic uncertainty of social life” (Lemieux 2008), though without implying too much about what a cognition-heavy phrase like “endemic uncertainty” might mean. The French pragmatists might be on to something: test or trial may qualify as a “total social phenomenon” in the tradition of Mauss (1966), less because we can single out one test as “at once a religious, economic, political, family, phenomena” and more because each of these orders depends, in some manner, on tests. This is more fitting with a cognitive susceptibility perspective, as I will articulate further below.
Provisionally, I will define a test as the creation of uncertainty, a suspension of possibilities, a way of “inviting chance in,” for the purpose of then resettling those possibilities and resolving that uncertainty by singling out a specific performance. After a duration of time has elapsed, the performance is complete. The state of affairs found at the end is what we can call an “outcome,” and it carries a certain kind of “objective” status to the extent that the initial uncertainty or open possibility is different now, less apparent than it was before, and “final” in some distinguishable way.
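This provisional definition can be given a toy formalization, purely for illustration (it is a sketch of the definition's logic, not a model the author proposes): a test opens a space of possibilities at maximal uncertainty, singles out one performance, and leaves behind an outcome whose uncertainty is, in a distinguishable way, "final." Shannon entropy is borrowed here only as a convenient measure of "open possibility."

```python
import math
import random

def entropy(probs):
    """Shannon entropy (in bits) of a distribution over possible outcomes."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def run_test(possibilities, rng=random):
    """A 'test' in the provisional sense: open a space of possibilities
    (uncertainty is created), then resolve it by singling out one
    performance, yielding an 'outcome' with a settled, final character."""
    n = len(possibilities)
    uncertainty_before = entropy([1 / n] * n)  # all outcomes still live
    outcome = rng.choice(possibilities)        # the singled-out performance
    uncertainty_after = entropy([1.0])         # the outcome is now settled
    return outcome, uncertainty_before, uncertainty_after

outcome, before, after = run_test(["pass", "fail", "inconclusive"])
# before ≈ 1.58 bits of open possibility; after = 0.0 (the outcome is final)
```

The point of the sketch is only the asymmetry the definition names: possibility before, a single "objective" state of affairs after.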
If testing appears ubiquitous and “total,” this is not because tests necessarily work better than other potential alternatives as ways of handling “endemic uncertainty.” Nor is it because testing features as part of some larger cultural process in motion (like “modernity’s fascination with breaking known limitations” [Ronnell 2005]). Rather, I want to claim that if tests are ubiquitous, this indicates a cognitive susceptibility to tests, one that reveals latent “dispositions,” such that we could not help but find tests meaningful “for people” like us. Some potential reasons why are suggested by reference to a basic predictive processing mechanism:
According to [predictive processing], brains do not sit back and receive information from the world, form truth evaluable representations of it, and only then work out and implement action plans. Instead brains, tirelessly and proactively, are forever trying to look ahead in order to ensure that we have an adequate practical grip on the world in the here and now. Focused primarily on action and intervention, their basic work is to make the best possible predictions about what the world is throwing at us. The job of brains is to aid the organisms they inhabit, in ways that are sensitive to the regularities of the situations organisms inhabit (Hutto 2018).
Thus, in this rendering, we cannot help but notice “sensory perturbations” as those elements of our sensory profile that defy our expectation (or, in more “contentful” terms, our predictions). These errors stand out as what we perceive, and we attend to them by either adjusting ourselves to fit with the error (like sitting up a little more comfortably in our chair) or by acting to change those errors, so that we do not notice them anymore. In basic terms, then, the predictive processing “disposition” involves an enactive engagement with the world that seeks some circumstance in which nothing is perceived, because, we might say, everything is “meaningful” (i.e. expected). If we define “meaning” as something akin to “whatever subjectively defined qualities of one’s life make active persistence appealing,” then this adaptation of the test concept might be a way of accounting for meaning without a “minimum of mentalistic content” while incorporating a “psychologically realistic mechanism” (Turner 2007).
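The two routes just described, adjusting ourselves to fit the error or acting to change its source, can be sketched in a few lines (a deliberately minimal caricature of predictive processing, not an implementation of any particular model in the literature; the scalar "sensation" and learning rate are illustrative assumptions):

```python
def prediction_error(prediction, sensation):
    """The 'sensory perturbation': what defies expectation stands out."""
    return sensation - prediction

def minimize_error(prediction, sensation, mode, lr=0.5):
    """Two routes to quieting prediction error:
    'perceive' -- adjust the generative model to fit the world
    (like settling more comfortably into the chair);
    'act' -- change the world to fit the model
    (acting on the source of the error so it is no longer noticed)."""
    error = prediction_error(prediction, sensation)
    if mode == "perceive":
        prediction += lr * error   # expectations move toward the input
    else:
        sensation -= lr * error    # the input is moved toward expectations
    return prediction, sensation

prediction, sensation = 0.0, 1.0
for _ in range(20):
    prediction, sensation = minimize_error(prediction, sensation, "perceive")
# error shrinks toward zero: nothing left to notice, everything 'expected'
```

Either route converges on the same condition the passage describes: a circumstance in which nothing perturbs, because everything is expected.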
In what follows I will examine whether there is some alignment between this disposition and tests as a ubiquitous social process. If so, then it may be worthwhile to build on the foundation laid by the French pragmatists for concept-formation of the culture + cognition kind.
On cognitive susceptibility
The notion of cognitive “susceptibility” is drawn from Dan Sperber (1985) and the idea that, rather than dispositions creating a direct link between cognition and cultural forms, that link may more frequently operate through susceptibility.
Dispositions have been positively selected in the process of biological evolution; susceptibilities are side-effects of dispositions. Susceptibilities which have strong adverse effects on adaptation get eliminated with the susceptible organisms. Susceptibilities which have strong positive effects may, over time, be positively selected and become, therefore, indistinguishable from dispositions. Most susceptibilities, though, have only marginal effects on adaptation; they owe their existence to the selective pressure that has weighed, not on them, but on the disposition of which they are a side-effect (80-81).
Sperber uses the example of religion. “Meta-representation” is an evolved cognitive disposition to create mental representations that do not have to pass the rigorous tests that apply to everyday knowledge. It enables representations not just of environmental and somatic phenomena, but even of “information that is not fully understood” (83). Because it has these capabilities, the meta-representational disposition creates “remarkable susceptibilities. The obvious function served by the ability to entertain half-understood concepts and ideas is to provide intermediate steps towards their full understanding. It also creates, however, the possibility for conceptual mysteries, which no amount of processing could ever clarify, to invade human minds” (84). Thus, Sperber concludes that “unlike everyday empirical knowledge, religious beliefs develop not because of a disposition, but because of a susceptibility” (85).
The disposition/susceptibility distinction can be quite helpful in navigating the murky waters around Bloch’s trope of “meaning for people,” because we do not necessarily have to give cultural forms over directly to dispositions. Rather, those cultural forms can arise as susceptibilities, which offer far more bandwidth to capture the cognitive dimensions of cultural forms as instances of “meaning for people.”
Thus, when God “tests the faith” of Abraham by ordering him to sacrifice his child Isaac, a space of chances is opened, and depending on how the test goes, something about Abraham will become definitive, at least for a while. A perceived lack of faith becomes equivalent to a noticeable error here, and it can be resolved by absorbing this uncertainty through some process that generates an outcome to that effect. Even though Abraham does not end up sacrificing Isaac in the story, he was prepared to do so, and thus he “proves” his faith. Some equivalent to this “sacrifice” remains integral to tests of faith of all sorts (Daly 1977).
I hypothesize that there must be a (cognitive) reason why this test, and the whole host of others we come across in fields and pursuits far removed from Abrahamic religion, appears in moments like these and in situations that mimic (even vaguely) God’s “test” of Abraham. The role of tests in this religious tradition, and potentially as a total social phenomenon, indicates something about “susceptibility” (in Sperber’s sense) to them. “Disposition” in this case refers to the predictive processing disposition to eliminate prediction error by either adapting a generative model to the error or acting to change the source of the error; either way, our expectations change and we no longer notice what stood out for us before. For tests, the construction of uncertainty, of more possibilities than will ultimately be realized, is a kind of susceptibility that corresponds to the predictive disposition. More specifically, this means that tests allow something to become known to us by enabling us to expect things of it.
Tests: scientific, technological, moral
What is remarkable about this is the range of circumstances in which we turn to tests to construct our expectations. Consider Latour’s description of Pasteur’s experimental technique:
How does Pasteur’s own account of the first drama of his text modify the common sense understanding of fabrication? Let us say that in his laboratory in Lille Pasteur is designing an actor. How does he do this? One now traditional way to account for this feat is to say that Pasteur designs trials for the actor to show its mettle. Why is an actor defined through trials? Because there is no other way to define an actor but through its actions, and there is no other way to define an action but by asking what other actors are modified, transformed, perturbed or created by the character that is the focus of attention … Something else is necessary to grant an x an essence, to make it into an actor: the series of laboratory trials through which the object x proves its mettle … We do not know what it is, but we know what it does from the trials conducted in the lab. A series of performances precedes the definition of the competence that will later be made the sole cause of these performances (1999: 122, 119).
Here the test (or “trial”) design works in an experimental fashion by exposing a given yeast ferment to different substances, under various conditions just to see what it would do. By figuring this out, Pasteur “designs an actor,” which we can rephrase as knowing an object by now being able to hold expectations of it, being able to make predictions about it, and therefore no longer needing to fear what it might do or even have to notice it.
Latour is far from alone in putting such emphasis on testing for the purposes of science. Karl Popper (1997), for instance, insists on the centrality of the test and its trial function: “Instead of discussing the ‘probability’ of a hypothesis we should try to assess what tests, what trials, it has withstood; that is, we should try to assess how far it has been able to prove its fitness to survive by standing up to tests. In brief, we should try to assess how far it has been ‘corroborated.’” To put a hypothesis on trial is, then, to imperil its existence, as an act of humility. Furthermore, it is to relinquish one’s own claim over the hypothesis. If a “test of survival” is the metric of scientific worth, then one scientist cannot single-handedly claim control: hypotheses need “corroboration,” a word which Popper prefers over “confirmation” because corroboration suggests something collective.
When Popper delineates the nuances of the scientific test, he also seems to establish tests for membership in a scientific community, based on this sort of collective orientation, which requires individual humility and which, from the individual scientist’s standpoint, means “inviting chance in” relative to their own hypothesis, subjecting it to more possibilities than the scientist might individually intend, including the possibility that they could be completely wrong.
Meanwhile, in Pinch’s approach, which focuses specifically on technology, tests work through “projection”:
If a scale model of a Boeing 747 airfoil performs satisfactorily in a wind tunnel, we can project that the wing of a Boeing 747 will perform satisfactorily in actual flight … It is the assumption of this similarity relationship that enables the projection to be made and that enables engineers warrantably to use the test results as grounds that they have found out something about the actual working of the technology (1993: 29).
The connection with a predictive mechanism is clear here, as projection entails not being surprised when we move into the new context of the “actual world” having specified certain relationships in the “test world.” The projection/predictive aspect is made almost verbatim here: “In order to say two things are similar, we bracket, or place in abeyance, all the things that make for possible differences. In other words, we select from myriad possibilities the relevant properties whereby we judge two things to be similar … [The] outcome of the tests can be taken to be either a success or a failure, depending upon the sorts of similarity and difference judgments made” (32).
Thus, a generative model is made in the testing environment, and it is then applied in the actual-world environment on the understanding that we will not need to identify prediction error when we do, as the generative model is similar enough to the actual world that those errors will have already been resolved. As Pinch concludes, “The analysis of testing developed here is, I suggest, completely generalizable. The notion of projection and the similarity relationships that it entails are present in all situations in which we would want to talk about testing” (37). And this particular use of testing does seem to find analogues far and wide, including the laboratory testing that is Latour’s focus and, more generally, educational or vocational testing, where, likewise, a similarity relationship depends on a test that can minimize the difference between two contexts (a difference we can understand according to the presence, or hopefully absence, of prediction error). But what if we try to apply the test concept to something more remote from science and technology, like morality?
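Pinch's two-step logic, bracket everything except the selected relevant properties, then warrant the projection only if the similarity judgment holds, can be sketched as follows (an illustrative toy only; the property names and the wind-tunnel example details are invented for the sketch, not drawn from Pinch):

```python
def similar(test_world, actual_world, relevant):
    """Pinch-style similarity judgment: place in abeyance everything
    except the selected 'relevant' properties, and compare the two
    contexts on those alone."""
    return all(test_world.get(k) == actual_world.get(k) for k in relevant)

def project(test_world, test_result, actual_world, relevant):
    """Warrant the projection only if the similarity judgment holds;
    otherwise the test result tells us nothing about the actual world."""
    if similar(test_world, actual_world, relevant):
        return test_result  # expect no surprise in the actual world
    return None             # projection unwarranted: errors may appear

wind_tunnel = {"airfoil": "747", "flow_regime": "high", "paint": "grey"}
flight = {"airfoil": "747", "flow_regime": "high", "paint": "white"}
# 'paint' is bracketed; only the selected properties decide similarity
result = project(wind_tunnel, "satisfactory", flight,
                 ["airfoil", "flow_regime"])
```

The design choice that carries the whole argument is the `relevant` list: success or failure of the projection depends on which similarity and difference judgments are selected, exactly as Pinch says.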
On this front, we can find statements like the following, from Boltanski and Thevenot:
A universe reduced to a common world would be a universe of definite worths in which a test, always conclusive (and thus finally useless), could absorb the commotion and silence it. Such an Eden-like universe in which ‘nothing ever happens by chance’ is maintained by a kind of sorcery that exhausts all the contingencies … An accident becomes a deficiency … Disturbed situations are often the ones that lead to uncertainties about worth and require recourse to a test in order to be resolved. The situation is then purified … In a true test, deception is unveiled: the pea under the mattress discloses the real princess. The masks fall; each participant finds his or her place. By the ordering that it presupposes, a peak moment distributes the beings in presence, and the true worth of each is tested (2006: 136-138).
In this rendering, tests are quite explicitly meant to make “accidents” stand out, in addition to fraud and fakery. The goal is the construction of a situation removed of all contingencies, in which, likewise, we do not notice anything because the test has put it in its proper order. When we do notice certain things (e.g. “the same people win all the same tests,” “they are singled out unfairly,” “they never got the opportunity,” etc), these are prediction errors based on some predictive ordering of the world that creates expectation. Simultaneously they are meaningful (for people) as forms of injustice.
Boltanski and Thevenot dovetail, on this point, with something that became clear for at least one person in the tradition of probability theory, namely Blaise Pascal (see Daston 1988: 15ff). For Pascal, the expectations formed by playing a game of chance could themselves be the source of noticing the equivalent of “error,” for instance, when some player wins far too often while another never wins. A test is the source of an order “without contingency” where “nothing ever happens by chance,” which in this case means a test is the rules of the game that allow for possibilities (all can win) while resolving those possibilities into a result (only one will win). This creates expectations, and Boltanski and Thevenot extrapolate from this (citing sports contests as epitomizing their theory) to identify “worlds” as different versions of this predictive ordering. Injustice is officially revealed at a second level of testing, then, as the test that creates this order can itself be tested (see Potthast 2017). Prediction errors can be noticed; likewise, they can be resolved through the adaptation of a generative model, which would seem to demand a reformative (or revolutionary) change of the test in a manner that would subsequently allow it to meet expectations.
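The Pascalian point can be made concrete with a small simulation (an illustrative toy under stated assumptions: a hypothetical three-player game, a rigged win probability, and an arbitrary tolerance; none of this is drawn from Pascal or from Boltanski and Thevenot): the rules of a fair game generate the expectation that each player wins about equally often, and it is against that expectation that one player's persistent winning becomes noticeable as "error" and meaningful as injustice.

```python
import random

def flag_errors(wins, n_games, n_players, tolerance=0.2):
    """Flag players whose observed win rate deviates from the fair
    expectation (1 / n_players) by more than `tolerance`. The game's
    own rules supply the predictive ordering that makes the deviation
    stand out."""
    expected = 1 / n_players
    return {p: w / n_games for p, w in wins.items()
            if abs(w / n_games - expected) > tolerance}

rng = random.Random(0)                 # fixed seed for reproducibility
players = ["A", "B", "C"]
wins = {p: 0 for p in players}
for _ in range(3000):
    # a rigged game: player A wins roughly 60% of the time
    wins[rng.choices(players, weights=[6, 2, 2])[0]] += 1

suspects = flag_errors(wins, 3000, len(players))
# player A's win rate (~0.6) stands out against the expected 1/3
```

A second-level test of the test, in Boltanski and Thevenot's sense, would then be directed at the rules themselves (here, the rigged `weights`), demanding a change that lets the game meet the expectations it generates.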
A genealogy of testing
What is interesting about these examples is that, abstracted from history as they are, they demonstrate parallel wings of a tradition that Foucault traces to the decline of the “ordeal” and the birth of the “inquiry.” Both fit the profile of the test, though only the latter gives the outcome the kind of official status or legitimacy of the laboratory test, the technological test, or the moral test. The ordeal involves a sheer confrontation that can occur at any time, and which creates expectations strictly in relation to some other specific thing, whether another person, something inanimate and possibly dangerous (like fire), or a practice of some kind (like writing a book). One can always test oneself against this again, and to move beyond known limitations, one must test oneself if one is to do anything like revise a generative model by encountering different prediction errors.
Foucault’s larger point here recommends a more general argument, rooted in a kind of genealogy, that justice requires a carceral; that the only form of justice is the one that rests in illegality. On the contrary, in his earlier work Foucault recommends a different approach to justice, one that renders any necessary association of justice and “the carceral archipelago” mistaken, as it would only consist of a relatively recent, though impactful, appropriation of justice. Thus, the argument Foucault presents is less nominal than it may seem at first, particularly when we consider the following:
What characterizes the act of justice is not resort to a court and to judges; it is not the intervention of magistrates (even if they had to be simple mediators or arbitrators). What characterizes the juridical act, the process or the procedure in the broad sense, is the regulated development of a dispute. And the intervention of judges, their opinion or decision, is only ever an episode in this development. What defines the juridical order is the way in which one confronts one another, the way in which one struggles. The rule and the struggle, the rule in the struggle, this is the juridical (Foucault 2019: 116).
Here the meaning of justice is expanded to refer to the “regulated development of a dispute,” which may or may not involve judges, take place in a court, or find at its culmination some sort of definitive decision or “judgment.” All of these are added features to the basic dispute.
Elsewhere Foucault expands on this by changing his language in a significant way: from “dispute,” justice shifts to “trial,” which he gives an expansive meaning by drawing a distinction within the category of trial itself, between epreuve and inquiry. There is a historical tension in the distinction: inquiries will come to replace epreuves (or “ordeals”) in a Eurocentric history. This division is apparent as early as the ancient Greeks who, in the Homeric version, would create justice through the rule-governed dispute, with the responsibility for deciding, not who spoke the truth but who was right, entrusted to the fight, the challenge, and “the risk that each one would run.” Contrary to this stands the Oedipus Rex form, exemplified by Sophocles’ great play. Here, in order to resolve a dispute of apparent patricide, we find one of the emblems of Athenian democracy: “the people took possession of the right to judge, of the right to tell the truth, to set the truth against their own masters, to judge those who governed them” (Foucault 2000: 32-33).
This division would be replicated in the later distinctions of Roman law, rooted in the inquiry, and Germanic law, rooted in something more resembling the contest or epreuve, with disputes conducted through either means. Yet with the collapse of the Carolingian Empire in the tenth century, “Germanic law triumphed, and Roman law fell into oblivion for several centuries.” Thus, feudal justice consisted of “disputes settled by the system of the test,” whether this be a “test of the individual’s social standing,” a test of verbal demonstration in formulaically presenting the grievance or denouncing one another, a test of an oath in which “the accused would be asked to take an oath and if he declined or hesitated he would lose the case,” and finally “the famous corporal, physical tests called ordeals, which consisted of subjecting a person to a sort of game, a struggle with his own body, to find out whether he would pass or fail.”
As the trajectory of justice moves, then, the epreuve ascends to prominence; testing becomes justice, in other words, as the means of resolving a dispute comes to center on the ordeal and its outcome, more generally as a way of letting God’s voice speak. In one general account, the trial by “cold water” involved “dunking the accused in a pond or a cistern; if the person sank, he or she was pronounced innocent, and if the person floated, he or she was found guilty and either maimed or killed.” In the trial by “hot iron,” the accused would “carry a hot iron a number of paces, after which the resulting wound was bandaged. If the wound showed signs of healing after three days, the accused was declared innocent, but if the wound appeared to be infected, a guilty verdict ensued” (Kerr, Forsyth and Plyley 1992).
The epreuve, in this case, remains a trial of force or between forces, which may be codified and regulated as the case may be: water or iron would be blessed before the ordeal, and thereby made to speak the word of God. More generally, to decline the test was to admit guilt in this binary structure, and the same held for declining another party’s challenge to a contest in a dispute. Thus, justice ended in a victory or a defeat, which appeared definitive, and this worked in an almost “automatic” way, because it required no third party in the form of one who judges.
Across this genealogy, we find something equivalent to the creation of uncertainty, in some cases deliberate, in others not, and then its resolution by some means into an outcome after a given duration of time. This outcome may carry an institutional sanction (as “justice”) or something more like the sanction of a fight, and presumably the certainty of what would happen should the fight occur again. In these different ways, predictions are made and expectations settled. An “error” stands out as noticeable in a variety of forms: as someone with whom one has a dispute; as an action taken, or event that happened, that was not expected, whether according to explicitly defined rules or not; or, in the case of the democratic link suggested by Foucault, as the pressing question of who should rule and whether such rule can be legitimate (see Mouffe 2000).
Some equivalent of the test (whether as inquiry or ordeal) is involved in all of these cases, and from the genealogy, at least, we can glimpse how consequential it might be for a new test form to come on the scene, or to win out over another, as a way of appropriating cognitive susceptibilities that must be activated if “testing” is to make any difference for predictive dispositions.
The larger point is that the concept of test is substantive here, because we can bridge its properties to properties of cognition. The claim is that our predictive dispositions, which are cognitive, create a susceptibility to tests: more specifically, we are likely to find tests meaningful because of our predictive dispositions. If tests are drawn upon across all of these different areas, specifically in cases of uncertainty (whether a dispute, an experiment, or a question of how to design a technology), or what we have established in general terms as “situations in which we are presently engaged with prediction error that we cannot help but notice a lot,” then it would follow that we are susceptible to tests as what allows us to absorb this uncertainty, a process we cannot understand, or even fully recognize, without reference to “real features of real brains” (Turner 2007). This, I want to propose, is how we can approach “test” as a dual reference concept, applicable in areas as varied as religion, politics, science, morality, and technology.
Tests are “meaningful for people” when they absorb uncertainty and generate expectation. They are also meaningful for people when they create uncertainty and enable critique. We could not identify something like a “test” if tests did not have these kinds of cognitive effects, and we cannot understand those cognitive effects without identifying a distinguishably cognitive process (e.g., one that is “psychologically real,” with plenty of “mentalistic content” extending even to neurons). In this case, the parallel between testing and uncertainty, on one side, and predictive processing and prediction error, on the other, is not a distant analogy, as is often the case with bracketing concepts. To understand testing’s absorption of uncertainty we need predictive processing, but to understand how predictive processing might matter for the things sociologists care about we need testing.
I’ll conclude with the suggestion that if “test” can qualify as this sort of dual reference concept, then we should favor it over other potential concepts that can account for meaning (e.g., “categories,” “worldview,” “interpretation”) but, arguably, cannot be dual reference.
Endnotes
In centering “test” in their concept-formation, the French “pragmatists” are not to be received as illegitimate appropriators of that title. Peirce (1992) himself encouraged a focus on the study of “potential” as referring to something “indeterminate yet capable of determination in any special case.” This could very well serve as a clarified restatement of the definition of test. Dewey (1998) makes the connection more explicit in his thorough conceptualization of test: “The conjunction of problematic and determinate characters in nature renders every existence, as well as every idea and human act, an experiment in fact, even though not in design. To be intelligently experimental is but to be conscious of this intersection of natural conditions so as to profit by it instead of being at its mercy. The Christian idea of this world and this life as a probation is a kind of distorted recognition of the situation; distorted because it applied wholesale to one stretch of existence in contrast with another, regarded as original and final. But in truth anything which can exist at any place and at any time occurs subject to tests imposed upon it by surroundings, which are only in part compatible and reinforcing. These surroundings test its strength and measure its endurance … That stablest thing we can speak of is not free from conditions set to it by other things … A thing may endure secula seculorum and yet not be everlasting; it will crumble before the gnawing truth of time, as it exceeds a certain measure. Every existence is an event.”
Bloch, Maurice (2012). Anthropology and the Cognitive Challenge. Cambridge UP.
Boltanski, Luc and Laurent Thévenot. (2006). On Justification. Princeton UP.
Bourdieu, Pierre. (1991). The Political Ontology of Martin Heidegger. Stanford UP.
Daly, Robert. (1977). “The Soteriological Significance of the Sacrifice of Isaac.” The Catholic Biblical Quarterly 39: 45-71.
Daston, Lorraine. (1988). Classical Probability in the Enlightenment. Princeton UP.
Deleuze, Gilles and Félix Guattari. (1994). What is Philosophy? Columbia UP.
Foucault, Michel. (2019). Penal Theories and Institutions: Lectures at the Collège de France, 1971-72, edited by Bernard Harcourt. Palgrave.
Foucault, Michel. (2000). “Truth and Juridical Forms” in Power: The Essential Works of Michel Foucault, 1954-1984, edited by James D. Faubion. The New Press.
Geertz, Clifford. (1973). “The Growth of Culture and the Evolution of Mind” in The Interpretation of Cultures. Basic Books.
Hutto, Daniel. (2018). “Getting into predictive processing’s great guessing game: Bootstrap heaven or hell?” Synthese 195: 2445-2458.
Kerr, Margaret, Richard Forsyth, and Michael Plyley. (1992). “Cold Water and Hot Iron: Trial by Ordeal in England.” Journal of Interdisciplinary History 22: 573-595.
Kurakin, Dmitry. (2020). “Culture and Cognition: The Durkheimian Principle of Sui Generis Synthesis vs. Cognitive-Based Models of Culture.” American Journal of Cultural Sociology 8: 63-89.
Latour, Bruno. (1988). The Pasteurization of France. Harvard UP.
Latour, Bruno. (1999). Pandora’s Hope. Harvard UP.
Lemieux, Cyril. (2008). “Scene Change in French Sociology?” L’oeil sociologique.
Lizardo, Omar. (2014). “Beyond the Comtean Schema: The Sociology of Culture and Cognition Versus Cognitive Social Science.” Sociological Forum 29: 983-989.
Marres, Noortje and David Stark. (2020). “Put to the Test: For a New Sociology of Testing.” British Journal of Sociology 71: 423-443.
Mast, Jason. (2020). “Representationalism and Cognitive Culturalism: Riders on Elephants on Turtles All the Way Down.” American Journal of Cultural Sociology 8: 90-123.
Mauss, Marcel. (1966). The Gift. Cohen & West.
Menary, Richard. (2015). “Pragmatism and the Pragmatic Turn in Cognitive Science” in The Pragmatic Turn: Toward Action-Oriented Views in Cognitive Science. MIT Press.
Mouffe, Chantal. (2000). The Democratic Paradox. Verso.
Norton, Matthew. (2018). “Meaning on the Move: Synthesizing Cognitive and Systems Concepts of Culture.” American Journal of Cultural Sociology 7: 1-28.
Pinch, Trevor. (1993). “‘Testing—One, Two, Three… Testing!’: Toward a Sociology of Testing.” Science, Technology, & Human Values 18: 25-41.
Potthast, Jörg. (2017). “The Sociology of Conventions and Testing” in Social Theory Now, edited by Claudio Benzecry, Monika Krause, and Isaac Ariail Reed. University of Chicago Press.
Popper, Karl. (1997). The Logic of Scientific Discovery. Routledge.
Ronell, Avital. (2005). The Test Drive. University of Illinois Press.
Sewell, William. (2005). “History, Synchrony, and Culture: Reflections on the Work of Clifford Geertz” in Logics of History.
Sperber, Dan. (1985). “Anthropology and Psychology: Towards an Epidemiology of Representations.” Man 20: 73-89.
Strand, Michael. (2020). “Sociology and Philosophy in the United States since the Sixties: Death and Resurrection of a Folk Action Obstacle.” Theory and Society 49: 101-150.
Strand, Michael (2021). “Cognition, Practice and Learning in the Discourse of the Human Sciences” in Handbook in Classical Sociological Theory. Springer.
Strand, Michael and Omar Lizardo. (forthcoming). “For a Probabilistic Sociology: A History of Concept-Formation with Pierre Bourdieu.” Theory and Society.
Strand, Michael and Omar Lizardo. (forthcoming). “Chance, Orientation and Interpretation: Max Weber’s Neglected Probabilism and the Future of Social Theory.” Sociological Theory.
Turner, Stephen. (2007). “Social Theory as Cognitive Neuroscience.” European Journal of Social Theory 10: 357-374.