Wax On, Wax Off: Transposability and the Problem with “Domains”


In the film Happy Gilmore, Adam Sandler plays a hockey player who is a terrible skater but has a powerful slap shot. The main arc of the film has him bringing this ability to the entirely different sport of golf. This is a fairly common trope, and very often it is framed as something biological: we don’t know where Sandler’s character learned to hit the puck as he does; he may just be “a natural.” In The Karate Kid, though, we famously see the “wax on, wax off” motions of waxing Mr. Miyagi’s car turned into blocking motions in hand-to-hand combat. The same scenario occurs in, for example, the zombie series Santa Clarita Diet. Joel is instructed to shove a pear into a dead chicken as quickly as possible. Later, when fighting a zombie, he shoves a lemon into the zombie’s mouth as if by muscle memory. We then see that the odd pear-chicken skill is meant to remove the zombie’s ability to bite. In each case, we see an ability seemingly transposed across domains.

In a recent blog post, Omar offered a critical discussion of the use of “transposability” as a concept in the sociology of culture. His target is the idea that schematic knowledge is, or must be, “transposable” across “domains,” an idea he regards as a critical error:

Bourdieu and Sewell (drawing on Bourdieu) made a crucial property conjunction error, bestowing a magical power (transposability) to implicit (personal) culture. This type of personal culture cannot display the transposability property precisely because it is implicit…

If implicit culture is, by definition, domain-specific, how can it be transposed across domains? The argument, as I understand it, is that transposing schemas “requires that they be ‘representationally redescribed’… into more flexible explicit formats.” The complication with this discussion so far, though, is that “domain” is doing a lot of theoretical heavy lifting, and I don’t think it can hold the weight.

The Problem with Domains

Let’s start with Durkheim’s “puzzle” in the introduction to Elementary Forms. As he saw it, in the quest to understand where knowledge comes from there were two camps: the Rationalists and the Empiricists. He thought the Empiricists were on the right track in that we gain knowledge from our moment-to-moment experience. However, the Empiricists didn’t have a solution for integrating what we learn across each moment: 

…the things which persons perceive change from day to day, and from moment to moment. Nothing is ever exactly the same twice, and the stream of perception…must be constantly changing… The question from this perspective, is how general concepts can be derived from this… stream of particular experiences, which are literally not the same from one moment to the next, let alone from person to person. (Rawls 2005:56)

As we are exposed to chairs in different moments, what is it that allows us to pull out the basic properties of “chairness”? Indeed, even the same chair at different times and in different places will be objectively “different”: diverse shading, slow decay, a coating of dust. Of course, Durkheim was less concerned with the mundane (like chairness) than with the “pure categories of the understanding,” a.k.a. the “skeleton of thought,” a.k.a. the “elementary forms,” or often just “The Categories.” Today we would likely call all of these schemas, and more specifically something like image schemas or primary schemas.

The developmental neuroscientist Jean Mandler (2004) approaches Durkheim’s problem of knowledge with the question: what is the minimum that must be innate to get learning started? She argues that we have an innate attentional bias toward things in motion but, more importantly, also an innate ability to schematize and to redescribe experience in terms of those schematizations. Schematization is, in my mind, best understood not as retaining but as forgetting — elegant forgetting. As the richness of an immediate moment starts to fade, certain properties are retained because they have structural analogies in the current moment. Properties without such analogies continue along a fading path, slowly falling away (unless brought to the fore by the ecology of a new moment). What is left is a fuzzy structure — a schema — with probabilistic associations among properties, probabilities shaped by exposure to perceptual regularities. Therefore, the most persistent perceptual regularities will also be the most widely shared.
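
This picture of schematization as “elegant forgetting” can be caricatured in a few lines of code. The sketch below is purely illustrative, not a model Mandler (or anyone else) has proposed: properties of each momentary experience decay unless the next moment offers an analogue, leaving behind a set of probabilistic associations. All names and numbers are invented.

```python
def schematize(moments, decay=0.8, reinforce=1.0):
    """Build a fuzzy 'schema' from a stream of momentary experiences.

    Each moment is a set of perceived properties. Properties fade
    (decay) from one moment to the next unless the current moment
    offers an analogue, in which case they are reinforced. What
    remains is a dictionary of probabilistic associations: the schema.
    """
    weights = {}
    for moment in moments:
        # everything not re-encountered continues along a fading path
        for prop in weights:
            weights[prop] *= decay
        # properties with analogues in the current moment are retained
        for prop in moment:
            weights[prop] = weights.get(prop, 0.0) + reinforce
    total = sum(weights.values())
    return {prop: w / total for prop, w in weights.items()}

# Chairs encountered across different moments: persistent regularities
# (legs, seat) come to dominate; incidental properties (dust, red) fade.
moments = [
    {"legs", "seat", "back", "dust"},
    {"legs", "seat", "back", "red"},
    {"legs", "seat", "wood"},
    {"legs", "seat", "back"},
]
schema = schematize(moments)
```

Running this, the weights for “legs” and “seat” end up several times larger than those for “dust” or “red” — a toy version of the claim that the most persistent perceptual regularities are the ones retained.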

Mandler also argues that once we have a few basic schemas, as fuzzy and open-ended as they may remain, we can then redescribe our experience using them. More importantly for the present discussion, both schematization and redescription seem to implicate transposability. But Mandler works with infants and toddlers, so much of this occurs early and rapidly in human development — before what we would typically call “conscious control” is up and running.

It is here that the discussion implicates “domains.” Can we reserve “transposability” for the use of schematic knowledge in a “new domain,” and simply call the more pervasive carrying of schemas from moment to moment something else? In this setup, if I encounter or think about a chair in chair-domains (domains where chairs typically are), then drawing on my chair-schema does not qualify as transposability. It is only when chairs are encountered or thought about in non-chair-domains (domains where chairs typically aren’t) that transposition is occurring. Without an analytical definition of what a domain is, however, this becomes slippery: if transposition requires that implicit knowledge be “representationally redescribed into more flexible explicit formats,” then by definition we can only know we are in a “new domain” when we see this redescription occurring.

Spectrum of Transposability

I think we should ditch domains as the linchpin of transposability rather than salvage them. Schematic knowledge is transposable, at least in the most basic sense of “drawing on implicit knowledge” from moment to moment. But, sure, it is not transposable without constraint. Implicit knowledge is called forth by the recognition of familiar affordances in a moment. The problem is that affordances are not cleanly bundled into mutually exclusive “domains.”

Perhaps transposition across normatively distinct domains typically occurs via deliberate mediation, but the idea that it only occurs via deliberate mediation is a step too far. New situations will evoke some implicit knowledge acquired in prior situations without the individual deliberating. Much of what we call “being a natural” is likely just such a process. True, Mr. Miyagi was conscious that “wax on, wax off” would transpose to fighting, but Daniel was not. And, more importantly, it was Daniel who was doing the transposing, and it was the affordances of the fighting situation that evoked the “wax on, wax off” response.

As a starker example of transposition without deliberation — or even against deliberation — we can look to hysteresis: the mismatch between person and environment (Strand and Lizardo 2017). When I was a freshman in college, I boxed for extra money. I had never boxed before, but I had wrestled for years. Luckily for me, someone was kind enough to properly wrap my wrists during my first fight! During that fight, I continually did something I knew was not correct: I got an underhook, a wrestling move in which you place your bicep in the armpit of your opponent and wrap your hand around their shoulder, giving you leverage over their body position. The second or third time I did this, the referee stopped the fight and informed me that this was not allowed in boxing and that I would lose points if I continued. Getting an underhook whenever the move was “open” was “second nature.” It was “muscle memory.” I deliberately tried to stop this automatic response. I continued to fail. I lost points. Despite boxing being a domain distinct from wrestling (normatively), my body interpreted the affordances of the moment as a familiar domain (ecologically). Transposition occurred against my conscious effort.

References

Durkheim, Emile. 1995. The Elementary Forms of Religious Life. New York: Free Press.

Mandler, Jean Matter. 2004. The Foundations of Mind: Origins of Conceptual Thought. Oxford University Press.

Rawls, Anne Warfield. 2005. Epistemology and Practice: Durkheim’s The Elementary Forms of Religious Life. Cambridge University Press.

Strand, Michael, and Omar Lizardo. 2017. “The Hysteresis Effect: Theorizing Mismatch in Action.” Journal for the Theory of Social Behaviour 47(2):164–94.

Does Labeling Make a Thing “a Thing”?


“Reality is continuous” Zerubavel (1996:426) tells us, “and if we envision distinct clusters separated from one another by actual gaps it is because we have been socialized to ‘see’ them.” This assumption, that without “socialization” an individual would experience reality as meaningless—or as William James (1890:488) said of the newborn “one great blooming, buzzing confusion”—is fairly common in sociology. 

Hand in hand with this goes the assumption that socialization is learning language: “It is language that helps us carve out of experiential continua discrete categories such as ‘long’ and ‘short’ or ‘hot’ and ‘cold’” (Zerubavel 1996:427). Boiled down, this view of socialization is a very standard “fax” or “downloading” model in which the socializing agents “install” the language in its entirety into the pre-socialized infant. The previously chaotic mass of reality is now lumped and only then becomes meaningful to the infant. Furthermore, because the socializing agents have the same language installed, the world is lumped in the same (arbitrary) way for them as well. This is what allows for intersubjective experience.

As Edmund Leach puts it:

“I postulate that the physical and social environment of a young child is perceived as a continuum. It does not contain any intrinsically separate ‘things.’ The child, in due course, is taught to impose upon this environment a kind of discriminating grid which serves to distinguish the world as being composed of a large number of separate things, each labeled with a name. This world is a representation of our language categories, not vice-versa.” Leach (1964:34)

Where did this assumption come from?

Generally, Durkheim’s Elementary Forms is cited to shoulder these assumptions. According to its introduction, the problem to be solved is that an individual’s experience is always particular: “A sensation or an image always relies upon a determined object, or upon a collection of objects of the same sort, and express the momentary condition of a particular consciousness” (Durkheim 1995:13). From this, Durkheim argues, humans cannot have learned the basic “categories” by which we think—like cause, substance, class, etc.—from individual experience: not because experience is continuous, but because it is always discontinuous and unique. The alternative was that the categories exist “a priori,” which, whether that apriorism is nativist or idealist, Durkheim found unsatisfying.

While there is of course much debate about this, Durkheim posited a sociogenesis of these basic categories from the organization of “primitive” societies which “preserves all the essential principles of apriorism… It leaves reason with its specific power, accounts for that power, and does so without leaving the observable world” (Durkheim 1995:18). After their genesis, however, there was no need to re-create them: “in contrast to Kant, Durkheim argued that these categories are a concrete historical product, not an axiom of thought, but in contrast to Hume, he acknowledged that these categories are as good as a priori for actual thought, for they are universally shared” (Martin 2011:119).

Once generated at the moment human society first formed, these categories simply had to be passed down from generation to generation. It seems intuitive that language would be the mechanism of transmission: “The system of concepts with which we think in every-day life is that expressed by the vocabulary of our mother tongue; for every word translates a concept” (Durkheim 1995:435).

It is here where we also get the more “relativist” interpretation of Elementary Forms in which each bounded “culture” can live in a distinct reality delimited by each language. Furthermore, while Durkheim’s argument is about the most generic (and universal) concepts of human thought, Zerubavel argues that our perception of the world is changed by highly specific labels: “As we assign them distinct labels, we thus come to perceive ‘bantamweight’ boxers and ‘four-star’ hotels as if they were indeed qualitatively different from ‘featherweight’ boxers and ‘three-star’” (1996:427 emphasis added).

We see a similar notion in The Social Construction of Reality, to which Zerubavel’s work is indebted: “The language used in everyday life continuously provides me with the necessary objectifications and posits the order within which these make sense…” (Berger and Luckmann [1966] 1991:35 emphasis added).

Is such an assumption defensible? 

To outline the notion up to this point: First, we imagine the unsocialized person—usually, but not necessarily, the pre-linguistic infant. Their senses provide information about the world to their brain, but it is either a completely undifferentiated mass or hopelessly particular from one moment to the next. In either case, their experience has no meaning to them. Second, the unsocialized person somehow learns that a portion of their experience has a “label” or “name” and thus can be both lumped together and split from the rest of experience, and only then does it become meaningful. Third, on this basis, each language forms a kind of “island” or “prison-house” of meaning, carving up the undifferentiated world in culturally unique ways, such that things “thinkable” in one language are “unthinkable” in others. (I will set aside the problem of how exactly these labels are internalized.)

Buried within this general notion are four positions: (1) learning a label is necessary and sufficient; (2) learning a label is necessary but not sufficient; (3) learning a label is not necessary but is sufficient; (4) learning a label is not necessary, but is common evidence that other processes have made a thing “a thing.” For Leach and Zerubavel (and some interpretations of Durkheim), it appears to be (1): once you have a label, boom! Then, and only then, can you perceive a thing. For Berger and Luckmann, it is occasionally (1) and (2) and other times (3) and (4). For example, Berger writes in The Sacred Canopy ([1967] 2011:20):

The objective nomos is given in the process of objectivation as such. The fact of language, even if taken by itself, can readily be seen as the imposition of order upon experience. Language nomizes by imposing differentiation and structure upon the ongoing flux of experience. As an item of experience is named, it is, ipso facto, taken out of this flux of experience and given stability as the entity so named.

That’s about as extreme as it gets. However, in The Social Construction of Reality, a slightly tempered view is taken:

The cavalry will also use a different language in more than an instrumental sense… This role-specific language is internalized in toto by the individual as he is trained for mounted combat. He becomes a cavalryman not only by acquiring the requisite skills but by becoming capable of understanding and using this language. (Berger and Luckmann [1966] 1991:159 emphasis added)

Although there are other parts of The Social Construction of Reality which privilege language above all (and disregarding the “in toto”), here the suggestion is that vocabulary is part of a practice. In other words, “an angry infantryman swears by making reference to his aching feet” because of the experience of aching feet, and “the cavalryman may mention his horse’s backside,” again, because of his experience with horses. Without their role-specific language, the infantryman would still be able to perceive aching feet and the cavalryman would still know a horse’s backside. These terms are meaningful to them—and useful as metaphors—because of their experiences, not the other way around.

For this to be the case, however, we must reject the notion that, without socialization (as the internalization of language), perception would amount to “one great blooming, buzzing confusion.” Rather, reality has order without interpretation, and we can directly experience it as such. Even infants perceive a world that is pre-clumped, and early concept formation precedes language acquisition and follows perceptual differentiation (Mandler 2008:209):

…between 7 and 11 months (and perhaps starting earlier) infants develop a number of [highly schematic] concepts like animal, furniture, plant, and container… ‘basic-level’ artifact concepts such as cup, pan, bed and so on are not well-established until the middle of the second year, and natural kind concepts such as dog and tree tend to be even later… Needless to say, this is long after infants are fully capable of distinguishing these categories on a perceptual basis. 

Labels likely play a greater role later in the process of socialization (perhaps especially during secondary socialization). In already linguistically competent people, labels can be used to select certain features of perceived objects and downplay others, to exaggerate differences between similar objects, or to group perceptually distinct objects into one category (Taylor, Stoltz, and McDonnell 2019). However, this does not mean that labels alone literally “filter” our perception. Indeed, evidence suggests that both adults and infants perceive the world first through an unfiltered feedforward sweep, and only after perceiving do we “curate” the information through automatic or deliberate prediction and attention (Alilović et al. 2018; Mandler 2008). Language may make it faster, easier, and therefore more likely to think about some things over others, but this does not render anything unthinkable or imperceptible (Boroditsky 2001). Likewise, it is unlikely that naming something is necessary and sufficient to make a thing “a thing.”

References

Alilović, Josipa, Bart Timmermans, Leon C. Reteig, Simon van Gaal, and Heleen A. Slagter. 2018. “No Evidence That Predictions and Attention Modulate the First Feedforward Sweep of Cortical Information Processing.” bioRxiv 351965.

Berger, Peter L. [1967] 2011. The Sacred Canopy: Elements of a Sociological Theory of Religion. Open Road Media.

Berger, Peter L., and Thomas Luckmann. [1966] 1991. The Social Construction of Reality: A Treatise in the Sociology of Knowledge. Penguin.

Boroditsky, L. 2001. “Does Language Shape Thought? Mandarin and English Speakers’ Conceptions of Time.” Cognitive Psychology 43(1):1–22.

Durkheim, Emile. 1995. The Elementary Forms of Religious Life. New York: Free Press.

James, W. 1890. The Principles of Psychology, Vol 1. Henry Holt.

Leach, Edmund. 1964. “Anthropological Aspects of Language: Animal Categories and Verbal Abuse.” Pp. 23–63 in New Directions in the Study of Language, edited by E. H. Lenneberg. Cambridge, MA: MIT Press.

Mandler, J. M. 2008. “On the Birth and Growth of Concepts.” Philosophical Psychology 21(2):207–30.

Martin, John Levi. 2011. The Explanation of Social Action. Oxford University Press, USA.

Taylor, Marshall A., Dustin S. Stoltz, and Terence E. McDonnell. 2019. “Binding Significance to Form: Cultural Objects, Neural Binding, and Cultural Change.” Poetics 73:1–16.

Zerubavel, Eviatar. 1996. “Lumping and Splitting: Notes on Social Classification.” Sociological Forum 11(3):421–33.

Did Saussure Say Meaning is Arbitrary?

The short answer is no, Saussure did not say meaning is arbitrary.

Why do we care what Saussure said? Because some influential work in cultural sociology makes the consequential (and, I think, incorrect) claim that meaning is arbitrary and uses Saussure’s work to justify it. Consider, as an example, some of the work of Jeffrey Alexander. When the “strong program” of cultural sociology was just a twinkle in Alexander’s eye, he wrote (1990:536):

Since Saussure set forth semiotic philosophy in his general theory of linguistics, its key stipulation has been the arbitrary relation of sign and referent: there can be found no “rational reason,” no force or correspondence in the outside world, for the particular sign that the actor has chosen to represent his or her world.

A few years later, in the strong program’s foundational article, Alexander and Smith claim (1993:157):

Because meaning is produced by the internal play of signifiers, the formal autonomy of culture from social structural determination is assured. To paraphrase Saussure in a sociological way, the arbitrary status of a sign means that its meaning is derived not from its social referent—the signified—but from its relation to other symbols, or signifiers within a discursive code. It is only difference that defines meaning, not an ontological or verifiable linkage to extra-symbolic reality.

Then, finally, as a more recent example, Alexander writes in Performance and Power (2011:10, 99):

A sign’s meaning is arbitrary, Saussure demonstrated, in that “it actually has no natural connection with the signified” (1985:38), that is, the object it is understood to represent. Its meaning is arbitrary in relation to its referent in the real world…

Not long after Durkheim’s declaration, and quite likely in response to it, there emerged a dramatic transformation in linguistic understanding that continues to ramify in the humanities and the social sciences. Ferdinand de Saussure and Roman Jakobson propose that words gain meaning not by referring to things “out there” in the real world, but from their structured relation to other words inside of language.

Misinterpreting Saussure

In my forthcoming paper, “Becoming A Dominant Misinterpreted Source,” I show that much of this received understanding of Saussure misses the mark.

To begin my journey down the Saussurean rabbit hole, I reviewed 167 articles and book chapters in sociology that cite Saussure, to distill the most common interpretations of his work. The figure below shows the pages of The Course in General Linguistics (Cours) on the x-axis and the number of citations each page receives on the y-axis. Of the 167 citing works, however, only 35 offer page numbers. Furthermore, those that do are mostly confined to four basic topics: (1) the langage, langue, parole distinction; (2) the definition of “semiology”; (3) the definition of the “linguistic sign”; and (4) the definition of “linguistic value.”

What is not cited is over half of the book: Saussure’s discussions of grammar, principles of articulation, diachronic (i.e., evolutionary) linguistics, geographic linguistics, and retrospective (or historical/anthropological) linguistics. (And, of course, the Cours covers this wide range of topics because it was compiled from lecture notes for his linguistics courses and published after his death.)

[Figure: Number of citations to each page of the Cours]
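
For the curious, the tallying behind such a figure is simple. The sketch below uses invented page numbers (the real data accompany the paper) just to show the mechanics of counting citations per page:

```python
from collections import Counter

# Hypothetical page citations pulled from the 35 citing works that
# give page numbers; these values are invented for illustration only.
cited_pages = [66, 66, 66, 16, 16, 111, 112, 120, 120, 9]

counts = Counter(cited_pages)

# Pages of the Cours on the x-axis, citation counts on the y-axis,
# here rendered as a crude text bar chart instead of a plot.
for page in sorted(counts):
    print(f"p. {page:>3}: {'#' * counts[page]}")
```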

Next, to determine whether these common interpretations are correct, I engaged in an exegesis of the Cours, as well as a reading of other texts written by Saussure and of texts about Saussure written by his biographers and other historians of linguistics. While there are some things we’ve been getting right, there are important things we’ve been getting wrong.

First, it is commonly assumed by sociologists that Saussure was putting forth a philosophy of language—that is, an account of how language refers to things in the world (often glossed as the “problem of reference”). He was, however, putting forth a philosophy of linguistics, or how language was to be studied as a science (and, in fact, he spends very little time discussing “semiology,” which he saw as a branch of general psychology). The implication is that Saussure’s “key stipulation,” as Alexander asserts, was not “the arbitrary relation of sign and referent.” Rather, for Saussure the linguistic sign was a wholly psychological entity, rendering both the physical sound and the physical referent outside the scope of general linguistics.

Saussure claimed that a linguistic sign was composed of two aspects. The first was the mental impression of the sounds of speech (image-acoustique or sound-image), which he called the signifier. The second was an idea or concept, understood in psychological terms, which he called the signified. What was arbitrary for Saussure was not the relation between a spoken word and its referent; rather, what he claimed was arbitrary was the relation between signifier and signified, both mental entities (see Table 1). This arbitrariness, he asserted, allowed the linguist to justify studying the totality of these sound-images as if an autonomous system.

Table 1.

                    Sound-related               Meaning-related
Mental Entity       Sound-Image (Signifier)     Concept (Signified)
Physical Entity     Sounds of Speech            Referent

Here we can see a kind of ur-argument for claiming that some object of study is autonomous, and thus requires the specialized tools of a distinct enterprise. This, I would argue, is why Alexander wants to borrow Saussure for non-linguistic domains: it offers a means to assert that “the formal autonomy of culture from social structural determination is assured.” However, Saussure was very clear that he saw language as a unique entity, and thus his argument for autonomy was also unique to language. Although he acknowledged some ways language was not arbitrary, and sketched out how the study of language was a subfield of the general science of “semiology,” he felt language was set apart by being the most arbitrary of all ([1986] 2009:88):

In order to emphasise that a language is nothing other than a social institution, Whitney [a famous American linguist] quite rightly insisted upon the arbitrary character of linguistic signs. In so doing, he pointed linguistics in the right direction. But he did not go far enough. For he failed to see that this arbitrary character fundamentally distinguished language from all other institutions.

Another, somewhat tricky, misinterpretation of Saussure relates to his definition of “value.” It is often assumed that what Saussure meant by value was synonymous with “meaning.” But “linguistic value” concerned the organization of sound-images in the mind, distinct from the organization of meaning, which had to do with concepts or ideas. Furthermore, value is not the same as the qualities of physical sounds; rather, it is about how sound-images are related to each other. As Saussure states,

“Proof of this [that value is distinct from meaning and physical sound] is that the value of a term may be modified without either its meaning or its sound being affected, solely because a neighboring term has been modified” (Saussure [1986] 2009:120).

Here we see a second ur-argument emerge, related to Endogeneity and Mutual Constitution. The object of inquiry is not only autonomous, but the components of the system can only be understood through how they relate to every other component in that system. Change one element in the system, and every element changes accordingly. Here again Saussure is quick to argue that language—specifically understood as the system of linguistic values—is unique (Saussure [1986] 2009:80):

…language is a system of pure values which are determined by nothing except the momentary arrangement of its terms. A value—so long as it is somehow rooted in things and in their natural relations, as happens with economics (the value of a plot of ground, for instance, is related to its productivity)—can to some extent be traced in time if we remember that it depends at each moment upon a system of coexisting values. Its link with things gives it, perforce, a natural basis, and the judgments that we base on such values are therefore never completely arbitrary; their variability is limited. But we have just seen that natural data have no place in linguistics.

The final misunderstanding involves whether Saussure was developing Durkheim’s thoughts about culture. To use Alexander again, consider (1988:4–5):

Saussure depended… on a number of key concepts that were identical with the controversial and widely discussed terms of the Durkheim school. Most linguistic historians (e.g., Doroszewski 1933:89–90; Ardener 1971:xxxii–xiv), indeed, have interpreted these resemblances as evidence of Durkheim’s very significant influence on Saussure… The echoes in Saussurean linguistics of Durkheim’s symbolic theory are deep and substantial. Just as Durkheim insisted that religious symbols could not be reduced to their interactional base, Saussure emphasized the autonomy of linguistic signs vis-à-vis their social and physical referents.

In the paper I go into detail demonstrating why this is very unlikely, but here I’ll just quote a couple of linguistic historians. The first essay on the matter in English states (Washabaugh 1974:28):

Most linguistic historians (Doroszewski 1933; Ardener 1971; Robins 1967; Mounin 1968) have interpreted these resemblances as evidence of Durkheim’s influence over Saussure. However, a careful reading of Durkheim will show that these resemblances are only terminological.

Perhaps the most comprehensive discussion of the Saussure-Durkheim link comes from Koerner, where he concludes (1987:22): “I do not see… any convincing concrete, textual, evidence that Saussure incorporated Durkheimian sociological concepts in his theoretical argument.”

Is Meaning Really Arbitrary?

A far more important question than whether Saussure actually claimed meaning is arbitrary is whether meaning actually is arbitrary.

Alongside appeals to the authority of Saussure are “just so” stories that seem to show arbitrariness as an obvious fact. As it relates to the present-day Latin alphabet in English, for example, we can assert that the letter “A” is arbitrarily related to the sound that it might represent. However, what about the “O” which does correspond to the shape of the lips when we make the /o/ sound? For the same reason I cannot use this latter example to assert that all letters in the alphabet correspond to the shape of the mouth, we should not use the former to claim that all letters are arbitrarily related to their sounds. Even worse is using such examples to make claims about the operation of meaning in general (the fallacy of composition). The range of arbitrariness or motivation in semiotic systems is, after all, an empirical question which scores of scholars have been exploring for decades. More problematic than misinterpreting Saussure, then, is wielding his lecture notes as a means to shut down this line of inquiry.

Often in tandem with claims that meaning is arbitrary is the assertion that meaning is “conventional,” as if the latter were a prerequisite for, or proof of, the former. But does this need to be the case? I would argue it does not, and furthermore that rejecting it opens up a much broader scope for cultural analysis. The meaning of, say, smoke can be “motivated” in that smoke is correlated with the presence of fire—but, and this is key, fire is not the only thing with which smoke is associated. As fire is also used to cook, for example, smoke is also associated with food. How do we know whether smoke “means” fire or food if not through some human selection and convention? That the associations between meanings and signs are made more or less probable by the structure of reality does not mean they are not also conventional. A more fruitful point of departure for cultural analysis, I would contend, is a framework that can account for both the arbitrary and motivated aspects of meaning.

References

Alexander, Jeffrey. 2011. Performance and Power. Polity.

Alexander, Jeffrey C. 1988. “Culture and Political Crisis: ‘Watergate’ and Durkheimian Sociology.” Pp. 187–224 in Durkheimian Sociology: Cultural Studies.

Alexander, Jeffrey C. 1990. “Beyond the Epistemological Dilemma: General Theory in a Postpositivist Mode.” Pp. 531–44 in Sociological Forum. Vol. 5. Springer.

Alexander, Jeffrey C. and Philip Smith. 1993. “The Discourse of American Civil Society: A New Proposal for Cultural Studies.” Theory and Society 22(2):151–207.

Koerner, E. F. Konrad. 1987. On the Problem of “Influence” in Linguistic Historiography. John Benjamins.

Saussure, Ferdinand de. [1916] 2009. Course in General Linguistics. Edited by C. Bally and A. Sechehaye. London: Bloomsbury Academic.

Stoltz, Dustin S. Forthcoming. “Becoming A Dominant Misinterpreted Source: The Case of Ferdinand De Saussure in Cultural Sociology.” Journal of Classical Sociology.

Washabaugh, William. 1974. “Saussure, Durkheim, and Sociolinguistic Theory.” Archivum Linguisticum 5:25–34.

Categories, Part III: Expert Categories and the Scholastic Fallacy

There’s a story — probably a myth — about Pythagoras killing one of the members of his math cult because this member discovered irrational numbers (Choike 1980). (He also either despised or revered beans.)

“Oh no, fava beans.” ~Pythagoras (Wikimedia Commons)

The Greeks spent a lot of time arguing about arche, or the primary “stuff.” Empedocles argued that it was the four elements. Anaximenes thought it was just air. Thales thought it was water. Pythagoras and his followers figured it was numbers (Klein 1992, page 64):

They saw the true grounds of the things in this world in their countableness, inasmuch as the condition of being a “world” is primarily determined by the presence of an “ordered arrangement” — [which] rests on the fact that the things ordered are delimited with respect to one another and so become countable.

For the Pythagoreans the clean, crisp integers were sacred because they conveyed a harmony — an orderedness — and there is an undeniable allure to this precision. (Indeed, such an allure that Pythagoras and his followers were driven to do some very strange things.)

Looking at even simple arithmetic, it does seem obvious that classical categories do in fact exist: there is a set of integers, a set of odd numbers, a set of even numbers, and so on. If we continue to follow this line of thought to pure mathematics in general, there is an almost mystical quality to the “objects” of this discipline.

When thinking about mathematical objects like geometric forms, however, there is a fundamental difference between squares or circles or triangles as understood in our daily life (i.e. as having graded similarities to certain exemplar shapes we likely learned about in grade school) and the kind of perfectly precise shapes in theoretical geometry. That is, as far as we know, a perfect circle does not exist in nature (even though an electron’s spin and neutron stars are pretty damn close), nor has humankind been able to manufacture a perfect shape.

And this is the main point: precision is weird. If “crispness” is really only found in mathematics (and pure mathematics at that), then we should be skeptical of the analytical traditions’ use of discrete units as an analogy for knowledge in general.

But, sometimes, thinking with classical categories is useful.

Property Spaces

While we can be skeptical of the Chomskyan program presuming syntactical units must necessarily be classical categories, this does not mean we can never proceed as if phenomena could be divided into crisp sets.

Theorists commonly make something like “n by n” tables, typologies, or more technically, property spaces — for the classic statements see Lazarsfeld (1937) and Barton (1955), but this is elaborated by Ragin (2000:76–85), Becker (2008:173–215), and most extensively in chapters 4, 5, and 6 of Karlsson and Bergman (2016). In this procedure, the analyst outlines a few dimensions that account for the most variation in their empirical observations. This is essentially “dimension reduction”: we take the inherent heterogeneity (and particularity) of social experience and simplify it into the patterns that are most explanatory (if only ideal-typical).

For example, Alejandro Portes and Julia Sensenbrenner (1993) tell us that there are four sources of social capital (each deriving conveniently from the work of Durkheim, Simmel, Weber, and Marx and Engels, respectively). These four sources are then grouped into those that come from “consummatory” (or principled) motivations and those that come from “instrumental” motivations. Thus the “motivation” is the single dimension that divides our Social Capital property space into a Set A and a Set B: either resources are exchanged because of the actor’s own self-interest, or not. More often, however, these basic property spaces based on simple categorical distinctions are the starting point for more complex (or “fitted”) property spaces.
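For the programmatically inclined, a property space is just the Cartesian product of a few dimensions. Here is a minimal sketch in Python; the second dimension is my own illustrative addition, not something from Portes and Sensenbrenner:

```python
from itertools import product

def property_space(dimensions):
    """Cross every value of every dimension to enumerate the cells
    of a property space (a 'crisp' typology)."""
    names = list(dimensions)
    return [dict(zip(names, values))
            for values in product(*dimensions.values())]

# Two binary dimensions yield a classic 2x2 typology.
dims = {"motivation": ["consummatory", "instrumental"],
        "tie_strength": ["strong", "weak"]}  # second dimension is hypothetical
cells = property_space(dims)
assert len(cells) == 4  # every observation falls in exactly one cell
```

Adding a dimension doubles (or more) the number of cells, which is exactly the “elaboration” move discussed next.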

Consider Aliza Luft’s excellent “Toward a Dynamic Theory of Action at the Micro Level of Genocide: Killing, Desistance, and Saving in 1994 Rwanda.” Luft begins with a critique of prior categorical thinking: “Research on genocide tends to pregroup actors—as perpetrators, victims, or bystanders—and to study each as a coherent collectivity (often identified by their ethnic category)” (Luft 2015, page 148). Previously, analysts explained participation in genocide in one of four ways: (1) members of the perpetrating group were obeying an authority, (2) responding to intergroup antagonism, (3) succumbing to intragroup norms or peer pressure, or (4) dehumanizing the outgroup. While all are useful theories, she explains, they are complicated by the empirical presence of behavioral variation. That is, not everyone associated with a perpetrating group engages in violence at the same time or consistently throughout a conflict (and may even save members of the victimized group).

[Figure: spanning tree of the elaborated property space]

What she does to meet this challenge is add dimensions to a binary property space that previously consisted of a group committing murder and a group being murdered. Focusing on the former, she notes that (1) not everyone in that group actually participates, (2) some of those who did (or did not) participate eventually cease (or begin) participating, and (3) some not only ceased participating but actively saved members of the outgroup. Taken together, we arrive at a property space that can be represented by the spanning tree shown above. Luft then outlines four mechanisms that explain “behavioral boundary crossing.”

In this case, previous expert categories led to an insufficient explanation for the perpetration of genocide, and elaboration proved necessary. Attempting to create classical categories — with rules for inclusion and exclusion and the presumption of mutual exclusivity in which all members are equally representative — is likely a necessary step in the theorizing process. Much of the work of developing theory, however, is not just showing that these categories are insufficient (because, of course, they are), but rather pointing out where this slippage leads to problems in our explanations, and how they can be mended, as Luft does.

The Scholastic Fallacy

Treating data or theory as if they can be cleanly divided into crisp sets is a bit like the saying “all models are wrong, but some models are useful.” Taking these distinctions for granted, however, can also lead analysts to commit the “scholastic fallacy.”

This is when the researcher “project[s] his theoretical thinking into the heads of acting agents…” (Bourdieu 2000, page 51). This, according to Bourdieu, was a key folly of structuralism: “[Lévi-Strauss] built formal systems that, though they account for practices, in no way provide the raison d’etre of practices” (Bourdieu 2000, page 384). This seems especially obvious for categories, as discussed in my previous two posts. It is one thing to say people can be divided into X group and Y group for Z reasons, and it is another to say people do divide other people into X group and Y group for Z reasons (see Martin 2001, or more generally Martin 2011).

Categorizing for the “acting agent” is not a matter of first learning rules and then applying them to demarcate the world into mutually exclusive clusters. It is, for the most part, a matter of simply “knowing it when I see it” —  a skill of identifying and grouping that we have built up through the accrued experience of redundant patterns encountered in mundane practices. Generally, rules, if they are used, are produced in post hoc justifications of our intuitive judgment about group memberships. It is here, however, where expert discourse is likely to play the largest role in lay categorizing: as a means to justify what we already believe to be the case.

This is not to say “non-experts” cannot or do not engage in this kind of theoretical thinking about categories. But, as Bourdieu points out, most people do not have the “leisure (or the desire) to withdraw from [the world]” so as to think about it in this way (Bourdieu 2000, page 51). More importantly, relying on expert categories for most of the tasks in our everyday lives would not be very useful, because categorizing is foremost about reducing the cognitive demands of engaging with an always particular and continuously evolving reality.

References

Barton, Allen H. 1955. “The Concept of Property-Space in Social Research.” Pp. 40–53 in The Language of Social Research.

Becker, Howard S. 2008. Tricks of the Trade: How to Think about Your Research While You’re Doing It. University of Chicago Press.

Bourdieu, P. 2000. Pascalian Meditations. Stanford University Press.

Choike, James R. 1980. “The Pentagram and the Discovery of an Irrational Number.” The Two-Year College Mathematics Journal 11(5):312–16.

Karlsson, Jan Ch and Ann Bergman. 2016. Methods for Social Theory: Analytical Tools for Theorizing and Writing. Routledge.

Klein, Jacob. 1992. Greek Mathematical Thought and the Origin of Algebra. Courier Corporation.

Lazarsfeld, Paul F. 1937. “Some Remarks on the Typological Procedures in Social Research.” Zeitschrift Für Sozialforschung 6(1):119–39.

Luft, Aliza. 2015. “Toward a Dynamic Theory of Action at the Micro Level of Genocide: Killing, Desistance, and Saving in 1994 Rwanda.” Sociological Theory 33(2):148–72.

Martin, John Levi. 2001. “On the Limits of Sociological Theory.” Philosophy of the Social Sciences 31(2):187–223.

Martin, John Levi. 2011. The Explanation of Social Action. Oxford University Press, USA.

Portes, Alejandro and Julia Sensenbrenner. 1993. “Embeddedness and Immigration: Notes on the Social Determinants of Economic Action.” American Journal of Sociology 98(6):1320–50.

Ragin, Charles C. 2000. Fuzzy-Set Social Science. University of Chicago Press.

Categories, Part II: Prototypes, Fuzzy Sets, and Other Non-Classical Theories

A few years ago The Economist published “Lil Jon, Grammaticaliser.” “Lil Jon’s track ‘What You Gonna Do’ got me thinking,” the author tells us, “of all things, the progressive grammaticalisation of the word shit.” In the track, Lil Jon repeats “What they gon’ do? Shit,” and in this lyric shit doesn’t mean “shit”; it means “nothing.”

As the author goes on to explain, things that are either trivial, devalued or demeaning are commonly used to mean “nothing”: I haven’t eaten a bite, I don’t give a rat’s ass, I won’t hurt a fly, he doesn’t know shit. More examples are given in Hoeksema’s “On the Grammaticalization of Negative Polarity Items.” This is difficult to account for in Chomsky’s (Extended or Revised Extended) Standard Theory because the meaning of terms makes them candidates for specific kinds of syntactic functions (Traugott and Heine 1991:8):

What we find in language after language is that for any given grammatical domain, there is only a restrictive set of… sources. For example, case markers, including prepositions and postpositions, typically derive from terms for body parts or verbs of motion; tense and aspect markers typically derive from specific spatial configurations; modals from terms from possession, or desire; middles from reflexives, etc.

Grammaticalization involves the extension of a term until its meaning is “bleached” and becomes more generic and encompassing (Sweetser 1988). For example, the modal word “will,” as in “I will finish that review,” comes from the Old English term willan, meaning to “want” or “wish,” and, of course, it still carries that connotation: “I willed it into being.” This relates to a second difficulty for Chomskyan theory: grammaticalization is a graded process. It is not always easy to decide whether a particular lexical item should be categorized as one or another syntactic unit, and therefore we cannot know precisely which rules apply when.

Logical Weakness of the Classical Theory

It may be that the classical theory doesn’t work well for linguistics, but that might not be a reason to abandon it elsewhere. In fact, there is a certain sensibleness to the approach: categories are about splitting the world up, so why shouldn’t everything fall into mutually exclusive containers? To summarize the various weaknesses as described by Taylor (2003):

  1. Provided we know (innately or otherwise) what features grant membership in a category, we must still verify that a token has all the features granting it membership, rendering categories pointless.
  2. Perhaps we could allow an authority to assure us a token has all the features, but then we are no longer relying on the classical conditions to categorize.
  3. Features might also be kinds of categories, e.g., if cars must have wheels, what defines inclusion in the category “wheels,” which leads to infinite regress (unless, of course, we can find genuine primitives).
  4. Finally, it seems that a lot of features are defined circularly by reference to their category, e.g., cars have doors, but what kind of doors other than the doors cars tend to have?

The rejection of this classical theory is foreshadowed by, among others, Wittgenstein. The younger Wittgenstein was interested in philosophy and mathematics, and after being encouraged by Frege, he more or less forced Bertrand Russell to take him on as a student in 1911. His first major work, the Tractatus Logico-Philosophicus, was published in 1921 and went on to inspire the founding of the Vienna Circle of logical empiricism—which, even though Wittgenstein was living in Vienna at the time, did not include him, as he seemed to hate everyone. (At the same time, it bears noting, Roman Jakobson was a couple hundred miles away founding the Prague Linguistic Circle.)

After several years (themselves worth reading about), the received story goes, Wittgenstein did an about-face on his own argument in the Tractatus in the course of trying to find the “atoms” of formal logic. In his later writings, beginning in the late 1920s and continuing until his death in 1951, we get, among other things, the notion of defining words not by a list of necessary and sufficient conditions but by looking at how words are used. The best-known example: after reviewing a few different ways the word “game” is used, he states, “we can go through many, many other groups of games in the same way, can see how similarities crop up and disappear…I can think of no better expression to characterize these similarities than ‘family resemblances’” (Wittgenstein [1953] 2009, paras. 66–67).

Beyond Family Resemblances

From The Atlas of the Munsell Color System, by Albert H. Munsell

Prototype Theory and Basic Level Categories

One pillar of the classical theory is that, if membership is granted based on having certain attributes, then it follows that no member should be a better or worse example of that category. A second pillar is that category criteria should be independent of who or what is doing the categorizing. Eleanor Rosch’s early work toppled both pillars.

Rosch graduated from Reed College, completing her senior thesis on Wittgenstein (who she says “cured her of philosophy”) — specifically his discussion of pain and “private language.” She went on to complete graduate work in psychology at the famed Harvard Department of Social Relations, under the direction of Roger Brown (who was an expert in the psychology of language). She conducted research in New Guinea on Dani color and form categories, as well as child rearing practices (Rosch Heider 1971), and in late 1971, she joined the psychology department at UC, Berkeley.

In a 1973 publication, “Natural Categories,” Rosch critiqued existing studies of category formation because they relied on categories that subjects had already formed. For example, “American college sophomores have long since learned the concepts ‘red’ and ‘square.’” To meet this challenge, she studied the Dani, who had only two color terms, which divided color on the basis of brightness rather than hue. Rosch hypothesized (Rosch 1973:330):

…there are colors and forms which are more perceptually salient than other stimuli in their domains…salient colors are those areas of the color space previously found to be most exemplary of basic color names in many different languages… and that salient forms are the “good forms” of Gestalt psychology (circle, square, etc.). Such colors and forms more readily attract attention than other stimuli… are more easily remembered than less salient stimuli…

She ultimately found “the salience and memorability of certain areas of the color space…can influence the formation of linguistic categories” (the classical citation for cross-cultural color categorization being Berlin and Kay 1991; see also Gibson et al. 2017). As categories form around salient prototypes, potential members of this category are judged on a graded basis.

In addition to building categories around salient exemplars, Rosch also found, in line with ecological psychology, that such salience relates to the usefulness for, and capacities of, the observer. For example, there tends to be the most cross-cultural agreement as to how any given token is categorized at the “basic level.” That is, although different groups of people may differ in terms of what the prototypical “dog” is — is it a golden retriever or a bulldog? — when people see a dog, any dog, they will probably categorize it at the basic level of “dog,” as opposed to generically as an animal or mammal, or specifically as a golden retriever-bulldog mix. And it is at this basic level that we find the most interpersonal (and cross-cultural) similarity.
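A crude way to see the difference between graded and all-or-nothing membership is to score tokens by their overlap with a prototype. The features below are invented for illustration (these are not Rosch’s stimuli):

```python
def typicality(item, prototype):
    """Graded membership: the share of prototype features the item matches.
    Unlike a classical category, membership is a matter of degree."""
    shared = sum(1 for f, v in prototype.items() if item.get(f) == v)
    return shared / len(prototype)

# Hypothetical feature bundles for illustration only.
dog_prototype = {"legs": 4, "barks": True, "furry": True, "size": "medium"}
retriever = {"legs": 4, "barks": True, "furry": True, "size": "large"}
chihuahua = {"legs": 4, "barks": True, "furry": False, "size": "tiny"}

# Both are dogs, but one is a "better" example than the other.
assert typicality(retriever, dog_prototype) > typicality(chihuahua, dog_prototype)
```

The point is not the particular features but the output: a graded score rather than a yes-or-no verdict.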

Berkeley and the West Coast Cognitive Revolution

In a previous post, I discussed all the interesting things happening in anthropology and artificial intelligence at UC, San Diego and Stanford during the ’70s and ’80s, and we can add UC, Berkeley to this list of strongholds for West Coast Cognitive Revolutionaries.

Lakoff left MIT for Berkeley in 1972, and shortly thereafter he was confronted with kinds of utterances neither generative semantics nor generative grammar could account for, e.g., “John invited you’ll never guess how many people to the party,” in which one clause interrupts another — what Lakoff called a “syntactic amalgam.” Faced with this, Lakoff got an NSF grant to invite people from linguistics, psychology, logic, and artificial intelligence to a summer seminar in 1975, which ballooned to roughly 190 attendees (de Mendoza Ibáñez 1997). Among the lectures were Rosch on basic-level categories and how category prototypes can be represented in motor systems (the seedling of the embodied mind), Charles Fillmore’s discussion of “frame semantics,” which inspired the cognitive anthropologists, and Leonard Talmy (a recent Berkeley PhD) on how physical embodiment creates universal “cognitive topologies” which map onto words like “in” and “out.”

So, Lakoff recalls, “in the face of all this evidence, in the summer of 1975, I realized that both transformational grammar and formal logic were hopelessly inadequate and I stopped doing Generative Semantics” (de Mendoza Ibáñez 1997). It is also in 1975 that he published “Hedges: A Study in Meaning Criteria and the Logic of Fuzzy Concepts,” incorporating ideas from Rosch as well as another Berkeley professor, Lotfi Zadeh. In this paper Lakoff argued: “For me, some of the most interesting questions are raised by the study of words whose meaning implicitly involves fuzziness—words whose job is to make things fuzzier or less fuzzy. I will refer to such words as ‘hedges’.” In addition to referring to Rosch’s then-unpublished paper “On the Internal Structure of Perceptual and Semantic Categories,” Lakoff acknowledges that “Professor Zadeh has been kind enough to discuss this paper with me often and at great length and many of the ideas in it have come from those discussions.”

Zadeh was born in Baku, Azerbaijan, then studied at the University of Tehran before completing his master’s at MIT and his doctorate in electrical engineering at Columbia University in 1949. He eventually landed at UC, Berkeley in 1959, where he slowly began to develop “fuzzy” methods. In 1965 he published the paradigm-shifting piece “Fuzzy Sets,” which he began writing during the summer of ’64 while working at the RAND Corporation, where it first circulated as the report “Abstraction and Pattern Classification.” In essence, Zadeh realized that many objects in the world do not have boundaries clear enough to allow discrete classification, but rather allow for graded membership (he used the example of a “tall man” and a “very tall man”). He then demonstrated that classical “crisp” set theory is simply a special case of “fuzzy” set theory.
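To make the contrast concrete, here is a toy sketch of Zadeh’s idea: a fuzzy set assigns each object a degree of membership between 0 and 1, and a crisp set is the special case where that degree is only ever 0 or 1. The particular thresholds are my own illustrative choices, not Zadeh’s:

```python
def mu_tall(height_cm):
    """Fuzzy membership in 'tall man': ramps from 0 at 170 cm to 1 at 190 cm.
    (Illustrative thresholds; membership is a matter of degree.)"""
    return min(1.0, max(0.0, (height_cm - 170) / 20))

def mu_crisp_tall(height_cm):
    """The crisp special case: membership is only ever 0 or 1."""
    return 1.0 if height_cm >= 180 else 0.0

# A 180 cm man is "sort of" tall in the fuzzy set, but simply tall in the crisp one.
assert mu_tall(180) == 0.5
assert mu_crisp_tall(180) == 1.0
# Zadeh's union and intersection are the max and min of memberships.
```

Because every crisp membership function is also a valid fuzzy one, crisp sets really are just a boundary case of the fuzzy framework.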

Zadeh would quickly expand the notion of fuzzy methods into a plethora of subfields, including information systems and computer science, but also linguistics beginning in the 1970s, an early example being “A Fuzzy-Set-Theoretic Interpretation of Linguistic Hedges.” However, whether fuzzy logic explains the normal process of human categorization (i.e., whether humans are actually following the procedures of fuzzy logic in the task of categorizing) continues to be debated. Rosch (e.g., Rosch 1999), in particular, is skeptical, precisely because the process of categorizing is not about applying decontextualized “rules.” Rather, as Mike argued in his recent post, we can think of categorizing as more like finding than seeking.

References

Berlin, Brent and Paul Kay. 1991. Basic Color Terms: Their Universality and Evolution. University of California Press.

Gibson, Edward, Richard Futrell, Julian Jara-Ettinger, Kyle Mahowald, Leon Bergen, Sivalogeswaran Ratnasingam, Mitchell Gibson, Steven T. Piantadosi, and Bevil R. Conway. 2017. “Color Naming across Languages Reflects Color Use.” Proceedings of the National Academy of Sciences of the United States of America 114(40):10785–90.

de Mendoza Ibáñez, Francisco José Ruiz. 1997. “An Interview with George Lakoff.” Cuadernos de Filología Inglesa 6(2):33–52.

Rosch, E. 1999. “Reclaiming Concepts.” Journal of Consciousness Studies 6(11-12):61–77.

Rosch, Eleanor H. 1973. “Natural Categories.” Cognitive Psychology 4(3):328–50.

Rosch Heider, Eleanor. 1971. “Style and Accuracy of Verbal Communications within and between Social Classes.” Journal of Personality and Social Psychology 18(1):33.

Sweetser, Eve E. 1988. “Grammaticalization and Semantic Bleaching.” Pp. 389–405 in Annual Meeting of the Berkeley Linguistics Society. Vol. 14.

Taylor, John R. 2003. Linguistic Categorization. OUP Oxford.

Traugott, Elizabeth Closs and Bernd Heine. 1991. Approaches to Grammaticalization: Volume II. Types of Grammatical Markers. John Benjamins Publishing.

Wittgenstein, Ludwig. [1953] 2009. Philosophical Investigations. Blackwell.

Categories, Part I: The Fall of the Classical Theory

In a “monster of the week” episode of The X-Files, Mulder and Scully encounter a genie, Jenn. She tells Mulder — who has three wishes — “Everyone I come in contact with asks for the wrong things…” Thinking the trick is to ask for something altruistic, Mulder wishes for “peace on earth.” Jenn grants his wish by vanishing all humans except Mulder. Distraught, Mulder uses his second wish to undo his first. He then decides the problem is that the wish was not specific enough, and we see him writing a lengthy “contract” in a word processor. In the end he wishes for Jenn to be free, but had he asked for this really specific contractual wish, things probably still wouldn’t have gone as he intended. This is because there will probably always be “wiggle room” when Jenn begins to interpret the wish — she could find a loophole. As we know from Durkheim, “the contract is not sufficient by itself…”

If we think of a contract as a set of explicit rules allowing some things and barring others, then a perfect contract is what we would call a classical category. For example, the category “world peace” describes certain states of affairs, which includes some things (like people being calm) and excludes others (like people fighting). This used to be the dominant way philosophers, psychologists, and most other disciplines thought about categories, and it continues to pop up as a kind of “Good Old-Fashioned Category Theory” — or, we might say, GOLFCAT — even in sociology.

What are “Classical” Categories?

John Taylor, in Linguistic Categorization (Chapter 2) and George Lakoff in Women, Fire, and Dangerous Things (Chapter 1), provide great overviews of this theory of categories. In short, this theory is based on a metaphor (Lakoff [1987] 2008:6):

They were assumed to be abstract containers, with things either inside or outside the category. Things were assumed to be in the same category if and only if they had certain properties in common. And the properties they had in common were taken as defining the category.

To put this more formally, Taylor (2003:21) offers the following four conditions:

  • Categories are defined in terms of a conjunction of necessary and sufficient features.
  • Features are binary.
  • Categories have clear boundaries.
  • All members of a category have equal status.
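Taylor’s four conditions are easy to operationalize, which is part of the classical theory’s appeal. A minimal sketch (using the stock philosophical example of “bachelor,” here only for illustration): membership is a conjunction of binary features, the boundary is crisp, and every member has equal status:

```python
def is_member(token, defining_features):
    """Classical categorization: a token is a member if and only if it has
    every defining feature (necessary and sufficient conditions)."""
    return all(token.get(f) == v for f, v in defining_features.items())

# The category is its feature definition; binary features, crisp boundary.
bachelor = {"human": True, "male": True, "adult": True, "married": False}

alice = {"human": True, "male": False, "adult": True, "married": False}
bob = {"human": True, "male": True, "adult": True, "married": False}

# All-or-nothing: Bob is in, Alice is out, and no member is a "better" member.
assert is_member(bob, bachelor)
assert not is_member(alice, bachelor)
```

Note what the function cannot express: graded membership, better or worse examples, or context-dependent criteria — exactly the phenomena discussed below.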

One can easily see this view of categories as built into the early 20th century approach to phonology — which often conforms well to the folk theory of phonology today. Basically, speaking is a linear sequence of discrete sounds. A single language has a finite set of discrete sounds. More formally, these sounds are defined by distinguishing features that correspond to how they are produced in the mouth and throat — e.g., /m/ as in “mom” is found in almost every language, and is a “voiced bilabial nasal” because it is produced with both lips (pressed together) and by blocking the airflow and redirecting it through the nasal cavity, and it is voiced because the vocal cords vibrate. Furthermore, these features are said to be “binary” in that they can either be present or absent (either the vocal cords vibrate or they do not: think of the th in thy compared to thigh). This was the theory championed by the incomparable Roman Jakobson. Take this example (published in the same year Jakobson arrived at Harvard):

Our basic assumption is that every language operates with a strictly limited number of underlying ultimate distinctions which form a set of binary oppositions (Jakobson and Lotz 1949:151)

This theory was more fully elaborated in the book Preliminaries to Speech Analysis: The Distinctive Features and Their Correlates. Later, two MIT linguistics professors, Noam Chomsky (who, while a doctoral student at Penn supervised by Zellig Harris, conducted research at Harvard as a member of the Society of Fellows) and Morris Halle (a doctoral student of Jakobson’s at Harvard), would write in The Sound Pattern of English (1968:297):

In view of the fact that phonological features are classificatory devices, they are binary… for the natural way of indicating whether or not an item belongs to a particular category is by means of binary features.
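The binary-feature idea itself is easy to sketch: a phoneme is a bundle of plus-or-minus features, and two phonemes contrast on whichever features differ. The feature inventory below is drastically abbreviated for illustration:

```python
# Distinctive features as binary values; a phoneme is a feature bundle.
# (Real inventories are larger; this is a three-feature toy.)
M = {"voiced": True, "nasal": True, "labial": True}    # /m/ as in "mom"
B = {"voiced": True, "nasal": False, "labial": True}   # /b/ as in "bob"
P = {"voiced": False, "nasal": False, "labial": True}  # /p/ as in "pop"

def contrast(a, b):
    """The features on which two phonemes differ (their binary oppositions)."""
    return {f for f in a if a[f] != b[f]}

assert contrast(M, B) == {"nasal"}   # /m/ vs /b/ differ only in nasality
assert contrast(B, P) == {"voiced"}  # /b/ vs /p/ differ only in voicing
```

Each phoneme falls cleanly inside or outside every feature category — a textbook case of classical categorization, which is why phonology fit the theory so well.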

Chomsky, of course, did not stop with phonology but continued down this path intending to discover the simple categories of syntax, which could explain all the regularity and variance in human languages. Surveying these developments in linguistics, Taylor offers three common additional conditions:

  • Features are primitive (i.e. irreducible to any other features)
  • Features are universal (i.e. there is an all-encompassing feature inventory)
  • Features are abstract (i.e. features do not directly correspond to any particular case)

Finally, and both famously and controversially, this classical category theory as applied to language is extended by Chomsky et al. much further, forming the basis of the “nativist-generative-transformational” theory (Taylor 2003:26):

  • Features are innate

The Beginning of the Fall (in Linguistics)


Chomsky published Aspects of the Theory of Syntax in 1965, and it quickly became a kind of sacred text for the nascent MIT linguistics department. In it, he lays out the basic task of the “Standard Theory”: discovering a “generative grammar,” which “must be a system of rules that can iterate to generate an indefinitely large number of structures” (Chomsky [1965] 2014:15–16).

One strong assumption built into his program is that there are “grammatically” correct sentences, and that lexical units can be adequately arranged in either-or categories (e.g., noun, verb, etc.). A second assumption is that the highly variant “surface structure” of given utterances can be reduced to constituent categories, or a “deep structure,” plus a set of rules of composition and transformation. Finally, Chomsky felt there were clear and necessary boundaries between phonology, semantics, and syntax — and syntax was the real goal of linguistics (see Chapter 2 of Syntactic Structures in particular).

For all these reasons, he was skeptical that descriptive and statistical studies could reveal the underlying structure and offered a now infamous example:

  1. Colorless green ideas sleep furiously.
  2. *Furiously sleep ideas green colorless.

According to Chomsky, “It is fair to assume that neither sentence…ever occurred in an English discourse… Hence, in any statistical model for grammaticalness, these sentences will be ruled out on identical grounds as equally ‘remote’ from English.” Even though, according to Chomsky, a reasonable person could tell that sentence (1) is syntactically correct while (2) is not. (One paper (Pereira 2000:1245) does test this assertion, however, and finds that sentence (1) is about 200,000 times more probable than sentence (2); Chomsky’s assertion was thus either naive or in bad faith.)
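Pereira’s point can be illustrated with a toy smoothed bigram model. (The corpus and the add-one smoothing here are my own simplification; Pereira used a far more sophisticated class-based model.) Neither sentence occurs in the training data, yet the model assigns them very different probabilities:

```python
from collections import Counter

# Toy training corpus; neither test sentence below ever occurs in it.
corpus = ("revolutionary new ideas sleep quietly green ideas appear "
          "rarely students sleep furiously before exams").split()

unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))
V = len(set(corpus) | {"colorless"})  # vocabulary size, incl. the unseen word

def prob(sentence):
    """Add-one-smoothed bigram probability of a word sequence."""
    words = sentence.split()
    p = 1.0
    for w1, w2 in zip(words, words[1:]):
        p *= (bigrams[(w1, w2)] + 1) / (unigrams[w1] + V)
    return p

p1 = prob("colorless green ideas sleep furiously")
p2 = prob("furiously sleep ideas green colorless")
assert p1 > p2  # the grammatical order is more probable, not equally "remote"
```

Because smoothing reserves probability mass for unseen events, “never occurred” does not imply “probability zero,” and the two unseen sentences are not ruled out “on identical grounds.”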

Harsh Words for the Master

George Lakoff was an undergrad at MIT, majoring in mathematics and poetry when Noam Chomsky founded the Department of Linguistics in 1961. As part of the founding, Chomsky invited Jakobson from Harvard to teach a class. As Lakoff describes it:

So my advisor in the English Department said: “Roman Jakobson is coming to teach poetics, you’re interested in poetry, you should take this course, but if you’re going to do it, you should know all your linguistics, so also take Morris Halle’s Introduction to Linguistics.”

In the 1960s, Chomskyan generative linguistics had become hegemonic, superseding the Bloomfieldian paradigm, and after his first years studying English at Indiana University, Lakoff intended to contribute to this new project. He returned to Cambridge in the summer of 1963 to marry Robin (Tolmach) Lakoff — a linguistics PhD student at Harvard at the time who, among other things, would go on to found the study of gender and language with Language and Woman’s Place.

While there, Lakoff found a job on an early machine translation project at MIT, where he met several others who would oppose Chomsky in the “linguistics wars.” When he returned to Indiana, he decided to turn to linguistics, and studied under Fred Householder, who famously published an early critique of Chomsky and Halle’s theory of phonology in 1965. In his final year, Lakoff returned to Cambridge, where Paul Postal directed his dissertation, and he also worked closely with Haj Ross and James McCawley.

Lakoff, Ross, McCawley, and Postal each explored cases that didn’t seem to fit Chomsky’s Standard Theory and attempted to offer “patches” that would adequately account for these anomalies. In fact, Lakoff’s dissertation was “On the nature of syntactic irregularity.” This resulted in the Extended Standard Theory.

In their exploration of exceptions, however, they soon landed on the kernel of an idea that would force a break with the Standard Theory entirely and form the basis of what they called generative semantics: “syntax should not be determining semantics, semantics should be determining syntax” (Harris 1995:104). In other words, “the deeper syntax got the closer it came to meaning” (Harris 1995:128). The result was something of a tempestuous counter-revolution, as Lakoff put it in a New York Times article, “Former Chomsky Disciples Hurl Harsh Words at the Master”:

Since Chomsky’s syntax does not and cannot admit context, he can’t even account for the word ‘please’…Nor can he handle hesitations like ‘oh’ and ‘eh.’ But it’s virtually impossible to talk to Chomsky about these things. He’s a genius, and he fights dirty when he argues.

As John Searle observed, “…the author of the revolution now occupied a minority position in the movement he created. Most of the active people in generative grammar regard Chomsky’s position as having been rendered obsolete” (Searle 1972:20). (Interestingly, it appears that the groundswell of interest in the alternative approach at MIT coincided with Chomsky leaving on sabbatical to Berkeley.)

In the end, as the boundary between semantics and syntax began to blur, these counter-revolutionaries would soon need to grapple with theories of meaning found outside of linguistics. This would ultimately, but not immediately, lead them to engage with non-classical theories of categorization. In my next post, I will discuss the logical weaknesses of the classical theory and the alternative approach.

References

Chomsky, Noam. 2014. Aspects of the Theory of Syntax. MIT Press.

Chomsky, Noam and Morris Halle. 1968. The Sound Pattern of English. Harper & Row.

Harris, Randy Allen. 1995. The Linguistics Wars. Oxford University Press.

Jakobson, R. and J. Lotz. 1949. “Notes on the French Phonemic Pattern.” Word 5(2):151–58.

Lakoff, George. [1987] 2008. Women, Fire, and Dangerous Things. University of Chicago Press.

Pereira, F. 2000. “Formal Grammar and Information Theory: Together Again?” Transactions of the Royal Society of London ….

Searle, John R. 1972. “A Special Supplement: Chomsky’s Revolution in Linguistics.” The New York Review of Books. Retrieved April 16, 2019 (https://www.nybooks.com/articles/1972/06/29/a-special-supplement-chomskys-revolution-in-lingui/).

Taylor, John R. 2003. Linguistic Categorization. OUP Oxford.

When is Consciousness Learned?


Continuing with the theme of innateness and durability from my last post, consider the question: are humans born with consciousness? In a ground-breaking (and highly contested) work, the psychologist Julian Jaynes argued that if only humans have consciousness, it must have emerged at some point in human history. In other words, consciousness is a socially and culturally acquired skill (Williams 2011).

To summarize his argument: he purports that until as recently as the Bronze Age (the third millennium BCE), humans were not, strictly speaking, conscious. Rather, humans experienced life in a proto-conscious state he refers to as “bicameralism.” Roughly around the “Axial Age” (cf. Mullins et al. 2018), bicameral humans declined and conscious, “unicameral” humans emerged.

One piece of evidence he deploys in support of his thesis is that the content of the Homeric Iliad differs substantially from that of the later Odyssey. The former, he argues, is devoid of references to introspection, while the latter contains them. Jaynes argues a similar pattern emerges between earlier and later books of the Christian Bible. In a recent attempt (see also Raskovsky et al. 2010) to test this specific hypothesis quantitatively, Diuk et al. (2012) use Latent Semantic Analysis to calculate the semantic distances between the reference word “introspection” and all other words in a text. Remarkably, their findings are consistent with Jaynes’ argument (see also: http://www.julianjaynes.org/evidence_summary.php).

From Diuk et al. (2012): “Introspection in the cultural record of the Judeo-Christian tradition. The New Testament as a single document shows a significant increase over the Old Testament, while the writings of St. Augustine of Hippo are even more introspective. Inset: regardless of the actual dating, both the Old and New Testaments show a marked structure along the canonical organization of the books, and a significant positive increase in introspection.”
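To make the method concrete, here is a minimal sketch of the semantic-distance idea. The word vectors below are invented stand-ins; Diuk et al. derived theirs from a large corpus via Latent Semantic Analysis (a singular value decomposition of a word-document matrix). The scoring step, however, works the same way: average the cosine similarity between each word in a text and the reference word “introspection”:

```python
import math

# Invented 3-dimensional "semantic" vectors standing in for LSA embeddings;
# Diuk et al. obtained theirs from a large corpus via singular value decomposition.
vectors = {
    "introspection": [0.9, 0.1, 0.2],
    "think":         [0.8, 0.2, 0.1],
    "feel":          [0.7, 0.3, 0.2],
    "sword":         [0.1, 0.9, 0.4],
    "ship":          [0.2, 0.8, 0.5],
}

def cosine(u, v):
    """Cosine similarity between two vectors (1.0 = identical direction)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def introspection_score(text):
    """Mean similarity between each known word in a text and 'introspection'."""
    words = [w for w in text.lower().split() if w in vectors]
    ref = vectors["introspection"]
    return sum(cosine(vectors[w], ref) for w in words) / len(words)

# A passage of mental-state words scores higher than one of concrete nouns.
print(introspection_score("think feel think") > introspection_score("ship sword ship"))  # True
```

Applied book by book across the Old and New Testaments, a score like this is what produces the trend plotted in the figure above.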

Is Consciousness Learned in Childhood?

If consciousness, as Jaynes argued, is a product of social and cultural development, does this also mean that we each must “learn” to be conscious? Some contemporary research suggests something like this might be the case.

To begin we need a simple definition: consciousness is our “awareness of our awareness” (sometimes called metacognition). A problem with considering the extent of our conscious awareness is the normative baggage associated with “not being conscious.” For the folk, it is somewhat insulting to say people are “mindlessly” doing something, and we tend to value “self-reflection.” Certainly this is a generalization, but let’s bracket the notion that non-conscious experience is somehow less good than conscious experience. The bulk of what the brain does is below the level of our awareness. For starters, when we are asleep, under general anesthesia, or even in a coma, the brain continues to be quite active. Moving to our waking lives, the kinds of skills and habits that Giddens (1979) confusingly calls “practical consciousness” are deployed at a speed that outstrips our ability to be aware they are happening until after the fact. The kind of skillful execution associated with athletes and artists, for instance, is often associated with Csikszentmihalyi’s “flow” precisely because there is a “letting go,” a letting the situation take over. All this is to say we are conscious far less than we probably think. Indeed, asking us when we are not conscious (Jaynes 1976:23):

…is like asking a flashlight in a dark room to search around for something that does not have any light shining upon it. The flashlight, since there is light in whatever direction it turns, would have to conclude that there is light everywhere. And so consciousness can seem to pervade all mentality when actually it does not.

A second major confusion is the assumption that consciousness is how humans learn ideas or form concepts. As we discuss elsewhere (Lizardo et al. 2016), memory systems are multiple, and while we learn via conscious processes, the bulk of what we learn is via non-conscious processes in “nondeclarative” memory systems (Lizardo 2017). This is especially the case for the most basic concepts we learn from infancy onward. In fact, Durkheim’s argument that it is through ritual—embodied experience—that so-called “primitive” groups learned the “basic categories of the understanding” more or less pre-figures this point (Rawls 2001).

Rather than the experience-near cognition associated with everyday life, consciousness involves introspection and the “time traveling” associated both with reconstructing our own biographies from memory and with imagining possible (and impossible) futures. A recent school of thought in cognitive science—referred to as “enactivism”—takes a rather radical approach in arguing that the vast majority of human cognition is not, strictly speaking, contentful (Hutto and Myin 2012, 2017). Indeed, a lot of “remembering” does “not require representing any specific past happening or happenings… remembering is a matter of reenactment that does not involve representation” (Hutto and Myin 2017:205). But what about the autobiographical remembering involved in introspection and self-reflection, which we might consider the hallmark of consciousness?

To answer this — within the broader enactivist project — they draw on a group of scholars who argue that autobiographical memory is “a product of innumerable social experiences in cultural space that provide for the developmental differentiation of the sense of a unique self from that of undifferentiated personal experience” (Nelson and Fivush 2004:507). These scholars find that “a specific kind of memory emerges at the end of pre-school period” (Nelson 2009:185). Such a theory offers a plausible explanation for “infantile amnesia” — the inability to recall events prior to about three or four — an explanation much less ridiculous than Freud’s contention that these memories were repressed so as to “screen from each one the beginnings of one’s own sex life.”

These theorists go on to argue that “a new form of social skill” is associated with this “new type of memory” (Hoerl 2007:630). This skill is “narrating” one’s experience. Parents’ reminiscing with children plays a central role in the acquisition of this skill (Nelson and Fivush 2004:500):

…parental narratives make an important contribution to the young child’s concept of the personal past. Talking about experienced events with parents who incorporate the child’s fragments into narratives of the past not only provides a way of organizing memory for future recall but also provides the scaffold for understanding the order and specific locations of personal time, the essential basis for autobiographical memory.

Returning to Jaynes, we find a remarkably analogous description of the emergence of consciousness as the “development on the basis of linguistic metaphors of an operation of space in which an ‘I’ could narratize out alternative actions to their consequences” (Jaynes 1976:236). That is, we could assert that consciousness is this social skill, emerging from the (embodied and social) practice of reminiscing with parents and classmates (or the like) when we are around three years old.

REFERENCES

Diuk, Carlos G., D. Fernandez Slezak, I. Raskovsky, M. Sigman, and G. A. Cecchi. 2012. “A Quantitative Philology of Introspection.” Frontiers in Integrative Neuroscience 6:80.

Giddens, Anthony. 1979. Central Problems in Social Theory. Berkeley: University of California Press.

Hoerl, C. 2007. “Episodic Memory, Autobiographical Memory, Narrative: On Three Key Notions in Current Approaches to Memory Development.” Philosophical Psychology.

Hutto, Daniel D. and Erik Myin. 2012. Radicalizing Enactivism: Basic Minds without Content. MIT Press.

Hutto, Daniel D. and Erik Myin. 2017. Evolving Enactivism: Basic Minds Meet Content. MIT Press.

Jaynes, Julian. 1976. The Origin of Consciousness in the Breakdown of the Bicameral Mind. Boston: Houghton Mifflin.

Lizardo, Omar. 2017. “Improving Cultural Analysis: Considering Personal Culture in Its Declarative and Nondeclarative Modes.” American Sociological Review 82(1):88–115.

Lizardo, Omar, Robert Mowry, Brandon Sepulvado, Dustin S. Stoltz, Marshall A. Taylor, Justin Van Ness, and Michael Wood. 2016. “What Are Dual Process Models? Implications for Cultural Analysis in Sociology.” Sociological Theory 34(4):287–310.

Mullins, Daniel Austin, Daniel Hoyer, Christina Collins, Thomas Currie, Kevin Feeney, Pieter François, Patrick E. Savage, Harvey Whitehouse, and Peter Turchin. 2018. “A Systematic Assessment of ‘Axial Age’ Proposals Using Global Comparative Historical Evidence.” American Sociological Review 83(3):596–626.

Nelson, Katherine. 2009. Young Minds in Social Worlds: Experience, Meaning, and Memory. Harvard University Press.

Nelson, Katherine and Robyn Fivush. 2004. “The Emergence of Autobiographical Memory: A Social Cultural Developmental Theory.” Psychological Review 111(2):486–511.

Raskovsky, I., D. Fernández Slezak, C. G. Diuk, and G. A. Cecchi. 2010. “The Emergence of the Modern Concept of Introspection: A Quantitative Linguistic Analysis.” Pp. 68–75 in Proceedings of the NAACL HLT 2010 Young Investigators Workshop on Computational Approaches to Languages of the Americas, YIWCALA ’10. Stroudsburg, PA, USA: Association for Computational Linguistics.

Rawls, Anne Warfield. 2001. “Durkheim’s Treatment of Practice: Concrete Practice vs Representations as the Foundation of Reason.” Journal of Classical Sociology 1(1):33–68.

Williams, Gary. 2011. “What Is It like to Be Nonconscious? A Defense of Julian Jaynes.” Phenomenology and the Cognitive Sciences 10(2):217–39.

Limits of innateness: Are we born to see faces?

Sociologists tend to be skeptical of claims that individuals are consistent across situations, as a recent exchange on Twitter exemplifies. The exchange was partially spurred by revelations that the famous Stanford Prison Experiment (which supposedly showed that people will quickly engage in behaviors commensurate with their assigned roles, even if it means being cruel to others) was even more problematic than previously thought.


The question of individual “durability” is sometimes framed as “nature vs nurture,” and this is certainly a part of the matter. In sociology, however, this skepticism of “durability” often goes much further than innateness, and sometimes leads sociologists to suggest individuals are inchoate blobs until situations come along to construct them (or interlocutors may resort to obfuscation by touting the truism that humans are always in a situation). If pushed on the topic, however, even the staunchest situationalist would likely concede that humans are born with some qualities, and the real question is: what are the limits of such innateness? What kinds of qualities can be innate? To what extent are these innate qualities human universals? And if we are “born with it,” can “it” change, and how, and to what extent? In Stephen Turner’s new Cognitive Science and the Social, he puts the matter succinctly:

“…children quickly acquire the ability to speak grammatically. This seems to imply that they already had this ability in some form, such as a universal set of rules of language stored in the brain. If one begins with this problem, one wants a model of the brain as “language ready.” But why stop there? Why think that only grammatical rules are innate? One can expand this notion to the idea of the “culture-ready” brain, one that is poised and equipped to acquire a culture” (2018:44–45).

As I’ve previously discussed, the search for either the universal rules or a specialized module for language has, thus far, failed. Nevertheless, most humans must be “language-ready” in the minimal sense of having the ability to acquire the ability to speak and understand speech. But answering the question of where innateness ends and enculturation begins is not easy, even for those without the disciplinary inclination toward strongly situationalist arguments.

Are we born to see faces?

How we identify faces is a good place to explore this difficulty: Do we learn to identify faces, or are we born to see them? And if we are born to see faces, is this ability refined through use, and to what extent? Enter the fusiform face area (FFA). Just like language, the FFA is often used as evidence for the more general arguments of functional localization and domain specificity. The argument goes: facial recognition is produced not by generic cognitive processes involved in vision (or other generic processes), but rather by an inborn special-purpose module.

One reason faces are an even better candidate than language for grappling with the question of innateness is that the human fetus is exposed to language while in the womb. Human fetuses gain some sense of prosody and tonality, and as a result, a basic sense of grammar in the course of development in utero. There is no comparable exposure to faces, however. Another reason is, as the Gestalt psychologists argued, that faces have an irreducible structure such that they are perceived as complete wholes even when viewing only a part — “the whole is something else than the sum of its parts, because summing is a meaningless procedure, whereas the whole-part relationship is meaningful” (Koffka 1935:176).

Facial recognition encompasses two related functions: distinguishing faces from non-face objects and distinguishing among faces. The key debate within this area of cognitive neuroscience is whether there is a module specialized for one or both of these processes (Kanwisher, McDermott, and Chun 1997; Kanwisher and Yovel 2006), as opposed to a distributed and generic cognitive process (Haxby et al. 2001). This debate goes back to the observation that humans struggle to recognize and remember faces that are upside down, which seemed to be the case for faces more so than for any non-face object (Diamond and Carey 1986) — suggesting something about faces made them unique. The proposal that facial recognition is the product of a specialized module, however, begins with a relatively recent paper by Kanwisher et al. (1997). Using functional magnetic resonance imaging (which I’ve discussed in detail in previous posts), 15 subjects were shown various common objects as well as faces. In 12 of those subjects, a specific area of the brain was more active when they saw faces than when they saw non-face objects. On its face, this seems like reasonable evidence that humans are born with a module necessary for identifying faces.

However, when one squares this claim with the underlying logic of fMRI (it measures (a) relative activation, not an on/off process, and (b) its voxel and temporal resolution are far too coarse to conclude a region is homogeneously activated), the claim that the FFA is a functionally specialized module for facial recognition weakens considerably. These areas are not entirely inactive when viewing non-face objects. Indeed, relative to baseline activation, subsequent research found the FFA is significantly more active when viewing various objects (Grill-Spector, Sayres, and Ress 2006). Specifically, the level of specificity of the stimulus (e.g., faces tend to be individuals whereas chairs tend to be generic) and the participant’s level of expertise with the stimulus (e.g., car and bird enthusiasts) predicted greater relative activation (Gauthier et al. 2000; Rhodes et al. 2004).
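The “relative activation” point can be made concrete with a toy calculation (the numbers below are invented for illustration): a region counts as “face-selective” because its response to faces exceeds its response to other objects, not because it is silent for everything else.

```python
# Toy illustration: BOLD signal is summarized relative to a baseline, so
# "face-selectivity" is a contrast between conditions, not an on/off switch.
# All numbers are invented for illustration.
baseline = 100.0  # signal during rest blocks (arbitrary units)
signal = {"faces": 112.0, "chairs": 104.0, "cars": 105.0}

def percent_signal_change(condition):
    """Common fMRI summary: signal change relative to baseline, in percent."""
    return 100.0 * (signal[condition] - baseline) / baseline

for condition in signal:
    print(condition, percent_signal_change(condition))
```

The region responds most to faces, yet it sits above baseline for chairs and cars too, which is why Grill-Spector et al. (2006) could find significant non-face responses inside the FFA.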

Finally, even if we are born to distinguish faces from non-faces, the ability to distinguish among faces is considerably trained by early socialization, and such socialization introduces a lot of variation among people. For example, one of the earliest attempts to measure facial recognition concluded “that women are perhaps superior to men in the test; that salespeople are superior to students and farm people; that fraternity people are perhaps superior to non-fraternity people…” (Howells 1938:127).

Subsequent research in this vein found individuals are better at distinguishing among their racial/ethnic ingroups than their outgroups. In an early study of black and white students from a predominantly black university and a predominantly white university, researchers found participants more easily discriminated among faces of their own race. They also found “white faces were found more discriminable” overall, which they suggest may be the result of “the distribution of social experience is such that both black persons and white persons will have had more exposure to white faces than black faces in public media…” (Malpass and Kravitz 1969:332). Summarizing more recent work, Kubota et al. (2012) state that “participants process outgroup members primarily at the category level (race group) at the expense of encoding individuating information because of differences in category expertise or motivated ingroup attention.”

Why should sociologists care?

To summarize, the claim that facial recognition emerges from an innate, functionally specialized cognitive module is weakened in four ways: the FFA responds to generic features faces share with other objects; the FFA is implicated in a distributed neural network rather than operating solely as a discrete module; the FFA is used for non-facial recognition functions; and, finally, facial recognition is trained by our (social) experience. Why should sociologists care? I think there are three reasons. First, innateness is not deterministic or specific but rather constraining and generic. Second, these constraints ripple throughout our social experience, forming the contours of cultural tropes, but are not immutable. Third, limited innateness does not mean individuals are not durable across situations, even (near) universally so.

A dispositional and distributed theory of cognition and action accounts for object recognition by its use: “information about salient properties of an object—such as what it looks like, how it moves, and how it is used—is stored in sensory and motor systems active when that information was acquired” (Martin 2007:25). This is commensurate with the broad approach many of the posts on this blog have been working with. Perhaps, however, there is a special class of objects for which this is not exactly the case. In other words, the admittedly weak innateness of distinguishing unfamiliar faces from non-face objects is, perhaps, evidence that we are “born with” some forms of nondeclarative knowledge (Lizardo 2017).

Such nondeclarative knowledge, however, may be re-purposed for cultural ends. Following the logic of neural exaptation, discussed in a previous post, humans can be born with predispositions, especially related to very generic cognitive processes, which are further trained, refined, and recycled for novel uses that are nevertheless constrained in a way that yields testable predictions. A fascinating example related to facial perception is anthropomorphization. If rudimentary facial recognition is innate (and therefore, probably evolutionarily old), this inherently social-cognitive process is being reused for non-social purposes (i.e., non-social in the restricted sense of interpersonal interaction). This facial recognition network—together with other neuronal networks—is used to identify people and predict their behavior, and this may be adapted to non-human animate and inanimate objects, like natural forces, as well as anonymous social structures, like financial markets.

What this means, following the logic of neural reuse and conceptual metaphor theory, is that the target domain (e.g., derivative markets, earthquakes) is “contaminated” by predispositions that originally dealt with the source domain (here, interpersonal interaction). Attempting to imagine the intentions of thousands of unknown traders as if inferring the intentions of an interlocutor, for instance, may lead traders to “ride” financial bubbles (De Martino et al. 2013). Therefore, what is and is not innate is a messy question to answer, even for those without a disciplinary distrust of innateness claims. Although cognitive neuroscientists are making headway, it remains an empirical question which objects are recognized innately and to what extent that recognition is robust to enculturation and neural recycling.

More importantly, the question of individual durability across situations should not be reduced solely to “nature vs nurture.” That is, we must grapple with the question of once these processes are so trained in an individual (during “primary socialization”), how easily can they be re-trained, if at all? In John Levi Martin’s Thinking Through Theory (2014:249), the third of his “Newest Rules of Sociological Method” is pessimistic in this regard: “Most of what people think of as cultural change is actually changes in the compositions of populations.” That is, even if we were to bar the possibility of innateness in any strong sense, once individuals reach a certain age they are likely to be fairly consistent across situations, with little chance of altering in fundamental ways.

REFERENCES

De Martino, Benedetto, John P. O’Doherty, Debajyoti Ray, Peter Bossaerts, and Colin Camerer. 2013. “In the Mind of the Market: Theory of Mind Biases Value Computation during Financial Bubbles.” Neuron 79(6):1222–31.

Diamond, Rhea and Susan Carey. 1986. “Why Faces Are and Are Not Special: An Effect of Expertise.” Journal of Experimental Psychology. General 115(2):107.

Gauthier, I., P. Skudlarski, J. C. Gore, and A. W. Anderson. 2000. “Expertise for Cars and Birds Recruits Brain Areas Involved in Face Recognition.” Nature Neuroscience 3(2):191–97.

Grill-Spector, Kalanit, Rory Sayres, and David Ress. 2006. “High-Resolution Imaging Reveals Highly Selective Nonface Clusters in the Fusiform Face Area.” Nature Neuroscience 9(9):1177–85.

Haxby, J. V., M. I. Gobbini, M. L. Furey, A. Ishai, J. L. Schouten, and P. Pietrini. 2001. “Distributed and Overlapping Representations of Faces and Objects in Ventral Temporal Cortex.” Science 293(5539):2425–30.

Howells, Thomas H. 1938. “A Study of Ability to Recognize Faces.” Journal of Abnormal and Social Psychology 33(1):124.

Kanwisher, Nancy and Galit Yovel. 2006. “The Fusiform Face Area: A Cortical Region Specialized for the Perception of Faces.” Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences 361(1476):2109–28.

Kanwisher, N., J. McDermott, and M. M. Chun. 1997. “The Fusiform Face Area: A Module in Human Extrastriate Cortex Specialized for Face Perception.” The Journal of Neuroscience: The Official Journal of the Society for Neuroscience 17(11):4302–11.

Koffka, Kurt. 1935. Principles of Gestalt Psychology. New York: Harcourt, Brace.

Kubota, Jennifer T., Mahzarin R. Banaji, and Elizabeth A. Phelps. 2012. “The Neuroscience of Race.” Nature Neuroscience 15(7):940–48.

Lizardo, Omar. 2017. “Improving Cultural Analysis: Considering Personal Culture in Its Declarative and Nondeclarative Modes.” American Sociological Review 82(1):88–115.

Malpass, R. S. and J. Kravitz. 1969. “Recognition for Faces of Own and Other Race.” Journal of Personality and Social Psychology 13(4):330–34.

Martin, Alex. 2007. “The Representation of Object Concepts in the Brain.” Annual Review of Psychology 58(1):25–45.

Martin, John Levi. 2014. Thinking Through Theory. W. W. Norton, Incorporated.

Rhodes, Gillian, Graham Byatt, Patricia T. Michie, and Aina Puce. 2004. “Is the Fusiform Face Area Specialized for Faces, Individuation, or Expert Individuation?” Journal of Cognitive Neuroscience 16(2):189–203.

Turner, Stephen P. 2018. Cognitive Science and the Social: A Primer. Routledge.

Where Did Sewell Get “Schema”?

Although there are precedents for using the term “schema” in an analytical manner in sociology (e.g., Goffman’s Frame Analysis and Cicourel’s Cognitive Sociology), it is undoubtedly William Sewell Jr.’s “A Theory of Structure: Duality, Agency, and Transformation,” published in the American Journal of Sociology in 1992, that really launched the career of the term in sociology.

In our forthcoming paper, Schemas and Frames (Wood et al. 2018), we briefly sketch the history of the schema concept in the cognitive sciences—from psychology and artificial intelligence to anthropology and cognitive neuroscience. We note how certain ambiguities in Sewell’s formulation render it unclear whether it is compatible with the concept as used in the cognitive sciences. Part of the reason, I would suggest, is that Sewell did not get this concept from the cognitive sciences, not even from cognitive anthropology.

First, we must discuss (briefly) Giddens’ intervention. To summarize (following Piaget 2015:6–16) the defining features of the various varieties of structuralism—in mathematics, psychology, anthropology, linguistics—include: (1) patterned-wholes are not mere aggregates, (2) patterned-wholes presuppose some principles of composition or transformation which structure them, and (3) the dynamics of wholes, as the product of these underlying principles, result in self-maintenance such that the process which constitutes the patterned-whole is not immediately terminated.

Giddens’ innovation, first articulated in Central Problems in Social Theory (1979), and later in Constitution of Society (1984), involved separating aspects (1) and (2) above. He referred to the patterned-whole as a social system and to the underlying principles of composition and transformation as structure. In essence, he asks for a Gestalt shift in how sociologists approach the regularities of social life. This, in turn, places structure as operating “behind the scenes,” or in Giddens’ words, “structure as a ‘virtual order’ of differences” (Giddens 1979:64).

In response to this move, Sewell uses the term schema for the first time in this passage:

Structures, therefore, have only what [Giddens] elsewhere terms a ‘virtual’ existence (e.g., 1984, p. 17). Structures do not exist concretely in time and space except as ‘memory traces, the organic basis of knowledgeability’ (i.e., only as ideas or schemas lodged in human brains) and as they are ‘instantiated in action’ (i.e., put into practice). (Sewell 1992:6)

Giddens also, confusingly, defines “structure” as consisting of “rules and resources” (1979:63–64). The latter, Sewell points out, are not virtual. He goes on to demonstrate that Giddens’ term “rules” isn’t virtual either, as it implies public prescriptions. Sewell focuses his intervention here (1992:7):

Giddens develops no vocabulary for specifying the content of what people know. I would argue that such a vocabulary is, in fact, readily available, but is best developed in a field Giddens has to date almost entirely ignored: cultural anthropology. After all, the usual social scientific term for ‘what people know’ is ‘culture,’ and those who have most fruitfully theorized and studied culture are the anthropologists… What I mean to get at is not formally stated prescriptions but the informal and not always conscious schemas, metaphors, or assumptions presupposed by such formal statements. I would in fact argue that publicly fixed codifications of rules are actual rather than virtual and should be regarded as resources rather than as rules in Giddens’s sense. Because of this ambiguity about the meaning of the word ‘rules,’ I believe it is useful to introduce a change in terminology. Henceforth I shall use the term ‘schemas’ rather than ‘rules’.

Beyond noting that he is inspired by the work of anthropologists, Sewell offers few clues as to what motivates his use of schema.

Is Sherry Ortner and Michigan’s CSST the source?

Despite referring to “schema” over a hundred times in the essay, he cites almost no scholars. In a footnote, he states “It is not possible here to list a representative example of anthropological works that elaborate various ‘rules of social life.’” In the same footnote, after citing Geertz’s The Interpretation of Cultures as the most influential discussion of culture, he states “For a superb review of recent developments in cultural anthropology, see Ortner (1984).” As this footnote suggests, it may have been Sherry Ortner who motivated his conceptualization.

In the essay, Sewell cites Ortner’s 1984 piece “Theory in Anthropology since the Sixties,” and includes Ortner among several scholars he thanks for feedback on his AJS piece. In the cited article, however, Ortner’s only mention of “schema” is in a quotation from Bourdieu (1978:15). In this essay, she outlines how the main cleavage within symbolic anthropology in the 1960s was between the Turnerians and the Geertzians. Geertz’s “most radical move,” according to Ortner, was arguing “culture is not something locked inside people’s heads, rather is embodied in public symbols” (1984:129). Ortner identified as “Geertzian,” as Geertz was her advisor at the University of Chicago, where he taught from 1960 to 1970 before leaving for the Institute for Advanced Study at Princeton (David Schneider, another Parsonsian symbolic anthropologist, was also her teacher at Chicago).

Sewell received his Ph.D. in history from Berkeley in 1971, and was an instructor at the University of Chicago from 1968 to 1971 before becoming an assistant professor there from 1971 until 1975 — overlapping with Ortner's graduate studies. He then had a five-year stint at the Institute for Advanced Study while Geertz was in residence. From 1985 to 1990, Sewell was a faculty member in history and sociology at the University of Michigan, overlapping again with Ortner, who was on the anthropology faculty there from 1977 to 1995. These overlaps between the two (and between Sewell and Ortner's mentor) are, however, only circumstantial evidence of their interactions.

In 1991, the American Sociological Association's relatively new Sociology of Culture Section gave an honorable mention for best article to Nicola Beisel for “Class, Culture, and Campaigns Against Vice in Three American Cities.” Her advisor at Michigan was Sewell, and in the Culture Section newsletter's interview with her, she states (1991:4-5):

Certainly, the biggest influence on my work was the University of Michigan’s Center for the Study of Social Transformations (CSST), a group of sociologists, social historians, and anthropologists that was started by Bill Sewell, Terry McDonald, Sherri Ortner, and Jeff Paige. The year I spent as a CSST fellow was one long and extremely fruitful discussion of culture, structure, agency, and social change….I do think that we have to demonstrate to our colleagues who think they do work on ‘hard structures’ that culture plays a vital part in the constitution and reproduction of those structures. In thinking about these issues I have been greatly influenced by Bill Sewell’s and Anthony Giddens’ theorizing the duality of structures, particularly the discussions in Sewell’s forthcoming AJS article.

In a recent interview about her 1995 essay, “Resistance and the Problem of Ethnographic Refusal” published in Comparative Studies in Society and History (CSSH), Ortner also refers to the founding of CSST:

In 1995 I was still at the University of Michigan and was involved in the formation of an incredibly exciting interdisciplinary discussion group, Comparative Studies in Social Transformation or CSST (not to be confused with the journal CSSH!). CSST was populated by anthropologists, historians, and a few folks from other fields, with many shared theoretical interests (Marxism, culture theory, practice theory, feminism, Foucault, etc.) and with overlapping cultural and historical interests in–broadly speaking–issues of power, domination, and resistance. If you look at the acknowledgments of “Resistance and the Problem of Ethnographic Refusal” (and I am a big believer in looking at acknowledgments), you will see the names of many of the key participants in that group, and it is an amazing roll call of some of the leading anthropologists, historians, and other social and cultural thinkers of that generation.

Sewell was among those acknowledged (alongside Fred Cooper, Fernando Coronil, Nick Dirks, Val Daniel, Geoff Eley, Ray Grew, Roger Rouse, Julie Skurski, Ann Stoler, and Terry McDonald). Curiously, Sewell acknowledges none of these CSST members in his 1992 article — only Ortner. This strongly suggests there was, at the least, cross-pollination between Ortner and Sewell.

Where Did Ortner Get “Schema”?

Ortner’s sketch of the Gyepshi altar in Sherpas Through Their Rituals

We may speculate, therefore, that Sewell picked up the schema concept from Ortner through informal talks, discussions at CSST, or something of Ortner's that he read but did not cite in the AJS article. After all, in the single essay of Ortner's that Sewell does cite, she barely refers to “schemas” beyond quoting Bourdieu.

In Ortner's first book, Sherpas Through Their Rituals (1978, based on her dissertation), she references schemas only once, in quoting Ricoeur: “the stain [defilement] is the first schema of evil” (the bracketed gloss is Ortner's). A collection of reactions to Ortner's “Theory in Anthropology since the Sixties” (by Maurice Bloch, Jane Collier, Sylvia Yanagisako, Thomas Gibson, Sharon Stephens, and Pierre Bourdieu), based on a 1987 American Ethnological Society invited session held at the American Anthropological Association meetings in Chicago, was published as a working paper by the CSST. In her response to that collection, Ortner offers the following (1989:102-103, emphasis added):

And finally, my own recent work on Sherpa social and religious history utilizes a notion of cultural schemas, recurring stories that depict structures as posing problems, to which actors must and do find solutions. Here again structure (or culture) exists in and through its varying relations with various kinds of actors. Further, structure comes here as part of a package of emotional and moral configurations, and not just abstract ordering principles.

The work she is referring to here is her 1989 book, High Religion: A Cultural and Political History of Sherpa Buddhism. It is there that “schema”—specifically “cultural schema”—is used numerous times (54 in total). In the opening chapter, Ortner describes two “notions” of structure that will be used in the analysis (1989:14, emphasis added):

The first is a concept of structural contradictions—conflicting discourses and conflicting patterns of practice—that recurrently pose problems to actors. The second is a concept of cultural ‘schemas,’ plot structures that recur throughout many cultural stories and rituals, that depict actors responding to the contradictions of their culture and dealing with them in appropriate, even “heroic,” ways.

In chapter four, Ortner argues “Sherpa society is founded on a contradiction between an egalitarian and hierarchical ethic.” She furthermore argues that recognition of this contradiction is “culturally formalized, in the sense that important cultural stories both depict such competitive relations and show the ways in which they may be resolved….the stories collectively embody what I will call a cultural schema” (1989:59, emphasis added; see also her 1990 chapter “Patterns of History: Cultural Schemas in the Founding of Sherpa Religious Institutions”).


Ortner then offers a short survey of the “pedigree” of this concept in anthropology, beginning with what she called “key scenarios” in her dissertation and in a 1973 American Anthropologist article. These are a particular kind of “key symbol,” one that “implies clear-cut modes of action appropriate to correct and successful living in the culture…they formulate the culture’s basic means-ends relationship in actionable form” (1973:1341). Ortner outlines how numerous different contexts—like seating arrangements, shamanistic seances, ritual offerings to gods—were structured as if they were hospitality events. The “scenario of hospitality” therefore acted as a “cultural schema,” transposable across situations and providing prescriptions for action.

Next, Ortner identifies other exemplars, including Schieffelin’s ([1976] 2005) examination of reciprocity and opposition as “cultural scenarios” among the Kaluli of New Guinea, Turner’s (1975) “root paradigms,” like martyrdom in Christianity, Geertz’s “transcription of a fixed ideal” in Negara (1980), and Sahlins’ “structures of the long run” in Historical Metaphors (1981). Ortner argues that cultural schemas have “durability” because “they depict actors responding to, and resolving…the central contradictions of the culture” (1989:61). After High Religion, Ortner refers to schemas only once more, in a 1997 retrospective on Geertz.

What is absent from Ortner’s otherwise exhaustive review of anthropology in the 1984 essay, and throughout her work on cultural schemas, is any reference to “cognitive” anthropology. She makes no reference to Goodenough, Lounsbury, Romney, D’Andrade, Frake, or others, and refers only to Bloch’s work prior to his turn to the cognitive sciences (a turn exemplified by his 1991 article “Language, Anthropology and Cognitive Science”). It is odd, in fact, that she does not reference a 1980 review essay in the American Ethnologist titled “On Cultural Schemata,” written by G. Elizabeth Rice, a UC-Irvine PhD. Nor is there a reference to the 1983 Annual Review of Anthropology essay “Schemata in Cognitive Anthropology,” written by Ronald Casson, a student of D’Andrade and Frake at Stanford. Furthermore, she does not cite the work of Robert I. Levy, who studied Nepal (1990) from a cognitive-anthropological perspective (in fact, Levy’s and Ortner’s books on Nepal were reviewed in the same issue of the American Ethnologist). Originally trained as a psychiatrist, Levy was brought to UC-San Diego in 1969 to help establish the nascent field of “psychological anthropology.” In Tahitians: Mind and Experience in the Society Islands (1975), he applies the concept of schema—which he attributes to the psychiatrist Ernest Schachtel’s study of memory and amnesia.

Several more such examples can be found. We can conclude that Ortner’s conceptualization of schema (and therefore Sewell’s, and likely that of Sewell’s students) developed largely independently of the concept’s parallel development in the cognitive sciences (including cognitive anthropology) then forming on the U.S. West Coast (briefly discussed in my post on connectionism).

References

Geertz, Clifford. 1980. Negara. Princeton University Press.

Giddens, Anthony. 1979. Central Problems in Social Theory: Action, Structure, and Contradiction in Social Analysis. University of California Press.

Giddens, Anthony. 1984. The Constitution of Society: Outline of the Theory of Structuration. University of California Press.

Levy, Robert I. 1975. Tahitians: Mind and Experience in the Society Islands. University of Chicago Press.

Ortner, Sherry B. 1973. “On Key Symbols.” American Anthropologist 75(5):1338–46.

Ortner, Sherry B. 1978. Sherpas Through Their Rituals. Cambridge University Press.

Ortner, Sherry B. 1984. “Theory in Anthropology since the Sixties.” Comparative Studies in Society and History 26(1):126–66.

Ortner, Sherry B. 1989. High Religion: A Cultural and Political History of Sherpa Buddhism. Motilal Banarsidass.

Piaget, Jean. 2015. Structuralism (Psychology Revivals). Psychology Press.

Sahlins, Marshall. 1981. Historical Metaphors and Mythical Realities. Ann Arbor: University of Michigan Press.

Schieffelin, Edward L. [1976] 2005. The Sorrow of the Lonely and the Burning of the Dancers. Springer.

Sewell, William H. 1992. “A Theory of Structure: Duality, Agency, and Transformation.” The American Journal of Sociology 98(1):1–29.

Turner, Victor. 1975. Dramas, Fields, and Metaphors: Symbolic Action in Human Society. Cornell University Press.

Wood, Michael Lee, Dustin S. Stoltz, Justin Van Ness, and Marshall A. Taylor. 2018. “Schemas and Frames.” Sociological Theory, Forthcoming.

 

Exaption: Alternatives to the Modular Brain, Part II

Scientists discovered the part of the brain responsible for…

In my last post, I discussed one alternative to the modular theory of the mind/brain relationship: connectionism. That model is antithetical to modularity in that it posits only distributed networks of neurons in the brain, not special-purpose processors.

One strength of the modular approach, however, is that it maps quite well to our folk psychology. And, much of the popular discourse surrounding research in neuroscience involves the celebrated “discovery” of the part of the brain responsible for X. A major theme of the previous posts is that the social sciences should be skeptical of the baggage of our folk psychology. But, is there not some truth to the idea that certain regions of the brain are regularly implicated in certain cognitive processes?

The earliest attempts at localization relied on an association between some diagnosed syndrome—such as the aphasias discussed in the previous posts—and abnormalities of the brain’s structure (i.e., lesions) identified in post-mortem examinations. For example, Paul Broca, discussed in my previous post, noticed lesions on a particular part of the brain in patients with difficulty producing speech. This part of the brain became known as Broca’s area, though researchers have only a loose consensus as to its boundaries (Lindenberg, Fangerau, and Seitz 2007).

Furthermore, the relationship between lesions in this area and aphasia is partial at best. A century later, Nina Dronkers, the Director of the Center for Aphasia and Related Disorders, states (2000:60):

After several years of collecting data on chronic aphasic patients, we find that only 85% of patients with chronic Broca’s aphasia have lesions in Broca’s area, and only 50–60% of patients with lesions in Broca’s area have a persisting Broca’s aphasia.

More difficult for the modularity thesis, those with damage to Broca’s area who also have Broca’s aphasia usually have other syndromes as well. This implies that the area is multi-purpose, and thus not a single-purpose language-production module (see the book-length discussion in Grodzinsky and Amunts 2006). One reason I focus on Broca’s area (apart from my interest in linguistics) is that it is considered the exemplary case for the modular theory that remains quite dominant (if implicit) in much neuroscientific research (Viola and Zanin 2017).
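Dronkers’ two percentages are easy to conflate, but they are different conditional probabilities over the same patient pool. A quick sketch makes this concrete; the counts below are entirely made up, chosen only to reproduce her figures (the quote does not report actual cohort sizes):

```python
# Hypothetical counts, invented only to reproduce Dronkers' percentages.
aphasia = 100            # patients with chronic Broca's aphasia
both = 85                # of those, also have lesions in Broca's area
lesion = 155             # patients with lesions in Broca's area

# P(lesion in Broca's area | Broca's aphasia) -- the "85%" figure
p_lesion_given_aphasia = both / aphasia

# P(persisting Broca's aphasia | lesion in Broca's area) -- the "50-60%" figure
p_aphasia_given_lesion = both / lesion

print(p_lesion_given_aphasia)            # 0.85
print(round(p_aphasia_given_lesion, 2))  # 0.55
```

The asymmetry is the point: a strong syndrome-to-lesion association in one direction does not make the area a dedicated module, because the reverse conditional can be much weaker.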

Part of the difficulty with assessing even weak modularity hypotheses, however, is that neuroanatomical research continues to revise the “parcellation” of the brain. The first such attempt was by Korbinian Brodmann, published in German in 1909 as “Comparative Localization Studies in the Brain Cortex, its Fundamentals Represented on the Basis of its Cellular Architecture.” He divided the cerebral cortex (the outermost “layer” of the brain) into 52 regions based on the structure of cells (cytoarchitecture) sampled from different sections of brains taken from 64 different mammalian species, including humans (see Figure 1). Although Brodmann’s studies were purely anatomical, he wrote that “my ultimate goal was the advancement of a theory of function and its pathological deviations.” Nevertheless, he rejected what he saw as naive attempts at functional localization:

[Dressing] up the individual layers with terms borrowed from physiology or psychology…and all similar expressions that one encounters repeatedly today, especially in the psychiatric and neurological literature, are utterly devoid of any factual basis; they are purely arbitrary fictions and only destined to cause confusion in uncertain minds.

Figure 1. Brodmann’s hand-drawn parcellation of the human brain.

Over a century later, many researchers continue to refer to “Brodmann area” numbers as general orientation markers. More recently (see Figure 2), using data from the Human Connectome Project and supervised machine learning techniques, a team of researchers characterized 180 areas in each hemisphere—97 new areas and 83 areas identified in previous work (Glasser et al. 2016). This study used a “multi-modal” technique that included cytoarchitecture, like Brodmann, but also connectivity, topography, and function. For the latter, the study used data from “task functional MRI (tfMRI) contrasts,” wherein resting-state measures are compared with measures taken during seven different tasks.
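To make the logic of multi-modal parcellation concrete, here is a deliberately toy sketch. Everything in it is invented for illustration—the three parcel “fingerprints,” the four features, the noise level—and Glasser et al. used far richer data and a trained neural-network classifier, not the nearest-centroid rule below. The idea is simply that each cortical location gets a feature vector drawn from several modalities, and a classifier assigns it to a parcel; dropping one modality then lets us ask how much the remaining features carry on their own.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy parcel "fingerprints": 3 parcels x 4 made-up features
# (e.g., myelin content, cortical thickness, connectivity strength,
# task-contrast activation). All values are invented.
parcel_profiles = np.array([
    [1.0, 0.2, 5.0, 0.1],
    [0.3, 1.5, 1.0, 4.0],
    [4.0, 0.5, 0.5, 1.0],
])

def simulate(n_per_parcel=100, noise=0.3):
    """Sample noisy feature vectors around each parcel's profile."""
    X = np.vstack([p + rng.normal(0.0, noise, size=(n_per_parcel, 4))
                   for p in parcel_profiles])
    y = np.repeat(np.arange(len(parcel_profiles)), n_per_parcel)
    return X, y

def nearest_centroid_accuracy(X_train, y_train, X_test, y_test):
    """Fit per-parcel centroids, then classify test points by distance."""
    centroids = np.vstack([X_train[y_train == k].mean(axis=0)
                           for k in np.unique(y_train)])
    dists = np.linalg.norm(X_test[:, None, :] - centroids[None, :, :], axis=2)
    return float((dists.argmin(axis=1) == y_test).mean())

X_tr, y_tr = simulate()
X_te, y_te = simulate()

acc_full = nearest_centroid_accuracy(X_tr, y_tr, X_te, y_te)
# Ablation: drop the last (task-contrast) feature and reclassify.
acc_ablated = nearest_centroid_accuracy(X_tr[:, :3], y_tr, X_te[:, :3], y_te)

print(acc_full, acc_ablated)
```

Because the profiles here are engineered to be separable on the anatomical features alone, the ablated classifier does nearly as well as the full one—a toy analogue of one modality adding little information beyond the others.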

Figure 2. The multi-modal parcellation from Glasser et al. (2016).

One of these tasks was language processing, using a procedure developed by Binder et al. (2011) wherein participants read a short fable and are then asked a forced-choice question. Glasser et al. found reasonable evidence associating this language task with previously identified components of the “language network” (for recent overviews of the quest to localize the language network, see Friederici 2017 and Fitch 2018, both largely within the generative tradition). Specifically, these are Broca’s area (roughly area 44) and Wernicke’s area (roughly area PSL), plus an additional area they call 55b. Their findings also agreed with previous work, going back to Broca, on the “left-lateralization” of the language network—which means not that language is only in the left hemisphere (as some folk theories purport), but simply that the left areas show more activity in response to the language task than homologous areas in the right hemisphere (an early finding that inspired Jaynes’ bicameral mind hypothesis).

Does this mean we have discovered the “language module” theorized by Fodor, Chomsky, and others? Not quite, for three reasons. First, Glasser et al. found that if they removed the functional task data, their classifier was nearly as accurate at identifying parcels. Second, although the parcels were averaged over a couple hundred brains, the classifier was still able to identify parcels in atypical brains (whether this translated into changes in functionality was outside the scope of the study).

Third, and most important for our purposes, this work does not—and the researchers do not attempt to—determine whether parcels are uniquely specialized (or encapsulated, in Fodor’s terms). That is, while we can roughly identify a language network implicating relatively consistent areas across different brains, this does not demonstrate that such structures are necessary and sufficient for human language, and solely used for this purpose. Indeed, language may be a “repurposing” of brain parcels used for (evolutionarily or developmentally) older processes. This is precisely the thesis of neural “exaption.”

What is Exaption?

In the last few decades, several new frameworks—under labels like neural reuse, neuronal recycling, neural exploitation, and massive redeployment—have attempted to bridge the modularity assumptions that undergird most neuroanatomical research, on the one hand, and the connectionist assumptions that spurred advances in artificial intelligence research and anthropology, on the other. These frameworks also attempt to account for the fact that there is some consistency in activation across individuals, which does look a little bit like modularity.

The basic idea is exaption (also called exaptation): some biological tendencies or anatomical constraints may predispose certain areas of the brain to be implicated in certain cognitive functions, but these same areas may be recycled, repurposed, or reused for other functions. Exemplars of this approach are Stanislas Dehaene’s Reading in the Brain and Michael Anderson’s After Phrenology.

Perhaps the easiest way to give a sense of what this entails is to consider cases of neurodiversity, specifically the anthropologist Greg Downey’s essay on the use of echolocation by the visually impaired. While folk understandings may suggest that hearing becomes “better” in those with limited sight, this is not quite the case. Rather, one study finds that when participants listened to a recording containing echoes, “parts of the brain associated with visual perception in sighted individuals became extremely active.” In other words, the brain repurposed the visual cortex as a result of the individual’s practices. While most humans have limited echolocation abilities and the potential to develop this skill, only some will put in the requisite practice.

Another strand of research supporting neural exaption falls under the heading of “conceptual metaphor theory” (itself a subfield of cognitive linguistics). The basic argument from this literature is that people tend to reason about (target) domains with which they have had little direct experience by analogy to (source) domains with which they have had much direct experience (e.g., the nation is a family). As argued in Lakoff and Johnson’s famous Metaphors We Live By, this metaphorical mapping is not just figurative or linguistic, but rather a pre-linguistic conceptual mapping, and an—if not the—essential part of all cognition (Hofstadter and Sander 2013). Therefore, thinking or talking about even very abstract concepts re-activates a coalition of neural associations, many of which are fundamentally adapted to the mundane sensorimotor task of navigating our bodies through space. As we discuss in our forthcoming paper, “Schemas and Frames” (Wood et al. 2018), because talking and thinking recruit areas of our neural system often deployed in other activities—and at time-scales faster than conscious awareness can adequately attend to—our biography of embodiment channels our reasoning in ways that seem intuitive and yet are constrained by the pragmatic patterns of those source domains. This is fully compatible with the dispositional theory of the mental that Omar discusses.

What does this mean for sociology? I think there are numerous implications and we are just beginning to see how generative these insights are for our field. Here, I will limit myself to discussing just two, specifically related to how we tend to think about the role of language in our work. First, for an actor, knowing what text or talk means involves an actual embodied simulation of the practices it implies, very often (but not necessarily) in service of those practices in the moment (Binder and Desai 2011). Therefore, language should not be understood as an autonomous realm wherein meanings are produced by the internal interplay of contrastive differences within an always deferred linguistic system. Rather, following the later Wittgenstein in the Philosophical Investigations, “in most cases, the meaning of a word is its use.” Furthermore, as our embodiment is largely (but certainly not completely) shared across very different peoples (for example, most of us experience gravity all the time), there is a significant amount of shared semantics across diverse peoples (Wierzbicka 1996)—indeed without this, translation would likely be impossible.

Second, the repurposing of vocabulary commonly used in one context into a new context will often involve the analogical transfer of traces of the old context. This is because invoking such language activates a simulation of practices from the old context while one is in the new context (although this is dependent upon the accrued biographies of the individuals involved). This suggests that our language can be constraining in predictable ways, but not because the language itself has a structure or code rendering certain possibilities unthinkable. Rather, it is that language is the manifestation of a habit inextricably involved in a cascade of other habits, making some actions or thoughts easier to execute (and therefore more probable) than others. For example, as Barry Schwartz argued in his (criminally under-appreciated) Vertical Classification, it is nearly universal that UP is associated with power and with the morally good, as a result of (near-universal) practices we encounter as babies and children. This helps explain the persistence of the “height premium” in the labor market (e.g., Lundborg, Nystedt, and Rooth 2014).

 

References

Binder, Jeffrey R. et al. 2011. “Mapping Anterior Temporal Lobe Language Areas with fMRI: A Multicenter Normative Study.” NeuroImage 54(2):1465–75.

Binder, Jeffrey R. and Rutvik H. Desai. 2011. “The Neurobiology of Semantic Memory.” Trends in Cognitive Sciences 15(11):527–36.

Dronkers, N. F. 2000. “The Pursuit of Brain–language Relationships.” Brain and Language. Retrieved (http://www.ebire.org/aphasia/dronkers/the_pursuit.pdf).

Fitch, W. Tecumseh. 2018. “The Biology and Evolution of Speech: A Comparative Analysis.” Annual Review of Linguistics 4(1):255–79.

Friederici, Angela D. 2017. Language in Our Brain: The Origins of a Uniquely Human Capacity. MIT Press.

Glasser, Matthew F. et al. 2016. “A Multi-Modal Parcellation of Human Cerebral Cortex.” Nature 536(7615):171–78.

Grodzinsky, Yosef and Katrin Amunts. 2006. Broca’s Region. Oxford University Press, USA.

Hofstadter, Douglas and Emmanuel Sander. 2013. Surfaces and Essences: Analogy as the Fuel and Fire of Thinking. Basic Books.

Lindenberg, Robert, Heiner Fangerau, and Rüdiger J. Seitz. 2007. “‘Broca’s Area’ as a Collective Term?” Brain and Language 102(1):22–29.

Lundborg, Petter, Paul Nystedt, and Dan-Olof Rooth. 2014. “Height and Earnings: The Role of Cognitive and Noncognitive Skills.” The Journal of Human Resources 49(1):141–66.

Viola, Marco and Elia Zanin. 2017. “The Standard Ontological Framework of Cognitive Neuroscience: Some Lessons from Broca’s Area.” Philosophical Psychology 30(7):945–69.

Wierzbicka, Anna. 1996. Semantics: Primes and Universals. Oxford University Press, UK.

Wood, Michael Lee, Dustin S. Stoltz, Justin Van Ness, and Marshall A. Taylor. 2018. “Schemas and Frames.” Retrieved (https://osf.io/preprints/socarxiv/b3u48/).