Four arguments for the cognitive social sciences

Despite increasing efforts to integrate ideas, concepts, findings and methods from the cognitive sciences with the social sciences, not all social scientists agree that this is a good idea. Some are indifferent to these integrative attempts. Others consider them overly reductionist and, thereby, a threat to the identity of their disciplines. In response to many social scientists’ skepticism towards psychology and cognitive science, cognitive social scientists have provided arguments to convince their colleagues of the benefits of integrating the social sciences with the cognitive sciences. In this blog post, which is based on a recently published article co-authored with Matti Sarkia and Mikko Hyyryläinen (Kaidesoja, Sarkia & Hyyryläinen 2019), I briefly outline and evaluate four arguments for the cognitive social sciences. By cognitive social sciences, I refer to scientific disciplines that aim to integrate the social sciences with the cognitive sciences, including cognitive anthropology, cognitive sociology, political psychology, and behavioral economics. By interdisciplinary integration, I mean different ways of bringing disciplines together.

Each argument presupposes a different idea about how the cognitive sciences should be integrated with the social sciences. These arguments can be referred to as explanatory grounding, theoretical unification, constraint and complementarity. Different arguments also subscribe to different visions of what the cognitive social sciences might look like and make different assumptions about social phenomena and scientific explanations of them. Hence, different arguments provide reasons for engaging in different types of research programs in the cognitive social sciences. For these reasons, it is important not only to reconstruct these four arguments but also to take a closer look at their presuppositions and implications.

I will address each argument in two stages. First, I provide a reconstruction of the argument by specifying its premises, inferential structure and conclusion. Then I briefly evaluate the argument by analyzing some of its presuppositions and the plausibility of its premises. Although I do not claim these four arguments to be the only arguments for the cognitive social sciences, I believe that they are among the most important and influential ones. In addition, while I attribute each argument to a particular author, in the longer piece we also point to other cognitive social scientists who have proposed similar arguments (see Kaidesoja, Sarkia & Hyyryläinen 2019).

Argument from explanatory grounding

Ron Sun (2012) presents the argument from explanatory grounding for the cognitive social sciences. Here is the reconstruction of Sun’s argument that we provided in our paper:

  1. Most social scientists do not currently make use of the knowledge produced in the cognitive sciences when they explain social phenomena.
  2. Cognitive processes are the ontological basis of social processes.
  3. Explanations in the cognitive sciences are deeper than explanations in the social sciences because they bottom out in cognitive processes.
  4. If social scientists ground their explanations in the cognitive sciences, their explanations for social phenomena would become deeper than they are at present.
  5. Conclusion: the social sciences should be grounded in the cognitive sciences (Kaidesoja, Sarkia & Hyyryläinen 2019, 3).

It is important to recognize that Sun’s argument presupposes that the explanatory grounding relation between the cognitive and social sciences is asymmetrical. This means that if the social sciences are grounded in the cognitive sciences, then the cognitive sciences cannot be grounded in the social sciences.

Sun’s key premises 2 and 3 rest on the requirement that scientific explanations should reflect the ontological order of reality. This means that higher-level processes should be explained by the models that represent their lower-level component processes that form the ontological basis of the higher-level processes. Since Sun (2012) assumes that cognitive sciences study cognitive processes that are ontologically more fundamental than social processes studied in the social sciences, he expects that the cognitive sciences are capable of providing deeper explanations for social processes than those currently provided in the social sciences. He does not claim, however, that these cognitive explanations would explain social processes away (e.g. by means of ontologically reducing them to cognitive processes or eliminating them from scientific ontology). In other words, the idea of explanatory grounding of the social sciences in the cognitive sciences is compatible with the assumption that social processes have weakly emergent properties that can be mechanistically explained (e.g. Kaidesoja 2013).

Although it does not reduce social phenomena to cognitive phenomena, the idea of asymmetrical explanatory grounding may pose unnecessary constraints on the development of the cognitive sciences. There are no good a priori reasons to exclude the possibility that the social sciences might have something useful to offer to those parts of the cognitive sciences that address the cognitive aspects of social phenomena. For example, social scientists may show that some cognitive mechanisms have social aspects that have been ignored by cognitive scientists. In addition, while Sun (2012) tends to assume that the explanatory grounding of the social sciences in the cognitive sciences should be based on a cognitive architecture that provides a unified theory of the mind, such as his own CLARION architecture, this assumption can be challenged on three grounds. First, many competing cognitive architectures exist, and it is not clear which one should be chosen for the purposes of explanatory grounding. Second, the mechanistic approach to explanation is perfectly compatible with the idea of local (or phenomenon-specific) explanatory grounding that may proceed without a unified theory of mind. Third, at least arguably, local attempts at explanatory grounding have turned out to be more fruitful than global attempts that rely on unified cognitive architectures.

For these and some other reasons we discuss in the article, the local version of the explanatory grounding argument seems more promising than the global one. Local explanatory grounding arguments are presented in the context of explanatory research on particular social phenomena, such as transactive memory, collaborative learning or moral judgements. In addition, at least some social phenomena may be grounded in cognitive mechanisms understood in an externalist way, meaning that these cognitive mechanisms include important technological, social and/or cultural aspects in addition to brain-bound aspects (see Miłkowski et al., 2018). Cognitive mechanisms of this kind have been theorized and studied in the so-called 4E (i.e. embodied, embedded, enactive and extended) approaches to cognition (see Newen, De Bruin & Gallagher, 2018) as well as in distributed and situated cognition approaches.

Argument from theoretical unification

Herbert Gintis (e.g. 2007a, 2009, 2012) has developed an argument for a unified and cognitively informed behavioral science. We reconstruct Gintis’s argument as follows:

  1. Scientific disciplines that study the same domain of phenomena should be conceptually and theoretically unified with one another.
  2. The behavioral sciences all study the same domain of phenomena, which have to do with decision-making and strategic interaction.
  3. Hence, the behavioral sciences ought to be unified with one another.
  4. Conclusion: Unification of the behavioral sciences requires a unified framework for modeling decision-making and strategic interaction in a way that takes into account the contributions of different behavioral sciences (Kaidesoja, Sarkia & Hyyryläinen 2019, 6).

Although theoretical unification surely is one of the epistemic criteria used in scientific evaluation, the problem with Gintis’s argument is that it fails to notice that unification is neither the only such criterion nor even the most important one. Indeed, many philosophers of science and social epistemologists have argued that a diversity of perspectives on the world is essential for scientific progress both in the natural sciences and in the social sciences (e.g. Longino, 1990; Weisberg & Muldoon, 2009). This means that the requirement of theoretical unification becomes problematic if it is used to suppress other research programs in the cognitive social sciences. The argument from theoretical unification largely ignores these points.

In addition, it is not at all clear whether Gintis (2007a; 2009; 2012) succeeds in integrating the social sciences with the cognitive sciences in an adequate way. He builds his unifying theoretical framework by combining a slightly revised rational actor model and game theory (both originally developed in neo-classical economics) with a relatively speculative use of some evolutionary principles. One reason to doubt the feasibility of this framework is that many cognitive scientists and behavioral economists have forcefully criticized the axioms of rational choice theory. Although Gintis (e.g. 2007b) admits this and responds to these critiques, we argued in the paper that his way of dealing with them is highly selective and question-begging (Kaidesoja, Sarkia & Hyyryläinen 2019, 7). Moreover, if only those parts of the social sciences studying decision-making and strategic interaction are included in “the unified behavioral science”, then large chunks of the social sciences are excluded from it. This is problematic insofar as one wants to develop an argument for the cognitive social sciences that would encompass research programs on all kinds of social phenomena. In addition, Gintis’s argument from theoretical unification is likely to raise the specter of economics imperialism among social scientists, due to the central role that the rational actor model plays in his unified modeling framework and his principles for unifying the behavioral sciences.

Argument from constraints

Maurice Bloch’s (2012) argument for the cognitive social sciences highlights limitations in social scientists’ and their research subjects’ understanding of how their minds operate. This is how we reconstructed Bloch’s argument from constraints:

  1. Since all social processes involve cognitive aspects, social scientists must make assumptions about human cognition in their research practices.
  2. Social scientists’ assumptions about the cognitive processes of their research subjects are often based on the subjects’ own accounts of these processes and/or the ideas and concepts of “folk psychology” that people use in their everyday life.
  3. Cognitive scientific studies have convincingly demonstrated that our cognitive processes are not transparent to us and that our own understanding of these processes, including social scientists’ and their research subjects’ “folk psychological theories”, is limited and sometimes misleading.
  4. Conclusion: social scientists’ assumptions about cognitive processes of their research subjects should be constrained by the results of cognitive sciences (Kaidesoja, Sarkia & Hyyryläinen 2019, 9).

This argument carries far fewer ontological, methodological and theoretical presuppositions than the two arguments considered above. For example, instead of celebrating the progress of the cognitive sciences, Bloch (2012, p. 9) holds that “the study of cognition is in its infancy” and that, for this reason, “the cognitive sciences are more certain when telling us what things are not like, than when telling us how things are”. Accordingly, the main purpose of his argument is to weed out implausible cognitive assumptions from the social sciences rather than to ground the social sciences in the cognitive sciences or to unify the social sciences with the help of the cognitive sciences.

All of the premises of the above argument seem well justified. Indeed, cognitive scientists have convincingly demonstrated not only that our everyday conceptions of how our minds work are seriously limited and potentially misleading, but also that a large part of our action-related cognitive processes are implicit (e.g. Evans & Frankish, 2009; Kahneman, 2011). The conclusion in 4 is also well supported, at least to the extent that social scientists studying small-scale social interactions are well advised to pay attention to the results of the cognitive sciences when making assumptions about the cognitive processes of their research subjects, since this enables them to avoid biased explanations.

This does not mean, however, that social scientists should replace their methods with those of the cognitive sciences, since, as Bloch (2012) rightly argues, ethnographic methods can be used to produce data about social and cultural phenomena that is impossible to obtain by using the experimental and simulation methods of cognitive scientists (see also Hutchins, 1995). What it does mean is that the data social scientists produce by using ethnographic methods should not be interpreted as providing reliable knowledge about the internal cognitive processes of their research subjects and that, for many explanatory purposes, it should be supplemented with data acquired by using other types of methods, including those used in the cognitive sciences.

Nevertheless, the results of the cognitive sciences are less significant when it comes to explanatory studies of the outcomes of social interactions among a large number of individuals in a specific institutional context. The reason is that, in this context, social scientists cannot escape making trade-offs between the psychological realism and the tractability of their models. The feasibility of their assumptions about cognition should be judged case by case, taking into account the purposes for which they use their models. However, in order to be able to make judgements of this kind, social scientists should be aware of the relevant cognitive processes that they abstract from or idealize in their models. To this end, they need the cognitive sciences (see Lizardo, 2009).

Argument from complementarity

The argument from complementarity is the oldest of the four. Eviatar Zerubavel proposed it as early as 1997 in his Social Mindscapes. We reconstructed Zerubavel’s argument in the paper as follows:

  1. Since cognitive science studies cognitive universals, it cannot answer questions about how cognition varies between groups and how social environments affect cognitive processes.
  2. In order to provide a more comprehensive understanding of human cognition, cognitive science should be complemented with studies that answer questions concerning the domain of the sociomental (i.e. cognitive phenomena that vary between groups and cultures but are not entirely idiosyncratic).
  3. Cognitive sociology’s ontological, theoretical and methodological position allows it to answer questions concerning the domain of the sociomental.
  4. Conclusion: Cognitive science should be complemented with cognitive sociology (Kaidesoja, Sarkia & Hyyryläinen 2019, 11).

The argument from complementarity is based on a view that different disciplines produce knowledge about human cognition according to their distinct ontological and epistemological commitments that may be incompatible with each other. It suggests that cognitive sociology does not aim to build a bridge between sociology and the cognitive sciences but rather forms an autonomous perspective on the sociomental aspects of human cognition that is meant to complement cognitive science.

This argument assumes a quite narrow and monolithic understanding of cognitive science. Although premise 1 offered a relatively accurate characterization of the state of cognitive science in the 1990s, today it is clearly outdated. The reason is that cognitive science has moved away from a nearly exclusive focus on “the universal foundations of human cognition” (Zerubavel, 1997, p. 3) that are realized in our brains, and has incorporated wider perspectives that focus on the embodied, embedded, enactive, extended, situated, distributed and cultural-historical aspects of cognitive processes (e.g. Hutchins, 1995; Clark, 1997; Franks, 2011; Lizardo et al., 2019; Turner, 2018). Although studies on “wide cognition” (Miłkowski et al., 2018) were in their infancy in the 1990s, when Zerubavel first developed his argument, these externalist approaches to human cognition also seem to be ignored in more recent discussions inspired by his work (e.g. Brekhus, 2015). Hence, the argument from complementarity needs to be updated by taking into account recent developments in the cognitive sciences. When this is done, it is not at all clear whether the revised argument provides a distinct argument for the cognitive social sciences.

Another problem with the argument from complementarity concerns the kind of interdisciplinarity it would produce in practice. Omar Lizardo (2014), for example, argues that the sociology of culture and cognition, often used as a synonym for Zerubavellian cognitive sociology, creates “a sense of pseudo-interdisciplinarity”. This means that, although the name suggests at least some degree of interdisciplinary interaction, actual communication between the disciplines has been almost nonexistent in this tradition. All attempts to create complementary perspectives to cognitive science run the risk of pseudo-interdisciplinarity of this kind. Hence, although interdisciplinary integration is regarded as an ultimate goal of the multilevel approach to cognition in some of Zerubavel’s (e.g. 1997, p. 113) claims, the argument from complementarity may actually lead away from this goal.

References

Bloch, M. (2012). Anthropology and the cognitive challenge. Cambridge: Cambridge University Press.

Brekhus, W. (2015). Culture and cognition: Patterns in the social construction of reality. Cambridge: Polity Press.

Clark, A. (1997). Being there: Putting brain, body, and world together again. Cambridge, MA: The MIT Press.

Evans, J., & Frankish K. (Eds.). (2009). In two minds: Dual process theories and beyond. Oxford: Oxford University Press.

Franks, B. (2011). Culture & cognition: Evolutionary perspectives. Houndmills: Palgrave Macmillan.

Gintis, H. (2007a). A framework for the unification of the behavioral sciences. Behavioral and Brain Sciences, 30, 1–16.

Gintis, H. (2007b). A framework for the unification of the behavioral sciences II. Behavioral and Brain Sciences, 30, 45–53.

Gintis, H. (2009). The bounds of reason: Game theory and the unification of the behavioral sciences. Princeton, NJ: Princeton University Press.

Gintis, H. (2012). The role of cognitive processes in unifying the behavioral sciences. In R. Sun (Ed.), Grounding social sciences in cognitive sciences (pp. 415–443). Cambridge, MA: MIT Press.

Hutchins, E. (1995). Cognition in the wild. Cambridge, MA: The MIT Press.

Kahneman, D. (2011). Thinking, fast and slow. London: Penguin Books.

Kaidesoja, T. (2013). Naturalizing critical realist social ontology. London: Routledge.

Kaidesoja, T., Sarkia, M., & Hyyryläinen, M. (2019). Arguments for the cognitive social sciences. Journal for the Theory of Social Behaviour, 1–16. https://doi.org/10.1111/jtsb.12226

Lizardo, O. (2009). Formalism, behavioral realism and the interdisciplinary challenge in sociological theory. Journal for the Theory of Social Behaviour, 39(1), 39–79.

Lizardo, O. (2014). Beyond the Comtean schema: The sociology of culture and cognition versus cognitive social science. Sociological Forum, 29(4), 983–989.

Lizardo, O., Sepulvado, B., Stoltz, D., & Taylor, M.A. (2019). What can cognitive neuroscience do for cultural sociology? American Journal of Cultural Sociology. Retrieved August 6, 2019, from https://doi.org/10.1057/s41290-019-00077-8.

Longino, H. (1990). Science as social knowledge. Princeton, NJ: Princeton University Press.

Miłkowski, M., Clowes, R., Rucińska, Z., Przegalińska, A., Zawidzki, T., Krueger, J., … Hohol, M. (2018). From wide cognition to mechanisms: A silent revolution. Frontiers in Psychology, 9, Art. 2393.

Newen, A., De Bruin, L., & Gallagher, S. (Eds.). (2018). The Oxford handbook of 4E cognition. Oxford: Oxford University Press.

Sun, R. (2012). Prolegomena to cognitive social sciences. In R. Sun (Ed.), Grounding social sciences in cognitive sciences (pp. 3–32). Cambridge, MA: MIT Press.

Turner, S.P. (2018). Cognitive science and the social. London: Routledge.

Zerubavel, E. (1997). Social mindscapes: An invitation to cognitive sociology. Cambridge, MA: Harvard University Press.

Categories, Part II: Prototypes, Fuzzy Sets, and Other Non-Classical Theories

A few years ago The Economist published “Lil Jon, Grammaticaliser.” “Lil Jon’s track ‘What You Gonna Do’ got me thinking,” the author tells us, “of all things, the progressive grammaticalisation of the word shit.” In the track, Lil Jon repeats “What they gon’ do? Shit,” and in this lyric shit doesn’t mean “shit”; it means “nothing.”

As the author goes on to explain, things that are either trivial, devalued or demeaning are commonly used to mean “nothing”: I haven’t eaten a bite, I don’t give a rat’s ass, I won’t hurt a fly, he doesn’t know shit. More examples are given in Hoeksema’s “On the Grammaticalization of Negative Polarity Items.” This is difficult to account for in Chomsky’s (Extended or Revised Extended) Standard Theory because the meaning of terms makes them candidates for specific kinds of syntactic functions (Traugott and Heine 1991:8):

What we find in language after language is that for any given grammatical domain, there is only a restrictive set of… sources. For example, case markers, including prepositions and postpositions, typically derive from terms for body parts or verbs of motion; tense and aspect markers typically derive from specific spatial configurations; modals from terms from possession, or desire; middles from reflexives, etc.

Grammaticalization involves the extension of a term until its meaning is “bleached” and becomes more generic and encompassing (Sweetser 1988). For example, the modal “will,” as in “I will finish that review,” comes from the Old English willan, meaning to “want” or “wish,” and, of course, it still carries that connotation: “I willed it into being.” This relates to a second difficulty for Chomskyan theory: grammaticalization is a graded process. It is not always easy to decide whether a particular lexical item should be categorized as one or another syntactic unit, and therefore we cannot know precisely which rules apply when.

Logical Weakness of the Classical Theory

It may be that the classical theory doesn’t work well for linguistics, but that might not be a reason to abandon it elsewhere. In fact, there is a certain sensibleness to the approach: categories are about splitting the world up, so why shouldn’t everything fall into mutually exclusive containers? To summarize the various weaknesses as described by Taylor (2003):

  1. Provided we know (innately or otherwise) what features grant membership in a category, we must still verify that a token has all the features granting it membership, rendering categories pointless.
  2. Perhaps we could allow an authority to assure us a token has all the features, but then we are no longer relying on the classical conditions to categorize.
  3. Features might also be kinds of categories, e.g., if cars must have wheels, what defines inclusion in the category “wheels,” which leads to infinite regress (unless, of course, we can find genuine primitives).
  4. Finally, it seems that a lot of features are defined circularly by reference to their category, e.g., cars have doors, but what kind of doors other than the doors cars tend to have?

The rejection of this classical theory is foreshadowed by, among others, Wittgenstein. The young Wittgenstein was interested in philosophy and mathematics, and after being encouraged by Frege, he more or less forced Bertrand Russell to take him on as a student in 1911. His first major work, the Tractatus Logico-Philosophicus, was published in 1921 and went on to inspire the founding of the Vienna Circle of logical empiricism, which did not include Wittgenstein, even though he was living in Vienna at the time; he seemed to hate everyone. (At the same time, it bears noting, Roman Jakobson was a couple hundred miles away founding the Prague Linguistic Circle.)

After intervening years that are worth reading about in their own right, the received story goes, Wittgenstein did an about-face on his own argument in the Tractatus in the course of trying to find the “atoms” of formal logic. In his later writings, beginning in the late 1920s and continuing until his death in 1951, we get, among other things, the notion of defining words not by a list of necessary and sufficient conditions but by looking at how words are used. The best-known example: after reviewing a few different ways the word “game” is used, he states, “we can go through many, many other groups of games in the same way, can see how similarities crop up and disappear…I can think of no better expression to characterize these similarities than ‘family resemblances’” (Wittgenstein [1953] 2009, paras. 66–67).

Beyond Family Resemblances

From The Atlas of the Munsell Color System, by Albert H. Munsell

Prototype Theory and Basic Level Categories

One pillar of the classical theory is that, if membership is granted based on having certain attributes, then it follows that no member should be a better or worse example of that category than any other. A second pillar is that category criteria should be independent of who or what is doing the categorizing. Eleanor Rosch’s early work toppled both pillars.

Rosch graduated from Reed College, completing her senior thesis on Wittgenstein (who, she says, “cured her of philosophy”), specifically on his discussion of pain and “private language.” She went on to complete graduate work in psychology at the famed Harvard Department of Social Relations, under the direction of Roger Brown (an expert in the psychology of language). She conducted research in New Guinea on Dani color and form categories, as well as child-rearing practices (Rosch Heider 1971), and in late 1971 she joined the psychology department at UC, Berkeley.

In a 1973 publication, “Natural Categories,” Rosch critiqued existing studies of category formation because they relied on categories that subjects had already formed. For example, “American college sophomores have long since learned the concepts ‘red’ and ‘square’.” To meet this challenge, she studied the Dani, who had only two color terms, which divided color on the basis of brightness rather than hue. Rosch hypothesized (Rosch 1973:330):

…there are colors and forms which are more perceptually salient than other stimuli in their domains…salient colors are those areas of the color space previously found to be most exemplary of basic color names in many different languages… and that salient forms are the “good forms” of Gestalt psychology (circle, square, etc.). Such colors and forms more readily attract attention than other stimuli… are more easily remembered than less salient stimuli…

She ultimately found “the salience and memorability of certain areas of the color space…can influence the formation of linguistic categories” (the classical citation for cross-cultural color categorization being Berlin and Kay 1991; see also Gibson et al. 2017). As categories form around salient prototypes, potential members of this category are judged on a graded basis.

In addition to finding that categories are built around salient exemplars, Rosch found, in line with ecological psychology, that such salience relates to the usefulness for, and capacities of, the observer. For example, there tends to be the most cross-cultural agreement as to how any given token is categorized at the “basic level.” That is, although different groups of people may differ in terms of what the prototypical “dog” is (is it a golden retriever or a bulldog?), when people see a dog, any dog, they will probably categorize it at the basic level of “dog,” as opposed to generically as an animal or mammal, or specifically as a golden retriever-bulldog mix. And it is at this basic level that there is the most interpersonal (and cross-cultural) similarity.

Berkeley and the West Coast Cognitive Revolution

In a previous post, I discussed all the interesting things happening in anthropology and artificial intelligence at UC, San Diego and Stanford during the ’70s and ’80s, and we can add UC, Berkeley to this list of strongholds for West Coast Cognitive Revolutionaries.

Lakoff left MIT for Berkeley in 1972, and shortly thereafter he was confronted with kinds of utterances that neither generative semantics nor generative grammar could account for, e.g., “John invited you’ll never guess how many people to the party,” in which a clause splits another clause, sometimes called “center embedding.” Faced with this, Lakoff got an NSF grant to invite people from linguistics, psychology, logic, and artificial intelligence to a summer seminar in 1975, which ballooned into roughly 190 attendees (de Mendoza Ibáñez 1997). Among the lectures were Rosch on basic-level categories and how category prototypes can be represented in motor systems (the seedling of the embodied mind), Charles Fillmore’s discussion of “frame semantics,” which inspired the cognitive anthropologists, and Leonard Talmy (a recent Berkeley PhD) on how physical embodiment creates universal “cognitive topologies” which map onto words like “in” and “out.”

So, Lakoff recalls, “in the face of all this evidence, in the summer of 1975, I realized that both transformational grammar and formal logic were hopelessly inadequate and I stopped doing Generative Semantics” (de Mendoza Ibáñez 1997). It is also in 1975 that he published “Hedges: A Study in Meaning Criteria and the Logic of Fuzzy Concepts,” incorporating ideas from Rosch as well as from another Berkeley professor, Lotfi Zadeh. In this paper Lakoff argued: “For me, some of the most interesting questions are raised by the study of words whose meaning implicitly involves fuzziness – words whose job is to make things fuzzier or less fuzzy. I will refer to such words as ‘hedges’.” In addition to referring to Rosch’s then-unpublished paper “On the Internal Structure of Perceptual and Semantic Categories,” Lakoff acknowledges that “Professor Zadeh has been kind enough to discuss this paper with me often and at great length and many of the ideas in it have come from those discussions.”

Zadeh was born in Baku, Azerbaijan, and studied at the University of Tehran before completing his master’s at MIT and his doctorate in electrical engineering at Columbia University in 1949. He eventually landed at UC, Berkeley in 1959, where he slowly began to develop “fuzzy” methods. In 1965 he published the paradigm-shifting piece “Fuzzy Sets,” which he began writing during the summer of ’64 while working at the Rand Corporation, and which also exists as the report “Abstraction and Pattern Classification.” In essence, Zadeh realized that many objects in the world do not have clear boundaries allowing discrete classification, but rather allow for graded membership (he used the example of “tall man” and “very tall man”). He then demonstrated that classical “crisp” set theory is simply a special case of “fuzzy” set theory.
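Zadeh’s core move can be sketched in a few lines of code: a fuzzy set assigns each object a degree of membership between 0 and 1, and a classical “crisp” set is just the special case where that degree is always exactly 0 or 1. (The particular membership function below, with its 160–190 cm ramp and 180 cm cutoff, is my own made-up illustration, not Zadeh’s.)

```python
def tall(height_cm):
    """Fuzzy membership in "tall": 0 below 160 cm, 1 above 190 cm,
    and a graded degree of membership in between."""
    if height_cm <= 160:
        return 0.0
    if height_cm >= 190:
        return 1.0
    return (height_cm - 160) / 30.0

def crisp_tall(height_cm):
    """Classical ("crisp") membership: all-or-nothing at a sharp cutoff.
    This is the special case where degrees are only ever 0 or 1."""
    return 1.0 if height_cm >= 180 else 0.0

# A 175 cm person is "tall" to degree 0.5 on the fuzzy reading,
# but falls entirely outside the crisp set.
print(tall(175))        # 0.5
print(crisp_tall(175))  # 0.0
```

The point of the contrast is that the crisp version forces a sharp, arbitrary boundary (179 cm vs. 180 cm flips membership completely), while the fuzzy version lets membership grade off smoothly.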

Zadeh quickly expanded fuzzy methods into a plethora of subfields, including information systems and computer science, and, beginning in the 1970s, linguistics; an early example is “A Fuzzy-Set-Theoretic Interpretation of Linguistic Hedges.” However, whether fuzzy logic explains the normal process of human categorization (i.e., whether humans actually follow the procedures of fuzzy logic when categorizing) continues to be debated. Rosch (e.g. Rosch 1999), in particular, is skeptical, precisely because the process of categorizing is not about applying decontextualized “rules.” Rather, as Mike argued in his recent post, we can think of categorizing as more like finding than seeking.
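To make the fuzzy-set treatment of hedges concrete: in that literature, hedges are modeled as operators that transform membership degrees. The common textbook rendering, following Zadeh, is “very” as concentration (squaring the degree) and “more or less” as dilation (taking its square root); the 0.8 membership value below is an arbitrary illustration.

```python
import math

def very(mu):
    # Concentration: "very X" demands more of a candidate,
    # so intermediate degrees shrink (0 and 1 stay fixed).
    return mu ** 2

def more_or_less(mu):
    # Dilation: "more or less X" is more forgiving,
    # so intermediate degrees grow (0 and 1 stay fixed).
    return math.sqrt(mu)

mu = 0.8                  # assumed degree of membership in "tall"
print(very(mu))           # about 0.64: harder to count as "very tall"
print(more_or_less(mu))   # about 0.89: easier to count as "more or less tall"
```

This is exactly the kind of machinery Rosch is skeptical of as a psychological account: it is an elegant decontextualized rule, and her point is that human categorizing may not work by applying such rules at all.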

References

Berlin, Brent and Paul Kay. 1991. Basic Color Terms: Their Universality and Evolution. University of California Press.

Gibson, Edward, Richard Futrell, Julian Jara-Ettinger, Kyle Mahowald, Leon Bergen, Sivalogeswaran Ratnasingam, Mitchell Gibson, Steven T. Piantadosi, and Bevil R. Conway. 2017. “Color Naming across Languages Reflects Color Use.” Proceedings of the National Academy of Sciences of the United States of America 114(40):10785–90.

de Mendoza Ibáñez, Francisco José Ruiz. 1997. “An Interview with George Lakoff.” Cuadernos de Filología Inglesa 6(2):33–52.

Rosch, E. 1999. “Reclaiming Concepts.” Journal of Consciousness Studies 6(11-12):61–77.

Rosch, Eleanor H. 1973. “Natural Categories.” Cognitive Psychology 4(3):328–50.

Rosch Heider, Eleanor. 1971. “Style and Accuracy of Verbal Communications within and between Social Classes.” Journal of Personality and Social Psychology 18(1):33.

Sweetser, Eve E. 1988. “Grammaticalization and Semantic Bleaching.” Pp. 389–405 in Annual Meeting of the Berkeley Linguistics Society. Vol. 14.

Taylor, John R. 2003. Linguistic Categorization. OUP Oxford.

Traugott, Elizabeth Closs and Bernd Heine. 1991. Approaches to Grammaticalization: Volume II. Types of Grammatical Markers. John Benjamins Publishing.

Wittgenstein, Ludwig. [1953] 2009. Philosophical Investigations. Blackwell.

Identifying Cultural Variation in Thinking

What does it mean to identify cultural variation in thought? Sociologists routinely identify differences in the way people think or reason about things (e.g., Young 2004), but what does it mean to think differently, and how are differences identified? In this post, I introduce a way of thinking about this question that moves beyond traditional “frame-like” concepts.

The Frame Approach to Thinking about Thinking

Frame-like concepts are often used to denote different ways of thinking, referring to monolithic cognitive objects, “mental fences” (Zerubavel 1997:37), which “filter” (Small 2004:70) cognition by “[highlighting] certain facts while [excluding] others” (Fligstein, Brundage, and Schultz 2017:881). Frame-like concepts are treated both as durable ways of thinking (Zerubavel 1997) and as situationally variable frames of thought (Goffman 1974), but in either case, they are generally interpreted as mutually exclusive categories, with only one active at any given time.

Frame-like concepts are intuitive but also bring important challenges and limitations. First, frame-like concepts denote differences in thought without explaining what it means to think differently. Frame theory is not so much a theory of how people think as much as an assertion that people think differently. Because of this, frame analysis often lies on shaky ground empirically, with analysts intuiting differences without objective criteria. Person A is said to think about Y using a different frame than Person B because the analyst intuits that their thinking is different. This sounds bad, but is it really? How hard can it be to evaluate differences in thought?

Suppose that we asked two professors for their thoughts about a certain graduate student. The first says “she’s turning out lots of ideas,” and the second says “she’s had a mental breakdown.” These statements are obviously different, yet they are nonetheless instantiations of the same conceptual metaphor—THE MIND IS A MACHINE—identified by Lakoff and Johnson (1999:247). Statements which appear different, even opposite, on the surface may actually be evidence of identical thinking.

And yet, those professors’ statements are different, which leads to a second major limitation of frame-like concepts: the assumption of monolithicity. Frame-like concepts treat thinking as a unitary process which either is or is not the same across persons. More accurately, thinking is a complex cascade of neural activations, such that thinking can be both similar and different across persons, in different ways. For example, persons with different positions on a moral issue and different vocabularies of justification may nonetheless share certain “background” assumptions about the meaning of morality (Abend 2014).

Regarding frame-like concepts, Turner (2018:33–34) notes:

Cognitive science exposes the inadequacy of many of the clichéd extensions of common sense talk about mind used in social theory and elsewhere, notably notions that are useful for interpretation, such as “frame” ideas. Either these can be given an interpretation in terms of actual cognitive mechanisms or they need to be discarded and replaced.

In the next section, I outline an alternative approach for analyzing variation in thought that begins by considering the different cognitive associations responsible for producing observed responses. The primary advantage of this approach is that it allows sameness and difference to coexist in different forms and at different degrees of schematicity. In this way, differences are not established with all-or-nothing catch-all codes like “frames,” but localized to particular associations which may have their own distinct causal histories. More generally, this entails rethinking thought as the activation of cascades of associations rather than single “frames.”

Beyond Frames

Moving beyond frame analysis requires a different theory of thinking. When researchers ask participants to perform some cognitive task, they are directing participants to create a response, rather than requesting the delivery of fully-formed ideas:

Our data don’t tell us about the static organization of others’ minds—they tell us about a potentiality that others have that can be used to accomplish certain tasks in certain environments. But that’s fine, since that’s what a mind is—it’s a set of potentialities, and not a cluster of statements, and our questions are tasks that can, if properly designed, evoke these potentials… People don’t necessarily have ready-made opinions. Instead, they often have an inchoate mass of ideas; the question you ask creates a task that requires the respondent to marshal her faculties and thoughts (Martin 2017:78).

These tasks may be understood as evoking bundles of associations. Some associations belong to the general task itself (such as categorization), and others belong to the domain in question (such as “sexuality”). The analytic approach I propose consists of identifying these different associations and observing the similarity or difference for each. Here I identify three kinds of associations common to interviewing tasks—schemas associated with the general task, objects associated with the domain, and object qualities associated with the domain—and discuss each. I use Brekhus’s (1996) findings on sexual identity as a case study.

Brekhus (1996) finds that Americans mark sexual identity along six dimensions: (1) quantity of sex, (2) timing of sex, (3) level of perceived enjoyment, (4) degree of consent, (5) orientation, and (6) the social value of the agents. Brekhus is primarily interested in identifying general dimensions of sexual identity and understanding the process by which these are constructed, but what if we are interested in variation in thinking about sexual identities? To this end, we can identify the different kinds of associations activated when marking sexual identities.

Brekhus’s six dimensions of sexual identity are specific combinations of schemas, objects, and object qualities. Each of these three things may vary independently of the others, though they may be associated to a certain extent.

1. Schemas associated with the task

Marking sexual identity is a common kind of task, in that all marking entails assigning an object to a category. In terms of cognitive linguistics, this involves the activation of the CONTAINER image schema (Boot and Pecher 2011). Whether we are talking about identities, classification (Bowker and Star 2000), or boundaries (Lamont and Molnár 2002), we are referring to the same general schematic process—putting things in containers. At this level, we would expect no variation.

The task of marking sexual identity (and classification more generally) may simply involve putting someone within a container (e.g. “gay”), but it may also involve putting someone on a SCALE (Johnson 1987:122). For example, Brekhus notes that sexual identities are marked by quantity, degree of consent, level of perceived enjoyment, the timing of sex, and the social value of the agents. Each of these is an instantiation of the SCALE schema. Thus, we may observe variation in the marking of sexual identity based on differences in the schema associated with the task. This variation is not the result of possessing or lacking the SCALE and CONTAINER schemas, which are universal; rather, it results from habitual associations between a schema and the thing being marked (Casasanto 2017).

2. Objects associated with the domain

Brekhus’s (1996) dimensions of sexuality focus on three kinds of objects: the agent (e.g. their age, history, and social value), the agent’s partner (e.g. their gender relative to the agent), and the interactions between them (e.g. the duration of their relationship and degree of consent). Marking sexual identity may vary by focusing on one or more of these objects rather than the others, but for this domain, these are the primary objects. If we were talking about some other domain of identity, the associated objects might be different.

3. Object qualities associated with the domain

Brekhus’s (1996) dimensions of sexual identity are based on specific qualities of the different associated objects. For example, an agent’s identity is marked based on how much they enjoy sex, or how much sex they have had. There is more room for variation here, and we can even imagine other potential object qualities. For example, sexual identity could be marked based on the LOCATION of sex: whether it happens in the bedroom or in a public space (e.g. “exhibitionist”). LOCATION, in this case, is a quality of the people engaging in sexual acts (“a person in this kind of space”).

Additionally, we can imagine new object qualities by applying the SCALE schema in new places. For example, Brekhus discusses orientation in terms of CONTAINERS (what kind of person or thing you are attracted to), but orientation can also be marked in terms of quantity: how many kinds of persons are you attracted to (e.g. “pansexual”)? Similarly, sexual identity could be marked not only by the kind of partner in a relationship but also by the number of partners in the relationship (e.g. “polyamorous”).

Concluding Remarks

Taken together, this short exercise suggests the following:

  • Thinking tasks, like marking identities, activate multiple kinds of associations which may be analyzed as distinct processes working together.
  • Similarity and difference in thinking may occur in different ways (e.g. at the level of schemas, objects, and object qualities).
  • Similarity and difference may coexist. Responses that appear different may nonetheless be manifestations of the same basic structure (e.g. all instantiated by the same schemas and focusing on the same objects).
  • Cognitive difference may be established either by introducing new associations from other domains or recombining associations in new or different ways.
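The decomposition described in this exercise can be made concrete with a small sketch. The two respondents, their association bundles, and the comparison function below are all hypothetical, invented purely to illustrate level-by-level comparison rather than any actual coding scheme:

```python
# Hypothetical sketch: decomposing responses into bundles of associations
# (schema, objects, object qualities) and comparing them level by level.
# All respondent data here is invented for illustration.

respondent_a = {
    "schema": "CONTAINER",             # marks identity by kind (e.g. "gay")
    "objects": {"agent", "partner"},
    "qualities": {"partner_gender"},
}
respondent_b = {
    "schema": "SCALE",                 # marks identity by degree (e.g. quantity)
    "objects": {"agent", "partner"},
    "qualities": {"quantity_of_sex"},
}

def compare(r1, r2):
    """Report similarity/difference at each level of the bundle,
    instead of a single all-or-nothing 'same frame' judgment."""
    return {
        "schema": r1["schema"] == r2["schema"],
        "objects": r1["objects"] & r2["objects"],        # shared objects
        "qualities": r1["qualities"] & r2["qualities"],  # shared qualities
    }

result = compare(respondent_a, respondent_b)
# The two respondents differ at the schema level but share the
# objects they attend to: similarity and difference coexist.
```

A frame analysis would code these two respondents as simply using “different frames”; the bundle comparison instead locates the difference at the schema and quality levels while registering the shared objects of attention.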

In addition to pinpointing where there is more or less similarity in thought, analyzing thinking in this way opens new questions for analysis. For example, if certain schemas are dominant for a certain task, why, and to what extent does this vary across persons? Why are certain schemas, such as SCALE, more commonly associated with certain objects than with others? How does a person’s individual experience influence which bundles of associations are activated when ascribing sexual identities? The takeaway is that thinking happens not via filtering frames but through the activation of multiple associations working together, and that by recognizing this fact and incorporating it into the analysis, we get a better understanding of culture and thinking and are better prepared to think about how thinking varies across persons, times, and situations.

References

Abend, Gabriel. 2014. The Moral Background: An Inquiry into the History of Business Ethics. Princeton University Press.

Boot, Inge and Diane Pecher. 2011. “Representation of Categories: Metaphorical Use of the Container Schema.” Experimental Psychology 58(2):162.

Bowker, Geoffrey C. and Susan Leigh Star. 2000. Sorting Things Out: Classification and Its Consequences. MIT Press.

Brekhus, Wayne. 1996. “Social Marking and the Mental Coloring of Identity: Sexual Identity Construction and Maintenance in the United States.” Sociological Forum 11(3):497–522.

Casasanto, Daniel. 2017. “The Hierarchical Structure of Mental Metaphors.” Metaphor: Embodied Cognition and Discourse 46–61.

Fligstein, Neil, Jonah Stuart Brundage, and Michael Schultz. 2017. “Seeing Like the Fed: Culture, Cognition, and Framing in the Failure to Anticipate the Financial Crisis of 2008.” American Sociological Review 82(5):879–909.

Goffman, Erving. 1974. Frame Analysis: An Essay on the Organization of Experience. Harvard University Press.

Johnson, Mark. 1987. The Body in the Mind: The Bodily Basis of Meaning, Imagination, and Reason. University of Chicago Press.

Lakoff, George and Mark Johnson. 1999. Philosophy In The Flesh. Basic Books.

Lamont, Michèle and Virág Molnár. 2002. “The Study of Boundaries in the Social Sciences.” Annual Review of Sociology 28(1):167–95.

Martin, John Levi. 2017. Thinking Through Methods: A Social Science Primer. University of Chicago Press.

Small, Mario Luis. 2004. Villa Victoria: The Transformation of Social Capital in a Boston Barrio. University of Chicago Press.

Young, Alford A., Jr. 2004. The Minds of Marginalized Black Men: Making Sense of Mobility, Opportunity, and Future Life Chances. Princeton, NJ: Princeton University Press.

Zerubavel, Eviatar. 1997. Social Mindscapes: An Invitation to Cognitive Sociology. Cambridge, MA: Harvard University Press.

Habitus and Learning to Learn: Part II

Beyond the Content-Storage Metaphor

The underlying neural structures constitutive of habitus are procedural (Kolers & Roediger, 1984), based on motor-schemas constructed from the experience of interacting with persons, objects, and material culture in the socio-physical world (Gallese & Lakoff, 2005; Malafouris, 2013). Habitus affords the capacity to learn because we are embodied beings endowed with the capacities and liabilities afforded by our sensory receptors and motor effectors. In this respect, the neurocognitive recasting of habitus is thoroughly consistent with the “embodied and embedded” turn in contemporary cognitive science.

Traditional accounts of learning rely primarily on the content-storage metaphor (Roediger, 1980). Under this classical conceptualization, experience modifies our cognitive makeup mainly via the recording of content-bearing representations into some sort of mental system dedicated to their inscription and “storage,” most plausibly what cognitive psychologists refer to as “long-term memory.” Because the habitus is seen as the locus of social and experiential learning, and as a sort of repository of past experience, it is tempting to conceptualize it using this content-storage metaphor.

In the current formulation, the metaphor of long-term memory storage emerges as a highly misleading one, and one that would severely limit the conceptual potential of the notion of habitus. In its place, I propose that the habitus contains a “record” of past experiences but does not store these records as a set of individualized content-bearing “facts” or “propositions” to be accessed as (declarative) “knowledge” or as (episodic) memories that can be recalled in the form of a recreation of previous experiences (Michaelian, 2016). Explicit forms of memory are reconstructive rather than restorative, and rely on the procedural traces encoded in habitus.

The same goes for the procedures generative of goals and plans of action, such as the conscious positing of a future project (Williams, Huang, & Bargh, 2009). The (consciously posited) goal-oriented model of action, rather than being the fundamental framework that constrains the very capacity to make meaningful statements about action, as Talcott Parsons (1937) once proposed, is reinterpreted under a habitus-based conception of action as a cognitively unnatural activity (Bourdieu, 2000). Thus, the deliberative positing of a possible future, rather than being taken as the point of departure or as the privileged site where a special sort of “agency” is located, must be re-conceptualized as a puzzling, context-dependent phenomenon in need of special explanation.

Offline Cognition as Habitual Reconstruction

Recent work in the psychology of memory and “mental time travel” supports the idea that the seeming recollection of past events, the imagining of counterfactual and hypothetical scenarios, and the simulation of possible future events all share an underlying neural basis and even share some recognizable features at the level of phenomenology. Rather than being faithful records of past experiences, autobiographical memories are as reconstructive and hypothetical as the (embodied) simulation and situated conceptualization of future experiences (Michaelian, 2011). What all of these socio-cognitive states do seem to share is a suspension of our (default) embodied engagement with the world (Glenberg, 1997). As such, they represent exceptional states removed at least one step away from “action,” and not the core prototypical cases upon which to build a coherent model of action. Habit-based action made possible by habitus is the default; these more contemplative and intellectualist modes are the exception.

Nevertheless, it would be a mistake to posit too sharp a divide between habitus and the scholastic contemplation of possible futures, counterfactual states, or representational pasts. All of these more intellectualist and content-ful states are rooted in habitus, if only indirectly. The habitus provides the underlying set of capacities making possible the (re)creation of mental “content” on the spot, via processes of situated conceptualization, embodied simulation, and affective looping (Barsalou, 2005; Damasio, 1999). However, while the online activation of facts and memories (for instance, during an interview setting) is made possible via habitus, these objectified products are not to be taken as the constituents of habitus.

Habitus and Learning to Learn

In this respect, the habitus stores nothing that can be legitimately referred to as “content.” Instead, the primary form of learning that organizes the neural structures constitutive of habitus is the one that sets the stage for, and actually makes possible, the traditional forms of episodic and declarative learning, and the context-sensitive recreation of those contents, which come later in ontogenetic development. When the habitus forms and acquires structure in childhood, what the person is doing is, in essence, “learning to learn.”

As noted in the previous post, the notion of learning to learn has a somewhat obscure pedigree in social theory, but it has figured prominently in the accounts given by Gregory Bateson, who called it “deutero-learning,” and in Hayek’s proposal of a groundbreaking theory of perception in The Sensory Order. In both of these accounts, learning is not taken for granted as a pre-existing feature of the human agent; rather, the very ability to be modified by the world is conceived as something that must be produced by our immersion in and coupling to the world. The world must prepare the agent to learn before learning can take place.

The standard model of learning takes what Bourdieu referred to as the “scholastic” situation as its primary exemplar. Under this characterization, to learn is to commit a content-bearing proposition (e.g. a belief or statement) to memory. The problem with this conception, as Bourdieu noted, is that it takes for granted the tremendous amount of previous development, immersion, and “connection-weight setting” that happened in the previous (home) environment to prepare the person for these forms of scholastic learning. The proposed habitus-based model of learning takes the decidedly non-scholastic case of skill acquisition as its primary exemplar of learning (Dreyfus, 1996; Polanyi, 1958).

Procedural learning, in this sense, results in the picking up of the structural features that characterize the most repetitive (and thus experientially consistent) patterns of the early environment. This is learning about the formal structure of the early world, not a passive recording of facts. The structure of habitus primarily mirrors the systematic, repetitive structure of the world in terms of the overall constitution (e.g., empirical and relational co-occurrences) and temporal rhythms of the environment, especially that characteristic of the earliest experiences (e.g., the environment that predates “learning” as traditionally conceived).

Subsequent experiences will then be actively fitted into this pre-experiential (but nonetheless produced by experience) neural structure. In connectionist terms, the procedural learning giving rise to habitus is essentially equivalent to “setting the weights” that will remain a durable part of our neuro-cognitive architecture, relatively resistant to change. These weights partially fix our overall style of perception, appreciation, and classification of all subsequent experience. As the philosopher Paul Churchland puts it,

…the brain represents the general or lasting features of the world with a lasting configuration of its myriad synaptic connection strengths. That configuration of carefully tuned connections dictates how the brain will react to the world…To acquire those capacities for recognition and response is to learn about the general causal structure of the world, or at least, of that small part of it that is relevant to one’s own practical concerns. That knowledge is embodied in the peculiar configuration of one’s…synaptic connections. During learning and development in childhood, these connection strengths, or “weights” as they are often called, are [adjusted] to progressively more useful values. These adjustments…are steered most dramatically by the unique experience that each child encounters (1996, p. 5)

Accordingly, and in contrast to the view construing habitus as a mnemonic repository of experiential contents, the connectionist recasting of habitus as the set of synaptic weights coming to structure further experiential activation reveals that the habitus stores coarse-grained structural patterns keyed to “reflect” previously encountered environmental regularities, not fine-grained experiential content.
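A minimal connectionist sketch can make “setting the weights” concrete. The toy perceptron below is my own invented illustration (not a model proposed in this literature): weights nudged into place by a repetitive early environment then come to structure how novel later inputs are classified, without any “content” being stored:

```python
# Toy illustration of weight-setting: a one-unit perceptron whose weights,
# fixed by repeated early input, then structure how novel later inputs are
# classified. Invented example, not a model from the habitus literature.

weights = [0.0, 0.0]

def classify(x):
    """Respond to an input using the current weight configuration."""
    return 1 if weights[0] * x[0] + weights[1] * x[1] > 0 else 0

def learn(x, target, rate=0.1):
    """Perceptron rule: nudge weights toward reproducing the
    regularities of the environment."""
    error = target - classify(x)
    weights[0] += rate * error * x[0]
    weights[1] += rate * error * x[1]

# 'Early environment': a repetitive regularity (only the first
# feature matters for the response).
early_experience = [([1, 0], 1), ([0, 1], 0)] * 50
for x, target in early_experience:
    learn(x, target)

# The durable weights now shape responses to inputs never seen before.
print(classify([0.9, 0.2]))  # prints 1
```

The point of the sketch is that nothing resembling a stored “fact” survives training; what remains is a coarse-grained weight configuration that “reflects” the regularity of the early environment and filters everything encountered afterward.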

The experiential content that the person is exposed to further down the developmental line will be made sense of (perceived, classified, and made part of practical action schemes) using the synaptic weights acquired in early experience. Thus, as a precondition for subsequent experience and (skillful) practical action in the world, pre-experiential learning and adjustment have to happen first. The notion of habitus is useful precisely because it captures an ontogenetic reality: the fact that this learning to learn is sticky and produces durable cognitive structures that modulate the way in which persons are allowed to be further modified by experience.

As the cognitive scientist Margaret Wilson puts it:

Research on skill-learning and expertise has primarily been conducted in the context of understanding how skills are acquired. What has been neglected is the fact that when the experiment is done, or when the real-life skill has been mastered, it leaves behind a permanently changed cognitive system. This may not matter much in the case of learning a single video game or a strategy for solving Sudoku; but the cumulative effect of a lifetime of numerous expertises may result in a dramatically different cognitive landscape across individuals.

(Wilson 2010: 182)

If the active construction, initializing, and relative equilibration (“setting the weights”) of pre-experiential neural structures necessary for making sense of further experience were not an ontogenetic reality and a presupposition for traditional forms of learning, the notion of habitus would be a superfluous, gratuitous adjunct in social theory. But the cognitive reality is that “the rate of synaptic change does seem to go down steadily with increasing age” (Churchland 1996: 6). This statement is not incompatible with recent findings of neural “plasticity” lasting throughout adulthood, but it does force the analyst to distinguish different types of plasticity in ontogenetic time and the new capacities they are attuned to and result in. This means that a structured habitus is the ineluctable result of any type of (normal) development. Thus, exposure to repeated regularities will create a well-honed habitus reflective of the structure of the regularities encountered early on. It is in this sense that the habitus cannot but be a product of early experiential (socio-physical) realities.

References

Barsalou, L. W. (2005). Situated conceptualization. Handbook of Categorization in Cognitive Science, 619, 650.

Bourdieu, P. (2000). Pascalian Meditations. Stanford University Press.

Churchland, P. M. (1996). The Engine of Reason, the Seat of the Soul: A Philosophical Journey Into the Brain. MIT Press.

Damasio, A. R. (1999). The Feeling of what Happens: Body and Emotion in the Making of Consciousness. Harcourt Brace.

Dreyfus, H. L. (1996). The current relevance of Merleau-Ponty’s phenomenology of embodiment. The Electronic Journal of Analytic Philosophy, 4(4), 1–16.

Gallese, V., & Lakoff, G. (2005). The Brain’s concepts: the role of the Sensory-motor system in conceptual knowledge. Cognitive Neuropsychology, 22(3), 455–479.

Glenberg, A. M. (1997). What memory is for: Creating meaning in the service of action. The Behavioral and Brain Sciences, 20(01), 41–50.

Kolers, P. A., & Roediger, H. L., III. (1984). Procedures of mind. Journal of Verbal Learning and Verbal Behavior, 23(4), 425–449.

Malafouris, L. (2013). How Things Shape the Mind: A Theory of Material Engagement. MIT Press.

Michaelian, K. (2011). Generative memory. Philosophical Psychology, 24(3), 323–342.

Michaelian, K. (2016). Mental Time Travel: Episodic Memory and Our Knowledge of the Personal Past. MIT Press.

Parsons, T. (1937). The Structure of Social Action. New York: Free Press.

Polanyi, M. (1958). Personal Knowledge: Towards a Post-Critical Philosophy. Chicago, IL: University of Chicago Press.

Roediger, H. L., III. (1980). Memory metaphors in cognitive psychology. Memory & Cognition, 8(3), 231–246.

Williams, L. E., Huang, J. Y., & Bargh, J. A. (2009). The Scaffolded Mind: Higher mental processes are grounded in early experience of the physical world. European Journal of Social Psychology, 39(7), 1257–1267.

Wilson, M. (2010). The re-tooled mind: How culture re-engineers cognition. Social Cognitive and Affective Neuroscience, 5(2-3), 180–187.

Habitus and Learning to Learn: Part I

In this and subsequent posts, I will attempt to revise, reconceptualize and update the concept of habitus using the theoretical and empirical resources of contemporary cognitive neuroscience and cognitive social science.

I see this step as necessary if this Bourdieusian notion is to have a future in social theory. Conversely, if no such recasting is coherent or successful, then it might be time to retire the idea of habitus.

My reconstruction of habitus in what follows is necessarily selective. I keep historical and conceptual exegesis to a minimum (see e.g. Lizardo 2004 for that), and I will not engage in an attempt to convince you that the concept of habitus is a useful one in social science research. My undertaking this effort presupposes that the notion of habitus is useful and that its “updating” in terms of contemporary advances in the cognitive sciences is a worthwhile exercise.

There is a theoretical payoff in this endeavor. By connecting the notion of habitus as a conceptual tool for social analysis with emerging developments in the cognitive and neurosciences, a number of standing problems in social scientific conceptualizations of cognition, perception, categorization, and action are shown to be either pseudo-problems or to be resolvable in more satisfactory ways than in proposals made from non-cognitive standpoints. In what follows, I address a series of theoretical issues that I believe are properly recast using a version of the habitus concept informed by cognitive neuroscience, beginning with the notion of “learning” and ending with a reconsideration of the notion of categories and categorization.

The habitus as a “learning to learn” cognitive structure

The habitus is a set of durable cognitive structures that develop in order to allow the person to exploit the most general features of experience most effectively. These structures are constitutive of our capacity to develop an intuitive, routine grasp of events, entities, and their inter-relations and yet are also the product of experience. In neuroscientific terms, this presupposes “a durable transformation of the body through the reinforcement or weakening of synaptic connections” (Bourdieu 2000, 133).

As the economist and social theorist Friedrich Hayek once put it, “the apparatus by means of which we learn about the external world is itself the product of a kind of experience” (Hayek 1952, 165). The cognitive structures constitutive of habitus are themselves the product of a special kind of learning, the process of “learning to learn” (something that the anthropologist Gregory Bateson (1972) once referred to as “deutero-learning”). From this point of view, “the process of experience does not begin with sensations or perceptions, but necessarily precedes them: it operates on physiological events and arranges them into a structure or order which becomes the basis of their ‘mental’ significance” (Hayek 1952: 166).

The experience-generated cognitive structures constitutive of habitus are designed to capture the most significant axes of variation–in essence the abstract causal and temporal signatures–of the early environment (Foster 2018). They make possible subsequent practical exploitation and even the fairly unnatural contemplative “recording” of later experiences in the form of episodic and semantic learning. The habitus itself is not a repository of “contents” in the traditional sense (e.g., a “storehouse” of individuated beliefs, attitudes, and the like) but it is generative of our ability to actively retrieve the experiential, mnemonic and imaginative qualities that form the core of our everyday experience.

Beyond Plasticity

From the point of view of a neuro-cognitive construal of habitus as a learning-to-learn structure, extant notions of learning (or socialization) in sociology come off as limited. Most consist of general accounts regarding the “plasticity” of the organism (Berger and Luckmann 1966), and are usually anxious to separate whatever is innate or biologically specified from that which comes from experience. At the extreme, we find accounts suggesting that nothing specific comes from biology and that all specific content is, therefore, “learned.”

Most social theorists, after setting down this rather crude division, are satisfied to have secured a place for the cultural and social sciences by delimiting the scope of that which can be directly given by “biology.” Most analysts are then content to make broad statements about how humans are unique because so much of their cultural equipment has to be acquired from the world via experience, or how the human animal is essentially incomplete, or how biological evolution and the biological “inner code” require reliance on externalized, epigenetic cultural codes for their full expression and development (Geertz 1973).

The actual experiential and cognitive mechanisms that make learning possible in the first place, and the constraints these mechanisms place on any socio-cultural theory of learning, are treated as exogenous. Learning from experience just “happens,” and the role of social science is simply to keep track of, document, and acknowledge the external origins of the contents so learned.

What is missing from these standard accounts? First, the observation that persons are capable of learning, or that the brain is plastic, is very important but preliminary. Only the most narrowly misinformed nativist argument would fall when confronted with this fact. Second, the issue is not whether persons learn, but how to account for this ability without begging the question. In this respect, standard definitions of culture as that which is learned, and standard definitions of persons as essentially “cultural animals,” are well taken, but ultimately fail to make a substantively consequential statement. These views are limited because they fail to distinguish between different forms of learning, some of which are presuppositions for the accomplishment of others.

A neurocognitive conception of habitus can serve to re-specify the notion of learning in cultural analysis in a useful way. From the point of view of a neuroscientifically informed social theory (Turner 2007), it is not enough to acknowledge the commonplace observation that persons are modified by experience, or that the current set of skills and abilities a person commands is indeed a product of modification by experience. Instead, the key is to specify what exactly this modification consists of, and how it differs, for instance, from the experiential sort of “modification” we are constantly exposed to in everyday life by virtue of being creatures capable of consciousness, or the modification that happens when we learn a new propositional fact, or when we form a new episodic memory as a result of being involved in some biographically salient event.

The neurocognitive recasting of habitus as a learning-to-learn structure improves the standard account of learning by suggesting that all learning requires the early, systematic, and relatively durable modification of the person as a categorizing and perceiving agent. That is, before learning of the “usual” kind can begin (e.g., learning propositional facts to be “stored” in semantic memory), a different sort of “learning” has to occur: the person must form the pre-experiential structures that will have the function of bringing forth or disclosing a comprehensible world (in the phenomenological sense). This “deutero-learning” needs to be distinguished from the sort of recurrent, experience-linked modification that results in the acquisition of episodic knowledge (a factual account of our personal biography) or propositional/declarative knowledge (“knowledge that”).

In a follow-up post, I’ll develop the implications of this distinction for contemporary understandings of enculturation and socialization in cultural analysis.

References

Bateson, Gregory. 1972. Steps to an Ecology of Mind: Collected Essays in Anthropology, Psychiatry, Evolution, and Epistemology. University of Chicago Press.

Berger, Peter L., and Thomas Luckmann. 1966. The Social Construction of Reality: A Treatise in the Sociology of Knowledge. Anchor Books. New York: Doubleday.

Bourdieu, Pierre. 2000. Pascalian Meditations. Stanford University Press.

Foster, Jacob G. 2018. “Culture and Computation: Steps to a Probably Approximately Correct Theory of Culture.” Poetics 68 (June): 144–54.

Geertz, Clifford. 1973. The Interpretation of Cultures: Selected Essays. New York: Basic Books.

Hayek, F. A. 1952. The Sensory Order: An Inquiry Into the Foundations of Theoretical Psychology. University of Chicago Press.

Lizardo, Omar. 2004. “The Cognitive Origins of Bourdieu’s Habitus.” Journal for the Theory of Social Behaviour 34 (4): 375–401.

Turner, Stephen P. 2007. “Social Theory as a Cognitive Neuroscience.” European Journal of Social Theory 10 (3): 357–74.

When is Consciousness Learned?


Continuing with the theme of innateness and durability from my last post, consider the question: are humans born with consciousness? In a ground-breaking (and highly contested) work, the psychologist Julian Jaynes argued that if only humans have consciousness, it must have emerged at some point in human history. In other words, consciousness is a socially and culturally acquired skill (Williams 2011).

To summarize his argument: Jaynes claims that until as recently as the Bronze Age (the third millennium BCE), humans were not, strictly speaking, conscious. Rather, humans experienced life in a proto-conscious state he refers to as “bicameralism.” Roughly around the “Axial Age” (cf. Mullins et al. 2018), bicameral humans declined and conscious, “unicameral” humans emerged.

One piece of evidence he deploys in support of this thesis is that the content of the Homeric poem the Iliad is substantially different from that of the later Odyssey. The former, he argues, is devoid of references to introspection, while the latter is full of them. Jaynes argues that a similar pattern emerges between earlier and later books of the Christian Bible. In a recent attempt to test this specific hypothesis quantitatively (see also Raskovsky et al. 2010), Diuk et al. (2012) use Latent Semantic Analysis to calculate the semantic distances between the reference word “introspection” and all other words in a text. Remarkably, their findings are consistent with Jaynes’ argument (see also: http://www.julianjaynes.org/evidence_summary.php).

From Diuk et al. (2012): “Introspection in the cultural record of the Judeo-Christian tradition. The New Testament as a single document shows a significant increase over the Old Testament, while the writings of St. Augustine of Hippo are even more introspective. Inset: regardless of the actual dating, both the Old and New Testaments show a marked structure along the canonical organization of the books, and a significant positive increase in introspection.”
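To make the LSA approach concrete, here is a minimal sketch of measuring semantic distance with a truncated SVD. This is not the authors' actual pipeline: the toy corpus, the probe word, and the choice of two latent dimensions are all illustrative assumptions (Diuk et al. work with full historical texts and the probe word "introspection").

```python
# Hedged sketch of LSA-style semantic distance: build a term-document
# count matrix, reduce it with a truncated SVD, and compare word vectors
# by cosine similarity. Corpus and parameters are toy illustrations.
import numpy as np

docs = [
    "he pondered and reflected on his own thoughts",
    "the army marched and the spears clashed in battle",
    "she wondered silently what her heart truly desired",
    "the ships sailed and the men rowed across the sea",
]
probe = "reflected"  # Diuk et al. use "introspection" on full texts

# Term-document count matrix
vocab = sorted({w for d in docs for w in d.split()})
index = {w: i for i, w in enumerate(vocab)}
X = np.zeros((len(vocab), len(docs)))
for j, d in enumerate(docs):
    for w in d.split():
        X[index[w], j] += 1

# Truncated SVD: keep k latent "semantic" dimensions
U, S, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
word_vecs = U[:, :k] * S[:k]

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

v = word_vecs[index[probe]]
sims = {w: cosine(v, word_vecs[i]) for w, i in index.items() if w != probe}
# Words that share the probe's contexts (e.g., "pondered") end up with
# nearly identical latent vectors, hence similarity close to 1.
```

Applied diachronically, as in Diuk et al., one would compute such similarities between the probe word and the words of successively later texts and track how the average distance changes over time.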

Is Consciousness Learned in Childhood?

If consciousness, as Jaynes argued, is a product of social and cultural development, does this also mean that we each must “learn” to be conscious? Some contemporary research suggests something like this might be the case.

To begin, we need a simple definition: consciousness is our “awareness of our awareness” (sometimes called metacognition). A problem with considering the extent of our conscious awareness is the normative baggage associated with “not being conscious.” For the folk, it is somewhat insulting to say people are doing something “mindlessly,” and we tend to value “self-reflection.” Certainly this is a generalization, but let’s bracket the notion that non-conscious experience is somehow less good than being conscious. The bulk of what the brain does is below the level of our awareness. For starters, when we are asleep, under general anesthesia, or even in a coma, the brain continues to be quite active. Moving to our waking lives, the kinds of skills and habits that Giddens (1979) confusingly calls “practical consciousness” are deployed at a speed that outstrips our ability to be aware they are happening until after the fact. The kind of skillful execution associated with athletes and artists, for instance, is often associated with Csikszentmihalyi’s “flow” precisely because there is a “letting go,” a letting the situation take over. All this is to say we are conscious far less than we probably think. Indeed, asking us when we are not conscious (Jaynes 1976:23):

…is like asking a flashlight in a dark room to search around for something that does not have any light shining upon it. The flashlight, since there is light in whatever direction it turns, would have to conclude that there is light everywhere. And so consciousness can seem to pervade all mentality when actually it does not.

A second major confusion is the assumption that consciousness is how humans learn ideas or form concepts. As we discuss elsewhere (Lizardo et al. 2016), memory systems are multiple, and while we learn via conscious processes, the bulk of what we learn is via non-conscious processes in “nondeclarative” memory systems (Lizardo 2017). This is especially the case for the most basic concepts we learn from infancy onward. In fact, Durkheim’s argument that it is through ritual—embodied experience—that so-called “primitive” groups learned the “basic categories of the understanding” more or less pre-figures this point (Rawls 2001).

Rather than the experience-near cognition associated with everyday life, consciousness involves introspection and the “time traveling” associated both with reconstructing our own biographies from memory and with imagining possible (and impossible) futures. A recent school of thought in cognitive science—referred to as “enactivism”—takes a rather radical approach in arguing that the vast majority of human cognition is not, strictly speaking, contentful (Hutto and Myin 2012, 2017). Indeed, a lot of “remembering” does “not require representing any specific past happening or happenings… remembering is a matter of reenactment that does not involve representation” (Hutto and Myin 2017:205). But what about the autobiographical remembering involved in introspection and self-reflection, which we might consider the hallmark of consciousness?

To answer this — within the broader enactivist project — they draw on a group of scholars who argue that autobiographical memory is “a product of innumerable social experiences in cultural space that provide for the developmental differentiation of the sense of a unique self from that of undifferentiated personal experience” (Nelson and Fivush 2004:507). These scholars find that “a specific kind of memory emerges at the end of pre-school period” (Nelson 2009:185). Such a theory offers a plausible explanation for “infantile amnesia” — the inability to recall events prior to about age three or four — an explanation much less ridiculous than Freud’s contention that these memories were repressed so as to “screen from each one the beginnings of one’s own sex life.”

These theorists go on to argue that “a new form of social skill” is associated with this “new type of memory” (Hoerl 2007:630). This skill is “narrating” one’s experience. Parents’ reminiscing with children plays a central role in the acquisition of this skill (Nelson and Fivush 2004:500):

…parental narratives make an important contribution to the young child’s concept of the personal past. Talking about experienced events with parents who incorporate the child’s fragments into narratives of the past not only provides a way of organizing memory for future recall but also provides the scaffold for understanding the order and specific locations of personal time, the essential basis for autobiographical memory.

Returning to Jaynes, we find a remarkably analogous description of the emergence of consciousness as the “development on the basis of linguistic metaphors of an operation of space in which an ‘I’ could narratize out alternative actions to their consequences” (Jaynes 1976:236). That is, we could assert, consciousness is this social skill, emerging from the (embodied and social) practice of reminiscing with parents and classmates (or the like) when we are around three years old.

REFERENCES

Diuk, Carlos G., D. Fernandez Slezak, I. Raskovsky, M. Sigman, and G. A. Cecchi. 2012. “A Quantitative Philology of Introspection.” Frontiers in Integrative Neuroscience 6:80.

Giddens, Anthony. 1979. Central Problems in Social Theory. Berkeley: University of California Press.

Hoerl, Christoph. 2007. “Episodic Memory, Autobiographical Memory, Narrative: On Three Key Notions in Current Approaches to Memory Development.” Philosophical Psychology 20(5):621–40.

Hutto, Daniel D. and Erik Myin. 2012. Radicalizing Enactivism: Basic Minds without Content. MIT Press.

Hutto, Daniel D. and Erik Myin. 2017. Evolving Enactivism: Basic Minds Meet Content. MIT Press.

Jaynes, Julian. 1976. The Origin of Consciousness in the Breakdown of the Bicameral Mind. Boston: Houghton Mifflin.

Lizardo, Omar. 2017. “Improving Cultural Analysis: Considering Personal Culture in Its Declarative and Nondeclarative Modes.” American Sociological Review 82(1):88–115.

Lizardo, Omar, Robert Mowry, Brandon Sepulvado, Dustin S. Stoltz, Marshall A. Taylor, Justin Van Ness, and Michael Wood. 2016. “What Are Dual Process Models? Implications for Cultural Analysis in Sociology.” Sociological Theory 34(4):287–310.

Mullins, Daniel Austin, Daniel Hoyer, Christina Collins, Thomas Currie, Kevin Feeney, Pieter François, Patrick E. Savage, Harvey Whitehouse, and Peter Turchin. 2018. “A Systematic Assessment of ‘Axial Age’ Proposals Using Global Comparative Historical Evidence.” American Sociological Review 83(3):596–626.

Nelson, Katherine. 2009. Young Minds in Social Worlds: Experience, Meaning, and Memory. Harvard University Press.

Nelson, Katherine and Robyn Fivush. 2004. “The Emergence of Autobiographical Memory: A Social Cultural Developmental Theory.” Psychological Review 111(2):486–511.

Raskovsky, I., D. Fernández Slezak, C. G. Diuk, and G. A. Cecchi. 2010. “The Emergence of the Modern Concept of Introspection: A Quantitative Linguistic Analysis.” Pp. 68–75 in Proceedings of the NAACL HLT 2010 Young Investigators Workshop on Computational Approaches to Languages of the Americas, YIWCALA ’10. Stroudsburg, PA, USA: Association for Computational Linguistics.

Rawls, Anne Warfield. 2001. “Durkheim’s Treatment of Practice: Concrete Practice vs Representations as the Foundation of Reason.” Journal of Classical Sociology 1(1):33–68.

Williams, Gary. 2011. “What Is It like to Be Nonconscious? A Defense of Julian Jaynes.” Phenomenology and the Cognitive Sciences 10(2):217–39.

Cultural Cognition in Time, from Memory to Imagination

Over the past few years, I have been thinking about the concept of imagination. It emerged out of my efforts to understand the generational change in public opinion about same-sex marriage in the U.S., when it became clear to me that young and old simply imagined homosexuality and same-sex marriage in different ways [see also three essential readings on the imagination: (Appadurai 1996; Orgad 2012; Strauss 2006)]. It wasn’t that the two cohorts disagreed about the issue; it was that they couldn’t even understand each other. I realized that the imagination represents an implicit domain of political cognition that by and large goes unrecognized and unacknowledged by people when they talk to each other, while nonetheless structuring the debate in a way that is similar to framing.[1] I published my initial argument here (No paywall!), and have elaborated on this theory of imagination in my recent book (Definitely paywall!).

One thing that sets my view of the imagination apart from the ways that some other social scientists invoke the concept is that I see an important connection with the concept of collective memory. In many usages (e.g. Castoriadis 1987; Taylor 2002), the idea of the social imagination or the social imaginary is so broad that it most closely approximates the concept of culture—that incomprehensible whole that signifies everything and nothing all at the same time (Strauss makes this critique effectively). By contrast, I think the argument that Olick (1999) makes for collective memory fits well with Strauss’ critique of the social imaginary: we need a dual, individualist-collectivist theory of the imagination, one that anchors the cultural and cognitive versions of the concept in each other. Simply put, minds imagine things just like minds remember things, but the resources and the effects of imagination and memory are cultural and social.

Certainly, the cognitive process of remembering is distinguished in part by its retrospective temporal horizon, and in the empirical work of many sociologists (Baiocchi et al. 2014; Perrin 2006), the imagination’s temporal horizon is future-oriented: actions that we could take to solve a problem, or visions of a better society. Thus, it makes some sense (from a phenomenological perspective, at least) that we can think of collective memory and the social imagination as cultural-cognitive processes that occupy different spots on a temporal continuum.

However, I’d like to make the case that the social imagination is not just future-oriented, but present-oriented. I will also make the case that collective memory may be fruitfully theorized as the past-oriented variant of the social imagination. The ultimate goal of this essay is to persuade sociologists that the imagination is something of a master cultural-cognitive process, with variants that correspond to different phenomenological time horizons, and that is influenced by positive and negative socio-emotional forces.

In purely psychological terms, the imagination is the mind’s capacity to construct a mental image of a non-present phenomenon. Whether past-, present-, or future-oriented, and whether the imagined entity is real (horse) or unreal (unicorn), the cognitive process is essentially the same. Sociologically speaking, however, different imaginations have different effects: individuals’ imaginations of stereotypical and counter-stereotypical people will either reinforce or attenuate prejudicial attitudes and implicit biases (Blair, Ma and Lenton 2001; Slusher and Anderson 1987). Thus, there are political consequences to people’s imaginations: cultivating one’s capacity to produce (and act on) counter-stereotypic mental images may be an effective strategy for combatting implicit racism, sexism, and other forms of enduring prejudice.

As a sociological process, the social imagination is the process that shapes the patterns of associations that define cultural schemas, or the cultural content of a schema. In other words, the social imagination is the cultural-cognitive process that governs the creation, maintenance, and deconstruction of stereotypes, prototypes, categories, and concepts of all kinds. Certainly, other (material, structural, political, whatever) factors are involved in this process, too—like oppression, socialization, etc.—but the social imagination is the culture-cognition nexus. As Orgad (2012) shows, the mass media are one of the most critical institutions involved in contests over the social imagination. In this view, media consumption improves, rather than reduces, our capacity to imagine because it provides us with many of the fundamental resources for producing mental images. If we combine this understanding of the social imagination with the psychological research described above, we can explain why stereotypical and counter-stereotypical media representations are so important: media representations can create, maintain, change, or destroy the cultural associations that define different groups of people in the public mind.

As far as I’ve read, Glaeser’s (2011) Political Epistemics is one of the master treatises on the social imagination, though he doesn’t put it in those terms. Glaeser uses “understanding” to refer to this realm of cultural cognition, and he uses the term to refer to both the process and its outcome. On page 10, Glaeser begins his definition of understanding by characterizing it as a process: “Understanding is a process of orientation…”; however, one page earlier, Glaeser writes of it as an achievement, or outcome: “understanding is achieved in a process of orientation…” My own view is that the imagination is this process of orientation that produces understandings. This follows Kant (1929), who, in the Critique of Pure Reason, argues that the “transcendental power of imagination” is the fundamental synthetic capacity of mind that combines perception and the cultural categories of understanding, thus structuring all human knowledge and experience.

If we keep this Kantian philosophy of the imagination at the center of our thinking, we might also conceive of memory as another species of imagination: one in which the original sensory perception took place in some bygone time and which is continually brought to life in mental images in the present by synthesizing those past perceptions with current mental structures (hence, the well-known power of our memories to change over time and for our present biography, self-identity, and social context to shape our memories into something other than what actually happened).

In sum, the imagination can be future-oriented (our ability to imagine possible future actions or solutions to social problems), present-oriented (our schemas, stereotypes, and understandings), or past-oriented (our memories).

Beyond distinguishing these three different forms of imagination, classified by their temporal horizon, we should differentiate between real and fantastical variants of each. Since a simple distinction between real/correct and unreal/incorrect versions of a mental image is philosophically untenable (even impossible, in the case of future-oriented mental images—things that have not yet occurred), I would argue that any given mental image should be conceived as existing on a continuum whose polar ends represent ideal-typical, emotion-driven fantasies that “pull” our imagination in either direction. In this rendering, the ideal-typical end points are the only points on the continuum that could be labeled purely unreal; actually existing mental images fall somewhere along the continuum, their degree of “realness” variable and relationally determined.

The point of establishing this continuum is not to determine whether one imagined mental image is more correct than another in some absolute sense, but rather to begin to discern the socio-emotional forces that are inevitably involved in the process of imagination and the sociological consequences of producing various kinds of mental images. For example, any account of the prevalence of handgun ownership and attitudes about gun rights in the U.S. must certainly take into account the fear-driven imagination that a criminal waiting to rob and murder you is hiding behind every corn stalk in the state of Iowa. Whether past-, present-, or future-oriented, our mental images of reality are constructed within a socio-emotional landscape; as social scientists, it behooves us to think seriously about those landscapes, how they affect our imaginations, and how social action ultimately makes sense to the actors who imagine the world as they do.

Thus, we have three different continuums for the social imagination—one for each temporal horizon—in which mental images are constructed. The mental image’s location on the continuum is influenced by the extent to which positive and negative emotional circumstances influence the process of imagination.

Future-Oriented Imagination: The Domain of Possibility


Let’s take the domain of future-oriented imagination first: the domain of possibility. The social imagination of the possible is inevitably informed by the emotions of fear and hope and situated in relation to social conditions of dystopia and utopia. Karen Cerulo (2008) has already written on the cognitive and cultural dynamics of this domain. Another notable example of the sociology of possibility is Erik Olin Wright’s “Real Utopias” research program (e.g., Wright 2013), which promises a sociology of liberation if we take it seriously.

Present-Oriented Imagination: The Domain of Understanding


The social imagination of the present happens in the domain of understanding. As mentioned above, Glaeser’s Political Epistemics is the essential read on how processes of validation reinforce and challenge existing understandings. Glaeser labels these types of validation recognition, resonance, and corroboration. In addition to being cognitive, cultural, and social in nature, they are also emotional. The present-oriented process of imagination is anchored by two fantastical emotional tendencies: the extreme cynical denial of reality that we might call delusion, and the extreme Pollyannaish denial of reality that we might call naiveté. All understandings and misunderstandings can be conceived in terms of their socio-emotional tenor, as well as in their cognitive, cultural, and social terms.

Past-Oriented Imagination: The Domain of Memory


Finally, turning to the domain of memory, our imaginary reconstructions of past events are influenced by the socio-emotional poles of denial of the negative and romanticization of the positive. The unreal social recollections driven by these emotions are those of erasure and nostalgia: in its extreme forms, collective memory has the potential to totally eliminate the past or construct a fantasy past that never existed. One classic sociological illustration of the importance of nostalgia is, of course, Stephanie Coontz’s The Way We Never Were (1992); this example shows clearly how the romanticization of the past is not purely cognitive or cultural, but structured by institutional power relations like those that reinforce patriarchy. In a parallel (maybe mutually constitutive) way, structures of oppression contribute to the ongoing erasure of women, people of color, and the working class from history in part because of how the socio-emotional consequences of these structures lead us to produce distorted imaginations of the past.

Obviously, these are just simple thumbnail sketches, but I believe that understanding the social imagination in its various temporal horizons is important, not just for explaining social action (in the interpretive, symbolic interactionist vein) but also for creating social change. Positive and negative emotions are powerful forces, and the terms on which people produce their imaginations of the world will also affect how they act in that world. As the old idea of cognitive liberation (McAdam 1982) implies, how we imagine the world can determine whether we mobilize for justice or surrender to despair. The social imagination is very much like other social institutions; it is a cultural entity in which past, present, and future intersect. Sociology should devote some attention to this institution as we do to the others.

References

Appadurai, Arjun. 1996. Modernity at Large: Cultural Dimensions of Globalization. Minneapolis, MN: University of Minnesota Press.

Baiocchi, Gianpaolo, Elizabeth A. Bennett, Alissa Cordner, Peter Taylor Klein, and Stephanie Savell. 2014. The Civic Imagination: Making a Difference in American Political Life. Boulder, CO: Paradigm Publishers.

Blair, Irene V., Jennifer E. Ma, and Alison P. Lenton. 2001. “Imagining Stereotypes Away: The Moderation of Implicit Stereotypes through Mental Imagery.” Journal of Personality and Social Psychology, 81 (5): 828-841.

Castoriadis, Cornelius. 1987. The Imaginary Institution of Society. Cambridge, MA: MIT Press.

Cerulo, Karen A. 2008. Never Saw it Coming: Cultural Challenges to Envisioning the Worst. Chicago: University of Chicago Press.

Coontz, Stephanie. 1992. The Way We Never Were: American Families and the Nostalgia Trap. New York: Basic Books.

Glaeser, Andreas. 2011. Political Epistemics: The Secret Police, the Opposition, and the End of East German Socialism. Chicago: University of Chicago Press.

Kant, Immanuel. 1929. Critique of Pure Reason. New York: St. Martin’s Press.

McAdam, Doug. 1982. Political Process and the Development of Black Insurgency, 1930-1970. Chicago: University of Chicago Press.

Nelson, Thomas E., Rosalee A. Clawson, and Zoe M. Oxley. 1997. “Media Framing of a Civil Liberties Conflict and its Effects on Tolerance.” American Political Science Review, 91 (3): 567-583.

Olick, Jeffrey K. 1999. “Collective Memory: The Two Cultures.” Sociological Theory, 17 (3): 333-348.

Orgad, Shani. 2012. Media Representation and the Global Imagination. Malden, MA: Polity Press.

Perrin, Andrew J. 2006. Citizen Speak: The Democratic Imagination in American Life. Chicago: University of Chicago Press.

Slusher, Morgan P., and Craig A. Anderson. 1987. “When Reality Monitoring Fails: The Role of Imagination in Stereotype Maintenance.” Journal of Personality and Social Psychology, 52 (4): 653-662.

Strauss, Claudia. 2006. “The Imaginary.” Anthropological Theory, 6 (3): 322-344.

Taylor, Charles. 2002. “Modern Social Imaginaries.” Public Culture, 14 (1): 91-124.

Wright, Erik Olin. 2013. “Transforming Capitalism Through Real Utopias.” American Sociological Review, 78 (1): 1-25.


[1] Framing and imagination are different concepts, and it is important to distinguish between them. Framing is a communicative process with cognitive effects, while the imagination is fundamentally a cognitive process, albeit with cultural influences. Setting that difference aside, though, and focusing purely on the sociological level of each concept, the social imagination is the process that shapes the pattern of associations that define cultural schemas, while framing is the process that shapes explicit cognition (for more on how framing works through deliberate, rather than automatic, processing, see Nelson, Clawson, and Oxley 1997).

“Learning By Nodes”: Dendritic Learning and What It Means (Or Not) for Cultural Sociology

In a paper published earlier this year in Scientific Reports, and further discussed in a later ACS Chemical Neuroscience article, a group of researchers argues that learning might not work the way we previously thought. The researchers (Sardi et al. 2018a, 2018b) contend that the dominant conceptualization in cognitive neuroscience of how learning works—synaptic learning, or “Hebbian learning” (Hebb 1949)—is wrong. Instead, using a series of computational models and experiments with synaptic blockers and neuronal cultures (see Sardi et al. 2018a:4-7), the authors find evidence for a different type of learning—what they refer to as “dendritic learning.” Just as “Copernicus was the first to articulate loudly that the earth revolves around the sun and not vice versa, even though all the accumulated astronomical evidence at that time fit the old postulation,” the researchers proclaim, so too are they the first to “[swim] against conventional wisdom” of Hebbian learning theory (2018b:1231).

Of what consequence is this newfound process of dendritic learning for cultural sociology? Should we care at all? I’ll try to briefly describe some of the potential consequences of dendritic learning for cultural sociology; but, spoiler alert, I am not sure whether these consequences matter much for how we do sociology. Perhaps taking a peek at what dendritic learning is, and how it differs from conventional understandings of how learning works, is a good place to start.

Figure 1. Are We Witnessing a “Revolution of the Cognitive Spheres”?
Note: Image from Copernicus’ On the Revolutions of the Heavenly Spheres (Palca 2011).

LINKS VS. NODES

For going on 70 years, the prevailing explanation for how learning works has been synaptic learning. Building on Hebb’s (1949) The Organization of Behavior, the idea behind synaptic learning is that if an activity stimulates a neuron which in turn stimulates another neuron, and if that activity is repeated over time, then the first neuron becomes a more efficient stimulator of the second, and the two become more strongly connected in the brain.

Neuron-neuron stimulation occurs through synapses, the chemical (usually) or electrical (less frequently) structural gaps between neurons that transmit information across them. Synaptic learning, then, is a type of “activity-dependent synaptic plasticity” (Choe 2015:1305). Repeated practice or exposure to a certain stimulus modifies the synaptic strength between the two neurons: when the practice/exposure is repeated, the two neurons become more tightly associated in the brain, and when it is not, the association weakens. This process occurs relatively slowly.

Synaptic learning is the inspiration behind the old adage that “neurons that fire together wire together.” Until very recently, this was the way we assumed new neural coalitions formed in biological neural networks. Consider an example from Luke Muehlhauser over on the Less Wrong blog (Muehlhauser 2011). Think back to Pavlov’s experiments on classical conditioning (Pavlov 1910): a dog is given food when the researcher rings a bell, and the timing between the bell ringing and the presentation of food is manipulated. At first, there is no association between the neurons stimulated by the bell ringing and the neurons that trigger salivation; the two activities are, ostensibly, unrelated. However, if the researcher rings the bell and presents the food at the same time (or in close enough time intervals), the neurons that fire when food is present and the neurons that fire with bell ringing are activated together. Over repeated trials, the synapses between “bell ringing” and “salivation” neurons become stronger and, eventually, simply ringing the bell induces salivation without the presentation of food (see Figure 2).
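The Hebbian rule behind this story boils down to a one-line weight update. Below is a minimal sketch of the Pavlov example in Python; the learning rate, threshold, and neuron labels are illustrative assumptions, not values from any of the cited studies.

```python
# Toy Hebbian ("fire together, wire together") conditioning.
# All parameters are illustrative, not drawn from the cited research.

def hebbian_update(w, pre, post, eta=0.1):
    """Strengthen the synapse when pre- and postsynaptic neurons co-fire."""
    return w + eta * pre * post

w = 0.0          # "bell" -> "salivation" synapse, initially absent
threshold = 0.5  # input needed for the salivation neuron to fire

for trial in range(10):
    bell = 1.0   # bell rings (presynaptic activity)
    food = 1.0   # food is presented, so salivation fires regardless
    salivation = 1.0 if (food == 1.0 or bell * w >= threshold) else 0.0
    w = hebbian_update(w, bell, salivation)

# After conditioning, the bell alone clears the firing threshold
print(bell * w >= threshold)  # prints True
```

Note that the rule only strengthens: once food stops appearing, the co-activation product is zero and the weight merely stops growing, so a separate decay term would be needed to model extinction.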

Figure 2. Synaptic Learning with Pavlov’s Experiment
Note: Reprinted from Less Wrong blog (Muehlhauser 2011).

Sardi and colleagues refer to synaptic learning as “learning by links” (Sardi et al. 2018a:1), since learning occurs through the synapses that link the neurons together. Their research, however, suggests a different type of learning—dendritic learning, also known as “learning by nodes” (Sardi et al. 2018a:2). In short, with this mode of learning, the workhorse of the neuron for learning purposes is not the synapse but the dendrite. In a neuron cell, dendrites are the long, treelike extensions that connect the cell body (the soma, which contains the cell nucleus) to the synapses that themselves “connect” the neuron to other neurons.

Take a look at Figure 3, a neuron cell’s anatomy. The dendrites are responsible for taking in information from other neurons and passing it along into the soma, while the axon is responsible for passing the information on to other neurons via the axon terminals—which are themselves connected to the next neuron’s dendrites through synapses, thus propagating information transmission across the neural network. Without dendrites, information cannot be transmitted into the body of the neuron: e.g., damaged or abnormal dendrites are linked to brain under-connectivity issues associated with autism (Martínez-Cerdeño, Maezawa, and Jin 2016). Trying to construct new neural networks without dendrites is like trying to have group deliberation with all talk and no listening.

Figure 3. A Neuron’s Anatomy
Note: Reprinted from OpenStax (2018), redirected from Khan Academy (2018).

So, how does dendritic learning differ functionally from synaptic learning? While synaptic learning is based on the idea of synaptic plasticity, dendritic learning revolves around the notion of (you guessed it) a sort of dendritic plasticity: given increasing or decreasing levels of exposure to a neuron-activating stimulus, the neuron’s “dendritic excitability” can grow or diminish while the strength of the synapses remains relatively constant (Neuroskeptic 2018).

Consider Figure 4. Across both panels, the teardrop object at the bottom represents the neuron cell body, which is where the firing happens if the input signals from the dendrites are strong enough for an outgoing signal to be pushed from the cell body down through the axon and into the dendrites of the next neuron. The long treelike branches are the dendrites, and the tips are the synapses that connect the neuron’s dendrites to the axon terminals of other (not shown) neurons. The left panel illustrates conventional synaptic learning, where the synapses themselves are weighted (indicated by the red valves at the tips of the branches) upward or downward depending on the extent of stimulus exposure. The right panel shows dendritic learning: it is the extent to which a neuron’s dendrites are in a high state of stimulation, and not the strength of the synapses linking the neuron to other neurons, that determines the strength of the input signal and therefore whether or not the neuron fires. In dendritic learning, then, there are far fewer “learning parameters,” since the dendrites are responsible for the learning and not the synapses (see the right panel of Figure 4) (ScienceDaily 2018).
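The “fewer learning parameters” point is easy to see in a toy model. The sketch below builds one neuron with three dendrites and ten synapses per branch (sizes invented for illustration): synaptic learning tunes a weight per synapse, while dendritic learning tunes one excitability per branch and leaves the synapses fixed. This only illustrates the parameter-count contrast described above, not the authors’ actual model.

```python
# Contrast "learning by links" (a weight per synapse) with
# "learning by nodes" (a weight per dendrite). Sizes are invented.
import numpy as np

rng = np.random.default_rng(0)
n_dendrites, syn_per_dendrite = 3, 10
inputs = rng.random((n_dendrites, syn_per_dendrite))  # presynaptic activity

# Synaptic learning: every synapse carries its own adjustable weight.
syn_weights = rng.random((n_dendrites, syn_per_dendrite))
drive_links = (syn_weights * inputs).sum()

# Dendritic learning: synapses fixed, one adjustable excitability per branch.
dendrite_weights = rng.random(n_dendrites)
drive_nodes = (dendrite_weights * inputs.sum(axis=1)).sum()

# Either total drive can be compared to a firing threshold; the
# difference is how many knobs learning has to turn.
print(syn_weights.size, dendrite_weights.size)  # prints: 30 3
```

Here the dendritic scheme has 3 adjustable parameters instead of 30, which is the sense in which learning “by nodes” is the leaner description.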

Figure 4. Synaptic Learning (left) vs. Dendritic Learning (right)
Note: Reprinted from ScienceDaily (2018).

IMPLICATIONS (?) FOR CULTURAL SOCIOLOGY

The “Neuroskeptic” over at Discover Magazine reviewed the evidence from the Sardi et al. papers and suggests that “[a]t best they have shown that dendritic learning also happens [in addition to synaptic learning],” and that “[they] don’t think Copernicus has returned to earth just yet” (Neuroskeptic 2018). I agree with Neuroskeptic in terms of what this means for neuroscience, largely because they are the neuroscientist and I am not. That said, there do seem to be potential implications for how we do cultural sociology, though the potential may be greater for some subfields than for others.

I’m Not Sure What this Adds for How Sociologists Study Learning

The existence of dendritic learning has at least two major implications for cognitive neuroscience. First, learning may happen at much faster timescales than previously thought. Second, weak synapses matter a lot. In terms of timescale, it seems the brain isn’t that bad at quick adaptation—at least relative to what traditional Hebbian learning would predict. As Sardi and colleagues note, “[t]his dynamic brain activity leads to the capability that when we think about an issue several times we may find different solutions” (Shrourou 2018). As for the importance of weak synapses, the researchers point out that dendritic strengths are “self-oscillating” (2018b:1231), with weak synapses effectively “tempering” the dendritic weights and preventing them from taking on extreme values. In other words, “dendritic learning enables stabilization around intermediate [dendritic strength] values” (Sardi et al. 2018a:4). These implications are pretty important for neuroscientists and medical researchers studying various diseases of the brain (Sardi et al. 2018b:1231-32).
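The “tempering” idea can be caricatured with a two-term update: a potentiating drive that saturates, and a weak opposing pull. In the sketch below, both rates and the resulting fixed point are invented for illustration (they are not taken from Sardi et al.); the point is only that the weight settles at an intermediate value rather than running to either extreme.

```python
# Toy dynamics for stabilization around intermediate values.
# The two rates (0.1, 0.05) are arbitrary illustrative choices.
w = 0.0
for step in range(200):
    potentiation = 0.1 * (1.0 - w)  # drive toward the maximum, saturating
    weak_pull = 0.05 * w            # weak synapses damp the weight
    w += potentiation - weak_pull

print(round(w, 2))  # settles near 0.67, between the extremes 0 and 1
```

The fixed point sits where the two terms balance (0.1(1 - w) = 0.05w, i.e., w = 2/3), so the weight stabilizes at an intermediate value no matter where it starts.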

What does all this mean for cultural sociologists? It might be too early to tell. Dendritic learning might be faster than synaptic learning, but the timescales in the experiments are much shorter (minutes) than the learning processes of interest to sociologists. The researchers note that future studies should “investigate . . . [dendritic learning] efficiency and available learning time scales in more realistic scenarios” (2018b:1231), so whether the learning speed differentials between synaptic and dendritic learning are a wash at longer timescales is an empirical question. In terms of theoretical leverage, then, dendritic learning may or may not offer much over and above how we already talk about learning in culture and cognition studies (see Lizardo et al. 2016:293-95). At the end of the day, for cultural sociologists it may all look like GOFILT—Good Old-Fashioned Implicit Learning Theory—in which case the difference between synaptic and dendritic learning can be taken as ontologically true but analytically inconsequential. Only time (pun intended) will tell.

The Payoff May Come Sooner for Computational Social Science

In addition to understanding the learning processes behind biological neural networks and brain disorders, Sardi and colleagues also note that this “paradigm shift” matters for developing machine learning algorithms built to mimic human learning (2018b:1231). In natural language processing, for instance, if synaptic learning isn’t the baseline model of human learning (itself an empirical question), then perhaps analytical strategies that build associations between terms or documents based on term frequencies and co-occurrences aren’t based on the best cognitive model for machine learning.

But at face value I’m skeptical of this last proposition—I like word count methods for analyzing meaning, as do others (Nelson 2014; Underwood 2013), and I’ve read enough papers making defensible claims with them to sell me on their continued use. That said, we have not seen dendritic learning rules implemented in machine learning algorithms yet (but see Sardi et al. 2018a:2-3 for an example of dendritic learning rules in a series of perceptron models), and they might prove particularly consequential in deep learning tasks and artificial neural network models. These sorts of machine learning algorithms have not gained much traction in sociology, though, so, for now, it seems that the utility of distinguishing between synaptic and dendritic learning for culture and cognition studies is truly a waiting game.
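For concreteness, the co-occurrence strategy mentioned above can be reduced to a few lines: associate two terms whenever they appear in the same document. The tiny corpus here is made up purely for illustration.

```python
# Bare-bones document-level term co-occurrence counting.
from collections import Counter
from itertools import combinations

docs = [
    "markets crash when traders panic",
    "traders ride bubbles in markets",
    "dendrites carry signals to the soma",
]

cooc = Counter()
for doc in docs:
    # count each unordered term pair once per document
    for pair in combinations(sorted(set(doc.split())), 2):
        cooc[pair] += 1

# "markets" and "traders" co-occur in two documents, while market terms
# and neuron terms never co-occur, so the counts separate the two topics.
print(cooc[("markets", "traders")], cooc[("markets", "soma")])  # prints: 2 0
```

Whether the human learning that such counts are meant to mirror is “synaptic” or “dendritic” is exactly the open question; the counting itself is agnostic.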

I can continue all of my work without making these distinctions, and I suspect that most of the people reading this post are in the same position.

REFERENCES

Choe, Yoonsuck. 2015. “Hebbian Learning.” Pp. 1305-09 in Encyclopedia of Computational Neuroscience, edited by D. Jaeger and R. Jung. New York: Springer.

Hebb, Donald O. 1949. The Organization of Behavior: A Neuropsychological Theory. New York: Wiley.

Khan Academy. 2018. “Overview of Neuron Structure and Function.” Khan Academy. Retrieved October 16, 2018 (https://www.khanacademy.org/science/biology/human-biology/neuron-nervous-system/a/overview-of-neuron-structure-and-function).

Lizardo, Omar, Robert Mowry, Brandon Sepulvado, Dustin S. Stoltz, Marshall A. Taylor, Justin Van Ness, and Michael Wood. 2016. “What Are Dual Process Models? Implications for Cultural Analysis in Sociology.” Sociological Theory 34(4):287-310.

Martínez-Cerdeño, Verónica, Izumi Maezawa, and Lee-Way Jin. 2016. “Dendrites in Autism Spectrum Disorders.” Pp. 525-43 in Dendrites: Development and Disease, edited by K. Emoto, R. Wong, E. Huang, and C. Hoogenraad. Tokyo: Springer.

Muehlhauser, Luke. 2011. “A Crash Course in the Neuroscience of Human Motivation.” Less Wrong. Retrieved October 16, 2018 (https://www.lesswrong.com/posts/hN2aRnu798yas5b2k/a-crash-course-in-the-neuroscience-of-human-motivation).

Nelson, Laura K. 2014. “Computer-Assisted Content Analysis and Sociology: What You Should Know.” Bad Hessian. Retrieved October 17, 2018 (http://badhessian.org/2014/01/computer-assisted-content-analysis-and-sociology-what-you-should-know/).

Neuroskeptic. 2018. “Is ‘Dendritic Learning’ How the Brain Works?” Discover Magazine. Retrieved October 16, 2018 (http://blogs.discovermagazine.com/neuroskeptic/2018/05/11/dendritic-learning/#.W8aX4P5KjdT).

OpenStax. 2018. “Neurons and Glial Cells.” OpenStax CNX. Retrieved October 16, 2018 (https://cnx.org/contents/GFy_h8cu@9.87:c9j4p0aj@3/Neurons-and-Glial-Cells).

Palca, Joe. 2011. “For Copernicus, A ‘Perfect Heaven’ Put Sun At Center.” NPR: Morning Edition. Retrieved October 16, 2018 (https://www.npr.org/2011/11/08/141931239/for-copernicus-a-perfect-heaven-put-sun-at-center).

Pavlov, Ivan. 1910. The Work of the Digestive Glands. London: C. Griffin & Company.

Sardi, Shira, Roni Vardi, Amir Goldental, Anton Sheinin, Herut Uzan, and Ido Kanter. 2018a. “Adaptive Nodes Enrich Nonlinear Cooperative Learning Beyond Traditional Adaptation By Links.” Scientific Reports 8(1):5100.

Sardi, Shira, Roni Vardi, Amir Goldental, Yael Tugendhaft, Herut Uzan, and Ido Kanter. 2018b. “Dendritic Learning as a Paradigm Shift in Brain Learning.” ACS Chemical Neuroscience 9:1230-32.

ScienceDaily. 2018. “The Brain Learns Completely Differently than We’ve Assumed Since the 20th Century.” ScienceDaily. Retrieved October 16, 2018 (https://www.sciencedaily.com/releases/2018/03/180323084818.htm).

Shrourou, Alina. 2018. “Dendritic Learning Occurs Much Faster and In Closer Proximity to Neurons, Shows Study.” News Medical: Life Sciences. Retrieved October 16, 2018 (https://www.news-medical.net/news/20180830/Dendritic-learning-occurs-much-faster-and-in-closer-proximity-to-neurons-shows-study.aspx).

Underwood, Ted. 2013. “Wordcounts Are Amazing.” The Stone and the Shell. Retrieved October 17, 2018 (https://tedunderwood.com/2013/02/20/wordcounts-are-amazing/).

Limits of innateness: Are we born to see faces?

Sociologists tend to be skeptical of claims that individuals are consistent across situations, as a recent exchange on Twitter exemplifies. This exchange was partially spurred by revelations that the famous Stanford Prison Experiment (which supposedly showed that people will quickly engage in behaviors commensurate with their assigned roles, even if it means being cruel to others) was even more problematic than previously thought.


The question of individual “durability” is sometimes framed as “nature vs. nurture,” and this is certainly part of the matter. In sociology, however, this skepticism of “durability” often goes much further than innateness, and sometimes leads sociologists to suggest that individuals are inchoate blobs until situations come along to construct us (or interlocutors may resort to obfuscation by touting the truism that humans are always in a situation). If pushed on the topic, however, even the staunchest situationalist would likely concede that humans are born with some qualities, and the real question concerns the limits of such innateness. What kinds of qualities can be innate? To what extent are these innate qualities human universals? And if we are “born with it,” can “it” change, and how, and to what extent? In Stephen Turner’s new Cognitive Science and the Social, he puts the matter succinctly:

“…children quickly acquire the ability to speak grammatically. This seems to imply that they already had this ability in some form, such as a universal set of rules of language stored in the brain. If one begins with this problem, one wants a model of the brain as “language ready.” But why stop there? Why think that only grammatical rules are innate? One can expand this notion to the idea of the “culture-ready” brain, one that is poised and equipped to acquire a culture” (2018:44–45).

As I’ve previously discussed, the search for either the universal rules or a specialized module for language has, thus far, failed. Nevertheless, most humans must be “language-ready” in the minimal sense of having the ability to acquire the ability to speak and understand speech. But answering the question of where innateness ends and enculturation begins is not easy, even for those without a disciplinary inclination toward strongly situationalist arguments.

Are we born to see faces?

How we identify faces is a good place to explore this difficulty: Do we learn to identify faces or are we born to see faces? And, if we are born to see faces, is this ability refined through use, and to what extent? Enter: the fusiform face area (FFA). Like language, the FFA is often used as evidence for the more general arguments of functional localization and domain specificity. The argument goes: facial recognition is produced not by the generic cognitive processes involved in vision (or other generic processes), but rather by an inborn special-purpose module.

One reason faces are an even better candidate than language for grappling with the question of innateness is that the human fetus is exposed to language while in the womb: fetuses gain some sense of prosody, tonality, and, as a result, a basic sense of grammar in the course of development in utero. There is no comparable exposure to faces. Another reason is that, as the Gestalt psychologists argued, faces have an irreducible structure such that they are perceived as complete wholes even when viewing only a part — “the whole is something else than the sum of its parts, because summing is a meaningless procedure, whereas the whole-part relationship is meaningful” (Koffka 1935:176).

Facial recognition encompasses two related functions: distinguishing faces from non-face objects and distinguishing among faces. The key debate within this area of cognitive neuroscience is whether a module is specialized for one or both of these processes (Kanwisher, McDermott, and Chun 1997; Kanwisher and Yovel 2006), as opposed to a distributed and generic cognitive process (Haxby et al. 2001). This debate goes back to the observation that humans struggle to recognize and remember faces that are upside down, more so than for any non-face object (Diamond and Carey 1986) — suggesting something about faces makes them unique. The proposal that facial recognition is the product of a specialized module, however, begins with a relatively recent paper by Kanwisher et al. (1997). Using functional magnetic resonance imaging (which I’ve discussed in detail in previous posts), 15 subjects were shown various common objects as well as faces. In 12 of those subjects, a specific area of the brain was more active when they saw faces than when they saw non-face objects. On its face, this seems like reasonable evidence that humans are born with a module necessary for identifying faces.

However, when one squares this claim with the underlying logic of fMRI—(a) it measures relative activation, not an on/off process, and (b) its voxel and temporal resolution are far too coarse to conclude that a region is homogeneously activated—the claim that the FFA is a functionally specialized module for facial recognition weakens considerably. These areas are not entirely inactive when viewing non-face objects. Indeed, relative to baseline activation, subsequent research found the FFA is significantly more active when viewing various objects (Grill-Spector, Sayres, and Ress 2006). Specifically, the level of specificity of the stimulus (e.g., faces tend to be individuals whereas chairs tend to be generic) and the participant’s level of expertise with the stimulus (e.g., car and bird enthusiasts) predicted greater relative activation (Gauthier et al. 2000; Rhodes et al. 2004).

Finally, even if we are born to distinguish faces from non-faces, the ability to distinguish among faces is considerably trained by early socialization, and such socialization introduces a lot of variation among people. For example, one of the earliest attempts to measure facial recognition concluded “that women are perhaps superior to men in the test; that salespeople are superior to students and farm people; that fraternity people are perhaps superior to non-fraternity people…” (Howells 1938:127).

Subsequent research in this vein found that individuals are better at distinguishing among their racial/ethnic ingroups than their outgroups. In an early study of black and white students from a predominantly black university and a predominantly white university, researchers found that participants more easily discriminated among faces of their own race. They also found “white faces were found more discriminable” overall, which they suggest may be because “the distribution of social experience is such that both black persons and white persons will have had more exposure to white faces than black faces in public media…” (Malpass and Kravitz 1969:332). Summarizing more recent work, Kubota et al. (2012) state that “participants process outgroup members primarily at the category level (race group) at the expense of encoding individuating information because of differences in category expertise or motivated ingroup attention.”

Why should sociologists care?

To summarize, the claim that facial recognition emerges from an innate, functionally specialized cognitive module is weakened in four ways: the FFA responds to generic features faces share with other objects; the FFA is implicated in a distributed neural network rather than solely a discrete module; the FFA is used for non-facial recognition functions; and, finally, facial recognition is trained by our (social) experience. Why should sociologists care? I think there are three reasons. First, innateness is not deterministic or specific but rather constraining and generic. Second, these constraints ripple throughout our social experience, forming the contours of cultural tropes, but are not immutable. Third, limited innateness does not mean individuals are not durable across situations, even (near) universally so.

A dispositional and distributed theory of cognition and action accounts for object recognition by its use: “information about salient properties of an object—such as what it looks like, how it moves, and how it is used—is stored in sensory and motor systems active when that information was acquired” (Martin 2007:25). This is commensurate with the broad approach many of the posts on this blog have been working with. Perhaps, however, there is a special class of objects for which this is not exactly the case. In other words, the admittedly weak innateness of distinguishing unfamiliar faces from non-face objects is, perhaps, evidence that we are “born with” some forms of nondeclarative knowledge (Lizardo 2017).

Such nondeclarative knowledge, however, may be repurposed for cultural ends. Following the logic of neural exaptation, discussed in a previous post, humans can be born with predispositions, especially related to very generic cognitive processes, which are further trained, refined, and recycled for novel uses, uses that are nevertheless constrained in ways that yield testable predictions. A fascinating example related to facial perception is anthropomorphization. If rudimentary facial recognition is innate (and therefore probably evolutionarily old), this inherently social-cognitive process is being reused for non-social purposes (i.e., non-social in the restricted sense of interpersonal interaction). The facial recognition network—together with other neuronal networks—is used to identify people and predict their behavior, and it may be adapted to non-human animate and inanimate objects, like natural forces, as well as to anonymous social structures, like financial markets.

What this means, following the logic of neural reuse and conceptual metaphor theory, is that the target domain (e.g., derivative markets, earthquakes) is “contaminated” by predispositions that originally dealt with the source domain (here, interpersonal interaction). Attempting to imagine the intentions of thousands of unknown traders as if inferring the intentions of an interlocutor may, for instance, lead traders to “ride” financial bubbles (De Martino et al. 2013). What is and is not innate is therefore a messy question to answer — even for those without a disciplinary distrust of innateness claims. Although cognitive neuroscientists are making headway, it remains an empirical question which objects are recognized innately and to what extent that recognition is robust to enculturation and neural recycling.

More importantly, the question of individual durability across situations should not be reduced solely to “nature vs. nurture.” That is, we must grapple with the question of how easily these processes, once trained in an individual (during “primary socialization”), can be re-trained, if at all. In John Levi Martin’s Thinking Through Theory (2014:249), the third of his “Newest Rules of Sociological Method” is pessimistic in this regard: “Most of what people think of as cultural change is actually changes in the compositions of populations.” That is, even if we were to bar the possibility of innateness in any strong sense, once individuals reach a certain age they are likely to be fairly consistent across situations, with little chance of altering in fundamental ways.

REFERENCES

De Martino, Benedetto, John P. O’Doherty, Debajyoti Ray, Peter Bossaerts, and Colin Camerer. 2013. “In the Mind of the Market: Theory of Mind Biases Value Computation during Financial Bubbles.” Neuron 79(6):1222–31.

Diamond, Rhea and Susan Carey. 1986. “Why Faces Are and Are Not Special: An Effect of Expertise.” Journal of Experimental Psychology. General 115(2):107.

Gauthier, I., P. Skudlarski, J. C. Gore, and A. W. Anderson. 2000. “Expertise for Cars and Birds Recruits Brain Areas Involved in Face Recognition.” Nature Neuroscience 3(2):191–97.

Grill-Spector, Kalanit, Rory Sayres, and David Ress. 2006. “High-Resolution Imaging Reveals Highly Selective Nonface Clusters in the Fusiform Face Area.” Nature Neuroscience 9(9):1177–85.

Haxby, J. V., M. I. Gobbini, M. L. Furey, A. Ishai, J. L. Schouten, and P. Pietrini. 2001. “Distributed and Overlapping Representations of Faces and Objects in Ventral Temporal Cortex.” Science 293(5539):2425–30.

Howells, Thomas H. 1938. “A Study of Ability to Recognize Faces.” Journal of Abnormal and Social Psychology 33(1):124.

Kanwisher, Nancy and Galit Yovel. 2006. “The Fusiform Face Area: A Cortical Region Specialized for the Perception of Faces.” Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences 361(1476):2109–28.

Kanwisher, N., J. McDermott, and M. M. Chun. 1997. “The Fusiform Face Area: A Module in Human Extrastriate Cortex Specialized for Face Perception.” The Journal of Neuroscience: The Official Journal of the Society for Neuroscience 17(11):4302–11.

Koffka, Kurt. 1935. Principles of Gestalt Psychology. New York: Harcourt, Brace.

Kubota, Jennifer T., Mahzarin R. Banaji, and Elizabeth A. Phelps. 2012. “The Neuroscience of Race.” Nature Neuroscience 15(7):940–48.

Lizardo, Omar. 2017. “Improving Cultural Analysis: Considering Personal Culture in Its Declarative and Nondeclarative Modes.” American Sociological Review 82(1):88–115.

Malpass, R. S. and J. Kravitz. 1969. “Recognition for Faces of Own and Other Race.” Journal of Personality and Social Psychology 13(4):330–34.

Martin, Alex. 2007. “The Representation of Object Concepts in the Brain.” Annual Review of Psychology 58(1):25–45.

Martin, John Levi. 2014. Thinking Through Theory. W. W. Norton, Incorporated.

Rhodes, Gillian, Graham Byatt, Patricia T. Michie, and Aina Puce. 2004. “Is the Fusiform Face Area Specialized for Faces, Individuation, or Expert Individuation?” Journal of Cognitive Neuroscience 16(2):189–203.

Turner, Stephen P. 2018. Cognitive Science and the Social: A Primer. Routledge.

Beyond Good Old-Fashioned Ideology Theory, Part Two

In part one, I examined two recent frameworks for understanding ideology (Jost’s and Martin’s) and explained how both serve as alternatives to good old-fashioned ideology theory (GOFIT). Ultimately, I concluded that Martin’s (2015) model has specific advantages over Jost’s (2006), though the connection between ideology and “practical mastery of ideologically-relevant social relations” needs to be fleshed out. This is particularly true because any strong concentration on social relations seems to preclude serious attention to cognition. But without that attention, the argument is vulnerable to charges of reductionism.

In this post, I sketch a model of cognition that checks the boxes of GOFIT ideology: distorting, invested with power, supportive of unequal social relations. But it is different for reasons I specify below. To do this, I use a famous line of experiments in neuroscience—Michael Gazzaniga’s “split-brain” research—and draw an analogy between it and a possible non-GOFIT ideology.

Galanter, Gerstenhaber … and Geertz

But before doing that, it seems reasonable to ask about the purpose of even attempting a non-GOFIT ideology. Is GOFIT a strawman? Why is it problematic? To answer these questions, and to indicate why a holistic revision of ideology away from GOFIT seems to be in order, consider Clifford Geertz and his essay (1973) “Ideology as a cultural system,” which presents what is to date arguably the most influential, non-Marxist approach to ideology in the social sciences. Geertz’s burden is to make ideology relevant by providing it with a “nonevaluative” form. And the way he does this, using modular or computational cognition, is what I want to focus on.

Ideology here is not tantamount to the oversimplified, inaccurate, “fake news”-style distortion that is, above all and categorically, what science is not. But if it isn’t to be censured like this, then for Geertz ideology must be a symbolic phenomenon that has something to do with how “symbolic systems” make meaning in the world and in turn serve to guide action (e.g., “models of, models for”). To make this argument, he does, in fact, make ideology cognitive by drawing from a psychological model: Eugene Galanter and Murray Gerstenhaber’s (1956) “On Thought: The Extrinsic Theory.”

As Geertz summarizes:

thought consists of the construction and manipulation of symbol systems, which are employed as models of other systems, physical, organic, social, psychological, and so forth, in such a way that the structure of these other systems– and, in the favorable case, how they may therefore be expected to behave–is, as we say “understood.” Thinking, conceptualization, formulation, comprehension, understanding, or what-have-you, consists not of ghostly happenings in the head but of a matching of the states and processes of symbolic models against the states and processes of the wider world … (214)

Geertz returns to this same argument in arguably his most thorough approach to the culture concept (“The Growth of Culture and the Evolution of Mind”). Importantly, there too he does not conceive of culture or symbols absent a psychological referent, which he consistently draws from Galanter and Gerstenhaber.

Whatever their other differences, both so-called cognitive and so-called expressive symbols or symbol-systems have, then, at least one thing in common: they are extrinsic sources of information in terms of which human life can be patterned–extrapersonal mechanisms for the perception, understanding, judgment, and manipulation of the world. Culture patterns–religious, philosophical, aesthetic, scientific, ideological–are “programs”; they provide a template or blueprint for the organization of social and psychological processes, much as genetic systems provide such a template for the organization of organic processes (Geertz, 216)

How does this apply to ideology? It makes ideology a symbolic system for building an internal model. Geertz is distinctively not anti-psychological here but instead seems to double down on the “extrinsic theory of thought” to define culture as a symbol system through which agents construct models of and for some system out in the world, effectively programming their response to that system. Ideology refers to the symbol system that does this for the political system:

The function of ideology is to make an autonomous politics possible by providing the authoritative concepts that render it meaningful, the suasive images by means of which it can be sensibly grasped … Whatever else ideologies may be–projections of unacknowledged fears, disguises for ulterior motives, phatic expressions of group solidarity–they are, most distinctively, maps of problematic social reality and matrices for the creation of collective conscience (Geertz, 218, 220)

Geertz mentions the example of the Taft-Hartley Act (restricting labor unionizing), which carried the ideological label the “slave labor act.” Geertz emphasizes how ideology works according to how well or how poorly the model (“slave labor act”) “symbolically coerces … the discordant meanings [of its object] into a unitary conceptual framework” (210-211).

If GOFIT is a set of assumptions widely held about ideology, then we probably find little to disagree with in Geertz’s argument, at least at first glance. Much of it should ring true. If we object to anything it might be the heavy-handed language that Geertz uses that evokes modular or computational cognition (e.g. “programs”). But maybe Geertz himself is not responsible for this. His sources, Galanter and Gerstenhaber, were explicit in making these assumptions about cognition, and this I want to argue is important for a specific reason.

To Galanter and Gerstenhaber, “model” clearly meant the sort of three-dimensional scale models that scientists construct in order to understand large-scale physical phenomena. In this sense, they solved the “problem of human thinking” by defining it as a lesser version of idealized scientific thinking. And they were not alone in that pursuit. At least initially, cognitive science presented itself as antithetical to behaviorism in psychology by allying itself with resources that were quite deliberate and quite reflexive: “[mid-century] cognitive scientists … looked for human nature by holding an image of what they were looking for in their [own] minds. The image they held was none other than their own self-image … ‘good academic thinking’ [became the] model of human thinking” (Cohen-Cole 2005).

This is not only the context for Geertz’s theory of ideology. His understanding of “symbol systems” writ large cannot be separated from this specific gloss on, and extension of, “good academic thinking.” For our purposes, this should raise the question of whether using symbol systems to form internal models of the external world, and to manipulate and creatively construe those models as “symbolic action,” should be the template or basis for defining ideology on nonevaluative grounds, that is to say, for defining ideology in the way that Geertz himself does: as cognitive.

Ideology and the Split-Brain

What I will try to do now, after this long preamble, is sketch a different possible cognitive basis for a theory of ideology, one that I think is compatible with Martin’s (2015) field-theoretic approach to ideology discussed in part one of this post. It develops a cognitive interpretation of what “practical mastery of ideologically relevant social relations” might mean. It also situates Marx as the contrary of Geertz by making social relations a necessary condition for ideology as a cognitive phenomenon, not something that needs to be bracketed (or pigeonholed as “strain” or “interest”) for ideology to be cognitive.

This different basis is Gazzaniga’s research (1967; 1998; Gazzaniga and Ledoux 1978) on the split-brain and the process of confabulation of meaning on the basis of incomplete visual input. It is important to mention that I use the split-brain as an analogue (in “good academic thinking” terms) to convey what ideology might mean as a cognitive phenomenon if it is not a symbol system. I do not imply that ideology requires a split-brain as a physical precondition.

For Gazzaniga, the two sides of the brain effectively constituted two separate spheres of consciousness, but this could only be truly appreciated when the corpus callosum was severed (a procedure once used for epileptic patients) and the two sides of the brain were rendered independent from each other. When this happened, the visual field was bisected: the brain stopped integrating information that came through the right and left visual fields (hereafter RVF and LVF). What was observable in the RVF was received independently from what was observable in the LVF. As Gazzaniga found, the brain is multi-modal, and the left hemisphere is the center of language about visual input. So when a word or image was flashed to the RVF and the information was received by the left hemisphere, the patient could provide an accurate report. When a word or image was flashed to the LVF, the patient could only confabulate, because the non-integrated brain could not combine the visual information with the language functions of the left hemisphere. The split-brain patient effectively “didn’t see anything,” even though she could still connect visual cues to related pictures on command.

When visual information is presented to a split-brain, the mystery is how the verbal left hemisphere attempts to make sense of what the non-verbal right hemisphere is doing. This is the recipe for confabulations or “false memories” as Gazzaniga (1998) puts it, because here we witness the effects of the “interpreter mechanism.”

Thus, when the LVF and RVF of a split-brain patient were shown pictures of a house in the snow and a chicken’s claw, and the patient was asked to point to relevant pictures based on these visual cues, she pointed to a snow shovel and a chicken head respectively. Here is the interesting part:

the right hemisphere—that is, the left hand—correctly picked the shovel for the snowstorm; the right hand, controlled by the left hemisphere, correctly picked the chicken to go with the bird’s foot. Then we asked the patient why the left hand— or right hemisphere—was pointing to the shovel. Because only the left hemisphere retains the ability to talk, it answered. But because it could not know why the right hemisphere was doing what it was doing, it made up a story about what it could see—namely, the chicken. It said the right hemisphere chose the shovel to clean out a chicken shed (Gazzaniga 1998: 53; emphasis added).

“It made up a story” refers here to the verbal left hemisphere attempting to make sense of why the right hemisphere had directed the left hand toward a shovel. The right hemisphere, to which the picture was flashed, lacked any narrative ability, and yet the split-brain patient could still point at a relevant image even though the cue did not “pass through” language.

The argument here is that this serves as a good analogue for a theory of ideology that does not make computational or modular commitments. The important point is that confabulation is not just some made-up story, but what the split-brain patient believes because his brain has filled in the blank (e.g. “I chose the shovel because I need to shovel out the chicken coop”). Ideology as a cognitive phenomenon does not, in this sense, mean programming the political system according to an extrinsic symbol system; in other words, it does not mean building an internal model (a three-dimensional one) of that system and drawing entailments from it, as any good scientist would do. To be “in ideology” means filling in the blank as the normal way to cognitively cope with disconnected inputs, some with a “phonological representation,” others that are “nonspeaking.”

The Split-Brain and Social Relations

We can theorize that practical mastery of social relations, in particular of social relations that are “ideologically relevant,” becomes important because such relations generate an equivalent of a split-brain effect and its “interpreter mechanism.” In social relations arranged as fields, practical mastery consists of the “felt motivation of impulsion … to attach impulsion … to positions … [and have] the ethical or imperative nature of such motivations [be] akin to a social object, external and (locally) intersubjectively valid, that is, valid conditional on position and history” (Martin 2011: 312).

Fields refer to one type of social relation conducive to ideological effects, particularly if they are organized on quasi-Schmittian grounds of opponents and allies (Martin 2015). Marx is clear that other types of social relation (like capital) are specifically resistant to influence by any sort of cognitive mediation. Still, he achieves some understanding of those social relations by examining their “being thought … [through] abstractions” (see Marx 1973: 143). For instance, the commodity fetish can be seen as analogous to a split-brain effect: the “social relation between people” is equivalent to an LVF input, inaccessible to verbal report, while the “social relation between things” is the confabulated interpretation the verbal hemisphere builds around it. A split-brain is an analogue of mental structures that correspond to these objective (social) structures.

Taking the split-brain as the basis (not the “extrinsic theory”) for ideology as a (non-GOFIT) cognitive phenomenon, then, we can speculate that only certain social relations (fields, capital) have an ideological effect, and that they have it because they generate a split-brain scenario with disconnected inputs. Agents are subject to social relations to which they do not have direct access (LVF). They fill in the blank of the effect of those inputs through “abstractions,” i.e. explicit endorsements or propositional attitudes that take linguistic form, often mistaken on their own terms as ideology (RVF).

To be continued … [note: Zizek (2017: 119ff) also finds the split-brain useful for thinking about ideology, though his argument confounds and mystifies it with Pokemon Go]

 

References

Cohen-Cole, Jamie. (2005). “The Reflexivity of Cognitive Science: The Scientist as a Model of Human Nature.” History of the Human Sciences 18: 107-139.

Galanter, Eugene and Murray Gerstenhaber. (1956). “On Thought: The Extrinsic Theory.” Psychological Review 63: 218-227.

Gazzaniga, Michael. (1967). “The Split-Brain in Man.” Scientific American 217: 24-29.

_____. (1998). “The Split-Brain Revisited.” Scientific American 279: 51-55.

Gazzaniga, Michael and Joseph LeDoux. (1978). The Integrated Mind. Plenum Press.

Geertz, Clifford. (1973). “Ideology as a Cultural System.” In The Interpretation of Cultures. Basic Books.

Jost, John. (2006). “The End of the End of Ideology.” American Psychologist 61: 651-670.

Martin, John Levi. (2015). “What is Ideology?” Sociologia, Problemas e Práticas 77: 9-31.

_____. (2011). The Explanation of Social Action. Oxford University Press.

Marx, Karl. (1973). The Grundrisse. Penguin.

Zizek, Slavoj. (2017). Incontinence of the Void. MIT Press.