In my previous post, I introduced the task of cognitive neuroscience, which is (largely) to locate processes we associate with the mind in the structures of the brain and nervous system (Tressoldi et al. 2012). I also discussed the classical and commonsensical approach, which conceptualizes the brain and mind relationship by analogy to computer hardware and software: distinct physical modules in the brain run operations on a limited set of innate codes (not unlike binary code) to produce outputs. One problem with this approach, which I discussed there, is theoretical: the grounding problem.
Another objection is empirical. If there is a strict relationship between functional modularity and structural modularity, then researchers using brain imaging technology should be able to identify these modules in neural architecture with some consistency across persons. However, researchers find no such obvious evidence (Genon et al. 2018). For example, some of the researchers who pioneered brain imaging techniques, specifically positron emission tomography (PET), attempted to find three components of the “reading system” (orthography, phonology, and semantics) (e.g., Petersen, Fox, Posner, Mintun, & Raichle 1989). A decade later, researchers continued to disagree about where the “reading system” is located (Coltheart 2004).
Part of the problem may be methodological: the technology remains rudimentary and advances come with tradeoffs (Turner 2016; Ugurbil 2016). fMRI is the most common imaging technique used in research, and high-resolution machines can measure blood flow in voxels (3-dimensional pixels) that are about 1 cubic millimeter in size. With an average of 86 billion neurons in the human brain (Azevedo et al. 2009), there are on the order of 100,000 neurons in one voxel (although neurons vary widely in size and structure—see NeuroMorpho.org for a database of about 90,000 digitally reconstructed human and nonhuman neurons), and each neuron has hundreds to thousands of synapses connecting it (with varying strengths) to neighboring neurons. To interpret fMRI data, activity within each voxel is averaged, and researchers must extract signal from noise using the kinds of statistical techniques familiar to many sociologists. Therefore, it is important to bear in mind that, as with all inferential analyses, findings are provisional.
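The neurons-per-voxel figure can be checked with back-of-the-envelope arithmetic. A minimal sketch, assuming an average adult brain volume of roughly 1,200 cm³ (a value not given in the post) and uniform neuron density:

```python
# Rough estimate of neurons per 1 mm^3 fMRI voxel.
# Assumption (not from the post): average brain volume ~1,200 cm^3.
TOTAL_NEURONS = 86e9      # Azevedo et al. 2009
BRAIN_VOLUME_CM3 = 1200   # assumed average adult brain volume
VOXEL_VOLUME_MM3 = 1      # high-resolution fMRI voxel

brain_volume_mm3 = BRAIN_VOLUME_CM3 * 1000  # 1 cm^3 = 1,000 mm^3
neurons_per_voxel = TOTAL_NEURONS / brain_volume_mm3 * VOXEL_VOLUME_MM3
print(round(neurons_per_voxel))  # ~72,000
```

Under these assumptions the estimate lands around 72,000 neurons per voxel, i.e., on the order of the 100,000 figure cited above (density is higher in cortical gray matter, where the commonly cited figure comes from).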
Connectionism in Linguistics and Artificial Intelligence
Even if non-invasive imaging resolution were to be extended to the neuronal level in real time, it may be that there are no special-purpose brain modules to be discovered. That is, it may be that cognitive functions are distributed across the brain and nervous system, in perhaps highly variable ways. Such an alternative relies on a network perspective and has many potential forebears, such as Aristotle, Hume, Berkeley, Herbert Spencer, and William James (Medler 1998).
Take for example Paul Broca and Carl Wernicke’s work on aphasia in the late 19th century. Noting the varieties of aphasia, or the loss of the ability to produce and/or understand speech or writing, Lichtheim (1885) concludes, following the work of Wernicke and Broca: different aspects of language (i.e., speaking, hearing speech, understanding speech, reading, writing, interpreting visual language) are associated with different areas of the brain, but connected via a neural network. Interruption along any one of these pathways can account for observations of the many kinds of aphasia.
If language were produced by a single discrete module, one would predict global language impairment, not piecemeal impairment. Thus, this work developed the notion that so-called psychological “faculties” like language were distributed across areas of the brain. Following the logic of such evidence, an alternative perspective, later referred to as connectionism, argues that the brain has no discrete functional regions and does not operate on symbols in a sequential process as a computer does, but rather is a distributed neural network which operates in parallel.
The connectionist approach (also called parallel distributed processing or PDP) coalesced primarily around the PDP Research Group, led by David Rumelhart and James McClelland at the Institute for Cognitive Science at UC-San Diego, as an alternative to the generative grammar approach to modeling brain activity. In particular, the publication of Parallel Distributed Processing in 1986 marked the beginning of the contemporary connectionist perspective.
A key difference from prior computational approaches is that connectionist theories dispense with the analogy of mind as software and brain as hardware. Mental processes are not encoded in some language of thought or translated into neural architecture; they are the neural networks. Furthermore, unlike Chomsky’s generative grammar, a connectionist approach to language can better account for geographical and/or sociological variation—dialects, accents, vocabulary, syntax—within what is commonly considered the “same” language. This is because learning (from a connectionist perspective) plays a key role in both language use and form, and thus is easily coupled with, for example, practice theoretic approaches which reconceptualize folk concepts, like beliefs, into a species of habit.
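The point that learning, not innate code, carries variation can be put in miniature. The toy sketch below (everything in it—the single unit, the two “communities,” the cue data—is invented for illustration) trains one linear-threshold unit with the delta rule on two different usage distributions; the identical architecture and learning rule end up with different weights, i.e., different “dialects” without different hardware:

```python
# Toy illustration: one connectionist unit, two "speech communities".
# The same learning rule, fed different usage statistics, yields
# different learned associations. All data here are made up.

def learn(examples, rate=0.5, epochs=20):
    # Delta-rule learning for a single linear-threshold unit.
    w = [0.0, 0.0]
    for _ in range(epochs):
        for x, target in examples:
            out = 1 if w[0] * x[0] + w[1] * x[1] >= 0 else 0
            for i in range(2):
                w[i] += rate * (target - out) * x[i]
    return w

# Two communities pair the same two cues with opposite usage.
community_a = [((1, 0), 1), ((0, 1), 0)]
community_b = [((1, 0), 0), ((0, 1), 1)]

print(learn(community_a) != learn(community_b))  # True: different weights
```

Both runs converge on weights that classify their own community’s usage correctly, yet the two weight vectors differ—variation as a product of learning history rather than of distinct innate modules.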
Take, for example, Basil Bernstein’s pioneering work on linguistic variation across class in England (1960). He demonstrated that, independent of non-verbal measures of intelligence, those in the middle class would use a broader range of vocabulary (and therefore would score higher on verbal measures of intelligence) because elaborating one’s thoughts (and talking about oneself) was an important practice (and therefore habit) for the middle class, but not for the working class. As Bernstein summarized, “The different vocabulary scores obtained by the two social groups may simply be one index, among many, which discriminates between two dominant modes of utilizing speech” (1960:276).
Connectionism and Cognitive Anthropology
Beginning in the 1960s, cognitive anthropologists saw problems with modeling culture using techniques like componential analysis (a technique borrowed from linguistics, see Goodenough 1956), which followed a decision-tree, or “checklist,” logic. It is here that a small theory group in cognitive anthropology—the “cultural models” school surrounding Roy d’Andrade while at Stanford in the 1960s and then UC-San Diego in the 1970s—informally circulated a working paper written by the linguist Charles Fillmore (while at Stanford) in which he outlined “semantic frames” as an alternative to checklist approaches to word meanings. In another paper circulated informally, “Semantics, Schemata, and Kinship,” referred to colloquially as “the yellow paper” (Quinn 2011:36), the anthropologist Hugh Gladwin (while also at Stanford) made a similar argument. Rather than explain the meaning of familial words like “uncle” in minimalist terms, anthropologists should consider how children acquire a “gestalt-like household schema” within which “uncle” fits as part of this larger cognitive structure.
However, it wasn’t until these cognitive anthropologists paired this new concept of cultural schemas with connectionism that, according to Roy d’Andrade (1995) and Naomi Quinn (2011), a paradigm shift occurred in cognitive anthropology in the 1980s and 1990s. Quinn recalls that the second chapter of Rumelhart et al.’s 1986 book, “Schemata and Sequential Thought Processes in PDP Models,” gave the schema a “new and more neurally convincing realization as a cluster of strong neural associations” (Quinn 2011:38).
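The gloss of a schema as a “cluster of strong neural associations” can be illustrated with a toy pattern-completion network. The sketch below is only an illustration of the general idea, not a model from the PDP volumes: it stores one made-up “household schema” as Hebbian weights between feature units, then shows a degraded cue settling back into the full pattern—the schema “filling in” missing pieces:

```python
# Toy Hopfield-style network: a "schema" stored as a cluster of strong
# pairwise associations; a partial cue is completed to the full pattern.
# Illustrative only; the "household" features are invented.

def train(patterns, n):
    # Hebbian weights: units active together get strong positive links.
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j]
    return w

def recall(w, state, steps=5):
    # Repeatedly update all units in parallel until the state settles.
    for _ in range(steps):
        state = [1 if sum(w[i][j] * state[j] for j in range(len(state))) >= 0
                 else -1 for i in range(len(state))]
    return state

# One stored "household schema": features coded +1 = present, -1 = absent.
schema = [1, 1, 1, -1, -1, -1]
w = train([schema], 6)

# A degraded cue (two features flipped) settles back into the schema.
cue = [1, -1, 1, -1, 1, -1]
print(recall(w, cue) == schema)  # True
```

This is the sense in which a schema is “neurally realized”: no unit stores the concept, yet the cluster of associations reconstructs it from partial input.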
Beyond d’Andrade and his students and collaborators like Quinn and Claudia Strauss at Stanford, Edwin Hutchins, who also worked closely with Rumelhart and McClelland’s PDP Research Group, was instrumental in extending connectionism from the individual brain to a social group with his concept of “distributed cognition.” Independently of this US West Coast cognitive revolution, the British anthropologist Maurice Bloch was one of the first to recognize the importance of connectionism for anthropology. His essay “Language, Anthropology and Cognitive Science” (1991) criticized his discipline for relying on an overly linguistic conceptualization of culture (a criticism which applies with full force to contemporary cultural sociology).
In a follow-up post, I will consider more recent advances in understanding the brain-mind relationship, specifically the concept of “neural reuse,” and assess the connectionist model in light of this work.
Azevedo, Frederico A. C. et al. 2009. “Equal Numbers of Neuronal and Nonneuronal Cells Make the Human Brain an Isometrically Scaled-up Primate Brain.” The Journal of Comparative Neurology 513(5):532–41.
Bernstein, Basil. 1960. “Language and Social Class.” The British Journal of Sociology 11(3):271–76.
Bloch, Maurice. 1991. “Language, Anthropology and Cognitive Science.” Man 26(2):183–98.
Coltheart, Max. 2004. “Brain Imaging, Connectionism, and Cognitive Neuropsychology.” Cognitive Neuropsychology 21(1):21–25.
d’Andrade, Roy G. 1995. The Development of Cognitive Anthropology. Cambridge University Press.
Genon, Sarah, Andrew Reid, Robert Langner, Katrin Amunts, and Simon B. Eickhoff. 2018. “How to Characterize the Function of a Brain Region.” Trends in Cognitive Sciences.
Goodenough, Ward H. 1956. “Componential Analysis and the Study of Meaning.” Language 32(1):195–216.
Lichtheim, Ludwig. 1885. “On Aphasia.” Brain 7:433–84.
Medler, David A. 1998. “A Brief History of Connectionism.” Neural Computing Surveys 1:18–72.
Petersen, Steven E., Peter T. Fox, Michael I. Posner, Mark Mintun, and Marcus E. Raichle. 1989. “Positron Emission Tomographic Studies of the Processing of Single Words.” Journal of Cognitive Neuroscience 1(2):153–70.
Quinn, Naomi. 2011. “The History of the Cultural Models School Reconsidered: A Paradigm Shift in Cognitive Anthropology.” Pp. 30–46 in A Companion to Cognitive Anthropology.
Rumelhart, David E., James L. McClelland, and the PDP Research Group. 1986. Parallel Distributed Processing. Cambridge, MA: MIT Press.
Tressoldi, Patrizio E., Francesco Sella, Max Coltheart, and Carlo Umiltà. 2012. “Using Functional Neuroimaging to Test Theories of Cognition: A Selective Survey of Studies from 2007 to 2011 as a Contribution to the Decade of the Mind Initiative.” Cortex 48(9):1247–50.
Turner, Robert. 2016. “Uses, Misuses, New Uses and Fundamental Limitations of Magnetic Resonance Imaging in Cognitive Science.” Philosophical Transactions of the Royal Society of London. 371(1705).
Ugurbil, Kamil. 2016. “What Is Feasible with Imaging Human Brain Function and Connectivity Using Functional Magnetic Resonance Imaging.” Philosophical Transactions of the Royal Society of London. 371(1705).