Is The Brain a Modular Computer?

As discussed in the inaugural post, cognitive science encompasses numerous sub-disciplines, one of which is neuroscience. Broadly defined, neuroscience is the study of the nervous system: how behavioral (e.g., walking), biological (e.g., digesting), and cognitive (e.g., believing) processes are realized in the physical nervous systems of biological organisms.

Cognitive neuroscience, then, asks: how does the brain produce the mind?

As a starting point, this subfield takes two positions vis-à-vis two kinds of dualism. The first is the rejection of Descartes’ “substance dualism,” which posits that the mind is a nonphysical “ideal” substance. The second is the assumption, referred to as “property dualism,” that so-called cognitive processes are somehow distinct from simple behavioral or biological processes. That is, processes we tend to label “cognitive” (imagining, calculating, desiring, intending, wishing, believing, etc.) are distinct from, yet “localizable” in, the physical structures of the brain and nervous system. As Philosophy Bro summarizes:

…substance dualism says “Oh no you’ve got the wrong thing entirely, stupid” and property dualism says “yeah, no, go on, keep looking at the brain, we’ll get it eventually.”

Broadening the scope to the cognitive sciences as a whole, including the philosophy of mind, one would be hard-pressed to find a contemporary scholar who takes substance dualism seriously. Thus, whatever the relationship between the mental and the neural, it cannot be that the mental is a nonphysical, ideal substance beyond empirical study.

The current debate, rather, is between various kinds of property dualist positions and those who argue against even property dualism. Without diving into these philosophical debates, however, it is helpful to get a handle on what different trends in cognitive neuroscience take the relationship between brains and minds to be. Here I will briefly review what is considered the classical and commonsensical view, which is a quintessential property dualist approach.

An 1883 phrenology diagram, from People’s Cyclopedia of Universal Knowledge, Wikimedia Commons

The Modular Computer Theory of Mind

The classic approach to localization suggests that the brain is composed of discrete, special-purpose “modules.” In many ways, this aligns with our folk psychology: the amygdala is the “fear center,” the visual cortex is the “vision center,” and so on. This approach is most often traced back to Franz Gall and his pseudo-scientific (and racist) “organology” and “cranioscopy,” later referred to as “phrenology.” He argued that there were 27 psychological “faculties,” each of which had a corresponding “sub-organ” in the brain.

While most of the work associated with Gall was discarded, the idea that cognitive processes could be located in discrete modules continued, most forcefully in the work of the philosopher Jerry Fodor, specifically The Modularity of Mind (1983). Fodor’s approach builds on Noam Chomsky’s generative grammar. Struck by the observation that young children quickly learn to speak grammatically “correct” sentences, Chomsky argued that language acquisition cannot proceed through imitation and trial-and-error alone. Instead, he proposed that human minds have innate (and universal) structures encoding the basic set of rules for organizing language. The environment simply activates different combinations of these rules, resulting in the variation across groups. With a finite set of rules, humans can learn to create an infinite number of combinations, but no amount of experience or learning will alter the rules. (I will save the evaluation of Chomsky’s approach to language acquisition for later, but it doesn’t fare well.)
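To make the combinatorial point concrete, here is a minimal sketch of a toy generative grammar in Python (my own illustration, not Chomsky’s formalism): a handful of fixed rewrite rules, applied recursively, can generate an unbounded number of sentences, while the rules themselves never change.

```python
import random

# A toy context-free grammar (illustrative only, not Chomsky's actual
# formalism). Because "NP" can re-expand through "VP", the finite rule
# set generates an unbounded number of sentences.
GRAMMAR = {
    "S": [["NP", "VP"]],
    "NP": [["the", "N"], ["the", "N", "that", "VP"]],  # recursive branch
    "VP": [["V"], ["V", "NP"]],
    "N": [["child"], ["dog"], ["idea"]],
    "V": [["sees"], ["chases"], ["imagines"]],
}

def generate(symbol="S"):
    """Recursively expand a symbol into a list of terminal words."""
    if symbol not in GRAMMAR:  # terminal word: return it as-is
        return [symbol]
    expansion = random.choice(GRAMMAR[symbol])
    return [word for part in expansion for word in generate(part)]

for _ in range(3):
    print(" ".join(generate()))
# e.g. "the dog that chases the child sees the idea"
```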

Fodor took this one step further and argued that the fundamental contents of “thought” were language-like in this combinatorial sense, or what has come to be known as “mentalese.” In The Language of Thought (1975), Fodor proposed that, in order to learn anything in the traditional sense, humans must already have some kind of language-like mental contents to work with. As Stephen Turner (2018:45) summarizes in his excellent new Cognitive Science and the Social: A Primer:

If one begins with this problem, one wants a model of the brain as “language ready.” But why stop there? Why think that only grammatical rules are innate? One can expand this notion to the idea of the “culture-ready” brain, one that is poised and equipped to acquire a culture. The picture here is this: cultures and languages consist of rules, which follow a template but which vary in content, to a limited extent; the values and parameters need to be plugged into the template, at which point the culture or language can be rapidly acquired, mutual understanding is possible, and social life can proceed.

Such a thesis rests on the so-called “Computational Theory of Mind,” which, by analogy to computers, presumes that mental contents are symbols (à la “binary code”) combined through the application of basic principles to produce more complex thoughts. Perception is therefore “represented” by being associated with “symbols” in the mind, and it is through the organization of perception into symbolic formations that experience becomes meaningful. Different kinds of perceptions can be organized by different modules, but again, the basic symbols and principles unique to each module remain unmodified by use or experience.
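As a rough illustration of this picture (my own toy example, not Fodor’s formalism), thoughts on the computational view are structured expressions built from atomic symbols and manipulated purely by their form:

```python
from dataclasses import dataclass

# A toy "mentalese" sketch (illustrative only, not Fodor's formalism):
# atomic symbols are combined by purely formal composition rules into
# complex expressions, with no reference to what the symbols mean.

@dataclass(frozen=True)
class Symbol:
    name: str  # an atomic, uninterpreted token, e.g. "RED" or "BALL"

@dataclass(frozen=True)
class Expr:
    head: Symbol  # a predicate symbol, e.g. BELIEVES
    args: tuple   # sub-symbols or nested sub-expressions

def render(x):
    """Pretty-print an expression as predicate(arg, ...)."""
    if isinstance(x, Symbol):
        return x.name
    return f"{x.head.name}({', '.join(render(a) for a in x.args)})"

# Composition operates on shape alone: expressions nest to form
# ever more complex "thoughts" from the same fixed stock of symbols.
RED, BALL, JOHN = Symbol("RED"), Symbol("BALL"), Symbol("JOHN")
BELIEVES, IS = Symbol("BELIEVES"), Symbol("IS")

thought = Expr(BELIEVES, (JOHN, Expr(IS, (BALL, RED))))
print(render(thought))  # BELIEVES(JOHN, IS(BALL, RED))
```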

Despite the fact that such a symbol-computation approach to thinking is “anti-learning,” this view is often implicit in (non-cognitive) anthropology and (cultural) sociology. For example, Robert Wuthnow ([1987] 1989), Clifford Geertz (1966), and Jeffrey Alexander with Philip Smith (1993) were each inspired by the philosopher Susanne Langer’s Philosophy in a New Key, in which she argues for the central role of “symbols” in human life. She claims “the use of signs is the very first manifestation of mind” ([1942] 2009:29); thus, “material furnished by the senses is constantly wrought into symbols, which are our elementary ideas” ([1942] 2009:45). She also approvingly cites Arthur Ritchie’s The Natural History of Mind: “As far as thought is concerned, and at all levels of thought, it is a symbolic process…The essential act of thought is symbolization” (1936:278–9).

Conceptualizing “thinking” as the (computational) translation of perceptual experience into a private, world-independent symbolic language, however, makes it difficult to account for “meaning” at all. This is commonly called the “grounding problem” (which Omar discussed in his 2016 paper, “Cultural symbols and cultural power”), and it grapples with the following question (Harnad 1990:335): “How can the meanings of the meaningless symbol tokens, manipulated solely on the basis of their (arbitrary) shapes [or principles of composition], be grounded in anything but other meaningless symbols?”
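A crude way to see the problem (a toy illustration of mine, not Harnad’s own example): imagine a dictionary in which every symbol is defined only in terms of other symbols in the same dictionary. Chasing definitions never bottoms out in anything outside the symbol system:

```python
# A toy illustration of the symbol grounding problem (my example, not
# Harnad's): every symbol is "defined" only by pointing at other symbols,
# so following definitions just loops through more ungrounded tokens.
LEXICON = {
    "zebra": ["horse", "stripes"],
    "horse": ["animal", "rideable"],
    "stripes": ["pattern"],
    "animal": ["creature"],
    "creature": ["animal"],  # circular: definitions never leave the system
    "rideable": ["horse"],
    "pattern": ["stripes"],
}

def unpack(symbol, depth=0, max_depth=4):
    """Chase a symbol's definition; it only ever yields more symbols."""
    print("  " * depth + symbol)
    if depth < max_depth:
        for part in LEXICON.get(symbol, []):
            unpack(part, depth + 1, max_depth)

unpack("zebra")  # expands endlessly into tokens, never into meanings
```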

The problem is compounded when the mind is conceived as composed of multiple computational “modules,” each independent of the others. The most famous thought experiment demonstrating the problem with this approach is Searle’s (1980) “Chinese Room Argument.” To summarize, Searle posits a variation on the Turing Test in which both sides of the electronically mediated conversation are human (as opposed to one human and one machine), but the two speak different languages:

Suppose that I’m locked in a room and given a large batch of Chinese writing . . . . To me, Chinese writing is just so many meaningless squiggles. Now suppose further that after this first batch of Chinese writing I am given a second batch of Chinese script together with a set of rules . . . The rules are in English, and I understand these rules . . . and these rules instruct me how to give back certain Chinese symbols with certain sorts of shapes in response . . . . Suppose also that after a while I get so good at following the instructions for manipulating the Chinese symbols . . . my answers to the questions are absolutely indistinguishable from those of native Chinese speakers. (Searle 1980:350–1)

Despite his acquired proficiency at symbol manipulation, the man locked in the room does not understand Chinese, nor does the content of his responses have any meaning to him. Therefore, Searle concludes, thinking cannot be fundamentally computational in this sense.
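The structure of the argument can be caricatured in a few lines of code (a deliberately crude sketch of mine, not Searle’s formulation): the “room” is just a rule table mapping input strings to output strings, so producing fluent output requires no access to meaning at all.

```python
# A deliberately crude sketch of the Chinese Room (not Searle's own
# formulation): the "room" is a rule table mapping input shapes to
# output shapes. Nothing in it represents understanding.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "当然会。",    # "Do you speak Chinese?" -> "Of course."
}

def room(message: str) -> str:
    """Follow the rules for the incoming squiggles; meaning never enters."""
    return RULE_BOOK.get(message, "请再说一遍。")  # "Please say that again."

print(room("你好吗？"))  # fluent output, zero understanding inside the room
```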

There are viable alternatives to this modular computer theory of the mind, many of which may run counter to folk understandings but square better with the evidence. More importantly, these alternatives (which will be covered extensively in this blog) would likely be considered more “sociological,” as they invite (and often require) a role for both learning and context in explaining cognitive processes.

References

Alexander, Jeffrey C. and Philip Smith. 1993. “The Discourse of American Civil Society: A New Proposal for Cultural Studies.” Theory and Society 22(2):151–207.

Fodor, Jerry A. 1975. The Language of Thought. Harvard University Press.

Fodor, Jerry A. 1983. The Modularity of Mind. MIT Press.

Geertz, Clifford. 1966. “Religion as a Cultural System.” Pp. 1–46 in Anthropological Approaches to the Study of Religion, edited by M. Banton. London: Tavistock.

Harnad, Stevan. 1990. “The Symbol Grounding Problem.” Physica D: Nonlinear Phenomena 42(1–3):335–346.

Langer, Susanne K. [1942] 2009. Philosophy in a New Key: A Study in the Symbolism of Reason, Rite, and Art. Harvard University Press.

Lizardo, Omar. 2016. “Cultural Symbols and Cultural Power.” Qualitative Sociology 39(2):199–204.

Ritchie, Arthur D. 1936. The Natural History of Mind. Longmans, Green and Co.

Searle, John R. 1980. “Minds, Brains, and Programs.” Behavioral and Brain Sciences 3(3):417–424.

Turner, Stephen P. 2018. Cognitive Science and the Social: A Primer. Routledge.

Wuthnow, Robert. [1987] 1989. Meaning and Moral Order: Explorations in Cultural Analysis. Berkeley: University of California Press.
