“Learning By Nodes”: Dendritic Learning and What It Means (Or Not) for Cultural Sociology

In a paper published earlier this year in Scientific Reports and further discussed in a later ACS Chemical Neuroscience article, a group of researchers argues that learning might not work the way we previously thought. The researchers (Sardi et al. 2018a, 2018b) explain that the dominant conceptualization of learning in cognitive neuroscience—synaptic learning, or “Hebbian learning” (Hebb 1949)—is wrong. Instead, using a series of computational models and experiments with synaptic blockers and neuronal cultures (see Sardi et al. 2018a:4-7), the authors find evidence for a different type of learning—what they refer to as “dendritic learning.” Just as “Copernicus was the first to articulate loudly that the earth revolves around the sun and not vice versa, even though all the accumulated astronomical evidence at that time fit the old postulation,” the researchers proclaim, so are they the first to “[swim] against conventional wisdom” of Hebbian learning theory (2018b:1231).

Of what consequence is this newfound process of dendritic learning for cultural sociology? Should we care at all? I’ll try to briefly describe some of the potential consequences of dendritic learning for cultural sociology; but, spoiler alert, I am not sure one way or the other whether these consequences are actually consequential for how we do sociology. Perhaps taking a peek at what dendritic learning is and how it differs from conventional understandings of how learning works is a nice place to start.

Figure 1. Are We Witnessing a “Revolution of the Cognitive Spheres”?
Note: Image from Copernicus’ On the Revolutions of the Heavenly Spheres (Palca 2011).


For going on 70 years, the prevailing explanation for how learning works has been synaptic learning. Building from Hebb’s (1949) The Organization of Behavior, the idea behind synaptic learning is that if an activity stimulates a neuron which in turn stimulates another neuron, and if that activity is repeated over time, then the first neuron becomes a more efficient stimulator of the second neuron and the two become more strongly connected in the brain.

Neuron-neuron stimulation occurs through synapses, the (usually) chemical or (less frequently) electrical junctions between neurons across which information is transmitted. Synaptic learning, then, is a type of “activity-dependent synaptic plasticity” (Choe 2015:1305). Repeated practices or exposures to a certain stimulus modify the synaptic strength between two neurons: when the practice/exposure is repeated, the two neurons become more tightly associated in the brain, and when it is not, the association weakens. This process occurs relatively slowly.
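To make the mechanism concrete, here is a minimal sketch of a Hebbian-style weight update in Python. The exact rule and all of the numbers are my own illustrative choices, not anything from the papers discussed here:

```python
def hebbian_update(w, pre, post, lr=0.1, decay=0.01):
    """One time step of a toy Hebbian rule.

    w     -- current synaptic strength between the two neurons
    pre   -- activity (0.0-1.0) of the stimulating (presynaptic) neuron
    post  -- activity (0.0-1.0) of the stimulated (postsynaptic) neuron
    lr    -- learning rate: how fast co-activity strengthens the synapse
    decay -- slow passive weakening when the pairing is not repeated
    """
    return w + lr * pre * post - decay * w

w = 0.1
for _ in range(50):                  # stimulus repeated: neurons co-active
    w = hebbian_update(w, pre=1.0, post=1.0)
strengthened = w                     # the synapse is now stronger

for _ in range(50):                  # stimulus no longer repeated
    w = hebbian_update(w, pre=0.0, post=0.0)
print(w < strengthened)              # True: the association weakens again
```

Note the two features of the rule that the text emphasizes: strengthening depends on the pre- and postsynaptic neurons being active together, and the change accumulates gradually over repeated trials.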

Synaptic learning is the inspiration behind the old adage that “neurons that fire together wire together.” Until very recently, this was how we assumed new neural coalitions formed in biological neural networks. Consider an example from Luke Muehlhauser over on the Less Wrong blog (Muehlhauser 2011). Think back to Pavlov’s experiment on classical conditioning (Pavlov 1910): a dog is given food when the researcher rings a bell, and the timing between the bell ringing and the presentation of food is manipulated. At first, there is no association between the neurons stimulated by bell ringing and the neurons that trigger salivation; they are, ostensibly, mutually exclusive actions. However, if the researcher rings the bell and presents the food at the same time (or at close enough intervals), the neurons that fire when food is present and the neurons that fire with bell ringing are activated together. Over repeated trials, the synapses between “bell ringing” and “salivation” neurons become stronger and, eventually, simply ringing the bell induces salivation without the presentation of food (see Figure 2).
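The Pavlov story maps onto a Hebbian rule directly. Here is a toy simulation, with a threshold and learning rate that are purely illustrative:

```python
THRESHOLD = 0.5                      # total input needed for the neuron to fire

def salivates(w_food, w_bell, food, bell):
    """Does the salivation neuron fire given the two weighted inputs?"""
    return w_food * food + w_bell * bell >= THRESHOLD

w_food, w_bell = 1.0, 0.0            # food is innately effective; the bell is not
print(salivates(w_food, w_bell, food=0, bell=1))   # False: bell alone, no response

for _ in range(20):                  # conditioning: bell and food presented together
    fired = salivates(w_food, w_bell, food=1, bell=1)
    w_bell += 0.05 * 1 * fired       # Hebbian: bell input active while neuron fires

print(salivates(w_food, w_bell, food=0, bell=1))   # True: bell alone now suffices
```

The bell-to-salivation synapse strengthens only during trials in which the bell input is active while the neuron fires (thanks to the food), which is exactly the “fire together, wire together” logic.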

Figure 2. Synaptic Learning with Pavlov’s Experiment
Note: Reprinted from Less Wrong blog (Muehlhauser 2011).

Sardi and colleagues refer to synaptic learning as “learning by links” (Sardi et al. 2018a:1), since learning occurs through the synapses that link neurons together. Their research, however, suggests a different type of learning—dendritic learning, or “learning by nodes” (Sardi et al. 2018a:2). In short, in this mode of learning the workhorses of the neuron are not the synapses but the dendrites. In a neuron, dendrites are the long, treelike extensions that connect the cell body (the soma, which contains the cell nucleus) to the synapses that themselves “connect” the neuron to other neurons.

Take a look at Figure 3, a neuron cell’s anatomy. The dendrites are responsible for taking in information from other neurons and passing it along into the soma, while the axon is responsible for passing the information on to other neurons via the axon terminals—which are themselves connected to the next neuron’s dendrites through synapses, thus propagating information transmission across the neural network. Without dendrites, information cannot be transmitted into the body of the neuron: e.g., damaged or abnormal dendrites are linked to brain under-connectivity issues associated with autism (Martínez-Cerdeño, Maezawa, and Jin 2016). Trying to construct new neural networks without dendrites is like trying to have group deliberation with all talk and no listening.

Figure 3. A Neuron’s Anatomy
Note: Reprinted from OpenStax (2018), redirected from Khan Academy (2018).

So, how does dendritic learning differ functionally from synaptic learning? While synaptic learning is based on the idea of synaptic plasticity, dendritic learning revolves around (you guessed it) a sort of dendritic plasticity: given increasing or decreasing levels of exposure to a neuron-activating stimulus, the neuron’s “dendritic excitability” can grow or diminish while the strength of the synapses remains relatively constant (Neuroskeptic 2018).

Consider Figure 4. In both panels, the teardrop object at the bottom represents the neuron’s cell body, which fires if the input signals from the dendrites are strong enough to push an outgoing signal down through the axon and into the dendrites of the next neuron. The long treelike branches are the dendrites, and the tips are the synapses that connect the neuron’s dendrites to the axon terminals of other (not shown) neurons. The left panel illustrates conventional synaptic learning, where the synapses themselves are weighted upward or downward (indicated by the red valves at the tips of the branches) depending on the extent of stimulus exposure. The right panel shows dendritic learning: it is the extent to which a neuron’s dendrites are in a high state of stimulation, and not the strength of the synapses linking the neuron to other neurons, that determines the strength of the input signal and therefore whether or not the neuron fires. In dendritic learning, then, there are far fewer “learning parameters,” since the dendrites, not the synapses, are responsible for the learning (ScienceDaily 2018).

Figure 4. Synaptic Learning (left) vs. Dendritic Learning (right)
Note: Reprinted from ScienceDaily (2018).
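The contrast between the two panels can be sketched schematically. Assuming a toy neuron with 3 dendrites and 10 synapses per dendrite (my numbers, not the papers’), “learning by links” trains one weight per synapse, while “learning by nodes” trains one excitability term per dendrite over fixed synaptic strengths:

```python
N_DENDRITES = 3
SYNAPSES_PER_DENDRITE = 10

# "Learning by links": every synapse carries its own trainable weight.
synaptic_params = N_DENDRITES * SYNAPSES_PER_DENDRITE
# "Learning by nodes": only the per-dendrite excitability is trainable.
dendritic_params = N_DENDRITES
print(synaptic_params, dendritic_params)   # 30 3

def fires(inputs, dendrite_gain, fixed_weights, threshold=1.0):
    """Fire iff the gain-scaled input summed over all dendrites clears threshold.

    inputs        -- per-dendrite lists of incoming synaptic signals
    dendrite_gain -- the trainable "dendritic excitability" of each branch
    fixed_weights -- static synaptic strengths (not adjusted during learning)
    """
    total = sum(
        gain * sum(w * x for w, x in zip(weights, signals))
        for gain, weights, signals in zip(dendrite_gain, fixed_weights, inputs)
    )
    return total >= threshold

signals = [[1.0] * SYNAPSES_PER_DENDRITE] * N_DENDRITES
weights = [[0.1] * SYNAPSES_PER_DENDRITE] * N_DENDRITES
print(fires(signals, [1.0, 1.0, 1.0], weights))   # True: excitable dendrites
print(fires(signals, [0.1, 0.1, 0.1], weights))   # False: same synapses, damped dendrites
```

The last two calls make the point of the right panel: holding the synaptic strengths fixed, whether the neuron fires is decided entirely by the dendritic excitability terms.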


The “Neuroskeptic” over at Discover Magazine reviewed the evidence from the Sardi et al. papers and suggests that “[a]t best they have shown that dendritic learning also happens [in addition to synaptic learning],” and that “[they] don’t think Copernicus has returned to earth just yet” (Neuroskeptic 2018). I agree with Neuroskeptic in terms of what this means for neuroscience, largely because they are the neuroscientist and I am not. That said, there do seem to be potential implications for how we do cultural sociology, though the potential may be greater for some subfields than for others.

I’m Not Sure What this Adds for How Sociologists Study Learning

The existence of dendritic learning has at least two major implications for cognitive neuroscience. First, learning may happen at much faster timescales than previously thought. Second, weak synapses matter a lot. In terms of timescale, it seems that the brain isn’t that bad at quick adaptation—at least relative to traditional Hebbian learning. As Sardi and colleagues note, “[t]his dynamic brain activity leads to the capability that when we think about an issue several times we may find different solutions” (Shrourou 2018). For the importance of weak synapses, the researchers point out that dendritic strengths are “self-oscillating” (2018b:1231), where weak synapses effectively “temper” the dendritic weights and prevent them from taking on extreme values. In other words, “dendritic learning enables stabilization around intermediate [dendritic strength] values” (Sardi et al. 2018a:4). These implications are pretty important for neuroscientists and medical researchers studying various diseases of the brain (Sardi et al. 2018b:1231-32).
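The “stabilization around intermediate values” claim can be caricatured with toy dynamics. The damping term below stands in for the tempering role of the weak synapses; the dynamics are my own invention for illustration, not Sardi et al.’s actual model:

```python
def step(d, drive, damping=0.3, target=0.5):
    """One update of a dendritic strength d: external drive pushes it around,
    while damping (standing in for the weak synapses) pulls it back toward
    an intermediate target value."""
    return d + drive - damping * (d - target)

d = 0.0
history = []
for t in range(200):
    drive = 0.05 if t % 2 == 0 else -0.05   # alternating stimulation
    d = step(d, drive)
    history.append(d)

late_average = sum(history[-20:]) / 20
print(abs(late_average - 0.5) < 0.05)       # True: the strength settles near the
                                            # middle, oscillating rather than
                                            # saturating at an extreme
```

Without the damping term, the repeated drive would push the strength toward an extreme value; with it, the trajectory self-oscillates around the intermediate target, which is the qualitative behavior the quoted passages describe.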

What does all this mean for cultural sociologists? It might be too early to tell. Dendritic learning might be faster than synaptic learning, but the time scales in the experiments are at much smaller intervals (minutes) than the learning processes of interest to sociologists. The researchers note that future studies should “investigate . . . [dendritic learning] efficiency and available learning time scales in more realistic scenarios” (2018b:1231), so it is an empirical question whether or not the learning speed differentials between synaptic and dendritic learning are a wash at longer timescales. In terms of theoretical leverage, then, dendritic learning may or may not offer much over and above how we already talk about learning in culture and cognition studies (see Lizardo et al. 2016:293-95). At the end of the day, for cultural sociologists it may all look like GOFILT—Good Old Fashioned Implicit Learning Theory—in which case the difference between synaptic and dendritic learning can be taken as ontologically true but analytically inconsequential. Only time (pun intended) will tell.

The Payoff May Come Sooner for Computational Social Science

In addition to understanding the learning processes behind biological neural networks and brain disorders, Sardi and colleagues also note that this “paradigm shift” matters for developing machine learning algorithms built to mimic human learning (2018b:1231). In natural language processing, for instance, if synaptic learning isn’t the baseline model of human learning (itself an empirical question), then perhaps analytical strategies that build associations between terms or documents based on term frequencies and co-occurrences aren’t based on the best cognitive model for machine learning.
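For readers unfamiliar with these strategies, here is the sort of co-occurrence counting the paragraph has in mind, reduced to a toy example (real analyses would use dedicated libraries and much larger corpora):

```python
from collections import Counter
from itertools import combinations

docs = [
    "dendritic learning brain",
    "synaptic learning brain",
    "markets traders bubbles",
]

# Count how often each pair of terms appears in the same document; these
# counts are the raw material for term-association measures.
cooc = Counter()
for doc in docs:
    terms = sorted(set(doc.split()))
    for a, b in combinations(terms, 2):
        cooc[(a, b)] += 1

print(cooc[("brain", "learning")])   # 2: together in two documents
print(cooc[("brain", "traders")])    # 0: never in the same document
```

The implicit cognitive model here is associationist: terms that repeatedly appear together become more strongly linked, which is why such methods are often glossed as Hebbian in spirit.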

But at face value I’m skeptical of this last proposition—I like word count methods for analyzing meaning, others do too (Nelson 2014; Underwood 2013), and I’ve read enough papers that make defensible claims using them to sell me on their continued use. That said, we have not yet seen dendritic learning rules implemented in machine learning algorithms (but see Sardi et al. 2018a:2-3 for an example of dendritic learning rules in a series of perceptron models), and they might prove particularly consequential in deep learning tasks and artificial neural network models. These sorts of machine learning algorithms have not gained much traction in sociology, though, so, for now, it seems that the utility of distinguishing between synaptic and dendritic learning for culture and cognition studies is truly a waiting game.

I can continue all of my work without making these distinctions, and I suspect that most of the people reading this post are in the same position.


Choe, Yoonsuck. 2015. “Hebbian Learning.” Pp. 1305-09 in Encyclopedia of Computational Neuroscience, edited by D. Jaeger and R. Jung. New York: Springer.

Hebb, Donald O. 1949. The Organization of Behavior: A Neuropsychological Theory. New York: Wiley.

Khan Academy. 2018. “Overview of Neuron Structure and Function.” Khan Academy. Retrieved October 16, 2018 (https://www.khanacademy.org/science/biology/human-biology/neuron-nervous-system/a/overview-of-neuron-structure-and-function).

Lizardo, Omar, Robert Mowry, Brandon Sepulvado, Dustin S. Stoltz, Marshall A. Taylor, Justin Van Ness, and Michael Wood. 2016. “What Are Dual Process Models? Implications for Cultural Analysis in Sociology.” Sociological Theory 34(4):287-310.

Martínez-Cerdeño, Verónica, Izumi Maezawa, and Lee-Way Jin. 2016. “Dendrites in Autism Spectrum Disorders.” Pp. 525-43 in Dendrites: Development and Disease, edited by K. Emoto, R. Wong, E. Huang, and C. Hoogenraad. Tokyo: Springer.

Muehlhauser, Luke. 2011. “A Crash Course in the Neuroscience of Human Motivation.” Less Wrong. Retrieved October 16, 2018 (https://www.lesswrong.com/posts/hN2aRnu798yas5b2k/a-crash-course-in-the-neuroscience-of-human-motivation).

Nelson, Laura K. 2014. “Computer-Assisted Content Analysis and Sociology: What You Should Know.” Bad Hessian. Retrieved October 17, 2018 (http://badhessian.org/2014/01/computer-assisted-content-analysis-and-sociology-what-you-should-know/).

Neuroskeptic. 2018. “Is ‘Dendritic Learning’ How the Brain Works?” Discover Magazine. Retrieved October 16, 2018 (http://blogs.discovermagazine.com/neuroskeptic/2018/05/11/dendritic-learning/#.W8aX4P5KjdT).

OpenStax. 2018. “Neurons and Glial Cells.” OpenStax CNX. Retrieved October 16, 2018 (https://cnx.org/contents/GFy_h8cu@9.87:c9j4p0aj@3/Neurons-and-Glial-Cells).

Palca, Joe. 2011. “For Copernicus, A ‘Perfect Heaven’ Put Sun At Center.” NPR: Morning Edition. Retrieved October 16, 2018 (https://www.npr.org/2011/11/08/141931239/for-copernicus-a-perfect-heaven-put-sun-at-center).

Pavlov, Ivan. 1910. The Work of the Digestive Glands. London: C. Griffin & Company.

Sardi, Shira, Roni Vardi, Amir Goldental, Anton Sheinin, Herut Uzan, and Ido Kanter. 2018a. “Adaptive Nodes Enrich Nonlinear Cooperative Learning Beyond Traditional Adaptation By Links.” Scientific Reports 8(1):5100.

Sardi, Shira, Roni Vardi, Amir Goldental, Yael Tugendhaft, Herut Uzan, and Ido Kanter. 2018b. “Dendritic Learning as a Paradigm Shift in Brain Learning.” ACS Chemical Neuroscience 9:1230-32.

ScienceDaily. 2018. “The Brain Learns Completely Differently than We’ve Assumed Since the 20th Century.” ScienceDaily. Retrieved October 16, 2018 (https://www.sciencedaily.com/releases/2018/03/180323084818.htm).

Shrourou, Alina. 2018. “Dendritic Learning Occurs Much Faster and In Closer Proximity to Neurons, Shows Study.” News Medical: Life Sciences. Retrieved October 16, 2018 (https://www.news-medical.net/news/20180830/Dendritic-learning-occurs-much-faster-and-in-closer-proximity-to-neurons-shows-study.aspx).

Underwood, Ted. 2013. “Wordcounts Are Amazing.” The Stone and the Shell. Retrieved October 17, 2018 (https://tedunderwood.com/2013/02/20/wordcounts-are-amazing/).

Limits of innateness: Are we born to see faces?

Sociologists tend to be skeptical of claims that individuals are consistent across situations, as a recent exchange on Twitter exemplifies. This exchange was partially spurred by revelations that the famous Stanford Prison Experiment (which supposedly showed that people will quickly engage in behaviors commensurate with their assigned roles, even if it means being cruel to others) was even more problematic than previously thought.


The question of individual “durability” is sometimes framed as “nature vs. nurture,” and this is certainly part of the matter. In sociology, however, this skepticism of “durability” often goes much further than innateness, and sometimes leads sociologists to suggest individuals are inchoate blobs until situations come along to construct us (or interlocutors may resort to obfuscation by touting the truism that humans are always in a situation). If pushed on the topic, however, even the staunchest situationalist would likely concede that humans are born with some qualities, and the real question is: what are the limits of such innateness? What kinds of qualities can be innate? To what extent are these innate qualities human universals? And if we are “born with it,” can “it” change, and if so, how and to what extent? In Stephen Turner’s new Cognitive Science and the Social, he puts the matter succinctly:

“…children quickly acquire the ability to speak grammatically. This seems to imply that they already had this ability in some form, such as a universal set of rules of language stored in the brain. If one begins with this problem, one wants a model of the brain as “language ready.” But why stop there? Why think that only grammatical rules are innate? One can expand this notion to the idea of the “culture-ready” brain, one that is poised and equipped to acquire a culture” (2018:44–45).

As I’ve previously discussed, the search for either the universal rules or a specialized module for language has, thus far, failed. Nevertheless, most humans must be “language-ready” in the minimal sense of having the ability to acquire the capacity to speak and understand speech. But answering the question of where innateness ends and enculturation begins is not easy, even for those without the disciplinary inclination toward strongly situationalist arguments.

Are we born to see faces?

How we identify faces is a good place to explore this difficulty: Do we learn to identify faces, or are we born to see them? And if we are born to see faces, is this ability refined through use, and to what extent? Enter the fusiform face area (FFA). Just like language, the FFA is often used as evidence for the more general arguments of functional localization and domain specificity. The argument goes: facial recognition is produced not by generic cognitive processes involved in vision (or other generic processes), but rather by an inborn special-purpose module.

One reason why faces are an even better candidate than language for grappling with the question of innateness is that the human fetus is exposed to language while in the womb. Human fetuses gain some sense of prosody and tonality, and as a result a basic sense of grammar, in the course of development in utero. There is no comparable exposure to faces, however. Another reason is, as the Gestalt psychologists argued, that faces have an irreducible structure such that they are perceived as complete wholes even when viewing only a part: “the whole is something else than the sum of its parts, because summing is a meaningless procedure, whereas the whole-part relationship is meaningful” (Koffka 1935:176).

Facial recognition encompasses two related functions: distinguishing faces from non-face objects and distinguishing among faces. The key debate within this area of cognitive neuroscience is whether there is a module specialized for one or both of these processes (Kanwisher, McDermott, and Chun 1997; Kanwisher and Yovel 2006), as opposed to a distributed and generic cognitive process (Haxby et al. 2001). This debate goes back to the observation that humans struggle to recognize and remember faces that are upside down, more so than for any non-face object (Diamond and Carey 1986), suggesting something about faces makes them unique. The proposal that facial recognition is the result of a specialized module, however, begins with a relatively recent paper by Kanwisher et al. (1997). Using functional magnetic resonance imaging (which I’ve discussed in detail in previous posts), 15 subjects were shown various common objects as well as faces. In 12 of those subjects, a specific area of the brain was more active when they saw faces than when they saw non-face objects. On its face, this seems like reasonable evidence that humans are born with a module necessary for identifying faces.

However, when one squares this claim with the underlying logic of fMRI—it is used to (a) measure relative activation, not an on/off process, and (b) its voxel and temporal resolution is far too coarse to conclude a region is homogeneously activated—the claim that the FFA is a functionally specialized module for facial recognition weakens considerably. These areas are not entirely inactive when viewing non-face objects. Indeed, relative to baseline activation, subsequent research found the FFA is significantly more active when viewing various objects (Grill-Spector, Sayres, and Ress 2006). Specifically, the level of specificity of the stimulus (e.g., faces tend to be individuals whereas chairs tend to be generic) and the participant’s level of expertise with the stimulus (e.g., car and bird enthusiasts) predicted greater relative activation (Gauthier et al. 2000; Rhodes et al. 2004).

Finally, even if we are born to distinguish faces from non-faces, the ability to distinguish among faces is considerably trained by early socialization, and such socialization introduces a lot of variation among people. For example, one of the earliest attempts to measure facial recognition concluded “that women are perhaps superior to men in the test; that salespeople are superior to students and farm people; that fraternity people are perhaps superior to non-fraternity people…” (Howells 1938:127).

Subsequent research in this vein found individuals are better at distinguishing among their racial/ethnic ingroups than their outgroups. In an early study of black and white students from a predominantly black university and a predominantly white university, researchers found participants more easily discriminated among faces of their own race. They also found “white faces were found more discriminable” overall, which they suggest may be the result of the fact that “the distribution of social experience is such that both black persons and white persons will have had more exposure to white faces than black faces in public media…” (Malpass and Kravitz 1969:332). Summarizing more recent work, Kubota et al. (2012) state that “participants process outgroup members primarily at the category level (race group) at the expense of encoding individuating information because of differences in category expertise or motivated ingroup attention.”

Why should sociologists care?

To summarize, the claim that facial recognition emerges from an innate, functionally specialized cognitive module is weakened in four ways: the FFA responds to more generic features faces share with other objects; the FFA is implicated in a distributed neural network rather than solely a discrete module; the FFA is used for non-facial recognition functions; and, finally, facial recognition is trained by our (social) experience. Why should sociologists care? I think there are three reasons. First, innateness is not deterministic or specific but rather constraining and generic. Second, these constraints ripple throughout our social experience, forming the contours of cultural tropes, but are not immutable. Third, limited innateness does not mean individuals are not durable across situations, even (near) universally so.

A dispositional and distributed theory of cognition and action accounts for object recognition by its use: “information about salient properties of an object—such as what it looks like, how it moves, and how it is used—is stored in sensory and motor systems active when that information was acquired” (Martin 2007:25). This is commensurate with the broad approach many of the posts on this blog have been working with. Perhaps, however, there is a special class of objects for which this is not exactly the case. In other words, the admittedly weak innateness of distinguishing unfamiliar faces from non-face objects is, perhaps, evidence that we are “born with” some forms of nondeclarative knowledge (Lizardo 2017).

Such nondeclarative knowledge, however, may be repurposed for cultural ends. Following the logic of neural exaptation, discussed in a previous post, humans can be born with predispositions, especially related to very generic cognitive processes, which are further trained, refined, and recycled for novel uses, ones which are nevertheless constrained in ways that yield testable predictions. A fascinating example related to facial perception is anthropomorphization. If rudimentary facial recognition is innate (and therefore probably evolutionarily old), this inherently social-cognitive process is being reused for non-social purposes (i.e., non-social in the restricted sense of interpersonal interaction). The facial recognition network—together with other neuronal networks—is used to identify people and predict their behavior, and this may be adapted to non-human animate and inanimate objects, like natural forces, as well as to anonymous social structures, like financial markets.

What this means, following the logic of neural reuse and conceptual metaphor theory, is that the target domain (e.g., derivative markets, earthquakes) is “contaminated” by predispositions which originally dealt with the source domain (here, interpersonal interaction). For instance, imagining the intentions of thousands of unknown traders as if inferring the intentions of an interlocutor may lead traders to “ride” financial bubbles (De Martino et al. 2013). What is and is not innate, therefore, is a messy question to answer, even for those without a disciplinary distrust of innateness claims. Although cognitive neuroscientists are making headway, it remains an empirical question which objects are recognized innately and the extent to which that recognition is robust to enculturation and neural recycling.

More importantly, the question of individual durability across situations should not be reduced solely to “nature vs. nurture.” That is, we must grapple with the question of how easily these processes, once trained in an individual (during “primary socialization”), can be re-trained, if at all. In John Levi Martin’s Thinking Through Theory (2014:249), the third of his “Newest Rules of Sociological Method” is pessimistic in this regard: “Most of what people think of as cultural change is actually changes in the compositions of populations.” That is, even if we were to bar the possibility of innateness in any strong sense, once individuals reach a certain age they are likely to be fairly consistent across situations, with little chance of altering in fundamental ways.


De Martino, Benedetto, John P. O’Doherty, Debajyoti Ray, Peter Bossaerts, and Colin Camerer. 2013. “In the Mind of the Market: Theory of Mind Biases Value Computation during Financial Bubbles.” Neuron 79(6):1222–31.

Diamond, Rhea and Susan Carey. 1986. “Why Faces Are and Are Not Special: An Effect of Expertise.” Journal of Experimental Psychology. General 115(2):107.

Gauthier, I., P. Skudlarski, J. C. Gore, and A. W. Anderson. 2000. “Expertise for Cars and Birds Recruits Brain Areas Involved in Face Recognition.” Nature Neuroscience 3(2):191–97.

Grill-Spector, Kalanit, Rory Sayres, and David Ress. 2006. “High-Resolution Imaging Reveals Highly Selective Nonface Clusters in the Fusiform Face Area.” Nature Neuroscience 9(9):1177–85.

Haxby, J. V., M. I. Gobbini, M. L. Furey, A. Ishai, J. L. Schouten, and P. Pietrini. 2001. “Distributed and Overlapping Representations of Faces and Objects in Ventral Temporal Cortex.” Science 293(5539):2425–30.

Howells, Thomas H. 1938. “A Study of Ability to Recognize Faces.” Journal of Abnormal and Social Psychology 33(1):124.

Kanwisher, Nancy and Galit Yovel. 2006. “The Fusiform Face Area: A Cortical Region Specialized for the Perception of Faces.” Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences 361(1476):2109–28.

Kanwisher, N., J. McDermott, and M. M. Chun. 1997. “The Fusiform Face Area: A Module in Human Extrastriate Cortex Specialized for Face Perception.” The Journal of Neuroscience: The Official Journal of the Society for Neuroscience 17(11):4302–11.

Koffka, Kurt. 1935. Principles of Gestalt Psychology. New York: Harcourt, Brace.

Kubota, Jennifer T., Mahzarin R. Banaji, and Elizabeth A. Phelps. 2012. “The Neuroscience of Race.” Nature Neuroscience 15(7):940–48.

Lizardo, Omar. 2017. “Improving Cultural Analysis: Considering Personal Culture in Its Declarative and Nondeclarative Modes.” American Sociological Review 82(1):88–115.

Malpass, R. S. and J. Kravitz. 1969. “Recognition for Faces of Own and Other Race.” Journal of Personality and Social Psychology 13(4):330–34.

Martin, Alex. 2007. “The Representation of Object Concepts in the Brain.” Annual Review of Psychology 58(1):25–45.

Martin, John Levi. 2014. Thinking Through Theory. New York: W. W. Norton.

Rhodes, Gillian, Graham Byatt, Patricia T. Michie, and Aina Puce. 2004. “Is the Fusiform Face Area Specialized for Faces, Individuation, or Expert Individuation?” Journal of Cognitive Neuroscience 16(2):189–203.

Turner, Stephen P. 2018. Cognitive Science and the Social: A Primer. Routledge.
