Elite Capture & Placing Blame
We are wired to blame ourselves or our leaders; we have trouble blaming both.
After writing last week’s post, I asked myself: “Why is it so hard for us to see situations where the elite (the term I use to describe those who hold power in a given context) and the marginalized (those who hold relatively less power in a given context) are simultaneously causing the suffering of the marginalized?” One example I explored in my master’s was college office hours. In a classroom setting, the professor typically holds more power than their students - especially if that professor is tenured (lol). So a professor can say something like “I’m crazy busy this semester, but you’re always welcome to book my office hours if you need help.” As an 18-year-old student at Harvard, I had an underlying assumption that every professor spent their free time changing the world. Wasting their time was therefore the worst possible thing I could do for myself, for them, and for all of humanity. As a result of this assumption, I would wait until I was struggling in a class before going to office hours.
Whose fault was it that I wasn’t going to office hours? The teacher offered them, and I didn’t take them - so 18-year-old me thought it was my fault. A few years later, as a 23-year-old master’s student, I realized it was the professors’ fault.
This past summer, I started wondering if it’s both everyone’s and no one’s fault. The idea of something being no one’s fault and/or everyone’s fault is hard for me to wrap my head around; my brain immediately wants to place blame on something or someone. Am I the only one who does this? Yes. Was that a rhetorical question? Also yes. Will I still ask Google? Absolutely.
My Google search brought me to a couple of papers: “People or systems? To blame is human. The fix is to engineer” and “Lifting the Veil: How the Brain Decides Blame and Punishment.”
A few quotes I pulled from “People or systems?”:
Since no system has ever built itself, since few systems operate by themselves, and since no systems maintain themselves, the search for a human in the path of failure is bound to succeed. If not found directly at the sharp end—as a 'human error' or unsafe act—it can usually be found a few steps back. The assumption that humans have failed therefore always vindicates itself. The search for a human-related cause is reinforced by past successes and by the fact that most accident analysis methods put human failure at the very top of the hierarchy, i.e., as among the first causes to be investigated.
Violations and related concepts are important topics in modern safety management, but one can approach them in one of two ways. One could treat violations as the behaviors of bad people, and proceed with person-centered solutions. Or one could treat violations as an indicator to better design those system properties that necessitate violations, and also design support systems that keep workers safe when they must go outside of protocol or work around a flawed system. Although many safety professionals advocate for the latter approach, the former person-centered approach appears to dominate in industry.
It is not that humans are uninvolved in accidents, but that in reality they are not typically the sole or primary causal agents. Nor are behaviors that contribute to accidents caused solely by internal factors.
What are the implications of person-centered tendencies and norms? What recommendations can be made to deal with the implications? To answer those questions, the next section considers each of the “three E’s of safety”: engineering, education, and enforcement (Goetsch, 2007).
Engineering: Person-centered tendencies de-prioritize engineering solutions, and instead rely on education and enforcement. Purely technical engineering solutions that do not consider the role of the human may not make sense to the person-centered practitioner. However, many resources exist for improving systems as a whole, human and all. Numerous writings on human factors engineering are a good source of information on how to achieve fit between workers and the rest of the work system in order to improve safety and performance (Eastman Kodak Company, 2004; Helander, 2006; Salvendy, 2006; Sanders & McCormick, 1993).
Education: If one believes that the cause of accidents is rooted in the person, one naturally turns to education (e.g., training, poster campaigns) for solutions. These may be ineffective, as in the case of a worker who is pushed by production pressures or inflexible technology to take risks…Many safety educators teach about different approaches to safety, about the fundamental attribution error, and about the many reasonable causes of so-called unsafe behavior—does this help students withhold the urge to assign cause or blame to people when people are not to blame? Or is telling people to “think systems” as ineffective as telling them to “be safer”?
Enforcement: Enforcement is often the result of accident investigations, which themselves appear to be biased toward person-centered findings. Dekker (2002) suggests transitioning from the use of narrower investigative methods such as root-cause analysis—which tends to reveal that “the human did it”—to something more holistic that permits discovery of a multi-causal network. What kind of resources will be necessary to make holistic investigations plausible? Can better safety solutions be developed from investigations that do not result in assigning cause or blame predominantly to humans?
Those are solutions that both modify behavior by supporting workers’ performance and obviate the need for behavioral change when old behaviors are made safe in the new system (Holden et al., 2008; Karsh et al., 2006).
Hence, good performance is the essence of safety, and good system design the essence of good performance. So, too, will good design that leads to good performance be cost-efficient (Helander, 2006; Stapleton et al., 2009), which is yet another reason to promote engineering solutions. Without systems solutions being available, it is all too easy to give in to our human-centered cause and blame tendencies. But with adequate alternatives, we may accept that to blame is human, but the fix is to engineer.
A few quotes I pulled from “Lifting the Veil”:
One helpful framework for understanding how decisions can go awry is dual-process theory (a version of which was eventually popularized by Daniel Kahneman in Thinking, Fast and Slow). According to this model, when people are learning new skills, practice is slow and effortful, requiring a high degree of executive function. Once people become experts, however, they rely on intuitive (“fast”) processes as a default problem-solving tool. The key idea: intuition works most of the time but can lead to mistakes.
While dual-process theory offers a compelling higher-level model for understanding decision making, it doesn’t speak to the underlying biology. Moreover, legal decision making may be especially complicated, as it is deeply rooted in interpersonal dynamics and requires a core skill—a sort of cognitive super-power—that most humans develop by age 5: the ability to attribute mental states, such as beliefs, intentions, and desires, to other people. Psychologists call this “theory of mind” (5). Without it, the social world would be a deeply confusing place.
As an MIT grad student, Rebecca Saxe found that the right temporoparietal junction (rTPJ) is preferentially activated when people are reading each other’s minds (e.g., when considering how your partner’s preferences might differ from your own). This same part of the brain is activated when people are judging whether another person is blameworthy for a mistake. She later found that transcranial magnetic stimulation (TMS) - which disrupts activity in whatever part of the brain it is applied to - caused people to rely on outcomes rather than intentions when judging others. In the experiment, subjects judged an intentional act (that caused no harm in the end) as less blameworthy and an accidental act (that did cause harm in the end) as more blameworthy.
As psychiatrists, we also see this problem from the other side: many of our patients present to us because they have difficulty with mentalization (with resultant interpersonal problems) or with balancing cognitive and affective processes in decision making. Understanding the role of these basic circuits (in a transdiagnostic way) may allow us to better understand the problems they face—and perhaps explain them in a less stigmatizing way. It may also point to effective biopsychosocial interventions, including different forms of psychotherapy (or mindfulness).
Therapists help their patients balance cognitive and emotional processes, be attuned to them both, and give credit to them both. I leave this article asking… what does “therapy” look like at a group level? It seems that we have individual differences in how likely we are to place blame - even when we have all of the information about the situation available to us. To return to the premise of the first article, we have to engineer (social) systems that make it less likely for us to make mistakes in placing blame on others.
This is exciting! It helps discern where designers can intervene. Blaming individuals is easier for our brains than blaming complex systems (especially when we’re a part of those complex systems ourselves). And the process isn’t purely cognitive; it’s also emotional. Since it’s both, what does it look like to design (online and offline) spaces where people can better reckon with complexity before making snap judgments?
How do media and technology work together to help us empathize with one another?
“As society seeks to resolve all manner of conflict, it is useful to bear in mind that we each harbor our own ‘complex of instincts and emotions and habits and convictions.’ We would be wise to manage them—and also to accept that these imperfect decision-making processes are a part of what make us so inescapably human.”