Biased Thoughts.

The only social media platform I have yet to abandon is Twitter. It’s a good example of “variable ratio reinforcement”. Think of a slot machine: People put money into it with hopes of winning a jackpot. A reinforcer increases the likelihood that a specific behavior will happen again. Here, the reinforcer is the payout: The chance of a jackpot makes it more likely that someone will stay and keep feeding money into the machine. However, the slot machine doesn’t pay out on a predictable schedule or ratio; jackpots happen on a variable schedule. This “variable ratio reinforcement” is what keeps people at slot machines (the specific behavior) for hours.
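(For the technically inclined, here is a toy sketch, not from any paper, just a few lines of Python with made-up numbers, that contrasts a fixed ratio schedule with a variable ratio one. The average payout rate is identical; only the predictability differs.)

```python
import random

random.seed(0)  # reproducible toy example

PULLS = 1000
RATIO = 20  # on average, one payout every 20 pulls (made-up number)

def fixed_ratio_payouts(pulls, ratio):
    """Payout on every `ratio`-th pull: perfectly predictable."""
    return [i for i in range(1, pulls + 1) if i % ratio == 0]

def variable_ratio_payouts(pulls, ratio):
    """Payout with probability 1/ratio on each pull: same average rate, unpredictable timing."""
    return [i for i in range(1, pulls + 1) if random.random() < 1 / ratio]

def gaps(payout_pulls):
    """Number of pulls between consecutive payouts."""
    return [b - a for a, b in zip(payout_pulls, payout_pulls[1:])]

fixed = fixed_ratio_payouts(PULLS, RATIO)
variable = variable_ratio_payouts(PULLS, RATIO)

print("Fixed ratio:    payouts =", len(fixed), " gaps between payouts:", sorted(set(gaps(fixed))))
print("Variable ratio: payouts =", len(variable), " gaps range from", min(gaps(variable)), "to", max(gaps(variable)))
```

With the fixed schedule, the machine pays out exactly every 20 pulls. With the variable schedule, roughly the same number of payouts arrives at unpredictable intervals, and it is that unpredictability that keeps people pulling the lever (or refreshing the feed).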

The Twitter algorithm occasionally (on an unpredictable, variable schedule) shows me interesting and useful information. It recently introduced me to a paper called Toward Parsimony in Bias Research: A Proposed Common Framework of Belief-Consistent Information Processing for a Set of Biases. (Though the paper isn’t too jargony, it is wordy… but worth your attention if you like this sort of stuff.) Of course, this paper played right into my biases: I like parsimony (or, more simply put, in a world of Lumpers and Splitters, I am generally on Team Lumper) and I like thinking about biases and how they affect our emotions and behaviors.

The authors argue that bias is embedded in every step we take when we process information. We already have a set of beliefs. Unless we exert deliberate effort, our thinking habits automatically try to confirm what we already believe. This bias manifests in what we pay attention to, how we perceive things, how we evaluate situations, how we reconstruct information, and how we look for new information.

The authors also put forth the idea that most of our biases are forms of confirmation bias. (The list of biases is biased towards Splitters; see this enormous list of cognitive biases on Wikipedia.) As Lumpers, the authors distill the beliefs underlying common biases down to two:

  • “My experience is a reasonable reference.”
  • “I make correct assessments.”

As a result, they argue that we can significantly reduce our biases “if people were led to deliberately consider the notion and search for information suggesting that their own experience might not be an adequate reference for the respective judgments about others” (see comment above about article wordiness) and “if people deliberately considered the notion that they do not make correct assessments”.

My mind then ties these biases to the primary framework of cognitive behavioral therapy (CBT). CBT is a type of psychotherapy that focuses on identifying and changing thoughts in order to alter emotions and behaviors. The three “categories” of “thought targets” are:

  • core beliefs (things we believe about ourselves, other people, and the world that come from our past experiences)
  • dysfunctional assumptions (we tend to believe “negative” things, rather than “positive” things)
  • automatic negative thoughts (these are “habits of thought” that we are often unaware of; much of CBT focuses on recognizing and identifying these thoughts)

(This is a common complaint about CBT: “So you’re telling me that my problem is that I think ‘wrong’ thoughts. Thanks a lot.”)

If it is true that biases can be reduced to only two, then can we assume that these two beliefs—that we ourselves are reasonable reference points and that we make correct assessments—should be common “thought targets” in CBT? Instead of chasing down every single “automatic negative thought”, could we focus on these two common beliefs? (I see value in reframing it this way. Labeling something as an “automatic negative thought” can obscure the value that the thought has in our daily lives. For example, I might have the automatic “negative” thought, “I am not entirely safe when I go outside.” However, this automatic thought—which may have led me to take self-defense classes and always monitor my surroundings—may have contributed to me staying out of harm’s way. Astute readers will note that my example included the word “entirely”. It is up for debate whether the inclusion of that word makes it an adaptive, nuanced thought or a true “negative” automatic thought.)

Focusing on these two beliefs seems to tread into Buddhist psychological thought, too. From a lens of impermanence, are thoughts even real? Can they be sustained? Our ideas—our thoughts—can be reasonable in one moment and completely unreasonable in the next. Same with our assessments: New data and new context can make our assessments wrong in a moment. And what about non-self? Can we even speak of “my reasonable reference” and “my correct assessments” if, in fact, there is no “self”? And aren’t thoughts yet another concept that keeps us trapped in suffering?

So, I think there are three main ideas to take from this post:

  • Twitter has some value, some of the time, and is an excellent demonstration of variable ratio reinforcement.
  • You might be able to significantly reduce your cognitive bias if you adopt two habits of thought: (1) Look for evidence that your own experience might not be an adequate reference when you assess other people and situations, and (2) Look for evidence that you do not make correct assessments.
  • An oldie but goodie: You can’t always believe what you think.