The Undoing Project: A Friendship That Changed Our Minds: Book Review

January 19, 2018

Another Michael Lewis bestseller, The Undoing Project, is about the collaboration of Israeli psychologists Daniel Kahneman and Amos Tversky, essentially the founders of behavioral economics--the study of how biases and other human characteristics shape opinions and decision making. A previous bestseller, Moneyball, described the Oakland A's essentially using these insights, though Lewis was then unaware of either behavioral economics or Kahneman and Tversky. This is something of an odd book about the relationship of two very different personalities, dealing with Kahneman in some detail and trying to get a handle on Tversky, who had died of cancer relatively young. Lewis spends a fair amount of time on their lives and relationships, topics I didn't care that much about. However, there is considerable relevant information on decision making, psychology, and other topics that I found especially interesting (and presented in unique ways).

 

Apparently, Kahneman was such a great teacher that the dean said "you cannot compare teachers to Kahnemans." Then there was the intelligence test: the quicker you figured out that Tversky was the smartest guy in the room, the smarter you were. Tversky was a very good math guy (a quant), common for economists but unusual for psychologists. Thus, their prospect theory paper was quite quantitative and published in an economics journal (probably a prerequisite for Kahneman's Nobel Prize and the creation of behavioral economics). Mainly, Tversky and Kahneman (TK hereafter) started with existing theory, exposed its flaws, and created more persuasive theories.

 

Here is a partial list of things presented in the book, mainly insights from psychology and TK:

 

Confirmation bias: people look for reasons to believe their existing perceptions are correct (and ignore counter-information). (p. 40)

Endowment effect: overvaluing what you already have.

Hindsight bias:  unpredictable outcomes are seemingly predictable--after the fact.

B.F. Skinner's behaviorism assumed people (like animals in, e.g., rat maze experiments) respond only to external rewards and punishments.

 

Gestalt psychology: how does the brain create meaning when facing a chaotic world?

Psychology lacks a single persuasive theory.

Halo effect (Thorndike 1915): first impressions (initially, Army officers evaluating their men), then confirmation.

Clinical versus statistical predictions: simple algorithms won (like the Kahneman score for Israeli officers based on a questionnaire). 

Mischel marshmallow experiments: child's ability to wait correlated with IQ and sense of self-worth. Also noted for the psychology of single questions. (p. 83)

 

Economics of decision making: markets modeled around rational behavior (including transitivity: if a>b and b>c, then a>c).
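The transitivity axiom above can be sketched in a few lines of code (the preferences and utility numbers are hypothetical, purely for illustration):

```python
# Transitivity: if a is preferred to b, and b to c, then a must be preferred to c.
# Representing preferences with a single utility number (an assumption of
# rational-choice theory) makes transitivity automatic.
utility = {"a": 3, "b": 2, "c": 1}

def prefers(x, y):
    """True if option x is preferred to option y."""
    return utility[x] > utility[y]

assert prefers("a", "b") and prefers("b", "c")
assert prefers("a", "c")  # transitivity holds
```

TK's point was that real people's choices often violate axioms like this one, which numeric utilities cannot.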

To predict decisions (Clyde Coombs), measure preferences (e.g., more money rather than less; less pain rather than more). But how to do that when preferences couldn't be directly observed (especially for fuzzy decisions)?

Similarity judgments: how to evaluate how much one thing is like another (e.g., how closely they resemble each other). AT: the more features two things share, the more similar they are. Grouping things together causes them to seem more similar, like apples and bananas. (pp. 111-115)

 

Importance of unconscious processes (p. 131). Pupil response to stimuli can reveal preferences (Hess), both emotional and mental effort (p. 135). Cocktail party effect: the ability of people to filter out noise when they wish to hear something.

The Bayes "bookbag" game is used to see how people adjust probabilities in response to new information, updating from a base rate as poker chips are drawn from bags; result: the higher the base rate, the faster the odds shift. People tend to be "conservative Bayesians." Real-world parallel: diagnosing likely cancer based on age (p. 144). DK believed otherwise: people jump to big conclusions based on little information.
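The Bayesian benchmark in the poker-chip game can be sketched as follows (the chip proportions and draw sequence are illustrative assumptions, not figures from the book):

```python
# Bayesian updating in the "bookbag" game. Bag A holds 70% red chips,
# Bag B holds 30% red; start with even odds on which bag was chosen,
# then multiply the odds by the likelihood ratio of each chip drawn.
def posterior_bag_a(draws, prior_a=0.5, p_red_a=0.7, p_red_b=0.3):
    odds = prior_a / (1 - prior_a)
    for chip in draws:
        if chip == "red":
            odds *= p_red_a / p_red_b                  # red favors Bag A
        else:
            odds *= (1 - p_red_a) / (1 - p_red_b)      # white favors Bag B
    return odds / (1 + odds)

# After 8 reds and 4 whites, Bayes says Bag A is ~97% likely; experimental
# subjects typically reported far lower odds ("conservative Bayesians").
print(round(posterior_bag_a(["red"] * 8 + ["white"] * 4), 3))  # → 0.967
```

The gap between the computed posterior and people's much tamer guesses is what the "conservative Bayesian" label describes.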

Belief in law of small numbers: leaping to conclusions based on small amounts of evidence (including the gambler's fallacy) (p. 159).

 

Expert judgment vs. simple models (algorithms built from experts' inputs--but many spheres lack the data to build models). Paul Hoffman: take expert decisions (cues) and infer the weights placed on the inputs, e.g., in clinical psychology and cancer diagnosis (p. 168).

Subjective probabilities: the odds you assign to any given situation, mainly based on guessing. Importance of heuristics (rules of thumb), beginning with representativeness (judgments based on some model in the mind)--in other words, systematic rather than random errors; TK tested birth order (girl versus boy) and other sequences (p. 183).

 

Availability bias: judgments likely based on what people recall or mentally retrieve, distortion by the memorable (p. 188).

Anchoring and adjustment: people's estimates are anchored by initial information, even when irrelevant (e.g., quick estimates of 1x2x3x4x5x6x7x8 run far lower than estimates of 8x7x6x5x4x3x2x1, though the products are identical); related: scenarios versus probability judgments (p. 192).

Prediction is a judgment involving uncertainty (p. 198). Base rates: what you would predict with no information at all (p. 199). Important to economic planning, technology forecasting, legal evidence, and medical diagnosis: the importance of biases in predictions.

 

History: Tversky on historians' biases, based on intuitive interpretations; prone to hindsight bias (fitting history into storytelling), suggesting things were inevitable in an uncertain world (p. 206). The false order placed on random events Tversky called "creeping determinism."

Decision analysis (Ron Howard): force decision makers to assign probabilities to various outcomes to make the thinking explicit (p. 209).

Doctors do not think statistically: the representativeness heuristic--a simple diagnosis pops into mind that explains everything (p. 216).

The "Judgment Under Uncertainty" paper: representativeness, availability, and anchoring, leading to mistakes that were predictable and systematic (p. 219). Evidence-based medicine (p. 222).

 

Samuelson bet: a 50-50 chance to win $200 or lose $100; most economists would not take this bet (p. 227). Doctors behave differently toward individual patients versus society (p. 230); misperception of randomness, which can show apparent patterns (p. 231); gap between people's experience of pain and their memory of it--memory is associated with the maximum pain and how they felt when the pain ended (p. 234).
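The puzzle in the Samuelson bet is arithmetic: its expected value is positive, yet people decline it. A loss-aversion weighting makes the refusal intelligible (the loss-aversion factor lambda is an illustrative assumption, not a number from the book):

```python
# The Samuelson bet: 50-50 to win $200 or lose $100.
def expected_value(p_win, win, loss):
    return p_win * win - (1 - p_win) * loss

# If a loss hurts roughly twice as much as an equal gain feels good
# (lambda ~ 2.25 is a common illustrative value), the bet "feels" negative.
def loss_averse_value(p_win, win, loss, lam=2.25):
    return p_win * win - (1 - p_win) * lam * loss

ev = expected_value(0.5, 200, 100)        # +$50: a rational agent accepts
felt = loss_averse_value(0.5, 200, 100)   # negative once losses are weighted
print(ev, round(felt, 1))  # → 50.0 -12.5
```

The same weighting explains why repeating the bet many times (diluting the chance of a net loss) makes it far more attractive.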

 

Decisions: no one ever made a decision based on numbers, they needed a story (p. 250). 

Decision theory began with dice-rolling problems in the early 18th century. Gamblers took bets with negative expected values; insurance buyers pay premiums that exceed expected losses. Daniel Bernoulli, 1730s: people maximize utility rather than value; he assumed people were risk-averse (p. 253). Expected utility theory: John von Neumann and Oskar Morgenstern's rules of rationality included the transitivity axiom and the independence axiom (that choices would not be changed by irrelevant alternatives) (pp. 254-5). Allais paradox: a sure thing chosen over a gamble with a higher expected payout (p. 258).
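Bernoulli's move from value to utility can be sketched numerically (the log utility function and dollar amounts are illustrative assumptions, not from the book):

```python
import math

# Bernoulli (1730s): people maximize expected *utility*, not expected value,
# and a concave utility function (log, here) implies risk aversion.
# With wealth 1000, compare a sure +500 to a 50-50 shot at +1000 or nothing.
wealth = 1000.0
sure = math.log(wealth + 500)
gamble = 0.5 * math.log(wealth + 1000) + 0.5 * math.log(wealth)

# Both options have the same expected value (+500), but the sure thing has
# higher expected utility for a risk-averse (concave) utility function.
print(sure > gamble)  # → True
```

The Allais paradox then showed that even this refinement fails: people's choices between such gambles violate the independence axiom.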

 

Minimize regret rather than maximize utility. Theory of regret: the closer one comes to succeeding before failing, the greater the regret, which is also linked to responsibility. Taking the Allais gamble and losing represents anticipated regret; the closest thing to a sure thing is the status quo (p. 264). People are risk averse because of feelings of regret (a regret premium). Changes matter more than absolute levels (p. 266). Gains versus losses: for gains people are risk averse; for losses, risk seeking (p. 269). People are risk seekers when odds are long, as with lottery tickets (p. 271). A loss means being worse off than a reference point; on Wall Street the reference point is the expected return, so people can be manipulated by shifting the reference point. Gains and losses are perceived subjectively, so framing (how circumstances are described) becomes important to attitudes toward risk (p. 276).
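The gains/losses asymmetry above can be sketched with a prospect-theory-style value function (the curvature and loss-aversion parameters are illustrative assumptions drawn from later estimates, not from the book):

```python
# A sketch of a prospect-theory value function: concave for gains, convex and
# steeper for losses, defined over changes from a reference point rather than
# absolute wealth. Parameters alpha=0.88, lam=2.25 are illustrative.
def value(x, alpha=0.88, lam=2.25):
    if x >= 0:
        return x ** alpha             # diminishing sensitivity in gains
    return -lam * (-x) ** alpha       # steeper, also diminishing, in losses

# A sure gain of 500 is valued above a 50-50 shot at 1000 (risk averse in gains)...
assert value(500) > 0.5 * value(1000)
# ...while a sure loss of 500 feels worse than gambling on a 1000 loss
# (risk seeking in losses).
assert value(-500) < 0.5 * value(-1000)
```

The same function shows loss aversion directly: value(-100) is larger in magnitude than value(100), so symmetric bets feel like losers.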

 

Asian disease problems: whether identical outcomes are framed as how many people will be saved or how many will die changes the choices people make--loss aversion in action. Richard Thaler kept a list of irrational things people do, though economists assumed people are rational. First up was the endowment effect: people are attached to what they already have and reluctant to part with it. Economists ignored human nature (p. 283). Systematic bias in human decision making.

 

Prospect theory (a deliberately distinctive name): math with psychology in it (p. 284). The agent of economic theory is rational, selfish, and of unchanging tastes. "We study natural stupidity instead of artificial intelligence" (p. 293). Counterfactual emotions: alternative realities, including regret, frustration, and envy (unrealized possibilities). Imagination obeys rules of undoing (p. 303)--making sense of infinite possibilities.

 

Metaphors are memorable and vivid and not subjected to critical analysis (p. 315). People have trouble seeing their own mental mistakes; a solution: teach others to question their judgment (e.g., airline pilots). Max Black: "I'm not interested in the psychology of stupid people" (p. 318). Representativeness: similarity between what people were judging and some model they had in mind--in the Linda problem, people are blind to the logic embedded in the story (p. 324).
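The logic people miss in the Linda problem is the conjunction rule: a conjunction can never be more probable than either of its parts. A tiny enumeration over a hypothetical population (the counts are made up for illustration) makes this visible:

```python
# Conjunction rule: P(A and B) <= P(A). In the Linda problem, the vivid story
# makes "feminist bank teller" feel more probable than "bank teller",
# which no probability assignment can support.
population = [
    {"bank_teller": True,  "feminist": True},
    {"bank_teller": True,  "feminist": False},
    {"bank_teller": False, "feminist": True},
    {"bank_teller": False, "feminist": True},
]

def prob(pred):
    """Fraction of the population satisfying the predicate."""
    return sum(1 for p in population if pred(p)) / len(population)

p_teller = prob(lambda p: p["bank_teller"])
p_both = prob(lambda p: p["bank_teller"] and p["feminist"])
print(p_teller, p_both)   # the conjunction never exceeds either conjunct
assert p_both <= p_teller
```

However the population is weighted, the feminist bank tellers are a subset of the bank tellers, so the ranking most subjects give is logically impossible.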

 

"Economists are brash and self-assured. Psychologists were nuanced and doubtful. Psychologists will interrupt for clarification, economists to show how smart they are" (p. 340). Psychologists think economists are immoral, economists think psychologists are stupid.

 

Choice architecture: the decisions people make are driven by the way the choices are presented (p. 343)--framing again.


© 2016 Gary Giroux