Daniel Kahneman, psychologist and winner of the Nobel Prize in economics (for his work in behavioral economics).
It was a lot of fun to read. I dog-eared several pages, and I’d like to share some of them here.
[p. 42]
In terms of mental work, the author suggests that self-control might be a finite resource that gets depleted, especially when people are “simultaneously challenged by a demanding cognitive task and by a temptation…” (p. 42) Self-control is taxing, and it can be further impaired by cognitive load, as well as by a “few drinks” or “a sleepless night.” (p. 42) Simply put, “self-control requires attention and effort.” (p. 42)
I’m not sure what “self-control” ends up being in the author’s two-system model. As I understand it, self-control is some sort of check in system 2 that needs to be trained to suppress system 1 and exercise the lazy system 2. It’s like having system 1 do a math problem without invoking system 2, then purposefully invoking system 2 to check the answer for errors. It’s a sort of regulating system – like an untransformed werewolf shackling himself up before a full moon, or Dr. Jekyll trying to regain control when he transforms into Mr. Hyde. This is different from simply moving a practice from system 2 to system 1, as chess masters or poker players do (they train statistical regularities into automatic habits – like the boxer who reads a telegraphed punch and counters without thinking). I’m teasing out ideas here and they’re not fully developed, but I hope, at the very least, they’re somewhat coherent.
To go beyond what the author suggests, we might think of self-control like a muscle. Imagine holding a stack of dishes: it might seem light at first, but it feels heavier as fatigue sets in; moreover, if I added more dishes to your stack, you’d fatigue a lot faster. And if you had a wrist injury or had just come back from lifting weights, those stacks of plates would be harder to hold and you’d again fatigue faster.
Now, if we push this analogy further, we might hypothesize that self-control can be trained like a muscle. In the same vein as “progressive overload,” we might systematically overload our self-control in small increments so it can withstand heavier loads. This idea seems to have parallels in memory and other demanding cognitive tasks. For instance, we deliberately and systematically build vocabulary for aptitude tests like the GMAT. Perhaps we can train our self-control in the same manner.
[p. 57]
The priming effect might affect how we make moral decisions. The author mentions a study in which people helped themselves to tea or coffee and paid via an “honesty box” with suggested prices posted beside it. Above the suggested prices was a picture that alternated weekly between a pair of eyes and some flowers. More money was paid during the weeks with the picture of eyes than during the weeks with the picture of flowers. The conclusion is that people were primed by the eyes and nudged toward more moral behavior.
This informs our picture of moral motivation. The priming effect motivates some people to give more, and this seems particularly true of charitable acts – think about the time you were approached by a beggar while you were with a partner, how generous you are when tipping an attractive server, or how charitable you were when the check-out cashier asked if you’d like to donate. All these showy forms of charity point to something deeper about our moral motivation: it is deeply social. This contrasts with the picture of an ascetic monk who carefully develops moral virtues in isolation. It’s hard to be moral for morality’s sake. Ideas of integrity or personal virtue might motivate us only so far; maybe we need better motivations, like a personal deity (incidentally, religious artifacts, rites, and rituals might be thought of as primers). Maybe when we face a beneficial but immoral act, and we believe nobody will ever find out, we cast aside our morals and act in our own interest.
[p. 192]
A recurring theme in the book is regression to the mean: extreme outcomes are partly luck, so later observations tend to fall back toward the average. This has some interesting practical consequences. For instance, a single job interview is a poor indicator of a quality candidate – the sample is too small, and a standout performance is likely to be partly chance. The same goes for first impressions in general. (A quick simulation below makes this concrete.)
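Here’s a minimal sketch of the effect, under invented assumptions (each candidate has a fixed “true ability,” and every interview score is that ability plus random noise – the names and numbers are mine, not the book’s):

```python
import random

random.seed(0)

def interview(ability, noise=1.0):
    """One interview score: true ability plus luck."""
    return ability + random.gauss(0, noise)

# 10,000 hypothetical candidates with normally distributed ability.
candidates = [random.gauss(0, 1) for _ in range(10_000)]

# Interview everyone once and keep the top 10% of scorers.
first = sorted(((interview(a), a) for a in candidates), reverse=True)
top = first[: len(first) // 10]

first_avg = sum(score for score, _ in top) / len(top)
second_avg = sum(interview(a) for _, a in top) / len(top)

print(f"top 10%, first interview:    {first_avg:.2f}")
print(f"same people, re-interviewed: {second_avg:.2f}")
# The re-interview average falls back toward the population mean (0):
# part of what got them into the top 10% was noise, not ability.
```

The drop from the first number to the second is regression to the mean in action: selecting on a noisy score guarantees the selected group was, on average, lucky.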
In addition, organizations should not reward risky behavior that happened to pay off, or punish problems arising from chance – extreme outcomes will tend to regress toward the mean whether or not anyone is rewarded or punished. It would be like taking financial advice from a lottery winner or throwing nothing but Hail Mary passes.
[p. 201]
“Our comforting conviction that the world makes sense rests on a secure foundation: our almost unlimited ability to ignore our ignorance.”
We want to weave together coherent stories. Essential details are omitted or put to the background while conjectures and contrived inferences take the foreground. What does this say about witness testimony? Everybody’s memory is going to be altered by the story they want to tell.
[p. 255]
Optimism gives us a distorted lens on reality: the world seems like a safe haven for us, the main character of this wonderful story who can readily accomplish anything. This sounds like an exaggeration or some delusion of grandeur, but even the subtler case of a merely positive disposition leads us to “exaggerate our ability to forecast the future, which fosters optimistic overconfidence.” This might suggest that a pessimistic disposition leads to more accurate forecasts and thus should be preferred.
However, a “delusional sense of significance” (p. 264) can be adaptive insofar as it provides motivation. Compare two cases: first, a megalomaniac night guard at an apartment complex who believes he is the only thing standing between the residents and mortal danger; second, a depressed night guard at the same complex who believes his work is vapid and meaningless. I don’t want to suggest that these are the only two options, but these extremes illustrate how optimism and blissful ignorance can improve one’s quality of life and, relatedly, one’s quality of work.
[p. 285]
An interesting insight the author emphasizes is that the pain of a prospective loss far outweighs the pleasure of an equivalent prospective gain; that is, we are generally more risk averse than is statistically appropriate, and our risk preferences aren’t consistent across probabilities. Again, going beyond what the author presents, this might be crucially linked to why we generally don’t like change. New things, new risks, seem much worse than they actually are – perhaps the people who seek new experiences are the high rollers of life.
“First, tastes are not fixed; they vary with the reference point. Second, the disadvantages of a change loom larger than its advantages, inducing a bias that favors the status quo.” (p. 292)
The author suggests that loss aversion is “built into the automatic evaluations of System 1.” (p. 296) Just like the baby who doesn’t want to give up the toy he’s not even playing with, or the couple in a toxic relationship who just won’t give each other up, or the salaryman in a dead-end job who just won’t leave…
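To put rough numbers on “loom larger,” here’s a minimal sketch of a prospect-theory-style value function. The functional form and parameters (α ≈ 0.88, λ ≈ 2.25) come from Tversky and Kahneman’s later work, not from this book, so treat them as illustrative assumptions:

```python
# A sketch of a prospect-theory-style value function.
# Parameters (alpha=0.88, lam=2.25) follow Tversky & Kahneman's
# 1992 estimates; treat them as illustrative, not definitive.

def value(x, alpha=0.88, lam=2.25):
    """Subjective value of a gain/loss x relative to a reference point."""
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** alpha)

# Losses loom larger: a $100 loss hurts roughly twice as much
# as a $100 gain pleases.
print(value(100))   # ~57.5
print(value(-100))  # ~-129.5
```

The λ > 1 factor is the whole story here: a fair coin flip for ±$100 comes out a clear loser in subjective value, even though it breaks even in dollars.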
[p. 407]
We treat our memories in strange ways. We remember thin slices of time and neglect the rest. For instance, if a procedure ends on a painful note, we remember the procedure as a whole as worse than if the same painful moment had come earlier. Moreover, when we have fond memories of past lovers, we neglect all the rough and painful times – again, our memories are shaped by the stories we want to tell. (A toy sketch of this follows.)
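This is the author’s “peak-end rule” combined with duration neglect: remembered pain tracks roughly the average of the worst moment and the final moment, and mostly ignores how long the episode lasted. A toy sketch, with invented pain ratings:

```python
# Toy illustration of the peak-end rule: remembered pain is
# approximated by the mean of the peak and the final moment,
# while experienced pain sums over the whole episode.
# The pain traces below are invented for illustration.

def remembered(pain):
    return (max(pain) + pain[-1]) / 2

short_procedure = [2, 5, 8]          # ends at its most painful moment
long_procedure  = [2, 5, 8, 4, 1]    # same peak, but tapers off

print(sum(short_procedure), remembered(short_procedure))  # 15, 8.0
print(sum(long_procedure),  remembered(long_procedure))   # 20, 4.5
# The longer procedure involves MORE total pain but is remembered
# as LESS painful, because it ended gently.
```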