Monday, September 29, 2025

Think Fast!!

I don't think I've ever written this about a book before, but I believe that everybody should read "Thinking, Fast and Slow" by Daniel Kahneman. It's a book about how we think and why we make the decisions we do, and since everybody needs to think and make decisions, it could improve the life of literally anybody who reads it. I personally found it very eye-opening, immediately recognizing cases and patterns where I have made or continue to make poor decisions, as well as getting a better understanding of how I might recognize those situations and do better in the future.

There are a lot of books on thinking out there, and one unique advantage of TFaS is that it's written by a direct expert in the field: Daniel Kahneman was a psychologist and, with Amos Tversky, a founder of Prospect Theory, which revolutionized the models used to describe human decision-making. His work led to the triumph of Behavioral Economics, which has overturned a half-century of largely misguided models and explanations for how people behave in markets. This gives him a huge leg up over popular writers like Malcolm Gladwell, who can write engagingly but don't have direct experience with the science behind their books and need to rely on information provided by professional scientists or untrained enthusiasts. Fortunately, Kahneman is also a great writer. The book is a bit long but very compelling and highly readable, spending enough time on each concept to make it stick without overstaying its welcome, and using lots of vivid and memorable examples to drive his points home.

Possibly the most remarkable aspect of this book is that it contains its own proof. I'm used to non-fiction books saying "A study shows that X" or something like that, and I'll just kind of nod and glide over it. TFaS will do something like challenge you to solve a particular multiplication question in your head, then note what you probably experienced while working it, and I'll be like, "That's right! That is what I experienced!" I become the living embodiment of the proof of what he's writing: I can test the assertions on myself in real time. I know that isn't applicable for every type of book, like WW2 strategy or the history of debt, but it does work for this type of book, and it works remarkably well.

I'm going to jump around a little in capturing a few (probably long) thoughts and responses to the book, so this post will almost certainly feel more disjointed than the actual book. I definitely won't get to everything he writes about, just focusing on the ones that resonate most with me at this time.

The "fast and slow" of the title refer to our two main modes of thinking, which he calls System 1 and System 2. System 1 is automatic and effortless, our immediate perception and analysis of a situation. Some examples he gives include hearing someone's voice and detecting whether they're angry, driving along a freeway with light traffic, noticing the one black sheep in a flock of white sheep, or answering what two plus two equals. System 2 is effortful thinking, which requires our focus and attention: when engaged in System 2 thinking, our pupils dilate and our pulse quickens. Examples of System 2 thinking include being asked to recall an example of a certain incident from our past, merging onto the freeway in heavy traffic, explaining why a black sheep might be in a flock of white sheep, or answering what seventeen times twenty-four equals.

We spend the vast majority of our lives following System 1 thinking, and most of the time it works great. Occasionally we encounter a situation where System 1 can't provide a satisfactory answer, and we need to marshal our System 2 to decide what to do. Oftentimes our System 1 will offer up one or more possibilities, and our System 2 will evaluate and analyze them; other times our System 2 will need to do all the work, as in the multiplication example. System 2 is lazy, and thinking slowly is hard, so we will often stop as soon as we arrive at a plausible answer and get on with our lives. Through the examples he provides to the reader, Kahneman shows how even supposedly bright people like me will often answer a question incorrectly or make a less-than-optimal choice, when fully thinking the problem through would have led to a better one.

Much of the first part of the book focuses on how System 1 operates, what it can and cannot do. Examples include the importance of framing: we tend to judge things relative to what is "ordinary". We're primed to respond to differences more than to absolute amounts. He also looks at ways our System 2 tends to be reliably lazy. We rely on "anchoring", where we find a number (or value or action or whatever) and then adjust up or down from there, instead of working out the best value from scratch. We're satisfied with answers being merely "plausible" rather than being "correct": if something our System 1 proposes sounds reasonable, or if we arrive at a reasonable-sounding answer quickly, we'll accept that without checking and verifying it.

I'm tempted to say that System 1 is "bad" and System 2 is "good", but that isn't at all the case; they're just different. There's a lot that's amazing about System 1, including its ability to quickly derive meaning from scant information. We need to be cautious about evaluating that meaning, but it's remarkable how much it can collect. Kahneman references an old animated film in the book (Heider and Simmel's experimental short from the 1940s), which I subsequently looked up on YouTube, and it's pretty amazing. It just shows two triangles, a circle and a few lines; and yet, over the course of about 90 seconds, you can infer a really powerful and emotionally compelling story from it. There's nothing in the real world that looks like these abstract shapes, and yet our minds are able to immediately supply compelling information about what they're doing.

While our thinking can get us into trouble, there is a very good biological and evolutionary reason for why we think the way we do. For example, we're quick to notice outliers and unusual situations, because in the past that was often a sign of danger: if we notice that there are more lions than usual on the savannah today, we might want to go and hide in the jungle. That doesn't necessarily mean that the lions are planning to attack - there's a natural ebb and flow in the number of lions - but a species that consistently acts on potential danger will outperform a species that is less likely to act. As with so many other things in psychology, the mental tools that were helpful in prehistoric times aren't nearly as useful in the modern world, but we're stuck with the machinery we have.

I think the first part of the book that really wowed me was his discussion of reversion to the mean. RttM has been a big part of my economic reading over the last 20 years or so, but Kahneman's presentation was pretty eye-opening for me, in particular his explanation of why it's critical to have control groups in scientific studies. The literature I follow most closely these days relates to nutrition and health, where an experiment might ask something like, "Does eating almonds help reduce cataracts?" If you run this experiment on a population of people with cataracts, you will see that many people's cataracts do diminish or disappear over the time they're eating more almonds. But the thing is, you would expect some portion of those cataracts to improve anyways: the population as a whole has some base rate of cataracts, and a group selected for being at one extreme (everyone has cataracts) will, on remeasurement, tend to drift back toward that base rate, just as a group selected for having no cataracts will drift toward it from the other direction. We construct causal stories ("The almonds cured cataracts!") when there might not be any causal relationship. In the past I've thought of the "placebo effect" as an almost mystical force, that just believing that things will get better can make things get better. That mental effect is relatively minor, though; the bigger impact is simple mean-reversion. I can't believe I'm finally understanding this so late in life!
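
You can see why with a toy simulation. Nothing below is from the book, and every number is invented: each person has a stable underlying severity that we observe with transient year-to-year noise, we enroll whoever looks bad at screening, give them nothing at all, and measure again later.

```python
import random

random.seed(0)

# Each person: a stable underlying severity, observed with transient noise.
population = [random.gauss(0, 1.0) for _ in range(100_000)]

def measure(trait):
    return trait + random.gauss(0, 1.0)

# Enroll everyone whose score looks bad at screening time.
screened = [(trait, measure(trait)) for trait in population]
enrolled = [(trait, score) for trait, score in screened if score > 1.5]

at_screening = sum(score for _, score in enrolled) / len(enrolled)
# A year later, with no treatment whatsoever, measure the same people again:
a_year_later = sum(measure(trait) for trait, _ in enrolled) / len(enrolled)

print(f"average severity at screening: {at_screening:.2f}")
print(f"average severity a year later: {a_year_later:.2f}")  # noticeably lower
```

The "improvement" is pure selection plus noise, which is exactly why a treatment only deserves credit for whatever improvement it shows beyond an untreated control group.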

Mean-reversion is a great explainer of so many things. One that immediately comes to mind is my business. Over the years, I've noticed that when we have a very successful year, it's usually followed by a less-successful one; and a less-successful year is usually followed by a better one. This has seemed abnormal to me - why do we swing so much from year to year? - but it's very well explained by reversion to the mean. Luck plays a huge part in business, with "luck" defined as "things outside of our control": how the American economy as a whole is doing, how much investment money is available, whether we happen to meet the right person at the right time to kick off a project and have the right people to make it succeed. Nobody is consistently lucky or consistently unlucky or even consistently middle-lucky: we all fluctuate over time, so of course there are swings - you should expect them! You can still infer long-term trends - even a less-successful year now will generate more revenue than an above-average year did a decade ago - but the short-term movement is random (as Burton Malkiel would predict). And that randomness is not weird; it is normal.

My experience is also an illustration of the "Law of Small Numbers". We're a small company with relatively few projects, so we'll tend to have more extreme results from year to year compared to a much larger company. Or, to put it another way, we'd expect more variation from the base rate. If the economy as a whole does poorly, we'll have a higher chance of doing well, or of doing disastrously, while a larger company would be more likely to track the overall average closely.

I like how Kahneman mathematically represents this, with a formula like "Company performance = Overall economic environment + Unique factors". I think we're an especially talented and skillful company (I'm biased!), so I think we have an "edge"; but we don't completely control our own destiny. It's very tempting to attribute our success to causal factors: "We hired a lot more people and we did worse, so hiring was a mistake" or "We focused on large clients and we did better, so we should continue focusing on large clients." We did get some evidence from our actions, but we should approach it with skepticism, compare it with the base rates, and not draw overly strong conclusions from circumstantial evidence. Over the long term, we want to continue making good decisions even if they lead to poor outcomes, and not get complacent if we get good outcomes even after making poor decisions.
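
That formula is fun to play with. Here's a sketch (my own invention, not Kahneman's) where every firm has the same skill "edge" and shares the same economy, differing only in how many independent projects its luck gets averaged over:

```python
import random
import statistics

random.seed(1)

def yearly_results(edge, n_projects, years=1000):
    """Performance = shared economy + firm's edge + averaged project luck."""
    results = []
    for _ in range(years):
        economy = random.gauss(0, 1.0)  # outside anyone's control
        luck = statistics.mean(random.gauss(0, 2.0) for _ in range(n_projects))
        results.append(edge + economy + luck)
    return results

small = yearly_results(edge=0.5, n_projects=4)    # a boutique firm
large = yearly_results(edge=0.5, n_projects=400)  # same skill, many projects

print("small firm year-to-year swing:", round(statistics.stdev(small), 2))  # ~1.4
print("large firm year-to-year swing:", round(statistics.stdev(large), 2))  # ~1.0
```

The small firm's results swing more purely because project-level luck doesn't average out - the Law of Small Numbers - and since a standout year is mostly a lucky draw of the economy and luck terms, the next draw will usually land closer to the firm's true edge.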

I found that many of the subjects in TFaS are adjacent to a lot of other reading I've done in recent years. One example is William Bernstein's "The Four Pillars of Investing," with one of the pillars being the psychology of investing. Bernstein writes similarly there that we as a species have evolved to trust stories over data, which was evolutionarily beneficial but maladaptive for the modern world generally and for investing in particular. Kahneman's example is a chief investment officer who told him "I went to a Ford auto show and decided to buy stock in Ford - they make great cars!" Whether they make good cars isn't really relevant to an investment decision; what matters is whether Ford's current stock price is over-valued or under-valued.

TFaS also shows how our tendency to focus on anecdotes instead of data drives mass misunderstandings and leads to bad political and social decisions. This paragraph really resonated with me:

An availability cascade is a self-sustaining chain of events, which may start from media reports of a relatively minor event and lead up to public panic and large-scale government action. On some occasions, a media story about a risk catches the attention of a segment of the public, which becomes aroused and worried. This emotional reaction becomes a story in itself, prompting additional coverage in the media, which in turn produces greater concern and involvement. The cycle is sometimes sped along deliberately by "availability entrepreneurs," individuals or organizations who work to ensure a continuous flow of worrying news. The danger is increasingly exaggerated as the media compete for attention-grabbing headlines. Scientists and others who try to dampen the increasing fear and revulsion attract little attention, most of it hostile: anyone who claims that the danger is overstated is suspected of association with a "heinous cover-up." The issue becomes politically important because it is on everyone's mind, and the response of the political system is guided by the intensity of public sentiment. The availability cascade has now reset priorities. Other risks, and other ways that resources could be applied for the public good, all have faded into the background.

That's a great description of an ever-increasing phenomenon, which I often pair with George Saunders's excellent essay "The Braindead Megaphone": we end up talking about the wrong things because it's what everyone is talking about. The example I think about most often these days has to do with crime rates. In polls, people overwhelmingly believe that crime is worse now than it's ever been before, even though statistics show that crime is lower than it has been in most of the last 60 years. There's a very simple reason for this: the media (both mass media and social media) loves lurid and grisly crime stories, people love reading them, and our brains soak up those anecdotes and conclude that the world is a dangerous place. Interestingly, Kahneman's examples are about environmental disasters rather than violent crime: supposedly "minor" environmental incidents like Love Canal became major causes that drove a lot of attention and responses.

I'm definitely personally biased more towards cleaning up environmental issues and punishing polluters, so that was a more challenging example for me than crime. Cleaning up toxic waste is a good thing to do. Kahneman's point is that, in a world with limited resources, we should direct those resources based on some rational metric (like the number of life-years impacted per dollar of remediation) rather than based on who can most effectively muster public outrage. But Kahneman also notes that in a democracy we need to respond to the reality of people's feelings, even when those feelings do not reflect reality. The alternative is rule by an unaccountable technocratic elite, which would destroy trust in the democratic system. I think that's the tension we've broadly seen in the EU during this century, with populist reactions against Brussels bureaucrats. Still, the problem Kahneman identified has exploded into overdrive in recent years, as outlying incidents are used to whip up widespread hatred among the populace and demonize vulnerable minorities for political gain.

Shifting gears, later in the book he describes "Prospect Theory," which is a basis for behavioral economics. I've encountered behavioral economics a ton in my adult reading, but its findings still feel slightly surprising, and it has been a huge revolution in the field of economics. Kahneman distinguishes between two species of people that economists might study. The first are what he calls "Econs," who are perfectly rational and absolutely selfish, and will act in the way that maximizes their utility (money and/or happiness). The other species he calls "Humans," who are not always rational, who are capable of altruism, and who can and often do make mistakes. Needless to say, studying "Humans" will probably benefit us more.

Traditional economics (I'm not sure if he uses this term, but I think it's the mode that was dominant after the Red Scare up through 2000 or so, building on earlier work) evaluates based on states, looking at how much utility a person has - for example, how a person with $20k in wealth would or should behave when faced with a profitable but risky gamble. Kahneman's great discovery was that people are influenced by changes more than by states. Someone who started with no wealth and then received $20k will tend to be risk-averse: they will want to lock in their gains and stick with the sure thing rather than take a chance. But someone who started with $30k and dropped to $20k will tend to be risk-seeking. They won't want to lock in their loss, and will be inclined to gamble in hopes of returning to their previous baseline, even if it means losing even more. An Econ would behave the same in both situations, since it only cares about the present state and the probability of future outcomes, while a Human is deeply influenced by the past.
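
The core of prospect theory fits in a few lines. This sketch uses the parameter estimates Tversky and Kahneman published in 1992 (roughly 0.88 for diminishing sensitivity and 2.25 for loss aversion); the shape is theirs, but the worked example is mine:

```python
ALPHA = 0.88   # diminishing sensitivity to larger changes
LAMBDA = 2.25  # losses loom roughly twice as large as gains

def value(change):
    """Prospect-theory value of a CHANGE from the reference point, not a state."""
    if change >= 0:
        return change ** ALPHA
    return -LAMBDA * (-change) ** ALPHA

# Two people end the day with the same $20k of wealth:
print(round(value(+20_000)))  # started at $0:   ~ +6100
print(round(value(-10_000)))  # started at $30k: ~ -7450
```

An Econ's utility function would hand both people the same number, since they hold identical wealth; the Human who fell from $30k feels worse off than the Human who rose from nothing feels well off.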

We have a baseline and we make evaluations based on changes to that baseline, whether we are measuring salaries, vacation days, food quality, social interactions, or anything else we care about. This is true when evaluating our own situations and decisions, but also when judging the actions of others. For example, we perceive a merchant as greedy if they raise prices to increase their profits: while a traditional economist would say that the merchant should raise prices to what the market will bear, in the real world customers may well be upset and punish the merchant by not shopping with them, even if doing so pains the customer. This also explains why employers almost never reduce salaries: a cut to existing pay is perceived as a painful loss, while hiring a new employee at a lower salary is acceptable to everyone.

We are more sensitive to losses than to gains. I've read that statement a lot, but it's a relatively new discovery in the field. Kahneman and his collaborators have been able to quantify the degree over multiple experiments, and discovered that we feel the pain of losses about twice as strongly as we feel the pleasure of gains. This segues into a great practical primer on the benefits of long-term investing. For any individual gamble we're likely to want to play it safe, but over the long run we are far better off taking (reasonable) gambles, and we should take the long view. (For example, imagine being offered a game where you flip a coin, and if it comes up Heads you win $120 but if it comes up Tails you lose $100. Many of us would be reluctant to play, because losing $100 is more painful than winning $120. But if you have the opportunity to play that game 100 times in a row, you should absolutely do so, since you are very likely to come out ahead, with an expected gain of about $1,000. That's essentially what we do in the stock market: in the short term we can gain or lose a large amount of money, and we feel those losses keenly, but over a long period of time we are far better off than if we'd kept the money in our mattress.)
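
The arithmetic on the 100-flip version is easy to check exactly (the game is from the book; the computation is mine):

```python
import math

n, win, lose = 100, 120, -100  # 100 flips: +$120 on heads, -$100 on tails

expected_total = n * 0.5 * (win + lose)  # $10 per flip on average

# You only end up behind with 45 or fewer heads: 46 heads already nets
# 46 * $120 - 54 * $100 = +$120.
p_net_loss = sum(math.comb(n, h) for h in range(46)) / 2 ** n

print(f"expected total: ${expected_total:.0f}")      # $1000
print(f"chance of ending behind: {p_net_loss:.1%}")  # about 18%
```

So the broad frame doesn't make the gamble literally safe - under this toy setup you still end behind roughly one time in five - but the expected payoff is large, and the more plays you aggregate, the smaller that loss probability gets.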

Prospect theory feels pretty applicable to political and economic ideas I've been reading about lately. Reading about income inequality makes me feel outraged, and it's hard for me to understand why everyone isn't up in arms about it. But people are more sensitive to changes from their reference points. If there were a widespread decline in the standard of living, people would revolt; but so long as the broad status quo stays similar, it makes sense that people aren't thinking about the big-picture allocation of resources. TFaS also ties in directly by explaining how we have a hard time evaluating extremes, either very large or very small amounts. Mathematically, a billionaire is a thousand times wealthier than a millionaire, and insanely more powerful; but our primitive brains collapse the two into a single "very rich" category.

And on the other hand, I think it also helps get at the much-discussed and often-maligned "white working class" resentments we've been talking about for the last decade. It's tempting to look at the absolute standard of living, or compare that standard of living to other groups, and say that WWC households are far better off than non-American households or non-WWC households. But people aren't comparing their wealth to global standards (as an Econ might); they're comparing it to their own reference points, like the conditions they were raised in or earlier stages of their careers. So declines in things like well-paying union jobs are devastating and cause a great deal of anger and resentment, even while outsiders would judge those same people to be comparatively secure.

Of course, Kahneman isn't saying that these attitudes are good or bad, right or wrong; he's just describing how our brains work. But that's very useful information. And it can be used for bad! In particular, he has studied how our brains process unlikely occurrences. If you say "There is a 0.01% chance a parolee will commit a violent crime six months after being released", then people will be inclined to grant parole; but if you say "1 in 10,000 parolees will commit a violent crime six months after being released", people will be inclined to reject parole. The two statements are mathematically identical, but emotionally, the second statement primes your mind to visualize that specific incident of the violent crime, which makes it feel more immediate and likely. Anyways, you see this being used relentlessly in political discourse now - it dates back at least as far as the Willie Horton ad from the 1988 presidential campaign, but is omnipresent in 2025. It's bad, but it works!

Back to prospect theory: He illustrates what has been called the Four-Fold Pattern, a two-by-two matrix describing risk tolerance for both high-probability and low-probability events, and for gains versus losses. For the most part these make intuitive sense: if you're pretty sure you'll achieve a good outcome, but not 100% sure, then you would feel great disappointment if you failed to achieve it, so you'll pay to lock in a positive outcome even at the cost of a less-positive one (as in someone very likely to win a $1,000,000 lawsuit being willing to settle for $950,000 - if the chance of winning is 95%, that settlement exactly equals the gamble's expected value, so taking it is pure risk aversion). On the other hand, if you're unlikely to achieve a good outcome, you'll be more inclined to take risk: you aren't expecting a good outcome anyways, so you won't feel too disappointed when you fail.

The scenario that's most surprising is the upper-right of that matrix: high-probability losses. Given a 100% chance to lose $90 or a 90% chance to lose $100, people will pick the 90% chance; even if the choice is a 100% chance to lose $90 or a 90% chance to lose $130, people will still tend to take the gamble. We give disproportionate weight to the sliver of hope of not suffering any loss - far more weight than the 10% it actually deserves. I think I'm a little less likely than most people to fall into this trap in my investment life; but reading this book I'm struck by how that's absolutely the case for me in personal matters, including relationships and commuting (very different domains!). If a relationship is going poorly, I can choose between ending it now and suffering certain awkwardness and sadness and pain, or continuing it into the future, knowing that I'm very likely to experience even more awkwardness and sadness and pain. I'm inclined to irrationally continue it, as that keeps alive the (small) possibility of a positive outcome.
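
Prospect theory can put rough numbers on that first choice. This extends the value-function sketch above with Tversky and Kahneman's probability-weighting function and their 1992 estimate of roughly 0.69 for the loss-weighting parameter; again, the parameters are illustrative, not something this book derives:

```python
ALPHA, LAMBDA, DELTA = 0.88, 2.25, 0.69  # published parameter estimates

def value(change):
    return change ** ALPHA if change >= 0 else -LAMBDA * (-change) ** ALPHA

def weight(p):
    """Decision weight for a loss probability: large probabilities are
    underweighted, so the complementary sliver of hope punches above its weight."""
    return p ** DELTA / (p ** DELTA + (1 - p) ** DELTA) ** (1 / DELTA)

sure_loss = value(-90)              # feels like about -118
gamble = weight(0.9) * value(-100)  # w(0.9) is only ~0.77, so about -100

print(f"sure $90 loss: {sure_loss:.0f}   90% chance of $100 loss: {gamble:.0f}")
```

The gamble scores as less painful than the sure loss even though their expected values are identical, which matches the observed risk-seeking in that quadrant.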

Actually, now that I think about it, commuting is probably the opposite for me. In a commute, I'm gambling with losses: the time I spend waiting for my bus or train to come. I know approximately when the train or bus will get to the stop, and I know approximately how long it will take me to reach it, but there is risk involved; the vehicle could be early, or (more often) delayed, or canceled, and my own journey may go a little quicker or slower as well. The worst feeling for me is arriving at a stop and seeing my bus or train pull away, which means I'll need to wait another 10, 15 or 20 minutes for the next one to come. Because of this, I time my arrivals to include a good amount of buffer, often aiming to arrive 5 minutes or so early. Even if I'm running a little late or transit is running a little early, I won't "miss" my trip; of course, if I'm running early or transit is running late (both of which are more likely), I'll wait even longer than usual. I haven't modeled this out, but I'm almost certain that my current approach results in more time spent waiting than if I aimed to arrive only 1 minute early: every once in a while I would miss my ride, but most of the time I would save several minutes, which would more than make up for it. But again, we feel losses more keenly than gains, and the "I missed my train" punishment feels more severe than ten "I saved five minutes" rewards. Anyways, I've thought vaguely in the past about how my commuting strategy probably isn't optimal, but I think this book has given me more precise language and models for thinking about what I'm doing and why.
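
Since I just admitted I haven't modeled it, here's a rough first attempt: a toy Monte Carlo where every distribution is invented (a 15-minute headway, a train that averages a couple minutes late, a walk that varies by a couple of minutes), so only the shape of the comparison means anything.

```python
import random

random.seed(2)

HEADWAY = 15  # minutes between departures (invented)

def one_trip(buffer_minutes):
    """Return (minutes spent waiting, whether I missed my intended train)."""
    me = -buffer_minutes + random.gauss(0, 2)  # when I reach the platform
    train = max(0.0, random.gauss(2, 3))       # usually runs a little late
    if me <= train:
        return train - me, False
    next_train = HEADWAY + max(0.0, random.gauss(2, 3))
    return next_train - me, True

for buffer in (5, 1):
    trips = [one_trip(buffer) for _ in range(100_000)]
    avg_wait = sum(w for w, _ in trips) / len(trips)
    miss_rate = sum(m for _, m in trips) / len(trips)
    print(f"{buffer}-minute buffer: avg wait {avg_wait:.1f} min, "
          f"missed the train {miss_rate:.0%} of the time")
```

With these made-up numbers, the 1-minute buffer does save a bit of waiting on average, but at the cost of missing the train many times more often - and each miss is exactly the kind of vivid, painful loss that loss aversion overweights.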

Returning once more to the four-fold pattern: it also provides an explanation for the phenomenon of people in poverty seeming to behave irrationally. When people are poor, often the only choices they have are losses: they may be trading off between smaller or larger losses, and deciding where to allocate their losses, but their only decisions are often bad ones. So they end up occupying that same upper-right quadrant, where a high probability of losses leads to risk-seeking behavior. An example of my own: suppose someone needs $100 to buy enough food, but only has $75. They can buy $75 worth of food, knowing that they will still be hungry; or they could buy $70 of food and $5 in lottery tickets. They will almost certainly end up hungrier than they would be otherwise, but they keep alive a sliver of hope that maybe one ticket will be a winner and they won't need to be hungry at all. It's easy for people like me, with enough security, to judge and say that people with the least resources need to be the most careful in how they use them; reading this book has helped me understand why that isn't always the case, and connect that to other decisions I might make.

The book finishes strong on economic topics. The last few chapters would make a great standalone Boglehead book. They build convincingly on the earlier chapters on psychology to arrive at strong practical investment advice, somewhat in the same way that Malkiel's conclusions derive from mathematics or Bernstein's derive from history.

There's a really interesting section on optimism. As Kahneman describes them, optimists are kind of defective: they are unaware of the risks they face, feel indifferent to those risks, or fail to properly measure the impact of risk. Optimists are strongly drawn to entrepreneurship. It's gutsy to take on the risk of starting a new business, and their particular foolishness or blindness leads them to do so. The most successful people are optimists, since people who take big risks can earn big rewards. Of course, most optimists and their businesses will fail, but optimists will also tend to bounce back more robustly from hardship than pessimists, largely because an optimist tells themselves a story where the failure wasn't their fault, but the fault of external factors. In most cases people would be better off selling their labor to someone else than working for themselves, but optimists won't stop to think about that. This is bad for them individually, but in aggregate it gives us a dynamic economy with many more businesses started and greater overall growth and wealth.

Optimists have many fine qualities: they tend to be more cheerful, and as a result are better liked, which can itself lead to more success as others are drawn towards them. They're likely to live longer and be in better health. In my own life, I think my natural temperament runs more towards cautious conservatism than optimism, but in my adult life I've kind of become a "learned optimist", in large part as a side-effect of my career. I've definitely observed that projecting optimism can lead to more successful business outcomes, plus life is just more fun if you go through it with an optimistic mindset: looking for good opportunities, and appreciating the best parts of mixed experiences. I think optimism is a mode I can push myself to inhabit, and I find it's usually a good mode for me to be in.

This is a side-note, but on a recent rewatch of the Lord of the Rings movies, I was struck by the contrast between Theoden and Denethor. Both of them are leading a people that needs to defend against a much larger and more dangerous foe. Theoden's dialogue seems unduly optimistic: he talks about how the fortress of Helm's Deep has never fallen, how Rohan has survived every invasion in the past, and so on. Rohan has also never faced so dangerous a threat before or been so poorly prepared, so his words seem a bit delusional. But when Aragorn questions him, Theoden snaps for a moment, and you can see that he's deliberately projecting this optimism for the benefit of his people: if they give up they're doomed, but if he can inspire them and instill hope then they'll fight harder and have a chance at survival. In contrast, Denethor has completely given up hope. He doesn't think there's any chance of Gondor surviving, and as a result he fails to take even basic defensive actions. This culminates (in the movie, not the book) in him telling his men to throw down their swords and flee. That's terrible! Between Theoden's optimism and Denethor's pessimism, Theoden is clearly superior.

That in turn reminds me of China Miéville's "A Spectre, Haunting", in which he writes about the "manifesto" as a particular mode of writing. An impartial journalist should just report facts, and an impartial analyst should just present probabilities, but someone who wants to achieve a specific outcome should use rhetoric to inspire their audience. "We will fight them, and we will win!" may not be justified by the cold-blooded statistics of the situation, but those words are deployed to change reality, which I think is what Theoden is trying to do. Ultimately Aragorn steers a middle ground, though much closer to Theoden: he says that he does not know if they will prevail, but exhorts his men to battle nonetheless.

As a side-side note, I've been meaning for well over a year now to write a blog post on Denethor and the palantir. I still hope to write it one day!

Back to TFaS: near the end of the book he writes about our experiencing selves versus our remembering selves. Our experiencing selves are what we actually experience from second to second, minute to minute and month to month: how happy or sad we feel in a given moment, how much pleasure or pain we experience over the duration of an episode. Our remembering selves are how we evaluate previous incidents and judge them as positive or negative, painful or pleasurable. These are two very different selves!

The big illustrative example he gives is the "cold hand" experiment, where participants placed their hand in a bowl of extremely cold water, cold enough to cause pain without physical damage. Each participant was told there would be three trials. In one trial (which half took first and half took second), they endured the pain for 60 seconds before being allowed to remove their hand. In the other trial, they had the same temperature for 60 seconds, then a small amount of warm water was added to the bowl to raise the temperature by 1 degree: still painful, but not quite as painful. They waited an additional 30 seconds before removing their hand. Then the participants were asked which of the two experiences they would like to repeat for the third trial. Regardless of the order they took them in, a large majority preferred the 90-second experience to the 60-second one. But from the perspective of the experiencing self, that's absurd: every moment of pain in the 60-second trial was repeated in the 90-second one, and the overall experience was 90 seconds of pain instead of 60. And yet, because the end of the longer trial had less intense pain, the subjects' remembering selves preferred that one.

Kahneman has further studied this phenomenon and drawn a few conclusions from it. One is what he calls the peak-end rule: when we remember an experience, we tend to remember the most intense moment during that experience, as well as how we felt at the end of it, and essentially average those two. So for a surgery, people's impressions are mostly derived from how painful the worst part of it was and how painful it felt at the end; a much longer surgery with a less intense middle or an easier end will get a better review than a quicker surgery with less total suffering but a moment of intense pain or more pain near the end. Kahneman mostly focuses on suffering in this section, but it applies to pleasure as well: if we're listening to a beautiful symphony performance, but there's a harsh note near the end, that will diminish our remembering selves' impression of the symphony, even though that note can't possibly undo the hour of pleasure we got from listening to the rest of it.
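
The rule is simple enough to compute. Here's a sketch with invented pain scores (a 0-10 scale, one reading per second) for the two cold-hand trials; notice that duration never enters the formula:

```python
def remembered_pain(pain_by_second):
    """Peak-end rule: memory ~ average of the worst moment and the final moment."""
    return (max(pain_by_second) + pain_by_second[-1]) / 2

short_trial = [7] * 60             # 60 seconds at pain level 7
long_trial = [7] * 60 + [5] * 30   # same 60 seconds, plus 30 milder seconds

print("total pain:", sum(short_trial), "vs", sum(long_trial))  # 420 vs 570
print("remembered:", remembered_pain(short_trial), "vs",
      remembered_pain(long_trial))                             # 7.0 vs 6.0
```

The longer trial contains strictly more total pain, yet the remembering self scores it as the better experience - which is exactly what the participants chose.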

Another aspect is that our remembering selves are indifferent to duration. We feel about as positively about a three-week vacation as we do about a two-week vacation, even though we experience 50% more total pleasure over the course of the three-week vacation. Going back to the cold-hand experiment, people didn't seem to register that the longer trial meant 50% more time spent in pain.

Overall this seems like a mistake on our part: we should prioritize our actual experiences over our memories, our life-as-lived over our life-as-recalled. But the remembering self is what's responsible for making decisions: when we draw on our past to decide on a future course of action, it's our memories we marshal, not an integral of total pleasure and pain over a period of time. We need a balance, mostly for our own lives but also when thinking about how our actions impact others (such as doctors picking a course of treatment for their patient, or a musician arranging their set list).

Near the end of the book, Kahneman aligns rational Econ thinking with the Chicago School of libertarian-style economics. Under this worldview, you don't need to worry about people (Econs) making mistakes, because they're making the right decisions for themselves. Maybe they aren't saving for retirement because they plan to die before retirement age, or because they have decided that pleasure in their 30s outweighs pleasure in their 70s. But in the real world, people often don't think at all about the choices they make, or don't think them through thoroughly. Sometimes that's OK, but often it results in regret and suffering. Retirement savings is again an example, where people make impulsive decisions (cashing out a 401k) or no decision at all (not contributing), and feel profound regret later in life. Kahneman likes what is sometimes called "libertarian paternalism," where individuals are still free to make their own decisions (so long as they don't harm others), but society frames those decisions such that the easiest choice to make is the one most likely to lead to a good outcome for that individual.

TFaS was published in 2011, and I think it captures a lot of the optimism of the first Obama term, with competent managers viewing the government as a force for strategic good. Near the end of the book Kahneman lists successful applications of his research, including letting employers enroll employees in 401k contributions by default and auto-increase contribution amounts over time, which build on what we now know about the importance of framing, reference points, and how we respond to foregone gains very differently from losses. I don't have nearly the same optimism today about this research leading to improvement in people's lives. I think that government and business leaders are still aware of the research, but are using it to grow power instead of to benefit people, whether addicting people to social networks or dismantling civil society or normalizing corruption.

Anyways, for all sorts of reasons I think this is a great book and people should read it. As Kahneman notes early on, we can perceive flaws in others' reasoning much more easily than in our own. I'm sure that I'm guilty of that even while I'm writing this - "Yes, other people have a much harder time seeing the flaws in their own reasoning!" But I did walk away from this book with a lot to chew over about my own thinking and decision-making, past and future. And even if all you take away from this book is recognizing how others process information about the world, that's valuable too. It helps us predict how people will respond to given situations, how we might influence others, and so on.

The writing style is fantastic. I'd noted earlier that it contains its own proof, and in addition to that it's very anecdotal and story-driven, even though the book as a whole is saying that we rely too much on anecdote and story. He acknowledges what's effective and tells us that's what he's doing, showing his work while also making it clear that, say, System 1 and System 2 are just shorthands and not empirically existing entities.

I think that's all I have to say! If I could buy everyone I know one book, I would probably get them this one.
