Thursday, June 12, 2025

Far from the Madding Crowd

I recently finished reading "The Wisdom of Crowds" by James Surowiecki. I had to look up that name just now to spell it - I've been a fan of James' writing for two decades, mostly his excellent column "The Financial Page" in The New Yorker, and have been curious about this book for a while, but only now got around to reading it. My immediate impetus for checking it out was noticing it on this list of best financial books as compiled by Larry Swedroe. A lot of those are books I've already read and loved, a few I know I'm not interested in, but a few, like this one, jumped out at me and gave me a nudge to finally read them.

TWoC is a great read, the kind of book that's entertaining and makes you feel smart, like you've learned something both personally useful and true about the universe. It feels a bit Malcolm Gladwell-y, in the best sense. Like a lot of books in this genre, it's organized around a simple thesis: in this case, that groups of people tend to make better decisions than individuals. Groups are more than the sum of their parts, smarter than the smartest people in them, and can collectively arrive at solutions beyond any individual.

He opens with the classic contest to guess the number of items in a large jar - how many gumballs or jelly beans or ping-pong balls or whatever. At a state fair, people will write down their guess and submit it. At the end, if you add up everyone's guesses and divide by the number of entries, you'll end up with a really great approximation of the answer. In fact, that average answer will often be closer than any individual guess. He gives a lot of other examples of groups collectively arriving at good solutions, both to complex problems (finding where a submarine landed deep on the ocean floor) and seemingly mundane ones (how to walk down a crowded sidewalk).
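That averaging effect is easy to see in a quick simulation. This is just an illustrative sketch, not anything from the book: the jar count, the number of guessers, and the error distribution are all made up, but the punchline - the crowd's average beats almost every individual - holds up.

```python
import random

random.seed(42)

TRUE_COUNT = 850  # hypothetical number of gumballs in the jar

# Simulate 500 noisy guessers. Each is individually way off,
# but their errors point in different directions and partially cancel.
guesses = [TRUE_COUNT * random.uniform(0.5, 1.5) for _ in range(500)]

crowd_estimate = sum(guesses) / len(guesses)
crowd_error = abs(crowd_estimate - TRUE_COUNT)

# How many individuals actually beat the crowd's average?
better = sum(1 for g in guesses if abs(g - TRUE_COUNT) < crowd_error)

print(f"crowd estimate: {crowd_estimate:.0f} (error {crowd_error:.1f})")
print(f"individuals closer than the crowd: {better} of {len(guesses)}")
```

With these made-up numbers, the crowd's estimate lands within a couple percent of the true count, and only a small handful of the 500 guessers do better than the average.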

One thing that surprised me, though, is that after introducing the thesis, he spends more time exploring examples that seem to contradict the thesis: cases where crowds acted dumbly, where adding more people led to a worse solution, where something that worked well at a small level failed to scale up to a larger one. I think this really helps clarify the main point: by seeing what doesn't work, we can identify what does. The real world is, of course, complicated. We can't say "A always causes B!" We can say "A usually leads to B!" And then we can think about why that's the case, and what makes it more likely for A to lead to B, and what obstacles might prevent A from reaching B.

James classifies the types of problems crowds face into three main types, what he calls "cognition problems," "coordination problems" and "cooperation problems." (Again, this feels pretty Gladwell-y.) Cognition problems are the simplest type, and are cases where there is a specific right answer to find. You don't know at the outset what that answer is, but at some point in the future you will know whether you chose correctly or not. The gumballs-in-a-jar problem is one example, and so are "Where did the submarine crash to the ocean floor?" and even "How much money will this company earn over the next 20 years?" or "Who will be the next President of the United States?" There are multiple ways that groups can organize to tackle these problems, and he is a big fan of a "market" organization, where people bid on their best guesses: putting some skin in the game seems to drastically improve the accuracy of predictions. (As a loyal Patreon subscriber to Election Profit Makers, I was intrigued by the description of the Iowa Electronic Market, a forerunner to PredictIt, Polymarket and other modern prediction markets.)

In a "coordination problem," there isn't some fact (past, present, or future) that you're trying to uncover. Instead, a group needs to decide how to accomplish some task, with everyone acting as individuals. Most of his examples here involve traffic, like busy New York City sidewalks or congested freeways. What's interesting about coordination problems is that, as a species, we are actually really good at coordinating, and we tend to do it without much thought or central direction. When scientists studied pedestrians, they discovered that pedestrians don't walk directly behind one another: each one walks slightly to the side of the person in front of them, so they can peer over their shoulder and see what's ahead. We do that for our own benefit, so we can be prepared if foot traffic is snarling ahead, but it also benefits everyone else in the group, since it keeps traffic moving smoothly.

Another example he looks at that I really liked is seating on a subway car. During rush hour, there are more riders than there are seats available. So we have a coordination problem to solve: who should get seats? The unspoken system we've landed on is simple: if you'd like to sit, and there's an open seat, you take it. (With caveats, of course, for accessible seating.) Now, you can definitely argue that this isn't an optimal system, and it certainly isn't the only system you can imagine. Maybe we should prioritize giving seats to the people who have the longest to ride - that does seem fair. But if we were to implement that system, then you would need some Seating Czar on each subway car, who would quiz each rider as they boarded, compare their travel plans to others already on the car, and reassign seats as needed. Or each boarding rider would need to quiz every other rider to determine their seating order. The system we ended up with is good enough, much simpler, has way less overhead, and actually ends up approximating that ideal system: people who ride longer have more chances to claim seats, so while it won't necessarily be "fair" 100% of the time, it will be pretty fair most of the time. That's decent coordination.

Pretty much everything in the book is based on published scientific studies, not anecdotes, which I appreciated. In the coordination section, he notes that a lot of our behavior is strongly influenced by culture. People feel strongly about queues, for example, and will react strongly if someone asks to cut into a queue; on the other hand, people are much more willing to give up a subway seat if politely requested. The line is a more powerful force in our psyche than a subway car. Behavioral economists and sociologists have played "coordination games" in different countries around the world, and found differences in how players behave. Which is fine - again, for coordination there isn't necessarily a "right" or a "wrong" answer. We can all agree to drive on the left, or we can all agree to drive on the right, and as long as we're all doing it the same way we've successfully coordinated.

The last, and most challenging, class of problem he considers is the "cooperation problem." In this case, you are trying to solve a problem and make something happen: work as a construction crew to build a building, or as a party committee to put on a prom, or as NASA to bring a shuttle of astronauts safely back to Earth. One of the inherently hard things about this type of problem is that there isn't a clear black-or-white "right answer", but there definitely are good outcomes (everyone had a good time at Prom!) and bad outcomes (nobody had a good time at Prom!). With a cooperation problem, you need to decide what needs to be done, how to do it, and execute on it.

There are lots of different ways to organize things. You could have a pure committee, where each person gets one vote, and everyone does what the majority says. You could have a dictatorship, where one person (the boss) decides what to do, and makes everyone do it. And everything in between: pyramidal management structures, affinity groups, multiple classes of participants, etc.

In general, James likes broad-based groups, to tap into the wisdom of crowds. Adding more voices can bring in more ideas, help identify blind spots, and lead to better outcomes. He goes into a lot of detail on where this is and isn't effective. Diversity is very important - not sociological diversity, but people with different perspectives and backgrounds who bring unique thoughts to the table. Adding a bunch more Harvard MBAs likely won't improve decision-making, but adding a mix of MBAs, long-time employees, outsiders from other organizations, and customers will.

He writes about "private knowledge," which just means something unique that one person has which isn't shared by everyone. This could be some expertise, but also just a past experience or a random fact. The sum of all the private knowledge in a group will be greater than the private knowledge of even the most knowledgeable person. So an interesting quality of many group dynamics is that, if you add "dumber" people to a group, the decisions that group makes can actually become "smarter." The total knowledge of the group increases additively; it isn't reduced to the mean.

But how the group is organized has a huge impact on realizing this potential. There's a long and kind of heart-breaking example of the Columbia disaster, and the days of internal NASA meetings that completely failed to handle the problem. (The underlying issue: during launch, a piece of foam broke off and damaged the shuttle's protective heat tiles. When it re-entered the atmosphere at the end of the mission, the intense heat destroyed the shuttle.) One big aspect of this was management: the mission management team chair, Linda Ham, quickly settled on an interpretation of the facts that she liked, and downplayed and cut off other voices who suggested alternative (and, it turns out, better) explanations. This is a really wrenching section to read, I think especially because the dynamics shown here play out in workplaces throughout the country, just rarely with such severe consequences.

James doesn't just blame the one person at the top (even if the fact that one person was driving everything contributed to the problem). He rhetorically asks, what was the difference between the Apollo 13 mission and the Columbia mission? One answer is that the Apollo 13 ground control was much more diverse. As he points out, that is a kind of shocking thing to say: when you see photos of the Apollo-era ground control, everyone has identical haircuts, identical glasses and identical short-sleeved shirts. But because NASA was so new, everyone had worked somewhere else before joining. Some were ex-military, some were in manufacturing, some had worked in research labs, some managed retail stores. By contrast, by the time of the Columbia disaster, NASA was a much more insular and bureaucratic operation. Most people there had joined right out of college and spent their entire careers inside NASA. Because of that they all shared the same culture, similar mindsets and attitudes towards hierarchies. That made it far less likely for someone to speak up to challenge the decision of a leader, and even less likely that they would press an issue after being shut down.

The book is filled with nifty examples like that. It is very much a relic of its time, having been written in 2004, and I felt a little sad reading James's praises of Google. I was reminded of just how magical Google was back then - you could type something into a search box and it did a really good job at finding you the information you needed. He explains how the PageRank algorithm works, which is essentially a voting mechanism, tapping into the "wisdom of crowds" to find the most useful information instead of relying on a single authoritative source (a la Yahoo at the time). Google's fall from grace has been well-documented, and I'm a bit more sympathetic to their decline: even before the disastrous decision to remove the wall between Search and Ads, Google had been dealing for over a decade with an entire SEO industry that had sprung up specifically to manipulate its algorithm into unduly weighting preferred sites. Voting worked well when the data was clean, but when you vote with dirty data you'll get dirty results.
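The "voting" framing of PageRank can be sketched in a few lines. This is a toy version under my own assumptions - a made-up four-page link graph and the standard damping factor of 0.85 - not Google's actual implementation, but it shows the core idea: each page splits its rank evenly among the pages it links to, and the process repeats until the votes settle.

```python
# Hypothetical link graph: each page "votes" for the pages it links to.
links = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],
}

DAMPING = 0.85  # standard damping factor from the PageRank paper
pages = list(links)
rank = {p: 1 / len(pages) for p in pages}  # start with equal rank everywhere

# Power iteration: repeatedly redistribute each page's rank across its outlinks,
# with a small (1 - DAMPING) share handed out evenly to model random surfing.
for _ in range(50):
    new_rank = {p: (1 - DAMPING) / len(pages) for p in pages}
    for page, outlinks in links.items():
        share = rank[page] / len(outlinks)
        for target in outlinks:
            new_rank[target] += DAMPING * share
    rank = new_rank

top = max(rank, key=rank.get)  # "c" collects the most inbound votes
```

Here page "c" wins because three of the four pages link to it - the crowd of linkers has collectively "voted" it most useful. The SEO problem the post mentions is exactly an attack on this loop: manufacture enough fake inbound links and you manufacture fake votes.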

It was also interesting to read about business in the window after the tech bubble crash of the late 90s-early 2000s and the Great Financial Crisis of 2008. I don't think the GFC invalidated anything he wrote here, but he certainly would have referenced it if it had already happened. In the context of this book, an asset bubble like tech stocks is probably a more applicable example than a systemic financial problem like the GFC.

I think of James primarily as a business writer, but TWoC as a whole is much less about business than I had expected. He draws mostly from science and sociology, with smatterings of history, pop culture and other fields. There are more business examples as the book continues, but he mostly pays attention to how things work inside individual businesses, especially at the level of small teams. This is much more micro than most of my business and econ reading these days, which are far more focused on macro. But the micro level is much more applicable to our lives, the level where we can recognize problems and systems and personally act to fix them.

I found myself thinking periodically of Nexus while reading this, and parts of this book seemed to be in conversation with that one, but that may be because I read them back-to-back. Both books are interested in information, and have examples where adding more information leads to a worse outcome. Returning to that gumball example, late in the book James recounts a study where a professor ran the classic experiment, and as usual got a pretty accurate result. He didn't share this result with his class, but instead asked them to submit new guesses, this time prompting them to note that the container was made out of plastic, and that the area under the cap could contain additional gumballs. All of this was information the students could have observed (and maybe did) in the first round, and all of it was true. And yet in the second round of voting, guesses came in significantly over the actual answer. Having an authority figure inject additional information skewed the natural accuracy of the crowd and led it astray. That's interesting!

Overall I think The Wisdom of Crowds is a more optimistic book than Nexus. It's hopeful that crowds can collectively identify truths about an underlying reality and make real discoveries. That isn't a guarantee, and there are definitely ways that crowds can be led astray or hijacked by bad actors, but in general we as a society do our best work when we're cooperating in groups. He has some thoughtful pages on the implications for democracy as well. It's pretty shocking how poorly informed voters are: most people (at least as of 2004, though I doubt it's much better now) believe that the US spends 20-40% of its tax dollars on foreign aid, when it's actually less than 1%. Collectively, we're shockingly ignorant about economics, foreign policy, and most big-picture topics. And yet, we (historically) have done a decent job at electing leaders who do a decent job at handling those things. Some of us daydream of a technocratic elite that knows stuff and can do things, but the dumb masses end up with a system that works just as well, and is (theoretically) more resistant to capture or authoritarianism.

So, yeah! I liked this book, it's one of those general topics that I think will be interesting and relevant to most people.

1 comment:

  1. Bro - really fascinating, the problems you describe from the book are definitely something we struggle with in our field (though of course we use different names) both with inter-agency work and with citizen response. There's a lot of research work on how to better communicate and collaborate, but I'm going to digest the note on mono-culture because I think that aligns with some of the issues I've observed.

    I'll add to my reading/wishlist. Thanks!
