I recently read "Nexus" by Yuval Noah Harari. My friend Dan recommended it, and I can see why: it's an intriguing book, bursting with knowledge, analysis, theories and predictions. The author is a professional historian, and the book liberally cites historical examples, but the book is more interested in describing how society works as a system: the parameters, the choices and limits available to us, how technological developments have opened or closed doors in the past and how they might change in the future.
Looking back, this kind of feels like two books to me. The first third looks at the history of "information networks" from the dawn of our species through the present day. This is a very broad but very vital topic: how ideas are generated, debated, accepted, spread, and how they affect us as individuals, groups and nations. The second two thirds focus on the impact of computers in general and AI in particular, sounding an alarm for the potentially existential threats they pose to our way of life. I found the first section extremely compelling and convincing, the latter part less so.
It's hard to summarize the whole book in a blog post, but my primary take-away of his argument is that, while we tend to think of "information" as reflecting reality, it doesn't necessarily have any connection to reality. "Information" is just data or thoughts, which could be true or misleading or false or fictional. Nonetheless, despite not necessarily being true, information does have a profound impact on our entire lives. Concepts like "money" are purely human inventions that don't reflect natural law, yet the shared ideas we have about "money" control so many aspects of our lives. In fact, the most powerful forces in our history have essentially been myths and stories we tell ourselves: about religion, race, nations on the macro level; love, friendship, rivalry, heroism on the micro level.
I'll note early on my biggest criticism of the book, that some of the language feels a bit shifty. Harari calls this out in particular: he writes a lot about "information" but acknowledges that this word means something different to a biologist, a historian, a journalist, a computer scientist, and so on. I think his personal definition is carefully crafted and fit to purpose, but I get the nagging feeling that there's some semantic sleight-of-hand in how he uses it throughout the book.
Somewhat similarly, he writes a lot about "dictatorship" in contrast with "democracy," but he seems to basically define "democracy" as "a good government." He explicitly says that a democracy is not about majority rule, which I think is insane. In my opinion, you can have dictatorships that protect minority rights, or democracies that do not protect minority rights, but he seems to think that any system that protects minority rights is automatically democratic. He really should use different terms for what he's talking about instead of slapping significantly different meanings on well-established words. (It's wild that he defines "populist" and traces back its etymology but never does this for "democracy.")
Later on he somewhat snippily writes that he doesn't want to create neologisms so he insists on using common words. Which, fine, that's his choice. But I don't think you get to do that and then complain about how people are misunderstanding or misinterpreting your argument, when you're using common words to mean something different from how most people understand them. I vastly prefer Piketty's approach, using a neutral word like "proprietarian" that he carefully defines and then can usefully examine in his work, sidestepping the confusing baggage that comes with words like "liberal" or "capitalist" (or "democrat").
But those complaints about words aside, I think Harari's big argument is very correct and is actually something I've been thinking about a lot lately, paralleling some significant changes in my own thinking over my life. When I was a baby libertarian in my late teens and early twenties, I whole-heartedly agreed with statements like "The solution to hate speech is more speech." As I've grown older and observed how things actually work in the real world, and how things have worked in the past, I've come to see that this isn't true at all: adding more voices does not automatically, consistently or reliably neutralize the harm generated by hate speech.
Prior to reading this book, I've tended to think that this is a symptom of the modern world, where the sheer volume of information is far too much for us to properly inspect and interrogate. We have enough time to read 100 opinions when rapidly scrolling a social media feed, but won't click on any of the articles they link to, let alone follow up on those articles' primary sources (if any). So we eagerly believe the lies, spins, misrepresentations and exaggerations we encounter in our informational ecosystem. Why? We're biologically conditioned with a tendency to latch on to the first thing we hear as "true" and become skeptical of subsequent arguments or evidence against our previously received beliefs.
As Harari shows, though, the spread and persistence of misinformation isn't at all a modern phenomenon. One especially compelling example he gives is the history of witch hunts in Europe. In the medieval era, belief in witches was very local and varied a great deal from one community to the next: each village had its own folklore about witches, maybe viewing them as a mixture of good and bad: sometimes bringing rain, sometimes killing goats, sometimes mixing love potions. Late in the medieval era, official Catholic church doctrine held that belief in witches was a superstition, and that good Christians should trust in God rather than worry about magical neighbors. That changed when Heinrich Kramer, a man with bizarre sexual and misogynistic hang-ups, rolled into the Alps denouncing particular women for having sex with Satan and stealing men's genitalia. He was shut down by local secular and church authorities. He left town, got access to a printing press, and printed up thousands of copies of the Malleus Maleficarum, which gave lurid and shocking details about a supposed global conspiracy of secret witches who had infiltrated every village and carried out horrific crimes against children. This took off like wildfire and led to centuries of torture and execution of innocent people. It's all very similar to QAnon, Pizzagate, and trans panics today: an individual can write a compelling and completely false narrative and set off a global campaign of hate and violence, completely deaf to the litany of evidence against these lies. Whether in the 1600s or the 2000s, having more information didn't bring the world closer to truth or solve problems; it led to immense misery and evil.
The conventional view of the printing press is that it broke the Catholic church's religious stranglehold on information and enabled the development of the Scientific Method, allowing people to freely publish and share their ideas. There is some truth in this, but it's overstated. Copernicus's groundbreaking book on the heliocentric system failed to sell its initial run of 1000 copies, and has been called "the worst seller of all time." Meanwhile, the Malleus Maleficarum instantly sold through multiple runs and continued to be a best-seller for centuries. The fact that Copernicus's book was more true than Kramer's did nothing to increase its popularity or reception or impact on the world.
Instead of unfettered access to information, Harari credits the Scientific Method to the creation of institutions with a capacity for self-correction. This was very different from the Catholic Church, which was (and is!) forced by its own doctrine to deny any error. Interestingly, Harari points out that most of the founders of the scientific revolution did not hail from universities, either. Instead, they formed an information network of royal societies, independent researchers, journals and so on. The key difference here was that information was peer-reviewed: people wouldn't just say "Trust me," but would share their theories, experiments and data as well as their conclusions with their colleagues, who would look for errors, omissions or alternative explanations. And if an error was later discovered, journals would publicize the error, making corrections to the past record rather than cover it up or ignore it. While this seems like it would weaken the reliability of a source, it ends up building trust in the long run: the reality is that, whether we acknowledge it or not, we are fallible, and by embracing this self-correcting system we can move in the direction of greater truth, not merely the most compelling story.
Fundamentally, Nexus is arguing against what it calls the "naive" view of information, which is basically "More information will reveal the truth, and the truth will produce order and power." This idea is that more information is always good, because true and useful information will drown out the bad and lead to a better understanding of how the world works. Again, this view is easily disproven by history. One alternate view is what Harari calls the "populist" view, which essentially denies that an eternal "truth" exists at all, and equates information with power. Controlling the production and flow of information will produce power, which in the "populist" view is implicitly good in its own right.
Taken from another angle, Harari thinks that there is a "truth" which reflects "reality", but "information" doesn't have any intrinsic relationship to truth. Some information truly reflects reality, other information distorts reality. The consistent effect of information is that it connects: when we tell stories to each other, we grow more connected, and I can persuade you of my ideas and convince you to act in a certain way, or you can make me feel a kinship with you and act for your benefit. There is an even larger class of information that contributes to what he calls an "intersubjective reality". This is information that exists on its own, independent of an underlying physical reality. Think of story-telling: you might make up an impressive work of fiction, someone else might write fan-fiction based on your world, a critic might write a review of your fiction summarizing what happens in it, a fan might argue that a character should have made a different choice than they actually did. You end up with this entire ecosphere of carefully-constructed and internally-consistent thoughts about an idea that doesn't have an underlying reality. You, the critic, and the fan are all choosing to participate in a shared intersubjective reality.
There is actually some evolutionary advantage to our ability to create and share stories. We talk about how our fight-or-flight instincts are biologically inherited from our ancestors who needed to quickly react to the presence of a saber-toothed tiger. Harari brings up the interesting point that our Neanderthal and Sapiens forefathers had a similar evolutionary advantage around their ability to cooperate in teams. You can have a small band of, say, chimpanzees or bonobos that may cooperate against another band, but you never see chimpanzee communities of hundreds or thousands. You can get human bands of that size, though, thanks to our ability to share stories and ideas. These ideas may be built around myths, concepts of extended kinship, oral traditions of prior hardships and victories.
To this day we have a very strong reaction to all sorts of "primitive" stories: boy-meets-girl, good-man-beats-bad-man, sibling-rivalry, etc. These stories conferred evolutionarily beneficial advantages in winning mates, having children, taking territory and defeating enemies. Today, we still strongly respond to those stories; however, in the same way that our daily lives have many more encounters with rude bicyclists than with saber-toothed tigers, we're far more likely to need to navigate an opaque bureaucracy than to kill a rival chieftain. But we don't have a gut-level appreciation for stories about bureaucracies in the same way we appreciate action or romance stories. And our brains don't retain information about bureaucracies very well: we can remember bible stories about rivalries and murder and who fathered whom, but we are terrible at remembering lists of sewage inspection reports or NGO organization charts or certification requirements.
As an aside, this observation reminds me of William Bernstein, who writes about how man is a story-telling animal. We respond much more strongly to stories than we do to data, which was evolutionarily adaptive in the past (we won't eat the red berries because someone told us that they're poisonous) but gets us into all sorts of trouble today (we listen to our friend who says investing in bitcoin is safer than US Treasuries). Interestingly, I think this observation is from his finance book The Four Pillars of Investing and not one of his history books, although it seems even more relevant to history. But Bernstein, a trained neurologist, has a keen understanding of how our biological makeup and mental hardware impact our daily lives and how we organize as a society.
Harari is a big fan of the Scientific Method, as is Bernstein (in The Birth of Plenty), but neither writer views it through rose-tinted glasses. One thing I've heard in the past that Nexus backs up is that individual scientists almost never change their mind, even when faced with persuasive empirical evidence challenging their prior beliefs. Scientists are humans, with egos and prejudices, concerned about maintaining their prestige and positions. When science advances, it isn't that everyone reads a journal and changes their mind; it's that the old guard continues believing the old thing but eventually dies off, and is replaced by a new generation that grew up being persuaded by the better, new belief. Change is measured in decades, not months or years.
Which is fine, if that's how it works, but feels discouraging when considering the problems we face today. We may not have decades to react to crises like climate change or the subversion of democracy. And "decades" is specifically for the class of professional scientists who pay attention to evidence for a career; it's even less likely that the populace as a whole will change their mind to a truer, more correct belief. I mean, Newtonian physics was disproven something like 120 years ago, yet we still learn it in school and most of us follow it in daily life; those of us who have finished high school are vaguely aware of relativity, and have probably heard of string theory but don't really understand it.
All of which is to say: I don't think science can save us from urgent, widespread problems. It's slow, and while it can influence the elites it can't change the mind of the masses. Harari seems to suggest that the real key is trust. If we're a society that trusts scientists, because we know they peer-review their work and admit mistakes and are continually improving, we may accept their pronouncements even if we personally don't have the time or inclination to check all their work. But if we don't trust scientists, we lose the benefits of science: longevity, productivity and affluence.
Science is ultimately about truth, but as Harari keeps noting, truth isn't the be-all and end-all: a society with access to truth does have some advantages (it can keep its citizens healthier and produce more reliable military equipment), but it is not guaranteed to triumph over a society with less devotion to the truth. Harari sees Order and Truth as two separate pillars upon which societies are built. You need both of these. Without truth you can't survive: you'll have feces in the water supply, desolate cropland with the wrong grains planted in the wrong season, walking on foot because you don't have motors. Think of something like the Great Leap Forward in China, which upended scientific truths and led to internal misery and the stunting of external power. (In a surprising coincidence, Nexus devotes a few paragraphs to Trofim Lysenko, who I just wrote about in my last post: he was a charlatan who convinced Stalin that genetics was bogus and led the USSR down a path that led to the evisceration of its sciences and widespread man-made famines.)
But you also need order in a society. Without order, you have anarchy, the collapse of the bureaucracy and the inability to function. Again you get bad sewage, because nobody is preventing people from poisoning the water supply; desolate crops, because farmers know bandits will take anything they grow; walking on foot, because nobody is organizing the factory that makes motors. Between truth and order, you can make a convincing argument that order is the more important factor. Stalin was a moral nightmare, and his internal terror was devastating to truth, causing huge real problems like massive losses in the Red Army; and yet the system was incredibly stable. Nobody dared challenge Stalin despite his many failures, the USSR endured for multiple generations, and it had a real shot at total world domination. Or consider the Catholic Church: it has consistently prioritized order over truth, defending bad ideas like the geocentric model of the universe, disastrous crusades and self-destructive inquisitions; and yet it has lasted for two thousand years, far longer than the School of Athens, the League of Nations or the Royal Academy of Sciences.
In an ideal world, of course, you would balance these two. Those of us in the West will generally push for the primacy of Truth, but still recognize Order as an essential ingredient. There may be times when this requires tough choices, as in the 1960s with widespread dissent and protest against the Vietnam War and racial injustice. One thing I really like about Harari is that, like Piketty and unlike Marx, he foregrounds the importance of choice. Order doesn't inevitably triumph over Truth, nor Truth over Order; multiple stable configurations exist, we can help shape the kind of society we live in, and we should also recognize that other societies may follow other paths, with results that are different from ours and may be stronger or weaker than us.
Going back to the various views of information, Harari has rejected the naive view that information leads to truth, and truth leads to wisdom and power. He also rejects the simplistic populist view that there is no truth or wisdom, that information directly leads to power. His view is that information produces both truth and order. Truth and order, in tandem, generate power. Separately, truth also leads to wisdom. Wisdom relies on truth, but power does not require wisdom. It's an interesting view; I think I'll need to sit with it a while longer to digest and decide if I actually agree with it, but it does feel useful to me.
Phew! All of the above thoughts and reactions are for the first third of the book, which is mostly teeing up the second two-thirds. (There's a lot more I didn't get into in this post, like how advances in information technology enabled large-scale democracies for the first time, the historical development of the bureaucracy, or the 20th-century conflicts between democracy and totalitarianism.) I'm less enthused by the last 2/3 of the book which is mostly about the threat posed by AI.
Examining my own reaction, I think I have a knee-jerk skepticism. Overall I find his arguments persuasive but annoying. I am not at all an apologist for or proponent of AI, but I've been in the camp that views AI as the latest graduated step in advancing technology, whereas Harari sees it as fundamentally different from prior technologies. His point is that algorithms in general and AI in particular are agentic: they can actually take action. Up until now technology has merely augmented human decision-making. A human needs to consult a book, then execute the action described by the book; a human is in the loop, so there's an opportunity to stop and question the book's instructions before carrying them out. But a computer program can, say, deny credit card applications or impose a prison sentence or alter the outflow rate at a sewage treatment facility without requiring any human intervention. Two programs can directly communicate with one another in a way that two books or two TV shows could not.
In another interesting little coincidence, I just recently (re?-)watched The Net, the 1995 thriller starring Sandra Bullock. Many parts of that movie felt like they were in strong conversation with Nexus. For example, in one scene her character Angela Bennett is trying to get back into her hotel room, but the clerk tells her, "The computer says that Angela Bennett checked out two days ago." She insists, "No, I'm Angela Bennett, and I didn't check out, I'm standing right here!" but the clerk refuses to engage with her and moves on to the next person. Even thirty years ago we had offloaded our decision-making to the computer, so what's different now? The fact that there won't even be a clerk in the future: just touchless entry at the door, with nobody to hear your complaint or override the system. And the ubiquity of the system: in The Net, human hackers had singled out Angela Bennett in particular (much like Will Smith's character in Enemy of the State); but in the future, AI might target entire classes of people: the sick, or anyone with a criminal record, or humanity as a whole.
The triumph of AI isn't inevitable: it requires us choosing to give it control. But if we do make that choice, we may find it impossible to reverse. We can't appeal to AI's mercy or wait for it to fall asleep. Harari repeatedly refers to AI as not just "Artificial Intelligence" but "Alien Intelligence": it isn't that it thinks like a human but more rapidly, it "thinks" in a completely different way from us. For well over a decade now AI has been a black box: we can't understand how it makes its decisions, only watch the final choice it makes. All this adds up to a very urgent and potentially deadly situation.
Harari does offer some suggestions for how to address the threat posed by AI, which I do appreciate. It's very annoying when books or articles lay out doom-and-gloom scenarios without any suggested solutions. The proposals in Nexus tend to be pretty narrow and technical. They include things like keeping humans in the loop, requiring us to sign off on decisions made by AI; along with this, AI needs to explain its reasoning. Harari also muses about banning or at least prominently labeling all bots and generated content online: we waste far too much mental energy arguing against bots, and the more we engage with them the better they get to know us and the more likely they are to persuade us.
As modest as these proposals seem, he acknowledges that they still seem unlikely to be implemented. In the US they would require legislative action, which is incredibly difficult these days, and even more so when the majority party is (perhaps temporarily) benefiting from AI support.
One of my annoyances with this book is how Harari stumbles into what I think of as terminal pundit brain, the impulse to treat political factions as equivalent. He writes things like "Both parties are losing the ability to communicate or even agree on basic facts like who won the 2020 elections." It's insane to act like the Democratic party is equally to blame for January 6 and election denialism! Elsewhere, though, he does acknowledge the reality of the situation, making a cogent observation about the abrupt transformation of right-leaning parties. Historically the conservative party has, following Edmund Burke, argued for cautious, slow and gradual change, while the progressive party has argued for faster and more ambitious change. But in the last decade or so, Trump's Republican party, along with parties abroad like Bolsonaro's in Brazil or Duterte's in the Philippines, has transformed into a radical party that seeks to overthrow the status quo: getting rid of bureaucracies, axing the separation of powers, imposing new economic systems, and broadly and rapidly changing social relations.
This is a surprising change on its own, but Harari notes that this has also thrust traditional left-leaning progressive parties like the US's Democratic party into the unlikely role of defender of the status quo. They aren't necessarily adopting more conservative positions, but they do want to retain the overall democratic system. While Harari doesn't dig into this aspect much further, it does really resonate with me. I often feel like the Democratic party insists on bringing a knife to a gunfight. It's very frustrating to hear, say, Chuck Schumer repeat the tired paeans to bipartisan cooperation and consensus, when the house is burning down behind him. I do feel a bit more sympathy for him when I think of how he wants to keep a robust pluralistic democracy running, but I have yet to see any convincing evidence that his actions will help bring that about. My overall pessimistic feeling has been that that era is just over now, and while a populist left may be less stable than a broad-based democratic left or broad-based democratic right, it's the best option available to us now.
I think that Piketty is much more useful in this area than Harari. If we're going to marshal the resources to actually address climate change and similar existential issues, we need to retake democratic control of our wealth, which in practical terms means taxing the rich and limiting the influence of money in our politics. It's no coincidence that the ascendant conservative faction tearing down institutional systems is the faction aligned with the wealthy.
Harari points to the breakdown in political and social cohesion in the US. During the 60s the country was wracked by big divisions over civil rights, women's rights, the war in Vietnam, and other points of friction. The entire Western world seemed to be coming apart at the seams. And yet the system still functioned pretty well: the Civil Rights Act was supported by majorities in both parties, and the Nixon administration broke every norm of the justice system yet ultimately abided by court orders. The fragile and messy democratic West eventually came through this period and triumphed over the more order-oriented USSR. Today there's no bipartisanship, no shared set of facts let alone ideology, a lack of trust not just in specific bureaucracies like the CDC or the FBI but in institutions like science and government as a whole, and a rejection of core structural decisions like the separation of powers.
Harari admits that he doesn't know what the reason is for this breakdown in consensus that has occurred over the last decade or so, but he implies that there's at least a chance it's the impact of alien intelligence: shrill political bots driving outrage on social media, algorithms steering individuals into more siloed media environments, and so on. Personally, though, I think you can draw a straight line from Newt Gingrich giving speeches to an empty House of Representatives in 1984 through to Donald Trump pardoning the January 6th rioters in 2025. There's a pundit-brain temptation for symmetry and a refusal to acknowledge that one faction just wants power and doesn't have qualms about how to get it or keep it.
Again, there's a lot of stuff in this book that I found valuable which I haven't unpacked in this post. I should mention that Harari does a terrific job at examining Facebook's culpability in the genocide against the Rohingya in Myanmar and YouTube's role in bringing right-wing nationalist parties to power in 2016. That's all stuff I'd heard before (and lived through!), but it's really helpful to view it as a unified trend and not isolated phenomena. But once more, I think Harari's instinct towards bipartisanship blunts the potential insights he could have. He views the algorithmic pull towards outrage in purely capitalist terms, as angrier people will interact more with content, not only generating direct ad revenue but also providing Facebook and Google with additional data they can store to make their products more powerful. But he skips over the fateful Peter Thiel-led decision to axe the human team curating Facebook's news in favor of the algorithm in the first place. Likewise, he doesn't mention how the GOP House accused YouTube of left-leaning bias and pushed for a more "neutral" algorithm, which in practice meant less truthful content and more outrageous content. Harari argues that we have collectively given too much power to the machine; in my view, a specific faction has led that charge, and is benefiting the most from the consequences.
I should also mention that Nexus is an extremely readable book. It looks a bit long, but I flew through the whole thing in just a few days. The prose is clear, and each section is just a few pages long and makes a cogent point. For all my complaints, I think Harari does an excellent job of noting which parts of the book are well-established facts, which are well-supported inferences, which are controversial claims, and which are merely speculative scenarios.
Overall I think I'd recommend this book to others. I think the first section is fantastic; the latter two-thirds is arguably even more important but less fun. I am curious to check out Harari's earlier books, since it sounds like he's been working in adjacent areas for a while. I like his mix of concrete history and abstract systemic theorizing, and am curious what other tools he has come up with.
There is something insidious about the party that invented "fake news" holding the door open for the technology of "deep fakes".
I'm actually less worried about the agentic AI than the humans that hold the reins while pretending AI is neutral. They'll craft the beast to tell the stories they want to tell, whether about "white genocide" or who won what election. There's an entire generation that turns to the black box for answers, but there's never been a more "pay no attention to the Peter Thiel behind the curtain" situation.