Showing posts with label technology.

Monday, May 26, 2025

Nexus

I recently read "Nexus" by Yuval Noah Harari. My friend Dan recommended it, and I can see why: it's an intriguing book, bursting with knowledge, analysis, theories and predictions. The author is a professional historian, and the book liberally cites historical examples, but it is more interested in describing how society works as a system: the parameters, the choices and limits available to us, how technological developments have opened or closed doors in the past and how they might do so in the future.

Looking back, this kind of feels like two books to me. The first third looks at the history of "information networks" from the dawn of our species through the present day. This is a very broad but very vital topic: how ideas are generated, debated, accepted, spread, and how they affect us as individuals, groups and nations. The second two thirds focus on the impact of computers in general and AI in particular, sounding an alarm for the potentially existential threats they pose to our way of life. I found the first section extremely compelling and convincing, the latter part less so.

It's hard to summarize the whole book in a blog post, but my primary take-away from his argument is that, while we tend to think of "information" as reflecting reality, it doesn't necessarily have any connection to reality. "Information" is just data or thoughts, which could be true or misleading or false or fictional. Nonetheless, even when it isn't true, information has a profound impact on our entire lives. Concepts like "money" are purely human inventions that don't reflect natural law, yet the shared ideas we have about "money" control so many aspects of our lives. In fact, the most powerful forces in our history have essentially been myths and stories we tell ourselves: about religion, race and nations on the macro level; love, friendship, rivalry and heroism on the micro level.

I'll note my biggest criticism of the book early on: some of the language feels a bit shifty. Harari himself calls this out: he writes a lot about "information" but acknowledges that the word means something different to a biologist, a historian, a journalist, a computer scientist, and so on. I think his personal definition is carefully crafted and fit for purpose, but I get the nagging feeling that there's some semantic sleight-of-hand in how he uses it throughout the book.

Somewhat similarly, he writes a lot about "dictatorship" in contrast with "democracy," but he seems to basically define "democracy" as "a good government." He explicitly says that a democracy is not about majority rule, which I think is insane. In my opinion, you can have dictatorships that protect minority rights, or democracies that do not protect minority rights, but he seems to think that any system that protects minority rights is automatically democratic. He really should use different terms for what he's talking about instead of slapping significantly different meanings on well-established words. (It's wild that he defines "populist" and traces its etymology but never does this for "democracy.")

Later on he somewhat snippily writes that he doesn't want to create neologisms so he insists on using common words. Which, fine, that's his choice. But I don't think you get to do that and then complain about how people are misunderstanding or misinterpreting your argument, when you're using common words to mean something different from how most people understand them. I vastly prefer Piketty's approach, using a neutral word like "proprietarian" that he carefully defines and then can usefully examine in his work, sidestepping the confusing baggage that comes with words like "liberal" or "capitalist" (or "democrat").

But those complaints about words aside, I think Harari's big argument is very correct and is actually something I've been thinking about a lot lately, paralleling some significant changes in my own thinking over my life. When I was a baby libertarian in my late teens and early twenties, I whole-heartedly agreed with statements like "The solution to hate speech is more speech." As I've grown older and observed how things actually work in the real world, and how things have worked in the past, I've come to see that this isn't true at all: adding more voices does not automatically, consistently or reliably neutralize the harm generated by hate speech.

Prior to reading this book, I tended to think that this is a symptom of the modern world, where the sheer volume of information is far too much for us to properly inspect and interrogate. We have time to skim 100 opinions while rapidly scrolling a social media feed, but we won't click on any of the articles they link to, let alone follow up on those articles' primary sources (if any). So we readily believe the lies, spin, misrepresentations and exaggerations we encounter in our informational ecosystem. Why? We're biologically conditioned to latch on to the first thing we hear as "true" and become skeptical of subsequent arguments or evidence against our previously received beliefs.

As Harari shows, though, the spread and persistence of misinformation isn't at all a modern phenomenon. One especially compelling example he gives is the history of witch hunts in Europe. In the medieval era, belief in witches was very local and varied a great deal from one community to the next: each village had its own folklore about witches, maybe viewing them as a mixture of good and bad: sometimes bringing rain, sometimes killing goats, sometimes mixing love potions. Late in the medieval era, official Catholic church doctrine held that belief in witches was a superstition, and that good Christians should trust in God rather than worry about magical neighbors. That changed when Heinrich Kramer, a man with bizarre sexual and misogynistic hang-ups, rolled into the Alps denouncing particular women for having sex with Satan and stealing men's genitalia. He was shut down by local secular and church authorities. He left town, got access to a printing press, and printed up thousands of copies of the Malleus Maleficarum, which gave lurid and shocking details about a supposed global conspiracy of secret witches who had infiltrated every village and carried out horrific crimes against children. This spread like wildfire and led to centuries of torture and execution of innocent people.

This is all very similar to QAnon, Pizzagate, and trans panics today. An individual can write a compelling and completely false narrative and set off a global campaign of hate and violence, completely deaf to the litany of evidence against these lies. Whether in the 1600s or the 2000s, having more information didn't bring the world closer to truth or solve problems; it led to immense misery and evil.

The conventional view of the printing press is that it broke the Catholic church's religious stranglehold on information and enabled the development of the Scientific Method, allowing people to freely publish and share their ideas. There is some truth in this, but it's overstated. Copernicus's groundbreaking book on the heliocentric system failed to sell its initial run of 1000 copies, and has been called "The worst seller of all time." Meanwhile, the Malleus Maleficarum instantly sold through multiple runs and continued to be a best-seller for centuries. The fact that Copernicus's book was more true than Kramer's did nothing to increase its popularity or reception or impact on the world.

Instead of unfettered access to information, Harari credits the Scientific Method to the creation of institutions with a capacity for self-correction. This was very different from the Catholic Church, which was (and is!) forced by its own doctrine to deny any error. Interestingly, Harari points out that most of the founders of the scientific revolution did not hail from universities, either. Instead, they formed an information network of royal societies, independent researchers, journals and so on. The key difference here was that information was peer-reviewed: people wouldn't just say "Trust me," but would share their theories, experiments and data as well as their conclusions with their colleagues, who would look for errors, omissions or alternative explanations. And if an error was later discovered, journals would publicize it, making corrections to the past record rather than covering it up or ignoring it. While this seems like it would weaken the reliability of a source, it ends up building trust in the long run: the reality is that, whether we acknowledge it or not, we are fallible, and by embracing this self-correcting system we can move in the direction of greater truth, not merely the most compelling story.

Fundamentally, Nexus is arguing against what it calls the "naive" view of information, which is basically "More information will reveal the truth, and the truth will produce order and power." The idea is that more information is always good, because true and useful information will drown out the bad and lead to a better understanding of how the world works. Again, this view is easily disproven by history. One alternate view is what Harari calls the "populist" view, which essentially denies that an eternal "truth" exists at all, and equates information with power. Controlling the production and flow of information produces power, which in the "populist" view is implicitly good in its own right.

Taken from another angle, Harari thinks that there is a "truth" which reflects "reality", but "information" doesn't have any intrinsic relationship to truth. Some information truly reflects reality; other information distorts it. The consistent effect of information is that it connects: when we tell stories to each other, we grow more connected, and I can persuade you of my ideas and convince you to act in a certain way, or you can make me feel a kinship with you and act for your benefit. There is an even larger class of information that contributes to what he calls an "intersubjective reality". This is information that exists on its own, independent of an underlying physical reality. Think of story-telling: you might make up an impressive work of fiction, someone else might write fan-fiction based on your world, a critic might write a review summarizing what happens in it, a fan might argue that a character should have made a different choice than they actually did. You end up with an entire ecosphere of carefully-constructed and internally-consistent thoughts about an idea that doesn't have an underlying reality. You, the critic, and the fan are all choosing to participate in a shared intersubjective reality.

There is actually some evolutionary advantage to our ability to create and share stories. We talk about how our fight-or-flight instincts are biologically inherited from ancestors who needed to react quickly to the presence of a saber-toothed tiger. Harari brings up the interesting point that our Neanderthal and Sapiens forebears had a similar evolutionary advantage in their ability to cooperate in teams. You can have a small band of, say, chimpanzees or bonobos that may cooperate against another band, but you never see chimpanzee communities of hundreds or thousands. You can get bands of that many humans, though, thanks to our ability to share stories and ideas. These ideas may be built around myths, concepts of extended kinship, or oral traditions of prior hardships and victories.

To this day we have a very strong reaction to all sorts of "primitive" stories: boy-meets-girl, good-man-beats-bad-man, sibling-rivalry, etc. These stories conferred evolutionary advantages in winning mates, having children, taking territory and defeating enemies. Today, we still respond strongly to those stories; however, in the same way that our daily lives involve many more encounters with rude bicyclists than with saber-toothed tigers, we're far more likely to need to navigate an opaque bureaucracy than to kill a rival chieftain. But we don't have a gut-level appreciation for stories about bureaucracies in the way we appreciate action or romance stories. And our brains don't retain information about bureaucracies very well: we can remember bible stories about rivalries and murder and who fathered whom, but we are terrible at remembering lists of sewage inspection reports or NGO organization charts or certification requirements.

As an aside, this observation reminds me of William Bernstein, who writes about how man is a story-telling animal. We respond much more strongly to stories than we do to data, which was evolutionarily adaptive in the past (we won't eat the red berries because someone told us they're poisonous) but gets us into all sorts of trouble today (we listen to our friend who says investing in bitcoin is safer than US Treasuries). Interestingly, I think this observation comes from his finance book The Four Pillars of Investing and not one of his history books, although it seems even more relevant to history. But Bernstein is a trained neurologist, and has a keen understanding of how our biological makeup and mental hardware impact our daily lives and how we organize as a society.

Harari is a big fan of the Scientific Method, as is Bernstein (in The Birth of Plenty), but neither writer views it through rose-tinted glasses. One thing I've heard before that Nexus backs up is that individual scientists almost never change their minds, even when faced with persuasive empirical evidence challenging their prior beliefs. Scientists are humans, with egos and prejudices, concerned about maintaining their prestige and positions. When science advances, it isn't that everyone reads a journal and changes their mind; it's that the old guard continues believing the old thing but eventually dies off, and is replaced by a new generation that grew up persuaded by the better, newer belief. Change is measured in decades, not months or years.

Which is fine, if that's how it works, but it feels discouraging when considering the problems we face today. We may not have decades to react to crises like climate change or the subversion of democracy. And "decades" applies specifically to the class of professional scientists who pay attention to evidence for a living; it's even less likely that the populace as a whole will shift to a truer, more correct belief. I mean, Newtonian physics was superseded something like 120 years ago, yet we still learn it in school and most of us follow it in daily life; those of us who have finished high school are vaguely aware of relativity, and have probably heard of string theory but don't really understand it.

All of which is to say: I don't think science can save us from urgent, widespread problems. It's slow, and while it can influence elites it can't change the minds of the masses. Harari seems to suggest that the real key is trust. If we're a society that trusts scientists, because we know they peer-review their work and admit mistakes and are continually improving, we may accept their pronouncements even if we personally don't have the time or inclination to check all their work. But if we don't trust scientists, we lose the benefits of science: longevity, productivity and affluence.

Science is ultimately about truth, but as Harari keeps noting, truth isn't the be-all and end-all: a society with access to truth does have some advantages (it can keep its citizens healthier and produce more reliable military equipment), but it is not guaranteed to triumph over a society with less devotion to the truth. Harari sees Order and Truth as two separate pillars upon which societies are built. You need both of these. Without truth you can't survive: you'll have feces in the water supply, desolate cropland because the wrong grains were planted in the wrong season, people walking on foot because you can't build motors. Think of something like the Great Leap Forward in China, which upended scientific truths and led to internal misery and the stunting of external power. (In a surprising coincidence, Nexus devotes a few paragraphs to Trofim Lysenko, whom I just wrote about in my last post: he was a charlatan who convinced Stalin that genetics was bogus and set the USSR on a path toward the evisceration of its sciences and widespread man-made famines.)

But you also need order in a society. If you don't have order, then you have anarchy, the collapse of the bureaucracy and the inability to function. Again you get feces in the water supply, because nobody is preventing people from poisoning it; desolate cropland, because farmers know bandits will take any crops they grow; people walking on foot, because nobody is organizing the factory that makes motors. Between truth and order, you can make a convincing argument that order is the more important factor. Stalin was a moral nightmare, and his internal terror was ruinous to truth, which caused huge real problems like massive losses in the Red Army; and yet the system was incredibly stable. Nobody dared challenge Stalin despite his many failures, the USSR endured for multiple generations, and it had a real shot at total world domination. Or consider the Catholic Church: it has consistently prioritized order over truth, defending bad ideas like the geocentric model of the universe, disastrous crusades and self-destructive inquisitions; and yet it has lasted for two thousand years, far longer than the School of Athens, the League of Nations or the Royal Academy of Sciences.

In an ideal world, of course, you would balance these two. Those of us in the West will generally push for the primacy of Truth, but still recognize Order as an essential ingredient. There may be times when this requires tough choices, as in the 1960s with widespread dissent and protest against the Vietnam War and racial injustice. One thing I really like about Harari is that, like Piketty and unlike Marx, he foregrounds the importance of choice. Order doesn't inevitably triumph over Truth, nor Truth over Order; multiple stable configurations exist, we can help shape the kind of society we live in, and we should recognize that other societies may follow other paths, with results different from ours that may leave them stronger or weaker than us.

Going back to the various views of information, Harari has rejected the naive view that information leads to truth, and truth leads to wisdom and power. He also rejects the simplistic populist view that there is no truth or wisdom, that information directly leads to power. His view is that information produces both truth and order. Truth and order, in tandem, generate power. Separately, truth also leads to wisdom. Wisdom relies on truth, but power does not require wisdom. It's an interesting view; I think I'll need to sit with it a while longer to digest and decide if I actually agree with it, but it does feel useful to me.

Phew! All of the above thoughts and reactions cover the first third of the book, which is mostly teeing up the remaining two-thirds. (There's a lot more I didn't get into in this post, like how advances in information technology enabled large-scale democracies for the first time, the historical development of the bureaucracy, or the 20th-century conflicts between democracy and totalitarianism.) I'm less enthused by the last two-thirds of the book, which is mostly about the threat posed by AI.

Examining my own reaction, I think I have a knee-jerk skepticism. Overall I find his arguments persuasive but annoying. I am not at all an apologist for or proponent of AI, but I've been in the camp that views AI as the latest step in a gradual advance of technology, whereas Harari sees it as fundamentally different from prior technologies. His point is that algorithms in general and AI in particular are agentic: they can actually take action. Up until now, technology has merely augmented human decision-making. A human needs to consult a book, then execute the action described by the book; a human is in the loop, so there's an opportunity to stop and question the book's instructions before carrying them out. But a computer program can, say, deny credit card applications or impose a prison sentence or alter the outflow rate at a sewage treatment facility without requiring any human intervention. Two programs can directly communicate with one another in a way that two books or two TV shows could not.

In another interesting little coincidence, I just recently (re?-)watched The Net, the 1995 thriller starring Sandra Bullock. Many parts of that movie felt like they were in strong conversation with Nexus. For example, in one scene her character Angela Bennett is trying to get back into her hotel room, but the clerk tells her, "The computer says that Angela Bennett checked out two days ago." She insists, "No, I'm Angela Bennett, and I didn't check out, I'm standing right here!" but the clerk refuses to engage with her and moves on to the next person. Even thirty years ago we had offloaded our decision-making to the computer, so what's different now? The fact that there won't even be a clerk in the future: just touchless entry at the door, with nobody to hear your complaint and no way to override the system. And the ubiquity of the system: in The Net, human hackers had singled out Angela Bennett in particular (much like Will Smith's character in Enemy of the State); but in the future, AI might target entire classes of people: the sick, or anyone with a criminal record, or humanity as a whole.

The triumph of AI isn't inevitable: it requires us choosing to give it control. But if we do make that choice, we may find it impossible to reverse. We can't appeal to AI's mercy or wait for it to fall asleep. Harari repeatedly refers to AI as not just "Artificial Intelligence" but "Alien Intelligence": it isn't that it thinks like a human but more rapidly, it "thinks" in a completely different way from us. For well over a decade now AI has been a black box: we can't understand how it makes its decisions, only watch the final choice it makes. All this adds up to a very urgent and potentially deadly situation.

Harari does offer some suggestions for how to address the threat posed by AI, which I do appreciate. It's very annoying when books or articles lay out doom-and-gloom scenarios without any suggested solutions. The proposals in Nexus tend to be pretty narrow and technical. They include things like keeping humans in the loop, requiring us to sign off on decisions made by AI; along with this, AI needs to explain its reasoning. Harari also muses about banning or at least prominently labeling all bots and generated content online: we waste far too much mental energy arguing against bots, and the more we engage with them the better they get to know us and the more likely they are to persuade us.

As modest as these proposals seem, he acknowledges that they still seem unlikely to be implemented. In the US they would require legislative action, which is incredibly difficult these days, and even more so when the majority party is (perhaps temporarily) benefiting from AI support.

One of my annoyances with this book is how Harari stumbles into what I think of as terminal pundit brain, the impulse to treat political factions as equivalent. He writes things like "Both parties are losing the ability to communicate or even agree on basic facts like who won the 2020 elections." It's insane to act as if the Democratic party is equally to blame for January 6 and election denialism! Elsewhere, though, he does acknowledge the reality of the situation, making a cogent observation about the abrupt transformation of right-leaning parties. Historically the conservative party has, following Edmund Burke, argued for cautious, slow and gradual change, while the progressive party has argued for faster and more ambitious change. But in the last decade or so, Trump's Republican party, along with parties abroad like Bolsonaro's in Brazil or Duterte's in the Philippines, have transformed into radical parties that seek to overthrow the status quo: getting rid of bureaucracies, axing the separation of powers, imposing new economic systems, and broadly and rapidly changing social relations.

This is a surprising change on its own, but Harari notes that it has also thrust traditional left-leaning progressive parties like the US's Democratic party into the unlikely role of defender of the status quo. They aren't necessarily adopting more conservative positions, but they do want to retain the overall democratic system. While Harari doesn't dig into this aspect much further, it really resonates with me. I often feel like the Democratic party insists on bringing a knife to a gunfight. It's very frustrating to hear, say, Chuck Schumer repeat the tired paeans to bipartisan cooperation and consensus when the house is burning down behind him. I do feel a bit more sympathy for him when I think of how he wants to keep a robust pluralistic democracy running, but I have yet to see any convincing evidence that his actions will help bring that about. My overall pessimistic feeling has been that that era is just over now, and while a populist left may be less stable than a broad-based democratic left or broad-based democratic right, it's the best option available to us now.

I think that Piketty is much more useful in this area than Harari. If we're going to marshal the resources to actually address climate change and similar existential issues, we need to retake democratic control of our wealth, which in practical terms means taxing the rich and limiting the influence of money in our politics. It's no coincidence that the ascendant conservative faction tearing down institutional systems is the faction aligned with the wealthy.

Harari points to the breakdown in political and social cohesion in the US. During the 60s the country was wracked by big divisions over civil rights, women's rights, the war in Vietnam, and other points of friction. The entire Western world seemed to be coming apart at the seams. And yet the system still functioned pretty well: the Civil Rights Act was supported by majorities in both parties, and the Nixon administration broke every norm of the justice system yet ultimately abided by court orders. The fragile and messy democratic West eventually came through this period and triumphed over the more order-oriented USSR. Today there's no bipartisanship, no shared set of basic facts let alone ideology, a lack of trust not just in specific bureaucracies like the CDC or the FBI but in institutions like science and government as a whole, as well as a rejection of core structural decisions like the separation of powers.

Harari admits that he doesn't know what the reason is for this breakdown in consensus that has occurred over the last decade or so, but he implies that there's at least a chance it's the impact of alien intelligence: shrill political bots driving outrage on social media, algorithms steering individuals into more siloed media environments, and so on. Personally, though, I think you can draw a straight line from Newt Gingrich giving speeches to an empty House of Representatives in 1984 through to Donald Trump pardoning the January 6th rioters in 2025. There's a pundit-brain temptation for symmetry and a refusal to acknowledge that one faction just wants power and doesn't have qualms about how to get it or keep it.

Again, there's a lot of stuff in this book that I found valuable which I haven't unpacked in this post. I should mention that Harari does a terrific job of examining Facebook's culpability in the genocide against the Rohingya in Myanmar and YouTube's role in bringing right-wing nationalist parties to power in 2016. That's all stuff I'd heard before (and lived through!), but it's really helpful to view it as a unified trend and not as isolated phenomena. But once more, I think Harari's instinct towards bipartisanship blunts the potential insights he could have. He views the algorithmic pull towards outrage in purely capitalist terms: angrier people interact more with content, not only generating direct ad revenue but also providing Facebook and Google with additional data they can store to make their products more powerful. But he skips over the fateful Peter Thiel-led decision to axe the human team curating Facebook's news section in favor of the algorithm in the first place. Likewise, he doesn't mention how the GOP House accused YouTube of left-leaning bias and pushed for a more "neutral" algorithm, which in practice meant less truthful content and more outrageous content. Harari argues that we have collectively given too much power to the machine; in my view, a specific faction has led that charge, and is benefiting the most from the consequences.

I should also mention that Nexus is an extremely readable book. It looks a bit long, but I flew through the whole thing in just a few days. The prose is approachable, and each section is just a few pages long and makes a clear and cogent point. For all my complaints, I think Harari does an excellent job of noting which parts of the book are well-established facts, which are well-supported inferences, which are controversial claims, and which are merely speculative scenarios.

Overall I think I'd recommend this book to others. I think the first section is fantastic; the latter two-thirds is arguably even more important but less fun. I am curious to check out Harari's earlier books; it sounds like he's been working in adjacent areas for a while. I like his mix of concrete history and abstract systemic theorizing, and am curious what other tools he has come up with.

Tuesday, July 01, 2014

Layer 05: Motherboard

In a weird bit of synchronicity, I finished reading Neal Stephenson's "Some Remarks" on the same day I finished re-watching Serial Experiments Lain. I had initially thought of doing a separate blog post for each, but they aligned in some freaky ways, so I'll go ahead and debrief on both here while it's still fresh.

I absolutely did not plan to consume both of these media at the same time. I'd pre-ordered the Lain Blu-Rays last year after they were announced, and had forgotten I had done so until they showed up in my mailbox a few weeks ago, after I had already cracked open Some Remarks. I was looking forward to both of them, and both ended up exceeding my expectations. Neal Stephenson has been one of my favorite authors ever since I read "Snow Crash" back in college; I'd already read a few of the pieces collected in "Some Remarks," and was honestly getting it for completeness' sake as much as anything, but ended up being blown away by a few longer essays that I'd never read before, most notably "Mother Earth, Mother Board." Somewhat similarly, Serial Experiments Lain has long been my favorite anime series, and one of my favorite works of fiction in any medium. It's been years since I last watched it, and even longer since I'd seen past the first couple of episodes (my standard gambit for introducing folks to this mindblowing series, which typically ends with them running away screaming). I'd felt slightly apprehensive that it wouldn't live up to my idealized memory of it; in practice, it actually exceeded it. The Blu-Ray upgrade makes the art even more beautiful than before, and the odd perspective it took on technology has made it seem weirdly prophetic rather than dated.

(Some spoilers for Lain follow; technically for the Stephenson too, I suppose, but can you really spoil nonfiction?)

The mantra of Lain is "everything is connected." What this actually means is very much up for debate: Lain is one of the most determinedly opaque pieces of fiction I've encountered, and I still refuse to consult wikis or anything else that would destroy the mystery; I enjoy the sensation of my mind chewing over it too much. Still, every time I watch it I feel like I'm getting closer to some sort of understanding. Even though the conclusions are unclear, the questions are relatively easy to spot: how constant connection and communication affect us as individuals and as a species; whether the Internet represents a fundamentally new thing or an evolution of previous analog networks; exactly when one's existence begins and ends; the extent to which we can ever truly know another human being.

One question that is debated with increasing frequency in the series is: "Is the Wired [i.e., the Internet] an upper layer of the 'real world'?" What exactly does that mean? And, if the answer is "no," then what is the Wired? After mulling over it some more (and, I have to admit, with Mother Earth, Mother Board fresh in my mind), I see it like this:

Thesis One: The Wired was created by humans, out of physical materials in the world. It exists on top of our society, and can only reflect ideas that already exist within that society: nothing new can emerge from within the Wired.

Thesis Two: The Wired was created by humans. It grew in complexity, and now has a level of reality equal to that of our universe. (To crib from another Stephenson novel, it holds an equivalent position on the Wick.) Events that begin in the Wired can affect the real world just as strongly as events in the real world can affect the Wired. A thing (e.g., an idea or a memory) can continue to exist in the Wired even if it no longer exists (or never existed) on Earth.

Thesis Three: The Wired is a digital manifestation of the collective unconscious that our species already had. Ideas can emerge from the Wired and hold power, but such ideas could also emerge in the past from analog sources (mysticism, religion, metaphysics). Such impulses in the past have guided our civilization's growth, and thus in a sense the Wired invented itself.

The series seems to end with a suggestion that Thesis Three is correct, but I'm far from confident in saying that. Throughout the show, we regularly get one character or another stating with great assurance "You must not forget that The Wired is just an upper layer of the real world" or "The Wired is not, after all, an upper layer of the real world." In any other show, we would be led to the "correct" answer, but Lain seems to delight in muddying the waters, forcing us to think for ourselves about what explanation seems most plausible (or least distressing!).

This muddying extends beyond the words to the actual actions in the anime. Another enormous mystery that runs throughout the entire show is the identity of the "other" Lain. We understand several properties of her: she looks like Lain but is more confident, aggressive, and skilled. At one point, it seems like the "real" Lain has cracked the mystery: she confronts Taro, stating that the "other" Lain is only a product of the Wired; in the real world, she has only ever been spotted within Cyberia, which is a sort of threshold spot (not unlike one of the "thin places" of religious revelation) where the Wired can easily manifest thanks to the prevalence of projecting electronic devices. Mystery solved!

And yet... only an episode before, we saw another Lain step out from her own body, greeting her classmates and cutting her off. That certainly didn't happen in Cyberia. It's pretty hard to interpret that scene as anything other than the Lain of the Wired manifesting in the real world, a doppelganger supplanting its double. And, of course, a few episodes later we'll learn the bombshell that the Lain of the Wired is the original Lain; and the "real" Lain is the copy. Presuming, of course, that we can trust the man providing this information. The truth of this conundrum seems inextricably tied to which of the three theses we can accept: is it possible for a being to emerge from The Wired, and if so, could that being then influence "reality"?

These aren't new questions, but they were pretty new at the time the show was created. Lain was originally created in the mid-to-late 90s, before The Matrix and Eternal Sunshine of the Spotless Mind and other movies that played around with these sorts of concepts (though, to be fair, well after Gibson and Stephenson and Ghost in the Shell). Still, I think that the show has aged really well, partly as a result of its light touch on matters of technology. Inevitably, works of cyberpunk that try to be very specific about how technology is going to evolve end up seeming laughably misguided after only a decade or so. When Lain does make predictions, they end up being correct with startling subtlety. It took a while for me to register the oddness of a class full of students with wireless Internet-enabled devices: such a scene is ubiquitous today, but was impossible to imagine back when the show was created. Ultimately, Lain is more interested in philosophical and existential questions that underlie not just the Internet but the universe as a whole, and so it continues to feel very relevant today.

It's that same continuity that makes Stephenson's Mother Earth, Mother Board almost shocking to read. Near the end of the essay, he quotes one of his sources as saying that, when it comes to digital communications, there have been no new ideas in the last one hundred and fifty years, only improved techniques. It's a startling claim, but by that point in the essay I feel compelled to agree with him. Using the same techniques that he deployed so effectively in The Baroque Cycle, Stephenson shows how a modern institution was, for all practical purposes, completely formed a long time ago, in a short span of time.

The primary focus of MEMB is the physical structure that underlies the Internet: more specifically, the undersea cables that connect the continents together and enable the global exchange of information. It's a long, sprawling, and completely engrossing essay, covering every conceivable aspect of this: how cables are financed, how they are insured, who runs the project, how they are installed (different for every country!), the hazards they face, the politics of their creation and utilization, how data gets transmitted over those wires, etc. It's a bit humbling to think that, when I visit the legacy web site for Serial Experiments Lain, my request is getting put onto a wire, traveling 5,400 miles across the Pacific Ocean, connecting to some computer over there, then that data is coming back to me over another 5,400 miles before forming a picture in my web browser... and that all of this technology is fundamentally identical to that which was used in 1858 when the first transatlantic cable was laid between England and America.

So, much as Lain seems to be somewhat ambiguously suggesting that the Wired isn't really anything new, Stephenson points out that today's global telecommunications regime is a matter of increasing intensity, not something truly original. The kinds of debates we hold now, about whether we are too easily distracted, whether constant communication is changing our minds and relationships, are exactly the ones that people were writing about back in the Victorian era. This isn't to say that the Internet isn't important; but it is the current stage in the evolution of a process of technologically binding people together that has been underway for a long, long time.

The rest of Some Remarks is also really good, although nothing else quite reaches the level of virtuosity of MEMB. The infamous Slashdot Interview is freely available online, and is pretty essential reading for anyone who likes reading, even if you don't like Stephenson. There's also a fascinating short story called The Great Simoleon Caper that presciently predicts the rise of Bitcoin, Dogecoin and other crypto-currencies, albeit not the practical difficulties they would encounter. (Oh, and you know those annoying windows that appear on web sites and say things like 'Hi, I'm Tiffany! Ask me if you have any questions!'? It also anticipated those by a good ten years or so.) The rest of the book is a solid collection of stand-alone short fiction, essays that expand on some topics he's already treated in fiction (notably the Royal Society feud between Newton and Leibniz), and some great one-off pieces like his introduction to David Foster Wallace's book on infinity, which plumbs their shared experience of growing up in Midwestern college towns.

Near the end of the book is a somewhat depressing essay about the failure of science fiction to inspire the next generation of scientists and engineers, which Stephenson links to a broader tendency in our modern capitalist society to avoid risks. He points out that the wide-eyed idealism of early science fiction, from Wells and Verne up through Heinlein in the 1950s, gave tangible goals for society to pursue; later science fiction, which tended to be more dystopic, was more focused on showing the perils of progress. Almost all of cyberpunk falls into this category: the future might look cool, but it also has serious and dangerous problems. Lain, which isn't exactly cyberpunk, seems to glide past this distinction. It isn't encouraging us to step forward into a bright new future, nor is it cautioning us against what tomorrow will bring. Instead, it's asking us to think, to ask questions, to wonder what it means to be human and how our embrace of technology might be changing us. I don't think that Lain will inspire any entrepreneurs to build a new artificial intelligence, nor do I think it will inspire politicians to set limits on children's use of the Internet. I do think that it could inspire a new generation of philosophers, or merely informed citizens, who could help our children understand the questions that I struggle with while watching this remarkable series.

Wednesday, April 16, 2014

The Delightful World of Open Source Software

The Heartbleed vulnerability was an awful bug, and caused incredible damage to the web of trust we have built around online transactions. But, there was a silver lining: people became aware of how much of our critical Internet structure relies on old, poorly-maintained code that nobody wants to support any more. That's, um, a bad thing, but now that we're aware of it people can start fixing it!


By far my favorite thing I've seen online recently is the git commit log for (a copy of) OpenSSL, the software with the Heartbleed bug. Once people started paying attention, interest in the project spiked, and they started seriously working not just to plug Heartbleed, but to fix the code in general. This led to a flurry of activity, and a fascinating kind of oral history that reminds me of someone discovering the Necronomicon: initial excitement and a sense of superiority gradually give way to a creeping suspicion that something is not right here, and end in a howling wave of madness as everything you love in the world is destroyed. Here are some of my favorite commit messages, all from the last three days or so, presented in chronological order:

I am completely blown away that the same IETF that cannot efficiently allocate needed protocol, service numbers, or other such things when they are needed, can so quickly and easily rubber stamp the addition of a 64K Covert Channel in a critical protocol.  The organization should look at itself very carefully, find out how this this happened, and everyone who allowed this to happen on their watch should be evicted from the decision making process.  IETF, I don't trust you.
 


remove more cases of MS_STATIC, MS_CALLBACK, and MS_FAR. Did you know that MS_STATIC doesn't mean it is static?  How far can lies and half-truths be layered?  I wonder if anyone got fooled, and actually returned a pointer..

 

Remove various horrible socket syscall wrappers, especially SHUTDOWN* which did shutdown + close, all nasty and surprising.  Use the raw syscalls that everyone knows the behaviour of.

 

First pass at applying KNF to the OpenSSL code, which almost makes it readable.




Flense all use of BIO_snprintf from ssl source - use the real one instead


o_dir.c has a questionable odor


Toss a `unifdef -U OPENSSL_SYS_WINDOWS' bomb into crypto/bio.

 

No longer mention OPENSSL_EC_BIN_PT_COMP being required to allow for `compressed' EC point representation.
First, as researched by djb, quoting from http://cr.yp.to/ecdh/patents.html :
``It should, in any case, be obvious to the reader that a patent cannot
  cover compression mechanisms published seven years before the patent
  was filed.'' 

Second, that define was actually removed from the code in in OpenSSL 1.0.0.

 

remove FIPS mode support. people who require FIPS can buy something that meets their needs, but dumping it in here only penalizes the rest of us.

 

Q: How would you like your lies, sir?
A: Rare.


 

just like every web browser expands until it can read mail, every modular library expands until it has its own dlfcn wrapper, and libcrypto is no exception.
 


The NO_ASN1_OLD define was introduced in 0.9.7, 8 years ago, to allow for obsolete (and mostly internal) routines to be compiled out. We don't expect any reasonable software to stick to these interfaces, so better clean up the view and unifdef -DNO_ASN1_OLD. The astute reader will notice the existence of NO_OLD_ASN1 which serves a similar purpose, but is more entangled. Its time will come, soon.

 

imake died in a fire a long time ago

 

we don't use these files for building

 

we don't use this makefile

 

the VMS code is legion 


remove ssl2 support even more completely. in the process, always include ssl3 and tls1, we don't need config options for them. when the time comes to expire ssl3, it will be with an ax.


Remove wraparounds for operating systems which lack issetugid(). I will note that some were missing, looking at you Solaris!!!  Anyone home? Using my own copyright on the file now, since this is a rewrite of a trivial wrapper around a system call I invented.


use explicit_bzero instead of a bizarro "no compiler could ever be smart enough to optimize this" monstrosity.

 

Three wrappers in this file: OPENSSL_strncasecmp, OPENSSL_strcasecmp, and OPENSSL_memcmp. All modern systems have strncasecmp.  No need to rewrite it. Same with memcmp, call the system one!  It is more likely to be hot in the icache, and is specifically optimized for the platform.  I thought these OpenSSL people cared about performance?

 

you do not want to do the things this program does




strncpy(d, s, strlen(s)) is a special kind of stupid. even when it's right,it looks wrong. replace with auditable code and eliminate many strlen calls to improve efficiency. (wait, did somebody say FASTER?)


spray the apps directory with anti-VMS napalm. so that its lovecraftian horror is not forever lost, i reproduce below a comment from the deleted code.
        /* 2011-03-22 SMS.
         * If we have 32-bit pointers everywhere, then we're safe, and
         * we bypass this mess, as on non-VMS systems.  (See ARGV,
         * above.)
         * Problem 1: Compaq/HP C before V7.3 always used 32-bit
         * pointers for argv[].
         * Fix 1: For a 32-bit argv[], when we're using 64-bit pointers
         * everywhere else, we always allocate and use a 64-bit
         * duplicate of argv[].
         * Problem 2: Compaq/HP C V7.3 (Alpha, IA64) before ECO1 failed
         * to NULL-terminate a 64-bit argv[].  (As this was written, the
         * compiler ECO was available only on IA64.)
         * Fix 2: Unless advised not to (VMS_TRUST_ARGV), we test a
         * 64-bit argv[argc] for NULL, and, if necessary, use a
         * (properly) NULL-terminated (64-bit) duplicate of argv[].
         * The same code is used in either case to duplicate argv[].
         * Some of these decisions could be handled in preprocessing,
         * but the code tends to get even uglier, and the penalty for
         * deciding at compile- or run-time is tiny.
         */

 

Remove non-posix support. Why is OPENSSL_isservice even here? Is this a crypto library or a generic platform abstraction library? "A hack to make Visual C++ 5.0 work correctly" ... time to upgrade.
 


Your operating system memory allocation functions are your friend. If they are not please fix your operating system.

 

Make this byzantine horror a shell of it's former self by stubbing the functions. The ability to set the debug mem functions died with mem.c


Actually, now that I look at all that together at once, it's really reminding me of Johnny's journal entries from House of Leaves.

Now, for an open-source enthusiast like myself, the entire Heartbleed saga has been distressing on a psychological, even philosophical level. One of the core axioms that open-source advocates embrace is the belief that open software leads to greater security and stability in code. When you offer up your source code to the entire world, you gain thousands of eyeballs, any of whom can spot bugs in the code and offer solutions. The thinking is that this is much safer than the closed-source Microsoft world, where bugs lie hidden in compiled code and can't be discovered until a nefarious actor exploits them.

Heartbleed totally inverted that expectation: all the lame companies who used Microsoft IIS came out of the incident with flying colors, while all of the cool companies running LAMP-style stacks seemed like dupes. The truly embarrassing thing is that the bug was in OpenSSL for over two years before it was finally noticed and fixed. That seems to disprove Eric Raymond's Bazaar argument that "given enough eyeballs, all bugs are shallow."
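For anyone who hasn't looked into the details: the bug itself was tiny. The heartbeat handler trusted a length field supplied by the peer and echoed back that many bytes, even if far fewer had actually arrived, leaking up to 64K of adjacent memory per request (hence the "64K Covert Channel" jab in the commit log above). Here's a simplified, hypothetical sketch of that class of bug; the struct and function names are invented, not the real OpenSSL code:

    /* Hypothetical, simplified reconstruction of the Heartbleed class of bug.
     * The struct and function names are invented for illustration; they are
     * not the actual OpenSSL code. */
    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    struct heartbeat_msg {
        const unsigned char *payload; /* bytes actually received from the peer */
        size_t received_len;          /* how many payload bytes really arrived */
        uint16_t claimed_len;         /* length field inside the message itself */
    };

    /* Buggy version: trusts claimed_len, so it can copy up to 64K of whatever
     * happens to sit in memory next to the received payload. */
    unsigned char *heartbeat_response_buggy(const struct heartbeat_msg *msg)
    {
        unsigned char *resp = malloc(msg->claimed_len);
        if (resp == NULL)
            return NULL;
        memcpy(resp, msg->payload, msg->claimed_len); /* over-read */
        return resp;
    }

    /* Fixed version: refuse to echo back more bytes than actually arrived. */
    unsigned char *heartbeat_response_fixed(const struct heartbeat_msg *msg)
    {
        if (msg->claimed_len > msg->received_len)
            return NULL; /* silently discard the malformed heartbeat */
        unsigned char *resp = malloc(msg->claimed_len);
        if (resp == NULL)
            return NULL;
        memcpy(resp, msg->payload, msg->claimed_len);
        return resp;
    }

The fix amounts to a single bounds check, which is exactly the kind of thing that's easy to miss when nobody is actually reading the code.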

Now that the dust has settled somewhat, a more nuanced view seems to be emerging. The reality is that there were almost no actual eyeballs on OpenSSL; even though six billion people could have looked at it, only three people were spending a couple of hours a month on it. Why? Well, because:
  1. It's ancient software. Programmers are always excited by newer and better things; who wants to waste time trawling through outdated code?
  2. It's a nightmare to read. As Neal Stephenson noted in his chapter on Linux from In the Beginning..., most low-level open-source software is written in C, and contains such a staggering amount of boilerplate preprocessor definitions that it's a nightmare just to find where the actual code in the project lives. Many or most of the above-quoted commits are related to this complaint (an invented caricature of the style appears just below).
  3. Tied to the first two points, virtually all open source software is maintained on a purely volunteer basis. These are almost always programmers with full-time day jobs, so they'll spend their limited free-time programming hours on software that excites them and/or that has potential for future career advancement; which, respectively, means well-written projects and/or projects using cutting-edge technology. Exciting new projects with clean code and active communities like Django get a lot of volunteers and can advance very quickly; fugly old legacy projects like OpenSSL don't get volunteers.
  4. So, because nobody wants to work on these projects that are critically important but mind-numbingly dull, it's up to "the community" to fund continued support. Last year, the project received $2000 in donations, which isn't much at all, and most of which came from a couple of Internet companies. That's wildly out of sync with how important the software is, and a staggeringly small amount of resources from the Internet companies (Google, Amazon, Facebook, etc.) that rely on using the software. (By my calculations, the total budget last year for OpenSSL is equivalent to roughly 1.5% of a single Google engineer's salary.)
Fundamentally, Raymond isn't wrong, but we were wrong to assume that just because a limitless number of people could review software, enough people were reviewing it. 
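To make point 2 above concrete, here's an entirely invented caricature of the preprocessor layering those commit messages keep railing against (the MS_STATIC / MS_FAR style of macro). None of these names are real OpenSSL identifiers; the point is just how much scaffolding can pile up around a single line of actual logic:

    /* Entirely invented example; none of these macros or identifiers are the
     * real OpenSSL ones. */
    #ifndef HEADER_EX_UTIL_H
    # define HEADER_EX_UTIL_H

    # if defined(EX_SYS_WINDOWS) || defined(EX_SYS_MSDOS)
    #  define EX_FAR __far      /* only meaningful on ancient 16-bit builds */
    # else
    #  define EX_FAR
    # endif

    # ifdef EX_NO_OLD_API
    #  define EX_STATIC static  /* despite the name, not always static */
    # else
    #  define EX_STATIC
    # endif

    /* The one line that actually does anything. */
    EX_STATIC int EX_FAR ex_set_flag(int *flags, int flag)
    {
        *flags |= flag;
        return 1;
    }

    #endif /* HEADER_EX_UTIL_H */

Multiply that by a few thousand files and twenty years of platform support, and "just read the code" stops being a realistic ask for a volunteer with a couple of spare hours a month.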

It will be interesting to see how the industry as a whole responds to this incident. In the short term, I'm concerned that we might see a surge of similar exploits: now that it's widely known that vulnerabilities can languish for years in these open-source projects, criminal hackers are probably poring over CVS repositories looking for the next unpatched buffer overflow. Any such additional vulnerabilities would intensify the call to action, but one is probably coming anyways. What needs to happen? In my opinion, those who have profited most from the labor of the community should contribute more to its survival, either fiscally (via donations of money) or in kind (by assigning employees to maintenance, as Google currently does). Hopefully self-interest will motivate the big companies to do so; if not, it may be helpful to "name and shame" any freeloaders. (And, really, we're not talking about huge sums of money here, certainly far less than paying a license for commercial software would cost.)

The other big thing that needs to happen is what those wonderful, doomed folks at OpenSSL are doing now: pulling back the curtain on this old software, gaping in unfeigned shock at how awful it is, and taking a chainsaw to the worst bits of it, trying to hack it down to a state that's possible to comprehend and maintain. Any software developer will tell you that this is the worst kind of programming imaginable: fixing bad code that you didn't write in the first place. But it's crucially important. (I'll be curious to see if this incident also adds a sense of urgency to rewriting software components in more modern languages that don't even allow frequently-exploited "features" like buffer overflows.)

With that in mind, I'm even happier to read the words in this commit log. Not just because they're funny, not just because they're enlightening, but because they're a part of the long oral tradition that is open source software. More so than any other area of software development, open source relies on programmers being brutally candid about what's going on: if someone writes a hack, you can be sure that they'll leave a comment pointing out that it is a hack, explaining why they had to do it, and what the implications of that hack are. Without any corporate PR arm around to vet them, coders can be perfectly frank about their opinion of any software that they're writing or reading. Back when I was in college and first getting into Linux, whenever I was bored (and didn't feel like piping random text files to /dev/dsp) I would open up a console prompt and type grep -R <my favorite curse-word> /usr/src/linux. This would inevitably bring me to the most interesting parts of the Linux codebase: not the reams of #ifdef macro commands, not the fiddly make settings for obscure hardware, but the places where some philosophical debate was occurring between different generations of Linux developers. Heartbleed is almost certainly the biggest threat that open source software has faced in the past twenty years, but if it can quickly respond with transparency, candor, and action, it will emerge stronger than ever before.

Saturday, March 15, 2014

Small Victories

After a few years of half-hearted, very occasional efforts, I’m now the proud owner of the seberin.com domain name! I don’t have any particular plans for it at the moment, so it’s just pointing to this blog for the time being. Still, as one of my two online handles, it feels kinda nice to have claimed it for myself.

Getting domains can be weird. If you just want ANY domain, it’s pretty simple: just find one that hasn’t been registered yet, and register it. This is relatively cheap (typically around $15/year, depending on the domain extension, often with a discount for the first year), but since all the short and memorable names have already been claimed, you’ll probably need to use a long name, punctuation, and/or numbers.

There’s a massive market in domain name resales, not unlike flipping properties in a housing boom. Marquee domains like cars.com can sell for tens of millions of dollars. A far vaster market exists for pure speculation: claiming domains that aren’t popular now, but might be in the future. It’s a bit of a gamble: speculators might be paying thousands of dollars a year to maintain a portfolio of domains that they aren’t actually using, in the hopes that one day they’ll be able to sell one for a big payday. “Cybersquatting” isn’t as big a problem as it was in the past, thanks to the annual fees now associated with domains, but it continues to be an annoyance.

One of the domains I would love to own is most likely forever beyond my reach: it’s actively being used by a for-profit company, who are unlikely to ever want to sell it. However, seberin.com was an unusual case. It was registered by a person or entity located in the Ukraine, but was not pointing to any valid IP address. It was both owned and abandoned. Now, if I were a hot-shot entrepreneur who wanted to claim the domain as part of an elaborate business scheme, there was a route to success I could have followed: reach out to the domain owner, either directly or through a lawyer/agent, indicate my interest in purchasing, and negotiate a price. Needless to say, this didn’t apply to me and my vague, fairly indifferent desires.

Since the domain wasn’t being actively used, it seemed somewhat likely that it would become available sooner or later: after all, it’s a bit of a waste to keep paying to renew something that you aren’t using. So, a year or so ago, I did some research into the wonderful world of domain backorders. Whenever a domain registration expires without being renewed, it is “released” and comes up for grabs to the first person willing to re-register it. However, because there are always speculators around, there’s a good chance that another person might snap it up. After weighing a few options, I decided to create an account at pool.com, a backordering service. This lets you list all the domain names you’re interested in acquiring; once a domain becomes available, their servers attempt to auto-register it for you as quickly as possible. It’s a bit pricey at $60, but far less expensive than purchasing directly from a current owner would be, and probably cheaper than acquiring it in an auction. Plus, there’s no cost unless they actually acquire the domain.

I learned a fair amount during the process. I was initially excited when the expiration date passed and the registrant hadn’t renewed. However, it turns out that in many cases a domain doesn’t revert the instant it expires. Instead, there’s a grace period for re-registration; then it goes into “pending delete” status, whereupon interested parties can start to compete for it. Before seberin.com entered “pending delete,” it was re-registered. I was very slightly bummed to have gotten my hopes up, but at the same time I’m glad the system works this way, since it would presumably protect me from having a domain “stolen” if I forgot to update my credit card information or whatever.

I took the domain off the backorder list, and set a recurring calendar reminder to check on it each year. Given the lengthy period between expiring and deleting, I figured I could check the status and re-add the backorder sometime in the week after the official expiration date.
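(The yearly status check itself is easy enough to script, if you don’t trust the calendar reminder alone. Here’s a minimal sketch that just shells out to the standard whois command-line tool and looks for a “no match”-style response; the exact wording varies by registry, so treat the result as a hint rather than an authority - the registrar’s own search is the real source of truth.)

    #!/usr/bin/env python3
    # Rough availability check: shell out to the system's whois client and
    # look for "not registered" phrasing. Verisign's .com server answers
    # 'No match for "DOMAIN"' when nothing is registered; other registries
    # word it differently, so treat the result as a hint only.
    import subprocess

    def looks_available(domain: str) -> bool:
        result = subprocess.run(["whois", domain], capture_output=True, text=True)
        text = result.stdout.lower()
        return "no match for" in text or "not found" in text

    if __name__ == "__main__":
        for name in ["seberin.com"]:
            if looks_available(name):
                print(f"{name}: looks unregistered - time to pounce")
            else:
                print(f"{name}: still registered (or sitting in a grace period)")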

Well, much to my surprise and delight, the next time I checked on it, it was available! Not in an auction, not pending deletion, just straight-up unclaimed. I happily grabbed the rights and set it up to point to my blog, at least until and unless I decide to do something else with it.

I’ve been feeling weirdly guilty since then - I doubt that there’s any connection between the current situation in Ukraine and the domain becoming available, but I would hate to think that I benefited from the chaos there. Most likely, though, whoever had it before simply decided to save a few bucks each year, and I am happy to add a little trophy to my online collection of vanities.

Wednesday, May 29, 2013

Silicon Valley Musings

You know how, when you read an article on a topic you know well, you get excited at first, and then get progressively more frustrated at the things it gets wrong or fails to address? Well, that's how I felt when reading an article in the current New Yorker magazine titled "Can Silicon Valley Embrace Politics?" (Weirdly enough, even though it's a current article, it's already gone behind a paywall. Usually the New Yorker is good about keeping them up for two weeks after an issue comes out. I don't know if this is an exception for this article, or a new policy. Anyways, you'll need to be a subscriber to read the whole thing, but you'll get the gist from the intro at the link.)

First of all, I need to make the standard disclaimers: I can only write about my own personal experience, which I'll cheerfully admit may not be representative of the industry as a whole. I've worked in tech for over a decade (wow!) and in the Bay Area since 2005. I've worked from the south end (Los Gatos) to the north (San Francisco), but have exclusively worked for smaller companies with anywhere from 3 to 50 employees. I don't have first-hand experience of working for titans like Apple, Google, or Facebook, which George Packer focuses on in his article. I do have friends at those companies, and have visited their campuses, so I have some familiarity even if I don't have much direct knowledge.

And actually, that's a great place for me to start. Packer writes that "Though tech companies promote an open and connected world, they are extremely secretive, preventing outsiders from learning the most basic facts about their internal workings." Well... I don't think that's true. Just last week, I and about two hundred other random engineers dropped by Twitter's headquarters for a series of lightning tech talks. It was held in their commons, and we could see the literal blueprints on the walls showing the next several months of construction on their new mid-Market home. The Twitter engineers talked about their testing process, the risk/reward evaluation they have to make for supporting new features, how they decide whether to open-source something like their AFNetworking modifications for iOS. And this was hardly an isolated case. I've wandered on Google's campus, walked through eBay's parking lots, sat in on Mobile Monday events hosted at dozens of the Valley's largest and smallest companies. Granted, I have the advantage of actually living here, but companies like Google are actually surprisingly open about many of their activities, such as writing blog posts detailing how they operate their data centers.

So, yeah, I was pretty surprised to read Packer describe them as "secretive." I think that Silicon Valley tech companies are far more open and collaborative than their counterparts elsewhere in the country, and would go so far as to say that it's probably the single most important factor in explaining the Valley's global dominance. The atmosphere out here is incredibly exciting and collegial: workers at various companies are constantly meeting one another socially, chatting with each other over drinks or coffee, coming up with new ideas tangentially related to their day jobs. That's what gives the Valley its energy and dynamism: the constant cross-pollination between companies as people collaborate, leave their old companies to start new ones, hire new people, and then those new hires start the cycle again.

Granted, Google isn't going to divulge details on its search algorithms (though it gives more information than you would think), and Facebook won't reveal an upcoming acquisition, but the industry out here is far more open and less secretive than what you'll find in New York City, the Massachusetts Route 128 corridor, the Research Triangle, or any of the other tech hubs in the country (or, I think, the world). (In Overland Park near Kansas City, Sprint's headquarters resemble a fortress, complete with walls and guard towers. The first job I had in that city, with a small company of fewer than 50 people, required a magnetized keycard to enter our office. In contrast, most Silicon Valley headquarters are laid out like college campuses, and it's easy to visit a friend during the day.) The tech industry as a whole doesn't strike me as especially secretive... granted, they're more secretive than the entertainment industry or academia, but much less so than defense contractors or banking.

I should backtrack here and admit that, for the most part, I agree with Packer's overall thesis: that the Valley is insufficiently politically engaged, and that the local tech boom has created a highly stratified society of the super-rich and the very poor. In both cases, though, I think the piece suffers by failing to make comparisons to other locales or industries that would help show just how far behind the Valley is. In the former case, he paints a picture of tech workers as being isolated and apolitical. He quotes one person as saying that people out here "Don't read The Economist." Well... yeah, we do. And The Atlantic, and the New Yorker, and McSweeney's. Again, I can only speak to my personal experience, but the people I've met here (both in the tech industry and outside of it) are smart and politically aware. I'll grant that there is a large amount of cynicism, that "politicians can't get good things done," but that's hardly unique to Silicon Valley. Compared to the communities I've been part of in Minnesota, Chicago, St. Louis, and Kansas City, I'd say that the Bay Area tech community is more politically engaged. It's certainly possible that we're less politically active than the tech community in New York City or Washington, D. C. Again, I would have appreciated some point of comparison in the article.

Having lived most of my life in the Midwest, I have noticed a significant difference in the political atmosphere out here, but it's a difference in kind and not degree. Compared to much of the country, there's a remarkably high level of consensus on many issues that otherwise divide communities. Pretty much everyone out here supports same-sex marriage, even the few Republicans I know. There's broad support for progressive tax policy, even though this area benefited hugely from George W. Bush-style tax cuts. There's no local disbelief in global warming. All local politicians are pro-choice. And so, there just aren't very high local stakes for these kinds of issues. If someone is very committed to a cause, they're likely to support a national candidate, or someone in a more competitive state (I've gotten in the habit of backing some Minnesotan candidates and causes), but there isn't much outlet for those emotional political issues here.

When there are issues that affect people locally, of course, they get very engaged. Everyone I know (in tech or out) has an opinion on San Francisco's sit/lie ordinance. (For the record, my circle is roughly 75% in favor, 25% opposed.) There's regular tension between anti-growth and smart-growth forces. (Should we build more high-rise developments to support dense urban living?) Everyone cares about high-speed rail. (Almost every tech worker I know supports it; non-tech people are roughly evenly split.)

I can't quite tell if Packer is arguing that the rank-and-file is politically disengaged, or that the big companies themselves aren't politically active, or both. I don't see the evidence of individual disengagement... tech people here write checks, and quit their jobs to join the Obama campaign, and drive to Sacramento to support marriage equality. I suppose a stronger case could be made for the big companies' not being involved in politics, but again, I have to wonder who Packer is comparing them to. The oil companies? Koch Industries? Unilever? I certainly agree that Silicon Valley could do more than it does, but so could everyone. I do wonder what other industries might be good to emulate... the entertainment industry obviously is very engaged in politics, but I feel like there must be some other ones that may also be fairly positively engaged, like retail and housing. I would have loved to see some numbers around the portion of Silicon Valley's wealth that goes to charity and politics, compared with numbers from other industries.

Speaking of wealth, the aspect of Packer's article that I was most on board with was the bifurcated local economy. San Francisco is one of the best places in the world to live if you're young and have money: you can eat amazing fourteen-dollar sandwiches from Tartine Bakery, and live in a ten-million-dollar condo with 1500 square feet and views of the Golden Gate and Bay Bridges, and use TaskRabbit to hire someone to pick up a pair of twenty-sided dice in time for tonight's Dungeons & Dragons game. It's also one of the less painful places to live if you're destitute: the weather is temperate year-round, the city provides a lot of services, and many local groups advocate for the homeless. For people in the middle, though, the city can seem impossible. Rent prices are insane and housing prices even worse. The cost of living is nearly the highest in the country, and non-tech jobs just don't pay enough to cover it. The trend is continuing, too. San Francisco is becoming even more attractive for young tech workers, thanks to relatively new additions like the Valley shuttle buses, so the median income keeps creeping up, and lower incomes keep getting squeezed out.

Ever since moving out here, I've actively promulgated a fantasy: I want everyone I like to move out here, so they can enjoy the beautiful weather and the awesome culture, and, more importantly, so I won't need to fly east to visit them. I've since come to realize that this is an even more selfish fantasy than I had first thought. Visiting the Bay Area can be a dream. Living here can be a hardship, unless you're fortunate enough to land a good job.

So, what should we do about this? It's really the same old problem of gentrification, just written even larger with more money. San Francisco already does a lot with tools like rent control, without which I'm sure there would be even fewer lower-middle-class families living in the city. And like any example of gentrification, it can be really hard to untangle the positive benefits of increasing wealth from the negative impacts on a neighborhood's existing diversity and character.

At one point in the article, Packer describes a coffee meeting he had in Four Barrel, sipping single-origin coffee while surrounded by the casually hip. I immediately perked up at that: my office is just two doors away from Four Barrel, and I think it's a fascinating focal point for what's going on in the Mission, both good and bad. Packer does a great job at briefly and accurately conveying the atmosphere of the Valencia Street corridor: young tech workers with Apple devices, shopping at boutique stores and gourmet restaurants. While he was waiting in line for half an hour to get a coffee, Packer may have glanced at the buildings across the street from Four Barrel. He doesn't write about them, and there's no particular reason why he would have: they look like the kind of new mid-rise condo development that would house these young elites. He may have been surprised to learn that this is actually a public housing development: these nice-looking, well-kept buildings on one of the hottest blocks in the city are reserved for the use of the city's poorest and least enfranchised residents.

And... I'm not sure what to do with that fascinating nugget. The reason San Francisco has so much money to spend on its virtuous social programs is that so many wealthy people live here and help fund its budget. This block of Valencia in particular has benefited in very substantial ways from the changing demographics: I've chatted with people who have lived here for decades, and remember when this was considered a bad part of town. But then places like Four Barrel moved in, and trendy spenders started hanging out, and then more shops started opening up to cater to those people, and the city rebuilt the old projects, and it's now a fun, comfortable, safe space to wander. And yet, Packer is definitely right when he writes about how the Latino population in the neighborhood is shrinking, and families who have lived here for a long time are increasingly finding it hard to find affordable rent, or even shops that provide useful services. Again, it's the same old story of gentrification, and I wish I knew how to "solve" it such that everyone could get what they want and nobody has to leave.

While on the subject of Latinos, Packer quotes Mitch Kapor (of Lotus 1-2-3 fame) saying that "asking questions about the lack of racial and gender diversity in tech companies leaves people in Silicon Valley intensely uncomfortable." I think he's definitely right about the lack of racial diversity: it's absolutely shameful how few African-Americans and Latinos are employed in high-tech positions. It's something we need to do much better on. However, I have (again, in my own limited personal experience) noticed a huge improvement in the last few years in addressing gender diversity. It's definitely a problem: at my previous company, we had hired over twenty (!!) men before hiring our first woman. Companies now openly talk about combating the "Dave ratio" (the ratio of men named "Dave" to women in a company; a 1:1 or worse ratio is shockingly common) and are explicitly describing their need to improve representation. Larger companies are addressing every stage of the pipeline: getting more school-age girls interested in science and math, encouraging college women to pursue STEM careers, setting up internship programs specifically for female programmers, and proactively recruiting women developers. Smaller companies like my own make greater efforts to seek out female developers, and even use terms like "affirmative action" that would have been anathema a few years ago. While there is still a very long way to go in improving the situation, it feels great to see more traction now that companies are discussing it. I was delighted when my own sister enrolled in the fantastic Hackbright program, and landed a great programming job within a few weeks of graduation.

But, the point still stands that we have a long way to go to even begin to approach parity, and we need to start taking similar initiatives to improve racial and cultural diversity. The tech industry in general, and Silicon Valley in particular, lags far behind most other American industries on both counts.

Um. I think that's all I had to say. I worry a little that I'm just a sensitive San Franciscan getting defensive when a writer from New York criticizes my town and my career, but I do have to admit that he's right about at least a few things. My biggest complaint is that he doesn't offer much perspective on how San Francisco compares to other cities in America, or how the tech industry compares to other industries; some objective figures on our miserliness or apathy would have impressed me much more than scattered anecdotes and interviews. Then again, I am a privileged tech guy, so that's exactly what you would expect me to say.

Monday, April 22, 2013

CISPA

Hey everyone,

I try not to be too political on this blog. Even though it's a very casual and personal thing, I try to stay focused on works of creativity that interest me. However, I've decided to participate in today's online protest against CISPA, and I thought I'd share some typically verbose thoughts about the bill.

If it feels like we've been through this before, you're not alone. Ever since the Internet became a major channel for communication among Americans, it has been a target of frequent attack by politicians and government officials seeking to control its content or monitor its activities. I first became politicized back in 1996 when the Communications Decency Act passed Congress and was signed by President Bill Clinton: this law would have criminalized otherwise-legal materials when placed on the Internet, granted the government sweeping powers to regulate undefined "indecent" content, and held internet service providers responsible for the content sent through their networks.

Fortunately, the Supreme Court struck down the worst parts of the CDA. Ever since then, though, there's been a relentless assault on our constitutional freedoms, fought on the battlefield of the Internet. Most of you will remember last year's protests against SOPA, the Stop Online Piracy Act, another poorly-written and overly broad bill that would have established pervasive censorship of previously protected speech. While it was being rushed through the US House, it encountered massive opposition, spearheaded by major web sites such as Google and Wikipedia. The public clamor encouraged the House Judiciary Committee to shelve the bill, and it died without a vote.

CISPA (the Cyber Intelligence Sharing and Protection Act) is not a repeat of SOPA. SOPA was ostensibly designed to protect against piracy, and would have enabled censorship. CISPA is ostensibly designed to enable sharing of information among law enforcement agencies, but will lead to pervasive surveillance of citizens by corporations and the government.

In America, we are very fortunate to have a strong tradition of constitutionally protected rights. High on this list is the Fourth Amendment, which prohibits the government from spying on its citizens or seizing their property without following due process of law. Of course, if a person poses a threat, the government can absolutely monitor them and take action to stop them: all it takes is a warrant. Under CISPA, every company with access to your data - Google, Apple, Facebook, Twitter, etc. - can spy on you and provide your data to the federal government. This is true regardless of an individual site's privacy policy. The bill would also allow companies to "hack" perceived threats, even if their victims are innocent, so long as they act "in good faith."

So, CISPA essentially wipes the slate clean of all the existing judicial processes meant to provide checks and balances between the government's need for information and individuals' desire for privacy. The Wiretap Act, Video Privacy Protection Act, Electronic Communications Privacy Act: all will be rendered moot. A company can promise to keep your information private, then turn around and supply it to the government, which can then share it with anyone and use it for other purposes. You will have no recourse against the offending company or the government.

It's a bad bill. A really, really bad bill.

President Obama has already stated that he will veto the bill if it passes the Senate, so I'm cautiously optimistic that CISPA will not go into effect. However, as a senator, Obama previously reneged on a promise to block the retroactive immunity granted to AT&T for its participation in warrantless wiretapping, a case that bears disturbing similarities to CISPA. Public opposition to the bill will provide political courage and political cover to Obama and to the members of the Senate who oppose CISPA's overreach.

So, what can you do to help stop CISPA?
  • Call your Senator. A call is much more effective than an email or other electronic communication, and will be counted much more quickly than a hand-written letter sent through the mail.
  • Spread the word. Don't spam your newsfeed, but a brief post explaining your concerns and pointing to information on the bill will help the movement grow.
  • Support the EFF. These are the good guys, and they have been fighting against awful legislation like CISPA for twenty years. 
  • (Bonus points): Contact your House representative and let them know what you think of their vote. Yes, it's too late to affect legislation, but if they realize how important this issue is to their constituents, they'll be more likely to oppose this type of legislation in the future.
I'll return to babbling about video games and books shortly. Thanks for your time, everyone!

- Chris

Tuesday, February 07, 2012

Go Off the Record

I'm partway through a fascinating article in the last New Yorker. It's about the suicide of Tyler Clementi, a college freshman who briefly was in the news last year when he killed himself after his roommate posted embarrassing stuff about him online. Tyler was gay, and the whole incident fit right into a national dialog we've been having about bullying, particularly in relation to the "It Gets Better" campaign.

The article is kind of hard to read. Emotionally, I mean… reading about a suicide victim, even a stranger, is taxing. It's very well-written. It's also a very surprising article. I'm probably like most people in that I saw the headlines and had a basic understanding of the story, but my understanding was flawed; like most people, I thought that the roommate had posted a secretly-taped sex video of Tyler; he actually didn't. What's really getting me about this article, though, is the amazing volume of online evidence that's been turned up, and is now part of the public record thanks to filed court documents. If the article is still available (it will eventually go behind a paywall), I highly recommend looking at it, just to get some idea of the long digital trail that perpetrator and victim left behind.

(I realize that "perpetrator" and "victim" are strongly-charged words, and some people will rightly disagree with them. As the article makes clear, there's a great deal of ambiguity in the full story of what happened.)

So, what's in the record? Every IM conversation that either of them had held. A lot of this is from conversations that they'd had about one another. Dharun, the straight roommate, went on an online fishing expedition once he received his roommate assignment. Armed only with his roommate's first name, last initial, and email address, he was able to find Tyler's online identity and locate posts Tyler had created on message boards going back nearly a decade; eventually, he figured out that Tyler was gay… before the two of them had ever met, or even exchanged an email. Throughout this whole process, Dharun was chatting on IM with a friend, and so we have a running dialog about Dharun's activities and his evolving opinions of Tyler: he's poor. He's stupid. He's a wuss. He's (horrors!) gay. The callousness of both Dharun and his friend is really shocking.

Similarly, we also vicariously eavesdrop on Tyler's IM conversations with a younger female friend of his; in one touching and awkward conversation, Tyler and Dharun have moved in together, and Dharun is ignoring Tyler; Tyler is trying to think of something to say to Dharun, but is too shy to break the ice. Many of us have been in this situation, but here we have an actual real-time description of it, as Tyler self-effacingly writes about his predicament.

All this is pretty heartbreaking or horrifying in the context of the story we're reading and the tragedy we know is approaching. It's a more subtle or disquieting effect when I think about how it applies to my own life. As I read this article, I couldn't help but think about the long electronic trail I've left behind. How many times, when I've been joking around with good friends online, have I typed a sentence that, devoid of context, could one day return to haunt me? It wouldn't be difficult to cull through the tens of thousands of IMs and emails I've written, pull out a dozen or so quotes, and portray me as… well, as whatever you wanted to show me as. A monster, a sissy, a bully, a Democrat, a Republican, an atheist, a fundamentalist Christian. Heck, even just going through the 600+ blog posts I've written on this site over the last few years would provide tons of fodder for the scandal mill if I ever were to become newsworthy (though I do pity the person who'd need to sift through all the treatises on Sid Meier's Civilization, postmodern literature, and British detective shows).

The simple fact is, I approach all IM, and to a lesser extent email and blogging, as conversational. I spout off whatever occurs to me, which may or may not line up with what I'd choose to write if I was working in a medium that felt more permanent and less ephemeral. Because I'm usually communicating with friends, who know my idiosyncrasies and with whom I share a deep well of valued popular culture, a lot of what I "say" draws on the accumulated shorthand of our relationships, a kind of Chris King argot: when I say that I'm bringing the Necronomicon to a cabin in the woods, I'm not REALLY admitting to a desire to engage in demonic rituals; I'm invoking the night back in high school when we watched an Evil Dead marathon over Halloween, and implicitly expressing optimism that we'll have as much fun now as we did over a decade ago. When I warn someone that I'm made of poison, I'm not REALLY expressing a desire to kill everyone I come in contact with; I'm invoking Topato, who… um… okay, it's a little complicated. You have to already know it, or it's impossible to explain.

Anyways. The article has been a huge cause for reflection on my part, and I'm still not sure what the take-away should be. I've been aware forever that online information stays around indefinitely, and that anything you write can be traced back to you; but there's a huge difference between knowing a fact and having it presented to you in the form of a tragedy. Part of me wants to do a full sweep: take advantage of Google Chat's "Off the Record" feature, use a PGP plug-in for AIM, don't post on message boards, delete my Facebook account. Really, though, I benefit a lot from all those things. I've debated before about "Off the Record," and have decided that there have been enough times that I've been happy to look up a conversation months later (or, conversely, been bummed when I haven't been able to find a chat log from AIM or another service) that it's worth keeping that data in Google's hands. And, really, the statistical odds of my babblings ever becoming of interest to the public at large are so small that my even considering the possibility is probably megalomaniacal of me.

But, again, that's just my own reaction. I think that this is a big deal, and something that will only grow more important as our online identities take a larger share of our total identity.  Everyone should think long and hard about what kind of trail they're leaving behind, and what that trail says about them.

Thursday, October 06, 2011

Goodbye, Steve

Like practically everybody here in the Bay Area, I was sad to hear that Steve Jobs had passed away. We've known for a while that his health was poor and his days were numbered, but still, it comes as a shock to realize that this titan no longer walks among us.

Steve was inextricable from Apple; even during his years in the wilderness, his stamp remained on the company, and his return cemented his place in the tiny pantheon of true technology leaders.

Most famously, Steve Jobs and Steve Wozniak invented the non-hobbyist version of the personal computer. Without the Apple II, there might not have been an IBM PC; we would probably still have PCs, but I imagine that they would have taken much longer to come into our homes. As a programmer who first found his life's calling by typing BASIC programs into a home PC, I owe Steve my thanks.

When I was growing up, I bought into and helped perpetuate the whole "Macs-Versus-PCs" debate. In elementary school, I enjoyed our Apple IIe computers, where I experienced the joys of Oregon Trail, Number Munchers, and other light educational games. In junior high, I was put off by the Apple Macintosh computers in the lab: where were the games? Where was BASIC? I sneered and returned to my command line at home.

I still remember when Jobs came back to Apple, and particularly when he "sold out" and announced a deal with Microsoft. I remember crowing to my friends, announcing that Apple's days were numbered, that in another year or two we would be living in a glorious, PC/Microsoft-only world.

I'm willing to admit that I was very, very wrong.

Steve proved me, and everyone else, wrong. I can't think of another person over the past fifteen years with his track record. Creating a successful business is hard. Reinvigorating a dying business? I would have said that was impossible, if I hadn't seen it for myself. He rescued Macs, and made them presentable to a wide group of people, including (probably not as crucially as I would like to think) coders like me. OS X proved to be the natural evolution of my college-age love affair with Linux, as I could finally achieve high productivity in a beautiful environment.

Steve had a knack for changing the world. For all the truthful accusations of Microsoft copying Apple's innovations, Apple itself had a tendency to take something that had previously been tried, and failed, and turn it into something indispensable. Our planet has had portable digital music players since the Rio and the Zen; few people cared, and now everyone has an iPod. Palm had been making PDAs for a decade, and Microsoft had spent untold millions of dollars creating Windows Mobile phones. Now, the iPhone dictates every move of the mobile market where I make my living. When the iPad was announced, I wondered whether a market could possibly exist between the small-size phone and the large-size laptop. Yes, it can, and it's where most of the growth in our industry is occurring.

I never met the man, and in all honesty, I'm not sure if I would have wanted to. From what I hear he could be curt and abrasive. He demanded perfection, and could get upset when he didn't receive it. That made life difficult for a few hundred people. It made life better for millions of others.

Nobody can fill his shoes. I'm confident that Apple and Tim Cook will stay true to the vision that Steve Jobs so ardently embodied. When you look throughout the boardrooms of Silicon Valley companies, though, you can see many other smart people, many other talented people, but nobody else who can shake up so many industries, and personally touch so many lives, the way that Steve did. He helped change the way we communicate; he personally brought the publishing industries into the 21st century; he created beautiful objects that can enrich our lives. He left a legacy. He will be missed.