The Fumble
Attention Conservation Notice: This isn’t an attempt at a holistic explanation of the election so much as it is an attempt to store my current mental state for future reference, and as such, it may be even less coherent than normal. If you want to read an actually coherent holistic explanation from somebody who knows what they’re talking about, the best one I read was: The Deeper Reasons Democrats Lost. Also, this is best read if you imagine me talking in a calm, normal, reasoned sort of voice for the first thousand or so words, and then every thousand words after that getting increasingly agitated and loud, with a sort of “I don’t care if they kick us out of the restaurant!” borderline hysteria by the end.
It’s been a month or so now since the election, and I’ve been trying to figure out how to articulate the way I feel about how things went down, especially in the wake of a million “this confirms my priors” postmortems. I’m disappointed and irritated, to put it mildly, but it’s also a very specific kind of disappointed and irritated that I couldn’t quite put my finger on. And then it hit me: I feel the exact same way about the Harris/Walz campaign as I do about the 1990 San Francisco 49ers.
For those of you that don’t have 30+ year old sports failures memorized, let me recap. As the 1990 season started, the 49ers had won two Superbowls back to back, Joe Montana was at his peak, Jerry Rice could catch anything, their win in the previous Superbowl still holds the record for “most points scored by a team” and “widest margin of victory.” They’re blatantly the best team in the league, and in contention for “best team of all time.” This is the team people mean when they talk about "The 49ers". People start murmuring about “A three-peat”; at the time, no team had ever won three Superbowls in a row, and the Niners looked like they were about to make it look easy.
This was the era where the NFC was totally dominant, so the other “best team in the league” was the New York Giants. As was becoming tradition at the time, the “real” superbowl was going to be the NFC championship game. Coming into that game, the Niners looked unstoppable; they’d already beaten the Giants once that year, and there was this mounting excitement that this really might happen. Lots of “you’re only saying it’s impossible because no one ever has” vibes.
Wikipedia has a surprisingly compelling summary of the game, but the short version is: game turned into a grueling defensive slugfest, with both teams staggering down the field and mostly kicking field goals. The Niners stayed ahead, but barely. Then, in the fourth quarter, Montana took a hit so hard not only was he out for the rest of the game, but he wouldn’t play again for almost two years. Steve Young did his best, but then with under three minutes to go, the normally infallible Roger Craig fumbled the ball, the Giants recovered, and at the last second kicked a field goal. Game was over, Giants won 15–13. And that was it. Giants went on to win the Superbowl. Whoops.
I’m not saying the Niners “deserved” to win, because that’s not how this works. I’m also not saying that the Giants only won because they got lucky with two unexpected disasters. What I am saying is that I watched that whole season, and I know the Niners could have won that game, but when it really mattered they couldn’t get it done. There is a very specific kind of irritation in watching someone who should have been far enough ahead that those disasters didn’t matter.
And that’s how I feel a month out from Harris/Walz blowing the big game: that bone-deep frustration that this was winnable.
A lot of that is fueled by finding out just exactly how close it was. It was a broad rinsing, but a shallow one; the last numbers I saw showed that something like 200,000 votes in the right three or four cities would have swung this the other way. Still a loss, but that’s not a mandate, or a landslide. There wasn’t One Cool Trick that would have done the job, but you get the sense that any number of combinations of Little Cool Tricks might have. Heck, this could have gone either way based on how nice the weather was in the midwest that Tuesday.
Like all Presidential elections, this was all vibes, and the vibes went bad. The winning vote was “bleah, who even cares anymore.” Because the specific big difference between this year and 2020 was that the group that showed up to vote against Trump in 2020 didn’t show up this time.
I will say this though: I will go to my grave convinced that if they’d let Walz continue to call people weird and leaned even harder into “do you really want more of these assholes?” and “not going back” that Harris would have won. “Weird” was working. Not going back was working. Turn the Page was working.
Instead, I guess that was “too negative”, Walz vanished, and Harris started campaigning with the daughter of an infamous war criminal who came to national attention right when the voters you’re trying to court got old enough to be politically informed. To be clear, I don’t think Liz Cheney cost any votes, but she sure didn’t get us any. That was all wasted effort.
So now we’re in the middle of the Democrats’ most despicable tradition: the post-loss argument about which group to throw out of the boat. This year it’s the transgender community, who have committed the terrible crime of “continuing to exist.”
Throwing the civil rights of our transgender sisters and brothers under the bus to try and pick up a few votes is, of course, a moral horror. “But!” I hear the worst people alive say, “it’s just a sound strategic move!” No! It’s a stupid strategic move! Because look—there’s already a whole-ass American Political Party whose core platform is “not everyone counts as real people.” The voters whose issue is “I only want people like me to be treated well” are already voting for the other guys! Why would you go see a cover band when the real band is playing next door? So you end up in Electoral Stupid Limbo, where you’re not bigoted enough to get the bigot vote, but have made it clear to a group of voters looking for a candidate that you won’t help them out if you win. So they stay home!
This is what’s so enraging about abandoning “weird”: it made it clear that this was the platform for everyone else. The Republicans have tied up the weirdo bigoted middle-aged white guy demographic, and I’m convinced a campaign centered on “screw those guys, we’re sick of them treating us like crap” would work, mostly because it was working.
I’m gonna come back to that in a second, but if you’ll indulge me, my entry for the department of “recent surprising events have confirmed my priors” is that I remain convinced that there are, effectively, no such things as “undecided voters.” What you have instead is that both “teams” have a core group of voters that always shows up, and then each team has another pool of people who will either vote for them or stay home, but won’t switch sides.
And yeah, there are definitely Trump-Biden-Trump voters, but if you dig in a little further they all seem to be voting for “not the chick,” rather than any kind of team preference. These guys aren’t “undecided”, they’re “sexist” and that’s a different problem.
This is one of the places where I think using the sports metaphor is really apt. Trying to “flip” voters is like trying to convince people to root for a different team. “Come on and root for the Cowboys just this once, the 49ers aren’t even going to make the playoffs!” Absolutely not, that’s deranged, I’m gonna root for whoever the Cowboys are playing. The people you want to get are the people who watched every 49er game until Steve Young retired. “Come on back! We’re good this year! Lemme tell you the good news about this neat kid we got to play QB!”
The big difference, I think, that this election really displayed is that the Reps have a larger group of core voters, but the Dems have a larger group of “maybes.” Higher floor vs higher ceiling.
There’s a fundamental tactical asymmetry here, in that the Red Team can focus on depressing Blue Team turnout and have that work, but the opposite doesn’t. After a decade-plus of playing the refs, the Red Team has gotten very, very good at this. The Blue Team has to actually get the “maybes” to turn out to win.
The problem is that the Dems keep trying to pick up the other team’s “maybes.” That’s the endless “pivot to the center” that never works. Why? Why spend all that time courting people that aren’t ever going to vote for you?
I think the “fourth why” really is how old most of the Dem leadership is. Most of the people running the show, either “on camera” or more importantly “behind the scenes,” have been doing this since the 70s. A lot of great things came out of the 70s that I like a lot, like Star Wars, or The Sting! But things have changed since then.
One of the big things that changed is that the Dems are now the “big tent” diverse party. They haven’t won the “white vote” since before Jimmy Carter. (I’m using “white” here as a shorthand for straight, cisgender people whose ancestors were from select areas of Western Europe and are probably Protestant, in the vague American way.)
I think the core problem, the “fifth why”, is that those 70s-era leaders still think of “white, probably Protestant men” as the “real voters.” “If we can get them, we win!” Nope. Statistically, they vote Red Team. Like it or not, and there’s a class of consultant who really does not, the Dems have been the “everyone but white guys” party for half a century.
And look, there is a way to maybe get that mass of vaguely racist, vaguely sexist white men to show up, and that was to run somebody that looks like Joe Biden. But when you’re running a young-ish woman of color, there’s no getting that vote. They’re gonna find a whole bunch of reasons why they obviously support women, just not this woman. That’s not great, but it should not be a surprise. That’s just baked into the population we have. Once you’ve got a candidate that looks like this, you have to put all the energy on “everyone else.”
Speaking of the “male vote”, there was a lot of chatter around the fact that Millennial white men went for Harris but Gen-Xers went for Trump. And, while that’s accurate, I don’t think it’s a “generational” thing so much as I think there’s a standing wave somewhere around 35-40. There’s a certain kind of dude who’s finally successful: got a good job, a fancy truck he likes, all the wraparound shades he’ll ever need, well paid, and he can’t figure out why people still don’t like him. His kids are jerks, his wife isn’t as hot as he thought he deserved. (These are the “I’ll do anything to protect my family” doofuses who refused to wear a mask. “I’ll do anything for love, but I won’t do that”—where “that” was wear a mask and get vaxxed, it turns out.) There’s that oft-repeated line about people getting more conservative as they get older, usually framed as “now you make enough money, you’re for tax cuts,” but no, I think this is about guys who never learned that the trick was to grow their own personality and develop empathy, deciding that if they can’t have “social power” they might as well get some “political power.” The number of dudes online who made their whole personality “if Trump wins I’ll get laid” was bizarre. (And so I’m calling my shot now: in another two elections we’re going to get a whole set of thinkpieces about “why have Millennial men gone to the right?” Because they passed through that threshold.)
There’s a deep, deep insecurity there. (I saw someone online say that if we could cure baldness, the male vote would move left 20 points.) This is where the anti-elite thing comes from: a whole bunch of comfortable, insecure men who have convinced themselves that someone, somewhere, thinks they’re better than them, and they’re gonna vote for someone to take that someone down a notch. The people who want to be jerks, but think the worst possible thing is for them to be criticized for being a jerk. They tend to really dislike “liberals” because they’re giving “those people” things they “don’t deserve,” but mostly they dislike anyone who makes them feel like maybe being a jerk isn’t the best idea possible. The Red Team’s whole messaging the last several cycles has really centered on this: these people think they are better than you, but they are not! Let’s take ‘em down!
I’m bringing all this up because this was one of the brilliant things about having Tim Walz be the one saying “weird,” and why it was such a mistake for him to vanish after the convention.
There aren’t a lot of “cool dads” in pop culture, not as good role models anyway. So having Walz show up as Cool Dad™ saying, basically, “hey, fellas, there’s a better way to be a man and protect your family,” seemed like a real opportunity. This is the “toxic vs tonic masculinity” we were all talking about back in September.
And that’s what makes this election so maddening: what they were doing looked like it was working.
The real teeth-gritting part is that I can almost, almost see the logic in trying to peel off some R votes in an election where the stakes are “this guy wants to be a warlord.” There are probably Republicans to campaign with who are broadly well-liked enough for that to have worked. Clint Eastwood? Arnold Schwarzenegger maybe? But instead, they front-and-centered the Cheneys, and this is the other problem with having your leadership be that old, because to them the Cheneys are fellow “serious people”, but to everyone in that critical elder Millennial to younger Gen-X demographic, the Cheneys are “a big part of the reason all my high school friends aren’t still alive.”
Again, I don’t know if that actually lost votes, but it also sure as hell didn’t gain any, and it was a huge opportunity cost versus things that could have.
Having someone that looks like Walz pointing at the loser weirdos and calling them loser weirdos, while the creeps made fun of his kids? That was working.
But the real problem, the actual catastrophic problem there, is that the Dems didn’t actually have an answer to “well, if Trump did all these crimes, why isn’t he in jail?” Your whole argument about saving democracy evaporates when this is the third election in a row you’ve run on that, and you haven’t been doing anything to save it with the resources you already have. To be fair, that wasn’t necessarily Harris’ fault, but it was her problem. If you’re a less-engaged, lower-information voter—which by definition all those “maybes” are—it’s real easy to hear all “that stuff” about what Trump did, and then conclude it’s bullshit, because obviously if it was real he’d be in the slammer. (But if you focus on the fact that he’s weird and gross? All that “danger to democracy” power slides away, and he’s just another asshole.)
Voters like it when the people they voted for actually wield the power they gave them. The Dems have a long and inglorious history of just… not doing what they can, because reasons. Because that’ll score more moral points with… somebody?
I was mid-draft of this mess when Biden pardoned his kid, and as near as I can tell the reaction from the Dem base was “yeah, man, like that! You shoulda done that on day one and kept going.”
Most normal, sane people, the people you want to vote for you, would also absolutely do whatever they could to pull their kids out of a fire. The people you were gonna get blowback from weren’t going to vote for you anyway. It was a perfect microcosm of not using the power available because they thought that might impress… someone? Someone who still didn’t vote for them? Instead, you were asking people to come out and vote for someone who wasn’t even willing to help his own son until it was clear that wasn’t going to hurt him politically—and to believe that his party was going to fight for them?
Your sales pitch can’t keep being “the other people might do things! Vote for us so we keep doing nothing!” That’s a bad position.
A whole lot of people showed up in 2020, and the Dems didn’t do anything to make sure those people stuck around; instead they assumed they would and tried to flip others. There’s a sense that summer 2021 was the Democrats’ “Mission Accomplished” moment. “We’re done! Democracy saved! Pandemic over!” And then they let all the social programs expire and all the criminals off the hook. And then a whole bunch more people died?
Of course, when you do wield power, you have to actually show off a little, make some noise, take the credit. There’s a line I read somewhere over the summer that Biden was a Progressive President with Centrist Vibes, and that’s stuck with me. And part of that was just not talking about the things they did manage to do.
Which brings me to my last major problem, which is that it’s hard to miss that people believed a lot of obviously untrue stuff this cycle. The Information Environment is absolutely cooked for the Dems these days, after years of Reps playing the refs, and a media owned by a shrinking group of billionaires who, I think, genuinely wanted Trump to win.
Pretty much every post-mortem on this one I’ve read boils down to “voters believed a lot of really obvious and easily disprovable lies.”
I continue to be slightly baffled by how much certain chunks of the center-to-left hate Biden? But also, if you only get your news from the NYT, I guess that makes sense. There’s a basic “tell people what you did and then keep telling them” move that the Dems can’t seem to make, and that big chunks of the media refuse to help with.
There’s a story I can’t find a link to again about a group of people who voted for Trump because “he gave us money last time,” and it turns out they thought the COVID stimulus checks were his personal money, not—you know—a program the Dems in Congress voted for. This also tracks very closely with some conversations I had in real life over the summer, where it was very clear the people I was talking to were convinced Trump did a whole lot of things that were actually done by the Dems, because he, you know, put his damn name on the checks.
There’s also that chart that was going around showing that “knowingly consuming” political news tracked incredibly closely with candidate choice: those that consumed a lot went Harris, those that didn’t consume any went Trump. (The other theme for this piece is that I failed to keep my references organized, so of course I can’t find that chart again.)
But the key word there was “knowingly.” I’m gonna loop back to that “male vote” for a second, because there is so much weird right-wing manosphere garbage in every “male hobby” out there. Video games, gyms, tabletop gaming, everywhere. So you start going to the gym, and maybe you personally don’t listen to Rogan, but the other guys there do, and this stuff seeps in. I cannot tell you how many middle-aged guys I know who consider themselves proudly “independent centrists” and then two sentences later start quoting opinions that would have gotten you kicked off 4chan a decade ago. It’s insane how good a job the far right has done in building out a media ecosystem that doesn’t look like one. It ought to be the easiest thing in the world to build out a YouTube channel talking about Warhammer or Battletech or Star Wars or TTRPGs from—if not a left perspective—at least a non-fascist perspective, but instead I keep getting sent YouTube videos with the caption “pretty funny!” or “sounds about right!” that are just straight-up Nazi propaganda over promo photos of Luke Skywalker and Kathleen Kennedy. And then, the real kicker is, if you “don’t consume political news” you don’t have any context and aren’t reading any of the stuff pointing that out. Neat!
The flip side of all that, though, is that we’re clearly in a post-campaign era. I think overall Harris & co. ran a really solid campaign—excepting the part where they lost, obviously—but the other guy just mumbled his way through various slurs and swayed on stage to music made by people that hate him. But none of that got covered straight, and you end up with coverage about made-up versions of the two candidates. It’s a post-truth era where no one believes either candidate will do what they say they will, good or bad. (Which is causing a lot of shocked Pikachu memes at the moment.)
So, to summarize: going into this, the Dems had four big problems that I think mattered more than anything else:
- International, widespread repudiation of incumbents.
- Running a Woman of Color in a racist, sexist country.
- The overall Information Environment being wired for the other team.
- Not having an answer for “if Trump did all these crimes, why isn’t he in jail?”
And this is why I’m so frustrated about them abandoning “weird”, because it cut through all of those like a hot knife through butter:
The worldwide anti-incumbent attitude? Weird/Not Going Back neatly harnessed that energy, aikido-style, and redirected it, framing this as moving on not just from what was basically the “Trump-Biden” era, but from the whole 90s political scene that’s just never gone away, and from the guy Biff from the 80s classic Back to the Future was literally based on.
The racists/sexists? “Do you want those weirdos to be in charge?”
The Information Environment? No one would cover things like policy proposals, but they sure as heck carried “weird.” There’s no way to garble that!
Trump’s crimes? If you talk like he’s a threat to civilization, you have to answer why you don’t act like it. But if you just keep pointing out what a sadsack dipshit loser weirdo he and his cronies are, that drains all his mystique away. Plus the Red Team hated that, and started doing some really, really dumb things because of it.
You gotta focus on the base. Your own base. Not take your base for granted and try to pick up the other guy’s. Parts of this campaign felt like that kid in class who skipped the final project but tried to make it up on extra credit. (Didn’t work for them either.)
I think the key political element of our times is that everyone, and I mean everyone is absolutely seething with anger. The entire Trump project is based on harnessing that from one subgroup and redirecting it towards everyone else. That health insurance CEO was gunned down while I was wrapping this up, and the utterly wild reaction to the shooting is what I’m talking about. People are done. This was basically the third “table-flip” election in a row, and that can’t be great. “Weird” was a way to harness that, direct it, say “we hear you, and we’re as mad about the same stuff you are.”
Would it have worked if they’d kept going with “they’re weird and we’re mad” ’til the end? I don’t know, but I’d sure like to have found out.
Oh wait, they’re really kicking us out of the restaurant…
Cabel Sasser’s XOXO Talk
This was two months ago now, but in case you missed it, I want to strongly recommend Cabel Sasser’s XOXO talk.
The less you know going in, the better. It’s 20 minutes, get a beverage and settle in. The opening is a little clunky, but trust me.
Delicious Library Crosses Over
Okay, this one hurts. Amazon has finally turned off the product-lookup service Delicious Library depended on, so the app has been discontinued.
Over at Daring Fireball, Gruber has a really nice writeup about how Delicious Library was the exemplar for the era of apps as art in their own right that seems to have mostly passed: Daring Fireball: The End of the Line for Delicious Library
Personally? I have a Delicious Library that contains (almost) everything I own. “Dad, did you scan these yet?” was a standard phrase of any new purchase or gift. Books, games, toys, whatever—the thing where it could pull data on anything with a UPC or EAN from Amazon was amazing. Even more amazing was the barcode scanner app for the iPhone that used the camera. Just walk around the house: zip, zip, zip.
You know how they say you should have a list of everything you own for insurance purposes? I had that! Plus, the ability to see when something was bought was surprisingly useful. “What birthday was that?” You could answer those questions! I loved it.
It was clear DL was on its way out: Wil Shipley has been an Apple employee for years now, the app hasn’t been updated in ages, the Amazon link was getting… “sketchy.” But there just isn’t a replacement.
This is the sort of thing where all you can do is throw your hands up and make a sort of “ecchhhhh” sound. One more great thing we used to have that’s gone, because the easiest way for some middle manager somewhere to make one of their KPIs was to break it.
Although, maybe the most maddening thing about this one is that I would have absolutely paid some kind of subscription fee to Amazon (directly or indirectly) to keep this working, and… nope. Not an option.
“Sex-Haver Energy”
Somewhere in the last couple of years I had read “something” “somewhere” that described Kurt Russell as having “sex-haver energy.” And he does! The phrase stuck with me even as I lost my grip on the source. Part of the problem with our fractured media environment is that finding something a second time feels like it should be possible, but rarely is. Was it an article? A tweet? Something from my RSS feeds? Apple News? Linked off one of those? The decayed Google search didn’t turn up anything, either.
Anyway, I stumbled across it again! It was in Everyone Is Beautiful and No One Is Horny - Blood Knife, which I dug up to drop a link in my piece on Deadpool. It’s funny, because I had completely forgotten that was the origin, despite remembering the article very clearly. The reason my searches never turned it up was that the phrase was actually about Snake Plissken, not the actor that played him.
Memory is a wacky thing.
Anyway, go read that; it’s one of the best analyses of modern media I’ve read in a long time.
Woah! Slow Down, Maurice!
I’m a decade late to this, but please feast your eyes on:
This so perfectly captures why Gaston is my favorite Disney antagonist. Because he’s not a “villain”, he’s just an asshole. He’s not summoning the powers of darkness, or setting kingdom against kingdom, or scheming of any kind. His entire program is:
- He wants to hear a lot of compliments
- He wants to bang the hot nerd
And that’s it.
It’s so deliciously low-stakes for a Disney fantasy movie that also includes, you know, a giant monster man and a singing candlestick. And that’s part and parcel of why I love that movie so much, because the core engine of the plot is that the three mediocre men in Belle’s life collide with each other, and while nothing that happens after is her fault, it all becomes her problem. So even by the end, when you’ve got a rampaging mob attacking a castle, the root cause is still one asshole who couldn’t handle that only 99% of the village liked him.
The end result is that two of those dudes get their act together and the third one falls off a roof. And, you know…
Don’t Panic: Infocom’s Hitchhiker’s Guide to the Galaxy at 40
Well! It turns out that this coming weekend is the 40th anniversary of Infocom’s Hitchhiker’s Guide to the Galaxy text adventure game by Douglas Adams and Steve Meretzky. I mentioned the game in passing back in July when talking about Salmon of Doubt, but I’ll take an excuse to talk about it more.
To recap: Hitchhiker started as a six-part radio show in 1978, which was a surprise hit, and was quickly followed by a second series, an album—which was a rewrite and re-record with the original cast instead of just being a straight release of the radio show—a 2-part book adaptation, a TV adaptation, and by 1984, a third book with a fourth on the way. Hitchhiker was a huge hit.
Somewhere in there, Adams discovered computers, and (so legend has it) also became a fan of Infocom’s style of literate Interactive Fiction. They were fans of his as well, and to say their respective fan-bases had a lot of overlap would be an understatement. A collaboration seemed obvious.
(For the details on how the game actually got made, I’ll point you at The Digital Antiquarian’s series of philosophical blockbusters Douglas Adams, The Computerized Hitchhiker’s, and Hitchhiking the Galaxy Infocom-Style.)
These are two of my absolute favorite things—Infocom games and Hitchhiker—so this should be a “two great tastes taste great together” situation, right? Well, unfortunately, it’s a little less “peanut butter cup” and a little more “orange juice on my Corn Chex.”
“Book adaptation” is the sort of thing that seemed like an obvious fit for Infocom, and they did several of them, and they were all aggressively mediocre. Either the adaptation sticks too close to the book, and you end up painfully recreating the source text, usually while you “wait” and let the book keep going until you have something to do, or you lean the other way and end up with something “inspired by” rather than “based on.” Hitchhiker, amusingly, manages to do both.
By this point Adams had well established his reputation for blowing deadlines (and loving “the whooshing noise they make as they go by”), so Infocom did the sane thing and teamed him up with Steve Meretzky, who had just written the spectacular—and not terribly dissimilar from Hitchhiker—Planetfall, with the understanding that Meretzky would do the programming, and if Adams flagged, Meretzky could step in and push the game over the finish line.
The game would cover roughly the start of the story; starting with Arthur’s house being knocked down, continuing through the Vogon ship, arriving on the Heart of Gold, and then ending as they land on Magrathea. So, depending on your point of view, about the first two episodes of the radio and TV versions, or the first half of the first book. This was Adams’ fourth revision of this same basic set of jokes, and one senses his enthusiasm waning.
You play as Arthur (mostly, but we’ll get to that,) and the game tracks very closely to the other versions up through Arthur and Ford getting picked up by the Heart of Gold. At that point, the game starts doing its own thing, and it’s hard not to wonder if that’s where Adams got bored and let Meretzky take over.
The game—or at least the first part—wants to be terribly meta and subversive about being a text adventure game, but more often than not offers up things that are joke-shaped, but are far more irritating than funny.
The first puzzle in the game is that it is dark, and you have to open your eyes. This is a little clever, since finding and maintaining light sources are a major theme in earlier Zork-style Infocom games, and here you don’t need a battery-powered brass lantern or a glowing elvish sword, you can just open your eyes! Haha, no grues in this game, chief! Then the second puzzle is where the game really shows its colors.
Because, you see, you’ve woken up with a hangover, and you need to find and take some painkillers. Again, this is a text adventure, so you need to actually type the names of anything you want to interact with. This is long before point-and-click interfaces, or even terminal-style tab-complete. Most text games tried to keep the names of nouns you needed to interact with as short as possible for ergonomic reasons, so in a normal game, the painkillers would be “pills”, or “drugs”, or “tablets”, or some other short name. But no, in this game, the only phrase the game recognizes for the meds is “buffered analgesic”. And look, that’s the sort of thing that I’m sure sounds funny ahead of time, but is just plain irritating to actually type. (Although, credit where credit is due, four decades later, I can still type “buffered analgesic” really fast.)
And for extra gear-grinding, the verb you’d use in regular speech to consume a “buffered analgesic” would be to “take” it, except that’s the verb Infocom games use to mean “pick something up and put it in your inventory,” so then you get to do a little extra puzzle where you have to guess what other verb Adams used to mean “put it in your mouth and swallow.”
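(If you’ve never played one of these, here’s a minimal sketch of the kind of vocabulary lookup a two-word parser does—in Python, purely for illustration; Infocom’s actual games were written in their own language, ZIL, and the tables below are made up—just to show why an unguessable noun-and-verb pairing is so painful:)

```python
# A toy two-word parser in the style of early-80s text adventures.
# Purely illustrative: Infocom's real engine was far more sophisticated,
# and these vocabulary tables are invented for the example.

NOUNS = {
    # A kinder game would also register "pills", "drugs", "tablets"...
    # but this one only answers to the full phrase:
    "buffered analgesic": "analgesic",
}

VERBS = {
    "take": "pick_up",     # reserved: "take" means "add to inventory"
    "get": "pick_up",
    "swallow": "consume",  # the verb you're left to guess
    "eat": "consume",
}

def parse(command: str) -> str:
    verb, _, noun = command.lower().partition(" ")
    if verb not in VERBS:
        return f'I don\'t know the word "{verb}".'
    if noun not in NOUNS:
        return f'I don\'t know the word "{noun}".'
    if VERBS[verb] == "pick_up":
        return "Taken."    # into your robe pocket, not your mouth
    return "Ahhh, that's better."

for cmd in ["take pills", "take buffered analgesic", "swallow buffered analgesic"]:
    print(f"> {cmd}\n{parse(cmd)}")
```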
The really famous puzzle shows up a little later: the Babel Fish. This seems to be the one that most people gave up on, and there was a stretch where Infocom was selling t-shirts that read “I got the Babel Fish!”
The setup is this: You, as Arthur, have hitchhiked on to the Vogon ship with Ford. The ship has a Babel Fish dispenser (an idea taken from the TV version, as opposed to earlier iterations where Ford was just carrying a spare.) You need to get the Babel fish into your ear so that it’ll start translating for you and you can understand what the Vogons yell at you when they show up to throw you off the ship in a little bit. So, you press the button on the machine, and a fish flies out and vanishes into a crack in the wall.
What follows is a pretty solid early-80s adventure game puzzle. You hang your bathrobe over the crack, press the button again, and then the fish hits the bathrobe, slides down, and falls into a grate on the floor. And so on, and you build out a Rube Goldberg–style solution to catch the fish. The 80s-style difficulty is that there are only a few fish in the dispenser, and when you run out you have to reload your game to a point before you started trying to dispense fish. This, from the era where game length was extended by making you sit and wait for your five-and-a-quarter-inch floppy drive to grind through another game load.
Everything you need to solve the puzzle is in the room, except one thing: the last item you need to get the fish is the pile of junk mail from Arthur’s front porch, which you needed to have picked up on your way to lie in front of the bulldozer way back at the start of the game. No one thinks to do this the first time, or even the first dozen times, and so you end up endlessly replaying the first hour of the game, trying to find what you missed.
(The Babel Fish isn’t called out by name in Why Adventure Games Suck, but one suspects it was top of Ron Gilbert’s mind when he wrote out his manifesto for Monkey Island a few years later.)
The usual reaction, upon learning that the missing element was the junk mail—coming, as it does, after the thing with the eyes and the “buffered analgesic”—is to mutter “screw this” and stop playing.
There’s also a bit right after that where the parser starts lying to you and you have to argue with it to tell you what’s in a room, which is also the kind of joke that only sounds funny if you’re not playing the game, and probably accounted for the rest of the people throwing their hands up in the air and doing literally anything else with their time.
Which is a terrible shame, because just after that, you end up on the Heart of Gold and the game stops painfully rewriting the book or trying to be arch about being a game. Fairly quickly, Ford, Zaphod, and Trillian go hang out in the HoG’s sauna, leaving you to do your own thing. Your own thing ends up being using the backup Improbability Drive to teleport yourself around the galaxy, either as yourself or by Quantum Leap–style jumping into other people. You play out sequences as all of Ford, Zaphod, and Trillian, and end up in places the main characters never reach in any of the other versions—on board the battlefleet that Arthur’s careless comment sets in motion, inside the whale, outside the lair of the Ravenous Bugblatter Beast of Traal. The various locations can be played in any order, and like an RPG from fifteen years later, the thing you need to beat the game has one piece in each location.
This is where the game settles in and turns into an actual adventure game instead of a retelling of the same half-dozen skits. And, more to the point, this is where the game starts doing interesting riffs on the source material instead of just recreating it.
As an example, at one point, you end up outside the cave of the Ravenous Bugblatter Beast of Traal, and the way you keep it from eating you is by carving your name on the memorial to the Beast’s victims, so that it thinks it has already eaten you. This is a solid spin on the book’s joke that the Beast is so dumb that it thinks that if you can’t see it, it can’t see you, but it manages to make having read the book a bonus rather than a requirement.
As in the book, to make the backup Improbability Drive work you need a source of Brownian Motion, like a cup of hot liquid. At first, you get a cup of Advanced Tea Substitute from the Nutrimat—the thing that’s almost, but not quite, entirely unlike tea. Later, after some puzzles and the missile attack, you can get a cup of real tea to plug into the drive, which allows it to work better and makes it possible to choose your destination instead of it being random. Again, that’s three different jokes from the source material mashed together in an interesting and new way.
There’s a bit towards the end where you need to prove to Marvin that you’re intelligent, and the way you do that is by holding “tea” and “no tea” at the same time. You manage that by using the backup Improbability Drive to teleport into your own brain and remove your common sense particle, which is a really solid Hitchhiker joke that only appears in the game.
The game was a huge success at the time, but the general consensus seemed to be that it was very funny but very hard. You got the sense that a very small percentage of the people who played the game beat it, even grading on the curve of Infocom’s usual DNF rate. You also got the sense that there were a whole lot of people for whom HHGG was both their first and last Infocom game. Like Myst a decade later, it seemed to be the kind of game that got bought for people who didn’t play games, and it didn’t convert a lot of them.
In retrospect, it’s baffling that Infocom would allow what was sure to be their best-selling game amongst new customers to be so obtuse and off-putting. It’s wild that HHGG came out the same year as Seastalker, their science fiction–themed game designed for “junior level” difficulty, and was followed by the brilliant jewel of Wishbringer, their “Introductory” game, which was an absolute clinic in teaching people how to play text adventure games. Hitchhiker sold more than twice as many copies as those two games combined.
(For fun, see Infocom Sales Figures, 1981-1986 | Jason Scott | Flickr.)
Infocom made great art, but was not a company overly burdened by business acumen. It was run by people who thought of games as a way to bootstrap the company, with the intent to eventually graduate to “real” business software. The next year they “finally” released Cornerstone—their relational database product that was going to get them to the big leagues. It did not; sales were disastrous compared to the amount of money spent on development. The year after that, Infocom sold itself to Activision, and Activision would shut them down completely in 1989.
Cornerstone was a huge, self-inflicted wound, but it’s hard not to look at those sales figures, with Hitchhiker wildly outstripping everything else other than Zork I, and wonder what would have happened if Hitchhiker had left new players eager for more instead of trying to remember how to spell “analgesic.”
As Infocom recedes into the past and the memories of old people and enthusiasts, Hitchhiker maintains its name recognition. People who never would have heard the name “Zork” stumble across the game as the other, other, other version of Hitchhiker Adams worked on.
And so, the reality is that nowadays HHGG is likely to be most people’s first—and only—encounter with an Infocom game, and that’s too bad, because it’s really not a good example of what their games were actually like. If you’re looking for a recommendation, scare up a copy of Enchanter. I’d recommend that, Wishbringer, Planetfall, and Zork II long before getting to Hitchhiker. (Zork is the famous game with the name recognition, but the second one is by far the best of the five games with “Zork” in the title.)
BBC Radio 4 did a 30th-anniversary web version some years ago, which added graphics in the same style as the Guide entries from the TV show, done by the same people. It feels like the re-release Infocom would have done in the late 80s if the company hadn’t been busy drowning in the consequences of its bad decisions.
It’s still fun, taken on its own terms. I’d recommend the game to any fan of the other iterations of the Guide, with the caveat that it should be played with a cup of tea in one hand and a walkthrough within easy reach of the other.
All that said, it’s easy to sit here in the future and be too hard on it. The Secret of Monkey Island was a conceptual thermocline for adventure games as a genre; it’s so well designed, and its design philosophy is so well expressed in the game itself, that once you’ve played it, it’s incredibly obvious what every game before it did wrong.
As a kid, though, this game fascinated me. It was baffling, and seemingly impossible, but I kept plowing away at it. I loved Hitchhiker, still do, and there I was, playing Arthur Dent, looking things up in my copy of the Guide and figuring out how to make the Improbability Drive work. It wasn’t great, it wasn’t amazing, it was amazingly amazing. At one point I printed out all the Guide entries from the game and made a physical Guide out of cardboard?
As an adult, what irritates me is that the game’s “questionable” design means that it’s impossible to share that magic from when I was 10. There are plenty of other things I loved at that time that I can show people now, and the magic still works—Star Wars, Earthsea, Monkey Island, the other iterations of Hitchhiker, other Infocom games. This game, though, is lost. It was too much of its exact time, and while you can still play it, it’s impossible to recreate what it was like to realize you can pick up the junk mail. Not all magic lasts. Normally, this is where I’d type something like “and that’s okay”, but in this particular case, I wish they’d tried a little harder to make it last.
As a postscript, Meretzky was something of a packrat, and it turns out he saved everything. He donated his “Infocom Cabinet” to the Internet Archive, and it’s an absolute treasure trove of behind-the-scenes information, memos, designs, artwork. The Hitchhiker material is here: Infocom Cabinet: Hitchhikers Guide to the Galaxy : Steve Meretzky and Douglas Adams
How to Monetize a Blog
If, like me, you have a blog that’s purely a cost center, you may be interested in How to Monetize a Blog. Lotta good tips in there!
(Trust me, and make sure you scroll all the way to the end.)
Ableist, huh?
Well! Hell of a week to decide I’m done writing about AI for a while!
For everyone playing along at home, NaNoWriMo, the nonprofit that grew up around the National Novel Writing Month challenge, has published a new policy on the use of AI, which includes this absolute jaw-dropper:
We also want to be clear in our belief that the categorical condemnation of Artificial Intelligence has classist and ableist undertones, and that questions around the use of AI tie to questions around privilege.
Really? Lack of access to AI is the only reason “the poors” haven’t been able to write books? This is the thing that’s going to improve access for the disabled? It’s so blatantly “we got a payoff, and we’re using lefty language to deflect criticism,” so disingenuous, and in such bad faith, that the only appropriate reaction is “hahahha Fuck You.”
That said, my absolute favorite response was El Sandifer on Bluesky:
"Fucking dare anyone to tell Alan Moore, to his face, that working class writers need AI in order to create."; immediately followed by "“Who the fuck said that I’ll fucking break his skull open” said William Blake in a 2024 seance."
It’s always a mistake to engage with Bad Faith garbage like this, but I did enjoy these attempts:
You Don't Need AI To Write A Novel - Aftermath
NaNoWriMo Shits The Bed On Artificial Intelligence – Chuck Wendig: Terribleminds
There’s something extra hilarious about the grifters getting to NaNoWriMo—the whole point of writing 50,000 words in a month is not that the world needs more unreadable 50k manuscripts, it’s that it’s an excuse to practice; you gotta write 50k bad words before you can get to 50k good ones. Using AI here is literally bringing a robot to the gym to lift weights for you.
If you’re the kind of ghoul that wants to use a robot to write a book for you, that’s one (terrible) thing, but using it to “win” a for-fun contest that exists just to provide a community of support for people trying to practice? That’s beyond despicable.
The NaNoWriMo organization has been a mess for a long time; it’s a classic volunteer-run non-profit where the founders have moved on and the replacements have been… poor. It’s been a scandal engine for a decade now, and they’ve fired everyone and brought in new people at least once? And the fix is clearly in; NaNoWriMo got a new Executive Director this year, and the one thing the “AI” “Industry” has at the moment is gobs of money.
I wonder how small the bribe was. Someone got handed a check, excuse me, a “sponsorship”, and I wonder how embarrassingly, enragingly small the number was.
I mean, any amount would be deeply disgusting, but if it was “all you have to do is sell out the basic principles of the non-profit you’re now in charge of and you can live in luxury for the rest of your life,” that’s still terrible, but at least I would understand. But you know, you know, however much money changed hands was pathetically small.
These are the kind of people who should be hounded out of any functional civilization.
And then I wake up to the news that Oprah is going to host a prime-time special on The AI? Ahhhh, there we go, that’s starting to smell like a Matt Damon Superbowl Ad. From the guest list—Bill Gates?—it’s pretty clearly some high-profile reputation laundering, although I’m sure Oprah got a bigger paycheck than those suckers at NaNoWriMo. I see the discourse has already decayed through a cycle of “should we pre-judge this” (spoiler: yes) and then landed on whether or not there are still “cool” uses for AI. This is such a dishonest deflection that it almost takes my breath away. Whether or not it’s “cool” is literally the least relevant point. Asbestos was pretty cool too, you know?
And Another Thing… AI Postscript
I thought I was done talking about The AI for a while after last week’s “Why is this Happening” trilogy (Part I, Part II, Part III), but The AI wasn’t done with me just yet.
First, in one of those great coincidences, Ted Chiang has a new piece on AI in the New Yorker, Why A.I. Isn’t Going to Make Art (and yeah, that’s behind a paywall, but cough).
It’s nice to know Ted C. and I were having the same week last week! It’s the sort of piece where once you start quoting it’s hard to stop, so I’ll quote the bit everyone else has been:
The task that generative A.I. has been most successful at is lowering our expectations, both of the things we read and of ourselves when we write anything for others to read. It is a fundamentally dehumanizing technology because it treats us as less than what we are: creators and apprehenders of meaning. It reduces the amount of intention in the world.
Intention is something he locks onto here: creative work is about making lots of decisions as you do the work, decisions which can’t be replaced by a statistical average of past decisions by other people.
Second, continuing the weekend of coincidences, the kids and I went to an Anime convention this past weekend. We went to a panel on storyboarding in animation, which was fascinating, because storyboarding doesn’t quite mean the same thing in animation that it does in live-action movies.
At one point, the speaker was talking about a character in a show he had worked on named “Ai”, and specified he meant the name, not the two letters as an abbreviation, and then almost reflexively spat out “I hate A.I.!” through literally gritted teeth.
Reader, the room—which was packed—roared in approval. It was the kind of noise you’d expect to lead to a pitchfork-wielding mob heading towards the castle above town.
Outside of the more galaxy-brained corners of the wreckage of what used to be called twitter or pockets of techbros, real people in the real world hate this stuff. I can’t think of another technology from my lifetime that has ever gotten a room full of people to do that. Nothing that isn’t armed can be successful against that sort of disgust; I think we’re going to be okay.
Why is this Happening, Part III: Investing in Shares of a Stairway to Heaven
We’ve talked a lot about “The AI” here at Icecano, mostly in terms ranging from “unflattering” to “extremely unflattering.” Which is why I’ve found myself stewing on this question the last few months: Why is this happening?
The easy answer is that, for starters, it’s a scam, a con. That goes hand-in-hand with it also being a hype-fueled bubble, which is finally starting to show signs of deflating. We’re not quite at the “Matt Damon in Superbowl ads” phase yet, but I think we’re closer than not to the bubble popping.
Fad-tech bubbles are nothing new in the tech world; in recent memory we had similar grifts around the metaverse, blockchain & “web3”, “quantum”, self-driving cars. (And a whole lot of those bubbles had the same people behind them as the current one around AI. Lots of the same datacenters full of GPUs, too!) I’m also old enough to remember similar bubbles around things like bittorrent, “4GL” languages, two or three cycles on VR, 3D TV.
This one has been different, though. There’s a viciousness to the boosters, a barely contained glee at the idea that this will put people out of work, which has been matched in intensity by the pushback. To put all that another way, when ELIZA came out, no one from MIT openly delighted at the idea that they were about to put all the therapists out of work.
But what is it about this one? Why did this ignite in a way that those others didn’t?
A sentiment I see a lot, as a response to AI skepticism, is something like “no no, this is real, it’s happening.” And the correct response to that is to say that, well, asbestos pajamas really didn’t catch fire, either. And then what happened? Just because AI is “real” doesn’t mean it’s “good”. Those mesothelioma ads aren’t running because asbestos wasn’t real.
(Again, these tend to be the same people who a few years back had a straight face when they said they were “bullish on bitcoin.”)
But I think there’s another sentiment standing behind that one: that this is the “last new tech we’ll see in our careers”. This tends to come from younger Xers & elder Millennials, folks who were just slightly too young to make it rich in the dot com boom, but old enough that they thought they were going to.
I think this one is interesting, because it illuminates part of how things have changed. From the late 70s through sometime in the 00s, new stuff showed up constantly, and more importantly, the new stuff was always better. There’s a joke from the 90s that goes like this: Two teams each developed a piece of software that didn’t run well enough on home computers. The first team spent months sweating blood, working around the clock to improve performance. The second team went and sat on a beach. Then, six months later, both teams bought new computers. And on those new machines, both systems ran great. So who did a better job? Who did a smarter job?
We all got absolutely hooked on the dopamine rush of new stuff, and it’s easy to see why; I mean, there were three extra verses of “We Didn’t Start the Fire” just in the 90s alone.
But a weird side effect is that as a culture of practitioners, we never really learned how to tell if the new thing was better than the old thing. This isn’t a new observation; Microsoft figured out how to weaponize this early on, as Fire And Motion. And I think this has really driven the software industry’s tendency towards “fad-oriented development”—we never built up a herd immunity to shiny new things.
A big part of this, of course, is that the tech press profoundly failed. A completely un-skeptical, overly gullible press, infatuated with shiny gadgets, foisted a whole parade of con artists and scamtech on all of us, abdicating any duty they had to investigate accurately instead of just laundering press releases. The Professionally Surprised.
And for a long while, that was all okay, the occasional CueCat notwithstanding, because new stuff generally was better, and even if it was only marginally better, there was often a lot of money to be made by jumping in early. Maybe not “private island” money, but at least “retire early to the foothills” money.
But then somewhere between the Dot Com Crash and the Great Recession, things slowed down. Those two events didn’t help much, but also somewhere in there “computers” plateaued at “pretty good”. Mobile kept the party going for a while, but then that slowed down too.
My Mom tells a story about being a teenager while the Beatles were around, and how she grew up in a world where every nine months pop music was reinvented, like clockwork. Then the Beatles broke up, the 70s hit, and that all stopped. And she’s pretty open about how much she misses that whole era; the heady “anything can happen” rush. I know the feeling.
If your whole identity and worldview about computers as a profession is wrapped up in diving into a Big New Thing every couple of years, it’s strange to have it settle down a little. To maintain. To have to assess. And so it’s easy to find yourself grasping for what the Next Thing is, to try and get back that feeling of the whole world constantly reinventing itself.
But missing the heyday of the PC boom isn’t the reason that AI took off. It does, however, provide a pretty good set of excuses to cover the real reasons.
Is there a difference between “The AI” and “Robots?” I think, broadly, the answer is “no;” but they’re different lenses on the same idea. There is an interesting difference between “robot” (we built it to sit outside in the back seat of the spaceship and fix engines while getting shot at) and “the AI” (write my email for me), but that’s more about evolving stories about which is the stuff that sucks than a deep philosophical difference.
There’s a “creative” vs “mechanical” difference too. If we could build an artificial person like C-3PO I’m not sure that having it wash dishes would be the best or most appropriate possible use, but I like that as an example because, rounding to the nearest significant digit, that’s an activity no one enjoys, and as an activity it’s not exactly a hotbed of innovative new techniques. It’s the sort of chore it would be great if you could just hand off to someone. I joke this is one of the main reasons to have kids, so you can trick them into doing chores for you.
However, once “robots” went all-digital and became “the AI”, they started having access to this creative space instead of the physical-mechanical one, and the whole field backed into a moral hazard I’m not sure they noticed ahead of time.
There’s a world of difference between “better clone stamp in photoshop” and “look, we automatically made an entire website full of fake recipes to farm ad clicks”; and it turns out there’s this weird grifter class that can’t tell the difference.
Gesturing back at a century of science fiction thought experiments about robots, being able to make creative art of any kind was nearly always treated as an indicator that the robot wasn’t just “a robot.” I’ll single out Asimov’s Bicentennial Man as an early representative example—the titular robot learns how to make art, and this both causes the manufacturer to redesign future robots to prevent this happening again, and sets him on a path towards trying to be a “real person.”
We make fun of the Torment Nexus a lot, but it keeps happening—techbros keep misunderstanding the point behind the fiction they grew up on.
Unless I’m hugely misinformed, there isn’t a mass of people clamoring to wash dishes, kids don’t grow up fantasizing about a future in vacuuming. Conversely, it’s not like there’s a shortage of people who want to make a living writing, making art, doing journalism, being creative. The market is flooded with people desperate to make a living doing the fun part. So why did people who would never do that work decide that was the stuff that sucked and needed to be automated away?
So, finally: why?
I think there are several causes, all tangled.
These causes are adjacent to but not the same as the root causes of the greater enshittification—excuse me, “Platform Decay”—of the web. Nor are we talking about the largely orthogonal reasons why Facebook is full of old people being fooled by obvious AI glop. We’re interested in why the people making these AI tools are making them. Why they decided that this was the stuff that sucked.
First, we have this weird cultural stew where creative jobs are “desired” but not “desirable”. There’s a lot of cultural cachet around being a “creator” or having a “creative” job, but not a lot of respect for the people actually doing them. So you get the thing where people oppose the writers’ strike because they “need” a steady supply of TV, but the people who make it don’t deserve a living wage.
Graeber has a whole bit adjacent to this in Bullshit Jobs. Quoting the originating essay:
It's even clearer in the US, where Republicans have had remarkable success mobilizing resentment against school teachers, or auto workers (and not, significantly, against the school administrators or auto industry managers who actually cause the problems) for their supposedly bloated wages and benefits. It's as if they are being told ‘but you get to teach children! Or make cars! You get to have real jobs! And on top of that you have the nerve to also expect middle-class pensions and health care?’
“I made this” has cultural power. “I wrote a book,” “I made a movie,” are the sort of things you can say at a party that get people to perk up; “oh really? Tell me more!”
Add to this thirty-plus years of pressure to restructure public education around “STEM”, because those are the “real” and “valuable” skills that lead to “good jobs”, as if the only point of education was as a job training program. A very narrow job training program, because again, we need those TV shows but don’t care to support new people learning how to make them.
There’s always a class of people who think they should be able to buy anything; any skill someone else has acquired is something they should be able to purchase. This feels like a place I could put several paragraphs that use the word “neoliberalism” and then quote from Ayn Rand, The Incredibles, or Led Zeppelin lyrics depending on the vibe I was going for, but instead I’m just going to say “you know, the kind of people who only bought the Cliffs Notes, never the real book,” and trust you know what I mean. The kind of people who never learned the difference between “productivity hacks” and “cheating”.
The sort of people who only interact with books as a source of isolated nuggets of information, the kind of people who look at a pile of books and say something like “I wish I had access to all that information,” instead of “I want to read those.”
People who think money should count at least as much as, if not more than, social skills or talent.
On top of all that, we have the financialization of everything. Hobbies for their own sake are not acceptable, everything has to be a side hustle. How can I use this to make money? Why is this worth doing if I can’t do it well enough to sell it? Is there a bootcamp? A video tutorial? How fast can I start making money at this?
Finally, and critically, I think there’s a large mass of people working in software that don’t like their jobs and aren’t that great at them. I can’t speak for other industries first hand, but the tech world is full of folks who really don’t like their jobs, but they really like the money and being able to pretend they’re the masters of the universe.
All things considered, “making computers do things” is a pretty great gig. In the world of Professional Careers, software sits at the sweet spot of “amount you actually have to know & how much school you really need” vs “how much you get paid”.
I’ve said many times that I feel very fortunate that the thing I got super interested in when I was twelve happened to turn into a fully functional career when I hit my twenties. Not everyone gets that! And more importantly, there are a lot of people making those computers do things who didn’t get super interested in computers when they were twelve, because the thing they got super interested in doesn’t pay for a mortgage.
Look, if you need a good job, and maybe aren’t really interested in anything specific, or at least in anything that people will pay for, “computers”—or computer-adjacent—is a pretty sweet direction for your parents to point you. I’ve worked with more of these than I can count—developers, designers, architects, product people, project managers, middle managers—and most of them are perfectly fine people, doing a job they’re a little bored by, and then they go home and do something that they can actually self-actualize about. And I suspect this is true for a lot of “sit down inside email jobs,” that there’s a large mass of people whose job, in a just universe, would be “beach” or “guitar” or “games”, but instead they gotta help knock out front-end web code for a mid-list insurance company. Probably most careers are like that: there’s the one accountant who loves it, and then a couple of other guys counting down the hours until their band’s next unpaid gig.
But one of the things that makes computers stand out is that those accountants all had to get certified. The computer guys just needed a bootcamp and a couple weekends worth of video tutorials, and suddenly they get to put “Engineer” on their resume.
And let’s be honest: software should be creative, usually is marketed as such, but frequently isn’t. We like to talk about software development as if it’s nothing but innovation and “putting a dent in the universe”, but the real day-to-day is pulling another underwritten story off the backlog that claims to be easy but is going to take a whole week to write one more DTO, or web UI widget, or RESTful API that’s almost, but not quite, entirely unlike the last dozen of those. Another user-submitted bug caused by someone doing something stupid that the code that got written badly and shipped early couldn’t handle. Another change to government regulations that’s going to cause a remodel of the guts of this thing, which somehow manages to be a surprise despite the fact the law was passed before anyone in this meeting even started working here.
They don’t have time to learn how that regulation works, or why it changed, or how the data objects were supposed to happen, or what the right way to do that UI widget is—the story is only three points, get it out the door or our velocity will slip!—so they find something they can copy, slap something together, write a test that passes, ship it. Move on to the next. Peel another one off the backlog. Keep that going. Forever.
And that also leads to this weird thing software has where everyone is just kind of bluffing everyone all the time, or at least until they can go look something up on stack overflow. No one really understands anything, just gotta keep the feature factory humming.
The people who actually like this stuff, who got into it because they liked making computers do things for their own sake, keep finding ways to make it fun, or at least different. “Continuous Improvement,” we call it. Or, you know, they move on, leaving behind all those people whose twelve-year-old selves would be horrified.
But then there’s the group that’s in the center of the Venn Diagram of everything above. All this mixes together, and in a certain kind of reduced-empathy individual, manifests as a fundamental disbelief in craft as a concept. Deep down, they really don’t believe expertise exists. That “expertise” and “bias” are synonyms. They look at people who are “good” at their jobs, who seem “satisfied”, and are jealous of how well that person is executing the con.
Whatever they were into at twelve didn’t turn into a career, and they learned the wrong lesson from that. The kind of people who were in a band as a teenager and then spent the years since as a management consultant, and think the only problem with that is that they ever wanted to be in a band, instead of being mad that society has more open positions for management consultants than bass players.
They know which is the stuff that sucks: everything. None of this is the fun part; the fun part doesn’t even exist; that was a lie they believed as a kid. So they keep trying to build things where they don’t have to do their jobs anymore but still get paid gobs of money.
They dislike their jobs so much, they can’t believe anyone else likes theirs. They don’t believe expertise or skill is real, because they have none. They think everything is a con because that’s what they do. Anything you can’t just buy must be a trick of some kind.
(Yeah, the trick is called “practice”.)
These aren’t people who think that critically about their own field, which is another thing that happens when you value STEM over everything else, and forget to teach people ethics and critical thinking.
Really, all they want to be are “Idea Guys”, tossing off half-baked concepts and surrounded by people they don’t have to respect and who won’t talk back, who will figure out how to make a functional version of their ill-formed ramblings. That they can take credit for.
And this gets to the heart of what’s so evil about the current crop of AI.
These aren’t tools built by the people who do the work to automate the boring parts of their own work; these are built by people who don’t value creative work at all and want to be rid of it.
As a point of comparison, the iPod was clearly made by people who listened to a lot of music and wanted a better way to do so. Apple has always been unique in the tech space in that it works more like a consumer electronics company: the vast majority of its products are clearly made by people who would themselves be an enthusiastic customer. In this field we talk about “eating your own dog-food” a lot, but if you’re writing a claims processing system for an insurance company, there’s only so far you can go. Making a better digital music player? That lets you think different.
But no: AI is all being built by people who don’t create, who resent having to create, who resent having to hire people who can create. Beyond even “I should be able to buy expertise” and into “I value this so little that I don’t even recognize this as a real skill”.
One of the first things these people tried to automate away was writing code—their own jobs. These people respect skill, expertise, craft so little that they don’t even respect their own. They dislike their jobs so much, and respect their own skills so little, that they can’t imagine that someone might not feel that way about their own.
A common pattern has been how surprised the techbros have been at the pushback. One of the funnier (in a laugh so you don’t cry way) sideshows is the way the techbros keep going “look, you don’t have to write anymore!” and every writer everywhere is all “ummmmm, I write because I like it, why would I want to stop” and then it just cuts back and forth between the two groups saying “what?” louder and angrier.
We’re really starting to pay for the fact that our civilization spent 20-plus years shoving kids that didn’t like programming into the career because it paid well and you could do it sitting down inside and didn’t have to be that great at it.
What future are they building for themselves? What future do they expect to live in, with this bold AI-powered utopia? Some vague middle-management “Idea Guy” economy, with the worst people in the world summoning books and art and movies out of thin air for no one to read or look at or watch, because everyone else is doing the same thing? A web full of AI slop made by and for robots trying to trick each other? Meanwhile the dishes are piling up? That’s the utopia?
I’m not sure they even know what they want, they just want to stop doing the stuff that sucks.
And I think that’s our way out of this.
What do we do?
For starters, AI Companies need to be regulated, preferably out of existence. There’s a flavor of libertarian-leaning engineer that likes to say things like “code is law,” but actually, turns out “law” is law. There’s whole swathes of this that we as a civilization should have no tolerance for; maybe not to a full Butlerian Jihad, but at least enough to send deepfakes back to the Abyss. We dealt with CFCs and asbestos, we can deal with this.
Education needs to be less STEM-focused. We need to carve out more career paths (not “jobs”, not “gigs”, “careers”) that have the benefits of tech but aren’t tech. And we need to furiously defend and expand spaces for creative work to flourish. And for that work to get paid.
But those are broad, society-wide changes. What can those of us in the tech world actually do? How can we help solve these problems in our own little corners? What can we go into work tomorrow and actually do?
It’s on all of us in the tech world to make sure there’s less of the stuff that sucks.
We can’t do much about the lack of jobs for dance majors, but we can help make sure those people don’t stop believing in skill as a concept. Instead of assuming what we think sucks is what everyone thinks sucks, is there a way to make it not suck? Is there a way to find a person who doesn’t think it sucks? (And no, I don’t mean “Uber for writing my emails”) We gotta invite people in and make sure they see the fun part.
The actual practice of software has become deeply dehumanizing. None of what I just spent a week describing is the result of healthy people working in a field they enjoy, doing work they value. This is the challenge we have before us: how can we change course so that the tech industry doesn’t breed this? Those of us that got lucky at twelve need to find new ways to bring along the people who didn’t.
With that in mind, next Friday on Icecano we start a new series on growing better software.
Several people provided invaluable feedback on earlier iterations of this material; you all know who you are and thank you.
And as a final note, I’d like to personally apologize to the one person who I know for sure clicked Open in New Tab on every single link. Sorry man, they’re good tabs!
Why is this Happening, Part II: Letting Computers Do The Fun Part
Previously: Part I
Let’s leave the Stuff that Sucks aside for the moment, and ask a different question. Which Part is the Fun Part? What are we going to do with this time the robots have freed up for us?
It’s easy to get wrapped up in pointing at the parts of living that suck; especially when fantasizing about assigning work to C-3PO’s cousin. And it’s easy to spiral to a place where you just start waving your hands around at everything.
But even Bertie Wooster had things he enjoyed, that he occasionally got paid for, rather than let Jeeves work his jaw for him.
So it’s worth recalibrating for a moment: which are the fun parts?
As aggravating as it can be at times, I do actually like making computers do things. I like programming, I like designing software, I like building systems. I like finding clever solutions to problems. I got into this career on purpose. If it was fun all the time they wouldn’t have to call it “work”, but it’s fun a whole lot of the time.
I like writing (obviously.) For me, that dovetails pretty nicely with liking to design software; I’m generally the guy who ends up writing specs or design docs. It’s fun! I owned the customer-facing documentation several jobs back. It was fun!
I like to draw! I’m not great at it, but I’m also not trying to make a living out of it. I think having hobbies you enjoy but aren’t great at is a good thing. Not every skill needs to have a direct line to a career or a side hustle. Draw goofy robots to make your kids laugh! You don’t need to figure out a monetization strategy.
In my “outside of work” life I think I know more writers and artists than programmers. For all of them, the work itself—the writing, the drawing, the music, making the movie—is the fun part. The parts they don’t like so well are the “figuring out how to get paid” part, or the dealing with printers part, or the weird contracts part. The hustle. Or, you know, the doing dishes, laundry, and vacuuming part. The “chores” part.
So every time I see a new “AI tool” release that writes text or generates images or makes video, I always ask the same question:
Why would I let the computer do the fun part?
The writing is the fun part! The drawing pictures is the fun part! Writing the computer programs is the fun part! Why, why, are they trying to tell us that those are the parts that suck?
Why are the techbros trying to automate away the work people want to do?
It’s fun, and I worked hard to get good at it! Now they want me to let a robot do it?
Generative AI only seems impressive if you’ve never successfully created anything. Part of what makes “AI art” so enragingly radicalizing is the sight of someone who’s never tried to create something before, never studied, never practiced, never put the time in, never really even thought about it, joylessly showing off the terrible AI slop they made and demanding to be treated as if they made it themselves, rather than as someone who used a tool built on the fruits of a million million stolen works.
Inspiration and plagiarism are not the same thing, the same way that “building a statistical model of word order probability from stuff we downloaded from the web” is not the same as “learning”. A plagiarism machine is not an artist.
But no, the really enraging part is watching these people show off this garbage and realizing that they can’t tell the difference. And AI art seems to be getting worse, AI pictures are getting easier to spot, not harder, because of course they are, because the people making the systems don’t know what good is. And the culture is following: “it looks like AI made it” has become the exact opposite of a compliment. AI-generated glop is seen as tacky, low quality. And more importantly, seen as cheap, made by someone who wasn’t willing to spend any money on the real thing. Trying to pass off Krusty Burgers as their own cooking.
These are people with absolutely no taste, and I don’t mean people who don’t have a favorite Kurosawa film, I mean people who order a $50 steak well done and then drown it in A1 sauce. The kind of people who, deep down, don’t believe “good” is real. That it’s all just “marketing.”
The act of creation is inherently valuable; creation is an act that changes the creator as much as anyone. Writing things down isn’t just documentation, it’s a process that allows and enables the writer to discover what they think, explore how they actually feel.
“Having AI write that for you is like having a robot lift weights for you.”
AI writing is deeply dehumanizing, to both the person who prompted it and to the reader. There is so much weird stuff to unpack from someone saying, in what appears to be total sincerity, that they used AI to write a book. That the part they thought sucked was the fun part, the writing, and left their time free for… what? Marketing? Uploading metadata to Amazon? If you don’t want to write, why do you want people to call you a writer?
Why on earth would I want to read something the author couldn’t be bothered to write? Do these ghouls really just want the social credit for being “an artist”? Who are they trying to impress, what new parties do they think they’re going to get into because they have a self-published AI-written book with their name on it? Talk about participation trophies.
All the people I know in real life or follow on the feeds who use computers to do their thing but don’t consider themselves “computer people” have reacted with a strong and consistent full-body disgust. Personally, compared to all those past bubbles, this is the first tech I’ve ever encountered where my reaction was complete revulsion.
Meanwhile, many (not all) of the “computer people” in my orbit tend to be at-least AI curious, lots of hedging like “it’s useful in some cases” or “it’s inevitable” or full-blown enthusiasm.
One side, “absolutely not”, the other side, “well, mayyybe?” As a point of reference, this was the exact breakdown of how these same people reacted to blockchain and bitcoin.
One group looks at the other and sees people musing about if the face-eating leopard has some good points. The other group looks at the first and sees a bunch of neo-luddites. Of course, the correct reaction to that is “you’re absolutely correct, but not for the reasons you think.”
There’s a Douglas Adams bit that gets quoted a lot lately, which was printed in Salmon of Doubt but I think was around before that:
I’ve come up with a set of rules that describe our reactions to technologies:
Anything that is in the world when you’re born is normal and ordinary and is just a natural part of the way the world works.
Anything that’s invented between when you’re fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it.
Anything invented after you’re thirty-five is against the natural order of things.
The better-read AI-grifters keep pointing at rule 3. But I keep thinking of the bit from Dirk Gently’s Detective Agency about the Electric Monk:
The Electric Monk was a labour-saving device, like a dishwasher or a video recorder. Dishwashers washed tedious dishes for you, thus saving you the bother of washing them yourself, video recorders watched tedious television for you, thus saving you the bother of looking at it yourself; Electric Monks believed things for you, thus saving you what was becoming an increasingly onerous task, that of believing all the things the world expected you to believe.
So, what are the people who own the Monks doing, then?
Let’s speak plainly for a moment—the tech industry has always had a certain… ethical flexibility. The “things” in “move fast and break things” wasn’t talking about furniture or fancy vases, this isn’t just playing baseball inside the house. And this has been true for a long time; the Open Letter to Hobbyists was basically Gates complaining that other people’s theft was undermining the con he was running.
We all liked to pretend “disruption” was about finding “market inefficiencies” or whatever, but mostly what that meant was moving into a market where the incumbents were regulated and labor had legal protection and finding a way to do business there while ignoring the rules. Only a psychopath thinks “having to pay employees” is an “inefficiency.”
Vast chunks of what it takes to make generative AI possible are already illegal or at least highly unethical. The Internet has always been governed by a sort of combination of gentleman’s agreements and pirate codes, and in the hunger for new training data, the AI companies have sucked up everything, copyright, licensing, and good neighborship be damned.
There are some half-hearted attempts to combat AI via arguments that it violates copyright or open source licensing or some other legal approach. And more power to them! Personally, I’m not really interested in the argument that the AI training data violates contract law, because I care more about the fact that it’s deeply immoral. See that Vonnegut line about “those who devised means of getting paid enormously for committing crimes against which no laws had been passed.” People who drive too fast in front of schools should get a ticket, sure, but I’m not opposed to speeding because it’s illegal, I’m opposed because it’s dangerous to the kids.
It’s been pointed out more than once that AI breaks the deal behind webcrawlers and search—search engines are allowed to suck up everyone’s content in exchange for sending traffic their way. But AI just takes and regurgitates, without sharing the traffic, or even the credit. It’s the AI Search Doomsday Cult. Even Uber didn’t try to put car manufacturers out of business.
But beyond all that, making things is fun! Making things for other people is fun! It’s about making a connection between people, not about formal correctness or commercial viability. And then you see those terrible google fan letter ads at the olympics, or see people crowing that they used AI to generate a kids book for their children, and you wonder, how can these people have so little regard for their audience that they don’t want to make the connection themselves? That they’d rather give their kids something barfed out by a jumped-up spreadsheet full of stolen words instead of something they made themselves? Why pass on the fun part, just so you can take credit for something thoughtless and tacky? The AI ads want you to believe that you need their help to find “the right word”; what they don’t tell you is that no, you don’t, what you need to do is have fun finding your word.
Robots turned out to be hard. Actually, properly hard. You can read these papers by computer researchers in the fifties where they’re pretty sure Threepio-style robot butlers are only 20 years away, which seems laughable now. Robots are the kind of hard where the more we learn the harder they seem.
As an example: Doctor Who in the early 80s added a robot character who was played by the prototype of an actual robot. This went about as poorly as you might imagine. That’s unthinkable now; no producer would risk their production on a homemade robot today, no matter how impressive the demo was. You want a thing that looks like Threepio walking around and talking with a voice like a Transformer? Put a guy in a suit. Actors are much easier to work with. Even though they have a union.
Similarly, “General AI” in the HAL/KITT/Threepio sense has been permanently 20 years in the future for at least 70 years now. The AI class I took in the 90s was essentially a survey of things that hadn’t worked, and ended with a kind of shrug and “maybe another 20?”
Humans are really, really good at seeing faces in things, and finding patterns that aren’t there. Any halfway decent professional programmer can whip up an ELIZA clone in an afternoon, and even knowing how the trick works it “feels” smarter than it is. A lot of AI research projects are like that, a sleight-of-hand trick that depends on doing a lot of math quickly and on the human capacity to anthropomorphize. And then the self-described brightest minds of our generation fail the mirror test over and over.
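If you’ve never seen the trick up close, here’s a minimal sketch of the ELIZA idea—not Weizenbaum’s actual rules, just the shape of them: a lookup table of regex patterns and canned replies, plus a little pronoun-swapping to sell the illusion.

```python
import re

# The entire "intelligence" is a lookup table of regex patterns and canned replies.
RULES = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r".*\bmother\b.*", "Tell me more about your family."),
    (r".*", "Please, go on."),  # catch-all keeps the conversation moving
]

# Swapping pronouns makes the echoed fragment sound like a reply, not a parrot.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I", "your": "my"}

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def respond(text: str) -> str:
    for pattern, template in RULES:
        match = re.match(pattern, text, re.IGNORECASE)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))

print(respond("I am feeling ignored by the robots"))
# -> How long have you been feeling ignored by the robots?
```

That’s it. That’s the whole program, and people in the 60s poured their hearts out to it.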
Actually building a thing that can “think”? Increasingly seems impossible.
You know what’s easy, though, comparatively speaking? Building a statistical model of all the text you can pull off the web.
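At its most reductive—and to be clear, the real systems are enormously more elaborate than this toy—“a statistical model of word order” looks something like the following sketch: count which words follow which, then walk the table.

```python
import random
from collections import defaultdict

def train(text: str) -> dict:
    """For each word, record every word that ever followed it. That's the whole model."""
    model = defaultdict(list)
    words = text.split()
    for current, following in zip(words, words[1:]):
        model[current].append(following)
    return model

def generate(model: dict, start: str, length: int = 12) -> str:
    """Walk the table, picking each next word in proportion to how often
    it followed the previous word in the training text."""
    output = [start]
    for _ in range(length):
        followers = model.get(output[-1])
        if not followers:
            break
        output.append(random.choice(followers))
    return " ".join(output)

corpus = "the fun part is the writing and the writing is the fun part"
print(generate(train(corpus), "the"))
```

Scale the table up by a few trillion words and a few billion parameters and you have the modern product category.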
On Friday: conclusions, such as they are.
This Adam Savage Video
The YouTube algorithm has decided that what I really want to watch are Adam Savage videos, and it turns out the robots are occasionally right? So, I’d like to draw your attention to this vid where Adam answers some user questions: Were Any Myths Deemed Too Simple to Test on MythBusters?
It quickly veers moderately off-topic, and gets into the weeds on what kinds of topics MythBusters tackled and why. You should go watch it, but the upshot is that MythBusters never wanted to invite someone on just to make them look bad or do a gotcha, so there was a whole class of “debunking” topics they didn’t have a way in on; the example Adam cites is dowsing, because there’s no way to do an episode busting dowsing without having a dowser on to debunk.
And this instantly made clear to me why I loved MythBusters but couldn’t stand Penn & Teller’s Bullshit!. The P&T Show was pretty much an extended exercise in “Look at this Asshole”, and was usually happy to stop there. MythBusters was never interested in looking at assholes.
And, speaking of Adam Savage, did I ever link to the new Bobby Fingers?
This is relevant because it’s a collaboration with Adam Savage, and the Slow Mo Guys, who also posted their own videos on the topic:
Shooting Ballistic Gel Birds at Silicone Fabio with @bobbyfingers and @theslowmoguys!
75mph Bird to the Face with Adam Savage (@tested) and @bobbyfingers - The Slow Mo Guys
It’s like a youtube channel Rashomon, it’s great.
Cognitive Surplus
I finally dug up a piece that has been living rent-free in my head for sixteen years:
Gin, Television, and Social Surplus - Here Comes Everybody:
I was recently reminded of some reading I did in college, way back in the last century, by a British historian arguing that the critical technology, for the early phase of the industrial revolution, was gin.
The transformation from rural to urban life was so sudden, and so wrenching, that the only thing society could do to manage was to drink itself into a stupor for a generation. The stories from that era are amazing--there were gin pushcarts working their way through the streets of London.
...
This was a talk Clay Shirky gave in 2008, and the transcript lived on his website for a long time but eventually rotted away. I stumbled across an old bookmark the other day, and realized I could use that to dig it out of the Internet Archive’s Wayback Machine, so here we are!
A couple years later, Shirky turned this talk into a full book called Cognitive Surplus: How Technology Makes Consumers into Collaborators, which is presumably why the prototype vanished off his website. The book is okay, but it’s a classic example of an idea that really only needs about 20 pages at the most getting blown out to 200 because there’s a market for “books”, but not “long blog posts.” The original talk is much better, and I’m glad to find it again.
The core idea has stuck with me all this time: that social and technological advances free up people’s time and create a surplus of cognitive and social energy, and then new “products” emerge to soak that surplus up so that people don’t actually use it for anything dangerous or disruptive. Shirky’s two examples are Gin and TV Sitcoms; this has been in my mind more than usual of late as people argue about superhero movies and talk about movies as “escapism” in the exact same terms you’d use to talk about drinking yourself into a stupor.
Something I talk about a lot in my actual day job is “cognitive bandwidth,” largely inspired by this talk; what are we filling our bandwidth with, and how can we use less, how can we create a surplus.
And, in all aspects of our lives, how can we be mindful about what we spend that surplus on.
Salmon of Doubt by Douglas Adams (2002)
There are multiple interlocking tragedies of Douglas Adams’ death—not the least of which is the fact that he died at all. But also he passed at what appeared to be the end of a decade-long career slump—well, not slump exactly, but a decade where he seemed to spend his time being very, very irritated at the career he’d accidentally found.
After he died unexpectedly in May of 2001 at 49, his publisher rushed out a collection of previously unpublished work called Salmon of Doubt. It’s a weird book—a book that only could have happened under the exact circumstances that it did, scrambled out to take advantage of the situation, part collection, part funeral.
Douglas Adams is, by far, the writer who’s had the biggest influence on my own work, and it’s not even close. I’m not even sure who would be number two? Ursula LeGuin, probably? But that’s a pretty distant second place—The Hitchhiker’s Guide to the Galaxy is the first “grown-up” book I ever read on my own, which is sort of my secret origin story.
As such I gulped Salmon down the instant it came out in 2002, and hadn’t read it since. There was a bit I vaguely remembered that I wanted to quote in something else I was working on, so I’ve recently bought a new copy, as my original one has disappeared over the years. (Actually, I’m pretty sure I know exactly what happened to it, but it’s a minor footnote in a larger, more depressing story, so let’s draw a veil across it and pretend that it was pilfered by elves.)
Re-reading the book decades later, two things are very obvious:
First, Adams would never have let a book like this happen while he was alive. It’s self-indulgent in exactly the way he never was, badly organized, clearly rushed. I mean, the three main sections are “Life”, “The Universe”, and “And Everything”, which in addition to being obvious to the point of being tacky, is an absolutely terrible table of contents because there’s no rhyme or reason why one item is in one section versus another.
Second, a book like this should have happened years before. There was so much stuff Adams wrote—magazine articles, newspaper columns, bits and bobs on the internet—that a non-fiction essay collection–style book was long overdue.
This book is weird for other reasons, including that a bunch of other people show up and try to be funny. It’s been remarked more than once that no other generally good writer has inspired more bad writing than Douglas Adams, and the other contributions to this book are a perfect example. The copy I have now is the US paperback, with a “new introduction” by Terry Jones—yes, of Monty Python—which might be the least funny thing I’ve ever read, not just unfunny but actively anti-funny, the humor equivalent of anti-matter. The other introductions are less abrasive, but badly misjudge the audience’s tolerance for a low-skill pastiche at the start of what amounts to a memorial service.
The main selling point here is the unfinished 3rd Dirk Gently novel, which may or may not have actually been the unfinished 6th Hitchhiker’s Guide to the Galaxy novel. However, that only takes up about 80 pages of a 290-page book; by my math that’s a hair over a quarter, which is a little underwhelming. It’s clear the goal was to take whatever the raw material looked like and edit it into something reasonably coherent and readable, which it is. But even at the time, it felt like heavily-edited “grit-out-of-the-spigot” early drafts rather than an actual unfinished book; I’d be willing to bet a fiver that if Adams had lived to finish whatever that book turned into, none of the text here would have been in it. As more unfinished pieces have leaked out over the years, such as the excerpts in 42: The Wildly Improbable Ideas of Douglas Adams, it’s clear that there was a lot more than made it into Salmon, and while less “complete”, that other stuff was a lot more interesting. As an example, the excerpts from Salmon in 42 include some passages from one of the magazine articles collected here, except in the context of the novel instead of Adams himself on a trip? What’s the story there? Which came first? Which way did that recycling go? Both volumes are frustratingly silent.
It’s those non-novel parts that are actually good, though. That magazine article is casually one of the best bits of travel writing I’ve ever read, there’s some really insightful bits about computers and technology, a couple of jokes that I’ve been quoting for years having forgotten they weren’t in Hitchhiker proper. The organization, and the rushed nature of the compilation, make these frustrating, because there will be an absolutely killer paragraph sitting on its own, with no context: where did this come from? Under what circumstances was this written? Similarly for the magazine articles, newspaper columns, excerpts from (I assume) his website; there’s no context or dates or backstory, the kinds of things you’d hope for in a collection like this. Most of them seem to date to “the 90s” from context clues, but it’s hard to say where exactly all these things fit in.
But most of what really makes the book so weird is how fundamentally weird Adams’ career itself was in the last decade of his life.
In a classic example of working for years to become an overnight success, Adams had a remarkably busy period from 1978–1984, which included (deep breath) two series of the Hitchhiker radio show, a revised script for the album version of the first series, a Doctor Who episode, a stint as Doctor Who’s script editor during which he wrote two more episodes—one of which was the single best episode of the old show—and heavily rewrote several others, the TV adaptation of Hitchhiker which was similar but not identical to the first radio series, the third Hitchhiker novel based (loosely) on a rejected pitch for yet another Doctor Who, and ending in 1984 with the near simultaneous release of the fourth Hitchhiker novel and the Infocom text adventure based on the first.
(In a lot of ways, HHGG makes more sense if you remember that it happened in the shadow of his work for Doctor Who; more than anything it functions as a satire of the older program, the Galaxy Quest to Who’s Star Trek, if you will. Ford is the Doctor if he just wanted to go to a party, Arthur is a Doctor Who companion who doesn’t want to be there and argues back, and in the radio show at least, The Heart of Gold operates almost exactly like the Tardis. If you’ll forgive the reference, I’ve always found it improbable that Hitchhiker found its greatest success in America at a time when Who was barely known.)
After all that, to steal a line from his own work, “he went into a bit of a decline.”
Somewhere in there he also became immensely rich, and it’s worth remembering for the rest of this story that somewhere in the very early 80s Adams crossed the line of “never needs to work again.”
Those last two projects in 1984 are worth spending an extra beat on. It’s not exactly a secret that Adams actually had very little to do with the Hitchhiker game other than the initial kickoff, and that the vast majority of the writing and the puzzles were Steve Meretzky doing an impeccable Adams impression. (See The Digital Antiquarian’s Douglas Adams, The Computerized Hitchhiker’s, and Hitchhiking the Galaxy Infocom-Style for more on how all that happened.)
Meanwhile, the novel So Long and Thanks for All The Fish kicks off what I think of as his middle period. It’s not really a SF comedy, it’s a magical realism romance novel that just happens to star the main character from Hitchhiker. It wasn’t super well received. It’s also my personal favorite? You get the feeling that’s the sort of direction he wanted to move in, not just recycling the same riffs from a decade earlier. There’s a real sense of his growth as an author. It also ties up the Hitchhiker series with a perfect ending.
Then a couple more things happen. Infocom had a contract for up to six Hitchhiker games, and they really, really wanted to make at least a second. Adams, however, had a different idea for a game, which resulted in Infocom’s loved-by-nobody Bureaucracy, which again, Adams largely had nothing to do with beyond the concept, with a different set of folks stepping in to finish the project. (Again, see Bureaucracy at The Digital Antiquarian for the gory details.)
Meanwhile, he had landed a two book deal for two “non-Hitchhiker books”, which resulted in the pair of Dirk Gently novels, of which exactly one is good.
The first, Dirk Gently’s Holistic Detective Agency, is probably his best novel. It reworks a couple of ideas from those late 70s Doctor Whos but remixed in interesting ways. The writing is just better, better characters, funnier, subtler jokes, a time-travel murder-mystery plot that clicks together like a swiss watch around a Samuel Coleridge poem and a sofa. It’s incredible.
The second Dirk Gently book, Long Dark Teatime of the Soul, is a terrible book, full stop, and I would describe it as one of the most angry, bitter, nihilistic books I’ve ever read, except I’ve also read Mostly Harmless, the final Hitchhiker book. Both of those books drip with the voice of an author that clearly really, really doesn’t want to be doing what he’s doing.
(I’m convinced Gaiman’s American Gods is a direct riposte to the bleak and depressing Teatime.)
The two Dirk books came out in ’87 and ’88, the only time he turned a book around that fast. (Pin that.) After wrapping up the Dirk contract, he went and wrote Last Chance to See, his best book period, out in 1990.
Which brings us back around to the book nominally at hand—Salmon of Doubt. The unfinished work published here claims to be a potential third Dirk novel, and frankly, it’s hard to believe that was ever seriously under consideration. Because, look, the Gently contract was for two books, neither of which did all that well. According to the intro of this compilation, the first files for Salmon date to ’93, and he clearly noodled on and around that for a decade. That book was never actually going to be finished. If there was desire for a 3rd Gently novel, they would have sat him down and forced him to finish it in ’94. Instead, they locked him in a room and got Mostly Harmless.
There’s a longstanding rumor that Mostly Harmless was largely ghostwritten, and it’s hard to argue. It’s very different from his other works, mean, bad-tempered, vicious towards its characters in a way his other works aren’t. Except it has a lot in common with Bureaucracy which was largely finished by someone else. And, it has to be said, both of those have a very similar voice to the equally mean and bad-tempered Teatime. This gets extra suspicious when you consider the unprecedented-for-him turnaround time on Teatime. It’s hard to know how much stock to put into that rumor mill, since Adams didn’t write anything after that we can compare them to—except Last Chance which is in a completely different mood and in the same style as his earlier, better work. Late period style or ghostwriter? The only person alive who still knows hasn’t piped up on the subject.
Personally? I’m inclined to believe that Dirk Gently’s Holistic Detective Agency was the last novel he wrote on his own, and that his contributions to both Teatime and Mostly Harmless were a sketch of an outline and some jokes. Which all, frankly, makes his work—or approximation thereof—over the course of the 90s even stranger.
In one of the great moments of synchronicity, while I was working on this, the Digital Antiquarian published a piece on Adams’ late period, and specifically the absolute mess of the Starship Titanic computer game, so rather than me covering the same ground, you should pause here and go read The Later Years of Douglas Adams. But the upshot is he spent a lot of time doing not very much of anything, and spawning at least two projects pawned off on others to finish.
After the garbage fire of Starship Titanic and then the strangely prescient h2g2—which mostly failed when it choked out on the reams of unreadable prose that resulted from a horde of fans trying and failing to write wikipedia in the style of Adams’ guide entries—there was a distinct vibe shift. Whereas interviews with him in the mid 90s tended to have him say things like “I accidentally wrote a best-selling novel” and indicate a general dislike of novel writing as a profession, there seemed to be a thaw, a sense that maybe after a decade-plus resenting his found career, maybe he was ready to accept it and lean back in.
And then he died in the gym at 49.
One of the many maddening things about his death is that we never got to see what his late style would have looked like. His last two good books provide a hint of where he was heading.
And that’s the real value of Salmon of Doubt—the theoretical novel contained within would never have been finished in that form, the rest of the content is largely comprised of articles or blog posts or other trivialities, but it’s the only glimpse of what “Late Adams” would have looked like that we’ll ever get.
As a point of comparison, let’s continue getting side-tracked and talk about the guy who succeeded Adams as “the satirical genre writer beloved by nerds,” Terry Pratchett. Pratchett started writing novels about the same time Adams did, but as the saying goes, put the amount of energy into writing books that Adams spent avoiding writing them. He also, you know, lived longer, despite also dying younger than he should have. Even if we just scope down to Discworld, Pratchett wrote 40 novels, 28 of which were while Adams was also alive and working. Good Omens, his collaboration with Neil Gaiman, which is Discworld-adjacent at least, came out in 1990, and serves as a useful piece of temporal geography; that book is solidly still operating in “inspired by Douglas Adams” territory, and Pratchett wasn’t yet Terry Pratchett, beloved icon. But somewhere around there at the turn of the decade is where he stops writing comedy fantasy and starts writing satirical masterpieces. “What’s the first truly great Discworld novel?” is the sort of unanswerable question the old web thrived on, despite the fact that the answer is clearly Guards! Guards! from ’89. But the point here is that was book 8 after a decade of constant writing. And that’s still a long way away from Going Postal or The Wee Free Men. We never got to see what a “Douglas Adams 8th Novel” looked like, much less a 33rd.
What got me thinking about this was I saw a discussion recently about which of Adams or Pratchett was the better writer. And again, this is a weird comparison, because Pratchett had a late period that Adams never had. Personally, I think there’s very little Pratchett that’s as good as Adams at his peak, but Pratchett wrote ten times the number of novels Adams did and lived twenty years longer. Yes, Pratchett’s 21st century late period books are probably better than Adams’ early 80s work, but we never got to see what Adams would have done at the same age.
(Of course the real answer is: they’re both great, but PG Wodehouse was better than both of them.)
And this is the underlying frustration of Salmon and the Late Adams that never happened. There’s these little glimpses of what could have been, career paths he didn’t take. It’s not that hard to imagine a version of Hitchhiker that worked like Discworld did, picking up new characters and side-series but always just rolling along, a way for the author to knock out a book every year where Arthur Dent encountered whatever Adams was thinking about, where Adams didn’t try to tie it off twice. Or where Adams went the Asimov route and left fiction behind to write thoughtful explanatory non-fiction in the style of Last Chance.
Instead all we have is this. It’s scraps, but scraps I’m grateful for.
This is where I put a horizontal line and shift gears dramatically. Something I’ve wondered with increasing frequency over the last decade is who Adams would have turned into. I wonder this, because it’s hard to miss that nearly everybody in Adams’ orbit has turned into a giant asshole. The living non-Eric Idle Pythons, Dawkins and the whole New Atheist movement, the broader 90s Skeptic/Humanist/“Bright” folks all went mask-off the last few years. Even the guy who took over the math puzzles column in Scientific American from Martin Gardner now has a podcast where he rails against “wokeists” and vomits out transphobia. Hell, as I write this, Neil Gaiman, who wrote the definitive biography of Adams and whose first novel was a blatant Adams pastiche, has turned out to be “problematic” at best.
There’s something of a meme in the broader fanbase that it’s a strange relief that Adams died before we found out if he was going to go full racist TERF like all of his friends. I want to believe he wouldn’t, but then I think about the casual viciousness with which Adams slaughtered off Arthur Dent in Mostly Harmless—the beloved character who made him famous and rich—and remember why I hope those rumors about ghostwriters are true.
The New Atheists always kind of bugged me for reasons it took me a long time to articulate; I was going to put a longer bit on that theme here, but this piece continues to be proof that if you let something sit in your drafts folder long enough someone else will knock out an article covering the parts you haven’t written yet, and as such The Defector had an absolutely dead-on piece on that whole movement a month or so ago: The Ghosts Of New Atheism Still Haunt Us. Adams goes (mercifully) unmentioned, but recall Dawkins met his wife—Doctor Who’s Romana II herself, Lalla Ward!—after Adams introduced the two of them at a party Adams was hosting, and Adams was a huge sloppy fan of Dawkins and his work.
I bring all this up here and now because one of the pieces in Salmon of Doubt is an interview of Adams by the “American Atheist”, credited to The American Atheist 37, No. 1 which in keeping with Salmon’s poor organization isn’t dated, but a little digging on the web reveals to be the Winter 1998–1999 issue.
It’s incredible, because the questions the interviewer asks him just don’t compute with Adams. Adams can’t even engage with the world-view the American Atheists have. I’m going to quote the best exchange here:
AMERICAN ATHEISTS: Have you faced any obstacles in your professional life because of your Atheism (bigotry against Atheists), and how did you handle it? How often does this happen?
DNA: Not even remotely. It's an inconceivable idea.
One can easily imagine, and by “imagine” I mean “remember”, other figures from that movement going on and on about how poorly society treats atheists, and instead here Adams just responds with blank incomprehension. Elsewhere in the interview he dismisses their disconnect as a difference between the US and the UK, which is blatantly a lie, but also demonstrates the sort of kindness and empathy one doesn’t expect from the New Atheists. Every response Adams gives has the air of him thinking “what in the world is wrong with you?”
And, here in the twenties, that was my takeaway from reading Salmon again. It’s a book bursting with empathy, kindness, and a fundamentally optimistic take on the absurd world we find ourselves in. A guy too excited about how great things could be to rant about how stupid they are (or, indeed, to put the work into getting there.) A book full of things written by, fundamentally, one of the good guys.
If Adams had lived, I’m pretty sure three things would be true. First, there’d be a rumor every year that this was the year he was finally going to finish a script for the new Doctor Who show, despite the fact that this never actually ends up happening. Second, that we never would have been able to buy a completed Salmon of Doubt. Third, I’m pretty sure he wouldn’t be on twitter asking people to define “a woman.”
In other words: Don't Panic.
Bad Art is Still Art
It’s “Spicy Takes Week” over at Polygon, and one of the bits they’re kicking off with is: Roger Ebert saying video games are not art is still haunting games.
For everyone that made better choices about how to spend the early 00s than I did: almost two decades ago film critic Roger Ebert claimed that video games were not and could not be art, which was an opinion that the video game–playing denizens of the web took in good humor and weren’t weird about at all. HAHA, of course I am kidding, and instead it turned into a whole thing which still has occasional outbreaks, and the vitriol of the response was, in retrospect, an early-warning sign of the forces that would congeal into gamergate and then keep going.
At the time, I thought it was terribly funny, mostly because of the irony of a critic of a new-ish artform that was only recently regarded as art kicking down the ladder behind him, but also because the movie that inspired him to share this view was the 2005 adaptation of DOOM, and look, if that movie was my only data point I’d deny that games were art too.
Whenever the videogames-as-art topic pops back up, I’m always briefly hopeful, because there are actually a lot of interesting topics here—what does it mean for authorship and art if the audience is also invited to be part of that authorship? If video games are art, are tabletop games? Can collaborative art made exclusively for the participants be art? (For the record, yes, yes, and yes.) There’s also a fun potential side-order of “games may not be art but can contain art, and even better can be used to create art,” which is where the real juice is.
But no, that’s never what anyone wants to talk about, instead it’s always, as polygon says, about people wanting to sit at what they see as the big kids table without having to think through the implications, with a side-order of the most tedious “is it still art if you make money” arguments you’ve ever seen, surrounded by the toxic sheen of teenagers who don’t think they’re being taken seriously enough.
I think one of the reasons that the “Ebert thing” specifically has stuck around long past his death is that of all the mainstream critics, he seemed the most likely to be “one of us.” He was always more sympathetic to genre stuff than most of his colleagues. He loved Star Wars! He called out Pauline Kael by name to argue that no, Raiders of the Lost Ark is great, actually. He wrote Beyond the Valley of the Dolls, for heaven’s sake. It sure seems like he’d be the kind of guy that would be all “heck yeah, I love video games!” and instead he said that not only were video games not at the adults’ table, but that they could never get there.
Kind of a surprise, but everyone is entitled to their opinion. And look, whatever argument that there might have existed to change Ebert’s mind, a bunch of 16-year olds telling him that Halo of all things was the greatest piece of art ever created was the exact opposite.
Mostly, I’m “yes, and-ing” polygon’s article so I finally have an excuse to link to this interview with George Lucas at Cannes from a few months ago, which apparently only exists on the wreckage formerly known as twitter?
The whole interview is great, a classic sharp-and-cranky Lucas interview. It’s all worth watching, but the bit I’m quoting here starts at about 7:40. The interviewer asks him about Martin Scorsese saying that Marvel movies aren’t cinema, and Lucas manages to look even grouchier and with a sort of sigh says "Look. Cinema is the art of a moving image. So if the image moves, then it’s… cinema.” (Seriously, the look on his face, a sort of patronizing exhaustion, is great.)
And I think that really cuts to the core of these weird semantic gatekeeping debates: Cinema you don’t enjoy is still Cinema. Bad Art is Still Art.
There’s so much to enjoy here. It’s not clear from the way he asks the question if the interviewer knows how much backstory there is to that question. Does he know that George and Marty have been friends for half a century? Does he know that Marcia Lucas edited a bunch of Marty’s movies? Does he know Marty has been talking shit about Star Wars since before it was released, in exactly the same way he talks about Marvel movies? Lucas’ demeanor in this is as if that Franco “First Time?” meme came to life, an air that he’s been having this exact conversation since before the guy asking the question was born, and is resigned to continuing to do so for the rest of his life.
But it’s the same set of arguments. It’s not art because it’s fun, or made money, or has spaceships, or because I just didn’t like it very much. I have a list of qualities I associate with art, and I can’t or won’t recognize their presence here.
All these arguments, with video games, or superhero movies, or Star Wars or whatever, always center on the animus of the word “art”, and the desire to make that word into a synonym for “quality”, or more importantly “quality that I, personally, value.”
It always seems to boil down to “I have a lot of emotional investment in this word meaning this exact list of things and I find it threatening whenever someone suggests the tent should be wider,” which semantically is just “TRUKK NOT MUNKY” with extra steps.
Anyway, if people make something for other people to enjoy, it’s art. Even if it’s bad.
Outage
Because it never rains but it pours, we just recovered from a nearly 2-day internet outage here at Icecano headquarters. Apparently the upstream fiber backbone got damaged, and this may or may not have had something to do with the wildfires outside of town. (There’s a reasonably well-sourced rumor that one of the reasons this took so long to recover was that the replacement parts spent most of yesterday stuck in traffic, which is just perfect.)
This next bit isn’t exactly news, but it was remarkable how much stuff just assumes a live, high-speed internet connection now, regardless of whether it makes any sense or is even needed. Everything I was doing for work yesterday was entirely local to my work machine, and everything took forever, because nearly every user action still tripped off a background network call that had to time out before it would go on and do the local action I had asked it to do. I know we all got really excited about “the network is the computer,” but maybe professional tools should keep working right when there’s no connection, you know?
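The fix is old and boring: do the local work first, and let the network call fail quietly in the background with a short timeout. A sketch of the shape I mean—the function names and URL here are hypothetical, this is not any particular tool’s code:

```python
import threading
import urllib.request

def do_local_action(data):
    # The thing the user actually asked for; needs no network. (Hypothetical.)
    return f"processed {data}"

def phone_home(timeout_seconds=2.0):
    # Telemetry/sync call that may hang when the fiber is cut. (Hypothetical URL.)
    try:
        urllib.request.urlopen("https://example.com/telemetry", timeout=timeout_seconds)
    except OSError:
        pass  # Offline: drop it, or queue it for later. Never block the user on this.

def handle_user_action(data):
    result = do_local_action(data)  # local work finishes immediately...
    threading.Thread(target=phone_home, daemon=True).start()  # ...network happens off to the side
    return result

print(handle_user_action("my document"))
```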
Anyway, the upside is this gives me an excuse to link to brr.fyi, which I have been meaning to link to now for ages. It’s a blog from someone at McMurdo station in Antarctica describing their experiences; they’re home now, but still writing some “wrapping it up” reflection-style posts. I had this all on my mind because the author just wrote a post titled: Engineering for Slow Internet.
Last Week In Good Sentences
It’s been a little while since I did an open tab balance transfer, so I’d like to tell you about some good sentences I read last week.
Up first, old-school blogger Jason Kottke links to a podcast conversation between Ezra Klein and Nilay Patel in The Art of Work in the Age of AI Production. Kottke quotes a couple of lines that I’m going to re-quote here because I like them so much:
EZRA KLEIN: You said something on your show that I thought was one of the wisest, single things I’ve heard on the whole last decade and a half of media, which is that places were building traffic thinking they were building an audience. And the traffic, at least in that era, was easy, but an audience is really hard. Talk a bit about that.
NILAY PATEL: Yeah first of all, I need to give credit to Casey Newton for that line. That is something — at The Verge, we used to say that to ourselves all the time just to keep ourselves from the temptations of getting cheap traffic. I think most media companies built relationships with the platforms, not with the people that were consuming their content.
“Building traffic instead of an audience” sums up the last decade and change of the web perfectly. I don’t even have anything to add there, just a little wave and “there you go.”
Kottke ends by linking out to The Revenge of the Home Page in The New Yorker, talking about the web starting to climb back towards a pre-social-media form. And that’s a thought that’s clearly in the air these days, because another old-school blogger, Andy Baio, linked to We can have a different web.
I have this theory that we’re slowly reckoning with the amount of cognitive space that was absorbed by twitter. Not “social media”, but twitter, specifically. As someone who still mostly consumes the web via his RSS reader, and has the whole time, I’ve had to spend a lot of time re-working my feeds over the last several months, because I hadn’t noticed how many feeds had rotted away; those sites had been posting their updates as tweets instead.
Twitter absorbed so much oxygen, and so much stuff migrated from “other places” onto twitter in a way that didn’t happen with other social media systems. Now that twitter is mostly gone, all that creativity and energy is out there looking for new places to land.
If you’ll allow me a strained metaphor, last summer felt like last call before the party at twitter fully shut down; this summer really feels like the next morning, where we’ve all shaken off the hangover and everyone is looking at each other over breakfast asking “okay, what do you want to go do now?”
Jumping back up the stack to Patel talking about AI for a moment, a couple of extra sentences:
But these models in their most reductive essence are just statistical representations of the past. They are not great at new ideas. […] The human creativity is reduced to a prompt, and I think that’s the message of A.I. that I worry about the most, is when you take your creativity and you say, this is actually easy. It’s actually easy to get to this thing that’s a pastiche of the thing that was hard, you just let the computer run its way through whatever statistical path to get there. Then I think more people will fail to recognize the hard thing for being hard.
(The whole interview is great, you should go read it.)
But that bit about ideas and reducing creativity to a prompt brings me to my last good sentences, in this depressing-but-enlightening article over on 404 Media: Flood of AI-Generated Submissions ‘Final Straw’ for Small 22-Year-Old Publisher
A small publisher for speculative fiction and roleplaying games is shuttering after 22 years, and the “final straw,” its founder said, is an influx of AI-generated submissions. […] “The problem with AI is the people who use AI. They don't respect the written word,” [founder Julie Ann] Dawson told me. “These are people who think their ‘ideas’ are more important than the actual craft of writing, so they churn out all these ‘ideas’ and enter their idea prompts and think the output is a story. But they never bothered to learn the craft of writing. Most of them don't even read recreationally. They are more enamored with the idea of being a writer than the process of being a writer. They think in terms of quantity and not quality.”
And this really gets to one of the things that bothers me so much about The Plagiarism Machine—the sheer, raw entitlement. Why shouldn’t they get to just have an easy copy of something someone else worked hard on? Why can’t they just have the respect of being an artist, while bypassing the work it takes to earn it?
My usual metaphor for AI is that it’s asbestos, but it’s also the art equivalent of steroids in pro sports. Sure, you hit all those home runs or won all those races, but we don’t care, we choose to live in a civilization where those don’t count, where those are cheating.
I know several people who have become enamored with the Plagiarism Machines over the last year—as I imagine all of us do now—and I’m always struck by a couple of things whenever they accidentally show me their latest works:
First, they’re always crap, just absolute dogshit garbage. And I think to myself, how did you make it to adulthood without being able to tell what’s good or not? There’s a basic artistic media literacy that’s just missing.
Second, how did we get to the point where you’ve got the nerve to be proud that you were cheating?
Hundreds of Beavers
Your weekend movie recommendation: Hundreds of Beavers. An indie movie that did the festival circuit over the last year or so, and just came out on iTunes this week. It’s a comedy about an applejack salesman becoming North America’s greatest fur trapper. I had a chance to see this one early. All I’ll say is that it’s kid-friendly, and it’s funny.
I’m gonna need you to trust me on this. Part of the joy of this movie is the discovery. Don’t read anything about it, don’t watch the trailer. Just watch it.
“And Then My Reward Was More Crunch”
For reasons that are probably contextually obvious, I spent the weekend diving into Tim Cain’s YouTube channel. Tim Cain is still probably best known as “the guy who did the first Fallout,” but he spent decades working on phenomenal games. He’s semi-retired these days, and rather than write memoirs, he’s got a “stories from the old days” YouTube channel, and it’s fantastic.
Fallout is one of those bits of art that seems to accrete urban legends. One of the joys of his channel has been having one of the people who was really there say “let me tell you what really happened.”
One of the more infamous beats around Fallout was that Cain and the other leads of the first Fallout left partway through development of Fallout 2 and founded Troika Games. What happened there? Fallout was a hit, and from the outside it’s always been baffling that Interplay just let the people who made it… walk out the door?
I’m late to this particular party, but a couple months ago Cain went on the record with what happened, along with a key postscript.
Listening To My Stories With Nuance
…And oh man, did that hit me where I live, because something very similar happened to me once.
Several lifetimes ago, I was the lead on one of those strange projects that happen in corporate America, where something absolutely has to happen but isn’t considered important enough to actually put people or resources on. We had to completely retool a legacy system by a hard deadline or lose a pretty substantial revenue stream, but it wasn’t one of the big sexy projects, so my tiny team basically got told to figure it out and was left alone for the better part of two years.
Theoretically the lack of “adult supervision” gave us a bunch of flexibility, but in practice it was a huge impediment every time we needed help or resources or infrastructure. It came down to the wire, but we pulled it off, mostly by sheer willpower. It was one of those miracles you can sometimes manage to pull off: we hit the date, stayed in budget, and produced a higher-quality system with more features that was easier to maintain and build on. Not only that, but the transition from the old system to the new one went off with barely a ripple, and we replaced a system that was constantly falling over with one that, last I heard, was still running on years of 100% uptime. The end was a nearly year-long sprint, barely getting it over the finish line. We were all exhausted; I was about ready to die.
And the reward was: nothing. No recognition, no bonus, no time off, the promotion that kept getting talked about evaporated. Even the corp-standard “keep inflation at bay” raise was not only lower than I expected but lower than I was told it was going to be; when I asked about that, the answer was “oh, someone wrote the wrong number down the first time, don’t worry about it.”
I’m, uh, gonna worry about it a little bit, if that’s all the same to you, actually.
Morale was low, is what I’m saying.
But the real “lemon juice in the papercut” moment was the next project. We needed to do something similar to the next legacy system over, and now, armed with the results of the past two years, I went in to talk about how that was going to go. I didn’t want to do that next one at all, and said so. I also thought maybe I had earned the right to move up to one of the projects that people did care about? But no, we really want you to run this one too. Okay, fine. It’s nice to be wanted, I guess?
It was, roughly, four times as much work as the previous project, and it needed to get done in about the same amount of time. Keeping in mind we barely made it the first time, I said, okay, here’s what we need to do to pull this off, here’s the support I need, the people, here’s my plan to land this thing. There’s always more than one way to get something done: I needed either more time or more people, and I had some underperformers on the team who needed to be rotated out. And I got told, no, you can’t have any version of that. We have a hard deadline, you can’t have any more people, you have to keep the dead weight. Just find a way to get four times as much work done with what you have, in less time. Maybe just keep working crazy hours? All with a tone implying I couldn’t possibly know what I was talking about.
And this is the part of Tim Cain’s story I really vibrated with. I had pulled off a miracle, and the only reward was more crunch. I remember sitting in my boss’s boss’s office, thinking to myself “why would I do this? Why would they even think I would say yes to this?”
Then, they had the unmitigated gall to be surprised when I took another job offer.
I wasn’t the only person that left. The punchline, and you can probably see this coming, is that it didn’t ship for years after that hard deadline and they had to throw way more people on it after all.
But, okay, other than general commiserating with an internet stranger about past jobs, why bring all this up? What’s the point?
Because this is exactly what I was talking about on Friday in Getting Less out of People. Because we didn’t get a whole lot of backstory with Barry. What’s going on with that guy?
The focus was on getting Maria to be like Barry, but does Barry want to be like Barry? Does he feel like he’s being taken advantage of? Is he expecting a reward and then a return to normal while you’re focusing on getting Maria to spend less time on her novel and more time on unpaid overtime? What’s he gonna do when he realizes that what he thinks is “crunch” is what you think is “higher performing”?
There’s a tendency to think of productivity like a ratchet: more story points, more velocity, more whatever. Number go up! But people will always find an equilibrium. The key to real success is to figure out how to provide that equilibrium for your people, because if you don’t, someone else will.
Electronics Does What, Now?
A couple months back, jwz dug up this great interview of Bill Gates conducted by Terry Pratchett in 1996 (jwz: “Electronics gives us a way of classifying things”), which includes this absolute gem:
TP: OK. Let's say I call myself the Institute for Something-or-other and I decide to promote a spurious treatise saying the Jews were entirely responsible for the Second World War and the Holocaust didn't happen. And it goes out there on the Internet and is available on the same terms as any piece of historical research which has undergone peer review and so on. There's a kind of parity of esteem of information on the Net. It's all there: there's no way of finding out whether this stuff has any bottom to it or whether someone has just made it up.
BG: Not for long. Electronics gives us a way of classifying things. You will have authorities on the Net and because an article is contained in their index it will mean something. For all practical purposes, there'll be an infinite amount of text out there and you'll only receive a piece of text through levels of direction, like a friend who says, "Hey, go read this", or a brand name which is associated with a group of referees, or a particular expert, or consumer reports, or the equivalent of a newspaper... they'll point out the things that are of particular interest. The whole way that you can check somebody's reputation will be so much more sophisticated on the Net than it is in print today.
“Electronics gives us a way of classifying things,” you say?
One of the most maddening aspects of this timeline we live in is that all our troubles were not only “foreseeable”, but actually actively “foreseen”.
But we already knew that; that’s not why this has been, as they say, living rent-free in my head. I keep thinking about this because it’s so funny.
First, you just have the basic absurdity of Bill Gates and Terry Pratchett in the same room, that’s just funny. What was that even like?
Then, you have the slightly sharper absurdity of PTerry saying “so, let me exactly describe 2024 for you” and BillG waving his hands, all “someone will handle it, don’t worry.” There’s just something so darkly funny about BillG patronizing Terry Pratchett of all people, whose entire career was built around imagining ways people could misuse systems for their own benefit. Just a perfect example of the people who understood people doing a better job predicting the future than the people who understood computers. It’s extra funny that it wasn’t thaaat long after this that he wrote his book satirizing the news?
Then, PTerry fails to ask the really obvious follow-up question, namely “okay great, who’s gonna build all that?”
Because, let’s pause and engage with the proposal on its own merits for a second. That’s a huge system Bill is proposing that “someone” is gonna build. Who’s gonna build all that, Bill? Who’s gonna staff it? You? What’s the business model? Is it going to be grassroots? That’s probably not what he means, since this is the mid-90s and MSFT still thinks that open source is a cancer. Instead: magical thinking.
Like the plagiarism thing with AI, there’s just no engagement with the fact that publishing and journalism have been around for literally centuries and have already worked out most of the solutions to these problems. Instead, we had guys in business casual telling us not to worry about bad things happening, because someone in charge will solve the problem, all while actively setting fire to the systems that were already doing it.
And it’s clear there’s been no thought given to “what if someone uses it in bad faith”. You can tell that even in ’96, Terry was getting more email chain letters than Bill was.
But also, it’s 1996, baby, the ship has sailed. The fuse is lit, and all the things that are making our lives hard now are locked in.
But mostly, what I think is so funny about this is that Terry is talking to the wrong guy. Bill Gates is still “Mister Computer” to the general population, but “the internet” happened in spite of his company, not due to any work they actually did. Instead, very shortly after this interview, Bill’s company is going to get shanked by the DOJ for trying to throttle the web in its crib.
None of this “internet stuff” is going to center around what Bill thinks is going to happen, so even if he was able to see the problem, there wasn’t anything he could do about it. The internet was up and running well before MICROS~1 noticed, and it just routed around them and kept going. There were some Stanford grad students Terry needed to get to instead.
But I’m sure Microsoft’s Electronic System for classifying reputation will ship any day now.
I don’t have a big conclusion here other than “Terry Pratchett was always right,” and we knew that already.