Wacky Times for AI
Been a wacky month or two for AI news! OpenAI is reorganizing! Apple declined to invest! Whatsisname decided he wanted equity after all! The Governor of CA vetoed an AI safety bill! Microsoft is rebooting Three Mile Island, which feels like a particularly lazy piece of satire from the late 90s that escaped into reality! Study after study keeps showing no measurable benefit to AI deployment! The web is drowning in AI slop that no one likes!
I don’t know what any of that means, but it’s starting to feel like we’re getting real close to the part of The Sting where Kid Twist tells Quint from Jaws something confusing on purpose.
But, in our quest here at Icecano to bring you the best sentences from around the web, I’d like to point you at The Subprime AI Crisis because it includes this truly excellent sentence:
Generative AI must seem kind of magical when your entire life is either being in a meeting or reading an email
Oh Snap!
Elsewhere in that piece it covers the absolutely eye-watering amounts of money being spent on the plagiarism machine. There are bigger problems than the cost: the slop itself, the degraded information environment, the toll on the actual environment. But man, that is a hell of a lot of money to just set on fire to get a bunch of bad pictures no one likes. The opportunity cost is hard to fathom; imagine what that money could have been spent on! Imagine how many actually cool startups that would have launched! Imagine how much real art that would have bought!
But that’s actually not what I’m interested in today. What I am interested in are statements like these:
State of Play: Kobold Press Issues the No AI Pledge
Legendary Mario creator on AI: Nintendo is “going the opposite direction”
I stand by my metaphor that AI is like asbestos, but increasingly it’s also the digital equivalent of High Fructose Corn Syrup. Everyone has accepted that AI stuff is “not as good”, and it’s increasingly treated as low-quality filler, even by the people who are pushing it.
What’s intriguing to me is that companies whose reputation or brand centers around creativity or uniqueness are working hard to openly distance themselves. There’s a real “organic farm” energy, or maybe more of a “restaurant with real food, not fast food.”
Beyond the moral & ethical angle, it gives me hope that “NO AI” is emerging as a viable advertising strategy, in a way that “Made with AI” absolutely isn’t.
Dungeons & Dragons (2024): Trying to Make a Big Tent Bigger
Dungeons & Dragons is a weird game. I don’t mean that as some kind of poetic statement about role-playing games in general, I mean that specifically within the world of tabletop RPGs, D&D is weird. It’s weird for a lot of reasons, including, but not limited to:
- It’s the only TTRPG with actual “real world” name recognition or any sort of cross-over brand awareness.
- For most of its existence, it hasn’t been a very good game.
And then for bonus points, it’s not even one game! Depending on how you count it’s at least six different related but totally incompatible games.
The usual example for a brand name getting turned into a generic noun is “kleenex”, but the thing where “Dungeons and Dragons” has become a generic noun for all RPGs is so strange.
It’s so much better known than everything else that it’s like if all TV shows were called MASH, as in “hey, that new MASH with the dragons is pretty good,” or “I stayed in and rewatched that MASH with the time-traveller with the police box,” etc.
There was a joke in the mid-90s that all computer games got pitched as “it’s like DOOM, but…” and then just pitched the game regardless of how much it was actually like Doom; “It’s like DOOM except it’s not in first person, it’s not in real time, you don’t have a gun, you’re a pirate, you’re not in space, and instead you solve puzzles”. D&D is like that but for real.
Which is a testament to the power of a great name and the first mover advantage, because mechanically, the first 30-or-so years of the game were a total mess. In a lot of ways, RPGs became an industry because everyone who spent more than about 90 seconds with D&D in the 70s, 80s, or 90s immediately thought of ten ways to improve the game, and were right about at least eight of them. (One of the best running bits in Shannon Appelcline’s seminal Designers & Dragons is how many successful RPG companies literally started like this.)
And this mechanical weirdness isn’t just because it was first, but because of things like Gary Gygax’s desire to turn it into a competitive sport played at conventions, but also make sure that Dave Arneson didn’t get paid any royalties, and also show off how many different names of polearms he knew. As much as RPGs are sold as “do anything, the only limit is your imagination!” D&D has always been defined by its weird and seemingly arbitrary limits. So there’s a certain semi-effable “D&D-ness” you need for a game to be “Dungeons & Dragons” and not just another heroic fantasy game, not all of which make for a great system. It’s a game where its flaws have become part of the charm; the magic system is objectively terrible, but is also a fundamental part of its D&D-ness.
The upshot of all that is that for most of its life, D&D had a very clear job within the broader TTRPG world: it was the game that onboarded new players to the hobby, who then immediately graduated to other, better games. The old Red Box was one of the great New Customer Acquisition products of all time, but most people proceeded to bounce right off Advanced D&D, and then moved on to Ninja Turtles, or Traveller, or Vampire, or GURPS, or Shadowrun, or Paranoia, or Star Wars, or any number of other systems that were both better games and were more tailored to a specific vibe or genre, but all assumed you already knew how to play. It wasn’t a game you stuck with. You hear stories about people who have been playing the same AD&D 2nd Edition game for years, and then you ask a couple of follow-up questions and realize that their home rules make the Ship of Theseus look under-remodeled.
Now, for the hobby at large that’s fairly healthy, but if your salary depends on people buying “Dungeons & Dragons” books specifically, I can see how that would be fairly maddening. The game, and the people who make it, have been in an ongoing negotiation with the player base to find a flavor of the game that people are actually willing to stick around for. This results in the game’s deeply weird approach to “Editions”, where each numbered edition is effectively a whole new game, always sold with a fairly explicit “Look! We finally fixed it!”
This has obviously been something of a mixed bag. I think a big part of the reason the d20 boom happened at the turn of the century was that for the first time, 3rd edition D&D was actually a good game. Not perfect, but finally worth playing. 4e, meanwhile, was the best-designed game that no one wanted to play, and it blew up the hobby so much that it both created Pathfinder and served as one of the sparks to light off the twenty-teens narrative RPG boom.
Another result of this ongoing negotiation is that D&D also has a long tradition of “stealth” updates, where new books come out that aren’t a formal revision, but if you pull the content in it dramatically changes the game. AD&D 1 had Oriental Adventures and Unearthed Arcana, AD&D 2 had those Player’s Option books (non-weapon proficiencies!), Basic had at least three versions (the original B/X, the BECMI sets, and then the Rules Cyclopedia). 3rd had the rare Formal Update in the form of the 3.5 release, but it also had things like the Miniatures Handbook (which, if you combine that with the SAGA Edition of Star Wars, makes the path from 3 to 4 more obvious.) 4e had Essentials.
2024 is a radically different time for tabletop games than 2014 was. As the twenty-teens dawned, there was a growing sense that maybe there just wasn’t going to be a commercial TTRPG industry anymore. Sales were down, the remaining publishers were pivoting to PDF-only releases, companies were either folding or moving towards other fields. TTRPGs were just going to be a hobbyist niche thing from here on out, and maybe that was going to be okay. I mean, text-based Interactive Fiction Adventure games hadn’t been commercially viable since the late 80s, but the Spring Thing was always full of new submissions. I remember an article on EN World or some such in 2012 or 2013 that described the previous year’s sales as “an extinction level event for the industry.”
Designers & Dragons perfectly preserves the mood from the time. I have the expanded 2014 4-volume edition, although the vast majority of the content is still from the 2011 original, which officially covers the industry up to 2009 and then peeks around the corner just a bit. The sense of “history being over” pervades the entire work; there’s a real feeling that the heyday has passed, and so now is the time to get the first draft of history right.
As such, the Dungeons & Dragons (2014) books had a certain “last party of summer vacation” quality to them. The time where D&D would have multiple teams with cool codenames working on different parts of the game was long past; this was done by a small group in a short amount of time, and somewhat infamously wasn’t really finished, which is why so many parts of the book seem to run out of steam and end with a shrug emoji and “let the DM sort it out.” The bones are pretty good, but huge chunks of it read like one of those book reports where you’re trying to hide the fact you only read the first and last chapters.
That’s attracted a lot of criticism over the years, but in their (mild) defense, I don’t think it occurred to them that anyone new was going to be playing Fifth. “We’re gonna go out on a high note, then turn the lights out after us.” Most of the non-core book product line was outsourced for the first year or so, it was all just sorta spinning down.
Obviously, that’s not how things went. Everyone has their own theory about why 5th Edition caught fire the way no previous edition had, and here’s mine: The game went back to a non-miniatures, low-math design right as the key enabling technology for onboarding new players arrived: Live Play Podcasts. By hook or by crook, the ruleset for 5E is almost perfect for an audio-only medium, and moves fast, in a way that none of the previous 21st century variants had been.
And so we find ourselves in a future where D&D, as a brand, is one of Hasbro’s biggest moneymakers.
Part of what drove that success is that Hasbro has been very conservative about changes to the game, which has clearly let the game flourish like never before, but the same issues are still there. Occasionally one of the original team would pop up on twitter and say something like “yeah, it’s obvious now what we should have done instead of bonus actions,” but nothing ever shipped as a product.
5th edition has already had its stealth update in the form of the Tasha/Xanathar/Mordenkainen triptych, but now we’ve got something that D&D really hasn’t had before: the 2024 books are essentially 5th Edition, 2nd Edition. Leading the charge of a strangely spaced-out release schedule is the new Player’s Handbook (2024).
Let’s start with the best part: The first thirty pages are a wonder. It opens with the best “what is an RPG” intro I have ever read, and works its way up through the basics, and by page 28 has fully explained the entire ruleset. To be clear: there aren’t later chapters with names like “Using Skills” or “Combat”, or “Advanced Rules”, this is it.
The “examples of play” are a real work of art. The page is split into two columns: the left side of the page is a running script-like dialogue of play, and the right side is a series of annotations and explanations describing exactly what rule was in play, why they rolled what they rolled, and what the outcome was. I’ve never seen anything quite like it.
This is followed by an incredibly clear set of instructions on how to create a character, and then… the rest of the book is reference material. Chapters on the classes, character origins, feats, equipment, spells, a map of the Planes, stat blocks for creatures to use as familiars or morph targets.
Finally, the book ends with its other best idea: the Rules Glossary. It’s 18 pages of The Rules, alphabetical by Formal Name, clearly written. There’s no flipping around in the book looking for how to Grapple or something; it’s in the glossary. Generally, the book will refer the reader to the glossary instead of stating a rule in place.
It’s really easy to imagine how to repackage this layout into a couple of Red Box–style booklets covering the first few levels. You can basically pop the first 30 pages out as-is and slap a cover on it that says “Read This First!”
Back when I wrote about Tales of the Valiant, I made a crack that maybe there just wasn’t a best order for this material. I stand corrected. It’s outstanding.
Design-wise the book is very similar to its predecessor: same fonts, same pseudo-parchment look to the paper, same basic page layout. My favorite change is that the fonts are all larger, which my rapidly aging eyes appreciate.
It’s about 70 pages longer than the 2014 book, and it wouldn’t surprise me to learn that both books have the same number of words and that the extra space is taken up with the larger text and more art. The book is gorgeous, and is absolutely chock full of illustrations. Each class gets a full-page piece, and then each subclass gets a half-page piece showing an example of that build. It’s probably the first version of this game where you can flip through the classes chapter, and then stop at a cool picture and go “hang on, I want to play one of THOSE”. The art style feels fresh and modern in a way that’s guaranteed to make everyone say “that is so twenties” years from now; the same way that the art for the original 3rd edition books looked all clean and modern at the time, but now screams “late 90s” in a way I don’t have the critical vocabulary to describe. (Remember how everything cool had to be asymmetrical for a while there? Good times!)
Some of the early art previewed included a piece with the cast from the 80s D&D cartoon drawn in the modern style of the book. At the time, I thought that was a weird piece of nostalgia bait: really? Now’s the time to do a callback to a 40-year-old cartoon? Who’s the audience for that?
But I was wrong about the intent, because this book is absolutely full of all manner of callbacks and cameos. The DragonLance twins are in the first couple of pages, everyone’s favorite Drow shows up not long after, there’s a guy from Baldur’s Gate 3, the examples of play are set in Castle Ravenloft, there’s Eberron airships, characters from the 80s action figure line, the idol from the old DMG cover, a cityscape of Sigil with the Lady floating down the street. It’s not a nostalgia play so much as it is a “big tent” play: the message, over and over again, is that everything fits. You remember some weird piece of D&D stuff from ages ago? Yeah, that’s in here too. Previous versions of this game have tended to start with a posture of “here’s the default way to play now”, with other “weirder” stuff floating in later. This takes the exact opposite approach, this is full-throated “yes, and” to everything D&D. So not only does Spelljammer get a shoutout in the 2 page appendix about the planes, but rules for guns are in the main equipment chapter, the psionic subclasses are in the main book, airships are in the travel costs table. Heck, the para-elemental planes are in the inner planes diagram, and I thought I was the only person who remembered those existed.
And this doesn’t just mean obscure lore pulls; the art is a case study in how to do “actual diversity”. There’s an explosion of body types, genders, skin tones, styles, and everyone looks cool.
There’s a constant, pervasive sense of trying to make the tent as big and as welcoming as possible. Turns out “One D&D” was the right codename for this; it wasn’t a version number, it was a goal.
Beyond just the art, the 2024 book has a different vibe. The whimsy of the 2014 version is gone: the humorous disclaimer on the title page isn’t there, and there isn’t a joke entry for THAC0 in the index. If the 2014 book was an end-of-summer party, this is a start-of-the-year syllabus.
The whole thing has been adjusted to be easier to use. The 2014 book had a very distinct yellowed-parchment pattern behind the text; the 2024 book has a similar pattern, but it’s much less busy and paler, so the text stands out better against the background. All the text is shorter, more to the point. The 2014 book had a lot of fluff that just kinda clogged up the rules when you were trying to look something up in a hurry; the 2024 book has been through an intense editing pass.
As an example: in the section for each class, each class ability has a subheading with the name of the power, and then a description, like this:
Invert the Polarity Starting at 7th level, your growing knowledge of power systems allows you to invert the polarity of control circuits, such as in teleport control panels or force fields. As a bonus action, you can add a d4 to attempts to control electrical systems. After using this power, you must take a short or long rest before using it again.
Now, it’s like this:
Level 7: Invert the Polarity Add 1d4 to checks made with the Sonic Screwdriver Tool. You regain this feature after a short or long rest.
For better or worse, it’s still 5th edition D&D. All the mechanical warts of the system are still there; the weird economy around Bonus Actions, too many classes have weird pools of bonus dice, the strange way that some classes get a whole set of “spell-like” powers to choose from, and other classes “just get spells.” There still isn’t a caster that just uses spell points. Warlocks still look like they were designed on the bus on the way to school the morning the homework was due. Inspiration is still an anemic version of better ideas from other systems. Bounded accuracy still feels weird if you’re not used to it. It’s still allergic to putting math in the text. It still tries to sweep more complex mechanics under the rug by having a very simple general rule, and then a whole host of seemingly one-off exceptions that feel like they could have just been one equation or table. The text is still full of tangled sentences about powers recharging after short and long rests instead of just saying powers can be used so many times per day or encounter. There’s still no mechanic for “partial success” or “success with consequences.” You still can’t build any character from The Princess Bride. If 5th wasn’t your jam, there’s nothing here that’ll change your mind.
On the other hand, the good stuff is largely left unchanged: The Advantage/Disadvantage mechanic is still brilliant. The universal proficiency bonus is still a great approach. Bounded Accuracy enables the game to stay fun long past the point where other editions crash into a ditch filled with endless +2 modifiers. It’s the same goofball combat-focused fantasy-themed superhero game it’s been for a long time. I’ve said many times, 5e felt like the first version of D&D that wasn’t actively fighting against the way I like to run games, and the 2024 version stays that way.
All that said, it feels finished in a way the 2014 book didn’t. It’s a significantly smaller mechanical change than 3 to 3.5 was, but the revisions are where it counts.
Hasbro has helpfully published a comprehensive list of the mechanics changes as Updates in the Player’s Handbook (2024) | Dungeons & Dragons, so rather than rehash the whole list, here are the highlights that stood out to me:
The big one is that Races are now Species, and Backgrounds have been reworked and made more important, and the pair are treated as “Origins”. This is a massive improvement: gone is the weird racial determinism, and where you grew up is now way more important than where your ancestors came from. There are some really solid rules for porting an older race or background into the new rules. The half-races are gone, replaced by “real Orcs” and the Aasimar and Goliaths being called up to the big leagues. Backgrounds in 2014 were kinda just there, a way to pick up a bonus skill proficiency; here they’re the source of the attribute bonus and an actual Feat. Choosing a pair feels like making actual choices about a specific character, in a way that previous editions, which would sort of devolve that choice into “choose your favorite Fellowship member”, never quite managed.
Multi-classing and Feats are fleshed out and no longer relegated to an “optional because we ran out of time” sidebar. Feats specifically are much closer to where they were in 3e—interesting choices to dial in your character. They split the difference on the choice you had to make in 5e between a stat boost and a feat: you still make that choice, but the stat boost now bumps up two stats, and every general feat includes a single stat boost.
The rules around skills vs tools make sense. At first glance, there don’t seem to be weird overlaps anymore. Tools were one of those undercooked features in 2014; they were kinda like skills, but not? When did you use a tool vs a plain skill check? How do you know what attribute bonus to use? Now, every attribute and skill has a broad description and examples of what you can use them for. Each tool has a full description, including the linked attribute, at least one action you can use it for, and at least one thing you can craft with it. And, each background comes with at least one tool proficiency. You don’t have to guess or make something up on the fly, or worse, remember what you made up last time. It’s not a huge change, but it feels done.
Every class has four subclasses in the main book now, which cover a pretty wide spread of options, and sanity has prevailed and all subclasses start at level 3. (In a lot of ways, level 3 is clearly the first “real” level, with the first two as essentially the tutorial, which syncs well with the fact that, if you follow the recommended progression, you’ll hit 3rd level at the end of the second session.)
The subclasses are a mix of ones from the 2014 book, various expansions, and new material, but each has gotten a tune-up to focus on what the actual fantasy is. To use the Monk as an example, the subclasses are “Hong Kong movie martial artist”, “ninja assassin”, “airbender”, and, basically, Jet Li from Kiss of the Dragon? The Fighter subclasses have a pretty clear sliding scale of “how complicated do you want to make this for yourself,” spanning “Basic Fighter”, “3rd Edition Fighter”, “Elf from Basic D&D”, and “Psionics Bullshit (Complimentary)”.
Weapons now have “Weapon Mastery Properties” that, if you have the right class power or feat, allow you to do additional actions or effects with certain weapons, which does a lot to distinguish A-track fighters from everyone else without just making their attack bonus higher.
The anemic Ideals/Flaws/Bonds thing from 2014 is gone, but in its place there’s a really neat set of tables with descriptive words for both high and low attributes and alignment that you can roll against to rough in a personality.
On the other hand, let’s talk about what’s not here. The last page of the book is not the OGL, and there’s no hint of what any future 3rd party licensing might be. The OGL kerfuffle may have put the 2014 SRD under a CC license, but there’s no indication that there will even be a 2024 SRD.
There’s basically nothing in the way of explicit roleplaying/social hooks; and nothing at all in the way of inter-party hooks. PbtA is a thing, you know? But more to the point, so was Vampire. So was Planescape. There’s a whole stack of 30-year old innovations that just aren’t here.
Similarly there’s no recognition of “the party” as a mechanical construct.
There’s nothing on safety tools or the like; there is a callout box about Session Zero, but not much else. I’m withholding judgement on that one, since it looks like there’s something on that front in the DMG.
There’s very little in the way of mechanics for things other than combat, although once again, D&D tends to treat that as a DMG concern.
The other best idea that 4e had was recognizing that “an encounter” was a mechanical construct, but didn’t always have to mean “a fight.” This wasn’t new even then; to use a game I can see from where I’m sitting as an example, Feng Shui was organized around “scenes” in the early 90s. Once you admit an encounter is A Thing, you can just say “this works once an encounter” without having to put on a big show about short rests or whatever, when everyone knows what you mean.
Speaking for myself, as someone who DMs more than he plays, I can’t say as I noticed anything that would change the way I run. The ergonomics and presentation of the book, yes, more different and better player options, yes, but from the other side of the table, they’re pretty much the same game.
Dungeons & Dragons is in a strange spot in the conceptual space. It’s not an explicit generic system like GURPS or Cypher, but it wants to make the Heroic Fantasy tent big enough that it can support pretty much any paperback you find in the fantasy section of the used book store. There’s always been a core of fantasy that D&D was “pretty good at” that got steadily weedier the further you got from it. This incarnation seems to have done a decent job of widening out that center while keeping the weed growth to a minimum.
It seems safe to call this the best version of Dungeons & Dragons to date, and perfectly positioned to do the thing D&D is best at: bring new players into the hobby, get them excited, and then let them move on.
But, of course, it’s double volcano summer, so this is the second revised Fifth Edition this year, after Kobold’s Tales of the Valiant. Alert readers will note that both games made almost the exact same list of changes, but this is less “two asteroid movies” and more “these were the obvious things to go fix.” It’s fascinating how similar they both are; I was expecting to have a whole compare and contrast section here, but not so much! I’m not as tapped into “the scene” as I used to be, so I don’t know how common these ideas were out in the wild, but both books feel like the stable versions of two very similar sets of house rules. It kinda feels like there are going to be a lot of games running a hacked combo of the two.
(To scratch the compare-and-contrast itch: At first glance, I like the ToV Lineage-Heritage-Background set more than the D&D(2024) Species-Background pair, but the D&D(2024) weapon properties and feats look better than their ToV equivalents. Oh, to be 20 and unemployed again!)
The major difference is that ToV is trying to be a complete game, whereas the 2024 D&D still wants to treat the rest of the post-2014 product line as valid.
As of this writing, both games still have their respective DM books pending, which I suspect is where they’ll really diverge.
More than anything, this reminds me of that 2002-2003 period where people kept knocking out alternate versions of 3e (Arcana Unearthed, Conan, Spycraft, d20 Star Wars, etc, etc) capped off with 3.5. A whole explosion of takes on the same basic frame.
This feels like the point where I should make some kind of recommendation. Should you get it? That feels like one of those “no ethical consumption under capitalism” riddles. Maybe?
To put it mildly, it hasn’t been a bump-free decade for ol’ Hasbro; recently the D&D group has made a series of what we might politely call “unforced errors,” or, if we were less polite, “a disastrously mishandled situation or undertaking.”
Most of those didn’t look malevolent, but the sort of profound screwups you get when too many people in the room are middle-aged white guys with MBAs, and not enough literally anyone else. Credit where credit is due, and uncharacteristically for a publicly traded American corporation, they seemed to actually be humbled by some of these, and seemed to be making a genuine attempt to fix the systems that got them into a place where they published a book where they updated an existing race of space apes by giving them the exciting new backstory of “they’re escaped slaves!” Or blowing up the entire 3rd party licensing model for no obvious reason. Or sending the literal Pinkertons to someone’s house.
There seems to be an attempt to use the 2024 books to reset. There seems to be a genuine attempt here to get better at diversity and inclusion, to actually move forward. On the other hand, there’s still no sign of what’s going to happen next with the licensing situation.
And this is all slightly fatuous, because I clearly bought it, and money you spend while holding your nose is still legal tender. Your mileage may vary.
My honest answer is that if you’re only looking to get one new 5e-compatible PHB this year, I’d recommend you get Tales of the Valiant instead, they’re a small company and could use the sales. If you’re in the market for a second, pick this one up. If you’ve bought in to the 5e ecosystem, the new PHB is probably worth the cover price for the improved ergonomics alone.
Going all the way back to where we started, the last way that D&D is weird is that whether we play it or not, all of us who care about this hobby have a vested interest in Dungeons & Dragons doing well. As D&D goes, so goes the industry: if you’ll forgive a mixed metaphor, when D&D does well the rising tide lifts all boats, but when it does poorly D&D is the Fisher King looking out across a blasted landscape.
If nothing else, I want to live in a world where as many people’s jobs are “RPG” as possible.
D&D is healthier than it’s ever been, and that should let us all breathe a sigh of relief. They didn’t burn the house down and start over, they tried to make a good game better. They’re trying to make it more welcoming, more open, trying to make a big tent bigger. Here in the ongoing Disaster of the Twenties, and as the omni-crisis of 2024 shrieks towards its uncertain conclusion, I’ll welcome anyone trying to make things better.
Software Forestry 0x03: Overgrowth, Catalogued
Previously, we talked about What We Talk About When We Talk About Tech Debt, and that one of the things that makes that debt metaphor challenging is that it has expanded to encompass all manner of Overgrowth, not all of which fits that financial mental model.
From a Forestry perspective, not all the things that have been absorbed by “debt” are necessarily bad, and they aren’t always avoidable. Taking a long-term, stewardship-focused view, there’s a bunch of stuff that’s more like emergent properties of a long-running project, as opposed to getting spendy with the credit card.
So, if not debt, what are we talking about when we talk about tech debt?
It’s easy to get over-excited about Lists of Things, but I got into computer science from the applied philosophy side, rather than the applied math side. I think there are maybe seven categories of “Overgrowth” that are different enough to make it useful to talk about them separately:
1. Actual Tech Debt.
Situations where you make an explicit decision to do something “not as good” in order to ship faster. There are two broad subcategories here: using a hacky or unsustainable design to move faster, and cutting scope to hit a date.
In fairness, the original Martin Fowler post just talks about “cruft” in a broad sense, but generally speaking “Formal” (orthodox?) tech debt assumes a conscious choice to accept that debt.
This is the category where the debt analogy works the best. “I can’t buy this now with cash on hand, but I can take on more credit.” (Of course, this also includes “wait, what’s a variable rate?”)
In my experience, this is the least common species of Overgrowth, and the most straightforwardly self correcting. All development processes have some kind of “things to do next” list or backlog, regardless of the formal name. When making that decision to take on the debt, you put an item on the todo list to pay it off.
That list of cut features becomes the nucleus of the plan for the next major version, or update, or DLC. Sometimes, the schedule did you a favor, you realize it was a bad idea, and that cut feature debt gets written off instead of paid off.
The more internal or infrastructure–type items become those items you talk about with the phrase “we gotta do something about…”; the logging system, the metrics observability, that validation system, adding internationalization. Sometimes this isn’t a big formal effort, just a recognition that the next piece of work in that area is going to take a couple extra days to tidy up the mess we left last time.
Fundamentally, paying this off is a scheduling and planning problem, not a technical one. You had to have some kind of an idea about what the work was to make the decision to defer the work, so you can use that same understanding to find it a spot on the schedule.
That makes this the only category where you can actually pay it off. There’s a bounded amount of work you can plan around. If the work keeps getting deferred, or rescheduled, or kicked down the road, you need to stop and ask yourself if this is actually debt or something aspirational that went septic on you.
2. We made the right decision, but then things happened.
Sometimes you make the right decisions, don’t choose to take on any debt, and then things happen and the world imposes work on you anyway.
The classic example: Third party libraries move forward, the new version isn’t cleanly backwards compatible, and the version you’re using suddenly has a critical security flaw. This isn’t tech debt, you didn’t take out a loan! This is more like tech property taxes.
This is also a planning problem, but trickier, because it’s on someone else’s schedule. Unlike the tech debt above, this isn’t something you can pay down once. Those libraries or frameworks are going to keep getting updated, and you need to find a way to stay on top of them without making it a huge effort every time.
Of course, if they stop getting updated you don’t have an ongoing scheduling problem anymore, but you have the next category…
3. It seemed like a good idea at the time.
Sometimes you just guess wrong, and the rest of the world zigs instead of zags. You do your research, weigh the pros and cons, build what you think is the right thing, and then it’s suddenly a few years later and your CEO is asking why your best-in-class data rich web UI console won’t load on his new iPad, and you have to tell him it’s because it was written in Flash.
You can’t always guess right, and sometimes you’re left with something unsupported and with no future. This is very common; there’s a whole lot of systems out there that went all-in on XML-RPC, or RMI, or GWT, or Angular 1, or Delphi, or ColdFusion, or something else that looked like it was going to be the future right up until it wasn’t.
Personally, I find this to be the most irritating. Like Han Solo would say, it’s not your fault! This was all fine, and then someone you never met makes a strategic decision, and now you have to decide how or if you’re going to replace the discontinued tech. It’s really easy to get into a “if it ain’t broke don’t fix it” headspace, right up until you grind to a halt because you can’t hire anyone who knows how to add a new screen to the app anymore. This is when you start using phrases like “modernization effort”.
4. We did the best we could but there are better options now.
There’s a lot more stuff available than there used to be, and so sometimes you roll onto a new project and discover a home-brew ORM, or a hand-rolled messaging queue, or a strange pattern, and you stop and realize that oh wait, this was written before “the thing I would use” existed. (My favorite example of this is when you find a class full of static final constants in an old Java codebase and realize this was from before Java 5 added enums.)
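If you’ve never tripped over that particular fossil, here’s a minimal sketch of the pattern (class and field names are invented; the shape is what you’ll recognize):

```java
// Pre-Java-5 idiom: a closed set of values faked with a "constants holder" class.
// Type-unsafe: any int is accepted wherever a status is expected, and the
// compiler can't check a switch over these for exhaustiveness.
final class OrderStatus {
    static final int PENDING   = 0;
    static final int SHIPPED   = 1;
    static final int CANCELLED = 2;

    private OrderStatus() {} // no instances; it's just a bag of ints
}

// The same idea once enums exist (Java 5+): a real type, usable in switches,
// with names that survive into logs and debuggers.
enum OrderState {
    PENDING,
    SHIPPED,
    CANCELLED
}
```

The old version wasn’t wrong when it was written; it was the only option available.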
A lot of the time, the custom, hand-rolled thing isn’t necessarily “worse” than some more recent library or framework, but you have to have some serious conversations about where to spend your time; if something isn’t your core business and has become a commodity, it’s probably not worth pouring more effort into maintaining your custom version. Everyone wants to build the framework, but no one really wants to maintain the framework. Is our custom JSON serializer really still worth putting effort into?
Like the previous, it’s probably time to take a deep breath and talk about re-designing; but unlike the previous, the person who designed the current version is probably still on the payroll. This usually isn’t a technical problem so much as it is a grief management one.
5. We solved a different problem.
Things change. You built the right thing at the time, but now you’ve got new customers, shifted markets, increased scale, maybe the feds passed a law. The business requirements have changed. Yesterday, this was the right thing, and now it isn’t.
For example: Maybe you had a perfectly functional app to sell mp3 files to customers to download and play on their laptops, and now you have to retrofit that into a subscription-based music streaming platform for smartphones.
This is a good problem to have! But you still gotta find a way to re-landscape that forest.
6. Context Drift.
There’s a pithy line that Legacy Code is “code without tests,” but I think that’s only part of the problem. Legacy code is code without continuity of philosophy. Why was it built this way? There’s no one left who knows! A system gets built in a certain context, and as time passes that context changes, and the further away we get from the original context, the more overgrown and weedy the system appears to become. Tests—good tests—are one way to preserve context, but not the only way.
A whole lot of what’s called “cruft” lands here, because it’s harder to read code than to write it. A lot of that “cruft” is congealed knowledge. That weird custom string utility that’s only used in the one place? Sure, maybe someone didn’t understand the standard library, or maybe you don’t know about the weekend the client API started handing back malformed data and they wouldn’t fix it—and even worse, this still happens at unpredictable times.
This is both the easiest and least glamorous to treat, because the trick here is documentation. Don’t just document what the code does, document why the code does what it does, why it was built this way. A very small amount of effort while something is being planted goes a long way towards making sure the context is preserved. As Henry Jones Sr. says, you write it down so you don’t have to remember.
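To make that concrete, here’s a minimal, entirely hypothetical sketch (the vendor behavior, the encoding bug, and the names are all invented) of the kind of comment that preserves context instead of restating the code:

```java
import java.nio.charset.StandardCharsets;

class ClientResponseFixup {

    // A "what" comment would just restate the line below. The "why" comment is
    // the part the next Forester actually needs.
    //
    // WHY: the upstream client API intermittently returns ISO-8859-1 bytes that
    // are labeled as UTF-8, and the vendor has declined to fix it. This re-decode
    // looks redundant, but removing it reintroduces garbled customer names at
    // unpredictable times. (Hypothetical example; substitute your own lost
    // weekend here.)
    static String fixEncoding(byte[] rawBody) {
        return new String(rawBody, StandardCharsets.ISO_8859_1);
    }
}
```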
To put all that another way: Documentation debt is still tech debt.
7. Not debt, just an old mistake.
The one no one likes to talk about. For whatever reason, someone didn’t do A-quality work. This isn’t necessarily because they were incompetent or careless; sometimes shit happens, you know? This is the flip side of the original Tech Debt category; it wasn’t on purpose, but sometimes people are in a hurry, or need to leave early, or just can’t think of anything better.
And so, for whatever reason, the doors aren’t straight, there’s a bunch of unpainted plywood, and those stairs aren’t up to code. Weeds everywhere. You gotta spend some time re-cutting your trails through the forest.
As we said at the start, each of those types of Overgrowth has its own root causes, but also needs a different kind of forest management. Next week, we start talking about techniques to keep the Overgrowth under control.
TV Rewatch: The Good Place
spoilers ahoy
We’ve been rewatching The Good Place. (Or rather, I’ve been rewatching it—I watched it on and off while it was on—everyone else around here is watching it for the first time.)
It is, of course, an absolute jewel. Probably the last great network comedy prior to the streaming/covid era. It’s a masterclass. In joke construction, in structure, in hiding jokes in set-dressing signs. It hits that sweet spot of being genuinely funny while also having recognizable human emotions, which tends to be beyond the grasp of most network sitcoms.
It’s also a case study in why you hire people with experience; Kristen Bell and Ted Danson are just outstanding at the basic skill of “starring in a TV comedy”, but have never been as good as they are here. Ted Danson especially is a revelation: he’s been on TV essentially my entire life, and he’s better than he’s ever been, in a way that feels like it’s because he finally has material good enough.
But on top of all that, it’s got a really interesting take on what being a “good person” means, and the implications thereof. It’s not just re-heated, half-remembered psychology classes; this is a show made by people who have really thought about it. Philosophers get name-dropped, but in a way that indicates that the people writing the show have actually read the material and absorbed it, instead of just leaving a blank spot in the script that said TECH.
Continuing with that contrasting example, Star Trek: The Next Generation spent hours on hours talking about emotions and ethics and morality, but never had an actual take on the concept, beyond a sort of mealy-mouthed “emotions are probably good, unless they’re bad?” and never once managed to be as insightful as the average joke in TGP. It’s great.
I’m gonna put a horizontal line here and then do some medium spoilers, so if you never watched the show you should go do something about that instead of reading on.
...
The Good Place has maybe my all-time favorite piece of narrative sleight of hand. (Other than the season of Doctor Who that locked into place around the Tardis being all four parts of “something old, something new, something borrowed, something blue.”)
In the very first episode, a character tells something to another character—and by extension the audience. That thing is, in fact, a lie, but neither the character nor the audience have any reason to doubt it. The show then spends the rest of the first season absolutely screaming at the audience that this was a lie, all while trusting that the audience won’t believe their lying eyes and ignore the mounting evidence.
So, when the other shoe finally drops, it manages to be both a) a total surprise and b) obviously true. I can’t think of another example of a show that so clearly gives the audience everything they need to know, but trusts them not to put the pieces together until the characters do.
And then, it came back for another season knowing that the audience was in on “the secret” and managed to both be a totally new show and the same show it always was at the same time. It’s a remarkable piece of work.
Checking In On Space Glasses
It’s been a while since we checked in on Space Glasses here at the old ‘cano, and while we weren’t paying attention this happened: Meta Unveils 'Orion' Augmented Reality Glasses.
Most of the discussion—rightly—has focused on the fact that they’re a non-production internal prototype that reportedly costs 10 Gs a pop. And they’re a little, cough, “rubenesque”?
Alert readers will recall I spent several years working on Space Glasses professionally, trying to help make fetch happen as it were, and I looked at a lot of prototype or near-prototype space glasses. These Orion glasses were the first time I sat up and said “ooh, they might be on to something, there.”
My favorite piece about the Orions was Ben Thompson at Stratechery’s Interview with Meta CTO Andrew Bosworth About Orion and Reality Labs, which included this phenomenal sentence:
Orion makes every other VR or AR device I have tried feel like a mistake — including the Apple Vision Pro.
There are a lot of things that make Space Glasses hard, but the really properly hard part is the lenses. You need something optically clear and distortion-free enough to do regular tasks while looking through, while also being able to display high-enough resolution content to be readable without eyestrain, and do all that while being spectacle lens–sized, staying in sync with each other, and doing it all while mounted in something roughly glasses-sized, and ideally without shooting lasers at anyone’s eyeballs or having a weird prism sidecar.
The rest of it: chunky bodies, battery life, software, those aren’t “Easy”, but making those better is a known quantity; it doesn’t require super-science, it’s “just work.”
I’m personally inclined to believe that a Steve Jobs–esque “one more thing” surprise reveal is more valuable than a John Sculley–style fantasy movie about Knowledge Navigators, but if I’d solved the core problem with space glasses while my main competitor was mired in a swamp full of Playthings for the Alone, I’d probably flex too.
Software Forestry 0x02: What We Talk About When We Talk About Tech Debt
Tell me if this sounds familiar: you start a new job, or roll onto a new project, or even have a job interview, and someone says in a slightly hushed tone, words to the effect of “Now, I don’t want to scare you, but we have a lot of tech debt.” Or maybe, “this system is all Legacy Code.” Usually followed by something like “but don’t worry! We’ve got a modernization effort!”
Everyone seems to be absolutely drowning in “tech debt”; hardly a day goes by where you don’t read another article about some system with some terrible problem that was caused by being out of date, deferred maintenance, “in debt.” We constantly joke about the fragile house-of-cards nature of basically everything. Everyone is hacking their way, pun absolutely intended, through overgrown forests.
There’s a lot to unpack from all that. Other engineering disciplines don’t beg to rebuild their bridges or chemical plants after a couple of years, but they also don’t need to; they build them to last. How does this happen? Why is it like this?
For starters, I think this is one of those places where our metaphors are leading us wrong.
I can’t now remember when I first heard the term Technical Debt. I think it was the early twenty-teens; the place I was working in the mid-aughts had a lot of tech debt, but I don’t ever remember anyone using that term, while the place I was working in the early teens also had a lot, and we definitely called it that.
One of the things metaphors are for is to make it easier to talk to people with a different background—software developers and business folks, for example. We might use different jargon in our respective areas of expertise, but if we can find an area of shared understanding, we can use that to talk about the same thing. “Debt” seems like a kind of obvious middle-ground: basically everyone who participates in the modern economy has a basic, gut-level understanding of how debt works.
Except, do they have the same understanding?
Personally, I think “debt” is a terrible metaphor, bordering on catastrophic. Here’s why: it has very, very different moral dimensions depending on who’s talking about it.
To the math and engineering types who popularized the term, “debt” is obviously bad, bordering on immoral. They’re the kind of people who played enough D&D as kids to understand how probability works, probably don’t gamble, probably pay off their credit cards in full every month. Obviously we don’t want to run up debt! We need to pay that back down! Can’t let it build up! Cue that scene in Ghostbusters where Egon is talking about the interest on Ray’s two mortgages.
Meanwhile, when the business-background folks making the decisions about where to put their investments hear that they can rack up “debt” to get features faster, but can pay it off in their own time with no measurable penalty or interest, they make the obvious-to-them choice to rack up a hell of a lot of it. They debt-financed the company with real money, why not the software with metaphorical currency? “We can do it, but that’ll add tech debt” means something completely different to the technical and business staff.
Even worse, “debt” as a metaphor implies that it ends. In real life, you can actually pay the debt off; pay off the house, end the car payment, pay back the municipal bonds, keep your credit cards up to date, whatever. But keeping your systems “debt free” is a process, a way of working, not really something you can save up and pay off.
I’m not sure any single metaphor has done more damage to our industry’s ability to understand itself than “tech debt.”
Of course, the definition of “tech debt” has expanded to encompass everything about a software system that makes it hard to work on or that the developers don’t like. “Cruft” is the word Fowler uses. “Tech debt”, “legacy”, “lack of maintenance” all kind of swirl into a big mish-mash, meaning, roughly, “old code that’s hard to work on.” Which makes it even less useful as a metaphor, because it covers a lot of different kinds of challenges, each of which calls for different techniques to treat and prevent. In fairness, Fowler takes a swing at categorizing tech debt via the Technical Debt Quadrant, which isn’t terrible, but is a little too abstract to reflect the lived reality.
This is a place where our Forestry metaphor offers up an obvious alternate metaphor: Overgrowth. Which gets close to the heart of what the problem feels like: that we built a perfectly fine system, and now, after no action on our part, it’s not fine. Weeds. There’s that sense that it gets worse when you’re not looking.
There’s something very vexing about this. As Joel said: As if source code rusted. But somehow, that system that was just fine not that long ago is old and hard to work on now. We talk about maintenance, but the kind of maintenance a computer system needs is very different from a giant engine that needs to get oiled or it’ll break down.
I think a big part of the reason why it seems so endemic nowadays is that there was a whole lot of appetite to rewrite “everything for the web” in either Java or .Net around the turn of the century, at the same time a lot of other software got rebuilt mostly from scratch to support, depending on your field, Linux, Mac OS X, or post-NT Windows. There hasn’t been a similar “replant the forest” mood since, so by the teens everyone had a decade-old system with no external impetus to rebuild it. For a lot of fields, this was the first point where we had to think in terms of long term maintenance instead of the OS vendor forcing a rebuild. (We all became mainframe programmers, metaphorically speaking.) And so, even though the Fowler article dates to ’03, and the term is older than that, “Tech Debt” became a twenty-teens concern. Construction stopped being the main concern, replaced with care and feeding.
Software Forests need a different kind of tending than the old rewrite-updates-rewrite again loop. As Foresters, we know the codebases we work on were here before us, and will continue on after us. There’s the occasional greenfield side project, the occasional big new feature, but mostly our job is to keep the forest healthy and hand it along to the next Forester. It takes a different, longer term, continuous world view than counting down the number of car payments left.
Of course, there’s more than one way a forest can get out of hand. Next Time: Types of Overgrowth, catalogued
is it still called a subtweet if you use your blog to do it
There’s that line, that’s incorrectly attributed to Werner Herzog, that goes “Dear America: You are waking up, as Germany once did, to the awareness that 1/3 of your people would kill another 1/3 while 1/3 watches.” (It was actually the former @WernerTwertzog.)
But sure, false attribution aside, I grew up hearing stories about that. The reason my family lives on this side of the planet is because of reasons strongly adjacent to that. Disappointing, but not surprising, when that energy rears back up.
What does keep surprising me over the last few years is that I never expected that last third to be so smug about it.
The next Dr Who Blu Ray release is… Blake’s 7?
It turns out the next Doctor Who blu-ray release is… the first season of Blake’s 7? Wait, what? Holy Smokes!
I describe Blake's 7 as “the other, other, other, British science fiction show”, implicitly after Doctor Who, The Hitchhiker's Guide to the Galaxy, and Red Dwarf. Unlike those other three, Blake didn’t get widespread PBS airings in the US (I’m not sure my local PBS channel ever showed it, and it ran everything.)
Which is a shame, because it deserves to be better known. The elevator pitch is essentially “The Magnificent Seven/Seven Samurai in space”; a group of convicts, desperadoes, and revolutionaries lead a revolt against the totalitarian Earth Federation. In a move that could only be done in the mid-70s, the “evil Federation” is blatantly the Federation from Star Trek, rotted out and gone fascist, following a long line of British SF about fascism happening “here.”
It was made almost entirely by people who had previously worked on Doctor Who, and it shows; while there was never a formal crossover, the entire show feels like a 70s Who episode where the TARDIS just never lands and things keep getting worse. My other joke, though, is that whereas Doctor Who’s budget was whatever change they could find in the BBC lobby couch cushions, Blake’s budget was whatever Doctor Who didn’t use. It’s almost hypnotically low budget, with some episodes so cheap that they seem more like avant garde theatre than they do a TV show whose reach is exceeding its grasp.
On the other hand, it’s got some of the best writing of all time, great characters, great acting. It revels in shades of gray and moral ambiguity decades before that came into vogue. And without spoiling anything, it has one of the all-time great last episodes of any show. It’s really fun. It’s a show I always want to recommend, but I’m not sure it ever got a real home video release in North America.
So a full, plugs-out release is long overdue. The same team that does the outstanding Doctor Who blu-ray sets is doing this; same level of restoration, same kind of special features. Apparently, they’re doing “updated special effects”, except some of the original effects team came out of retirement and they’re shooting new model work? Incredible. The real shame is that so many of the people behind the show have since passed; both main writers, several of the actors, including the one who played the best character. Hopefully there’s some archive material to fill in the gaps.
Blake ran for 4 years; presumably the Doctor Who releases will stay at 2 a year, with Blake getting that third slot.
and another thing: they’re not stranded. please don’t put in the newspaper that they’re stranded.
Well, three months into an eight-day mission, the Boeing Starliner made it back to Earth, leaving its former crew hanging around at the orbital truckstop until February. What a bizarre episode in the history of human spaceflight, although my favorite part was when the spaceship started making strange noises.
It’s surprisingly hard to find a number for the actual amount of money NASA has handed Boeing so far for this rickety-ass “space” “ship”, but it seems to be somewhere in the $3–4 billion range?
And I know, in the annals of US tax dollars being misspent that number is very small, but while all this is going on the Chandra X-ray Observatory is basically holding a bake sale to stay in operation? Imagine what that money could have been spent on! That’s basically two full space telescopes, or one telescope with enough money left over to staff it forever. That’s two or three deep space probes. That’s a whole lot of fun science we could have done, instead of paying for an empty capsule cooling in the middle of the desert.
Software Forestry 0x01: Somewhere Between a Flower Pot and a Rainforest
“Software” covers a lot of ground. Just as there are a lot of different kinds and ecosystems of forests, there are a lot of kinds and ecosystems of software. And like forests, each of those kinds of software has its own goals, objectives, constraints, rules, needs.
One of the big challenges when reading about software “processes” or “best practices” or even just plain general advice is that people so rarely state up front what kind of software they’re talking about. And that leads to a lot of bad outcomes, where people take a technique or a process or an architecture that’s intrinsically linked to its originating context out of that context, recommend it, and then it gets applied to situations that are wildly inappropriate. Just like “leaves falling off” means something very different in an evergreen redwood forest than it does in one full of deciduous oak trees, different kinds of software projects need different care. As practitioners, it’s very easy for us to talk past each other.
(This generally gets cited in cases like “if you aren’t a massive social network with a dedicated performance team you probably don’t need React,” but also, pop quiz: what kind of software were all the signers of the Agile Manifesto writing at the time they wrote and signed it?1)
So, before we delve into the practice of Software Forestry, let’s orient ourselves in the landscape. What kinds of software are there?
As usual for our industry, one of the best pieces written on this is a twenty-year-old Joel on Software article,2 where he breaks software up into Five Worlds:
- Shrinkwrap (which he further subdivides into Open Source, Consultingware, and Commercial web based)
- Internal
- Embedded
- Games
- Throwaway
And that's still a pretty good list! I especially like the way he buckets not based on design or architecture but more on economic models and business constraints.
I’d argue that in the years since that was written, “Commercial web-based” has evolved to be more like what he calls “Internal” than “Shrinkwrap”; or more to the point, those feel less like discrete categories than they do like convenient locations on a continuous spectrum. Widening that out a little, all five of those categories feel like the intersections of several spectrums.
I think spectrums are a good way to view the landscape of modern software development. Not discrete buckets or binary yes/no questions, but continuous ranges where various projects land somewhere in between the extremes.
And so, in the spirit of an enthusiastic “yes, and”, I’d like to offer up what I think are the five most interesting or influential spectrums for talking about kinds of software, which we can express as questions sketching out a left-to-right spectrum:
- Is it a Flower Pot or a Sprawling Forest?
- Does it Run on the Customer’s Computers or the Company’s Computers?
- Are the Users Paid to Use It or do they Pay to Use It?
- How Often Do Your Customers Pay You?
- How Much Does it Matter to the Users?
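To make the “spectrums, not buckets” idea a little more concrete, here's a toy sketch in Python. Everything in it (the class, the field names, the 0-to-1 scale, the example numbers) is my own illustration, not anything from Joel's piece or a real tool; it's just one way to picture a project as a handful of positions rather than a single category.

```python
from dataclasses import dataclass

# Toy illustration only: each field is a position on one of the five spectrums,
# where 0.0 is the left end of the question and 1.0 is the right end.
@dataclass
class SoftwareProject:
    name: str
    complexity: float       # flower pot (0.0) ... sprawling forest (1.0)
    runs_on: float          # customer's computers (0.0) ... company's computers (1.0)
    users_pay: float        # users are paid to use it (0.0) ... users pay to use it (1.0)
    payment_cadence: float  # pay once (0.0) ... continuous subscription (1.0)
    criticality: float      # a glitch is a shrug (0.0) ... someone can't feed their kids (1.0)

# Rough, made-up placements for two very different projects:
dmv_portal = SoftwareProject("DMV renewal site", 0.6, 1.0, 0.2, 0.0, 0.7)
mobile_game = SoftwareProject("match-three game", 0.3, 0.4, 1.0, 0.8, 0.1)
```

The numbers themselves don't matter; the point is that two projects can share a tech stack and still sit in completely different places on that map, and advice that works for one can be poison for the other.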
Is it a Flower Pot or a Sprawling Forest?
This isn't about size or scale, necessarily, as much as it is about overall “complexity”, the number of parts. On one end, you have small, single-purpose scripts running on one machine; on the other end, you have sprawling systems with multiple farms or clusters interacting with each other over custom messaging buses.
How many computers does it need? How many different applications work together? Different languages? How many versions do you have to maintain at once? What scale does it operate at?3 How many people can draw an accurate diagram from memory?
This has huge impacts on not only the technology, but things like team structure, coordination, and planning. Joel’s Shrinkwrap and Internal categories are on the right here, the other three are more towards the left.
Does it Run on the Customer’s Computers or the Company’s Computers?
To put that another way, how much of it works without an internet connection? Almost nothing is on one end or the other; no one ships dumb terminals or desktop software that can’t call home anymore.
Web apps are pretty far to the right, depending on how complex the in-browser client app is. Mobile apps are usually in the middle somewhere, with a strong dependency on server-side resources, but also will usually work in airplane mode. Single-player Games are pretty far to the left, only needing server components for things like updates and achievement tracking; multiplayer starts moving right. Embedded software is all the way to the left. Joel’s Shrinkwrap is left of center, Internal is all the way to the right.
This has huge implications for development processes; as an example, I started my career in what we then called “Desktop Software”. Deployment was an installer which got burned to a disk. Spinning up a new test system was unbelievably easy: pull a fresh copy of the installer and install it into a VM! Working in a microservice mesh environment, there are days when that feels like the software equivalent of Greek fire, a secret long lost. In a world of sprawling services, spinning up a new environment is sometimes an insurmountable task.
A final way to look at this: how involved do your users have to be with an update?
Are the Users Paid to Use It or do they Pay to Use It?
What alternatives do the people actually using the software have? Can they use something else? A lot of times you see this talked about as “IT vs commercial,” but it's broader than that. On the extreme ends here, the user can always choose to play a different mobile game, but if they want to renew their driver's license, the DMV webpage is the only game in town. And the software their company had custom built to do their job is even less optional.
Another very closely related way of looking at this: Are your Customers and Users the same people? That is, are the people looking at the screen and clicking buttons the same people who cut the check to pay for it? The oft-repeated “if you’re not the customer you’re the product” is a point center-left of this spectrum.
The distance between the people paying and the people using has profound effects on the design and feedback loops for a software project. As an extreme example, one of the major—maybe the most significant—differences between Microsoft and Apple is that Microsoft is very good at selling things to CIOs, and Apple is very good at selling things to individuals, and neither is any good at selling things the other direction.
Bluntly, the things your users care about and that you get feedback on are very, very different depending on if they paid you or if they’re getting paid to use it.
Joel’s Internal category is all the way to the left here, the others are mostly over on the right side.
How Often Do Your Customers Pay You?
This feels like the aspect that's exploded in complexity since that original Joel piece. The traditional answer to this was “once, and maybe a second time for big upgrades.” Now though, you've got subscriptions, live service models, “in-app purchases”, and a whole universe of models around charging a middle-man fee on other transactions. This gets even stranger for internal or mostly-internal tools; in my corporate life, I describe this spectrum as a line where the two ends are labeled “CAPEX” and “OPEX”.
Joel’s piece doesn’t really talk about business models, but the assumption seems to be a turn-of-the-century Microsoft “pay once and then for upgrades” model.
How Much Does it Matter to the Users?
Years and years ago, I worked on one of the computer systems backing the State of California’s welfare system. And on my first day, the boss opened with “however you feel about welfare, politically, if this system goes down, someone can’t feed their kids, and we’re not going to let that happen.” “Will this make a kid hungry” infused everything we did.
Some software matters. Embedded pacemakers. The phone system. Fly-by-wire flight control. Banks.
And some, frankly, doesn’t. If that mobile game glitches out, well, that’s annoying, but it was almost my appointment time anyway, you know?
Everyone likes to believe that what they're working on is very important, but they also like to be able to say “look, this isn't aerospace” as a way to skip more testing. And that's okay, there's a lot of software that if it goes down for an hour or two, or glitches out on launch and needs a patch, that's not a real problem. A minor inconvenience for a few people, forgotten about the next day.
As always, it’s a spectrum. There’s plenty of stuff in the middle: does a restaurant website matter? In the grand scheme of things, not a lot, but if the hours are wrong that’ll start having an impact on the bottom line. In my experience, there’s a strong perception bias towards the middle of this spectrum.
Joel touches on this with Embedded, but mostly seems to be fairly casual about how critical the other categories are.
There are plenty of other possible spectrums, but over the last twenty years those are the ones I’ve found myself thinking about the most. And I think the combination does a reasonable job sketching out the landscape of modern software.
A lot of things in software development are basically the same regardless of what kind of software you’re developing, but not everything. Like Joel says, it’s not like Id was hiring consultants to make UML diagrams for DOOM, and so it’s important to remember where you are in the landscape before taking advice or adopting someone’s “best practices.”
As follows from the name, Software Forestry is concerned with forests—the bigger systems, with a lot of parts, that matter, with paying customers. In general, the things more on the right side of those spectrums.
As Joel said 22 years ago, we can still learn something from each other regardless of where we all stand on those spectrums, but we need to remember where we’re standing.
Next Time: What We Talk About When We Talk About Tech Debt
- I don’t know, and the point is you don’t either, because they didn’t say.
- This almost certainly won't be the last Software Forestry post to act as extended midrash on a Joel On Software post.
- Is it web scale?
That’s How You Do That
I've never been convinced that “debates” are a useful contribution to presidential campaigns; the idea that the candidates are going to do some kind of good-faith high school debate club show is the same kind of pundit-class galaxy-brained take as “there are undecided voters.” But then again, we've found ourselves with a system where the most powerful person in the world is selected by 6,000 low-information people in rural Pennsylvania, so that results in some strange artifacts.
That said.
There’s your choice America. I can’t think of another occasion with that stark a contrast between candidates for anything. Both the best and the worst debate performance I’ve ever seen, on the same stage. Once again, Harris is proving that the way to deal with the convicted felon is to call him on his bullshit as clearly as possible and to his face. You love to see it.
With that said, I wish, I really wish, that some debate moderator would open with “so, we all know this isn’t about policy, this is about appearances and vibes, so I’m going to abandon the prepared questions and open with this: What’s your best joke?” Maybe move into the Voight-Kampff questions after that.
Internet Archive Loses Appeal
In an unsurprising but nevertheless depressing ruling, the Internet Archive has lost its appeal in the case about their digital library. (Ars, Techdirt.)
So let me get this straight: out of everything that happened in 2020, the only people facing any kind of legal consequences are the Internet Archive, for, checks notes, letting people read some books?
Software Forestry 0x00: Time For More Metaphors
Software is a young field. Creating software as a mainstream profession is barely 70 years old, depending on when you start counting. Its legends are still, if just, living memory.
Young enough that it still doesn’t have much of its own language. Other than the purely technical jargon, it’s mostly borrowed words. What’s the verb for making software? Program? Develop? Write? Similarly, what’s the name for someone who makes software? Programmer? Developer? We’ve settled, more or less, on Engineer, but what we do has little in common with other branches of engineering. Even the word “computer” is borrowed; not that long ago a computer was something like an accountant, a person who computed.1 None of this is a failing, but it is an indication of how young a field this is.
This extends to the metaphors we use to talk about the practice of creating that software. Metaphors are a cognitive shortcut, a way to borrow a different context to make the current one easier to talk about. But they can also be limiting; you can trap yourself in the boundaries of the context you borrowed.
Not that we’re short on metaphors, far from it! In keeping with the traditions of American Business, we use a lot of terms from both Sports (“Team”, “Sprint,” “Scrum”) and the Military (“Test Fire,” “Strategy vs. Tactics”). The seminal Code Complete proposed “Construction”. Knuth called it both an Art and a branch of Literature. We co-opted the term “Architecture” to talk about larger designs. In recent years, you see a lot of talk about “Craft.” “Maintenance-Oriented Programming.” For a while, I used movies. (The spec is the script! Specialized roles all coming together! But that was a very leaky abstraction.)
The wide spread of metaphors in use shows how slippery software can be, how flexible it is conceptually. We haven’t quite managed to grow a set of terms native to the field, so we keep shopping around looking for more.
I bring this up because what’s interesting about the choice of metaphors isn’t so much the direct metaphors themselves but the way they reflect the underlying philosophy of the people who chose them.
There’s two things I don’t like about a lot of those borrowed metaphors. First, most of them are Zero Sum. They assume someone is going to lose, and maybe even worse, they assume that someone is going to win. I’d be willing to entertain that that might be a useful way to think about a business as a whole in some contexts, but for a software team, that’s useless to the point of being harmful. There’s no group of people a normal software team interacts with that they can “beat”. Everyone succeeds and fails together, and they do it over the long term.
Second, most of them assume a clearly defined end state: win the game, win the battle, finish the building. Most modern software isn’t like that. It doesn’t get put in a box in Egghead anymore. Software is an ongoing effort, it’s maintained, updated, tended. Even software that’s not a service gets ongoing patches, subscriptions, DLC, the next version. There isn’t a point where it is complete, so much as ongoing refinement and care. It’s nurtured. Software is a continuous practice of maintenance and tending.
As such, I'm always looking for new metaphors; new ways of thinking about how we create, maintain, and care for software. This is something I've spent a lot of time stewing on over the last two decades and change. I've watched a lot of otherwise smart people fail to find a way to talk about what they were doing because they didn't have the language. To quote every infomercial: there has to be a better way.
What are we looking for? Situations where groups of people come together to accomplish a goal, something fundamentally creative, but with strict constraints, both physical and by convention. Where there’s competition, but not zero sum, where everyone can be successful. Most importantly, a conceptual space that assumes an ongoing effort, without a defined ending. A metaphor backed by a philosophy centered around long-term commitment and the way software projects sprawl and grow.
“Gardening” has some appeal here, but that’s a little too precious and small-scale, and doesn’t really capture the team aspect.2 We want something larger, with people working together, assuming a time scale beyond a single person’s career, something focused on sustainable management.
So, I have a new software metaphor to propose: Software Forestry.
These days, software isn't built so much as it's grown, increment by increment. Software systems aren't a garden, they're a forest, filled with a whole ecosystem of different inhabitants, with different sizes, needs, uses. It's tended by a team of practitioners who—like foresters—maintain its health and shape that growth. We're not engineers as much as caretakers. New shoots are tended, branches pruned, fertilizer applied, older plants taken care of, the next season's new trees planned for. But that software isn't there for its own sake, and as foresters we're most concerned with how that software can serve people. We're focused on sustainability; we know now that the new software we write today is the legacy software of tomorrow. Also “Software Forestry” means the acronym is SWF, which I find hilarious. And personally, I really like trees.3 Like with trees, if we do our jobs right this stuff will still be there long after we've moved on.
It’s easy to get too precious about this, and let the metaphor run away with you; that’s why there were all those Black Belts and Ninjas running around a few years ago. I’m not going to start an organization to certify Software Rangers.4 But I think a mindset around care and tending, around seasons, around long-term stewardship, around thinking of software systems as ecosystems, is a much healthier relationship to the software industry we actually have than telling your team with all seriousness that we have to get better at blocking and tackling. We’re never going to win a game, because there’s no game to win. But we might grow a healthy forest of software, and encourage healthier foresters.
Software Forestry is a new weekly feature on Icecano. Join us on Fridays as we look at approaches to growing better software. Next Time: What kind of software forests are we growing?
- My grandmother was a “civilian computer” during the war; she computed tables describing when and how to release bombs from planes to hit a target. The bombs in those tables were larger than normal, needing new tables computed late in the war. She thought nothing of it at the time, but years later realized she had been working out tables for atomic bombs. Her work went unused; she became a minister.
- Gardening seems to pop up every couple of years; searching the web turns up quite a few abandoned swings at Software Gardening as a concept.
- I did briefly consider “Software Arborists”, but that’s a little too narrow.
- Although I assume the Dúnedain would make excellent programmers.
Ableist, huh?
Well! Hell of a week to decide I’m done writing about AI for a while!
For everyone playing along at home, NaNoWriMo, the nonprofit that grew up around the National Novel Writing Month challenge, has published a new policy on the use of AI, which includes this absolute jaw-dropper:
We also want to be clear in our belief that the categorical condemnation of Artificial Intelligence has classist and ableist undertones, and that questions around the use of AI tie to questions around privilege.
Really? Lack of access to AI is the only reason “the poors” haven't been able to write books? This is the thing that's going to improve access for the disabled? It's so blatantly “we got a payoff, and we're using lefty language to deflect criticism,” so disingenuous, and in such bad faith, that the only appropriate reaction is “hahahha Fuck You.”
That said, my absolute favorite response was El Sandifer on Bluesky:
"Fucking dare anyone to tell Alan Moore, to his face, that working class writers need AI in order to create."; immediately followed by "“Who the fuck said that I’ll fucking break his skull open” said William Blake in a 2024 seance."
It’s always a mistake to engage with Bad Faith garbage like this, but I did enjoy these attempts:
You Don't Need AI To Write A Novel - Aftermath
NaNoWriMo Shits The Bed On Artificial Intelligence – Chuck Wendig: Terribleminds
There’s something extra hilarious about the grifters getting to NaNoWriMo—the whole point of writing 50,000 words in a month is not that the world needs more unreadable 50k manuscripts, but that it’s an excuse to practice, you gotta write 50k bad words before you can get to 50k good ones. Using AI here is literally bringing a robot to the gym to lift weights for you.
If you’re the kind of ghoul that wants to use a robot to write a book for you, that’s one (terrible) thing, but using it to “win” a for-fun contest that exists just to provide a community of support for people trying to practice? That’s beyond despicable.
The NaNoWriMo organization has been a mess for a long time; it's a classic volunteer-run non-profit where the founders have moved on and the replacements have been… poor. It's been a scandal engine for a decade now, and they've fired everyone and brought in new people at least once? And the fix is clearly in; NaNoWriMo got a new Executive Director this year, and the one thing the “AI” “Industry” has at the moment is gobs of money.
I wonder how small the bribe was. Someone got handed a check, excuse me, a “sponsorship”, and I wonder how embarrassingly, enragingly small the number was.
I mean, any amount would be deeply disgusting, but if it was “all you have to do is sell out the basic principles of the non-profit you're now in charge of and you can live in luxury for the rest of your life,” that's still terrible, but at least I would understand. But you know, you know, however much money changed hands was pathetically small.
These are the kind of people who should be hounded out of any functional civilization.
And then I wake up to the news that Oprah is going to host a prime time special on The AI? Ahhhh, there we go, that’s starting to smell like a Matt Damon Superbowl Ad. From the guest list—Bill Gates?—it’s pretty clearly some high-profile reputation laundering, although I’m sure Oprah got a bigger paycheck than those suckers at NaNoWriMo. I see the discourse has already decayed through a cycle of “should we pre-judge this” (spoiler: yes) and then landed on whether or not there are still “cool” uses for AI. This is such a dishonest deflection that it almost takes my breath away. Whether or not it’s “cool” is literally the least relevant point. Asbestos was pretty cool too, you know?
And Another Thing… AI Postscript
I thought I was done talking about The AI for a while after last week's “Why is this Happening” trilogy (Part I, Part II, Part III), but The AI wasn't done with me just yet.
First, in one of those great coincidences, Ted Chiang has a new piece on AI in the New Yorker, Why A.I. Isn't Going to Make Art (and yeah, that's behind a paywall, but cough).
It’s nice to know Ted C. and I were having the same week last week! It’s the sort of piece where once you start quoting it’s hard to stop, so I’ll quote the bit everyone else has been:
The task that generative A.I. has been most successful at is lowering our expectations, both of the things we read and of ourselves when we write anything for others to read. It is a fundamentally dehumanizing technology because it treats us as less than what we are: creators and apprehenders of meaning. It reduces the amount of intention in the world.
Intention is something he locks onto here; creative work is about making lots of decisions as you do the work which can’t be replaced by a statistical average of past decisions by other people.
Second, continuing the weekend of coincidences, the kids and I went to an Anime convention this past weekend. We went to a panel on storyboarding in animation, which was fascinating, because storyboarding doesn’t quite mean the same thing in animation that it does in live-action movies.
At one point, the speaker was talking about a character in a show he had worked on named “Ai”, and specified he meant the name, not the two letters as an abbreviation, and almost reflexively spat out “I hate A. I.!” between literally gritted teeth.
Reader, the room—which was packed—roared in approval. It was the kind of noise you’d expect to lead to a pitchfork-wielding mob heading towards the castle above town.
Outside of the more galaxy-brained corners of the wreckage of what used to be called twitter or pockets of techbros, real people in the real world hate this stuff. I can’t think of another technology from my lifetime that has ever gotten a room full of people to do that. Nothing that isn’t armed can be successful against that sort of disgust; I think we’re going to be okay.
Happy Bell Riots to All Who Celebrate
Stay safe out there during one of the watershed events of the 21st century! I was going to write something about how the worst dystopia Star Trek could imagine in the mid-90s is dramatically, breathtakingly better than the future we actually got, but jwz has the roundup of people who already did.
Can you imagine the real San Francisco of 2024 setting aside a couple of blocks for homeless people to live? To hand out ration cards? For there to be infrastructure?
Like all good Science Fiction, Deep Space Nine doesn’t say a lot about the future, but it sure says an awful lot about the time in which it was written.
Why is this Happening, Part III: Investing in Shares of a Stairway to Heaven
We’ve talked a lot about “The AI” here at Icecano, mostly in terms ranging from “unflattering” to “extremely unflattering.” Which is why I’ve found myself stewing on this question the last few months: Why is this happening?
The easy answer is that, for starters, it's a scam, a con. That goes hand-in-hand with it also being a hype-fueled bubble, which is finally starting to show signs of deflating. We're not quite at the “Matt Damon in Superbowl ads” phase yet, but I think we're closer than not to the bubble popping.
Fad-tech bubbles are nothing new in the tech world, in recent memory we had similar grifts around the metaverse, blockchain & “web3”, “quantum”, self-driving cars. (And a whole lot of those bubbles all had the same people behind them as the current one around AI. Lots of the same datacenters full of GPUs, too!) I’m also old enough to remember similar bubbles around things like bittorrent, “4gl languages”, two or three cycles on VR, 3D TV.
This one has been different, though. There’s a viciousness to the boosters, a barely contained glee at the idea that this will put people out of work, which has been matched in intensity by the pushback. To put all that another way, when ELIZA came out, no one from MIT openly delighted at the idea that they were about to put all the therapists out of work.
But what is it about this one, though? Why did this ignite in a way that those others didn’t?
A sentiment I see a lot, as a response to AI skepticism, is to say something like “no no, this is real, it’s happening.” And the correct response to that is to say that, well, asbestos pajamas really didn’t catch fire, either. Then what happened? Just because AI is “real” it doesn’t mean it’s “good”. Those mesothelioma ads aren’t because asbestos wasn’t real.
(Again, these tend to be the same people who a few years back had a straight face when they said they were “bullish on bitcoin.”)
But there's another sentiment I see a lot that I think is standing behind that one: that this is the “last new tech we'll see in our careers”. This tends to come from younger Xers & elder Millennials, folks who were just slightly too young to make it rich in the dot com boom, but old enough that they thought they were going to.
I think this one is interesting, because it illuminates part of how things have changed. From the late 70s through sometime in the 00s, new stuff showed up constantly, and more importantly, the new stuff was always better. There’s a joke from the 90s that goes like this: Two teams each developed a piece of software that didn’t run well enough on home computers. The first team spent months sweating blood, working around the clock to improve performance. The second team went and sat on a beach. Then, six months later, both teams bought new computers. And on those new machines, both systems ran great. So who did a better job? Who did a smarter job?
We all got absolutely hooked on the dopamine rush of new stuff, and it's easy to see why; I mean, there were three extra verses of “We Didn't Start the Fire” just in the 90s alone.
But a weird side effect is that as a culture of practitioners, we never really learned how to tell if the new thing was better than the old thing. This isn't a new observation; Microsoft figured out how to weaponize this early on as Fire And Motion. And I think this has really driven the software industry's tendency towards “fad-oriented development”; we never built up a herd immunity to shiny new things.
A big part of this, of course, is that the tech press profoundly failed. A completely un-skeptical, overly gullible press that was infatuated with shiny gadgets foisted a whole parade of con artists and scamtech on all of us, abdicating any duty they had to investigate accurately instead of just laundering press releases. The Professionally Surprised.
And for a long while, that was all okay, the occasional CueCat notwithstanding, because new stuff generally was better, and even if it was only marginally better, there was often a lot of money to be made by jumping in early. Maybe not “private island” money, but at least “retire early to the foothills” money.
But then somewhere between the Dot Com Crash and the Great Recession, things slowed down. Those two events didn’t help much, but also somewhere in there “computers” plateaued at “pretty good”. Mobile kept the party going for a while, but then that slowed down too.
My Mom tells a story about being a teenager while the Beatles were around, and how she grew up in a world where every nine months pop music was reinvented, like clockwork. Then the Beatles broke up, the 70s hit, and that all stopped. And she’s pretty open about how much she misses that whole era; the heady “anything can happen” rush. I know the feeling.
If your whole identity and worldview about computers as a profession is wrapped up in diving into a Big New Thing every couple of years, it’s strange to have it settle down a little. To maintain. To have to assess. And so it’s easy to find yourself grasping for what the Next Thing is, to try and get back that feeling of the whole world constantly reinventing itself.
But missing the heyday of the PC boom isn’t the reason that AI took off. But it provides a pretty good set of excuses to cover the real reasons.
Is there a difference between “The AI” and “Robots?” I think, broadly, the answer is “no;” but they’re different lenses on the same idea. There is an interesting difference between “robot” (we built it to sit outside in the back seat of the spaceship and fix engines while getting shot at) and “the AI” (write my email for me), but that’s more about evolving stories about which is the stuff that sucks than a deep philosophical difference.
There’s a “creative” vs “mechanical” difference too. If we could build an artificial person like C-3PO I’m not sure that having it wash dishes would be the best or most appropriate possible use, but I like that as an example because, rounding to the nearest significant digit, that’s an activity no one enjoys, and as an activity it’s not exactly a hotbed of innovative new techniques. It’s the sort of chore it would be great if you could just hand off to someone. I joke this is one of the main reasons to have kids, so you can trick them into doing chores for you.
However, once “robots” went all-digital and became “the AI”, they started having access to this creative space instead of the physical-mechanical one, and the whole field backed into a moral hazard I’m not sure they noticed ahead of time.
There’s a world of difference between “better clone stamp in photoshop” and “look, we automatically made an entire website full of fake recipes to farm ad clicks”; and it turns out there’s this weird grifter class that can’t tell the difference.
Gesturing back at a century of science fiction thought experiments about robots, being able to make creative art of any kind was nearly always treated as an indicator that the robot wasn’t just “a robot.” I’ll single out Asimov’s Bicentennial Man as an early representative example—the titular robot learns how to make art, and this both causes the manufacturer to redesign future robots to prevent this happening again, and sets him on a path towards trying to be a “real person.”
We make fun of the Torment Nexus a lot, but it keeps happening—techbros keep misunderstanding the point behind the fiction they grew up on.
Unless I’m hugely misinformed, there isn’t a mass of people clamoring to wash dishes, kids don’t grow up fantasizing about a future in vacuuming. Conversely, it’s not like there’s a shortage of people who want to make a living writing, making art, doing journalism, being creative. The market is flooded with people desperate to make a living doing the fun part. So why did people who would never do that work decide that was the stuff that sucked and needed to be automated away?
So, finally: why?
I think there are several causes, all tangled.
These causes are adjacent to but not the same as the root causes of the greater enshittification—excuse me, “Platform Decay”—of the web. Nor are we talking about the largely orthogonal reasons why Facebook is full of old people being fooled by obvious AI glop. We’re interested in why the people making these AI tools are making them. Why they decided that this was the stuff that sucked.
First, we have this weird cultural stew where creative jobs are “desired” but not “desirable”. There's a lot of cultural cachet around being a “creator” or having a “creative” job, but not a lot of respect for the people actually doing them. So you get the thing where people oppose the writer's strike because they “need” a steady supply of TV, but the people who make it don't deserve a living wage.
Graeber has a whole bit adjacent to this in Bullshit Jobs. Quoting the originating essay:
It's even clearer in the US, where Republicans have had remarkable success mobilizing resentment against school teachers, or auto workers (and not, significantly, against the school administrators or auto industry managers who actually cause the problems) for their supposedly bloated wages and benefits. It's as if they are being told ‘but you get to teach children! Or make cars! You get to have real jobs! And on top of that you have the nerve to also expect middle-class pensions and health care?’
“I made this” has cultural power. “I wrote a book,” “I made a movie,” are the sort of things you can say at a party that get people to perk up; “oh really? Tell me more!”
Add to this thirty-plus years of pressure to restructure public education around “STEM”, because those are the “real” and “valuable” skills that lead to “good jobs”, as if the only point of education was as a job training program. A very narrow job training program, because again, we need those TV shows but don’t care to support new people learning how to make them.
There’s always a class of people who think they should be able to buy anything; any skill someone else has acquired is something they should be able to purchase. This feels like a place I could put several paragraphs that use the word “neoliberalism” and then quote from Ayn Rand, The Incredibles, or Led Zeppelin lyrics depending on the vibe I was going for, but instead I’m just going to say “you know, the kind of people who only bought the Cliffs Notes, never the real book,” and trust you know what I mean. The kind of people who never learned the difference between “productivity hacks” and “cheating”.
The sort of people who only interact with books as a source of isolated nuggets of information, the kind of people who look at a pile of books and say something like “I wish I had access to all that information,” instead of “I want to read those.”
People who think money should count at least as much, if not more than, social skills or talent.
On top of all that, we have the financialization of everything. Hobbies for their own sake are not acceptable, everything has to be a side hustle. How can I use this to make money? Why is this worth doing if I can't do it well enough to sell it? Is there a bootcamp? A video tutorial? How fast can I start making money at this?
Finally, and critically, I think there’s a large mass of people working in software that don’t like their jobs and aren’t that great at them. I can’t speak for other industries first hand, but the tech world is full of folks who really don’t like their jobs, but they really like the money and being able to pretend they’re the masters of the universe.
All things considered, “making computers do things” is a pretty great gig. In the world of Professional Careers, software sits at the sweet spot of “amount you actually have to know & how much school you really need” vs “how much you get paid”.
I’ve said many times that I feel very fortunate that the thing I got super interested in when I was twelve happened to turn into a fully functional career when I hit my twenties. Not everyone gets that! And more importantly, there are a lot of people making those computers do things who didn’t get super interested in computers when they were twelve, because the thing they got super interested in doesn’t pay for a mortgage.
Look, if you need a good job, and maybe aren’t really interested in anything specific, or at least in anything that people will pay for, “computers”—or computer-adjacent—is a pretty sweet direction for your parents to point you. I’ve worked with more of these than I can count—developers, designers, architects, product people, project managers, middle managers—and most of them are perfectly fine people, doing a job they’re a little bored by, and then they go home and do something that they can actually self-actualize about. And I suspect this is true for a lot of “sit down inside email jobs,” that there’s a large mass of people who, in a just universe, their job would be “beach” or “guitar” or “games”, but instead they gotta help knock out front-end web code for a mid-list insurance company. Probably, most careers are like that, there’s the one accountant that loves it, and then a couple other guys counting down the hours until their band’s next unpaid gig.
But one of the things that makes computers stand out is that those accountants all had to get certified. The computer guys just needed a bootcamp and a couple weekends worth of video tutorials, and suddenly they get to put “Engineer” on their resume.
And let’s be honest: software should be creative, usually is marketed as such, but frequently isn’t. We like to talk about software development as if it’s nothing but innovation and “putting a dent in the universe”, but the real day-to-day is pulling another underwritten story off the backlog that claims to be easy but is going to take a whole week to write one more DTO, or web UI widget, or RESTful API that’s almost, but not quite, entirely unlike the last dozen of those. Another user-submitted bug caused by someone doing something stupid that the code that got written badly and shipped early couldn’t handle. Another change to government regulations that’s going to cause a remodel of the guts of this thing, which somehow manages to be a surprise despite the fact the law was passed before anyone in this meeting even started working here.
They don't have time to learn how that regulation works, or why it changed, or how the data objects were supposed to happen, or what the right way to do that UI widget is—the story is only three points, get it out the door or our velocity will slip!—so they find something they can copy, slap something together, write a test that passes, ship it. Move on to the next. Peel another one off the backlog. Keep that going. Forever.
And that also leads to this weird thing software has where everyone is just kind of bluffing everyone all the time, or at least until they can go look something up on stack overflow. No one really understands anything, just gotta keep the feature factory humming.
The people who actually like this stuff, who got into it because they liked making computers do things for their own sake, keep finding ways to make it fun, or at least different. “Continuous Improvement,” we call it. Or, you know, they move on, leaving behind all those people whose twelve-year-old selves would be horrified.
But then there’s the group that’s in the center of the Venn Diagram of everything above. All this mixes together, and in a certain kind of reduced-empathy individual, manifests as a fundamental disbelief in craft as a concept. Deep down, they really don’t believe expertise exists. That “expertise” and “bias” are synonyms. They look at people who are “good” at their jobs, who seem “satisfied” and are jealous of how well that person is executing the con.
Whatever they were into at twelve didn’t turn into a career, and they learned the wrong lesson from that. The kind of people who were in a band as a teenager and then spent the years since as a management consultant, and think the only problem with that is that they ever wanted to be in a band, instead of being mad that society has more open positions for management consultants than bass players.
They know which is the stuff that sucks: everything. None of this is the fun part; the fun part doesn’t even exist; that was a lie they believed as a kid. So they keep trying to build things where they don’t have to do their jobs anymore but still get paid gobs of money.
They dislike their jobs so much, they can't believe anyone else likes theirs. They don't believe expertise or skill is real, because they have none. They think everything is a con because that's what they do. Anything you can't just buy must be a trick of some kind.
(Yeah, the trick is called “practice”.)
These aren’t people who think that critically about their own field, which is another thing that happens when you value STEM over everything else, and forget to teach people ethics and critical thinking.
Really, all they want to be are “Idea Guys”, tossing off half-baked concepts and surrounded by people they don't have to respect and who won't talk back, who will figure out how to make a functional version of their ill-formed ramblings. That they can take credit for.
And this gets to the heart of what's so evil about the current crop of AI.
These aren’t tools built by the people who do the work to automate the boring parts of their own work; these are built by people who don’t value creative work at all and want to be rid of it.
As a point of comparison, the iPod was clearly made by people who listened to a lot of music and wanted a better way to do so. Apple has always been unique in the tech space in that it works more like a consumer electronics company; the vast majority of its products are clearly made by people who would themselves be an enthusiastic customer. In this field we talk about “eating your own dog-food” a lot, but if you're writing a claims processing system for an insurance company, there's only so far you can go. Making a better digital music player? That lets you think different.
But no: AI is all being built by people who don’t create, who resent having to create, who resent having to hire people who can create. Beyond even “I should be able to buy expertise” and into “I value this so little that I don’t even recognize this as a real skill”.
One of the first things these people tried to automate away was writing code—their own jobs. These people respect skill, expertise, craft so little that they don’t even respect their own. They dislike their jobs so much, and respect their own skills so little, that they can’t imagine that someone might not feel that way about their own.
A common pattern has been how surprised the techbros have been at the pushback. One of the funnier (in a laugh so you don’t cry way) sideshows is the way the techbros keep going “look, you don’t have to write anymore!” and every writer everywhere is all “ummmmm, I write because I like it, why would I want to stop” and then it just cuts back and forth between the two groups saying “what?” louder and angrier.
We’re really starting to pay for the fact that our civilization spent 20-plus years shoving kids that didn’t like programming into the career because it paid well and you could do it sitting down inside and didn’t have to be that great at it.
What future are they building for themselves? What future do they expect to live in, with this bold AI-powered utopia? Some vague middle-management “Idea Guy” economy, with the worst people in the world summoning books and art and movies out of thin air for no one to read or look at or watch, because everyone else is doing the same thing? A web full of AI slop made by and for robots trying to trick each other? Meanwhile the dishes are piling up? That’s the utopia?
I’m not sure they even know what they want, they just want to stop doing the stuff that sucks.
And I think that’s our way out of this.
What do we do?
For starters, AI Companies need to be regulated, preferably out of existence. There’s a flavor of libertarian-leaning engineer that likes to say things like “code is law,” but actually, turns out “law” is law. There’s whole swathes of this that we as a civilization should have no tolerance for; maybe not to a full Butlerian Jihad, but at least enough to send deepfakes back to the Abyss. We dealt with CFCs and asbestos, we can deal with this.
Education needs to be less STEM-focused. We need to carve out more career paths (not “jobs”, not “gigs”, “careers”) that have the benefits of tech but aren’t tech. And we need to furiously defend and expand spaces for creative work to flourish. And for that work to get paid.
But those are broad, society-wide changes. What can those of us in the tech world actually do? How can we help solve these problems in our own little corners? What can we go into work tomorrow and actually do?
It’s on all of us in the tech world to make sure there’s less of the stuff that sucks.
We can’t do much about the lack of jobs for dance majors, but we can help make sure those people don’t stop believing in skill as a concept. Instead of assuming what we think sucks is what everyone thinks sucks, is there a way to make it not suck? Is there a way to find a person who doesn’t think it sucks? (And no, I don’t mean “Uber for writing my emails”) We gotta invite people in and make sure they see the fun part.
The actual practice of software has become deeply dehumanizing. None of what I just spent a week describing is the result of healthy people working in a field they enjoy, doing work they value. This is the challenge we have before us: how can we change course so that the tech industry doesn't breed this? Those of us that got lucky at twelve need to find new ways to bring along the people who didn't.
With that in mind, next Friday on Icecano we start a new series on growing better software.
Several people provided invaluable feedback on earlier iterations of this material; you all know who you are and thank you.
And as a final note, I’d like to personally apologize to the one person who I know for sure clicked Open in New Tab on every single link. Sorry man, they’re good tabs!
Why is this Happening, Part II: Letting Computers Do The Fun Part
Previously: Part I
Let’s leave the Stuff that Sucks aside for the moment, and ask a different question. Which Part is the Fun Part? What are we going to do with this time the robots have freed up for us?
It’s easy to get wrapped up in pointing at the parts of living that suck; especially when fantasizing about assigning work to C-3PO’s cousin. And it’s easy to spiral to a place where you just start waving your hands around at everything.
But even Bertie Wooster had things he enjoyed, that he occasionally got paid for, rather than let Jeeves work his jaw for him.
So it’s worth recalibrating for a moment: which are the fun parts?
As aggravating as it can be at times, I do actually like making computers do things. I like programming, I like designing software, I like building systems. I like finding clever solutions to problems. I got into this career on purpose. If it was fun all the time they wouldn’t have to call it “work”, but it’s fun a whole lot of the time.
I like writing (obviously.) For me, that dovetails pretty nicely with liking to design software; I’m generally the guy who ends up writing specs or design docs. It’s fun! I owned the customer-facing documentation several jobs back. It was fun!
I like to draw! I'm not great at it, but I'm also not trying to make a living out of it. I think having hobbies you enjoy but aren't great at is a good thing. Not every skill needs to have a direct line to a career or a side hustle. Draw goofy robots to make your kids laugh! You don't have to figure out the monetization strategy.
In my “outside of work” life I think I know more writers and artists than programmers. For all of them, the work itself—the writing, the drawing, the music, making the movie—is the fun part. The parts they don’t like so well is the “figuring out how to get paid” part, or the dealing with printers part, or the weird contracts part. The hustle. Or, you know, the doing dishes, laundry, and vacuuming part. The “chores” part.
So every time I see a new “AI tool” release that writes text or generates images or makes video, I always ask the same question:
Why would I let the computer do the fun part?
The writing is the fun part! Drawing the pictures is the fun part! Writing the computer programs is the fun part! Why, why, are they trying to tell us that those are the parts that suck?
Why are the techbros trying to automate away the work people want to do?
It’s fun, and I worked hard to get good at it! Now they want me to let a robot do it?
Generative AI only seems impressive if you've never successfully created anything. Part of what makes “AI art” so enragingly radicalizing is the sight of someone who's never tried to create something before, never studied, never practiced, never put the time in, never really even thought about it, joylessly showing off the terrible AI slop they made and demanding to be treated as if they made it themselves, not that they used a tool built on the fruits of a million million stolen works.
Inspiration and plagiarism are not the same thing, the same way that “building a statistical model of word order probability from stuff we downloaded from the web” is not the same as “learning”. A plagiarism machine is not an artist.
But no, the really enraging part is watching these people show off this garbage and realizing that these people can't tell the difference. And AI art seems to be getting worse; AI pictures are getting easier to spot, not harder, because of course they are, because the people making the systems don't know what good is. And the culture is following: “it looks like AI made it” has become the exact opposite of a compliment. AI-generated glop is seen as tacky, low quality. And more importantly, seen as cheap, made by someone who wasn't willing to spend any money on the real thing. Trying to pass off Krusty Burgers as their own cooking.
These are people with absolutely no taste, and I don’t mean people who don’t have a favorite Kurosawa film, I mean people who order a $50 steak well done and then drown it in A1 sauce. The kind of people who, deep down, don’t believe “good” is real. That it’s all just “marketing.”
The act of creation is inherently valuable; creation is an act that changes the creator as much as anyone. Writing things down isn’t just documentation, it’s a process that allows and enables the writer to discover what they think, explore how they actually feel.
“Having AI write that for you is like having a robot lift weights for you.”
AI writing is deeply dehumanizing, to both the person who prompted it and to the reader. There is so much weird stuff to unpack from someone saying, in what appears to be total sincerity, that they used AI to write a book. That the part they thought sucked was the fun part, the writing, and left their time free for… what? Marketing? Uploading metadata to Amazon? If you don’t want to write, why do you want people to call you a writer?
Why on earth would I want to read something the author couldn’t be bothered to write? Do these ghouls really just want the social credit for being “an artist”? Who are they trying to impress, what new parties do they think they’re going to get into because they have a self-published AI-written book with their name on it? Talk about participation trophies.
All the people I know in real life or follow on the feeds who use computers to do their thing but don't consider themselves “computer people” have reacted with a strong and consistent full-body disgust. Personally, compared to all those past bubbles, this is the first tech I've ever encountered where my reaction was complete revulsion.
Meanwhile, many (not all) of the “computer people” in my orbit tend to be at-least AI curious, lots of hedging like “it’s useful in some cases” or “it’s inevitable” or full-blown enthusiasm.
One side, “absolutely not”, the other side, “well, mayyybe?” As a point of reference, this was the exact breakdown of how these same people reacted to blockchain and bitcoin.
One group looks at the other and sees people musing about if the face-eating leopard has some good points. The other group looks at the first and sees a bunch of neo-luddites. Of course, the correct reaction to that is “you’re absolutely correct, but not for the reasons you think.”
There’s a Douglas Adams bit that gets quoted a lot lately, which was printed in Salmon of Doubt but I think was around before that:
I’ve come up with a set of rules that describe our reactions to technologies:
Anything that is in the world when you’re born is normal and ordinary and is just a natural part of the way the world works.
Anything that’s invented between when you’re fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it.
Anything invented after you’re thirty-five is against the natural order of things.
The better-read AI-grifters keep pointing at rule 3. But I keep thinking of the bit from Dirk Gently’s Detective Agency about the Electric Monk:
The Electric Monk was a labour-saving device, like a dishwasher or a video recorder. Dishwashers washed tedious dishes for you, thus saving you the bother of washing them yourself, video recorders watched tedious television for you, thus saving you the bother of looking at it yourself; Electric Monks believed things for you, thus saving you what was becoming an increasingly onerous task, that of believing all the things the world expected you to believe.
So, what are the people who own the Monks doing, then?
Let's speak plainly for a moment—the tech industry has always had a certain… ethical flexibility. The “things” in “move fast and break things” wasn't talking about furniture or fancy vases; this isn't just playing baseball inside the house. And this has been true for a long time; the Open Letter to Hobbyists was basically Gates complaining that other people's theft was undermining the con he was running.
We all liked to pretend “disruption” was about finding “market inefficiencies” or whatever, but mostly what that meant was moving into a market where the incumbents were regulated and labor had legal protection and finding a way to do business there while ignoring the rules. Only a psychopath thinks “having to pay employees” is an “inefficiency.”
Vast chunks of what it takes to make generative AI possible are already illegal or at least highly unethical. The Internet has always been governed by a sort of combination of gentleman’s agreements and pirate codes, and in the hunger for new training data, the AI companies have sucked up everything, copyright, licensing, and good neighborship be damned.
There’s some half-hearted attempts to combat AI via arguments that it violates copyright or open source licensing or other legal approach. And more power to them! Personally, I’m not really interested in the argument the AI training data violates contract law, because I care more about the fact that it’s deeply immoral. See that Vonnegut line about “those who devised means of getting paid enormously for committing crimes against which no laws had been passed.” Much like I think people who drive too fast in front of schools should get a ticket, sure, but I’m not opposed to that action because it was illegal, but because it was dangerous to the kids.
It’s been pointed out more than once that AI breaks the deal behind webcrawlers and search—search engines are allowed to suck up everyone’s content in exchange for sending traffic their way. But AI just takes and regurgitates, without sharing the traffic, or even the credit. It’s the AI Search Doomsday Cult. Even Uber didn’t try to put car manufacturers out of business.
But beyond all that, making things is fun! Making things for other people is fun! It's about making a connection between people, not about formal correctness or commercial viability. And then you see those terrible Google fan letter ads at the Olympics, or see people crowing that they used AI to generate a kids book for their children, and you wonder, how can these people have so little regard for their audience that they don't want to make the connection themselves? That they'd rather give their kids something a jumped-up spreadsheet full of stolen words barfed out instead of something they made themselves? Why pass on the fun part, just so you can take credit for something thoughtless and tacky? The AI ads want you to believe that you need their help to find “the right word”; what they don't tell you is that no, you don't; what you need to do is have fun finding your word.
Robots turned out to be hard. Actually, properly hard. You can read these papers by computer researchers in the fifties where they’re pretty sure Threepio-style robot butlers are only 20 years away, which seems laughable now. Robots are the kind of hard where the more we learn the harder they seem.
As an example: Doctor Who in the early 80s added a robot character who was played by the prototype of an actual robot. This went about as poorly as you might imagine. That’s impossible to imagine now; no producer would risk their production on a homemade robot today, no matter how impressive the demo was. You want a thing that looks like Threepio walking around and talking with a voice like a Transformer? Put a guy in a suit. Actors are much easier to work with. Even though they have a union.
Similarly, “General AI” in the HAL/KITT/Threepio sense has been permanently 20 years in the future for at least 70 years now. The AI class I took in the 90s was essentially a survey of things that hadn’t worked, and ended with a kind of shrug and “maybe another 20?”
Humans are really, really good at seeing faces in things, and finding patterns that aren’t there. Any halfway decent professional programmer can whip up an ELIZA clone in an afternoon, and even knowing how the trick works it “feels” smarter than it is. A lot of AI research projects are like that, a sleight-of-hand trick that depends on doing a lot of math quickly and on the human capacity to anthropomorphize. And then the self-described brightest minds of our generation fail the mirror test over and over.
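Just to make the sleight-of-hand concrete: here’s roughly what that afternoon project looks like. This is a minimal, illustrative sketch in the ELIZA spirit (the rules and wording are invented for the example, not Weizenbaum’s originals); it’s nothing but regex matching and pronoun swapping, and it still manages to feel like someone is listening.

```python
# A minimal ELIZA-style responder: illustrative only, not Weizenbaum's original rules.
# The whole "trick" is pattern matching plus pronoun reflection; nothing in here
# understands anything.
import re
import random

REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my", "are": "am"}

RULES = [
    (r"i need (.*)",  ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (r"i am (.*)",    ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (r"because (.*)", ["Is that the real reason?", "What else could explain {0}?"]),
    (r"(.*)",         ["Please tell me more.", "How does that make you feel?"]),
]

def reflect(fragment: str) -> str:
    # Swap first- and second-person words so the echo reads like a reply.
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def respond(text: str) -> str:
    for pattern, responses in RULES:
        match = re.match(pattern, text.lower())
        if match:
            return random.choice(responses).format(*(reflect(g) for g in match.groups()))
    return "Please go on."

if __name__ == "__main__":
    print(respond("I am worried about robots"))
    # e.g. "Why do you think you are worried about robots?"
```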
Actually building a thing that can “think”? Increasingly seems impossible.
You know what’s easy, though, comparatively speaking? Building a statistical model of all the text you can pull off the web.
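And here, for contrast, is the toy-scale version of “a statistical model of text”: a word-level bigram generator. A real LLM is enormously more sophisticated than this, to be clear, but it shows how comparatively mechanical the underlying idea is: count what tends to follow what in the training text, then spit out likely continuations.

```python
# A toy "statistical model of text": a word-level bigram generator.
# Deliberately crude, but the basic move is to learn what tends to follow
# what, then emit likely continuations.
import random
from collections import defaultdict

def train(corpus: str) -> dict:
    """Record, for each word, every word observed to follow it."""
    model = defaultdict(list)
    words = corpus.split()
    for current, following in zip(words, words[1:]):
        model[current].append(following)
    return model

def generate(model: dict, start: str, length: int = 10) -> str:
    """Walk the model, sampling a plausible next word at each step."""
    out = [start]
    for _ in range(length):
        options = model.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

corpus = "the robot does the dishes and the robot does the laundry"
model = train(corpus)
print(generate(model, "the"))  # e.g. "the robot does the laundry"
```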
On Friday: conclusions, such as they are.
Why is this Happening, Part I: The Stuff That Sucks
When I was a kid, I had this book called The Star Wars Book of Robots. It was a classic early-80s kids pop-science book; kids are into robots, so let’s have a book talking about what kinds of robots existed at the time, and then what kinds of robots might exist in the future. At the time, Star Wars was the spoonful of sugar to help education go down, so every page talked about a different kind of robot, and then the illustration was a painting of that kind of robot going about its day while C-3PO, R2-D2, and occasionally someone in 1970s leisurewear looked on. So you’d have one of those car factory robot arms putting a sedan together while the droids stood off to the side with a sort of “when is Uncle Larry finally going to retire?” energy.
The image from that book that has stuck with me for four decades is the one at the top of this page: Threepio, trying to do the dishes while vacuuming, and having the situation go full slapstick. (As a kid, I was really worried that the soap suds were going to get into his bare midriff there and cause electrical damage, which should be all you need to know to guess exactly what kind of kid I was at 6.)
Nearly all the examples in the book were of some kind of physical labor: delivering mail, welding cars together, doing the dishes, going to physically hostile places. And at the time, this was the standard pop-culture job for robots “in the future”: robots and robotic automation were fundamentally physical, and were about relieving humans from mechanical labor.
The message is clear: in the not too distant future we’re all going to have some kind of robotic butler or maid or handyman around the house, and that robot is going to do all the Stuff That Sucks. Dishes, chores, laundry, assorted car assembly, whatever it is you don’t want to do, the robot will handle it for you.
I’ve been thinking about this a lot over the last year and change, ever since “Generative AI” became a phrase we were all forced to learn. And what’s interesting to me is the way the sales pitch has evolved around what counts as the stuff that sucks.
Robots, as a storytelling construct, have always been a thematically rich metaphor in this regard, and provide an interesting social diagnostic. You can tell a lot about what a society thinks is “the stuff that sucks” by looking at both what the robots and the people around them are doing. The work that brought us the word “robot” itself represented them as artificially constructed laborers who revolted against their creators.
Asimov’s body of work, which was the first to treat robots as something industrial and which coined the term “robotics,” mostly represented them as doing manual labor in places too dangerous for humans while the humans sat around doing science or supervision. But Asimov’s robots were also always shown to be smarter and more individualistic than the humans believed, and generally found a way to do what they wanted to do, regardless of the restrictions of the “Laws of Robotics.”
Even in Star Wars, which buries the political content low in the mix, it’s the droids where the dark satire of THX-1138 pokes through: robots are there as a permanent servant class doing dangerous work, either on the outside of spaceships or translating for crime bosses; they’re the only group shown to be discriminated against; and otherwise unambiguous “good guys” order mind wipes of them, despite the droids consistently being some of the smartest and most capable characters.
And then, you know, Blade Runner.
There’s a lot of social anxiety wrapped up in all this. Post-industrial revolution, the expanding middle classes wanted the same kinds of servants and “domestic staff” as the upper classes had. Wouldn’t it be nice to have a butler, a valet, some “staff?” That you didn’t have to worry about?
This is the era of Jeeves & Wooster, and who wouldn’t want a “gentleman’s gentleman” to do the work around the house, make you a hangover cure, drive the car, get you out of scrapes, all while you frittered your time away with idiot friends?
(Of course, I’m sure it’s a total coincidence that this is also the period where the Marxist & Socialist thinkers really got going.)
But that stayed aspirational rather than possible, and especially post-World War II, the culture landed on sending women back home and depending on the stay-at-home mom to handle “all that.”
There’s a lot of “robot butlers” in mid-century fiction, because how nice would it be if you could just go to the store and buy that robot from The Jetsons, free from any guilt? There’s a lot to unpack there, but that desire for a guilt-free servant class was, and is, persistent in fiction.
Somewhere along the line, this changes, and robots stop being manual labor or the domestic staff, and start being secretaries, executive assistants. For example, by the time Star Trek: The Next Generation rolls around in the mid-80s, part of the fully automated luxury space communism of the Federation is that the Enterprise computer is basically the perfect secretary—making calls, taking dictation, and doing research. Even by then it was clear that there was a whole lot of “stuff to know”, and so robots find themselves acting as research assistants. Partly, this is a narrative accelerant—having the Shakespearean actor able to ask thin air for the next plot point helps move things along pretty fast—but the anxiety about information overload was there, even then. Imagine if you could just ask somebody to look it up for you! (Star Trek as a whole is an endless Torment Nexus factory, but that’s a whole other story.)
I’ve been reading a book about the history of keyboards, and one of the more interesting side stories is the way “typing” has interacted with gender roles over the last century. For most of the 1900s, “typing” was a woman’s job, and men, who were of course the bosses, didn’t have time for that sort of tediousness. They’re Idea Guys, and the stuff that sucks is wrestling with an actual typewriter to write those ideas down.
So, they would either handwrite things they needed typed and send it down to the “typing pool”, or dictate to a secretary, who would type it up. Typing becomes a viable job out of the house for younger or unmarried women, albeit one without an actual career path.
This arrangement lasted well into the 80s, and up until then the only men who typed themselves were either writers or nerds. Then computers happened, PCs landed on men’s desks, and it turns out the only thing more powerful than sexism was the desire to cut costs, so men found themselves typing their own emails. (Although, this transition spawns the most unwittingly enlightening quote in the whole book, where someone who was an executive at the time of the transition says it didn’t really matter, because “Feminism ruined everything fun about having a secretary”. pikachu shocked face dot gif)
But we have a pattern: work that would have been done by servants gets handed off to women, then back to men, and then fiction starts showing up fantasizing about giving that work to a robot, who won’t complain, or have an opinion—or start a union.
Meanwhile, in parallel with all this, “chat bots” have been cooking along for as long as there have been computers. Typing at a computer and getting a human-like response was an obvious interface, and it spawned a whole line of thought running parallel to all those physical robots. ELIZA emerged almost as soon as computers were able to support such a thing. The Turing test assumes a chat interface. “Software Agents” became a viable area of research. The Infocom text adventure parser came out of the MIT AI lab. What if your secretary was just a page of text on your screen?
One of the ways that thread evolved emerged as LLMs and “Generative AI”. And thanks to the amount of VC being poured in, we get the last couple of years of AI slop. And also a hype cycle that says that any tech company that doesn’t go all-in on “the AI” is going to be left in the dust. It’s the Next Big Thing!
Flash forward to Apple’s Worldwide Developer Conference earlier this summer. The Discourse going into WWDC was that Apple was “behind on AI” and needed to catch up to the industry, although does it really count as behind if all your competitors are out over their skis? So far AI has been extremely unprofitable, and if anything, Apple is a company that only ships products it knows it can make money on.
The result was that they rolled out the most comprehensive vision yet of what a Gen AI–powered product suite looks like here in 2024. In many ways, “Apple Intelligence” was Apple doing what they do best—namely, doing their market research by letting their erstwhile competitors skid into a ditch, then sliding in with the full Second Mover Advantage and saying “so, now do you want something that works?”
They’re very, very good at identifying The Stuff That Sucks, and announcing that they have a solution. So what stuff was it? Writing text, sending pictures, communicating with other people. All done by a faceless, neutral “assistant,” who you didn’t have to engage with like they were a person, just a fancy piece of software. Computer! Tea, Earl Grey! Hot!
I’m writing about a marketing event from months ago because watching their giant infomercial was where something clicked for me. They spent an hour talking about speed, “look how much faster you can do stuff!” “You don’t have to write your own text, draw your own pictures, send your own emails, interact directly with anyone!”
Left unsaid was what you were saving all that time for. Critically, they didn’t announce they were moving to a 4-day work week or 6-hour days; all this AI was so people could do more “real work”. Except that the “stuff that sucks” was… that work? What’s the vision of what we’ll be doing once we’ve handed off all this stuff that sucks?
Who is building this stuff? What future do they expect to live in, with this bold AI-powered economy? What are we saving all this time for? What future do these people want? Why are these the things they have decided suck?
I was struck, not for the first time, by what a weird dystopia we find ourselves in: “we gutted public education and non-STEM subjects like writing and drawing, and everyone is so overworked they need a secretary but can’t afford one, so here’s a robot!”
To sharpen the point: why in the fuck am I here doing the dishes myself while a bunch of techbros raise billions of dollars to automate the art and poetry? What happened to Threepio up there? Why is this the AI that’s happening?
On Wednesday, we start kicking over rocks to find an answer...