software forestry Gabriel L. Helman

Software Forestry 0x07: The Trellis Pattern

In general, rewriting a system from scratch is a mistake. There are times, however, where replacing a system makes sense.

You’ve checked off every type in the Overgrowth Catalogue, entropy is winning, and someone does a back of the envelope cost analysis and finally says “you know, I think it really would be cheaper to replace this thing.”

Part of what makes this tricky is that presumably, this system has customers and is bringing in revenue. (If it didn’t you could just shut it down and build something new without a hassle.) And so, you need to build and deploy a new system while the old one is still running.

The solution is to use the old system as a Trellis.

The ultimate goal is to eventually replace the legacy system, but you can’t do it all at once. Instead, use the existing system as a Trellis, supporting the new system as it grows alongside.

This way, you can thoughtfully replace a part at a time, carefully roll it out to customers, all while maintaining the existing—and working—functionality. The system as a whole will evolve from the current condition to the improved final form.

As you work through each capability, you can either use the legacy system as a blueprint to base the new one on, or harvest code for re‐use.

The great thing about a Trellis is that the Trellis and the Tree are partners. They have the same goal: for the tree to outgrow the trellis. And the trellis can be an active partner. I have a tree right now that still has part of a trellis holding up some branches. It’s one of those trees that got a little too big a little too fast, put on a little too much fruit. A few years ago it had supports and trellises all around; now it’s down to just one whimsically-leaning stick. If things go well, I’ll be able to pull that out next summer.

The old system isn’t abandoned, it transitions into a new role. You can add features to make it work better as a trellis: an extra API endpoint here, a data export job there. The new system calls into the old one to accomplish something, and then you throw the switch and the old system starts calling into the new one to do that same thing.
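To make that switchover a little more concrete, here is a minimal sketch of what a trellis-style routing layer could look like. Everything in it is hypothetical (the capability names, the endpoints, the flag store); the idea is simply that each capability can be flipped from the legacy system to the new one independently, and flipped back if a rollout goes badly.

```python
import requests

# Hypothetical endpoints for the two systems.
LEGACY_URL = "https://legacy.internal/api"
NEW_URL = "https://new.internal/api"

# Which capabilities the new system owns so far. In real life this would live
# in a feature-flag service or a config store, not a hard-coded dict.
MIGRATED = {
    "invoice_pdf": True,   # already grown past the trellis
    "tax_lookup": False,   # still served by the legacy system
}

def call(capability: str, payload: dict) -> dict:
    """Route a capability call to whichever system currently owns it."""
    base = NEW_URL if MIGRATED.get(capability, False) else LEGACY_URL
    response = requests.post(f"{base}/{capability}", json=payload, timeout=10)
    response.raise_for_status()
    return response.json()
```

The point of keeping the routing in one place is that the cutover is per-capability and reversible, which is what lets the trellis and the tree stand next to each other for as long as they need to.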

Eventually the new system pulls away from the Trellis, grows beyond it. And if you do it right, the trellis falls away when its job is done. Ideally, in such a way that you could never tell it was there in the first place.

Sometimes, you can schedule the last switchover and have a big party when you turn the old system off. But if you’re really lucky, there comes a day where you realize the old load balancer crashed a month ago and no one noticed.

🌲

There’s a social & emotional aspect to this as well, which goes almost entirely undiscussed.

If we’re replacing a system, it’s probably been around for a while. We’re probably talking about systems with a decade+ of run time. The original architects may have moved on, but there are people who work on it, probably people who have built a whole career out of keeping it running.

There are always some emotions when it comes time to start replacing the old stuff. Some stereotypes exist for a reason, and the sorts of people who become successful software engineers or business people tend to be the sorts of folks for whom “empathy” was their dump stat. There’s something galling about the newly arrived executive talking about moving on from the old and busted system, or the new tech lead espousing how much better the future is going to be. The old system may have gotten choked out by overgrowth, left behind by the new growth of the tech industry, but it’s still running, still pulling in revenue. It’s paying for the electricity in the projector that’s showing the slide about how bad it is. It deserves respect, and so do the people who worked on it.

That’s the point: The Trellis is a good thing, it’s positive. The old system—and the old system’s staff—have a key role to play. Everyone is on the same team, everyone has the same goal.

🌲

There’s an existing term that’s often used for a pattern similar to this. I am, of course, talking about the Strangler Fig pattern. I hate this term, and I hate the usual shorthand of “strangler pattern” even more.

Really? Your mental model for bringing in new software is that it’s an invasive parasite that slowly drains nutrients and kills its host? There are worse ways to go than being strangled, but not by much.

This isn’t an isolated style of metaphor, either. I used to work with someone—who was someone I liked, by the way—who used to say that every system being replaced needed someone to be an executioner and an undertaker.

Really? Your mental model for something ending is violent, state-mandated death?

If Software Forestry has a central thesis, it is this: the ways we talk about what we do and how we do it matter. I can think of no stronger example of what I mean than otherwise sane professionals describing their work as murdering the work of their colleagues, and then being surprised when there’s resistance.

What we do isn’t violent or murderous, it is collaborative and constructive.

What I dislike the most about Strangler Figs, though, is that a Strangler Fig can never exceed the original host, only succeed it. The Fig is bound to the host forever, at first for sustenance, and then, even after the host has died and rotted away, the Fig has an empty space where the host once stood, a ghost haunting the parasite that it can never fully escape from.

🌲

So if we’re going to use a real tree as our example for how to do this, let’s use my favorite trees.

Let me tell you about the Coastal Redwoods.

The Redwood forest is a whole ecosystem to itself, not just the trees, but the various other plants growing beneath them. When a redwood gets to the end of its life, it falls over. But that fallen tree then serves as the foundation to a whole new mini-ecosystem. The ferns and sorrel cover the fallen trunk. Seedlings sprout up in the newly exposed sunlight. Burls or other nodes sprout new trees from the base of the old, meaning maybe the tree really didn’t die at all, it just transitioned. From one tree springs a whole new generation of the forest.

There are deaths other than murder, and there are endings other than death.

Let’s replace software like a redwood falling; a loud noise, and then an explosion of new possibilities.

Gabriel L. Helman

Don’t Panic: Infocom’s Hitchhiker’s Guide to the Galaxy at 40

Well! It turns out that this coming weekend is the 40th anniversary of Infocom’s Hitchhiker’s Guide to the Galaxy text adventure game by Douglas Adams and Steve Meretzky. I mentioned the game in passing back in July when talking about Salmon of Doubt, but I’ll take an excuse to talk about it more.

To recap: Hitchhiker started as a six-part radio show in 1978, which was a surprise hit, and was quickly followed by a second series, an album—which was a rewrite and re-record with the original cast instead of just being a straight release of the radio show—a 2-part book adaptation, a TV adaptation, and by 1984, a third book with a fourth on the way. Hitchhiker was a huge hit.

Somewhere in there, Adams discovered computers, and (so legend has it) also became a fan of Infocom’s style of literate Interactive Fiction. They were fans of his as well, and to say their respective fan-bases had a lot of overlap would be an understatement. A collaboration seemed obvious.

(For the details on how the game actually got made, I’ll point you at The Digital Antiquarian’s series of philosophical blockbusters Douglas Adams, The Computerized Hitchhiker’s, and Hitchhiking the Galaxy Infocom-Style.)

These are two of my absolute favorite things—Infocom games and Hitchhiker—so this should be a “two great tastes taste great together” situation, right? Well, unfortunately, it’s a little less “peanut butter cup” and a little more “orange juice on my corn chex.”

“Book adaptation” is the sort of thing that seemed like an obvious fit for Infocom, and they did several of them, and they were all aggressively mediocre. Either the adaptation sticks too close to the book, and you end up painfully recreating the source text, usually while you “wait” and let the book keep going until you have something to do, or you lean the other way and end up with something “inspired by” rather than “based on.” Hitchhiker, amusingly, manages to do both.

By this point Adams had well established his reputation for blowing deadlines (and loving “the whooshing noise they make as they go by”), so Infocom did the sane thing and teamed him up with Steve Meretzky, who had just written the spectacular—and not terribly dissimilar from Hitchhiker—Planetfall, with the understanding that Meretzky would do the programming and if Adams flagged then Meretzky could step in and push the game over the finish line.

The game would cover roughly the start of the story; starting with Arthur’s house being knocked down, continuing through the Vogon ship, arriving on the Heart of Gold, and then ending as they land on Magrathea. So, depending on your point of view, about the first two episodes of the radio and TV versions, or the first half of the first book. This was Adams’ fourth revision of this same basic set of jokes, and one senses his enthusiasm waning.

You play as Arthur (mostly, but we’ll get to that,) and the game tracks very closely to the other versions up through Arthur and Ford getting picked up by the Heart of Gold. At that point, the game starts doing its own thing, and it’s hard not to wonder if that’s where Adams got bored and let Meretzky take over.

The game—or at least the first part—wants to be terribly meta and subversive about being a text adventure game, but more often than not offers up things that are joke-shaped, but are far more irritating than funny.

The first puzzle in the game is that it is dark, and you have to open your eyes. This is a little clever, since finding and maintaining light sources are a major theme in earlier Zork-style Infocom games, and here you don’t need a battery-powered brass lantern or a glowing elvish sword, you can just open your eyes! Haha, no grues in this game, chief! Then the second puzzle is where the game really shows its colors.

Because, you see, you’ve woken up with a hangover, and you need to find and take some painkillers. Again, this is a text adventure, so you need to actually type the names of anything you want to interact with. This is long before point-and-click interfaces, or even terminal-style tab-complete. Most text games tried to keep the names of nouns you need to interact with as short as possible for ergonomic reasons, so in a normal game, the painkillers would be “pills”, or “drugs”, or “tablets”, or some other short name. But no, in this game, the only phrase the game recognizes for the meds is “buffered analgesic”. And look, that’s the sort of thing that I’m sure sounds funny ahead of time, but is just plain irritating to actually type. (Although, credit where credit is due, four decades later, I can still type “buffered analgesic” really fast.)

And for extra gear-grinding, the verb you’d use in regular speech to consume a “buffered analgesic” would be to “take” it, except that’s the verb Infocom games use to mean “pick something up and put it in your inventory,” so then you get to do a little extra puzzle where you have to guess what other verb Adams used to mean put it in your mouth and swallow.

The really famous puzzle shows up a little later: the Babel Fish. This seems to be the one that most people gave up at, and there was a stretch where Infocom was selling t-shirts that read “I got the Babel Fish!”

The setup is this: You, as Arthur, have hitchhiked on to the Vogon ship with Ford. The ship has a Babel Fish dispenser (an idea taken from the TV version, as opposed to earlier iterations where Ford was just carrying a spare.) You need to get the Babel fish into your ear so that it’ll start translating for you and you can understand what the Vogons yell at you when they show up to throw you off the ship in a little bit. So, you press the button on the machine, and a fish flies out and vanishes into a crack in the wall.

What follows is a pretty solid early-80s adventure game puzzle. You hang your bathrobe over the crack, press the button again, and then the fish hits the bathrobe, slides down, and falls into a grate on the floor. And so on, and you build out a Rube Goldberg–style solution to catch the fish. The 80s-style difficulty is that there are only a few fish in the dispenser, and when you run out you have to reload your game to before you started trying to dispense fish. This, from the era where game length was extended by making you sit and wait for your five-inch floppy drive to grind through another game load.

Everything you need to solve the puzzle is in the room, except one thing: the last item you need to get the fish is the pile of junk mail from Arthur’s front porch, which you needed to have picked up on your way to lie in front of the bulldozer way back at the start of the game. No one thinks to do this the first time, or even the first dozen times, and so you end up endlessly replaying the first hour of the game, trying to find what you missed.

(The Babel Fish isn’t called out by name in Why Adventure Games Suck, but one suspects it was top of Ron Gilbert’s mind when he wrote out his manifesto for Monkey Island four years later.)

The usual reaction, upon learning that the missing element was the junk mail, and coming after the thing with the eyes and the “buffered analgesic”, is to mutter “screw this” and stop playing.

There’s also a bit right after that where the parser starts lying to you and you have to argue with it to tell you what’s in a room, which is also the kind of joke that only sounds funny if you’re not playing the game, and probably accounted for the rest of the people throwing their hands up in the air and doing literally anything else with their time.

Which is a terrible shame, because just after that, you end up on the Heart of Gold and the game stops painfully rewriting the book or trying to be arch about being a game. Fairly quickly, Ford, Zaphod, and Trillian go hang out in the HoG’s sauna, leaving you to do your own thing. Your own thing ends up being using the backup Improbability Generator to teleport yourself around the galaxy, either as yourself or “Quantum Leap-style” jumping into other people. You play out sequences as all of Ford, Zaphod, and Trillian, and end up in places the main characters never visit in any of the other versions—on board the battlefleet that Arthur’s careless comment sets in motion, inside the whale, outside the lair of the Ravenous Bugblatter Beast of Traal. The various locations can be played in any order, and like an RPG from fifteen years later, the thing you need to beat the game has one piece in each location.

This is where the game settles in and turns into an actual adventure game instead of a retelling of the same half-dozen skits. And, more to the point, this is where the game starts doing interesting riffs on the source material instead of just recreating it.

As an example, at one point, you end up outside the cave of the Ravenous Bugblatter Beast of Traal, and the way you keep it from eating you is by carving your name on the memorial to the Beast’s victims, so that it thinks it has already eaten you. This is a solid spin on the book’s joke that the Beast is so dumb that it thinks that if you can’t see it, it can’t see you, but it manages to make having read the book a bonus but not a requirement.

As in the book, to make the backup Improbability Drive work you need a source of Brownian Motion, like a cup of hot liquid. At first, you get a cup of Advanced Tea Substitute from the Nutrimat—the thing that’s almost, but not quite, entirely unlike tea. Later, after some puzzles and the missile attack, you can get a cup of real tea to plug into the drive, which allows it to work better and makes it possible to choose your destination instead of it being random. Again, that’s three different jokes from the source material mashed together in an interesting and new way.

There’s a bit towards the end where you need to prove to Marvin that you’re intelligent, and the way you do that is by holding “tea” and “no tea” at the same time. You manage that by using the backup Improbability Drive to teleport into your own brain and removing your common sense particle, which is a really solid Hitchhiker joke that only appears in the game.

The game was a huge success at the time, but the general consensus seemed to be that it was very funny but very hard. You got the sense that a very small percentage of the people who played the game beat it, even grading on the curve of Infocom’s usual DNF rate. You also got the sense that there were a whole lot of people for whom HHGG was both their first and last Infocom game. Like Myst a decade later, it seemed to be the kind of game that got bought for people who didn’t play games, and it didn’t convert a lot of them.

In retrospect, it’s baffling that Infocom would allow what was sure to be their best-selling game amongst new customers to be so obtuse and off-putting. It’s wild that HHGG came out the same year as Seastalker, their science fiction–themed game designed for “junior level” difficulty, and was followed by the brilliant jewel of Wishbringer, their “Introductory” game which was an absolute clinic in teaching people how to play text adventure games. Hitchhiker sold more than twice those two games combined.

(For fun, see Infocom Sales Figures, 1981-1986 | Jason Scott | Flickr)

Infocom made great art, but was not a company overly burdened by business acumen. The company was run by people who thought of games as a way to bootstrap the company, with the intent to eventually graduate to “real” business software. The next year they “finally” released Cornerstone—their relational database product that was going to get them to the big leagues. It did not; sales were disastrous compared to the amount of money spent on development. The year after that, Infocom sold itself to Activision; Activision would shut them down completely in 1989.

Cornerstone was a huge, self-inflicted wound, but it’s hard not to look at those sales figures, with Hitchhiker wildly outstripping everything else other than Zork I, and wonder what would have happened if Hitchhiker had left new players eager for more instead of trying to remember how to spell “analgesic.”

As Infocom recedes into the past and the memories of old people and enthusiasts, Hitchhiker maintains its name recognition. People who never would have heard the name “Zork” stumble across the game as the other, other, other version of Hitchhiker Adams worked on.

And so, the reality is that nowadays HHGG is likely to be most people’s first—and only—encounter with an Infocom game, and that’s too bad, because it’s really not a good example of what their games were actually like. If you’re looking for a recommendation, scare up a copy of Enchanter. I’d recommend that, Wishbringer, Planetfall, and Zork II long before getting to Hitchhiker. (Zork is the famous game with the name recognition, but the second one is by far the best of the five games with “Zork” in the title.)

BBC Radio 4 did a 30th anniversary web version some years ago, which added graphics in the same style as the guide entries from the TV show, done by the same people, which feels like a re-release Infocom would have done in the late 80s if the company hadn’t been busy drowning in consequences of their bad decisions.

It’s still fun, taken on its own terms. I’d recommend the game to any fan of the other iterations of the Guide, with the caveat that it should be played with a cup of tea in one hand and a walkthrough within easy reach of the other.

All that said, it’s easy to sit here in the future and be too hard on it. The Secret of Monkey Island was a conceptual thermocline for adventure games as a genre; it’s so well designed, and its design philosophy is so well expressed in that design, that once you’ve played it, it’s incredibly obvious what every game before it did wrong.

As a kid, though, this game fascinated me. It was baffling, and seemingly impossible, but I kept plowing at it. I loved Hitchhiker, still do, and there I was, playing Arthur Dent, looking things up in my copy of the Guide and figuring out how to make the Improbability Drive work. It wasn’t great, it wasn’t amazing, it was amazingly amazing. At one point I printed out all the Guide entries from the game and made a physical Guide out of cardboard?

As an adult, what irritates me is that the game’s “questionable” design means that it’s impossible to share that magic from when I was 10. There are plenty of other things I loved at that time I can show people now, and the magic still works—Star Wars, Earthsea, Monkey Island, the other iterations of Hitchhiker, other Infocom games. This game, though, is lost. It was too much of its exact time, and while you can still play it, it’s impossible to recreate what it was like to realize you can pick up the junk mail. Not all magic lasts. Normally, this is where I’d type something like “and that’s okay”, but in this particular case, I wish they’d tried to make it last a little harder.


As a postscript, Meretzky was something of a packrat, and it turns out he saved everything. He donated his “Infocom Cabinet” to the Internet Archive, and it’s an absolute treasure trove of behind-the-scenes information, memos, designs, artwork. The Hitchhiker material is here: Infocom Cabinet: Hitchhikers Guide to the Galaxy : Steve Meretzky and Douglas Adams

Gabriel L. Helman

Icecano endorses Kamala Harris for President

Well, fuck it, if the LA Times and the Washington Post won’t do it I will: Icecano Formally Endorses Kamala Harris for President of the United States.

It’s hard to think of another presidential election in living memory that’s more of a slam dunk than the one we have here. On the one hand, we have a career public servant, senator, sitting vice-president, and lady who made a supreme court justice cry. On the other hand, we have the convicted felon, adjudicated rapist, wannabe warlord, racist game show host, and, oh, the last time he was president he got impeached twice and a million people died.

If you strip away all the context—and you shouldn’t, but if you did—there’s still no contest. On basic ability to do the job Harris far outstrips the other guy. But if you pour all the context back in—well, here’s the thing: only one candidate has any context worth talking about, and it’s all utterly disqualifying.

Whatever, whoever, or wherever you care about, Harris is going to be better than anything the convicted felon would do. Will she be perfect? Almost certainly not, but that’s not how this works. We’re not choosing a best friend, or a Dungeon Master, or a national therapist, or figuring out who to invite out for drinks. We’re hiring the chief executive of a staggeringly well-armed and sprawling bureaucracy.

I was going to type a bunch more here, but it comes down to this: Harris would actually be good at that job. We know for a fact that other guy will not.


One of the gifts age gives you, as it’s taking away the ability to walk up and down stairs without using the handrail, is perspective.

Every election cycle, there’s a group of self-identified “liberal/leftist” types who mount a “principled” take on why they can’t possibly vote for the Democrat, and have to let the Republican win, despite the fact that the Republican would be objectively worse on the thing they claim to care about. And it’s always the same people.

Here’s the thing: these people are liars. Certainly to you, maybe to themselves.

Nothing has ever satisfied them, and nothing ever will. They’re conservatives, but they want their weed dealer to keep selling to them at the “friendly” price, so they pretend otherwise. You know that “punk rock to conservative voter” pipeline? At best, these are those people, about 2/3 of the way down the line. Stop paying attention to them, stop trying to convert them, and don’t let them distract you.


What’s the point of being a billionaire newspaper owner if you’re going to be a gutless coward about it? I genuinely don’t know which of these two Charles Foster Kane would have supported, but he’d have made a fucking endorsement.


A week to go everyone. Vote.

software forestry Gabriel L. Helman

Software Forestry 0x06: The Controlled Burn

Did you know earthworms aren’t native to North America? Sounds crazy, but it’s true; or at least it has been since the glaciers of the last ice age scoured the continent down to the bedrock and took the earthworms with them. North America certainly has earthworms now, but as a recently introduced invasive species. (For everyone who just thought “citation needed”, Invasive earthworms of North America.)

As such, the biomes of North America have very different lifecycles than their counterparts in Eurasia do. In, say, a Redwood Forest, organic matter builds up in a way it doesn’t across the water. Things still rot, there’s still fungus and microbes and bugs and things, but there isn’t a population of worms actively breaking everything down. The biomass decays more slowly. Some buildup is a good thing; it provides a habitat for smaller plants and animals, but if it builds up too much, it can start choking plants out before it can break down into nutrients.

So what happens is, the forest catches on fire. In a forest with earthworms, a fire is pretty much always a bad thing. Not so much in the Redwoods, or other Californian forests. The trees are fire resistant; the fire clears away the excess debris and frees those nutrients; and many species of cone-bearing conifers—redwoods, pines, cypresses, and the like—have what are called “serotinous” cones, which only open and release their seeds after a fire. Some are literally covered in a layer of resin that has to melt off before the seeds can sprout. The fire rips through, clears out the debris, and the new plants can sprout in the newly fertilized ground. Fire isn’t a hazard to be endured; it’s been adopted as a critical part of the entire ecosystem’s lifecycle.

Without human intervention, fires happen semi-regularly due to lightning. Of course, that’s a little unpredictable and doesn’t always turn out great. But the real problem is when humans prevent fires from taking hold, and then no matter how much you “sweep the forest,” the debris and overgrowth builds up and builds up, until you get the really huge fires we’ve been having out here.

The people who used to live here (Before, ahh… a bunch of other people “showed up and took over” who only knew how to manage forests with earthworms) knew what the solution was: the Controlled Burn. You choose a time, make some space, and carefully set the fire, making sure it does what it needs to do in the area you’ve made safe while keeping it out of places where the people are. In CA at least, we’re starting to adopt controlled burns as an intentional management technique again, a few hundred years later. (The biology, politics, history, and sociology of setting a forest on fire on purpose are beyond our scope here, but you get the general idea.)

I think a lot of Software Forests are like this too.

Every place I’ve ever worked has struggled with figuring out how to plan and estimate ongoing maintenance outside of a couple of very narrow cases. If it’s something specific, like a library upgrade, or a bug, you can usually scope and plan that without too much trouble. But anything larger is a struggle, because those larger maintenance and care efforts are harder to estimate, especially when there isn’t a specific & measurable customer-facing impact. You don’t have a “thing” you can write a bug on. You don’t know what the issues are, specifically, it’s just acting bad.

The problem requires sustained focus, the kind that lasts long enough to actually make a difference. And that’s hard to get.

One of the reasons why Cutting Trails is so effective is that it doesn’t take that much more time than the work the trail is being cut towards. Back when estimating via Fibonacci Sequence was all the rage, the extra work to cut the trail usually didn’t get you up to the next fibonacci number.

Furthermore, the effort to get in and actually estimate and scope some significant maintenance work is often more work than the actual changes. It’s wasteful to spend a week investigating and then write up a plan for someone to do later. You’re already in there!

Finally, rarely is there a direct advocate. There’s nearly always someone who acts as the Voice of the Customer, or the Voice of the Business, but very rarely is anyone the Voice of the Forest.

(I suspect this is one of the places where agile leads us astray. The need to have everything be a defined amount of work that someone can do in under a week or two makes it incredibly easy to just not do work that doesn’t lend itself to being clearly defined ahead of time.)

So the overgrowth and debris builds up, and you get the software equivalent of an unchecked forest fire: “We need to just rewrite all of this.”

No you don’t! What you need are some Controlled Burns.

It goes like this:

Most Forests have more than one application, for a wide definition of “application.” There’s always at least one that’s limping along, choked with Overgrowth. Choose one. Find a single person to volunteer. (Or get volun-told.) Clear their schedule for a month. Point them at the app with overgrowth and let them loose to fix stuff.

We try to be process-agnostic here at Software Forestry, but we acknowledge most folks these days are doing something agile, or at least agile-adjacent. Two-week sprints seem to have settled as the “standard” increment size; so a month is two sprints. That’s not nothing! You gotta mean it to “lose” a resource for that much time. But also, you should be able to absorb four weeks of vacation in a quarter, and this is less disruptive than that. Maybe schedule it as one sprint with the option to extend to a second depending on how things look “next week.”

It helps, but isn’t mandatory, to have success metrics ahead of time. Sometimes, the right move is to send the person in there and assume you’ll find something to paint a bullseye around. But most of the time you’ll want to have some kind of measurement you can do a before-and-after comparison with. The easiest ones are usually performance related, because you can measure those objectively, but probably aren’t getting handled as part of the normal “run the business.” Things like “we currently process x transactions per second, we need to get that to 2x,” or “cut RAM use by 10%,” or “why is this so laggy sometimes?”
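If it helps to picture the measurement itself, here’s a minimal sketch of that before-and-after comparison. The process_record function and the record set are hypothetical stand-ins; the only part that matters is running the same measurement, against the same data, before and after the burn.

```python
import time

def measure_throughput(process_record, records, label):
    """Time one pass over the records and report transactions per second."""
    start = time.perf_counter()
    for record in records:
        process_record(record)
    elapsed = time.perf_counter() - start
    rate = len(records) / elapsed if elapsed > 0 else float("inf")
    print(f"{label}: {len(records)} records in {elapsed:.2f}s ({rate:.1f}/s)")
    return rate

# Run this once against a representative data set before the burn, once after,
# and keep both numbers next to the list of changes you made.
```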

I did a Controlled Burn once on a system that needed to, effectively, scan every record in a series of database tables to check for things that needed to be deleted off of a storage device. It scanned everything, then started over and scanned everything again. When I started, it was taking over a day to get through a cycle, and that time was increasing, because it wasn’t keeping up with the amount of new work sliding in. No one knew why it took that long, and everyone with real experience with that app was long gone from the company. After a month of dedicated focus, it got through a cycle in less than two hours. Fixed a couple bits of buggy behavior while I was at it. No big changes, no re-architecture, no platform changes, just a month of dedicated focus and cleanup. A Controlled Burn.

This is the time to get that refactoring done—fix that class hierarchy, split that object into some collaborators. Write a bunch of tests. Refactor until you can write a bunch of tests. Fix that thing in the build process everyone hates. Attach some profilers and see where the time is going.

Dig in, focus, and burn off as much overgrowth as you can. And then leave a list of things to do next time. You should know enough now to do a reasonable job scoping and estimating the next steps, so write those up for the to-do list. Plant some seeds for new growth. You shouldn’t have to do a Controlled Burn on the same system twice.

Deploying this kind of directed focus can be incredibly powerful. The average team can absorb maybe one or two of these a year, so deploy them with purpose.

🌲

Sometimes, all the care in the world won’t do the trick, and you really do need to replace a system. Next time: The Trellis Pattern

Gabriel L. Helman

Strange New Worlds Season 3 Preview

I haven’t had much of a chance to talk about Strange New Worlds here on the ‘cano, since the last season went off the air just before I got this place firing on all thrusters.

I absolutely love it, it really might have ended up as my favorite live action Trek. Between SNW and Lower Decks, it’s hard to believe maybe the two best Trek shows of all time are airing at the same time.

Over the weekend, Paramount posted a preview of the next season, which is presumably the opening of the first episode, directly following on from last year’s cliffhanger. Here, watch this, and I’ll meet you below the embed with some assorted thoughts:

  • Hey, that’s the music from “Balance of Terror!”
  • I know this makes me sound old, but I can’t believe that’s how good “TV Star Trek” looks now.
  • Closely related: I really love this iteration of the Enterprise design. I can’t believe how good the old girl looks in this show. Inside too!
  • I saw someone griping about Pike being disoriented at the start of this, but I thought that was a pretty clever piece of filmmaking: having Pike need a beat to get his bearings gives the audience a little space to get their bearings as they get dropped into the middle of a cliffhanger from a year and a half ago.
  • Star Trek has always been a show about people working together to solve problems, but I’m always impressed at how good a job SNW does at genuinely letting every member of the cast contribute to a solution under pressure, and do it in a way that the audience can follow along with.
  • It’s been fun watching the LED screen tech from The Volume expanding out from The Mandalorian and into the industry at large. Case in point: the Enterprise Main Screen really is a screen now. There’s a camera move about halfway through that clip where the camera tracks sideways towards Uhura’s station (while the Balance of Terror music is going) and the parallax and focus on the screen stay correct, because it’s really a screen. Every cinematographer that’s ever worked on Trek over the last 50 years would have killed for that shot, and they can just do it now. Go look at that again—can you imagine what Nick Meyer would have done to have been able to do that in Wrath of Khan? Or Robert Wise?
  • I’m a simple man, with simple tastes, and someone on the bridge going on the shipwide intercom with a warning always works for me.
  • And big fan of the pulsing movie-era “alert condition red” logo.
  • This also gives me an opportunity to introduce my invention of The Mitchell Index. It goes like this: the quality of a given episode of SNW is directly proportional to a) whether Jenna Mitchell is in the show and b) how many lines she has. So far, it’s been remarkably accurate, and this clip is close to the highest score yet recorded. She even gets the big idea!
  • Speaking of Mitchell, love the way she tags the Gorn with a real torpedo too; sure, you gotta make the dud look good, but also: their shields are down and screw those guys.
  • Heh, “Let’s hit it.” Hell yeah.
  • Man, I love this show.
Gabriel L. Helman

Two Weeks and Change

This election is starting to feel like that shot at the end of Return of the Jedi where the Falcon and a TIE Fighter are screaming up the Death Star tunnel away from the wall of flame. In just over two weeks we’ll know which ship we’re in.

The last month or so seems to have really borne out my initial impression that there are no undecided voters; the polls have been basically rock solid the whole time, despite an absurd number of plot twists and things that would have blown up any other election. I wouldn’t have thought a three-way race between “harm reduction”, “fascism”, and “well, the leopards won’t eat my face” would be this close, but there you go. Everyone decided four years ago.

On the other hand, who knows? Part of being an American is saying things like “I don’t want to sound like a conspiracy theorist, but….” and then a decade later finding out everything you suspected was right, and then some. Polling is clearly broken this cycle, but we won’t know how cooked it was until after the fact. Everyone who puts out poll numbers or talks about them has a strong financial incentive to make it sound like a close race no matter what; both so they have something to keep talking about and so that they don’t have to answer questions about why their methodologies were wrong. In retrospect, one of the most consequential political events of the last decade was the NYT website poll tracker meter slamming all the way towards Clinton on election night in 2016; everyone in the business looked at that the next morning and thought “never again.”

On a more positive note, how great is it that Jimmy Carter managed to hit his goal of living long enough to vote for Harris? There have been at least three complete narratives about “Jimmy Carter” since I’ve been an adult, and I’m glad he lived long enough to see it finally land on “Beloved and Respected Elder Statesman.” (And, that his long-term rep has increased in a directly inverse proportion to Reagan. Speaking of consequential political events, it’s also nice to see the 1980 election being more widely acknowledged as the course-of-civilization changing fuckup that it was. Here’s hoping we avoid another one.)

Harris/Walz ’24. Let’s clear the tunnel.

software forestry Gabriel L. Helman

Software Forestry 0x05: Cutting Trails

The lived reality of most Software Foresters is that we spend all our time with large systems we didn’t see the start of, and won’t see the end of. There’s a lot of takes out there about what makes a system “Legacy” and not “just old”, but one of the key things is that Legacy Systems are code without continuity of philosophy.

Because almost certainly, the person who designed and built it in the first place isn’t still here. The person that person trained probably isn’t still here. Given the average tenure time in tech roles, it’s possible the staff has rolled over half-a-dozen or more times.

Let me tell you a personal example. A few lifetimes ago, I worked on one of these two-decade old Software Forests. Big web monolith, written in what was essentially a homebrew MVC-esque framework which itself was written on top of a long-deprecated 3rd party UI framework. The file layout was just weird. There clearly was a logic to the organization, but it was gone, like tears in the rain.

Early on, I had a task where I needed to add an option to a report generator. From the user perspective, I needed to add an option to a combobox on a web form, and then when the user clicked the Generate button, read that new option and punch the file out differently.

I couldn’t find the code! I finally asked my boss, “hey, is there any way to tell which file has which UI pages?” The response was, “no, you just gotta search.”

As they say, Greppability Is an Underrated Code Metric.

(Actually, I’ve worked on two big systems now where I had essentially this exact conversation. The other one was the one where one of the other engineers described it as having been built by “someone who knew everything about how JSP tags worked except when to use them.”)

So you search for the distinctive text in the button, or the combo box, or something on the page. You find the UI. Then you start tracing in. Following the path of execution, dropping into undocumented methods with unhelpful names, bouncing into weird classes with no clear design, strange boundaries, one minute a function with a thousand lines, the next minute an inheritance hierarchy 10 levels of abstraction deep to call one line.

At this point you start itching. “I could rewrite all of this,” you think. “I could get this stood up in a weekend with Spring Boot/React/Ruby on Rails/Elixir/AWS Lambdas/Cool Thing I Used Last”. You start gazing meaningfully at the copy of the Refactoring book on the shelf. But you gotta choke down that urge to rebuild everything. You have a bug you have to fix or a feature to deploy. But it’s not actually going to get better if you keep digging the hole deeper. You gotta stop digging, and leave things better for the next person.

You need to Cut a Trail.

1. Leave Trail Markers.

First thing is, you have to figure out what it does now before you change anything. In a sane, kind world, there would be documentation, diagrams, clear training. And that does happen, but very, very rarely. If you’re very lucky, there’s someone who can explain it to you. Otherwise, you have to do some Forensic Architecture.

Talk to people. Add some logging. Keep clicking and watching what it does. Step through it in a debugger, if you can and if that helps, although I’ve personally found that just getting a debugger working on a live system is often more work than it’s worth for the information you get out of it. But most of all, read. Read the code closely, load as much of that system’s state and behavior into your mind as you can. Read it like you’re in High School English and trying to pull the symbolism out of The Grapes of Wrath or The Great Gatsby. That weird function call is the light at the end of the pier, those abstractions are the eyes on the billboard—what do they mean? Why are they here? How does any of this work?

You’ll get there, that’s what we do. There’s a point where you’ll finally understand enough to make the change you want to make. The trick is to stop at this point, and write everything down. There’s a “haha only serious” joke that code comments are for yourself in six months, but—no. Your audience here is you, a week ago. Write down everything you needed to know when you started this. Every language has a different way to do embedded documentation or comments, but they all have a way to do it. Document every method or function that your explored call path went through. Write down the thing you didn’t understand when you started, the strange overloaded behavior of that one parameter, what that verb really means in the function name, as much as possible, why it does what it does.

Take an hour and draw the diagram you wish you’d had. Write down your notes. And then leave all that somewhere that other people can find it. If you’re using a language where the embedded documentation system can pull in external files, check that stuff right on in to the codebase. Most places have an internal wiki. Make a page for your team if there isn’t one. Under that, make a page for the app if it doesn’t have one. Then put all that you’ve learned there.
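For a sense of what those embedded trail markers might look like, here’s a small hypothetical Python example. The function, its overloaded parameter, and the doc path are all invented, but this is the flavor of note that saves the next person (or you, in a year) most of a week:

```python
def sync_accounts(batch, mode="delta"):
    """Push account changes to the billing system.

    Trail markers for the next person (and for me, a week ago):

    * "sync" here really means "export": nothing is ever read back from billing.
    * `mode` is overloaded. "delta" sends only rows touched since the last run;
      "full" re-sends everything *and* clears the retry queue, which is why the
      nightly job passes "full" on Sundays.
    * Call-path diagram and terminology notes: docs/billing-export.md.
    """
    ...
```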

Something else to make sure to document early on: terminology. Everyone uses the same words to mean totally different things. My personal favorite example: no two companies on earth use the word “flywheel” the same way. It doesn’t matter what it was supposed to mean! Ask. Then write it down. The weird noun you tripped over at the start of this? Put the internal definition somewhere you would have found it.

People frequently object that they don’t have the time to do this, to which I say: you’ve already done the hard part! 90% of the time for this task was figuring it out! Writing it down will take a fraction of the time you already had to spend, I promise. And when you’re back here in a year, the time you save in being able to reload all that mental state is going to more than pay for that afternoon you spent here.

2. Write Tests.

Tests are really underrated as a documentation and exploration technique. I mean, using them to actually test is good too! But for our purposes we’re not talking about formal TDD or a Red-Green-Refactor–style approach. That weird function? Slap some mocks and stubs together and see what it does. Throw some weird data at it. Your goal isn’t to prove it correct, but to act like one of those Edwardian Scientists trying to figure out how air works.
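As a sketch of what that can look like in practice, here are a couple of hypothetical characterization tests in the pytest style. The apply_discount helper and the behaviors being pinned down are invented; the point is that each test records something you learned by poking the code, not something you’re asserting ought to be true.

```python
# Characterization tests: they pin down what a (hypothetical) undocumented
# helper actually does today, not what it "should" do.
from legacy.pricing import apply_discount  # hypothetical module under test

def test_discount_is_applied_after_rounding():
    # Learned by experiment: the total is rounded *before* the discount.
    assert apply_discount(total=10.005, percent=10) == 9.01

def test_negative_percent_is_silently_treated_as_zero():
    # Surprising, but other parts of the system rely on this today.
    assert apply_discount(total=100.0, percent=-5) == 100.0
```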

Another Forest I inherited once, which was a large app that customers paid real money to use, had a test suite of 1 test—which failed. But that was great, because there was already a place to write and run tests.

Tests are a net benefit, they don’t all have to be thorough, or fall into strict unit/integration/acceptance boundaries. Sometimes, it’s okay to put a couple of little weird ones in there that exist to help explain what some undocumented code does.

If you’re unlucky enough to run into a Forest with no test runner, trust me, take the time to bolt one on. It doesn’t have to be perfect! But you’ll make that time back faster than you’d believe.

When you get done, in addition to whatever “normal” Unit or Integration tests your process requires or requests, write a really small test that demonstrates what you had to do. Link that back to the notes you wrote, or the documentation you checked in.

3. A Little Cleanup, some Limited Refactoring

Once you have it figured out, and have a test or two, there’s usually two strong responses: either “I need to replace all of this right now”, or “This is such a mess it’ll never get better.”

So, the good news is that both of those are wrong! It can get better, and you really probably shouldn’t rework everything today.

What you should do is a little cleanup. Make something better. Fix those parameter names, rename that strangely named function. Heck, just fix the tenses of the local variables. Do a little refactoring on that gross class, split up some responsibilities. It’s always okay to slide in another shim or interface layer to add some separation between tangled things.
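That last move, the shim or interface layer, might look something like this minimal Python sketch. The names are invented for illustration: define the narrow interface the new code actually needs, and let one small adapter be the only place that knows the legacy object’s quirks.

```python
from typing import Protocol

class ReportSource(Protocol):
    """The narrow interface the new code actually needs."""
    def fetch_rows(self, report_id: str) -> list[dict]: ...

class LegacyReportAdapter:
    """Thin wrapper around the tangled legacy engine; the only place that
    knows about its odd method names and side effects."""

    def __init__(self, legacy_engine):
        self._engine = legacy_engine

    def fetch_rows(self, report_id: str) -> list[dict]:
        # The legacy call also primes a cache; callers no longer need to know that.
        self._engine.prepare(report_id)
        return self._engine.run_and_collect(report_id)
```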

(Don’t leave a huge mess in the VC diff, though, please.)

Leave the trail a little cleaner than when you found it. Doesn’t have to be a lot, we don’t need to re-landscape the whole forest today.

4. Write the New Stuff Right (as possible)

A lot of the time, you know at least one thing the original implementors didn’t: you know how the next decade went. It’s very easy to come in much later, and realize how things should have been done in the first place, because the system has a lot more years on it now than it used to. So, as much as you can, build the new stuff the right way. Drop that shim layer in, encapsulate the new stuff, lay it out right. Leave yourself a trail to follow when you come back and refactor the rest of it into shape.

But the flip side of that is:

5. Don’t be a Jerk About It

Everyone has worked on a codebase where there’s “that” module, or library, or area, where “that guy” had a whole new idea about how the system should be architected, and it’s totally out of place with everything else. A grove of palm trees in the middle of a redwood forest. Don’t be that guy.

I worked on an e-commerce system once where the Java package name was something like com.company.services.store, and then right next to it was com.company.services.store2. The #2 package was one former employee’s personal project to refactor the whole system; they had left years before with it half done, but of course it was all in production, and it was a crapshoot which version other parts of the system called into. Don’t do that.

After you’re gone, when someone looks at the version control change log for this part of the system, you want them to see your name and think “oh, this one had the right idea.”

Software Forestry is a group project, for the long term. Most of the time, “consistency and familiarity” are more valuable than some kind of quixotic quest for the ideal engineering. It’s okay, we’ll get there. Keep leaving it better than you found it. It’ll be worth it.

🌲

You’ll be amazed what your overgrown codebase looks like after a couple months of doing this. That tangled overgrowth starts to look positively tidy.

But sometimes, just cutting trails doesn’t get you there. Next Time: The Controlled Burn.

Gabriel L. Helman

Ten Years of the Twelfth Doctor

I missed it with everything else going on at the time, but this past August marks ten years since the debut of Peter Capaldi as the Twelfth Doctor Who, who is, without a doubt, my all-time favorite version of the character.

His take on the character boiled down to, basically, “Slightly Grumpy Aging Punk Space Dad”, and it turns out that’s exactly what I always wanted. Funny, weird, a little spooky, “kind” without necessarily being “nice”. If nothing else, the Doctor should be the coolest weird uncle possible, and, well, look at that picture! Perfection.

(This is a strange thing for someone who grew up on PBS reruns of Tom Baker to admit. But when I’m watching something else and wishing the Doctor would show up and kick things into gear, it’s now Capaldi I picture instead of Baker.)

Unlike some of the other versions of the character, Twelve took a little while to dial in. So it’s sort of appropriate I didn’t remember this anniversary until now, because this past weekend was the 10th anniversary of the eighth episode of his inaugural series, “Mummy on the Orient Express.” “Mummy” wasn’t the best episode of that season—that was easily “Listen” or “Dark Water”, but “Mummy” was the episode where I finally got what they were doing.

This is slightly embarrassing, because “Mummy” is also the most blatantly throwback episode of the year; it’s a story that could have been done with very few tweaks in 1975 with Tom Baker. The key, though, is those differences in approach, and one of the reasons a long running show like Doctor Who goes back and revisits old standards is to draw a contrast between how they were done then vs now.

Capaldi, unlike nearly all of his predecessors, was a genuinely well-known actor before climbing on board the Tardis. The first place I saw him was as the kid that falls in love with the (maybe?) mermaid in the criminally under-seen Local Hero. But his signature part was Malcolm Tucker in The Thick of It. The Thick of It is set “behind the scenes” of the British government, and is cut from the British comedy model of “everyone is an idiot trying to muddle through”. The Thick of It takes that model one step further, though, and posits that if that’s true, there must be a tiny group of non-idiots desperately keeping the world together. That’s Malcolm Tucker, nominally the government’s Director of Communications, but in reality the Prime Minister’s enforcer, spin doctor, and general Fixer. Tucker is clearly brilliant, the lone competent man surrounded by morons, but also a monster, and borderline insane. Capaldi plays him as openly menacing, but less straightforwardly malevolent than simply beyond caring about anyone, constantly picking up the pieces from the problems that the various other idiots in Government have caused. Capaldi manages to play Tucker as clearly always thinking, but it’s never clear what he’s actually thinking about.

Somehow, Tucker manages to be both the series’ main antagonist and protagonist at the same time. And the character also had his own swearing consultant? It’s an incredible performance of a great part in a great show. (On the off chance you never saw it, he’s where “omnishambles” came from, and you should stop reading this right now and go watch that show; I’ll wait for you down at the next paragraph.)

So the real problem for Doctor Who was that “Malcolm Tucker as The Doctor” was simultaneously a terrible idea and one that was clearly irresistible to everyone, including show-runner Steven Moffat and Capaldi himself.

The result was that Capaldi had a strangely hesitant first season. His two immediate predecessors, David Tennant and Matt Smith, leapt out of the gate with their takes on the Doctor nearly fully formed, whereas it took a bit longer to dial in Capaldi. They knew they wanted someone a little less goofy than Smith and maybe a little more standoffish and less emotional, but going “Full Tucker” clearly had strong gravity. (We’ve been working our way on-and-off through 21st century Who with the kids, and having just rewatched Capaldi’s first season, in retrospect I think he cracked what he was going to do pretty early, but everyone else needed to get Malcolm Tucker out of their systems.)

Capaldi is also an excellent actor—probably the best to ever play the part—and also one who is very willing to not be the center of attention in every scene, so he hands a lot of the spotlight off to his co-lead Jenna Coleman’s Clara Oswald, which makes the show a lot better, but left him strangely blurry early on.

As such, I spent a lot of that first season asking “where are they going with this?” I was enjoying it, but it wasn’t clear what the take was. Was he… just kind of a jerk now? One of the running plot lines of the season was the Doctor wondering if he was a good man or not, which was a kind of weird question to be asking in the 51st year of the show. There was another sideplot where he didn’t get along with Clara’s new boyfriend, where it also wasn’t clear what the point was. Finally, the previous episode ended with Clara and the Doctor having a giant argument that would normally be the kind of thing you’d do when a cast member was leaving, but Coleman was staying for at least the rest of the year? Where was all this going?

For me, “Mummy” is where it all clicked: Capaldi’s take on the part, what the show was doing with Clara, the fact that their relationship was as toxic as it looked and that was the point.

There are so many great little moments in “Mummy”: from the basic premise of “there’s a mummy on the Orient Express… in space!”, to the “’20s art deco in the future” design work, to the choice of song that the band is singing, to the Doctor pulling out a cigarette case and revealing that it’s full of jelly babies.

It was also the first episode of the year that had a straightforward antagonist, one that the Doctor beat by being a little bit smarter and a little bit braver than everyone else. He’d been weirdly passive up to this point; or rather, the season had a string of stories where there wasn’t an actual “bad guy” to be defeated, and had more complex, ambiguous resolutions.

It’s the denouement where it all really landed for me. Once all the noise was over, the Doctor and Clara have a quiet moment on an alien beach where he explains—or rather she realizes—what his plan had been all along and why he had been acting the way he had.

The previous episode had ended with the two of them having a tremendous fight, fundamentally a misunderstanding about responsibility. The Doctor had left Clara in charge of a decision that normally he’d have taken; Clara was angry that he’d left her in the lurch, he thought she deserved the right to make the decision.

The Doctor isn’t interested in responsibility—far from it, he’s one of the most responsibility-averse characters in all of fiction—but he’s old, and he’s wise, and he’s kind, and he’s not willing to not help if he can. And so he’ll grudgingly take responsibility for a situation if that’s what it takes—but this version is old enough, and tired enough, that he’s not going to pretend to be nice while he does it.

He ends by muttering, as much to himself as to Clara, “Sometimes all you have are bad choices. But you still have to choose.”

And that’s this incarnation in a nutshell—of course he’d really rather be off having a good time, but he’s going to do his best to help where he can, and he isn’t going to stop trying to help just because all the options are bad ones. He’d really rather the Problem Trolley be going somewhere nice, but if someone has to choose which track to go down, he’ll make the choice.

“Mummy” is the middle of a triptych of episodes where Clara’s world view fundamentally changed. In the first, she was angry that the Doctor expected her to take responsibility for the people they came across, here in the second she realized why the Doctor did what he did, and then in the next she got to step in the Doctor’s shoes again, but this time understood.

The role of the “companion” has changed significantly over the years. Towards the end of the old show they realized that if the title character is an unchanging mostly-immortal, you can wrap an ongoing story around the sidekick. The new show landed on a model where the Doctor is mostly a fixed point, but each season tells a story about the companion changing, sometimes to the point where they don’t come back the next year.

Jenna Coleman was on the show for two and a half seasons, and so the show did three distinct stories about Clara. The first two stories—“who is the impossible girl” and “will she leave the show to marry the boring math teacher”—turned out to be headfakes, red herrings, and actually the show was telling another story, hidden in plain sight.

The one story you can never tell in Doctor Who is why that particular Time Lord left home, stole a time capsule, and became “The Doctor”. You can edge up against it, nibble around the edges, imply the hell out of things, but you can’t ever actually tell that story. Except, what you can do is tell the story of how someone else did the same thing, what kind of person they had to be ahead of time, what kinds of things had to happen to them, what did they need to learn.

With “Mummy”, Clara’s fate was sealed—there was no going back to “real life”, or “getting married and settling down”, or even “just leaving”. The only options left were Apotheosis or Death—or, as it turns out, both, but in the other order. She had learned too much, and was on a collision course with her own stolen Tardis.

And standing there next to her was the aging punk space dad, passing through, trying to help. My Doctor.


Both Moffat’s time as show-runner and Capaldi’s time as the Doctor have been going through a much-deserved reappraisal lately. At the time, Capaldi got a weirdly rough reaction from online corners of the fanbase. Partly this was because of the aforementioned slow start, and partly because he broke the 21st century Who streak of casting handsome young men. But mostly this was because of a brew of toxic “fans”, bad-faith actors, and various “alt-right” grifters. (You know, Tumblr.) Because of course, this last August was also the 10th anniversary of “GamerGate”. How we ended up in a place where the unchained Id of the worst people alive crashed through video game and science fiction fandoms, tried to fix the Hugos, freaked out about The Last Jedi so hard it broke Hollywood, and then elected a racist game show host to be president is a topic for another time, but those people have mostly moved the grift on from science fiction—I mean, other than the Star Wars fanbase, which became a permanent host body.

The further we get from it, the more obvious what a grift it was. It’s hard to describe how utterly deranged the Online Discourse™ was. There was an entire cottage industry telling people not to watch Doctor Who because of the dumbest reasons imaginable in the late twenty-teens, and those folks are just… gone now, and their absence makes it even more obvious how spurious the “concerns” were. Because this was also the peak “taking bad-faith actors seriously” era. The general “fan” “consensus” was that Capaldi was a great actor let down by bad writing, in the sense of “bad” meaning “it wasn’t sexist enough for me.”

There’s a remarkable number of posts out there on what’s left of the social web of people saying, essentially, “I never watched this because $YOUTUBER said it was bad, but this is amazing!” or “we never knew what we had until it was gone!”

Well, some of us knew.

I missed this back in November, but the official Doctor Who magazine did one of their rank-every-episode polls for the 60th anniversary. They do this every decade or so, and they’re always interesting, inasmuch as they’re a snapshot of the general fan consensus of the time. They’re not always a great view on how the general public sees things, since a poll conducted by the official magazine is strongly self-selecting for Fans with a capital F.

I didn’t see it get officially posted anywhere, but most of the nerd news websites did a piece on it, for example: Doctor Who Fans Have Crowned the Best Episode – Do You Agree? | Den of Geek. The takeaway is that the top two are Capaldi’s, and half of the top ten are Moffat’s. That would have been an unbelievable result a decade ago, because the grifters would have swamped the voting.

Then there’s this, which I’ve been meaning to link to for a while now. Over in the burned-out nazi bar where twitter used to be, a fan of Matt Smith’s via House of the Dragon found out that he used to be the lead of another science fiction show and started live tweeting her watch through Doctor Who: jeje (@daemonsmatt). She’s up through Capaldi’s second season now, as I type this, and it’s great. She loves it, and the whole thread of threads is just a river of positivity. And even in the “oops all nazis” version of twitter, no one is showing up in the comments with the same grifter crap we had to deal with originally, those people are just gone, moved on to new marks. It’s the best. It’s fun to see what we could have had at the time if we’d run those people off faster.

This all feels hopeful in a way that’s bigger than just people discovering my favorite version of my favorite show. Maybe, the fever is finally starting to break.

Gabriel L. Helman

How to Monetize a Blog

If, like me, you have a blog that’s purely a cost center, you may be interested in How to Monetize a Blog. Lotta good tips in there!

(Trust me, and make sure you scroll all the way to the end.)

software forestry Gabriel L. Helman

Software Forestry 0x04: Library Upgrade Week

Here at Software Forestry we do occasionally try to solve problems instead of just turning them into lists of smaller problems, so we’re gonna do a little mood pivot here and start talking about how to manage some of those forms of overgrowth we talked about last time.

First up: let me tell you the good news about Library Upgrade Week.

Just about any decently sized software system uses a variety of third party libraries. And why wouldn’t you? The multitude of high-quality libraries and frameworks out there is probably the best outcome of both the Open Source movement and Object-Oriented Software design. The specific mechanics vary between languages and their practitioners’ cultures, but the upshot is that very rarely does anyone build everything from scratch. There’s no need to go all historical reenactment and write your own XML parser.

Generally, people keep those libraries pinned and stay on one fixed version, rather than schlepping in a new version every time an update happens. This is a good thing! Change is risk, and risk should be taken on intentionally. The downside is that those libraries keep moving forward, the version you’re using slips out of date, and now you have a bunch of Overgrowth. And so that means you need to upgrade them.

The upshot of all that is that on a semi-regular basis, we all need to slurp in a bunch of new code that no one on the payroll wrote and no one really knows how to test. Un-Reviewed Code Is Tech Debt, and one of the mantras of writing good tests is “don’t test the framework”, so this is always a little iffy.
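
One way to take some of the edge off is a thin “boundary” test that pins down the handful of library behaviors you actually rely on, without testing the framework itself. Here’s a minimal sketch, assuming Jackson and JUnit 5 purely for illustration; the payload shape and names are hypothetical, not anything from a particular codebase:

```java
// A minimal sketch of a library-boundary test, assuming Jackson and JUnit 5.
// The goal isn't to test the framework; it's to pin down the few behaviors
// we depend on, so an upgrade that changes them fails loudly and early.
import com.fasterxml.jackson.databind.ObjectMapper;
import org.junit.jupiter.api.Test;

import java.util.Map;

import static org.junit.jupiter.api.Assertions.assertEquals;

class JsonLibraryBoundaryTest {

    private final ObjectMapper mapper = new ObjectMapper();

    @Test
    void parsesTheShapeOfPayloadWeActuallyReceive() throws Exception {
        // Hypothetical payload; substitute whatever your system really reads.
        String json = "{\"id\":\"inv-42\",\"cents\":1999}";

        Map<?, ?> parsed = mapper.readValue(json, Map.class);

        assertEquals("inv-42", parsed.get("id"));
        assertEquals(1999, parsed.get("cents"));
    }
}
```

It’s not much, but a couple of these at the seams means “uprev the JSON parser” starts from “run the boundary tests” instead of “hope for the best.”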

It’s incredibly easy to just keep letting those weeds grow a little longer. “The new version doesn’t have anything we need”, “there’s no bugs”, “if it ain’t broke don’t fix it”, and so on. They always take too long, don't usually deliver immediate gratification, and are hard to schedule. It’s no fun, and no one likes to do it.

The trick is to turn it into a party.

It works like this: set aside the last week of the quarter to concentrate on 3rd party library upgrades. Regardless of what your formal planning cycle or process is, most businesses tend to operate on quarters, and there’s usually a little dead time at the end of the quarter you can repurpose.

The Process:

  1. Form squads. Each squad is a group of like-minded individuals focused on a single 3rd party library or framework. Squads are encouraged to be cross-team. Each squad will focus on updating that 3rd party library in all applications or places where it is used. The intent is to make this a group event, where people can help each other out. Participation is not mandatory.

  2. Share squad membership and goals ahead of time. Leadership should reserve the right to veto libraries as “too scary” or “not scary enough”. Libraries with high-severity alerts or known CVEs are good candidates.

  3. That week, each squad self organizes and works as a group through any issues caused by the upgrade. Other than major outages or incidents, squad members should be excused from other “run the business” type work for that week; or rather, the library upgrades are “the business.” Have fun!

  4. On that Friday hold the Library Upgrade Week Show-n-Tell. Every team should demo what they did, how they did it, and what it took to pull it off. Tell war stories, hold a happy hour, swap jokes. If a squad doesn’t finish, that’s okay! The expectation is that they’ll have learned a lot about what it’ll take to finish, and that work will be captured in the relevant team’s todo lists. If you’re in a process with short develop-deploy increments (like sprints) you can make the library upgrade(s) a release on its own. Ideally you already have a way to sign off a release as not containing regressions, and so a short release with just a library upgrade is a great way to make sure you didn’t knock some dominoes over.

But wait! There's more! All participants will vote on awards to give to squads, for things like:

  • Error message with least hits on Stack Overflow
  • Largest version number jump
  • Most lines changed
  • Fewest lines changed
  • Best team name
  • Best presentation

Go nuts! Have a great time!

🌲

Yes, it’s a little silly, but that’s the point. I’ve deployed a version of this at a couple of jobs now, and it’s remarkable how effective it is. The first couple of cycles people hit the “easy” ones—uprev the logging library or a JSON parser or something. But then, once people know that Library Upgrade Week is coming, they start thinking about the harder stuff, and you start getting people saying they want to take a swing at the main framework, or the main language version, or something else load-bearing. It’s remarkable how much progress two or three people can make on a problem that looks unsolvable when they have an uninterrupted week to chew on it. (If you genuinely can’t spare a handful of folks to do some weeding four weeks out of the year, that’s a much larger problem than out of date libraries, and you should go solve that problem first. Like, right now.)

There’s an instinct to take the core idea of scheduling this kind of maintenance a few times a year, but leave off the part where it’s a party. This is a mistake. This is work people want to do even less than their usual work; the trick is to make everything around it fun.

We’re Foresters, and both we and the Forest are here long term. The long term health of both depends on the care of the Forest being something that the Foresters enjoy, and it’s okay to stack that deck in your favor.

Gabriel L. Helman

Wacky Times for AI

Been a wacky month or two for AI news! OpenAI is reorganizing! Apple declined to invest! Whatsisname decided he wanted equity after all! The Governor of CA vetoed an AI safety bill! Microsoft is rebooting Three Mile Island, which feels like a particularly lazy piece of satire from the late 90s that escaped into reality! Study after study keeps showing no measurable benefit to AI deployment! The web is drowning in AI slop that no one likes!

I don’t know what any of that means, but it’s starting to feel like we’re getting real close to the part of The Sting where Kid Twist tells Quint from Jaws something confusing on purpose.

But, in our quest here at Icecano to bring you the best sentences from around the web, I’d like to point you at The Subprime AI Crisis because it includes this truly excellent sentence:

Generative AI must seem kind of magical when your entire life is either being in a meeting or reading an email

Oh Snap!

Elsewhere in that piece it covers the absolutely eye-watering amounts of money being spent on the plagiarism machine. There are bigger problems than the cost; the slop itself, the degraded information environment, the toll on the actual environment. But man, that is a hell of a lot of money to just set on fire to get a bunch of bad pictures no one likes. The opportunity cost is hard to fathom; imagine what that money could have been spent on! Imagine how many actually cool startups that would have launched! Imagine how much real art that would have bought!

But that’s actually not what I’m interested in today, what I am interested in are statements like these:

State of Play: Kobold Press Issues the No AI Pledge

Legendary Mario creator on AI: Nintendo is “going the opposite direction”

I stand by my metaphor that AI is like asbestos, but increasingly it’s also the digital equivalent of High Fructose Corn Syrup. Everyone has accepted that AI stuff is “not as good”, and it’s increasingly treated as low-quality filler, even by the people who are pushing it.

What’s intriguing to me is that companies whose reputation or brand centers around creativity or uniqueness are working hard to openly distance themselves. There’s a real “organic farm” energy, or maybe more of a “restaurant with real food, not fast food.”

Beyond the moral & ethical angle, it gives me hope that “NO AI” is emerging as a viable advertising strategy, in a way that “Made with AI” absolutely isn’t.

Gabriel L. Helman

Dungeons & Dragons (2024): Trying to Make a Big Tent Bigger

Dungeons & Dragons is a weird game. I don’t mean that as some kind of poetic statement about role-playing games in general, I mean that specifically within the world of tabletop RPGs, D&D is weird. It’s weird for a lot of reasons, including, but not limited to:

  1. It’s the only TTRPG with actual “real world” name recognition or any sort of cross-over brand awareness.
  2. For most of its existence, it hasn’t been a very good game.

And then for bonus points, it’s not even one game! Depending on how you count it’s at least six different related but totally incompatible games.

The usual example for a brand name getting turned into a generic noun is “kleenex”, but the thing where “Dungeons and Dragons” has become a generic noun for all RPGs is so strange.

It’s so much better known than everything else that it’s like if all TV shows were called MASH, as in “hey, that new MASH with the dragons is pretty good,” or “I stayed in and rewatched that MASH with the time-traveller with the police box,” etc.

There was a joke in the mid-90s that all computer games got pitched as “it’s like DOOM, but…” and then described the game regardless of how much it was actually like Doom; “It’s like DOOM except it’s not in first person, it’s not in real time, you don’t have a gun, you’re a pirate, you’re not in space, and instead you solve puzzles”. D&D is like that but for real.

Which is a testament to the power of a great name and the first mover advantage, because mechanically, the first 30-or-so years of the game were a total mess. In a lot of ways, RPGs became an industry because everyone who spent more than about 90 seconds with D&D in the 70s, 80s, or 90s immediately thought of ten ways to improve the game, and were right about at least eight of them. (One of the best running bits in Shannon Appelcline’s seminal Designers & Dragons is how many successful RPG companies literally started like this.)

And this mechanical weirdness isn’t just because it was first, but because of things like Gary Gygax’s desire to turn it into a competitive sport played at conventions, but also make sure that Dave Arneson didn’t get paid any royalties, and also show off how many different names of polearms he knew. As much as RPGs are sold as “do anything, the only limit is your imagination!” D&D has always been defined by its weird and seemingly arbitrary limits. So there’s a certain semi-effable “D&D-ness” you need for a game to be “Dungeons & Dragons” and not just another heroic fantasy game, not all of which makes for a great system. It’s a game where its flaws have become part of the charm; the magic system is objectively terrible, but is also a fundamental part of its D&D-ness.

The upshot of all that is that for most of its life, D&D had a very clear job within the broader TTRPG world: it was the game that onboarded new players to the hobby, who then immediately graduated to other, better games. The old Red Box was one of the great New Customer Acquisition products of all time, but most people proceeded to bounce right off Advanced D&D, and then moved on to Ninja Turtles, or Traveller, or Vampire, or GURPS, or Shadowrun, or Paranoia, or Star Wars, or any number of other systems that were both better games and were more tailored to a specific vibe or genre, but all assumed you already knew how to play. It wasn’t a game you stuck with. You hear stories about people who have been playing the same AD&D 2nd Edition game for years, and then you ask a couple of follow-up questions and realize that their house rules make the Ship of Theseus look under-remodeled.

Now, for the hobby at large that’s fairly healthy, but if your salary depends on people buying “Dungeons & Dragons” books specifically, I can see how that would be fairly maddening. The game, and the people who make it, have been in an ongoing negotiation with the player base to find a flavor of the game that people are actually willing to stick around for. This results in the game’s deeply weird approach to “Editions”, where each numbered edition is effectively a whole new game, always sold with a fairly explicit “Look! We finally fixed it!”

This has obviously been something of a mixed bag. I think a big part of the reason the d20 boom happened at the turn of the century was that for the first time, 3rd edition D&D was actually a good game. Not perfect, but finally worth playing. 4e, meanwhile, was the best-designed game that no one wanted to play, and it blew up the hobby so much that it both created Pathfinder and served as one of the sparks to light off the twenty-teens narrative RPG boom.

Another result of this ongoing negotiation is that D&D also has a long tradition of “stealth” updates, where new books come out that aren’t a formal revision, but if you pull the content in it dramatically changes the game. AD&D 1 had Oriental Adventures and Unearthed Arcana, AD&D 2 had those Player’s Option books (non-weapon proficiencies!), Basic had at least three versions (the original B/X, the BECMI sets, and then the Rules Cyclopedia). 3rd had the rare Formal Update in the form of the 3.5 release, but it also had things like the Miniatures Handbook (which, if you combine that with the SAGA Edition of Star Wars, makes the path from 3 to 4 more obvious.) 4e had Essentials.

2024 is a radically different time for tabletop games than 2014 was. As the twenty-teens dawned, there was a growing sense that maybe there just wasn’t going to be a commercial TTRPG industry anymore. Sales were down, the remaining publishers were pivoting to PDF-only releases, companies were either folding or moving towards other fields. TTRPGs were just going to be a hobbyist niche thing from here on out, and maybe that was going to be okay. I mean, text-based Interactive Fiction Adventure games hadn’t been commercially viable since the late 80s, but the Spring Thing was always full of new submissions. I remember an article on EN World or some such in 2012 or 2013 that described the previous year’s sales as “an extinction level event for the industry.”

Designers & Dragons perfectly preserves the mood from the time. I have the expanded 2014 4-volume edition, although the vast majority of the content is still from the 2011 original, which officially covers the industry up to 2009 and then peeks around the corner just a bit. The sense of “history being over” pervades the entire work; there’s a real feeling that the heyday is over, and so now is the time to get the first draft of history right.

As such, the Dungeons & Dragons (2014) books had a certain “last party of summer vacation” quality to them. The time where D&D would have multiple teams with cool codenames working on different parts of the game was long past, this was done by a small group in a short amount of time, and somewhat infamously wasn’t really finished, which is why so many parts of the book seem to run out of steam and end with a shrug emoji and “let the DM sort it out.” The bones are pretty good, but huge chunks of it read like one of those book reports where you’re trying to hide the fact you only read the first and last chapters.

That’s attracted a lot of criticism over the years, but in their (mild) defense, I don’t think it occurred to them that anyone new was going to be playing Fifth. “We’re gonna go out on a high note, then turn the lights out after us.” Most of the non-core book product line was outsourced for the first year or so, it was all just sorta spinning down.

Obviously, that’s not how things went. Everyone has their own theory about why 5th Edition caught fire the way no previous edition had, and here’s mine: The game went back to a non-miniatures, low-math design right as the key enabling technology for onboarding new players arrived: Live Play Podcasts. By hook or by crook, the ruleset for 5E is almost perfect for an audio-only medium, and moves fast, in a way that none of the previous 21st century variants did.

And so we find ourselves in a future where D&D, as a brand, is one of Hasbro’s biggest moneymakers.

Part of what drove that success is that Hasbro has been very conservative about changes to the game, which has clearly let the game flourish like never before, but the same issues are still there. Occasionally one of the original team would pop up on twitter and say something like “yeah, it’s obvious now what we should have done instead of bonus actions,” but nothing ever shipped as a product.

5th edition has already had its stealth update in the form of the Tasha/Xanathar/Mordenkainen triptych, but now we’ve got something that D&D really hasn’t had before: the 2024 books are essentially 5th Edition, 2nd Edition. Leading the charge of a strangely spaced-out release schedule is the new Player’s Handbook (2024).

Let’s start with the best part: The first thirty pages are a wonder. It opens with the best “what is an RPG” intro I have ever read, and works its way up though the basics, and by page 28 has fully explained the entire ruleset. To be clear: there aren’t later chapters with names like “Using Skills” or “Combat”, or “Advanced Rules”, this is it.

The “examples of play” are a real work of art. The page is split into two columns: the left side of the page is a running script-like dialogue of play, and the right side is a series of annotations and explanations describing exactly what rule was in play, why they rolled what they rolled, and what the outcome was. I’ve never seen anything quite like it.

This is followed by an incredibly clear set of instructions on how to create a character, and then… the rest of the book is reference material. Chapters on the classes, character origins, feats, equipment, spells, a map of the Planes, stat blocks for creatures to use as familiars or morph targets.

Finally, the book ends with its other best idea: the Rules Glossary. It’s 18 pages of The Rules, alphabetical by Formal Name, clearly written. There’s no flipping around in the book looking for how to Grapple or something, it’s in the glossary. Generally, the book will refer the reader to the glossary instead of stating a rule in place.

It’s really easy to imagine how to repackage this layout into a couple of Red Box–style booklets covering the first few levels. You can basically pop the first 30 pages out as-is and slap a cover on it that says “Read This First!”

Back when I wrote about Tales of the Valiant, I made a crack that maybe there just wasn’t a best order for this material. I stand corrected. It’s outstanding.

Design-wise the book is very similar to its predecessor: same fonts, same pseudo-parchment look to the paper, same basic page layout. My favorite change is that the fonts are all larger, which my rapidly aging eyes appreciate.

It’s about 70 pages longer than the 2014 book, and it wouldn’t surprise me to learn that both books have the same number of words and that the extra space is taken up with the larger text and more art. The book is gorgeous, and is absolutely chock full of illustrations. Each class gets a full-page piece, and then each subclass gets a half-page piece showing an example of that build. It’s probably the first version of this game where you can flip through the classes chapter, and then stop at a cool picture and go “hang on, I want to play one of THOSE”. The art style feels fresh and modern in a way that’s guaranteed to make everyone say “that is so twenties” years from now; the same way that the art for the original 3rd edition books looked all clean and modern at the time, but now screams “late 90s” in a way I don’t have the critical vocabulary to describe. (Remember how everything cool had to be asymmetrical for a while there? Good times!)

Some of the early art previewed included a piece with the cast from the 80s D&D cartoon drawn in the modern style of the book. At the time, I thought that was a weird piece of nostalgia bait: really? Now’s the time to do a callback to a 40-year old cartoon? Who’s the audience for that?

But I was wrong about the intent, because this book is absolutely full of all manner of callbacks and cameos. The DragonLance twins are in the first couple of pages, everyone’s favorite Drow shows up not long after, there’s a guy from Baldur’s Gate 3, the examples of play are set in Castle Ravenloft, there are Eberron airships, characters from the 80s action figure line, the idol from the old DMG cover, a cityscape of Sigil with the Lady floating down the street. It’s not a nostalgia play so much as it is a “big tent” play: the message, over and over again, is that everything fits. You remember some weird piece of D&D stuff from ages ago? Yeah, that’s in here too. Previous versions of this game have tended to start with a posture of “here’s the default way to play now”, with other “weirder” stuff floating in later. This takes the exact opposite approach: a full-throated “yes, and” to everything D&D. So not only does Spelljammer get a shoutout in the 2 page appendix about the planes, but rules for guns are in the main equipment chapter, the psionic subclasses are in the main book, airships are in the travel costs table. Heck, the para-elemental planes are in the inner planes diagram, and I thought I was the only person who remembered those existed.

And this doesn’t just mean obscure lore pulls, the art is a case study in how to do “actual diversity”. There’s an explosion of body types, genders, skin tones, styles, and everyone looks cool.

There’s a constant, pervasive sense of trying to make the tent as big and as welcoming as possible. Turns out “One D&D” was the right codename for this; it wasn’t a version number, it was a goal.

Beyond just the art, the 2024 book has a different vibe. The whimsy of the 2014 version is gone: the humorous disclaimer on the title page isn’t there, and there isn’t a joke entry for THAC0 in the index. If the 2014 book was an end-of-summer party, this is a start of the year syllabus.

The whole thing has been adjusted to be easier to use. The 2014 book had a very distinct yellowed-parchment pattern behind the text, the 2024 book has a similar pattern, but it’s much less busy and paler, so the text stands out better against the background. All the text is shorter, more to the point. The 2014 book had a lot of fluff that just kinda clogged up the rules when you were trying to look something up in a hurry, the 2024 book has been through an intense editing pass.

As an example: in the section for each class, each class ability has a subheading with the name of the power, and then a description, like this:

Invert the Polarity Starting at 7th level, your growing knowledge of power systems allows you to invert the polarity of control circuits, such as in teleport control panels or force fields. As a bonus action, you can add a d4 to attempts to control electrical systems. After using this power, you must take a short or long rest before using it again.

Now, it’s like this:

Level 7: Invert the Polarity Add 1d4 to checks made with the Sonic Screwdriver Tool. You regain this feature after a short or long rest.

For better or worse, it’s still 5th edition D&D. All the mechanical warts of the system are still there; the weird economy around Bonus Actions, too many classes have weird pools of bonus dice, the strange way that some classes get a whole set of “spell-like” powers to choose from, and other classes “just get spells.” There still isn’t a caster that just uses spell points. Warlocks still look like they were designed on the bus on the way to school the morning the homework was due. Inspiration is still an anemic version of better ideas from other systems. Bounded accuracy still feels weird if you’re not used to it. It’s still allergic to putting math in the text. It still tries to sweep more complex mechanics under the rug by having a very simple general rule, and then a whole host of seemingly one-off exceptions that feel like they could have just been one equation or table. The text is still full of tangled sentences about powers recharging after short and long rests instead of just saying powers can be used so many times per day or encounter. There’s still no mechanic for “partial success” or “success with consequences.” You still can’t build any character from The Princess Bride. If 5th wasn’t your jam, there’s nothing here that’ll change your mind.

On the other hand, the good stuff is largely left unchanged: The Advantage/Disadvantage mechanic is still brilliant. The universal proficiency bonus is still a great approach. Bounded Accuracy enables the game to stay fun long past the point where other editions crash into a ditch filled with endless +2 modifiers. It’s the same goofball combat-focused fantasy-themed superhero game it’s been for a long time. I’ve said many times, 5e felt like the first version of D&D that wasn’t actively fighting against the way I like to run games, and the 2024 version stays that way.

All that said, it feels finished in a way the 2014 book didn’t. It’s a significantly smaller mechanical change than 3 to 3.5 was, but the revisions are where it counts.

Hasbro has helpfully published a comprehensive list of the mechanics changes as Updates in the Player’s Handbook (2024) | Dungeons & Dragons, so rather than run down the whole list, here are the highlights that stood out to me:

The big one is that Races are now Species, Backgrounds have been reworked and made more important, and the pair are treated as “Origins”. This is a massive improvement: gone is the weird racial determinism, and where you grew up is now way more important than where your ancestors came from. There are some really solid rules for porting an older race or background into the new rules. The half-races are gone, replaced by “real Orcs” and the Aasimar and Goliaths being called up to the big leagues. Backgrounds in 2014 were kinda just there, a way to pick up a bonus skill proficiency; here they’re the source of the attribute bonus and an actual Feat. Choosing a pair feels like making actual choices about a specific character, in a way that previous editions, which sort of devolved that choice into “choose your favorite Fellowship member”, never did.

Multi-classing and Feats are fleshed out and no longer relegated to an “optional because we ran out of time” sidebar. Feats specifically are much closer to where they were in 3e—interesting choices to dial in your character. They split the difference on the choice you had to make in 5e between a stat boost or a feat: you still make that choice, but the stat boost now bumps up two stats, and every general feat includes a single stat boost.

The rules around skills vs tools make sense. At first glance, there don’t seem to be weird overlaps anymore. Tools were one of those undercooked features in 2014; they were kinda like skills, but not? When did you use a tool vs a plain skill check? How did you know what attribute bonus to use? Now, every attribute and skill has a broad description and examples of what you can use them for. Each tool has a full description, including the linked attribute, at least one action you can use it for, and at least one thing you can craft with it. And, each background comes with at least one tool proficiency. You don’t have to guess or make something up on the fly, or worse, remember what you made up last time. It’s not a huge change, but feels done.

Every class has four subclasses in the main book now, which cover a pretty wide spread of options, and sanity has prevailed and all subclasses start at level 3. (In a lot of ways, level 3 is clearly the first “real” level, with the first two as essentially the tutorial, which syncs well with the recommended progression, where you’ll hit 3rd level at the end of the second session.)

The subclasses are a mix of ones from the 2014 book, various expansions, and new material, but each has gotten a tune-up to focus on what the actual fantasy is. To use Monk as an example, the subclasses are “Hong Kong movie martial artist”, “ninja assassin”, “airbender”, and, basically, Jet Li from Kiss of the Dragon? The Fighter subclasses have a pretty clear sliding scale of “how complicated do you want to make this for yourself,” spanning “Basic Fighter”, “3rd Edition Fighter”, “Elf from Basic D&D”, and “Psionics Bullshit (Complimentary)”.

Weapons now have “Weapon Mastery Properties” that, if you have the right class power or feat, allow you to do additional actions or effects with certain weapons, which does a lot to distinguish A-track fighters from everyone else without just making their attack bonus higher.

The anemic Ideals/Flaws/Bonds thing from 2014 is gone, but in its place there’s a really neat set of tables with descriptive words for both high and low attributes and alignment that you can roll against to rough in a personality.

On the other hand, let’s talk about what’s not here. The last page of the book is not the OGL, and there’s no hint of what any future 3rd party licensing might be. The OGL kerfuffle may have put the 2014 SRD under a CC license, but there’s no indication that there will even be a 2024 SRD.

There’s basically nothing in the way of explicit roleplaying/social hooks; and nothing at all in the way of inter-party hooks. PbtA is a thing, you know? But more to the point, so was Vampire. So was Planescape. There’s a whole stack of 30-year old innovations that just aren’t here.

Similarly there’s no recognition of “the party” as a mechanical construct.

There’s nothing on safety tools or the like; there is a callout box about Session Zero, but not much else. I’m withholding judgement on that one, since it looks like there’s something on that front in the DMG.

There’s very little in the way of mechanics for things other than combat; although once again, D&D tends to treat that as a DMG concern.

The other best idea that 4e had was recognizing that “an encounter” was a mechanical construct, but didn’t always have to mean “a fight.” This wasn’t new there; to use a game I can see from where I’m sitting as an example, Feng Shui was organized around “scenes” in the early 90s. Once you admit an encounter is A Thing, you can just say “this works once an encounter” without having to put on a big show about short rests or whatever, when everyone knows what you mean.

Speaking for myself, as someone who DMs more than he plays, I can’t say as I noticed anything that would change the way I run. The ergonomics and presentation of the book, yes, more different and better player options, yes, but from the other side of the table, they’re pretty much the same game.

Dungeons & Dragons is in a strange spot in the conceptual space. It’s not an explicit generic system like GURPS or Cypher, but it wants to make the Heroic Fantasy tent big enough that it can support pretty much any paperback you find in the fantasy section of the used book store. There’s always been a core of fantasy that D&D was “pretty good at” that got steadily weedier the further you got from it. This incarnation seems to have done a decent job of widening out that center while keeping the weed growth to a minimum.

It seems safe to call this the best version of Dungeons & Dragons to date, and perfectly positioned to do the thing D&D is best at: bring new players into the hobby, get them excited, and then let them move on.

But, of course, it’s double volcano summer, so this is the second revised Fifth Edition this year, after Kobold’s Tales of the Valiant. Alert readers will note that both games made almost the exact same list of changes, but this is less “two asteroid movies” and more “these were the obvious things to go fix.” It’s fascinating how similar they both are; I was expecting to have a whole compare and contrast section here, but not so much! I’m not as tapped into “the scene” as I used to be, so I don’t know how common these ideas were out in the wild, but both books feel like the stable versions of two very similar sets of house rules. It kinda feels like there are going to be a lot of games running a hacked combo of the two.

(To scratch the compare-and-contrast itch: At first glance, I like the ToV Lineage-Heritage-Background set more than the D&D(2024) Species-Background pair, but the D&D(2024) weapon properties and feats look better than their ToV equivalents. Oh, to be 20 and unemployed again!)

The major difference is that ToV is trying to be a complete game, whereas the 2024 D&D still wants to treat the rest of the post-2014 product line as valid.

As of this writing, both games still have their respective DM books pending, which I suspect is where they’ll really diverge.

More than anything, this reminds me of that 2002-2003 period where people kept knocking out alternate versions of 3e (Arcana Unearthed, Conan, Spycraft, d20 Star Wars, etc, etc) capped off with 3.5. A whole explosion of takes on the same basic frame.

This feels like the point where I should make some kind of recommendation. Should you get it? That feels like one of those “no ethical consumption under capitalism” riddles. Maybe?

To put it mildly, it hasn’t been a bump-free decade for ol’ Hasbro; recently the D&D group has made a series of what we might politely call “unforced errors,” or if we were less polite “a disastrously mishandled situation or undertaking.”

Most of those didn’t look malevolent, but the sort of profound screwups you get when too many people in the room are middle-aged white guys with MBAs, and not enough literally anyone else. Credit where credit is due, and uncharacteristically for a publicly traded American corporation, they seemed to actually be humbled by some of these, and seemed to be making a genuine attempt to fix the systems that got them into a place where they published a book where they updated an existing race of space apes by giving them the exciting new backstory of “they’re escaped slaves!” Or blowing up the entire 3rd party licensing model for no obvious reason. Or sending the literal Pinkertons to someone’s house.

There seems to be an attempt to use the 2024 books to reset, and a genuine effort to get better at diversity and inclusion, to actually move forward. On the other hand, there’s still no sign of what’s going to happen next with the licensing situation.

And this is all slightly fatuous, because I clearly bought it, and money you spend while holding your nose is still legal tender. Your mileage may vary.

My honest answer is that if you’re only looking to get one new 5e-compatible PHB this year, I’d recommend you get Tales of the Valiant instead, they’re a small company and could use the sales. If you’re in the market for a second, pick this one up. If you’ve bought in to the 5e ecosystem, the new PHB is probably worth the cover price for the improved ergonomics alone.

Going all the way back to where we started, the last way that D&D is weird is that whether we play it or not, all of us who care about this hobby have a vested interest in Dungeons & Dragons doing well. As D&D goes, so goes the industry: if you’ll forgive a mixed metaphor, when D&D does well the rising tide lifts all boats, but when it does poorly D&D is the Fisher King looking out across a blasted landscape.

If nothing else, I want to live in a world where as many people’s jobs are “RPG” as possible.

D&D is healthier than it’s ever been, and that should give us all a sigh of relief. They didn’t burn the house down and start over, they tried to make a good game better. They’re trying to make it more welcoming, more open, trying to make a big tent bigger. Here in the ongoing Disaster of the Twenties, and as the omni-crisis of 2024 shrieks towards its uncertain conclusion, I’ll welcome anyone trying to make things better.

software forestry Gabriel L. Helman

Software Forestry 0x03: Overgrowth, Catalogued

Previously, we talked about What We Talk About When We Talk About Tech Debt, and that one of the things that makes that debt metaphor challenging is that it has expanded to encompass all manner of Overgrowth, not all of which fits that financial mental model.

From a Forestry perspective, not all the things that have been absorbed by “debt” are necessarily bad, and aren’t always avoidable. Taking a long-term, stewardship-focused view, there’s a bunch of stuff that’s more like emergent properties of a long-running project, as opposed to getting spendy with the credit card.

So, if not debt, what are we talking about when we talk about tech debt?

It’s easy to get over-excited about Lists of Things, but I got into computer science from the applied philosophy side, rather than the applied math side. I think there are maybe seven categories of “Overgrowth” that are different enough to make it useful to talk about them separately:

1. Actual Tech Debt.

Situations where you made an explicit decision to do something “not as good” in order to ship faster. There are two broad subcategories here: using a hacky or unsustainable design to move faster, and cutting scope to hit a date.

In fairness, the original Martin Fowler post just talks about “cruft” in a broad sense, but generally speaking “Formal” (orthodox?) tech debt assumes a conscious choice to accept that debt.

This is the category where the debt analogy works the best. “I can’t buy this now with cash on hand, but I can take on more credit.” (Of course, this also includes “wait, what’s a variable rate?”)

In my experience, this is the least common species of Overgrowth, and the most straightforwardly self correcting. All development processes have some kind of “things to do next” list or backlog, regardless of the formal name. When making that decision to take on the debt, you put an item on the todo list to pay it off.

That list of cut features becomes the nucleus of the plan for the next major version, or update, or DLC. Sometimes, the schedule did you a favor, you realize it was a bad idea, and that cut feature debt gets written off instead of paid off.

The more internal or infrastructure–type items become those items you talk about with the phrase “we gotta do something about…”; the logging system, the metrics observability, that validation system, adding internationalization. Sometimes this isn’t a big formal effort, just a recognition that the next piece of work in that area is going to take a couple extra days to tidy up the mess we left last time.

Fundamentally, paying this off is a scheduling and planning problem, not a technical one. You had to have some kind of an idea about what the work was to make the decision to defer the work, so you can use that same understanding to find it a spot on the schedule.

That makes this the only category where you can actually pay it off. There’s a bounded amount of work you can plan around. If the work keeps getting deferred, or rescheduled, or kicked down the road, you need to stop and ask yourself if this is actually debt or something aspirational that went septic on you.

2. We made the right decision, but then things happened.

Sometimes you make the right decisions, don’t choose to take on any debt, and then things happen and the world imposes work on you anyway.

The classic example: Third party libraries move forward, the new version isn’t cleanly backwards compatible, and the version you’re using suddenly has a critical security flaw. This isn’t tech debt, you didn’t take out a loan! This is more like tech property taxes.

This is also a planning problem, but trickier, because it’s on someone else’s schedule. Unlike the tech debt above, this isn’t something you can pay down once. Those libraries or frameworks are going to keep getting updated, and you need to find a way to stay on top of them without making it a huge effort every time.

Of course, if they stop getting updated you don’t have an ongoing scheduling problem anymore, but you have the next category…

3. It seemed like a good idea at the time.

Sometimes you just guess wrong, and the rest of the world zigs instead of zags. You do your research, weigh the pros and cons, build what you think is the right thing, and then it’s suddenly a few years later and your CEO is asking why your best-in-class, data-rich web UI console won’t load on his new iPad, and you have to tell him it’s because it was written in Flash.

You can’t always guess right, and sometimes you’re left with something unsupported and with no future. This is very common; there’s a whole lot of systems out there that went all-in on XML-RPC, or RMI, or GWT, or Angular 1, or Delphi, or ColdFusion, or something else that looked like it was going to be the future right up until it wasn’t.

Personally, I find this to be the most irritating. Like Han Solo would say, it’s not your fault! This was all fine, and then someone you never met makes a strategic decision, and now you have to decide how or if you’re going to replace the discontinued tech. It’s really easy to get into a “if it ain’t broke don’t fix it” headspace, right up until you grind to a halt because you can’t hire anyone who knows how to add a new screen to the app anymore. This is when you start using phrases like “modernization effort”.

4. We did the best we could but there are better options now.

There’s a lot more stuff available than there used to be, and so sometimes you roll onto a new project and discover a home-brew ORM, or a hand-rolled messaging queue, or a strange pattern, and you stop and realize that oh wait, this was written before “the thing I would use” existed. (My favorite example of this is when you find a class full of static final constants in an old Java codebase and realize this was from before Java 5 added enums.)
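
For the Java case, the before-and-after looks roughly like this; a minimal sketch with hypothetical names, just to show the shape of the thing:

```java
// The pre-Java-5 shape: a bag of ints pretending to be a type.
// Nothing stops a caller from passing a stray 3 where a status is expected.
class OrderStatusConstants {
    static final int STATUS_OPEN = 0;
    static final int STATUS_SHIPPED = 1;
    static final int STATUS_CANCELLED = 2;
}

// What it usually becomes once enums exist: type-safe and self-documenting,
// and the compiler catches the stray 3 for you.
enum OrderStatus {
    OPEN, SHIPPED, CANCELLED
}
```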

A lot of the time, the custom, hand-rolled thing isn’t necessarily “worse” than some more recent library or framework, but you have to have some serious conversations about where to spend your time; if something isn’t your core business and has become a commodity, it’s probably not worth pouring more effort into maintaining your custom version. Everyone wants to build the framework, but no one really wants to maintain the framework. Is our custom JSON serializer really still worth putting effort into?

Like the previous, it’s probably time to take a deep breath and talk about re-designing; but unlike the previous, the person who designed the current version is probably still on the payroll. This usually isn’t a technical problem so much as it is a grief management one.

5. We solved a different problem.

Things change. You built the right thing at the time, but now you’ve got new customers, shifted markets, increased scale; maybe the feds passed a law. The business requirements have changed. Yesterday, this was the right thing, and now it isn’t.

For example: Maybe you had a perfectly functional app to sell mp3 files to customers to download and play on their laptops, and now you have to retrofit that into a subscription-based music streaming platform for smartphones.

This is a good problem to have! But you still gotta find a way to re-landscape that forest.

6. Context Drift.

There’s a pithy line that Legacy Code is “code without tests,” but I think that’s only part of the problem. Legacy code is code without continuity of philosophy. Why was it built this way? There’s no one left who knows! A system gets built in a certain context, and as time passes that context changes, and the further away we get from the original context, the more overgrown and weedy the system appears to become. Tests—good tests—are one way to preserve context, but not the only way.

A whole lot of what’s called “cruft” is here, because it’s harder to read code than to write it. A lot of that “cruft” is congealed knowledge. That weird custom string utility that’s only used the one place? Sure, maybe someone didn’t understand the standard library, or maybe you don’t know about the weekend the client API started handing back malformed data and they wouldn’t fix it—and even worse, this still happens at unpredictable times.

This is both the easiest and least glamorous to treat, because the trick here is documentation. Don’t just document what the code does, document why the code does what it does, why it was built this way. A very small amount of effort while something is being planted goes a long way towards making sure the context is preserved. As Henry Jones Sr. says, you write it down so you don’t have to remember.
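
To make that concrete, here’s a sketch of the kind of comment that preserves context. The vendor quirk and helper name are hypothetical, but the shape is the point: the what is one line, the why is the part you can’t recover later.

```java
final class ClientPayloadSanitizer {

    /**
     * Strips trailing non-breaking spaces (U+00A0) before parsing.
     *
     * WHY: the upstream client API intermittently pads responses with
     * non-breaking spaces (hypothetical vendor quirk, marked won't-fix on
     * their side), and String.trim() only removes characters at or below
     * U+0020, so it leaves them in place. Hence this one-off helper instead
     * of the standard call.
     */
    static String stripTrailingPadding(String raw) {
        int end = raw.length();
        while (end > 0 && raw.charAt(end - 1) == '\u00A0') {
            end--;
        }
        return raw.substring(0, end);
    }
}
```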

To put all that another way: Documentation debt is still tech debt.

7. Not debt, just an old mistake.

The one no one likes to talk about. For whatever reason, someone didn’t do A-quality work. This isn’t necessarily because they were incompetent or careless; sometimes shit happens, you know? This is the flip side of the original Tech Debt category; it wasn’t on purpose, but sometimes people are in a hurry, or need to leave early, or just can’t think of anything better.

And so for whatever the reason, the doors aren’t straight, there’s a bunch of unpainted plywood, those stairs aren’t up to code. Weeds everywhere. You gotta spend some time re-cutting your trails through the forest.

🌲

As we said at the start, each of those types of Overgrowth has their own root causes, but also needs a different kind of forest management. Next week, we start talking about techniques to keep the Overgrowth under control.

Gabriel L. Helman

TV Rewatch: The Good Place

spoilers ahoy

We’ve been rewatching The Good Place. (Or rather, I’ve been rewatching it—I watched it on and off while it was on—everyone else around here is watching it for the first time.)

It is, of course, an absolute jewel. Probably the last great network comedy prior to the streaming/covid era. It’s a masterclass. In joke construction, in structure, in hiding jokes in set-dressing signs. It hits that sweet spot of being both genuinely funny while also having recognizable human emotions, which tends to be beyond the grasp of most network sitcoms.

It’s also a case study in why you hire people with experience; Kristen Bell and Ted Danson are just outstanding at the basic skill of “starring in a TV comedy”, but they’ve never been as good as they are here. Ted Danson especially is a revelation; he’s been on TV essentially my entire life, and he’s better than he’s ever been, in a way that feels like it’s because he finally has material good enough.

But on top of all that, it’s got a really interesting take on what being a “good person” means, and the implications thereof. It’s not just re-heated half-remembered psychology classes, this is a show made by people who have really thought about it. Philosophers get name-dropped, but in a way that indicates that the people writing the show have actually read the material and absorbed it, instead of just leaving a blank spot in the script that said TECH.

Continuing with that contrasting example, Star Trek: The Next Generation spent hours on hours talking about emotions and ethics and morality, but never had an actual take on the concept, beyond a sort of mealy-mouthed “emotions are probably good, unless they’re bad?” and never once managed to be as insightful as the average joke in TGP. It’s great.

I’m gonna put a horizontal line here and then do some medium spoilers, so if you never watched the show you should go do something about that instead of reading on.


...

The Good Place has maybe my all-time favorite piece of narrative sleight of hand. (Other than the season of Doctor Who that locked into place around the Tardis being all four parts of “something old, something new, something borrowed, something blue.”)

In the very first episode, a character tells something to another character—and by extension the audience. That thing is, in fact, a lie, but neither the character nor the audience have any reason to doubt it. The show then spends the rest of the first season absolutely screaming at the audience that this was a lie, all while trusting that the audience won’t believe their lying eyes and ignore the mounting evidence.

So, when the shoe finally drops, it manages to be both a) a total surprise, but also b) obviously true. I can’t think of another example of a show that so clearly gives the audience everything they need to know, but trusts them not to put the pieces together until the characters do.

And then, it came back for another season knowing that the audience was in on “the secret” and managed to both be a totally new show and the same show it always was at the same time. It’s a remarkable piece of work.

Gabriel L. Helman

Checking In On Space Glasses

It’s been a while since we checked in on Space Glasses here at the old ‘cano, and while we weren’t paying attention this happens: Meta Unveils 'Orion' Augmented Reality Glasses.

Most of the discussion—rightly—has focused on the fact that they’re a non-production internal prototype that reportedly costs 10 Gs a pop. And they’re a little, cough, “rubenesque”?

Alert readers will recall I spent several years working on Space Glasses professionally, trying to help make fetch happen as it were, and I looked at a lot of prototype or near-prototype space glasses. These Orion glasses were the first time I sat up and said “ooh, they might be on to something, there.”

My favorite piece about the Orions was Ben Thompson at Stratechery’s Interview with Meta CTO Andrew Bosworth About Orion and Reality Labs, which included this phenomenal sentence:

Orion makes every other VR or AR device I have tried feel like a mistake — including the Apple Vision Pro.

There are a lot of things that make Space Glasses hard, but the really properly hard part is the lenses. You need something optically clear and distortion-free enough to do regular tasks while looking through, while also being able to display high-enough resolution content to be readable without eyestrain, and do all of that while being spectacle lens–sized, staying in sync with each other, and fitting into something roughly glasses-sized, all ideally without shooting lasers at anyone’s eyeballs or having a weird prism sidecar.

The rest of it: chunky bodies, battery life, software, those aren’t “Easy”, but making those better is a known quantity; it doesn’t require super-science, it’s “just work.”

I’m personally inclined to believe that a Steve Jobs–esque “one more thing” surprise reveal is more valuable than a John Sculley–style fantasy movie about Knowledge Navigators, but if I’d solved the core problem with space glasses while my main competitor was mired in a swamp full of Playthings for the Alone, I’d probably flex too.

software forestry Gabriel L. Helman

Software Forestry 0x02: What We Talk About When We Talk About Tech Debt

Tell me if this sounds familiar: you start a new job, or roll onto a new project, or even have a job interview, and someone says in a slightly hushed tone, words to the effect of “Now, I don’t want to scare you, but we have a lot of tech debt.” Or maybe, “this system is all Legacy Code.” Usually followed by something like “but don’t worry! We’ve got a modernization effort!”

Everyone seems to be absolutely drowning in “tech debt”; hardly a day goes by where you don’t read another article about some system with some terrible problem that was caused by being out of date, deferred maintenance, “in debt.” We constantly joke about the fragile house-of-cards nature of basically everything. Everyone is hacking their way, pun absolutely intended, through overgrown forests.

There’s a lot to unpack from all that. Other engineering disciplines don’t beg to rebuild their bridges or chemical plants after a couple of years, but they also don’t need to; they build them to last. How does this happen? Why is it like this?

For starters, I think this is one of those places where our metaphors are leading us wrong.

I can’t now remember when I first heard the term Technical Debt. I think it was the early twenty-teens: the place I was working in the mid-aughts had a lot of tech debt, but I don’t ever remember anyone using that term; the place I was working in the early teens also had a lot, and we definitely called it that.

One of the things metaphors are for is to make it easier to talk to people with a different background—software developers and business folks, for example. We might use different jargon in our respective areas of expertise, but if we can find an area of shared understanding, we can use that to talk about the same thing. “Debt” seems like a kind of obvious middle-ground: basically everyone who participates in the modern economy has a basic, gut-level understanding of how debt works.

Except, do they have the same understanding?

Personally, I think “debt” is a terrible metaphor, bordering on catastrophic. Here’s why: it has very, very different moral dimensions depending on who’s talking about it.

To the math and engineering types who popularized the term, “debt” is obviously bad, bordering on immoral. They’re the kind of people who played enough D&D as kids to understand how probability works, probably don’t gamble, probably pay off their credit cards in full every month. Obviously we don’t want to run up debt! We need to pay that back down! Can’t let it build up! Cue that scene in Ghostbusters where Egon is talking about the interest on Ray’s two mortgages.

Meanwhile, when the business-background folks making the decisions about where to put their investments hear that they can rack up “debt” to get features faster, but can pay it off in their own time with no measurable penalty or interest, they make the obvious-to-them choice to rack up a hell of a lot of it. They debt-financed the company with real money, why not the software with metaphorical currency? “We can do it, but that’ll add tech debt” means something completely different to the technical and business staff.

Even worse, “debt” as a metaphor implies that it ends. In real life, you can actually pay the debt off: pay off the house, end the car payment, pay back the municipal bonds, keep your credit cards up to date, whatever. But keeping your systems “debt free” is a process, a way of working, not really something you can save up and pay off.

I’m not sure any single metaphor has done more damage to our industry’s ability to understand itself than “tech debt.”

Of course, the definition of “tech debt” has expanded to encompass everything about a software system that makes it hard to work on or that the developers don’t like. “Cruft” is the word Fowler uses. “Tech debt”, “legacy”, “lack of maintenance” all kind of swirl into a big mish-mash, meaning, roughly, “old code that’s hard to work on.” Which makes it even less useful as a metaphor, because it covers a lot of different kinds of challenges, each of which calls for different techniques to treat and prevent. In fairness, Fowler takes a swing at categorizing tech debt via the Technical Debt Quadrant, which isn’t terrible, but is a little too abstract to reflect the lived reality.

This is a place where our Forestry metaphor offers up an obvious alternative: Overgrowth. Which gets close to the heart of what the problem feels like: that we built a perfectly fine system, and now, after no action on our part, it’s not fine. Weeds. There’s that sense that it gets worse when you’re not looking.

There’s something very vexing about this. As Joel said: As if source code rusted. But somehow, that system that was just fine not that long ago is old and hard to work on now. We talk about maintenance, but the kind of maintenance a computer system needs is very different from a giant engine that needs to get oiled or it’ll break down.

I think a big part of the reason why it seems so endemic nowadays is that there was a whole lot of appetite to rewrite “everything for the web” in either Java or .Net around the turn of the century, at the same time a lot of other software got rebuilt mostly from scratch to support, depending on your field, Linux, Mac OS X, or post-NT Windows. There hasn’t been a similar “replant the forest” mood since, so by the teens everyone had a decade-old system with no external impetus to rebuild it. For a lot of fields, this was the first point where we had to think in terms of long term maintenance instead of the OS vendor forcing a rebuild. (We all became mainframe programmers, metaphorically speaking.) And so, even though the Fowler article dates to ’03, and the term is older than that, “Tech Debt” became a twenty-teens concern. Construction stopped being the main concern, replaced with care and feeding.

Software Forests need a different kind of tending than the old rewrite-update-rewrite-again loop. As Foresters, we know the codebases we work on were here before us, and will continue on after us. There’s the occasional greenfield side project, the occasional big new feature, but mostly our job is to keep the forest healthy and hand it along to the next Forester. It takes a different, longer-term, continuous world view than counting down the number of car payments left.

Of course, there’s more than one way a forest can get out of hand. Next Time: Types of Overgrowth, catalogued.

Gabriel L. Helman

is it still called a subtweet if you use your blog to do it

There’s that line, incorrectly attributed to Werner Herzog, that goes “Dear America: You are waking up, as Germany once did, to the awareness that 1/3 of your people would kill another 1/3 while 1/3 watches.” (It was actually the former @WernerTwertzog.)

But sure, false attribution aside, I grew up hearing stories about that. The reason my family lives on this side of the planet is strongly adjacent to that. Disappointing, but not surprising, when that energy rears back up.

What does keep surprising me over the last few years is that I never expected that last third to be so smug about it.

Gabriel L. Helman

The next Dr Who Blu Ray release is… Blake’s 7?

It turns out the next Doctor Who blu-ray release is… the first season of Blake’s 7? Wait, what? Holy Smokes!

I describe Blake’s 7 as “the other, other, other British science fiction show”, implicitly after Doctor Who, The Hitchhiker’s Guide to the Galaxy, and Red Dwarf. Unlike those other three, Blake didn’t get widespread PBS airings in the US (I’m not sure my local PBS channel ever showed it, and it ran everything).

Which is a shame, because it deserves to be better known. The elevator pitch is essentially “The Magnificent Seven/Seven Samurai in space”: a group of convicts, desperadoes, and revolutionaries lead a revolt against the totalitarian Earth Federation. In a move that could only be done in the mid-70s, the “evil Federation” is blatantly the Federation from Star Trek, rotted out and gone fascist, following a long line of British SF about fascism happening “here.”

It was made almost entirely by people who had previously worked on Doctor Who, and it shows; while there was never a formal crossover, the entire show feels like a 70s Who episode where the TARDIS just never lands and things keep getting worse. My other joke though, is that whereas Doctor Who’s budget was whatever change they could find in the BBC lobby couch cushions, Blake’s budget was whatever Doctor Who didn’t use. It’s almost hypnotically low budget, with some episodes so cheap that they seem more like avant garde theatre than they do a TV show whose reach is exceeding its grasp.

On the other hand, it’s got some of the best writing of all time, great characters, great acting. It revels in shades of gray and moral ambiguity decades before that came into vogue. And without spoiling anything, it has one of the all-time great last episodes of any show. It’s really fun. It’s a show I always want to recommend, but I’m not sure it ever got a real home video release in North America.

So a full, plugs-out release is long overdue. The same team that does the outstanding Doctor Who blu-ray sets is doing this: same level of restoration, same kind of special features. Apparently, they’re doing “updated special effects”, except some of the original effects team came out of retirement and they’re shooting new model work? Incredible. The real shame is that so many of the people behind the show have since passed: both main writers, several of the actors, including the one who played the best character. Hopefully there’s some archive material to fill in the gaps.

Blake ran for four years; presumably the Doctor Who releases will stay at two a year, with Blake getting that third slot.
