Gabriel L. Helman

Stray Notes on Unsettled Times

About 20 years ago we all spent a lot of time talking about The Singularity. Remember, the thing where we were all going to upload our minds into a computer in just a few years and live forever? Oh, but also the AIs were going to become supersmart and take over?

The whole thing was deeply silly, but for some reason we spent several years with all the Serious People pretending it was just around the corner, despite the fact that it was blatantly just “the rapture,” but with computer words.

That whole discourse was always a weird jumbled mixture of the plot of Terminator, Christian Apocalyptic theology, and unexamined anxiety about capitalism; in the parlance of today, “A man will invent the singularity instead of going to therapy.”

(And I note that in today’s LLM/AGI discourse, we kept the bit about the robots being about to take over, but somehow lost the part where we all get to go to cyber-heaven. Huh.)

The bit that stuck with me from that era was the concept of “a” singularity, in the broader sense: a historical moment where there’s so much change, so fast, for whatever technological or historical or other reason, that it’s impossible to see beyond it; the future is clouded until you get past the inflection point.

“The Singularity is coming!” they kept saying.

Well, it came all right. Just not the one they were rooting for.

🧊🌋

At about the same time, I was living next door to a couple who had left New Orleans after Katrina. I was never entirely clear how they ended up in my corner of Northern California, which they strongly disliked in a way that I, as someone who grew up in and actually does like California, was extremely sympathetic to.

This was the era where we started using the phrase “grim meathook future,” but weren’t yet sure how ironic we were being. I remember someone in the broader post-cyberpunk author world—knowing who I was reading a lot of at the time, it was probably either Bruce Sterling or Warren Ellis—said something like “maybe that’s just how it is now, every couple of years a once in a thousand year weather event will show up and wreck a major city.” The sort of comment where your initial reaction was to think “that’s a little pessimistic, gosh” before realizing that no, that was obviously true.

Maybe that’s just how it is now.

🧊🌋

Like most people in my age group, I had grandparents who were all involved with the WWII war effort in one way or another. The War came up a lot, as you might imagine, mostly as this crazy shared experience they all had.

One time, us kids were asking my grandmother questions about something related to the whole effort, why something had been the way it was.

“You have to remember,” she said, “we didn’t know we were going to win.”

That’s not a huge insight, but I was young enough that it was the first time I’d really engaged with the idea. As far as school was concerned, that was the war where America Saved Everybody, the idea that the people involved didn’t know the end of the story yet hadn’t ever occurred to me.

Obviously, that stuck with me, but what really stuck with me was the look on her face; a woman in her 70s remembering how scary her early 20s were.

🧊🌋

Looking forward to telling our grandkids, “you have to remember, we didn’t know how this was all going to turn out.”

Gabriel L. Helman

Video Game Replay: Portal/Portal 2

Spoilers Ahoy

No seriously, I’m about to spoil two of the best games of the last 20 years, and if you somehow still haven’t played them, bookmark this post and head over to Steam right now, trust me.

I’m serious, go play them.

My kids had never played either of the Portal games, so on a whim a couple of weeks ago we fired them up on the SteamDeck and played through them as a team. (Technical sidebar: the PS5 controller makes an excellent bluetooth controller for the SteamDeck when it’s connected to a TV, and really easy to set up! Ironically, a million times easier than trying to use my old Steam Controller.) I played them both when they came out, but hadn’t since.

Portal is a perfectly crafted jewel of a game. The gameplay is perfect, the puzzles are interesting, the design and look of the game perfectly matched with what the game engine can do.

It’s also got maybe my all-time favorite piece of narrative sleight-of-hand I’ve ever seen in a video game.

Recall that the frame for the game is that you’re Chell, a “test subject” for Aperture Science Labs, testing out their “Portal Gun.” Structurally, you move through a series of levels, each of which is a confined space where you need to use the gun in increasingly complex ways to make portals to get from the entrance door to the exit. The portals themselves are person-sized wormholes or connections that you can drop onto most flat surfaces, connecting disparate areas of the geography. But also, objects—including yourself—keep their momentum as they pass through the portals, so not only can you use them to navigate around obstacles but also to build a variety of slingshots, catapults, and launchers. You redirect lasers, confuse turrets, bounce objects. Critically, you also don’t have another kind of gun, just the portal one, so puzzles that in a “regular” first person shooter would be solved via firepower here have to be solved by variable cartography.

The puzzles are from the “duplicate, then elaborate” school of design: each one adds some new twist or obstacle or complication that you have to combine with what you learned last time.

The only other character is the robot voice that’s giving you instructions—that’s GLaDOS, voiced by the staggeringly good Ellen McLain, who seems to be running the show. She’s a computer mastermind in the HAL/SHODAN sense, but a little ruder, a little funnier.

Each test chamber has an opening graphic or placard, giving the chamber number, counting up to 19. The opening sign also has a series of icons indicating which obstacles this room has, with the array lighting up more and more as you move through the game.

The visual design of the game also perfectly matched what the upgraded Half-Life 2 engine it was using could do. The test chambers were mostly white high-tech spaces, sort of 2001 crossed with the Apple store, with the occasional moving panel or window. Big doors slide open to reveal pneumatic tube–like elevators between levels. Metallic panels indicate walls that can’t have portals opened on them, as opposed to the normal glowing white walls. Most of all, the visual design was very clear and focused. Considering the strange geometries you could create with the portals, this was critical to making the puzzles solvable: you could always get your bearings and get an eye-line to where the exit door was, regardless of whether you could see how to get there yet.

This is where I pause and remind everyone that Portal wasn’t released on its own. It was the “other, other” new game in the Orange Box collection, bundled with Half-Life 2: Episode Two and Team Fortress 2. Portal was clearly the one they had the least commercial expectations for; Team Fortress got all the ads and early chatter, Episode Two was exciting because it was moving the Half-Life story forward, and Portal had the quality of being the bonus track on the album, the fun tech demo.

And so there was no reason to believe that Portal was anything other than it presented itself as: 19 puzzles with this cool portal tech, which would presumably show up in Half-Life 3 as part of a “real game.”

If you paid attention, though, there were some indications that things weren’t quite right. Every test chamber had at least one observation window looking down into it, and while you could see chairs and computers, you never saw a person moving around on the other side of the translucent glass. GLaDOS wasn’t ever openly malevolent, but sometimes seemed a little off. And there were a few places where you could slip “backstage” of a test chamber, and find strange graffiti and other abandoned debris. There was nothing you could do to interact with it, though? GLaDOS never mentions it? Just a fun little easter egg, I guess, like the G-Man peeking through windows at you at the start of the first Half-Life. A little strange though, for a glorified tech demo?

So then, when you get to Test Chamber 19 and, instead of the game ending, GLaDOS tries to dump you into the incinerator, you get to have the absolutely breathtaking realization that no, you fell for it, you didn’t just beat the game, you beat the tutorial.

The rest of the game is making your way through the infrastructure of the testing facility towards GLaDOS, using all the portal tricks the game carefully taught you earlier. You find out that, hey, the reason you never saw anyone behind those windows was because GLaDOS killed them all, and now instead of a fun tech demo puzzle game you’re in a 1:1 duel to the death with an evil computer. It’s great! Then there’s a song at the end!

Part of what makes it so great is the length: it’s not short short, but it knows how not to wear out its welcome. Replaying it, I think we beat it in three after-school nights, neither rushing nor going terribly slowly. Perfectly paced, satisfying without being overlong, trim without leaving you feeling cheated.

It did, however, leave everyone wanting more.

It was, and I’m marking it down here, a huge success. Portal ripped through the circa 2008 nerd culture like few things I’ve ever seen before or since. It quickly flipped from “the bonus track” to “really, there’s no way to get this without that dumb-looking Team Fortress?” The cake memes were everywhere. Making a sequel was an absolute no-brainer.

They announced Portal 2 in 2010, and it was released the next year. Unlike the first game, this was a full triple-A standalone release. In a world where it had already become clear that Half-Life 3 was never going to happen, this was Valve’s Next Big Thing. Structurally, Portal wasn’t a lot like Valve’s other work; Portal 2, on the other hand, was absolutely A Valve Game™.

This is where I pause and admit that my opinion most out-of-step with the video game–playing mainstream is that I do not, personally, care for either of the Half-Life games. This is not a contrarian hot take; I’m not about to try to convince you that they’re Bad Actually, I understand why they are as popular and beloved as they are, and I am aware of all the ways they were incredibly innovative and influential.

I feel the same way about the Half-Lifes that I do about Cola: I acknowledge that it’s very popular, don’t have anything against it, but it is not my preferred flavor. I guess, in this strained metaphor, the original Deus Ex is Mountain Dew?

Because this is going to be relevant in a moment, let me attempt to sketch for you what I don’t like about them. I’ve thought about this a lot, because it’s very strange to beat a game, think to yourself “well, that was okay I guess, but not that great” and then have everyone you know declare it to be the greatest game of all time, and then have that happen even more so with the sequel. You gotta stop and make sure you’re not the idiot, you know?

Valve shooters tend to be extremely linear games where you make your way through an environment, alternating segments of “traversal” where you have to find the one way forward, and “encounters” which are either an in-engine cutscene, a shootout, or more rarely, a puzzle to get past. They very much like to imply a larger, more complex environment out and around you, but all the doors are locked and impassable except the one door or vent you can go through. It’s all stage scenery, basically. And while it’s cool that the cutscenes don’t take your control away, it sometimes feels like you’re watching the game get played for you. In my less charitable moods, I describe the Half-Lifes as “slowly walking down an elaborately decorated single hallway.”

And the obvious follow-up question here is, well buddy, even just limiting ourselves to first person shooters from the turn of the century, that also pretty much describes Max Payne, which you loved, so what gives? Broadly, I think it’s two things. First, those fake environments. I prefer sprawling non-linear environments in games, but I don’t mind something more linear. What drove me crazy about Half-Life 2 especially was you’d get these vast city-scapes, and then only a tiny little alleyway was available to you. Vice City had already been out for two years! Deus Ex did all kinds of things with open spaces on limited computers! Max Payne didn’t irritate me as much because you spent all your time in naturally-enclosed areas; abandoned subways, empty office buildings, and the like. I spent a lot of time wishing City 17 was more like Hong Kong in Deus Ex and less like the Black Mesa facility.

But mostly what I didn’t like was I thought most of the actual shooting was pretty boring. I like games that structure “encounters” more like puzzles—this is why I prefer turn-based tactical fights in RPGs, why I like X-COM more than Diablo, and so on. One of the things I loved so much about Max Payne was that, between the fact you really could take cover and the bullet time mechanic, each shootout functioned as a puzzle—how do I get through this without being hit? More than once I’d get through a fight, and then reload, muttering “I can do better.”

The parts of Half-Life 2 I really liked—the sawblades vs zombies village, that big physics puzzle with the crane—were encounters that functioned more like puzzles. It wasn’t just “keep an eye on your ammo remaining and watch the floor for those crab things.”

I disliked the way Half-Life 2 would get you to the next set-piece, and then say “okay, this is a gravity gun puzzle” or “nope, this is just shooting,” or “yeah, this is a laser-guided missile puzzle.” There were very very few opportunities to mix and match, or find your own solution to anything.

This sounds like snark but isn’t: my favorite part of Half-Life 2 was the final level where you have to use the gravity gun to bounce those energy spheres around and disintegrate things. That was something new, and didn’t play like anything else. I wish the whole game had been like that.

I bring all this up because Portal 2 has this exact structure, and I loved it.

Portal 2 opens with the swagger of a game being made by people who know they’re making a hit. Portal sometimes has a slightly hesitant quality to it, beyond just being the “bonus game,” in that you can tell the developers aren’t quite sure if the audience is going to buy what they’re selling. Portal 2, on the other hand, is clearly made by people who know the audience loved what they did last time. It has a really solid take on the things that worked in the first game and leans into them. Among other things, that means more humor and more atmospherics. It also knows it has more space, so it settles in, puts its feet up, and gets comfortable.

Valve hadn’t been known for funny games, and while Portal was funny, that humor tended to be subtle and deadpan. But the jokes were everyone’s favorite part, so Portal 2 comes out of the gate making it clear that this is a comedy: a terribly dark comedy, but a comedy.

It opens with a fairly bravura set-piece, where you start in what looks like a 1950s hotel room, do a couple of tutorial moves to learn the controls, go to sleep, and then wake up terribly far in the future. The room is ruined and overgrown, and things have clearly gone wrong. The first new character of the game, Wheatley, quickly arrives to finish your tutorial. He’s a spherical robot driving around on a track on the ceiling, and he’s played by Stephen Merchant, who at the time was mostly known for the UK version of The Office. The opening turns into something of a technical flex as Wheatley starts driving your hotel room around on a larger set of tracks, crashing into things, disintegrating the walls, as you have to move around and avoid being thrown out. As the walls fall apart, you get glimpses of that same backstage infrastructure from the first game—you’re still in the same Aperture Science facility, just in a new part. On paper, this is a classic Valve “live action cutscene”, a lot like the opening train rides of both Half-Lifes, but the key difference for me was that it was very funny. The slapstick of the room crashing into things, Wheatley’s stuttered apologies, great stuff.

You’re once again playing Chell, a silent protagonist in the style of Half-Life’s Gordon Freeman. Unlike Half-Life, which dances around why Freeman never says anything, here it’s lampshaded directly; Wheatley thinks you have brain damage, GLaDOS later refers to you as a “mute lunatic,” and the writer, Erik Wolpaw, has said several times that she just refuses to give anyone the satisfaction of a response.

The utilitarian, 2001-esque test chambers of Portal were very spooky in their own subtle way, and then the backstage areas even more so. Portal 2 knows not to try to recreate either of those, but keeps finding new ways to riff on the same basic environmental grammar.

You quickly find yourself back in the facility from the first game, but long-abandoned and gone to ruin. The first few levels are the same intro test chambers from the first game, but now overgrown and abandoned. It’s an inspired way to reacclimatize returning players to the game while also onboarding new ones, while still making it clear that this game is going to be different, and very spooky.

But, like the first game, Portal 2 knows not to overstay its welcome with any particular batch of ideas. The game passes through, roughly, five acts. After the opening act in the ruined facility, you accidentally wake GLaDOS up; she retakes control and decides to get back to work.

This second act is the one most like the first game, with GLaDOS running you through new test chambers. The facility itself becomes much more of a character, with the chambers “waking up”, walls reorganizing themselves, the various panels shaking off years of debris before re-assuming their test configurations, becoming less ruined and more like they were before.

The best example of the second game’s swagger is the way it uses GLaDOS herself. While she was used sparingly before, here they know she’s the best part of the game, and make sure to use her to the fullest. Her voice is less artificial, and she has more things to say, and they’re funnier.

My favorite example of this is that as her frustration mounts, we end up with an extended series of jokes where rather than questioning your skills or value, she just starts calling you fat in increasingly bitchy ways. GLaDOS is far more human in this game, to the character’s immense benefit; there’s a sense that her behavior in the first game is her “professional demeanor”, and in the second game she’s gotten tired and frustrated enough that the “real her” is spilling out.

While this is going on, most levels have a spot where Wheatley peeks through a half-opened panel or around a corner. A carefully-designed set of blink-or-you’ll-miss-it encounters that make sure you never blink. Eventually he stages a rescue, and the third act is once again backstage of the testing facility, making your way towards GLaDOS. Similar in design to the backstage second half of the first game, the facility here comes across as larger and more menacing, with more things going on than just your strange tests. Views recede into a blue haze past the industrial structures; where is all this, exactly?

The closest the game comes to replicating the first game’s surprise twist is at the fight with GLaDOS—it looks like so far we’ve mostly been re-staging the plot of the first game with better graphics and funnier writing, but then Wheatley takes over, goes all megalomaniacal, straps GLaDOS to a potato battery, and throws the pair of you down a long shaft.

The best, and most famous part of the game is the fourth act, set in the abandoned 50s, 70s, and 80s–era testing facilities. Turns out the whole facility was built inside an abandoned salt mine, working from the bottom up, and everything you’ve seen so far was just the very top layer.

This is where we meet the last new character—Cave Johnson, played by J.K. Simmons in full “bring me pictures of Spider-Man” mode, the founder and now deceased CEO of Aperture Science, via his leftover recordings. Johnson’s rants, and GLaDOS’s snark in return from her position as a potato perched on your gun, make for the game’s best writing.

This is where the game most settles into its Half-Life 2–style structure: you alternate between navigating your way up to the next level through the abandoned structures, then solve a test chamber or two designed with an appropriately retro style of tech, and then go back to traversal. Like the first game, it does a remarkable job of teaching you some new portal tricks with the test chambers, and then letting you loose to use them as you try and move around between those test chambers.

It’s worth noting how much exposition they cram into the jokes Cave Johnson and GLaDOS make at each other—most specifically how much time they spend talking about moon dust, which seems like just another wacky detail until you find out why, and realize they’ve been giving you the solution to a puzzle the whole time.

Finally, you make it back up to the “modern day” facility, where things have gone horribly wrong with Wheatley in charge. It’s a remarkable piece of design work that, using the same basic pieces, the freshly re-ruined facility manages to be the most menacing yet. It’s positively apocalyptic, with tangled up rooms and looming fires on the horizon as you try to keep the whole place from being destroyed and solve Wheatley’s terrible puzzles.

The key difference structurally between the two games is that the second knows it can’t recreate the Big Surprise of the first, so it doesn’t try. Instead, the second game is built around anticipation, each act has an end goal that gets declared at the start and that you spend the whole time working towards: escape the facility, escape GLaDOS, climb back out, defeat Wheatley. While this keeps the game moving forward, it does tend to blunt the puzzles a little; unlike the first game there’s a tendency to try and rush through them so you can see what happens next.

That’s part of how Half-Life 2’s structure worked too: you’d get a goal, then fight your way through whatever it was to get where the goal needed you to be.

Which brings me back around to: why did I like Portal 2 so much more than the Half-Lifes? For starters, I like the humor a lot more than the post-apocalyptic melodrama. Mainly, though, it’s the puzzles. While I found the shooting encounters frequently boring, the portal puzzles never were, and kept building on themselves in fun and interesting ways. There was never an “oh this again” moment; there was always some new twist or “yes and”. And whereas the linear and confined nature of the Half-Lifes felt limiting, here it made the puzzles feel even possible. Knowing there’s one way through keeps the tangled wreckage at the bottom of the test shaft from feeling overwhelming. You’re not going to get lost, you’re not going to chase the wrong path; let’s just look around for the one place you can shoot a portal and keep moving.

As an aside on that point: there’s a regular Discourse that pops up with video games around how much player affordance is too much, every 9–18 months someone would get mad about yellow paint on ladders back on the old twitter. Portal 2 does a really elegant job of this by using light; most of the facilities are very dark, especially the older ones, and the few spotlights that are there will just casually play across the area where you need to shoot a portal. It’s a slick way to draw the eye without making it insultingly obvious. (There are a few places where you’d have a collapsed bridge but then the fallen wreckage would just happen to form a perfect walkway over to where you need to be, which gets a little eye-rolling.)

Both Portal games are a masterclass in this, in game design that subtly wiggles its eyebrows at the right answer and then lets you think you solved it all on your own.

Narratively, the game has a pretty conclusive end; there’s room for more but no real un-pulled threads. From a design perspective, this also felt like the definitive statement on these mechanics. Half-Life 3 has become a vaporware meme because there’s still so much plot and mechanics you could build on top of those games, but conversely no one really clamors for a Portal 3, because it doesn’t need one. Any new game with those portal mechanics would need to do something new, something different, and whatever that might be, it wouldn’t be Portal. The Portal/Portal 2 diptych might be the only perfect 1-2 punch in all of video games, and there’s no reason to make more. Outstanding work, just as fun over a decade later as they were when they were new. I’d say something like “they don’t make ‘em like that anymore,” but no, they never made them like that at any time, except those two.


I will just throw this out here though: I’d pay real money for a game just called “Three” that let you play as Gordon Freeman, Chell, and Alyx simultaneously, swapping between them to solve portal/gravity/bullet gun puzzles as you team up with GLaDOS to defeat those aliens.

Gabriel L. Helman

TTRPGs I’m Currently Playing: Cypher System + It’s Only Magic

It can’t have escaped notice that I’ve written something like fourteen thousand words on “new kinds of D&D” on the ‘cano so far this year, and all of those pieces ended with a kind of “well, not really what I’m playing these days but seems neat!” Which brings up the obvious follow-up question: what am I playing these days? Well…

Something that I think is really under-theorized in TTRPGs is GM playstyles. Every decent RPG these days has a list of player archetypes: the actor, the puzzle-solver, the rules lawyer, etc., but very rarely do you see GM style addressed in anything more detailed than a reminder that it’s not a competition and you need to support your players.

I think a big part of the reason for that is that GM Style ends up being closely linked to the design of the particular game itself. Most games—and I realize the word “most” is a load-bearing word in this sentence—support multiple player styles, but generally have a much narrower list of “right” ways to run them.

The result of that is that most people who run games, especially those of us who've run multiple systems, will find one and glom on—“this is the game I’m running from here on out.” We can’t always articulate why, but you’ll settle into a ruleset and realize how much easier and more fun it is to run, and I think that’s because it’s a game where the designer runs games the same way you do.

I’ve said before that 5th edition D&D is the first version of that game that I didn’t feel like was fighting me to run it the way I wanted to. I genuinely loved the whole 3.x family, and that’s probably the ruleset I have the most hours with at this point, but at least once a session I would say both “bleah, I don’t remember how that works,” and “man, I don’t care. Just roll something and we can move on.”

A big part of that is I like to run games in a more “improvisational” style than D&D usually assumes—and just to be crystal clear, I’m using “improv” in the formal, technical sense as a specific technique like with Improv Comedy, not as a synonym for “ad lib” or “just making things up.”

And it’s not that you can’t Improv D&D, it’s just that for any given mechanical encounter you need to know a lot of numbers, and so the game tends to screech to a halt as you flip through the Monster Manual looking for something close enough to run with.

(My go-to guidelines were: when in doubt, the DC was 13, and the players could always have a +2 circumstance bonus if they asked.)

So with that as prologue, let me tell you about my favorite tabletop RPG out there: Monte Cook’s Cypher System.

Like a lot of people, Cook was somebody whose name I first learned due to his being one of the three core designers of 3rd Edition D&D, along with Jonathan Tweet and Skip Williams. Tweet, of course, was the big name rockstar developer, having done both Ars Magica and Over the Edge, and was supposedly the guy who came up with most of the d20 system’s core mechanics.

Cook, though, was one of those people I realized I already knew who he was despite not knowing his name—he was one of “the Planescape Guys,” and was the one who wrote the modules that brought Orcus back.

After 3.0 came out, Cook did a bunch of weird projects like the criminally underrated Ghostwalk, and got hit in one of the early waves of layoffs. He started his own indie company, and ended up as one of the first people to explore selling PDFs on their own as a business model. (Which sounds absolutely ancient now.)

I thought his indie stuff was some of the best, if not the best, third party 3e D&D material out there. But even more so, I found his stuff incredibly easy to use and run. This was a guy who clearly ran games the way I did. By contrast, my reaction to Tweet’s stuff, which I respected and admired tremendously, was to stare at it and think “but what do I do, though?”

Cook also had a blog—I think on LiveJournal, to really emphasize the 2004 of it all—which had a huge influence on how I ran games, mostly because I’d get halfway through a post and already be shouting “of course!”

He also did a mostly-forgotten game published variously as Arcana Unearthed and Arcana Evolved that I thought was the best version of 3rd edition; it was the game 3.0 wanted to be without all the D&D historical baggage. One of the many neat things it had—and this is foreshadowing—was a much cleaner & more comprehensive system for crafting magic items, including a very cool way to make single-use items. Want to store a bunch of single-use Fireball spells in marbles and distribute them to your fellow party members? You can do that.

Flash forward a decade. Just before 5e came out, Cook released his big magnum opus game, Numenera. I bounced off the setting pretty hard, but the rules, those I really liked.

Imagine the initial 3.0 version of D&D, and strip it down until all you have left are Feats and the d20. The core mechanic is this: everything has a difficulty from 1 to 10. The target number is the difficulty times 3. Meet or beat on a roll to accomplish the task.

And here’s the thing: that’s the only way tasks work. All you need to do to make something work in game is give it a difficulty score. Going hand in hand with this is that only the PCs roll. So, for example, monsters use the same difficulty score both for what the PCs need to roll to hit them and for what the PCs need to roll to avoid being hit by them. Occasionally, something will have one ability at a different level than the default (a difficulty 3 monster with stealth at level 6, for example). It’s incredibly easy to improv on this when you really only need one number, and you can focus on the big picture without having to roll the dice and do math yourself on the fly.

It's funny—on 3rd Edition/D20 Jonathan Tweet always got the credit for the clean and simple parts of the game ("Um, how about if Armor Class just went up?") and Cook got the credit for all the really crunchy rules & wizards stuff. Which made sense, since Tweet had just done Over the Edge, and Cook had just spent years working for ICE on Rolemaster. So, building his own system from scratch, Cook ends up with something from the "bare minimum number of rules to make this playable" school, whereas Tweet’s 13th Age went completely the other direction.

Alert mathematicians will have noted that difficulty levels higher than 6 are impossible to hit on a bare d20 roll, the target number being above 20. Rather than modifiers to the roll, you use things to increase or decrease the difficulty level. (When the game came out, I cracked that Cook had clearly won a bet by making a game where the only mechanic was THAC0.)

Most of where the PCs’ options come from are their Abilities, which are effectively 3e D&D feats. They’re something a PC can do: a power, a bonus to some kind of task, a spell, a special attack.

Players can also have skills, in which they are either trained or specialized, which decrease the difficulty by one or two steps respectively. A player can use up to two “Assets” to decrease the difficulty by up to another two steps, and they’re delightfully abstracted. An Asset can be anything: a crowbar, an NPC assisting, a magic gauntlet, a piece of advice you got last session about where the weak point was. They’re as much an improv prompt for the players as they are a mechanic. If you can decrease the difficulty down to zero, it’s an automatic success, and you don’t have to roll.
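To make the dice math concrete, here’s a minimal sketch of that resolution loop in Python. The function names and the simple training/asset model are my own illustration, not anything from the book, but the arithmetic follows the rules described above: the target number is the difficulty times 3, training or specialization eases the task by one or two steps, up to two Assets ease it further, and a task eased to difficulty zero succeeds automatically with no roll.

    import random

    def eased_difficulty(base, trained=False, specialized=False, assets=0):
        # Training eases one step, specialization two; at most two Assets count.
        steps = (2 if specialized else 1 if trained else 0) + min(assets, 2)
        return max(base - steps, 0)

    def attempt(base_difficulty, trained=False, specialized=False, assets=0):
        # Cypher-style task resolution: target = difficulty * 3, meet or beat on a d20.
        difficulty = eased_difficulty(base_difficulty, trained, specialized, assets)
        if difficulty == 0:
            return True  # eased to zero: automatic success, no roll needed
        return random.randint(1, 20) >= difficulty * 3

    # A difficulty 4 climb (target 12), with training and a rope as an Asset,
    # becomes difficulty 2 (target 6).
    print(attempt(4, trained=True, assets=1))

The nice part, as a GM, is that improvising anything only ever requires picking that one base difficulty number.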

Which brings me to my two favorite features of the mechanics.

First, the PCs have three Stats—Might, Speed, Intellect—but rather than scores, they’re pools. Your skills & abilities & assets represent your character’s baseline normal everyday capabilities. Your Stat Pools represent how much extra “oomph” you can deploy under pressure. So if you’re trying to Bend Bars & Lift Gates, and having a friend help with a crowbar didn’t get the job done, you can spend some Might points and really get that portcullis open.

Your pools also act as your hit points—physical damage drains your Might pool, psionic attacks drain your Intellect. Special powers or spells also spend pool points to activate.

“I have to spend hit points to kick the door open?” is a reaction most everyone has to this at first glance, but that’s the wrong approach. Your pools are basically a representation of how much “spotlight” time your character can have during an encounter, how much cool stuff they can do before they have to sit down and rest.

Because also, getting your points back is incredibly easy; there’s really no reason to ever enter an encounter—combat, social, or otherwise—without a full tank.

This works for all tasks, not just the punchy combat ones. So you get these great moments where someone will be trying to bluff their way past the border patrol and decide they’re going to be charming as hell as they empty out their Intellect pool, or yell that they’re going to bullet time as they dump their speed pool on a dodge check.

Which brings me to my single favorite RPG mechanic of all time: something called “The GM Intrusion.” At any point, the GM has the option to throw a wrinkle in and call for a roll anyway, usually when the party has cleverly knocked a difficulty down to nothing.

The examples in the book are things like a PC trying to climb a cliff with a specialized rockclimbing skill and a rope harness making the climb check zero, and then the GM says “well actually, it was raining earlier, so I’m gonna need a roll.”

But, the kicker is that the GM has to pay the PC for it. The GM offers up an XP for the Intrusion, and the player has the option to accept, or to spend one of their XPs to reject it. Actually, the GM has to offer up 2 XPs, one of which the player being intruded on has to immediately give to another player, which also does a really neat job of democratizing XP rewards.

Cypher is one of those games where “1 XP” is a significant item: players generally get 2–4 a session, and upgrades cost 3 or 4 depending on what you want.
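Since the accounting trips people up the first time, here’s a tiny sketch of the XP flow in the same vein (the function and player names are purely illustrative): accepting an Intrusion nets the intruded-on player 1 XP and hands a second XP to another player, while refusing costs them 1 XP of their own.

    def gm_intrusion(xp, target, other, accepts):
        # xp: dict of player name -> current XP total
        if accepts:
            # The GM offers 2 XP: the target keeps one and immediately gives one away.
            xp[target] += 1
            xp[other] += 1
        else:
            # Refusing the Intrusion costs the target 1 XP of their own.
            xp[target] -= 1
        return accepts  # True means the complication happens after all

    xp = {"Ana": 3, "Ben": 2}
    gm_intrusion(xp, target="Ana", other="Ben", accepts=True)
    print(xp)  # {'Ana': 4, 'Ben': 3}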

The place where this really works is if you use cards to represent those XPs. (They have a bunch of really cool XP decks for sale, but they’re dirt easy to make out of 3x5 cards or repurposed playing cards.) A player saying “and that makes it difficulty zero!” followed by the GM silently sliding an XP card into the middle of the table is peak. I like to give the card a couple little taps before I say something like “so what really happens is…”

This gets objected to from some quarters, usually in the form of something like “putting your thumb on the scale is what I was already doing as a good GM, why should I have to pay for it?” And, well, that’s the reason, so that you have to pay for it. This makes the extra difficulty both explicit and collaborative. Instead of monsters suddenly growing an extra 30 HP the way they tend to do in D&D, here the GM has to openly offer the extra challenge, and allow the player to turn it down. Sometimes they’re just not in the mood, and would rather pay the XP to get past this to what they really want to do.

Like the stat pools, XPs aren’t just a score to make characters better. In addition to actual character upgrades, you can also spend them on things like retroactively creating an NPC contact, or acquiring a base of operations. They’re the currency the players get to use to wrest control of the game away from the GM.

Rounding out the mechanics are the Cyphers themselves. In simple terms, Cyphers are powerful, single-use magic items. In the original Numenera they were all assumed to be scavenged and barely understood ancient tech. So an item that acts as a single-use Fireball grenade might actually be an ancient power cell that no one knows how to use anymore, but they know if they mash these two metal bits together it blows up real good.

Later settings introduced more “subtle” cyphers, as appropriate for the world. In the game I’m running now, Cyphers have included a marble that, if you throw it, grows to the size of a bowling ball and does a tremendous amount of damage, a high-powered energy drink that gives a bonus to any speed task, and “the advice your aunt gave you when you were young,” which they haven’t tried to use yet. (It’s a -2 to any task difficulty, as long as they yell “oh! That’s what she meant!” before rolling.)

PCs can only have a few Cyphers on them at a time, and are supposed to always be finding new ones, so the game operates on the assumption that the players always have a small set of very powerful one-shot powers they can deploy. It keeps the game fresh, while discouraging hoarding. Like XPs, these also work best on cards.

I saw someone complain that Cypher was just “the players and GM handing metaplot coupons back and forth,” and yeaaaahhhh?, I can see why you might get that impression but also that’s the completely wrong philosophy. There are definitely sessions that feel more like a card game, with XP and Cypher cards slapping onto the table. But this is what I was talking about with GM style; I like having a formalized, easy to deploy way where both the GM and the players can go “well, actually…” at each other.

Character creation is similarly stripped down, and is one of the signature elements of the system: you make your character by filling in the blanks of the sentence “I’m an [adjective] [noun] who [verbs].” The noun is effectively your character class, but they’re more like starting templates. The default nouns are “strong guy”, “fast guy”, “smart guy”, “talky guy”—Fighter, Rogue, Wizard, Bard, basically. The other two blanks let you pick up some specializations. In practice, those three choices just determine which à la carte menu you get to pick your starting powers from.

That all lands somewhere around “rules medium”, in that you can probably fit all the mechanics on a single postcard, but the book is still 400+ pages long to fit all the Abilities and Verbs and all.

Despite the heft of the book, I’ve found it to be a system where the rules just melt away, but still give you enough framework to actually resolve things. When I really need the rules to back me up, there’s something there, otherwise, just say “sure, let’s call that difficulty 3,” and keep moving.

As I said, I bounced off the original Numenera setting pretty hard. Briefly: the setting is a billion years in the future, full of super-science and nanotech and post-plural-apocalypse. "Now" is roughly a medieval setting, where everyone runs around with swords fighting for feudal lords. But instead of magic we have rediscovered super-science, monsters are the results of ancient genetic experiments, or aliens, or long-abandoned robots. Cook always enjoyed playing with the Arthur C. Clarke line about "Any sufficiently advanced technology is indistinguishable from magic", and here took that all the way up to eleven—the only magic is terribly advanced technology.

The other place he leans into his strengths is that his previous games (Ghostwalk, Ptolus, the setting for Arcana Unearthed) had very cool, evocative, exciting setups, and then tended to have a tremendously boring resolution or explanation. Here, mysteries abound but are fundamentally unexplainable. “Who knows, it’s weird!!” is the end of every adventure; a setting built around cool setups that can never be explained or resolved, ever. That’s a real “your mileage may vary” flavor if ever there was one.

But the problem is that it all ends up just being “turbo D&D” with different Latin stems on the words describing the superpowers. Despite being a world dripping in nanotech, crashed spaceships, power armor, genetically-engineered robots, jetpacks, and all, for some reason the equipment chapter is all swords and polearms. Dude, I didn't buy a book with a robot on the cover to pack a halberd.

I can see why they decided to use this as the setting for the Torment-not-a-sequel. There are ways in which it’s a lot like Planescape, just without all the D&D baggage.

But there is something so deeply joyless about the setting. In the back, he has a list of Inspirations/Recommended Reading, which is both his homage to Gygax's similar appendix in 1E D&D, and also his list of primary sources. Nausicaä, which is what I think the setting most resembles, is listed under movies, not books. Which means he only saw the movie, which is 90 minutes of crazy stuff happening, and not the book, where you get to find out what the heck is going on. And then he lists Adventure Time, and I'm all, Monte—where's the sense of fun? Ninjas never steal an old guy's diamond in this game. Maybe he only saw that episode where Bubblegum dies?

As an aside: later releases for Numenera did a better job of embracing the “weird superscience future” side of the setting. I know this because, despite bouncing off the game, I kept picking up supplements for it; I wanted to find a way to make it work, and I kept trying to figure out how to shear the rules away from the setting. They did a couple of other games with the same basic mechanics—including the spectacular “RPG for kids” No Thank You Evil, which we played the hell out of.

Fortunately, they eventually pulled the combined rules from the other games and broke them out into their own book as just The Cypher System Rulebook. Like I said earlier, it’s a hefty tome, but it has all the “stuff” from the previous stand-alone games, along with a whole bunch of advice on how to lean into or out of various genres with the same rules, especially regarding how to make Cyphers work depending on the vibe and setting you’re going for.

Speaking of advice, the Cypher core book came out at roughly the same time as another book Cook did called Your Best Game Ever, which is a system & setting–agnostic book on “here’s how I think RPGs can and should work”. I cannot think of another example of this, where someone wrote a whole book about RPGs, and then separately put out a book of “and here’s the rules I built specifically to support the philosophy of play from the other book.”

So not only does the Cypher core rule book have some of the clearest “here’s how this game is supposed to work and here’s how to make that happen” text I’ve ever read, but then if you have follow-up questions there’s another 230 pages of philosophy and detail you can read if you want.

This should happen more often. I’d love to read a “philosophy of RPG design and play” book from Tweet, or Robin Laws, or Steve Jackson, or the Blades in the Dark guy, or Kevin Siembieda, or any of the other people who’ve been around making these games for a long time. I don’t know that I’d agree with them, but I’d sure like to read them.

The “generic RPG” is a hill a lot of people have tried to climb, with mixed success. The obvious primary example here is GURPS, but then you have games like Shadowrun which are really four or five different games stacked on each other in the same cyber-trenchcoat.

Cypher is also a swing at the Generic RPG, but a better example of what it’s going for is the post-3.0 D&D d20 era, or the constellation of games “Powered by the Apocalypse”: not so much one big game as a core set of bones you can assemble a game on top of. You could mix-and-match stuff from d20 Modern and d20 Future, but you’ll probably have a better time if you don’t.

The Cypher book doesn’t talk about settings but it does talk about genres, and has a long chapter outlining specific advice and tools for making the rules work under the narrative conceits of various genres. The list of genres is longer than I was expecting, there’s the usual Modern/Fantasy/Science-Fiction entries, but also things like Horror, or Romance.

The place where it really started to shine, though, is when they started doing “White Books”, separate genre & settings books to plug into Cypher.

On paper these aren’t that different than the sort of settings books GURPS or d20 would do, but the difference is that with Numenera covering the bases for all the classic science fiction & fantasy tropes, the White Books have the flexibility to get into really narrow and specific sub-genres. The generic stuff is back in the core book; these are all books with a take. They tend to be a mix of advice and guidelines on how to make the genre work as a game, a bunch of genre-specific mechanics, and then an example setting or two.

They did a fantasy setting, but instead of Tolkien/Howard/Burroughs–inspired it’s Alice in Wonderland. They did a Fallout-in-all-but-name setting with the wonderfully evocative name of Rust & Redemption that makes the mechanic of “Cyphers as scavenged technology” work maybe even better than in the original.

And then they did a book called It’s Only Magic, which might be the best RPG supplement I’ve ever read. The strapline is that it’s “cozy witchcore fantasy.” It’s a modern-day urban magic setting, but low-stakes and high-magic. (And look at that cover art!)

The main example setting in the book is centered around the coffee shop in the part of town the kids who go to the local magic college live in. The “ghost mall” is both a dead mall and where the ghosts hang out. It has one of those big fold-out maps where practically every building has an evocative paragraph of description, and you’ve knocked a skeleton of a campaign together halfway through skimming the map.

Less Earthsea and more Gilmore Girls, or rather, it plays like the lower-stakes, funnier episodes of Buffy. Apocalyptic threats from your evil ex-boyfriend? No. Vampire-who-can’t-kill-anymore as your new roommate? Yes. The Craft, but there’s three other magic-using witch clubs at the same school.

The other (smaller) example setting is basically Twin Peaks but the ghosts aren’t evil and the whole town knows about them. Or the funnier monster-of-the-week episodes of the X-Files.

It’s really fun to see what “Urban Fantasy” looks like with both “Cthulhu” and “90s goth vampire angst” washed completely out of its hair.

There’s the usual host of character options, NPCs, equipment, and the like, but there’s also a whole set of extra mechanics to make “casual magic” work. Cyphers as scented candles and smartphone apps! There’s a character focus—the verb in the character sentence—who is a car wizard, a spellcaster who feeds all their spellcasting into making their muscle car do things. It’s great!

There’s a bunch of really well thought through and actionable stuff on how to run and play an urban fantasy game, how to build out a setting, how to pace and write the story and plot in such a genre. One of my themes in all the RPG writing I’ve done this year has been how much I enjoy this current trend of just talking to the GM directly about how to do stuff, and this is an all-time great example. The sort of work where you start out thinking you probably know everything they’re going to say, and then end up nodding along going “of course!” and “great point!” every page.

It’s exactly what I look for out of an RPG supplement: a bunch of ideas, new toys to play with, and a bunch of foundational work that I wouldn’t have thought of and that’s easy to build on.

This is where I loop back around to where I started with GMing styles; whatever the term is for the style I like, that’s the style this game is written for, because this is the easiest game to run I’ve ever played.

Like I said, I tend to think of the way I like to run as “Improv”, but in the formal sense, not “just making stuff up.” Rules-wise, that means you need a ruleset that’s there when you need it to resolve something, but otherwise won’t get in your way and keep you from moving forward. You need ways the players can take the wheel and show you what kind of game they want to be running. And you need a bunch of stuff that you can lay hands on quickly to Improv on top of. I used to joke that I’d prepare for running a TTRPG session the same way a D&D Wizard prepares spells—I sketch out and wrap up a bunch of things to keep in my back pocket, not sure if I’m going to need them all, and with just enough detail that I can freestyle on top of them, but don’t feel like I wasted the effort if I don’t.

The example setting here is perfect for that. One of the players will glance at the map and say “you know, there’s that hardware store downtown,” and I can skim the two paragraphs on the store and the guy who runs it and have everything I need to run the next 30 minutes of the game.

Great stuff all around. Gets the full Icecano Seal of Approval.


Edited to add on Dec 16: Regarding the list of people who I suggest should write books about RPGs, it’s been brought to my attention that not only did Robin Laws write such a book, but I both own it and have read it! Icecano regrets the error.

Gabriel L. Helman

Read This Book Next! Dungeons & Dragons: Dungeon Master’s Guide (2024)

And the “New D&D” double volcano-asteroid summer comes to a close with the release of the 2024 revision of the 5th Edition D&D Dungeon Master’s Guide.

Let me start with the single best thing in this book. It’s on page 19, at the end of the first chapter (“The Basics”). It’s a subsection titled “Players Exploiting the Rules.” It’s half a page of blunt talk that the rules are not a simulation, they assume good-faith interpretations by everyone, they don’t exist as a vehicle to bully the other players, and if a player is being an asshole tell them to stop. Other games, including previous iterations of this one, have danced around this topic, but I can not remember a rule book so clearly stating “don’t let your players be dicks.” I should add that this comes after a section called “Respect for the Players” that spends a page or two finding every possible way to phrase “If you are going to be a DM, do not be an asshole.” It’s incredible, not because it’s some hugely insightful or ground-shaking series of observations, but because they just say it.

(There are a couple people I played with in college—no, no one you know—that I am tempted to find for the first time in 20+ years just so I can mail them a copy of these sections.)

Let me back up a tad.

A running theme through my “New D&D” reviews this year has been: where were people supposed to learn how to play this game? At one point I posited that the key enabling technology that led to the current D&D-like boom was Twitch, which finally let people watch other people play even if they didn’t already know someone.

Like I talked about last time, TTRPGs have this huge mass of what amounts to oral traditions that no one really wrote down. Everyone learned from their friend’s weird older brother, or that one uncle, or the guy in the dorms, or whatever. And this goes double for actually running the game—again, one of the reasons 10’ square-by-square dungeon crawls were so common was that was the only style of play the Red Box actually taught you how to run.

As much as “new player acquisition” was a big part of D&D’s mandate, that’s something it’s struggled with outside the era of the Red Box; text actually answering the question “okay, but literally what do I do now that everyone is at my kitchen table,” has been thin on the ground.

D&D tended to shunt this kind of stuff off into auxiliary products, leaving the Core Books as reference material. The classic example here is the Red Box, but as another example, if you go back and look at the 3.0 books, there’s no discussion of what “this is” or how to play it at all. That’s because 3e came out alongside the “D&D Adventure Game” box set, which was a Basic-esque starter set that was supposed to teach you how to play, that no one bought and no one remembers. (The complete failure of that set is one of the more justifiable reasons why 3.5 happened; those revised books had a lot of Adventure Game material forklifted over.) 4E pivoted late to the Essentials thing, and the 2014 5e had three different Starter Boxes over the last decade (with a new one coming, I assume?)

And this has always been a little bit of a crazy approach, like: really? I can’t just learn the game from this very expensive thick hardcover I bought in a bookstore? I gotta go somewhere else and buy a box with another book in it? What?

By contrast, the 2024 rules, for the first time in 50 years, really seem to have embraced “what if the core rule books actually taught you how to play.”

Like the 2024 PHB, the first 20 pages or so are a wonder. It starts with an incredibly clean summary of what a DM actually does, with tips on how to prep and run a session, what you need to bring, how to do it. It’s got an example of play like the one in the PHB with a sidebar of text explaining what’s going on, except this time it’s explaining that the DM casually asked what order the characters were in as they were walking towards the cave before they needed to know it, so they could drop the surprise attack with more drama.

It’s got a section on “DM play style” which is something almost no one ever talks about. It’s got a really great section on limits and safety tools, and setting expectations, complete with a worksheet to define hard and soft limits as a group.

Then that rolls into another 30 pages of Running The Game. Not advanced rules, just page after page of “here’s how to actually run this.” My favorite example: in the section on running combat, there’s a whole chunk on what to actually do to track monster hit points on scratch paper. There’s a discussion on whether to start with the monster’s full HP and subtract, or start at zero and add damage until you get to the HP max. (I’m solidly an add-damage guy, because I can do mental addition faster than subtraction.) I literally can’t ever remember another RPG book directly talking to the person running the game about scratch paper tracking techniques. This is the kind of stuff I’m talking about where we learned to play the game; this was all stuff you learned from watching another DM or just figured out on your own. This whole book is like someone finally wrote down the Oral Torah and I am here for it.

For once, maybe for the first time, the D&D Dungeon Master’s Guide is actually a Guide for Dungeon Masters.

Like the PHB, you could shear the first 30-50 pages off the front of this book and repackage them as a pretty great “Read This Book Next!” softcover for a new Red Box. From that point, the book shifts into a crunchier reference work, but still with the focus on “how to actually do this.” Lots of nuts-and-bolts stuff, “here’s how to work with alignment”, “here’s how to hotrod this if you need to”, the usual blue moon rules, but presented as “here’s how to run this if it comes up.”

In the best possible way, this all seems like D&D finally responding to the last decade and change of the industry. Like how Planescape’s Factions were a direct reaction to the Clans in Vampire, so much of this book feels like a response to the “GM Moves” in Apocalypse World. Those moves weren’t hugely innovative in their own right—there were a lot of reactions to PbtA that boiled down to “yeah, that’s how I already run RPGs”—but that was the point: those were things that good GMs were already doing, but someone finally wrote them down so people who didn’t have direct access to a “good GM” could learn them too. The effect on the whole industry was profound; it was like everyone’s ears popped and they said “wait, we can just directly tell people how to play?”

For example: the 2024 DMG doesn’t have a section on “worldbuilding”, it has sections on “Creating Adventures” and “Creating Campaigns”, with “campaign settings” and worldbuilding as secondary concerns to those, and that’s just great. That’s putting the emphasis on the right syllables; this is much more concerned with things like pacing, encounter design, recurring characters, flavor, and then the advice about settings builds out from that: how can you build out a setting to reflect the kind of game you want to run? Fantastic.

However, the theme of this book is “actionable content”, so rather than throw a bunch of advice for settings around and leave you hanging (like the 2014 DMG), this includes a fully operational example setting, which just happens to be Greyhawk. It’s a remarkably complete gazetteer: nice maps (plural), lots of details. This strikes me as a perfect nostalgia deployment, something that’s cool in its own right that will also make old timers do the Leo DiCaprio pointing meme.

Following that is a similarly complete gazetteer of the cosmology, offering what amounts to a diet Manual of the Planes. It does a really nice job of the whirlwind tour of what’s cool and fun to use from what they now call the “D&D Multiverse”, while making it clear you can still use any or all of this stuff on top of and in addition to anything you make up.

Something else this book does well is take advantage of the fact that there’s already a whole line of compatible 5e books in print, so it can point you to where to learn more. There’s a page or two on things like Spelljamming, or Sigil, or The Radiant Citadel, which is fully useful on its own, but then instead of being coy about it, the book just says “if you want to know more, go read $BOOK.” That’s marketing the way it’s supposed to work.

On a similar note, before it dives into Greyhawk, the DMG has a list of all the other in-print 5e settings with an elevator pitch for why they’re cool. So if you’re new, you can skim and say “wait, armies of dragons?” or “magical cold war you say?” and know where to go next.

(Well, everything in print plus… Dark Sun? Interesting. Everything else in the table is something that got into print for 5e, so the usual stuff like Forgotten Realms, Ravenloft, and Planescape, but also the adapted Magic: The Gathering settings, the Critical Role book, etc. Mystara isn’t here, nor are any of the other long-dormant 2e settings, but somehow Dark Sun made the cut. Between this and the last-second name-change reprieve in the Spelljammer set, there might be something cooking here? sickos_yes.gif)

There’s also the usual treasure tables, magic items, and so on.

Between this and the PHB, the 2024 books are a fully operational stand-alone game in a way previous iterations of the “core rules” haven’t always been.

Okay, having said all that, I am now compelled to tell you about my least favorite thing, which is the cover art. Here, let me link you to the official web page. Slap that open in a new tab, take a gander, I’ll meet you down at the next paragraph.

Pretty cool, right? Skeleton army, evil sword guy, big dragon lurking in the back. Cool coloring! Nice use of light effects! But! There, smack in the center, is Venger from the 80s D&D cartoon. My problem isn’t the nostalgia ploy, as such. My problem is that Venger is a terrible design. Even if you limit the comparison to other 80s toy cartoons, Venger is dramatically, orders-of-magnitude worse than Skeletor, Mumm-Ra, Megatron, or Cobra Commander. Hell, every single He-Man or She-Ra bad guy is a better design than Venger. Step that out further: every single Space Ghost villain is a better design than Venger. D&D is full of cooler looking stuff than that. This cover with Skeletor and his Ram Staff there instead of Venger and his goofy-ass side horn? That would be great. This, though? sigh

He shows up inside the book, too! Like the PHB, each chapter opens with a full-page art piece, and they’re all a reference to some existing D&D thing, a setting or character. And then, start of chapter 2, there’s Venger and his big dumb horn using a crystal ball to spy on Tiamat. And this is really the one I’m complaining about, because all the other full-page spreads are a cool scene, and if you want to know more, there’s a whole book for that. But for this, the follow up is… you can go watch the worst cartoon of the 80s, the DVD of which is currently out of print?

And I hear what you’re saying: it’s a nostalgia play, sure, yeah, but also, it’s 2024; the kids that watched that show are closing in on 50, or thereabouts. The edition that could lean into 80s nostalgia to pull those kids back in was third edition, and I know because I was there. “That’s the bad guy from a cartoon your parents barely tolerated” is a weird-ass piece of marketing.

As long as I’m grousing, my other least-favorite thing is towards the end, where they have something called a “Lore Glossary.” On the surface, this is a nice counterpart to the Rules Glossary in the new PHB, but while the Rules Glossary was probably the single best idea in the new books, this Lore Glossary is baffling. It’s a seemingly random collection of D&D “trivia stuff”: locations, characters, events, scattered across various settings and fiction. There doesn’t seem to be any rhyme or reason to why things get an entry here; Fizban and Lord Soth get entries but Tanis doesn’t, while Drizzt, Minsc, and Boo all do. There’s an entry for The Great Modron March but not Orcus, which, okay, spoilers I guess. It’s all details for settings rather than anything broadly applicable; the book was already too long, it didn’t need 10 more pages of teasers for other books. Both Venger and the main characters from the 80s cartoon (as “Heroes of the Realm, The”) are in here too. Again, it’s just plain weird they leaned in that hard on the old show. I assume that someone on staff was a huge fan, that or there’s a book coming out next year that’ll make us all go “ohhhh.”

The last thing I have anything negative to say about is the new Bastion System. On its own, and having not taken it for a test drive yet, it seems cool? It’s a pretty solid-looking system for having a player or party create and manage their own base of operations, possibly with Hirelings. Ways to upgrade them, bonuses or plot hooks those upgrades get you.

I’m just not sure why they’re in this book? It feels like a pitch for a “Stronghold Builder’s Guidebook” or “Complete Guildhall” got left without a release slot, and they said “let’s put the best 20 pages of this in the DMG.” Everything else in the book is applicable to every game, and then there’s this weird chapter for “and here’s how to do a base-building minigame!” Sure?

Personally, I love hireling/follower/base-building systems in computer games, but stay far away from them on the tabletop. The base management subgame was one of my favorite parts of both BATTLETECH and the first Pillars of Eternity, for example, but I don’t think I’ve ever had the desire to run a tabletop game with something more complex than “Wait, how many GP do you have on your character sheet? Sure, you can buy a house I guess.”

There’s nothing wrong with it, but like the Lore Glossary, I wish they’d tried to make the book a little shorter and 10 bucks cheaper instead. (And then given me the option to buy the blown-out version next summer.) Actually, let me hit that a little harder: this is a $50, 384-page hardcover, and that seems like it’s out of reach for the target audience here. I don’t know how much you’d have to cut to get down to 40 bucks, but I bet that would have been a better book.

Finally, for everyone keeping score at home (that’s me, I’m keeping score), Skill Challenges are not in this one.

🛡

And so, look. This is still 5th Edition Dungeons & Dragons. There’s a reason they didn’t update the number, even fractionally. If 5e wasn’t your or your group’s jam, there’s nothing in here that’ll change your mind. If 5e was your jam, this is a tooled-up, better version. This book is easily the best official D&D DMG to date. Between this and the ToV GMG, it’s an unexpected embarrassment of riches.

I see a lot of chatter on the web around “is it worth the upgrade?” I mean, these books are fifty bucks a pop retail, there’s nothing in here that’s so earth-shattering that you should consider it if you have to budget around that fact. Like buying a yacht, if you have to look at the price tag, the answer is “no.”

Honestly, though, I don’t think “upgrade” is the right lens. If you want to upgrade, great, Hasbro won’t decline the money. But this is about teeing up the next decade, setting up the kids who are just getting into the hobby now. More so than in a long time, this is a book for a jr high kid to pick up and change their life. I’ve said before that as D&D goes, so goes the rest of the hobby. I think we’re all in good shape.

software forestry Gabriel L. Helman

Software Forestry 0x07: The Trellis Pattern

In general, rewriting a system from scratch is a mistake. There are times, however, where replacing a system makes sense.

You’ve checked off every type in the Overgrowth Catalogue, entropy is winning, and someone does a back of the envelope cost analysis and finally says “you know, I think it really would be cheaper to replace this thing.”

Part of what makes this tricky is that presumably, this system has customers and is bringing in revenue. (If it didn’t you could just shut it down and build something new without a hassle.) And so, you need to build and deploy a new system while the old one is still running.

The solution is to use the old system as a Trellis.

The ultimate goal is to eventually replace the legacy system, but you can’t do it all at once. Instead, use the existing system as a Trellis, supporting the new system as it grows alongside.

This way, you can thoughtfully replace a part at a time, carefully roll it out to customers, all while maintaining the existing—and working—functionality. The system as a whole will evolve from the current condition to the improved final form.

As you work through each capability, you can either use the legacy system as a blueprint to base the new one on, or harvest code for reuse.

The great thing about a Trellis is that the Trellis and the Tree are partners. They have the same goal: for the tree to outgrow the trellis. But the trellis can be an active partner. I have a tree right now that still has part of a trellis holding up some branches. It’s one of those trees that got a little too big a little too fast, and put on a little too much fruit. A few years ago it had supports and trellises all around; now it’s down to just one whimsically-leaning stick. If things go well, I’ll be able to pull that out next summer.

The old system isn’t abandoned, it transitions into a new role. You can add features to make it work better as a trellis: an extra API endpoint here, a data export job there. The new system calls into the old one to accomplish something, and then you throw the switch and the old system starts calling into the new one to do that same thing.
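To make that concrete, here’s a minimal sketch of the “throw the switch” moment. Everything in it is hypothetical (the reports capability, the two client classes, the flag table), but the shape is the point: both systems keep running, and one small routing layer decides which of them handles each capability.

```python
# Hypothetical sketch: route each capability to the old or new system,
# one switch at a time. All names are made up for illustration.

LIVE_ON_NEW_SYSTEM = {
    "reports": True,    # already migrated and verified
    "billing": False,   # still handled by the legacy system
}


class LegacySystemClient:
    def generate_report(self, account_id: str) -> dict:
        # In real life this would call the old system.
        return {"source": "legacy", "account": account_id}


class NewSystemClient:
    def generate_report(self, account_id: str) -> dict:
        # The new system, growing up alongside the trellis.
        return {"source": "new", "account": account_id}


def generate_report(account_id: str) -> dict:
    """Callers never know, or care, which system did the work."""
    if LIVE_ON_NEW_SYSTEM.get("reports", False):
        return NewSystemClient().generate_report(account_id)
    return LegacySystemClient().generate_report(account_id)
```

Flipping one entry in that table at a time is the careful rollout; when every entry reads True, the trellis is ready to come down.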

Eventually the new system pulls away from the Trellis, grows beyond it. And if you do it right, the trellis falls away when its job is done. Ideally, in such a way that you could never tell it was there in the first place.

Sometimes, you can schedule the last switchover and have a big party when you turn the old system off. But if you’re really lucky, there comes a day where you realize the old load balancer crashed a month ago and no one noticed.

🌲

There’s a social & emotional aspect to this as well, which goes almost entirely undiscussed.

If we’re replacing a system, it’s probably been around for a while. We’re probably talking about systems with a decade+ of run time. The original architects may have moved on, but there are people who work on it, probably people who have built a whole career out of keeping it running.

There are always some emotions when it comes time to start replacing the old stuff. Some stereotypes exist for a reason, and the sorts of people who become successful software engineers or business people tend to be the sorts of folks for whom “empathy” was their dump stat. There’s something galling about the newly arrived executive talking about moving on from the old and busted system, or the new tech lead espousing how much better the future is going to be. The old system may have gotten choked out by overgrowth, left behind by the new growth of the tech industry, but it’s still running, still pulling in revenue. It’s paying for the electricity in the projector that’s showing the slide about how bad it is. It deserves respect, and so do the people who worked on it.

That’s the point: The Trellis is a good thing, it’s positive. The old system—and the old system’s staff—have a key role to play. Everyone is on the same team, everyone has the same goal.

🌲

There’s an existing term that’s often used for a pattern similar to this. I am, of course, talking about the Strangler Fig pattern. I hate this term, and I hate the usual shorthand of “strangler pattern” even more.

Really? Your mental model for bringing in new software is that it’s an invasive parasite that slowly drains nutrients and kills its host? There are worse ways to go than being strangled, but not by much.

This isn’t an isolated style of metaphor, either. I used to work with someone—who was someone I liked, by the way—who used to say that every system being replaced needed someone to be an executioner and an undertaker.

Really? Your mental model for something ending is violent, state-mandated death?

If Software Forestry has a central thesis, it is this: the ways we talk about what we do and how we do it matter. I can think of no stronger example of what I mean than otherwise sane professionals describing their work as murdering the work of their colleagues, and then being surprised when there’s resistance.

What we do isn’t violent or murderous, it is collaborative and constructive.

What I dislike the most about Strangler Figs, though, is that a Strangler Fig can never exceed the original host, only succeed it. The Fig is bound to the host forever, at first for sustenance, and then, even after the host has died and rotted away, the Fig has an empty space where the host once stood, a ghost haunting the parasite that it can never fully escape from.

🌲

So if we’re going to use a real tree as our example for how to do this, let’s use my favorite trees.

Let me tell you about the Coastal Redwoods.

The Redwood forest is a whole ecosystem to itself, not just the trees, but the various other plants growing beneath them. When a redwood gets to the end of its life, it falls over. But that fallen tree then serves as the foundation to a whole new mini-ecosystem. The ferns and sorrel cover the fallen trunk. Seedlings sprout up in the newly exposed sunlight. Burls or other nodes sprout new trees from the base of the old, meaning maybe the tree really didn’t die at all, it just transitioned. From one tree springs a whole new generation of the forest.

There are deaths other than murder, and there are endings other than death.

Let’s replace software like a redwood falling; a loud noise, and then an explosion of new possibilities.

Gabriel L. Helman

Don’t Panic: Infocom’s Hitchhiker’s Guide to the Galaxy at 40

Well! It turns out that this coming weekend is the 40th anniversary of Infocom’s Hitchhiker’s Guide to the Galaxy text adventure game by Douglas Adams and Steve Meretzky. I mentioned the game in passing back in July when talking about Salmon of Doubt, but I’ll take an excuse to talk about it more.

To recap: Hitchhiker started as a six-part radio show in 1978, which was a surprise hit, and was quickly followed by a second series, an album—which was a rewrite and re-record with the original cast instead of just being a straight release of the radio show—a 2-part book adaptation, a TV adaptation, and by 1984, a third book with a fourth on the way. Hitchhiker was a huge hit.

Somewhere in there, Adams discovered computers, and (so legend has it) also became a fan of Infocom’s style of literate Interactive Fiction. They were fans of his as well, and to say their respective fan-bases had a lot of overlap would be an understatement. A collaboration seemed obvious.

(For the details on how the game actually got made, I’ll point you at The Digital Antiquarian’s series of philosophical blockbusters Douglas Adams, The Computerized Hitchhiker’s, and Hitchhiking the Galaxy Infocom-Style.)

These are two of my absolute favorite things—Infocom games and Hitchhiker—so this should be a “two great tastes taste great together” situation, right? Well, unfortunately, it’s a little less “peanut butter cup” and a little more “orange juice on my corn chex.”

“Book adaptation” is the sort of thing that seemed like an obvious fit for Infocom, and they did several of them, and they were all aggressively mediocre. Either the adaptation sticks too close to the book, and you end up painfully recreating the source text, usually while you “wait” and let the book keep going until you have something to do, or you lean the other way and end up with something “inspired by” rather than “based on.” Hitchhiker, amusingly, manages to do both.

By this point Adams had well established his reputation for blowing deadlines (and loving “the whooshing noise they make as they go by”), so Infocom did the sane thing and teamed him up with Steve Meretzky, who had just written the spectacular—and not terribly dissimilar from Hitchhiker—Planetfall, with the understanding that Meretzky would do the programming, and if Adams flagged, Meretzky could step in and push the game over the finish line.

The game would cover roughly the start of the story; starting with Arthur’s house being knocked down, continuing through the Vogon ship, arriving on the Heart of Gold, and then ending as they land on Magrathea. So, depending on your point of view, about the first two episodes of the radio and TV versions, or the first half of the first book. This was Adams’ fourth revision of this same basic set of jokes, and one senses his enthusiasm waning.

You play as Arthur (mostly, but we’ll get to that,) and the game tracks very closely to the other versions up through Arthur and Ford getting picked up by the Heart of Gold. At that point, the game starts doing its own thing, and it’s hard not to wonder if that’s where Adams got bored and let Meretzky take over.

The game—or at least the first part—wants to be terribly meta and subversive about being a text adventure game, but more often than not offers up things that are joke-shaped, but are far more irritating than funny.

The first puzzle in the game is that it is dark, and you have to open your eyes. This is a little clever, since finding and maintaining light sources are a major theme in earlier Zork-style Infocom games, and here you don’t need a battery-powered brass lantern or a glowing elvish sword, you can just open your eyes! Haha, no grues in this game, chief! Then the second puzzle is where the game really shows its colors.

Because, you see, you’ve woken up with a hangover, and you need to find and take some painkillers. Again, this is a text adventure, so you need to actually type the names of anything you want to interact with. This is long before point-and-click interfaces, or even terminal-style tab-complete. Most text games tried to keep the names of nouns you need to interact with as short as possible for ergonomic reasons, so in a normal game, the painkillers would be “pills”, or “drugs”, or “tablets”, or some other short name. But no, in this game, the only phrase the game recognizes for the meds is “buffered analgesic”. And look, that’s the sort of thing that I’m sure sounds funny ahead of time, but is just plain irritating to actually type. (Although, credit where credit is due, four decades later, I can still type “buffered analgesic” really fast.)

And for extra gear-grinding, the verb you’d use in regular speech to consume a “buffered analgesic” would be to “take” it, except that’s the verb Infocom games use to mean “pick something up and put it in your inventory,” so then you get to do a little extra puzzle where you have to guess what other verb Adams used to mean “put it in your mouth and swallow.”

The really famous puzzle shows up a little later: the Babel Fish. This seems to be the one that most people gave up at, and there was a stretch where Infocom was selling t-shirts that read “I got the Babel Fish!”

The setup is this: You, as Arthur, have hitchhiked on to the Vogon ship with Ford. The ship has a Babel Fish dispenser (an idea taken from the TV version, as opposed to earlier iterations where Ford was just carrying a spare.) You need to get the Babel fish into your ear so that it’ll start translating for you and you can understand what the Vogons yell at you when they show up to throw you off the ship in a little bit. So, you press the button on the machine, and a fish flies out and vanishes into a crack in the wall.

What follows is a pretty solid early-80s adventure game puzzle. You hang your bathrobe over the crack, press the button again, and then the fish hits the bathrobe, slides down, and falls into a grate on the floor. And so on, and you build out a Rube Goldberg–style solution to catch the fish. The 80s-style difficulty is that there are only a few fish in the dispenser, and when you run out you have to reload your game to before you started trying to dispense fish. This, from the era where game length was extended by making you sit and wait for your 5¼-inch floppy drive to grind through another game load.

Everything you need to solve the puzzle is in the room, except one thing: the last item you need to catch the fish is the pile of junk mail from Arthur’s front porch, which you needed to have picked up on your way to lie in front of the bulldozer way back at the start of the game. No one thinks to do this the first time, or even the first dozen times, and so you end up endlessly replaying the first hour of the game, trying to find what you missed.

(The Babel Fish isn’t called out by name in Why Adventure Games Suck, but one suspects it was top of Ron Gilbert’s mind when he wrote out his manifesto for Monkey Island four years later.)

The usual reaction, upon learning that the missing element was the junk mail, coming as it does after the thing with the eyes and the “buffered analgesic”, is to mutter “screw this” and stop playing.

There’s also a bit right after that where the parser starts lying to you and you have to argue with it to tell you what’s in a room, which is also the kind of joke that only sounds funny if you’re not playing the game, and probably accounted for the rest of the people throwing their hands up in the air and doing literally anything else with their time.

Which is a terrible shame, because just after that, you end up on the Heart of Gold and the game stops painfully rewriting the book or trying to be arch about being a game. Fairly quickly, Ford, Zaphod, and Trillian go hang out in the HoG’s sauna, leaving you to do your own thing. Your own thing turns out to be using the backup Improbability Generator to teleport yourself around the galaxy, either as yourself or “Quantum Leap-style” jumping into other people. You play out sequences as all of Ford, Zaphod, and Trillian, and end up in places the main characters never end up in any of the other versions—on board the battlefleet that Arthur’s careless comment sets in motion, inside the whale, outside the lair of the Ravenous Bugblatter Beast of Traal. The various locations can be played in any order, and like an RPG from fifteen years later, the thing you need to beat the game has one piece in each location.

This is where the game settles in and turns into an actual adventure game instead of a retelling of the same half-dozen skits. And, more to the point, this is where the game starts doing interesting riffs on the source material instead of just recreating it.

As an example, at one point, you end up outside the cave of the Ravenous Bugblatter Beast of Traal, and the way you keep it from eating you is by carving your name on the memorial to the Beast’s victims, so that it thinks it has already eaten you. This is a solid spin on the book’s joke that the Beast is so dumb that it thinks that if you can’t see it, it can’t see you, but it manages to make having read the book a bonus rather than a requirement.

As in the book, to make the backup Improbability Drive work you need a source of Brownian Motion, like a cup of hot liquid. At first, you get a cup of Advanced Tea Substitute from the Nutrimat—the thing that’s almost, but not quite, entirely unlike tea. Later, after some puzzles and the missile attack, you can get a cup of real tea to plug into the drive, which allows it to work better and makes it possible to choose your destination instead of it being random. Again, that’s three different jokes from the source material mashed together in an interesting and new way.

There’s a bit towards the end where you need to prove to Marvin that you’re intelligent, and the way you do that is by holding “tea” and “no tea” at the same time. You manage that by using the backup Improbability Drive to teleport into your own brain and remove your common sense particle, which is a really solid Hitchhiker joke that only appears in the game.

The game was a huge success at the time, but the general consensus seemed to be that it was very funny but very hard. You got the sense that a very small percentage of the people who played the game beat it, even grading on the curve of Infocom’s usual DNF rate. You also got the sense that there were a whole lot of people for whom HHGG was both their first and last Infocom game. Like Myst a decade later, it seemed to be the kind of game that got bought for people who didn’t otherwise play games, and it didn’t convert many of them.

In retrospect, it’s baffling that Infocom would allow what was sure to be their best-selling game amongst new customers to be so obtuse and off-putting. It’s wild that HHGG came out the same year as Seastalker, their science fiction–themed game designed for “junior level” difficulty, and was followed by the brilliant jewel of Wishbringer, their “Introductory” game which was an absolute clinic in teaching people how to play text adventure games. Hitchhiker sold more than twice those two games combined.

(For fun, See Infocom Sales Figures, 1981-1986 | Jason Scott | Flickr)

Infocom made great art, but was not a company overly burdened by business acumen. The company was run by people who thought of games as a way to bootstrap the company, with the intent to eventually graduate to “real” business software. The next year they “finally” released Cornerstone—their relational database product that was going to get them to the big leagues. It did not; sales were disastrous compared to the amount of money spent on development. The year after that, Infocom sold itself to Activision, and Activision would shut them down completely in 1989.

Cornerstone was a huge, self-inflicted wound, but it’s hard not to look at those sales figures, with Hitchhiker wildly outstripping everything else other than Zork I, and wonder what would have happened if Hitchhiker had left new players eager for more instead of trying to remember how to spell “analgesic.”

As Infocom recedes into the past and the memories of old people and enthusiasts, Hitchhiker maintains its name recognition. People who never would have heard the name “Zork” stumble across the game as the other, other, other version of Hitchhiker Adams worked on.

And so, the reality is that nowadays HHGG is likely to be most people’s first—and only—encounter with an Infocom game, and that’s too bad, because it’s really not a good example of what their games were actually like. If you’re looking for a recommendation, scare up a copy of Enchanter. I’d recommend that, Wishbringer, Planetfall, and Zork II long before getting to Hitchhiker. (Zork is the famous game with the name recognition, but the second one is by far the best of the five games with “Zork” in the title.)

BBC Radio 4 did a 30th anniversary web version some years ago, which added graphics in the style of the guide entries from the TV show, done by the same people. It feels like the kind of re-release Infocom would have done in the late 80s if the company hadn’t been busy drowning in the consequences of its bad decisions.

It’s still fun, taken on its own terms. I’d recommend the game to any fan of the other iterations of the Guide, with the caveat that it should be played with a cup of tea in one hand and a walkthrough within easy reach of the other.

All that said, it’s easy to sit here in the future and be too hard on it. The Secret of Monkey Island was a conceptual thermocline for adventure games as a genre; it’s so well designed, and its design philosophy is so well expressed in that design, that once you’ve played it, it’s incredibly obvious what every game before it did wrong.

As a kid, though, this game fascinated me. It was baffling, and seemingly impossible, but I kept plowing at it. I loved Hitchhiker, still do, and there I was, playing Arthur Dent, looking things up in my copy of the Guide and figuring out how to make the Improbability Drive work. It wasn’t great, it wasn’t amazing, it was amazingly amazing. At one point I printed out all the Guide entries from the game and made a physical Guide out of cardboard?

As an adult, what irritates me is that the game’s “questionable” design means that it’s impossible to share that magic from when I was 10. There are plenty of other things I loved at that time I can show people now, and the magic still works—Star Wars, Earthsea, Monkey Island, the other iterations of Hitchhiker, other Infocom games. This game, though, is lost. It was too much of its exact time, and while you can still play it, it’s impossible to recreate what it was like to realize you can pick up the junk mail. Not all magic lasts. Normally, this is where I’d type something like “and that’s okay”, but in this particular case, I wish they’d tried to make it last a little harder.


As a postscript, Meretzky was something of a packrat, and it turns out he saved everything. He donated his “Infocom Cabinet” to the Internet Archive, and it’s an absolute treasure trove of behind-the-scenes information, memos, designs, artwork. The Hitchhiker material is here: Infocom Cabinet: Hitchhikers Guide to the Galaxy : Steve Meretzky and Douglas Adams

Gabriel L. Helman

Icecano endorses Kamala Harris for President

Well, fuck it, if the LA Times and the Washington Post won’t do it I will: Icecano Formally Endorses Kamala Harris for President of the United States.

It’s hard to think of another presidential election in living memory that’s more of a slam dunk than the one we have here. On the one hand, we have a career public servant, senator, sitting vice-president, and lady who made a supreme court justice cry. On the other hand, we have the convicted felon, adjudicated rapist, wannabe warlord, racist game show host, and, oh, the last time he was president he got impeached twice and a million people died.

If you strip away all the context—and you shouldn’t, but if you did—there’s still no contest. On basic ability to do the job Harris far outstrips the other guy. But if you pour all the context back—well, here’s the thing, only one candidate has any context worth talking about, and it’s all utterly disqualifying.

Whatever, whoever, or wherever you care about, Harris is going to be better than anything the convicted felon would do. Will she be perfect? Almost certainly not, but that’s not how this works. We’re not choosing a best friend, or a Dungeon Master, or a national therapist, or figuring out who to invite out for drinks. We’re hiring the chief executive of a staggeringly well-armed and sprawling bureaucracy.

I was going to type a bunch more here, but it comes down to this: Harris would actually be good at that job. We know for a fact that other guy will not.


One of the gifts age gives you, as it’s taking away the ability to walk up and down stairs without using the handrail, is perspective.

Every election cycle, there’s a group of self-identified “liberal/leftist” types who mount a “principled” take on why they can’t possibly vote for the Democrat, and have to let the Republican win, despite the fact that the Republican would be objectively worse on the thing they claim to care about. And it’s always the same people.

Here’s the thing: these people are liars. Certainly to you, maybe to themselves.

Nothing has ever satisfied them, and nothing ever will. They’re conservatives, but they want their weed dealer to keep selling to them at the “friendly” price, so they pretend otherwise. You know that “punk rock to conservative voter” pipeline? At best, these are those people, about 2/3 of the way down the line. Stop paying attention to them, stop trying to convert them, and don’t let them distract you.


What’s the point of being a billionaire newspaper owner if you’re going to be a gutless coward about it? I genuinely don’t know which of these two Charles Foster Kane would have supported, but he’d have made a fucking endorsement.


A week to go everyone. Vote.

software forestry Gabriel L. Helman

Software Forestry 0x06: The Controlled Burn

Did you know earthworms aren’t native to North America? Sounds crazy, but it’s true; or at least it has been since the glaciers of the last ice age scoured the continent down to the bedrock and took the earthworms with them. North America certainly has earthworms now, but as a recently introduced invasive species. (For everyone who just thought “citation needed”, Invasive earthworms of North America.)

As such, the biomes of North America have very different lifecycles than their counterparts in Eurasia do. In, say, a Redwood Forest, organic matter builds up in a way it doesn’t across the water. Things still rot, there’s still fungus and microbes and bugs and things, but there isn’t a population of worms actively breaking everything down. The biomass decays slower. Some buildup is a good thing, it provides a habitat for smaller plants and animals, but if it builds up too much, it can start choking plants out before it breaks down into nutrients.

So what happens is, the forest catches on fire. In a forest with earthworms, a fire is pretty much always a bad thing. Not so much in the Redwoods, or other Californian forests. The trees are fire resistant, the fire clears away the excess debris and frees those nutrients, and many species of cone-bearing conifers—redwoods, pines, cypresses, and the like—have what are called “serotinous” cones, which only open and release their seeds after a fire. Some are literally covered in a layer of resin that has to melt off before the seeds can get out. The fire rips through, clears out the debris, and the new plants can sprout in the newly fertilized ground. Fire isn’t a hazard to be endured; it’s been adopted as a critical part of the entire ecosystem’s lifecycle.

Without human intervention, fires happen semi-regularly due to lightning. Of course, that’s a little unpredictable and doesn’t always turn out great. But the real problem is when humans prevent fires from taking hold, and then no matter how much you “sweep the forest,” the debris and overgrowth builds up and builds up, until you get the really huge fires we’ve been having out here.

The people who used to live here (Before, ahh… a bunch of other people “showed up and took over” who only knew how to manage forests with earthworms) knew what the solution was: the Controlled Burn. You choose a time, make some space, and carefully set the fire, making sure it does what it needs to do in the area you’ve made safe, but keep it out of places where the people are. In CA at least, we’re starting to adopt controlled burns as an intentional management technique again, a few hundred years later. (The biology, politics, history, and sociology of setting a forest on fire on purpose are beyond our scope here, but you get the general idea.)

I think a lot of Software Forests are like this too.

Every place I’ve ever worked has struggled with figuring out how to plan and estimate ongoing maintenance outside of a couple of very narrow cases. If it’s something specific, like a library upgrade, or a bug, you can usually scope and plan that without too much trouble. But anything larger is a struggle, because those larger maintenance and care efforts are harder to estimate, especially when there isn’t a specific & measurable customer-facing impact. You don’t have a “thing” you can write a bug on. You don’t know what the issues are, specifically, it’s just acting bad.

The problem requires sustained focus, the kind that lasts long enough to actually make a difference. And that’s hard to get.

One of the reasons why Cutting Trails is so effective is that it doesn’t take that much more time than the work the trail is being cut towards. Back when estimating via Fibonacci Sequence was all the rage, the extra work to cut the trail usually didn’t get you up to the next fibonacci number.

Furthermore, the effort to get in and actually estimate and scope some significant maintenance work is often more work than the actual changes. It’s wasteful to spend a week investigating and then write up a plan for someone to do later. You’re already in there!

Finally, rarely is there a direct advocate. There’s nearly always someone who acts as the Voice of the Customer, or the Voice of the Business, but very rarely is anyone the Voice of the Forest.

(I suspect this is one of the places where agile leads us astray. The need to have everything be a defined amount of work that someone can do in under a week or two makes it incredibly easy to just not do work that doesn’t lend itself to being clearly defined ahead of time.)

So the overgrowth and debris builds up, and you get the software equivalent of an unchecked forest fire: “We need to just rewrite all of this.”

No you don’t! What you need are some Controlled Burns.

It goes like this:

Most Forests have more than one application, for a wide definition of “application.” There’s always at least one that’s limping along, choked with Overgrowth. Choose one. Find a single person to volunteer. (Or get volun-told.) Clear their schedule for a month. Point them at the app with overgrowth and let them loose to fix stuff.

We try to be process-agnostic here at Software Forestry, but we acknowledge most folks these days are doing something agile, or at least agile adjacent. Two-week sprints seem to have settled in as the “standard” increment size, so a month is two sprints. That’s not nothing! You gotta mean it to “lose” a resource for that much time. But also, you should be able to absorb four weeks of vacation in a quarter, and this is less disruptive than that. Maybe schedule it as one sprint with the option to extend to a second depending on how things look “next week.”

It helps, but isn’t mandatory, to have success metrics ahead of time. Sometimes, the right move is to send the person in there and assume you’ll find something to paint a bullseye around. But most of the time you’ll want to have some kind of measurement you can do a before-and-after comparison with. The easiest ones are usually performance related, because you can measure those objectively, but probably aren’t getting handled as part of the normal “run the business.” Things like “we currently process x transactions per second, we need to get that to 2x,” or “cut RAM use by 10%,” or “why is this so laggy sometimes?”
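If it helps, the measurement itself can be dead simple. Here’s a minimal sketch of capturing a baseline throughput number before the burn, so there’s something to compare against afterwards; process_batch and the workload are stand-ins for whatever your system actually does.

```python
# Hypothetical sketch: record a baseline "transactions per second" number
# before the Controlled Burn, then run it again when you're done.
import time


def process_batch(records: list[int]) -> int:
    # Stand-in for the real work being measured.
    return sum(r * r for r in records)


def transactions_per_second(batch_size: int = 1_000, window: float = 5.0) -> float:
    """Run the workload repeatedly for a fixed window and report throughput."""
    records = list(range(batch_size))
    done = 0
    start = time.perf_counter()
    while time.perf_counter() - start < window:
        process_batch(records)
        done += batch_size
    return done / (time.perf_counter() - start)


if __name__ == "__main__":
    print(f"baseline: {transactions_per_second():,.0f} records/sec")
```

It doesn’t have to be rigorous benchmarking; it just has to be the same number, measured the same way, before and after.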

I did a Controlled Burn once on a system that needed to, effectively, scan every record in a series of database tables to check for things that needed to be deleted off of a storage device. It scanned everything, then started over and scanned everything again. When I started, it was taking over a day to get through a cycle, and that time was increasing, because it wasn’t keeping up with the amount of new work sliding in. No one knew why it took that long, and everyone with real experience with that app was long gone from the company. After a month of dedicated focus, it got through a cycle in less than two hours. Fixed a couple bits of buggy behavior while I was at it. No big changes, no re-architecture, no platform changes, just a month of dedicated focus and cleanup. A Controlled Burn.

This is the time to get that refactoring done—fix that class hierarchy, split that object into some collaborators. Write a bunch of tests. Refactor until you can write a bunch of tests. Fix that thing in the build process everyone hates. Attach some profilers and see where the time is going.

Dig in, focus, and burn off as much overgrowth as you can. And then leave a list of things to do next time. You should know enough now to do a reasonable job scoping and estimating the next steps, so write those up for the to-do list. Plant some seeds for new growth. You shouldn’t have to do a Controlled Burn on the same system twice.

Deploying this kind of directed focus can be incredibly powerful. The average team can absorb maybe one or two of these a year, so deploy them with purpose.

🌲

Sometimes, all the care in the world won’t do the trick, and you really do need to replace a system. Next time: The Trellis Pattern

software forestry Gabriel L. Helman

Software Forestry 0x05: Cutting Trails

The lived reality of most Software Foresters is that we spend all our time with large systems we didn’t see the start of, and won’t see the end of. There’s a lot of takes out there about what makes a system “Legacy” and not “just old”, but one of the key things is that Legacy Systems are code without continuity of philosophy.

Because almost certainly, the person who designed and built it in the first place isn’t still here. The person that person trained probably isn’t still here. Given the average tenure time in tech roles, it’s possible the staff has rolled over half a dozen or more times.

Let me tell you a personal example. A few lifetimes ago, I worked on one of these two-decade old Software Forests. Big web monolith, written in what was essentially a homebrew MVC-esque framework which itself was written on top of a long-deprecated 3rd party UI framework. The file layout was just weird. There clearly was a logic to the organization, but it was gone, like tears in the rain.

Early on, I had a task where I needed to add an option to a report generator. From the user perspective, I needed to add an option to a combobox on a web form, and then when the user clicked the Generate button, read that new option and punch the file out differently.

I couldn’t find the code! I finally asked my boss, “hey, is there any way to tell which file has which UI pages?” The response was, “no, you just gotta search.”

As they say, Greppability Is an Underrated Code Metric.

(Actually, I’ve worked on two big systems now where I had essentially this exact conversation. The other one was the one where one of the other engineers described it as having been built by “someone who knew everything about how JSP tags worked except when to use them.”)

So you search for the distinctive text in the button, or the combo box, or something on the page. You find the UI. Then you start tracing in. Following the path of execution, dropping into undocumented methods with unhelpful names, bouncing into weird classes with no clear design, strange boundaries, one minute a function with a thousand lines, the next minute an inheritance hierarchy 10 levels of abstraction deep to call one line.

At this point you start itching. “I could rewrite all of this,” you think. “I could get this stood up in a weekend with Spring Boot/React/Ruby on Rails/Elixir/AWS Lambdas/Cool Thing I Used Last”. You start gazing meaningfully at the copy of the Refactoring book on the shelf. But you gotta choke down that urge to rebuild everything. You have a bug you have to fix or a feature to deploy. But it’s not actually going to get better if you keep digging the hole deeper. You gotta stop digging, and leave things better for the next person.

You need to Cut a Trail.

1. Leave Trail Markers.

First thing is, you have to figure out what it does now before you change anything. In a sane, kind world, there would be documentation, diagrams, clear training. And that does happen, but very, very rarely. If you’re very lucky, there’s someone who can explain it to you. Otherwise, you have to do some Forensic Architecture.

Talk to people. Add some logging. Keep clicking and watching what it does. Step through it in a debugger, if you can and if that helps, although I’ve personally found that just getting a debugger working on a live system is oftentimes more work than it’s worth for the information you get out of it. But most of all, read. Read the code closely; load as much of that system’s state and behavior into your mind as you can. Read it like you’re in High School English and trying to pull the symbolism out of The Grapes of Wrath or The Great Gatsby. That weird function call is the light at the end of the pier, those abstractions are the eyes on the billboard—what do they mean? Why are they here? How does any of this work?

You’ll get there, that’s what we do. There’s a point where you’ll finally understand enough to make the change you want to make. The trick is to stop at this point, and write everything down. There’s a “haha only serious” joke that code comments are for yourself in six months, but—no. Your audience here is you, a week ago. Write down everything you needed to know when you started this. Every language has a different way to do embedded documentation or comments, but they all have a way to do it. Document every method or function that your explored call path went through. Write down the thing you didn’t understand when you started, the strange overloaded behavior of that one parameter, what that verb really means in the function name, as much as possible, why it does what it does.
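As an example of what a trail marker can look like in practice, here’s a minimal sketch of the kind of docstring I mean. Everything in it is invented (the function, the parameters, the wiki notes); the real one would say the things you wish you’d known a week ago.

```python
def reprice_order(order, region_code, force=False):
    """Recalculate an order's total after a catalog change.

    Trail markers from the last tracing session (longer notes on the team wiki):
      * region_code is the *billing* region, not the shipping one, despite
        what the name of the calling form field suggests.
      * force=True skips the cache and re-reads the catalog; the nightly
        batch job depends on this, so don't "simplify" it away.
      * Returns the new total in cents, as an int, like everything else
        in the payments code.
    """
    ...
```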

Take an hour and draw the diagram you wish you’d had. Write down your notes. And then leave all that somewhere that other people can find it. If you’re using a language where the embedded documentation system can pull in external files, check that stuff right on in to the codebase. Most places have an internal wiki. Make a page for your team if there isn’t one. Under that, make a page for the app if it doesn’t have one. Then put all that you’ve learned there.

Something else to make sure to document early on: terminology. Everyone uses the same words to mean totally different things. My personal favorite example: no two companies on earth use the word “flywheel” the same way. It doesn’t matter what it was supposed to mean! Ask. Then write it down. The weird noun you tripped over at the start of this? Put the internal definition somewhere you would have found it.

People frequently object that they don’t have the time to do this, to which I say: you’ve already done the hard part! 90% of the time for this task was figuring it out! Writing it down will take a fraction of the time you already had to spend, I promise. And when you’re back here in a year, the time you save in being able to reload all that mental state is going to more than pay for that afternoon you spent here.

2. Write Tests.

Tests are really underrated as a documentation and exploration technique. I mean, using them to actually test is good too! But for our purposes we’re not talking about formal TDD or Red-Green-Refactor–style approaches. That weird function? Slap some mocks and stubs together and see what it does. Throw some weird data at it. Your goal isn’t to prove it correct, but to act like one of those Edwardian Scientists trying to figure out how air works.

Another Forest I inherited once, which was a large app that customers paid real money to use, had a test suite of 1 test—which failed. But that was great, because there was already a place to write and run tests.

Tests are a net benefit, they don’t all have to be thorough, or fall into strict unit/integration/acceptance boundaries. Sometimes, it’s okay to put a couple of little weird ones in there that exist to help explain what some undocumented code does.

If you’re unlucky enough to run into a Forest with no test runner, trust me, take the time to bolt one on. It doesn’t have to be perfect! But you’ll make that time back faster than you’d believe.

When you get done, in addition to whatever “normal” Unit or Integration tests your process requires or requests, write a really small test that demonstrates what you had to do. Link that back to the notes you wrote, or the documentation you checked in.
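For instance, the trail-marking test can be as small as this. It’s a sketch: final_price is a stand-in for whatever undocumented function you just spent the afternoon decoding, and the assertions simply pin down the behavior you actually observed, surprises included.

```python
# Hypothetical sketch: a "characterization" test that records what the
# existing code actually does today, not what we wish it did.
import unittest


def final_price(base: float, discount: float) -> float:
    # Stand-in for the undocumented legacy function you just traced;
    # in real life you'd import it from the codebase instead.
    effective = discount if discount > 0 else 0.0
    return round(base * (1 - effective) * 1.08, 2)


class TestFinalPriceAsItActuallyBehaves(unittest.TestCase):
    def test_discount_is_applied_before_tax(self):
        # Observed while tracing that report bug; link back to the wiki notes.
        self.assertEqual(final_price(base=100.00, discount=0.10), 97.20)

    def test_negative_discount_is_silently_ignored(self):
        # Surprising, but it's the behavior production relies on today.
        self.assertEqual(final_price(base=100.00, discount=-0.50), 108.00)


if __name__ == "__main__":
    unittest.main()
```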

3. A Little Cleanup, some Limited Refactoring

Once you have it figured out, and have a test or two, there’s usually two strong responses: either “I need to replace all of this right now”, or “This is such a mess it’ll never get better.”

So, the good news is that both of those are wrong! It can get better, and you really probably shouldn’t rework everything today.

What you should do is a little cleanup. Make something better. Fix those parameter names, rename that strangely named function. Heck, just fix the tenses of the local variables. Do a little refactoring on that gross class, split up some responsibilities. It’s always okay to slide in another shim or interface layer to add some separation between tangled things.
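And “slide in a shim” can be as small as this sketch. The gnarly legacy class and its mystery arguments are invented; the idea is just to give the tangled thing one clean, named seam you can test against and eventually swap out.

```python
# Hypothetical sketch: a small shim that gives a tangled legacy class a
# clean, documented seam without rewriting it today.

class OldGnarlyReportWriter:
    # Stand-in for the legacy class with the confusing interface.
    def run(self, mode: int, data: list, flag: bool) -> str:
        header = "SUMMARY" if mode == 2 else "DETAIL"
        rows = [str(d).upper() if flag else str(d) for d in data]
        return "\n".join([header, *rows])


class ReportWriter:
    """The seam: a named, typed interface the rest of the code can rely on."""

    def __init__(self, legacy: OldGnarlyReportWriter | None = None):
        self._legacy = legacy or OldGnarlyReportWriter()

    def summary(self, rows: list, uppercase: bool = False) -> str:
        # mode=2 means "summary" in the old system; nobody remembers why.
        return self._legacy.run(2, rows, uppercase)
```

New code talks to ReportWriter; the weirdness stays quarantined behind it until there’s time to clean it up properly.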

(Don’t leave a huge mess in the VC diff, though, please.)

Leave the trail a little cleaner than when you found it. Doesn’t have to be a lot, we don’t need to re-landscape the whole forest today.

4. Write the New Stuff Right (as possible)

A lot of the time, you know at least one thing the original implementors didn’t: you know how the next decade went. It’s very easy to come in much later, and realize how things should have been done in the first place, because the system has a lot more years on it now than it used to. So, as much as you can, build the new stuff the right way. Drop that shim layer in, encapsulate the new stuff, lay it out right. Leave yourself a trail to follow when you come back and refactor the rest of it into shape.

But the flip side of that is:

5. Don’t be a Jerk About It

Everyone has worked on a codebase where there’s “that” module, or library, or area, where “that guy” had a whole new idea about how the system should be architected, and it’s totally out of place with everything else. A grove of palm trees in the middle of a redwood forest. Don’t be that guy.

I worked on an e-commerce system once where the Java package name was something like com.company.services.store, and then right next to it was com.company.services.store2. The #2 package was one former employee’s personal project to refactor the whole system; they had left years before with it half done, but of course it was all in production, and it was a crapshoot which version other parts of the system called into. Don’t do that.

After you’re gone, when someone looks at the version control change log for this part of the system, you want them to see your name and think “oh, this one had the right idea.”

Software Forestry is a group project, for the long term. Most of the time, “consistency and familiarity” are more valuable than some kind of quixotic quest for the ideal engineering. It’s okay, we’ll get there. Keep leaving it better than you found it. It’ll be worth it.

🌲

You’ll be amazed what your overgrown codebase looks like after a couple months of doing this. That tangled overgrowth starts to look positively tidy.

But sometimes, just cutting trails doesn’t get you there. Next Time: The Controlled Burn.

Gabriel L. Helman

Ten Years of the Twelfth Doctor

I missed it with everything else going on at the time, but this past August marks ten years since the debut of Peter Capaldi as the Twelfth Doctor Who, who is, without a doubt, my all-time favorite version of the character.

His take on the character boiled down to, basically, “Slightly Grumpy Aging Punk Space Dad”, and it turns out that’s exactly what I always wanted. Funny, weird, a little spooky, “kind” without necessarily being “nice”. If nothing else, the Doctor should be the coolest weird uncle possible, and, well, look at that picture! Perfection.

(This is a strange thing for someone who grew up on PBS reruns of Tom Baker to admit. But when I’m watching something else and wishing the Doctor would show up and kick things into gear, it’s now Capaldi I picture instead of Baker.)

Unlike some of the other versions of the character, Twelve took a little while to dial in. So it’s sort of appropriate I didn’t remember this anniversary until now, because this past weekend was the 10th anniversary of the eighth episode of his inaugural series, “Mummy on the Orient Express.” “Mummy” wasn’t the best episode of that season—that was easily “Listen” or “Dark Water”, but “Mummy” was the episode where I finally got what they were doing.

This is slightly embarrassing, because “Mummy” is also the most blatantly throwback episode of the year; it’s a story that could have been done with very few tweaks in 1975 with Tom Baker. The key, though, is those differences in approach, and one of the reasons a long-running show like Doctor Who goes back and revisits old standards is to draw a contrast between how they were done then vs. now.

Capaldi, unlike nearly all of his predecessors, was a genuinely well-known actor before climbing on board the Tardis. The first place I saw him was as the kid that falls in love with the (maybe?) mermaid in the criminally under-seen Local Hero. But his signature part was Malcolm Tucker in The Thick of It. The Thick of It is set “behind the scenes” of the British government, and is cut from the British comedy model of “everyone is an idiot trying to muddle through”. The Thick of It takes that model one step further, though, and posits that if that’s true, there must be a tiny group of non-idiots desperately keeping the world together. That’s Malcolm Tucker, nominally the government’s Director of Communications, but in reality the Prime Minister’s enforcer, spin doctor, and general Fixer. Tucker is clearly brilliant, the lone competent man surrounded by morons, but also a monster, and borderline insane. Capaldi plays him as openly menacing, but less straightforwardly malevolent than simply beyond caring about anyone, constantly picking up the pieces from the problems that the various other idiots in Government have caused. Capaldi manages to play Tucker as clearly always thinking, but it’s never clear what he’s actually thinking about.

Somehow, Tucker manages to be both the series’ main antagonist and its protagonist at the same time. And the character also had his own swearing consultant? It’s an incredible performance of a great part in a great show. (On the off chance you never saw it, he’s where “omnishambles” came from, and you should stop reading this right now and go watch that show; I’ll wait for you down at the next paragraph.)

So the real problem for Doctor Who was that “Malcom Tucker as The Doctor” was simultaneously a terrible idea but one that was clearly irresistible to everyone, including show-runner Steven Moffat and Capaldi himself.

The result was that Capaldi had a strangely hesitant first season. His two immediate predecessors, David Tennant and Matt Smith, leapt out of the gate with their takes on the Doctor nearly fully formed, whereas it took a bit longer to dial in Capaldi. They knew they wanted someone a little less goofy than Smith and maybe a little more standoffish and less emotional, but going “Full Tucker” clearly had strong gravity. (We’ve been working our way on-and-off through 21st century Who with the kids, and having just rewatched Capaldi’s first season, in retrospect I think he cracked what he was going to do pretty early, but everyone else needed to get Malcolm Tucker out of their systems.)

Capaldi is also an excellent actor—probably the best to ever play the part—and also one who is very willing to not be the center of attention in every scene, so he hands a lot of the spotlight off to his co-lead Jenna Coleman’s Clara Oswald, which made the show a lot better, but left him strangely blurry early on.

As such, I spent a lot of that first season asking “where are they going with this?” I was enjoying it, but it wasn’t clear what the take was. Was he… just kind of a jerk now? One of the running plot lines of the season was the Doctor wondering if he was a good man or not, which was a kind of weird question to be asking in the 51st year of the show. There was another sideplot where he didn’t get along with Clara’s new boyfriend, where it was also unclear what the point was. Finally, the previous episode ended with Clara and the Doctor having a giant argument that would normally be the kind of thing you’d do as a cast member was leaving, but Coleman was staying for at least the rest of the year? Where was all this going?

For me, “Mummy” is where it all clicked: Capaldi’s take on the part, what the show was doing with Clara, the fact that their relationship was as toxic as it looked and that was the point.

There are so many great little moments in “Mummy”; from the basic premise of “there’s a mummy on the Orient Express… in space!”, to the “20s art deco in the future” design work, to the choice of song that the band is singing, to the Doctor pulling out a cigarette case and revealing that it’s full of jelly babies.

It was also the first episode of the year that had a straightforward antagonist, one that the Doctor beat by being a little bit smarter and a little bit braver than everyone else. He’d been weirdly passive up to this point; or rather, the season had a string of stories where there wasn’t an actual “bad guy” to be defeated, and had more complex, ambiguous resolutions.

It’s the denouement where it really all landed for me. Once all the noise was over, the Doctor and Clara have a quiet moment on an alien beach where he explains—or rather she realizes—what his plan had been all along and why he had been acting the way he had.

The previous episode had ended with the two of them having a tremendous fight, fundamentally a misunderstanding about responsibility. The Doctor had left Clara in charge of a decision that normally he’d have taken; Clara was angry that he’d left her in the lurch, while he thought she deserved the right to make the decision.

The Doctor isn’t interested in responsibility—far from it, he’s one of the most responsibility-averse characters in all of fiction—but he’s old, and he’s wise, and he’s kind, and he’s not willing to not help if he can. And so he’ll grudgingly take responsibility for a situation if that’s what it takes—but this version is old enough, and tired enough, that he’s not going to pretend to be nice while he does it.

He ends by muttering, as much to himself as to Clara, “Sometimes all you have are bad choices. But you still have to choose.”

And that’s this incarnation in a nutshell—of course he’d really rather be off having a good time, but he’s going to do his best to help where he can, and he isn’t going to stop trying to help just because all the options are bad ones. He’d really rather the Problem Trolley be going somewhere nice, but if someone has to choose which track to go down, he’ll make the choice.

“Mummy” is the middle of a triptych of episodes where Clara’s world view fundamentally changed. In the first, she was angry that the Doctor expected her to take responsibility for the people they came across; here in the second she realized why the Doctor did what he did; and then in the next she got to step into the Doctor’s shoes again, but this time she understood.

The role of the “companion” has changed significantly over the years. Towards the end of the old show they realized that if the title character is an unchanging mostly-immortal, you can wrap an ongoing story around the sidekick. The new show landed on a model where the Doctor is mostly a fixed point, but each season tells a story about the companion changing, sometimes to the point where they don’t come back the next year.

Jenna Coleman was on the show for two and a half seasons, and so the show did three distinct stories about Clara. The first two stories—“who is the impossible girl” and “will she leave the show to marry the boring math teacher”—turned out to be headfakes, red herrings; the show was actually telling another story, hidden in plain sight.

The one story you can never tell in Doctor Who is why that particular Time Lord left home, stole a time capsule, and became “The Doctor”. You can edge up against it, nibble around the edges, imply the hell out of things, but you can’t ever actually tell that story. Except, what you can do is tell the story of how someone else did the same thing, what kind of person they had to be ahead of time, what kinds of things had to happen to them, what did they need to learn.

With “Mummy”, Clara’s fate was sealed—there was no going back to “real life”, or “getting married and settling down”, or even “just leaving”. The only options left were Apotheosis or Death—or, as it turns out, both, but in the other order. She had learned too much, and was on a collision course with her own stolen Tardis.

And standing there next to her was the aging punk space dad, passing through, trying to help. My Doctor.


Both Moffat’s time as show-runner and Capaldi’s time as the Doctor have been going through a much-deserved reappraisal lately. At the time, Capaldi got a weirdly rough reaction from online corners of the fanbase. Partly this was because of the aforementioned slow start, and partly because he broke the 21st century Who streak of casting handsome young men. But mostly this was because of a brew of toxic “fans”, bad-faith actors, and various “alt-right” grifters. (You know, Tumblr.) Because of course, this last August was also the 10th anniversary of “GamerGate”. How we ended up in a place where the unchained id of the worst people alive crashed through video game and science fiction fandoms, tried to fix the Hugos, freaked out about The Last Jedi so hard it broke Hollywood, and then elected a racist game show host to be president is a topic for another time, but those people have mostly moved the grift on from science fiction—I mean, other than the Star Wars fanbase, which became a permanent host body.

The further we get from it, the more obvious what a grift it was. It’s hard to describe how utterly deranged the Online Discourse™ was. There was an entire cottage industry in the late twenty-teens telling people not to watch Doctor Who for the dumbest reasons imaginable, and those folks are just… gone now, and their absence makes it even more obvious how spurious the “concerns” were. Because this was also the peak “taking bad-faith actors seriously” era. The general “fan” “consensus” was that Capaldi was a great actor let down by bad writing, in that sense of “bad” meaning “it wasn’t sexist enough for me.”

There’s a remarkable number of posts out there in what’s left of the social web of people saying, essentially, “I never watched this because $YOUTUBER said it was bad, but this is amazing!” or “we never knew what we had until it was gone!”

Well, some of us knew.

I missed this back in November, but the official Doctor Who magazine did one of their rank-every-episode polls for the 60th anniversary. They do this every decade or so, and they’re always interesting, inasmuch as they’re a snapshot of the general fan consensus of the time. They’re not always a great view of how the general public sees things; a poll conducted by the official magazine is strongly self-selecting for Fans with a capital F.

I didn’t see it get officially posted anywhere, but most of the nerd news websites did a piece on it, for example: Doctor Who Fans Have Crowned the Best Episode – Do You Agree? | Den of Geek. The takeaway is that the top two are Capaldi’s, and half of the top ten are Moffat’s. That would have been an unbelievable result a decade ago, because the grifters would have swamped the voting.

Then there’s this, which I’ve been meaning to link to for a while now. Over in the burned-out nazi bar where twitter used to be, a fan of Matt Smith’s via House of the Dragon found out that he used to be the lead of another science fiction show and started live tweeting her watch through Doctor Who: jeje (@daemonsmatt). She’s up through Capaldi’s second season now, as I type this, and it’s great. She loves it, and the whole thread of threads is just a river of positivity. And even in the “oops all nazis” version of twitter, no one is showing up in the comments with the same grifter crap we had to deal with originally, those people are just gone, moved on to new marks. It’s the best. It’s fun to see what we could have had at the time if we’d run those people off faster.

This all feels hopeful in a way that’s bigger than just people discovering my favorite version of my favorite show. Maybe, the fever is finally starting to break.

software forestry Gabriel L. Helman

Software Forestry 0x04: Library Upgrade Week

Here at Software Forestry we do occasionally try to solve problems instead of just turning them into lists of smaller problems, so we’re gonna do a little mood pivot here and start talking about how to manage some of those forms of overgrowth we talked about last time.

First up: let me tell you the good news about Library Upgrade Week.

Just about any decently sized software system uses a variety of third party libraries. And why wouldn’t you? The multitude of high-quality libraries and frameworks out there is probably the best outcome of both the Open Source movement and Object-Oriented Software design. The specific mechanics vary between languages and their practitioner’s cultures, but the upshot is that very rarely does anyone build everything from scratch. There’s no need to go all historical reenactment and write your own XML parser.

Generally, people keep those libraries pinned and stay on one fixed version, rather than schlepping in a new version every time an update happens. This is a good thing! Change is risk, and risk should be taken on intentionally. The downside is that those libraries keep moving forward, the version you’re using slips out of date, and now you have a bunch of Overgrowth. And so that means you need to upgrade them.

The upshot of all that is that, on a semi-regular basis, we all need to slurp in a bunch of new code that no one on the payroll wrote and that we don’t really know how to test. Un-Reviewed Code Is Tech Debt, and one of the mantras of writing good tests is “don’t test the framework”, so this is always a little iffy.

It’s incredibly easy to just keep letting those weeds grow a little longer. “The new version doesn’t have anything we need”, “there’s no bugs”, “if it ain’t broke don’t fix it”, and so on. Upgrades always take too long, don’t usually deliver immediate gratification, and are hard to schedule. It’s no fun, and no one likes to do it.

The trick is to turn it into a party.

It works like this: set aside the last week of the quarter to concentrate on 3rd party library upgrades. Regardless of what your formal planning cycle or process is, most businesses tend to operate on quarters, and there’s usually a little dead time at the end of the quarter you can repurpose.

The Process:

  1. Form squads. Each squad is a group of like-minded individuals focused on a single 3rd party library or framework. Squads are encouraged to be cross-team. Each squad will focus on updating that 3rd party library in all applications or places where it is used. The intent is to make this a group event, where people can help each other out. Participation is not mandatory.

  2. Share squad membership and goals ahead of time. Leadership should reserve the right to veto libraries as “too scary” or “not scary enough”. Libraries with high-severity alerts or known CVEs are good candidates.

  3. That week, each squad self-organizes and works as a group through any issues caused by the upgrade. Other than major outages or incidents, squad members should be excused from other “run the business” type work for that week; or rather, the library upgrades are “the business.” Have fun!

  4. On that Friday hold the Library Upgrade Week Show-n-Tell. Every squad should demo what they did, how they did it, and what it took to pull it off. Tell war stories, hold a happy hour, swap jokes. If a squad doesn’t finish that’s okay! The expectation is that they’ll have learned a lot about what it’ll take to finish, and that work will be captured in the relevant team’s todo lists. If you’re in a process with short develop-deploy increments (like sprints) you can make the library upgrade(s) a release on its own. Ideally you already have a way to sign off a release as not containing regressions, and so a short release with just a library upgrade is a great way to make sure you didn’t knock some dominoes over.

But wait! There's more! All participants will vote on awards to give to squads, for things like:

  • Error message with least hits on Stack Overflow
  • Largest version number jump
  • Most lines changed
  • Fewest lines changed
  • Best team name
  • Best presentation

Go nuts! Have a great time!

🌲

Yes, it’s a little silly, but that’s the point. I’ve deployed a version of this at a couple of jobs now, and it’s remarkable how effective it is. The first couple of cycles people hit the “easy” ones—uprev the logging library or a JSON parser or something. But then, once people know that Library Upgrade Week is coming, they start thinking about the harder stuff, and you start getting people saying they want to take a swing at the main framework, or the main language version, or something else load-bearing. It’s remarkable how much progress two or three people can make on a problem that looks unsolvable when they have an uninterrupted week to chew on it. (If you genuinely can’t spare a handful of folks to do some weeding four weeks out of the year, that’s a much larger problem than out of date libraries, and you should go solve that problem first. Like, right now.)

There’s an instinct to take the core idea and schedule this kind of maintenance a few times a year, but leave off the part where it’s a party. This is a mistake. This is work people want to do even less than their usual work; the trick is to make everything around it fun.

We’re Foresters, and both we and the Forest are here long term. The long term health of both depends on the care of the Forest being something that the Foresters enjoy, and it’s okay to stack that deck in your favor.

software forestry Gabriel L. Helman

Software Forestry 0x03: Overgrowth, Catalogued

Previously, we talked about What We Talk About When We Talk About Tech Debt, and that one of the things that makes that debt metaphor challenging is that it has expanded to encompass all manner of Overgrowth, not all of which fits that financial mental model.

From a Forestry perspective, not all the things that have been absorbed by “debt” are necessarily bad, and they aren’t always avoidable. Taking a long-term, stewardship-focused view, there’s a bunch of stuff that’s more like emergent properties of a long-running project, as opposed to getting spendy with the credit card.

So, if not debt, what are we talking about when we talk about tech debt?

It’s easy to get over-excited about Lists of Things, but I got into computer science from the applied philosophy side, rather than the applied math side. I think there are maybe seven categories of “Overgrowth” that are different enough to make it useful to talk about them separately:

1. Actual Tech Debt.

Situations where you make an explicit decision to do something “not as good” in order to ship faster. There’s two broad subcategories here: using a hacky or unsustainable design to move faster, and cutting scope to hit a date.

In fairness, the original Martin Fowler post just talks about “cruft” in a broad sense, but generally speaking “Formal” (orthodox?) tech debt assumes a conscious choice to accept that debt.

This is the category where the debt analogy works the best. “I can’t buy this now with cash on hand, but I can take on more credit.” (Of course, this also includes “wait, what’s a variable rate?”)

In my experience, this is the least common species of Overgrowth, and the most straightforwardly self correcting. All development processes have some kind of “things to do next” list or backlog, regardless of the formal name. When making that decision to take on the debt, you put an item on the todo list to pay it off.

That list of cut features becomes the nucleus of the plan for the next major version, or update, or DLC. Sometimes, the schedule did you a favor, you realize it was a bad idea, and that cut feature debt gets written off instead of paid off.

The more internal or infrastructure-type items become those items you talk about with the phrase “we gotta do something about…”; the logging system, the metrics and observability, that validation system, adding internationalization. Sometimes this isn’t a big formal effort, just a recognition that the next piece of work in that area is going to take a couple extra days to tidy up the mess we left last time.

Fundamentally, paying this off is a scheduling and planning problem, not a technical one. You had to have some kind of an idea about what the work was to make the decision to defer the work, so you can use that same understanding to find it a spot on the schedule.

That makes this the only category where you can actually pay it off. There’s a bounded amount of work you can plan around. If the work keeps getting deferred, or rescheduled, or kicked down the road, you need to stop and ask yourself if this is actually debt or something aspirational that went septic on you.

2. We made the right decision, but then things happened.

Sometimes you make the right decisions, don’t choose to take on any debt, and then things happen and the world imposes work on you anyway.

The classic example: Third party libraries move forward, the new version isn’t cleanly backwards compatible, and the version you’re using suddenly has a critical security flaw. This isn’t tech debt, you didn’t take out a loan! This is more like tech property taxes.

This is also a planning problem, but trickier, because it’s on someone else’s schedule. Unlike the tech debt above, this isn’t something you can pay down once. Those libraries or frameworks are going to keep getting updated, and you need to find a way to stay on top of them without making it a huge effort every time.

Of course, if they stop getting updated you don’t have an ongoing scheduling problem anymore, but you have the next category…

3. It seemed like a good idea at the time.

Sometimes you just guess wrong, and the rest of the world zigs instead of zags. You do your research, weigh the pros and cons, build what you think is the right thing, and then it’s suddenly a few years later and your CEO is asking why your best-in-class data rich web UI console won’t load on his new iPad, and you have to tell him it’s because it was written in Flash.

You can’t always guess right, and sometimes you’re left with something unsupported and with no future. This is very common; there’s a whole lot of systems out there that went all-in on XML-RPC, or RMI, or GWT, or Angular 1, or Delphi, or ColdFusion, or something else that looked like it was going to be the future right up until it wasn’t.

Personally, I find this to be the most irritating. Like Han Solo would say, it’s not your fault! This was all fine, and then someone you never met makes a strategic decision, and now you have to decide how or if you’re going to replace the discontinued tech. It’s really easy to get into a “if it ain’t broke don’t fix it” headspace, right up until you grind to a halt because you can’t hire anyone who knows how to add a new screen to the app anymore. This is when you start using phrases like “modernization effort”.

4. We did the best we could but there are better options now.

There’s a lot more stuff available than there used to be, and so sometimes you roll onto a new project and discover a home-brew ORM, or a hand-rolled messaging queue, or a strange pattern, and you stop and realize that oh wait, this was written before “the thing I would use” existed. (My favorite example of this is when you find a class full of static final constants in an old Java codebase and realize this was from before Java 5 added enums.)
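If you’ve never run into that pattern, here’s a minimal before-and-after sketch; the class and constant names are invented for illustration, and in real code the enum would replace the constants class rather than sit next to it in the same file:

```java
// Pre-Java 5: a set of named values, enforced by convention only.
// Nothing stops a caller from passing 42 where a "status" is expected.
public final class OrderStatus {
    public static final int PENDING   = 0;
    public static final int SHIPPED   = 1;
    public static final int CANCELLED = 2;

    private OrderStatus() {} // constants only, no instances
}

// Java 5 and later: the same idea as a real enum, which the compiler
// can actually check. (In practice this replaces the class above.)
public enum OrderState {
    PENDING, SHIPPED, CANCELLED
}
```

Neither version was wrong for its era; the point is that once the newer tool shows up, the old pattern starts reading like Overgrowth even though it was the right call at the time.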

A lot of the time, the custom, hand-rolled thing isn’t necessarily “worse” than some more recent library or framework, but you have to have some serious conversations about where to spend your time; if something isn’t your core business and has become a commodity, it’s probably not worth pouring more effort into maintaining your custom version. Everyone wants to build the framework, but no one really wants to maintain the framework. Is our custom JSON serializer really still worth putting effort into?

Like the previous category, it’s probably time to take a deep breath and talk about re-designing; but unlike the previous one, the person who designed the current version is probably still on the payroll. This usually isn’t a technical problem so much as it is a grief management one.

5. We solved a different problem.

Things change. You built the right thing at the time, but now you’ve got new customers, shifted markets, increased scale, maybe the feds passed a law. The business requirements have changed. Yesterday, this was the right thing, and now it isn’t.

For example: Maybe you had a perfectly functional app to sell mp3 files to customers to download and play on their laptops, and now you have to retrofit that into a subscription-based music streaming platform for smartphones.

This is a good problem to have! But you still gotta find a way to re-landscape that forest.

6. Context Drift.

There’s a pithy line that Legacy Code is “code without tests,” but I think that’s only part of the problem. Legacy code is code without continuity of philosophy. Why was it built this way? There’s no one left who knows! A system gets built in a certain context, and as time passes that context changes, and the further away we get from the original context, the more overgrown and weedy the system appears to become. Tests—good tests—are one way to preserve context, but not the only way.

A whole lot of what’s called “cruft” is here, because it’s harder to read code than to write it. A lot of that “cruft” is congealed knowledge. That weird custom string utility that’s only used in the one place? Sure, maybe someone didn’t understand the standard library, or maybe you don’t know about the weekend the client API started handing back malformed data and they wouldn’t fix it—and even worse, this still happens at unpredictable times.

This is both the easiest and least glamorous to treat, because the trick here is documentation. Don’t just document what the code does, document why the code does what it does, why it was built this way. A very small amount of effort while something is being planted goes a long way towards making sure the context is preserved. As Henry Jones Sr. says, you write it down so you don’t have to remember.
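To make that concrete, here’s a hedged sketch of the kind of context-preserving comment I mean; the utility, the field, and the upstream API are all invented for illustration:

```java
/**
 * Scrubs the customer name field before it goes anywhere else.
 *
 * WHY THIS EXISTS (hypothetical example): a few years back the upstream
 * vendor API started intermittently handing back malformed data in this
 * field, the vendor declined to fix it, and it still happens at
 * unpredictable times. We scrub it on our side rather than trust the feed.
 *
 * Don't "simplify" this down to a plain trim(); the embedded control
 * characters are the whole reason this utility exists.
 */
static String scrubCustomerName(String raw) {
    // Strip ASCII control characters, then trim the usual whitespace.
    return raw.replaceAll("\\p{Cntrl}", "").trim();
}
```

Five years from now, that comment is doing the work the original author’s memory can’t.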

To put all that another way: Documentation debt is still tech debt.

7. Not debt, just an old mistake.

The one no one likes to talk about. For whatever reason, someone didn’t do A-quality work. This isn’t necessarily because they were incompetent or careless; sometimes shit happens, you know? This is the flip side of the original Tech Debt category; it wasn’t on purpose, but sometimes people are in a hurry, or need to leave early, or just can’t think of anything better.

And so for whatever reason, the doors aren’t straight, there’s a bunch of unpainted plywood, those stairs aren’t up to code. Weeds everywhere. You gotta spend some time re-cutting your trails through the forest.

🌲

As we said at the start, each of those types of Overgrowth has its own root causes, but also needs a different kind of forest management. Next week, we start talking about techniques to keep the Overgrowth under control.

Gabriel L. Helman

TV Rewatch: The Good Place

spoilers ahoy

We’ve been rewatching The Good Place. (Or rather, I’ve been rewatching it—I watched it on and off while it was on—everyone else around here is watching it for the first time.)

It is, of course, an absolute jewel. Probably the last great network comedy prior to the streaming/covid era. It’s a masterclass. In joke construction, in structure, in hiding jokes in set-dressing signs. It hits that sweet spot of being genuinely funny while also having recognizable human emotions, which tends to be beyond the grasp of most network sitcoms.

It’s also a case study in why you hire people with experience; Kristen Bell and Ted Danson are just outstanding at the basic skill of “starring in a TV comedy”, but have never been as good as they are here. Ted Danson especially is a revelation; he’s been on TV essentially my entire life, and he’s better than he’s ever been, in a way that feels like it’s because he finally has material good enough.

But on top of all that, it’s got a really interesting take on what being a “good person” means, and the implications thereof. It’s not just re-heated half-remembered psychology classes; this is a show made by people who have really thought about it. Philosophers get name-dropped, but in a way that indicates that the people writing the show have actually read the material and absorbed it, instead of just leaving a blank spot in the script that said TECH.

Continuing with that contrasting example, Star Trek: The Next Generation spent hours on hours talking about emotions and ethics and morality, but never had an actual take on the concept, beyond a sort of mealy-mouthed “emotions are probably good, unless they’re bad?” and never once managed to be as insightful as the average joke in TGP. It’s great.

I’m gonna put a horizontal line here and then do some medium spoilers, so if you never watched the show you should go do something about that instead of reading on.


...

The Good Place has maybe my all-time favorite piece of narrative sleight of hand. (Other than the season of Doctor Who that locked into place around the Tardis being all four parts of “something old, something new, something borrowed, something blue.”)

In the very first episode, a character tells something to another character—and by extension the audience. That thing is, in fact, a lie, but neither the character nor the audience have any reason to doubt it. The show then spends the rest of the first season absolutely screaming at the audience that this was a lie, all while trusting that the audience won’t believe their lying eyes and ignore the mounting evidence.

So, when the shoe finally drops, it manages to be both a) a total surprise, but also b) obviously true. I can’t think of another example of a show that so clearly gives the audience everything they need to know, but trusts them not to put the pieces together until the characters do.

And then, it came back for another season knowing that the audience was in on “the secret” and managed to both be a totally new show and the same show it always was at the same time. It’s a remarkable piece of work.

software forestry Gabriel L. Helman

Software Forestry 0x01: Somewhere Between a Flower Pot and a Rainforest

“Software” covers a lot of ground. Just as there are a lot of different kinds and ecosystems of forests, there are a lot of kinds and ecosystems of software. And like forests, each of those kinds of software has its own goals, objectives, constraints, rules, needs.

One of the big challenges when reading about software “processes” or “best practices” or even just plain general advice is that people so rarely state up front what kind of software they’re talking about. And that leads to a lot of bad outcomes, where people take a technique or a process or an architecture that’s intrinsically linked to its originating context out of that context, recommend it, and then it gets applied to situations that are wildly inappropriate. Just like “leaves falling off” means something very different in an evergreen redwood forest than it does in one full of deciduous oak trees, different kinds of software projects need different care. As practitioners, it’s very easy for us to talk past each other.

(This generally gets cited in cases like “if you aren’t a massive social network with a dedicated performance team you probably don’t need React,” but also, pop quiz: what kind of software were all the signers of the Agile Manifesto writing at the time they wrote and signed it?1)

So, before we delve into the practice of Software Forestry, let’s orient ourselves in the landscape. What kinds of software are there?

As usual for our industry, one of the best pieces written on this is a twenty-year old Joel On Software Article,2 where he breaks software up into Five Worlds:

  1. Shrinkwrap (which he further subdivides into Open Source, Consultingware, and Commercial web based)
  2. Internal
  3. Embedded
  4. Games
  5. Throwaway

And that’s still a pretty good list! I especially like the way he buckets not based on design or architecture but more on economic models and business constraints.

I’d argue that in the years since that was written, “Commercial web-based” has evolved to be more like what he calls “Internal” than “Shrinkwrap”; or more to the point, those feel less like discrete categories than they do like convenient locations on a continuous spectrum. Widening that out a little, all five of those categories feel like the intersections of several spectrums.

I think spectrums are a good way to view the landscape of modern software development. Not discrete buckets or binary yes/no questions, but continuous ranges where various projects land somewhere in between the extremes.

And so, in the spirit of an enthusiastic “yes, and”, I’d like to offer up what I think are the five most interesting or influential spectrums for talking about kinds of software, which we can express as questions sketching out a left-to-right spectrum:

  1. Is it a Flower Pot or a Sprawling Forest?
  2. Does it Run on the Customer’s Computers or the Company’s Computers?
  3. Are the Users Paid to Use It or do they Pay to Use It?
  4. How Often Do Your Customers Pay You?
  5. How Much Does it Matter to the Users?
🌲

Is it a Flower Pot or a Sprawling Forest?

This isn’t about size or scale, necessarily, as much as it is about overall “complexity”, the number of parts. On one end, you have small, single-purpose scripts running on one machine; on the other end, you have sprawling systems with multiple farms or clusters interacting with each other over custom messaging busses.

How many computers does it need? How many different applications work together? Different languages? How many versions do you have to maintain at once? What scale does it operate at?3 How many people can draw an accurate diagram from memory?

This has huge impacts on not only the technology, but things like team structure, coordination, and planning. Joel’s Shrinkwrap and Internal categories are on the right here, the other three are more towards the left.

🌳

Does it Run on the Customer’s Computers or the Company’s Computers?

To put that another way, how much of it works without an internet connection? Almost nothing is on one end or the other; no one ships dumb terminals or desktop software that can’t call home anymore.

Web apps are pretty far to the right, depending on how complex the in-browser client app is. Mobile apps are usually in the middle somewhere, with a strong dependency on server-side resources, but also will usually work in airplane mode. Single-player Games are pretty far to the left, only needing server components for things like updates and achievement tracking; multiplayer starts moving right. Embedded software is all the way to the left. Joel’s Shrinkwrap is left of center, Internal is all the way to the right.

This has huge implications for development processes; as an example, I started my career in what we then called “Desktop Software”. Deployment was an installer which got burned to a disk. Spinning up a new test system was unbelievably easy: pull a fresh copy of the installer and install it into a VM! Working in a microservice mesh environment, there are days when that feels like the software equivalent of Greek fire, a secret long lost. In a world of sprawling services, spinning up a new environment is sometimes an insurmountable task.

A final way to look at this: how involved do your users have to be with an update?

🌲

Are the Users Paid to Use It or do they Pay to Use It?

What kind of alternatives do the people actually using the software have? Can they use something else? A lot of times you see this talked about as being “IT vs commercial,” but it’s broader than that. On the extreme ends here, the user can always choose to play a different mobile game, but if they want to renew their driver’s license, the DMV webpage is the only game in town. And the software their company had custom built to do their job is even less optional.

Another very closely related way of looking at this: Are your Customers and Users the same people? That is, are the people looking at the screen and clicking buttons the same people who cut the check to pay for it? The oft-repeated “if you’re not the customer you’re the product” is a point center-left of this spectrum.

The distance between the people paying and the people using has profound effects on the design and feedback loops for a software project. As an extreme example, one of the major—maybe the most significant—differences between Microsoft and Apple is that Microsoft is very good at selling things to CIOs, and Apple is very good at selling things to individuals, and neither is any good at selling things the other direction.

Bluntly, the things your users care about and that you get feedback on are very, very different depending on if they paid you or if they’re getting paid to use it.

Joel’s Internal category is all the way to the left here, the others are mostly over on the right side.

🌳

How Often Do Your Customers Pay You?

This feels like the aspect that’s exploded in complexity since that original Joel piece. The traditional answer to this was “once, and maybe a second time for big upgrades.” Now though, you’ve got subscriptions, live service models, “in-app purchases”, and a whole universe of models around charging a middle-man fee on other transactions. This gets even stranger for internal or mostly-internal tools; in my corporate life, I describe this spectrum as a line where the two ends are labeled “CAPEX” and “OPEX”.

Joel’s piece doesn’t really talk about business models, but the assumption seems to be a turn-of-the-century Microsoft “pay once and then for upgrades” model.

🌲

How Much Does it Matter to the Users?

Years and years ago, I worked on one of the computer systems backing the State of California’s welfare system. And on my first day, the boss opened with “however you feel about welfare, politically, if this system goes down, someone can’t feed their kids, and we’re not going to let that happen.” “Will this make a kid hungry” infused everything we did.

Some software matters. Embedded pacemakers. The phone system. Fly-by-wire flight control. Banks.

And some, frankly, doesn’t. If that mobile game glitches out, well, that’s annoying, but it was almost my appointment time anyway, you know?

Everyone likes to believe that what they’re working on is very important, but they also like to be able to say “look, this isn’t aerospace” as a way to skip more testing. And that’s okay; there’s a lot of software that if it goes down for an hour or two, or glitches out on launch and needs a patch, that’s not a real problem. A minor inconvenience for a few people, forgotten about the next day.

As always, it’s a spectrum. There’s plenty of stuff in the middle: does a restaurant website matter? In the grand scheme of things, not a lot, but if the hours are wrong that’ll start having an impact on the bottom line. In my experience, there’s a strong perception bias towards the middle of this spectrum.

Joel touches on this with Embedded, but mostly seems to be fairly casual about how critical the other categories are.

🌳

There are plenty of other possible spectrums, but over the last twenty years those are the ones I’ve found myself thinking about the most. And I think the combination does a reasonable job sketching out the landscape of modern software.

A lot of things in software development are basically the same regardless of what kind of software you’re developing, but not everything. Like Joel says, it’s not like Id was hiring consultants to make UML diagrams for DOOM, and so it’s important to remember where you are in the landscape before taking advice or adopting someone’s “best practices.”

As follows from the name, Software Forestry is concerned with forests—the bigger systems, with a lot of parts, that matter, with paying customers. In general, the things more on the right side of those spectrums.

As Joel said 22 years ago, we can still learn something from each other regardless of where we all stand on those spectrums, but we need to remember where we’re standing.

🌲

Next Time: What We Talk About When We Talk About Tech Debt


  1. I don’t know, and the point is you don’t either, because they didn’t say.
  2. This almost certainly won’t be the last Software Forestry post to act as extended midrash on a Joel On Software post.
  3. Is it web scale?
Gabriel L. Helman

Ableist, huh?

Well! Hell of a week to decide I’m done writing about AI for a while!

For everyone playing along at home, NaNoWriMo, the nonprofit that grew up around the National Novel Writing Month challenge, has published a new policy on the use of AI, which includes this absolute jaw-dropper:

We also want to be clear in our belief that the categorical condemnation of Artificial Intelligence has classist and ableist undertones, and that questions around the use of AI tie to questions around privilege.

Really? Lack of access to AI is the only reason “the poors” haven’t been able to write books? This is the thing that’s going to improve access for the disabled? It’s so blatantly “we got a payoff, and we’re using lefty language to deflect criticism,” so disingenuous, and in such bad faith, that the only appropriate reaction is “hahahha Fuck You.”

That said, my absolute favorite response was El Sandifer on Bluesky:

"Fucking dare anyone to tell Alan Moore, to his face, that working class writers need AI in order to create."; immediately followed by "“Who the fuck said that I’ll fucking break his skull open” said William Blake in a 2024 seance."

It’s always a mistake to engage with Bad Faith garbage like this, but I did enjoy these attempts:

You Don't Need AI To Write A Novel - Aftermath

NaNoWriMo Shits The Bed On Artificial Intelligence – Chuck Wendig: Terribleminds

There’s something extra hilarious about the grifters getting to NaNoWriMo—the whole point of writing 50,000 words in a month is not that the world needs more unreadable 50k manuscripts, but that it’s an excuse to practice, you gotta write 50k bad words before you can get to 50k good ones. Using AI here is literally bringing a robot to the gym to lift weights for you.

If you’re the kind of ghoul that wants to use a robot to write a book for you, that’s one (terrible) thing, but using it to “win” a for-fun contest that exists just to provide a community of support for people trying to practice? That’s beyond despicable.

The NaNoWriMo organization has been a mess for a long time; it’s a classic volunteer-run non-profit where the founders have moved on and the replacements have been… poor. It’s been a scandal engine for a decade now, and they’ve fired everyone and brought in new people at least once? And the fix is clearly in; NaNoWriMo got a new Executive Director this year, and the one thing the “AI” “Industry” has at the moment is gobs of money.

I wonder how small the bribe was. Someone got handed a check, excuse me, a “sponsorship”, and I wonder how embarrassingly, enragingly small the number was.

I mean, any amount would be deeply disgusting, but if it was, “all you have to do is sell out the basic principles of the non-profit you’re now in charge of and you can live in luxury for the rest of your life,” that’s still terrible but at least I would understand. But you know, you know, however much money changed hands was pathetically small.

These are the kind of people who should be hounded out of any functional civilization.


And then I wake up to the news that Oprah is going to host a prime time special on The AI? Ahhhh, there we go, that’s starting to smell like a Matt Damon Superbowl Ad. From the guest list—Bill Gates?—it’s pretty clearly some high-profile reputation laundering, although I’m sure Oprah got a bigger paycheck than those suckers at NaNoWriMo. I see the discourse has already decayed through a cycle of “should we pre-judge this” (spoiler: yes) and then landed on whether or not there are still “cool” uses for AI. This is such a dishonest deflection that it almost takes my breath away. Whether or not it’s “cool” is literally the least relevant point. Asbestos was pretty cool too, you know?

Gabriel L. Helman

Why is this Happening, Part III: Investing in Shares of a Stairway to Heaven

Previously: Part I, Part II.

We’ve talked a lot about “The AI” here at Icecano, mostly in terms ranging from “unflattering” to “extremely unflattering.” Which is why I’ve found myself stewing on this question the last few months: Why is this happening?

The easy answer is that, for starters, it’s a scam, a con. That goes hand-in-hand with it also being hype-fueled bubble, which is finally starting to show signs of deflating. We’re not quite at the “Matt Damon in Superbowl ads” phase yet, but I think we’re closer than not to the bubble popping.

Fad-tech bubbles are nothing new in the tech world, in recent memory we had similar grifts around the metaverse, blockchain & “web3”, “quantum”, self-driving cars. (And a whole lot of those bubbles all had the same people behind them as the current one around AI. Lots of the same datacenters full of GPUs, too!) I’m also old enough to remember similar bubbles around things like bittorrent, “4gl languages”, two or three cycles on VR, 3D TV.

This one has been different, though. There’s a viciousness to the boosters, a barely contained glee at the idea that this will put people out of work, which has been matched in intensity by the pushback. To put all that another way, when ELIZA came out, no one from MIT openly delighted at the idea that they were about to put all the therapists out of work.

But what is it about this one, though? Why did this ignite in a way that those others didn’t?

A sentiment I see a lot, as a response to AI skepticism, is to say something like “no no, this is real, it’s happening.” And the correct response to that is to say that, well, asbestos pajamas really didn’t catch fire, either. Then what happened? Just because AI is “real” it doesn’t mean it’s “good”. Those mesothelioma ads aren’t because asbestos wasn’t real.

(Again, these tend to be the same people who a few years back had a straight face when they said they were “bullish on bitcoin.”)

But there’s another sentiment I see a lot that I think is standing behind that one: that this is the “last new tech we’ll see in our careers”. This tends to come from younger Xers & elder Millennials, folks who were just slightly too young to make it rich in the dot com boom, but old enough that they thought they were going to.

I think this one is interesting, because it illuminates part of how things have changed. From the late 70s through sometime in the 00s, new stuff showed up constantly, and more importantly, the new stuff was always better. There’s a joke from the 90s that goes like this: Two teams each developed a piece of software that didn’t run well enough on home computers. The first team spent months sweating blood, working around the clock to improve performance. The second team went and sat on a beach. Then, six months later, both teams bought new computers. And on those new machines, both systems ran great. So who did a better job? Who did a smarter job?

We all got absolutely hooked on the dopamine rush of new stuff, and it’s easy to see why; I mean, there were three extra verses of “We Didn’t Start the Fire” just in the 90s alone.

But a weird side effect is that as a culture of practitioners, we never really learned how to tell if the new thing was better than the old thing. This isn’t a new observation; Microsoft figured out how to weaponize this early on as Fire And Motion. And I think this has really driven the software industry’s tendency towards “fad-oriented development”: we never built up a herd immunity to shiny new things.

A big part of this, of course, is that the tech press profoundly failed. A completely un-skeptical, overly gullible press that was infatuated with shiny gadgets foisted a whole parade of con artists and scamtech on all of us, abdicating any duty they had to investigate accurately instead of just laundering press releases. The Professionally Surprised.

And for a long while, that was all okay, the occasional CueCat notwithstanding, because new stuff generally was better, and even if was only marginally better, there was often a lot of money to be made by jumping in early. Maybe not “private island” money, but at least “retire early to the foothills” money.

But then somewhere between the Dot Com Crash and the Great Recession, things slowed down. Those two events didn’t help much, but also somewhere in there “computers” plateaued at “pretty good”. Mobile kept the party going for a while, but then that slowed down too.

My Mom tells a story about being a teenager while the Beatles were around, and how she grew up in a world where every nine months pop music was reinvented, like clockwork. Then the Beatles broke up, the 70s hit, and that all stopped. And she’s pretty open about how much she misses that whole era; the heady “anything can happen” rush. I know the feeling.

If your whole identity and worldview about computers as a profession is wrapped up in diving into a Big New Thing every couple of years, it’s strange to have it settle down a little. To maintain. To have to assess. And so it’s easy to find yourself grasping for what the Next Thing is, to try and get back that feeling of the whole world constantly reinventing itself.

Missing the heyday of the PC boom isn’t the reason that AI took off. But it provides a pretty good set of excuses to cover the real reasons.

Is there a difference between “The AI” and “Robots?” I think, broadly, the answer is “no;” but they’re different lenses on the same idea. There is an interesting difference between “robot” (we built it to sit outside in the back seat of the spaceship and fix engines while getting shot at) and “the AI” (write my email for me), but that’s more about evolving stories about which is the stuff that sucks than a deep philosophical difference.

There’s a “creative” vs “mechanical” difference too. If we could build an artificial person like C-3PO I’m not sure that having it wash dishes would be the best or most appropriate possible use, but I like that as an example because, rounding to the nearest significant digit, that’s an activity no one enjoys, and as an activity it’s not exactly a hotbed of innovative new techniques. It’s the sort of chore it would be great if you could just hand off to someone. I joke this is one of the main reasons to have kids, so you can trick them into doing chores for you.

However, once “robots” went all-digital and became “the AI”, they started having access to this creative space instead of the physical-mechanical one, and the whole field backed into a moral hazard I’m not sure they noticed ahead of time.

There’s a world of difference between “better clone stamp in photoshop” and “look, we automatically made an entire website full of fake recipes to farm ad clicks”; and it turns out there’s this weird grifter class that can’t tell the difference.

Gesturing back at a century of science fiction thought experiments about robots, being able to make creative art of any kind was nearly always treated as an indicator that the robot wasn’t just “a robot.” I’ll single out Asimov’s Bicentennial Man as an early representative example—the titular robot learns how to make art, and this both causes the manufacturer to redesign future robots to prevent this happening again, and sets him on a path towards trying to be a “real person.”

We make fun of the Torment Nexus a lot, but it keeps happening—techbros keep misunderstanding the point behind the fiction they grew up on.

Unless I’m hugely misinformed, there isn’t a mass of people clamoring to wash dishes, kids don’t grow up fantasizing about a future in vacuuming. Conversely, it’s not like there’s a shortage of people who want to make a living writing, making art, doing journalism, being creative. The market is flooded with people desperate to make a living doing the fun part. So why did people who would never do that work decide that was the stuff that sucked and needed to be automated away?

So, finally: why?

I think there are several causes, all tangled.

These causes are adjacent to but not the same as the root causes of the greater enshittification—excuse me, “Platform Decay”—of the web. Nor are we talking about the largely orthogonal reasons why Facebook is full of old people being fooled by obvious AI glop. We’re interested in why the people making these AI tools are making them. Why they decided that this was the stuff that sucked.

First, we have this weird cultural stew where creative jobs are “desired” but not “desirable”. There’s a lot of cultural cachet around being a “creator” or having a “creative” job, but not a lot of respect for the people actually doing the work. So you get the thing where people oppose the writer’s strike because they “need” a steady supply of TV, but the people who make it don’t deserve a living wage.

Graeber has a whole bit adjacent to this in Bullshit Jobs. Quoting the originating essay:

It's even clearer in the US, where Republicans have had remarkable success mobilizing resentment against school teachers, or auto workers (and not, significantly, against the school administrators or auto industry managers who actually cause the problems) for their supposedly bloated wages and benefits. It's as if they are being told ‘but you get to teach children! Or make cars! You get to have real jobs! And on top of that you have the nerve to also expect middle-class pensions and health care?’

“I made this” has cultural power. “I wrote a book,” “I made a movie,” are the sort of things you can say at a party that get people to perk up; “oh really? Tell me more!”

Add to this thirty-plus years of pressure to restructure public education around “STEM”, because those are the “real” and “valuable” skills that lead to “good jobs”, as if the only point of education was as a job training program. A very narrow job training program, because again, we need those TV shows but don’t care to support new people learning how to make them.

There’s always a class of people who think they should be able to buy anything; any skill someone else has acquired is something they should be able to purchase. This feels like a place I could put several paragraphs that use the word “neoliberalism” and then quote from Ayn Rand, The Incredibles, or Led Zeppelin lyrics depending on the vibe I was going for, but instead I’m just going to say “you know, the kind of people who only bought the Cliffs Notes, never the real book,” and trust you know what I mean. The kind of people who never learned the difference between “productivity hacks” and “cheating”.

The sort of people who only interact with books as a source of isolated nuggets of information, the kind of people who look at a pile of books and say something like “I wish I had access to all that information,” instead of “I want to read those.”

People who think money should count at least as much, if not more than, social skills or talent.

On top of all that, we have the financialization of everything. Hobbies for their own sake are not acceptable, everything has to be a side hustle. How can I use this to make money? Why is this worth doing if I can’t do it well enough to sell it? Is there a bootcamp? A video tutorial? How fast can I start making money at this?

Finally, and critically, I think there’s a large mass of people working in software that don’t like their jobs and aren’t that great at them. I can’t speak for other industries first hand, but the tech world is full of folks who really don’t like their jobs, but they really like the money and being able to pretend they’re the masters of the universe.

All things considered, “making computers do things” is a pretty great gig. In the world of Professional Careers, software sits at the sweet spot of “amount you actually have to know & how much school you really need” vs “how much you get paid”.

I’ve said many times that I feel very fortunate that the thing I got super interested in when I was twelve happened to turn into a fully functional career when I hit my twenties. Not everyone gets that! And more importantly, there are a lot of people making those computers do things who didn’t get super interested in computers when they were twelve, because the thing they got super interested in doesn’t pay for a mortgage.

Look, if you need a good job, and maybe aren’t really interested in anything specific, or at least in anything that people will pay for, “computers”—or computer-adjacent—is a pretty sweet direction for your parents to point you. I’ve worked with more of these than I can count—developers, designers, architects, product people, project managers, middle managers—and most of them are perfectly fine people, doing a job they’re a little bored by, and then they go home and do something that they can actually self-actualize about. And I suspect this is true for a lot of “sit down inside email jobs,” that there’s a large mass of people who, in a just universe, their job would be “beach” or “guitar” or “games”, but instead they gotta help knock out front-end web code for a mid-list insurance company. Probably, most careers are like that, there’s the one accountant that loves it, and then a couple other guys counting down the hours until their band’s next unpaid gig.

But one of the things that makes computers stand out is that those accountants all had to get certified. The computer guys just needed a bootcamp and a couple weekends worth of video tutorials, and suddenly they get to put “Engineer” on their resume.

And let’s be honest: software should be creative, usually is marketed as such, but frequently isn’t. We like to talk about software development as if it’s nothing but innovation and “putting a dent in the universe”, but the real day-to-day is pulling another underwritten story off the backlog that claims to be easy but is going to take a whole week to write one more DTO, or web UI widget, or RESTful API that’s almost, but not quite, entirely unlike the last dozen of those. Another user-submitted bug caused by someone doing something stupid that the code that got written badly and shipped early couldn’t handle. Another change to government regulations that’s going to cause a remodel of the guts of this thing, which somehow manages to be a surprise despite the fact the law was passed before anyone in this meeting even started working here.

They don’t have time to learn how that regulation works, or why it changed, or how the data objects were supposed to happen, or what the right way to do that UI widget is—the story is only three points, get it out the door or our velocity will slip!—so they find something they can copy, slap something together, write a test that passes, ship it. Move on to the next. Peel another one off the backlog. Keep that going. Forever.

And that also leads to this weird thing software has where everyone is just kind of bluffing everyone all the time, or at least until they can go look something up on stack overflow. No one really understands anything, just gotta keep the feature factory humming.

The people who actually like this stuff, who got into it because they liked making computers do things for their own sake, keep finding ways to make it fun, or at least different. “Continuous Improvement,” we call it. Or, you know, they move on, leaving behind all those people whose twelve-year old selves would be horrified.

But then there’s the group that’s in the center of the Venn Diagram of everything above. All this mixes together, and in a certain kind of reduced-empathy individual, manifests as a fundamental disbelief in craft as a concept. Deep down, they really don’t believe expertise exists. That “expertise” and “bias” are synonyms. They look at people who are “good” at their jobs, who seem “satisfied” and are jealous of how well that person is executing the con.

Whatever they were into at twelve didn’t turn into a career, and they learned the wrong lesson from that. The kind of people who were in a band as a teenager and then spent the years since as a management consultant, and think the only problem with that is that they ever wanted to be in a band, instead of being mad that society has more open positions for management consultants than bass players.

They know which is the stuff that sucks: everything. None of this is the fun part; the fun part doesn’t even exist; that was a lie they believed as a kid. So they keep trying to build things where they don’t have to do their jobs anymore but still get paid gobs of money.

They dislike their jobs so much, they can’t believe anyone else likes theirs. They don’t believe expertise or skill is real, because they have none. They think everything is a con because that’s what they do. Anything you can’t just buy must be a trick of some kind.

(Yeah, the trick is called “practice”.)

These aren’t people who think very critically about their own field, which is another thing that happens when you value STEM over everything else and forget to teach people ethics and critical thinking.

Really, all they want to be is “Idea Guys”, tossing off half-baked concepts, surrounded by people they don’t have to respect and who won’t talk back, who will figure out how to make a functional version of their ill-formed ramblings. That they can take credit for.

And this gets to the heart of what’s so evil about the current crop of AI.

These aren’t tools built by the people who do the work to automate the boring parts of their own work; these are built by people who don’t value creative work at all and want to be rid of it.

As a point of comparison, the iPod was clearly made by people who listened to a lot of music and wanted a better way to do so. Apple has always been unique in the tech space in that it works more like a consumer electronics company; the vast majority of its products are clearly made by people who would themselves be enthusiastic customers. In this field we talk about “eating your own dog food” a lot, but if you’re writing a claims processing system for an insurance company, there’s only so far you can go. Making a better digital music player? That lets you think different.

But no: AI is all being built by people who don’t create, who resent having to create, who resent having to hire people who can create. Beyond even “I should be able to buy expertise” and into “I value this so little that I don’t even recognize this as a real skill”.

One of the first things these people tried to automate away was writing code—their own jobs. These people respect skill, expertise, craft so little that they don’t even respect their own. They dislike their jobs so much, and respect their own skills so little, that they can’t imagine that someone might not feel that way about their own.

A common pattern has been how surprised the techbros have been at the pushback. One of the funnier (in a laugh so you don’t cry way) sideshows is the way the techbros keep going “look, you don’t have to write anymore!” and every writer everywhere is all “ummmmm, I write because I like it, why would I want to stop” and then it just cuts back and forth between the two groups saying “what?” louder and angrier.

We’re really starting to pay for the fact that our civilization spent 20-plus years shoving kids that didn’t like programming into the career because it paid well and you could do it sitting down inside and didn’t have to be that great at it.

What future are they building for themselves? What future do they expect to live in, with this bold AI-powered utopia? Some vague middle-management “Idea Guy” economy, with the worst people in the world summoning books and art and movies out of thin air for no one to read or look at or watch, because everyone else is doing the same thing? A web full of AI slop made by and for robots trying to trick each other? Meanwhile the dishes are piling up? That’s the utopia?

I’m not sure they even know what they want, they just want to stop doing the stuff that sucks.

And I think that’s our way out of this.

What do we do?

For starters, AI Companies need to be regulated, preferably out of existence. There’s a flavor of libertarian-leaning engineer that likes to say things like “code is law,” but actually, turns out “law” is law. There’s whole swathes of this that we as a civilization should have no tolerance for; maybe not to a full Butlerian Jihad, but at least enough to send deepfakes back to the Abyss. We dealt with CFCs and asbestos, we can deal with this.

Education needs to be less STEM-focused. We need to carve out more career paths (not “jobs”, not “gigs”, “careers”) that have the benefits of tech but aren’t tech. And we need to furiously defend and expand spaces for creative work to flourish. And for that work to get paid.

But those are broad, society-wide changes. What can those of us in the tech world actually do? How can we help solve these problems in our own little corners? What can we go into work tomorrow and actually do?

It’s on all of us in the tech world to make sure there’s less of the stuff that sucks.

We can’t do much about the lack of jobs for dance majors, but we can help make sure those people don’t stop believing in skill as a concept. Instead of assuming what we think sucks is what everyone thinks sucks, is there a way to make it not suck? Is there a way to find a person who doesn’t think it sucks? (And no, I don’t mean “Uber for writing my emails”) We gotta invite people in and make sure they see the fun part.

The actual practice of software has become deeply dehumanizing. None of what I just spent a week describing is the result of healthy people working in a field they enjoy, doing work they value. This is the challenge we have before us: how do we change course so that the tech industry doesn’t breed this? Those of us that got lucky at twelve need to find new ways to bring along the people who didn’t.

With that in mind, next Friday on Icecano we start a new series on growing better software.


Several people provided invaluable feedback on earlier iterations of this material; you all know who you are and thank you.

And as a final note, I’d like to personally apologize to the one person who I know for sure clicked Open in New Tab on every single link. Sorry man, they’re good tabs!

Gabriel L. Helman

Why is this Happening, Part II: Letting Computers Do The Fun Part

Previously: Part I

Let’s leave the Stuff that Sucks aside for the moment, and ask a different question. Which Part is the Fun Part? What are we going to do with this time the robots have freed up for us?

It’s easy to get wrapped up in pointing at the parts of living that suck; especially when fantasizing about assigning work to C-3PO’s cousin. And it’s easy to spiral to a place where you just start waving your hands around at everything.

But even Bertie Wooster had things he enjoyed, that he occasionally got paid for, rather than let Jeeves work his jaw for him.

So it’s worth recalibrating for a moment: which are the fun parts?

As aggravating as it can be at times, I do actually like making computers do things. I like programming, I like designing software, I like building systems. I like finding clever solutions to problems. I got into this career on purpose. If it was fun all the time they wouldn’t have to call it “work”, but it’s fun a whole lot of the time.

I like writing (obviously.) For me, that dovetails pretty nicely with liking to design software; I’m generally the guy who ends up writing specs or design docs. It’s fun! I owned the customer-facing documentation several jobs back. It was fun!

I like to draw! I’m not great at it, but I’m also not trying to make a living out of it. I think having hobbies you enjoy but aren’t great at is a good thing. Not every skill needs to have a direct line to a career or a side hustle. Draw goofy robots to make your kids laugh! You don’t need to figure out the monetization strategy.

In my “outside of work” life I think I know more writers and artists than programmers. For all of them, the work itself—the writing, the drawing, the music, making the movie—is the fun part. The parts they don’t like so well are the “figuring out how to get paid” part, or the dealing with printers part, or the weird contracts part. The hustle. Or, you know, the doing dishes, laundry, and vacuuming part. The “chores” part.

So every time I see a new “AI tool” release that writes text or generates images or makes video, I always ask the same question:

Why would I let the computer do the fun part?

The writing is the fun part! Drawing the pictures is the fun part! Writing the computer programs is the fun part! Why, why, are they trying to tell us that those are the parts that suck?

Why are the techbros trying to automate away the work people want to do?

It’s fun, and I worked hard to get good at it! Now they want me to let a robot do it?

Generative AI only seems impressive if you’ve never successfully created anything. Part of what makes “AI art” so enragingly radicalizing is the sight of someone who’s never tried to create something before, never studied, never practiced, never put the time in, never really even thought about it, joylessly showing off the terrible AI slop they made and demanding to be treated as if they made it themselves, rather than used a tool built on the fruits of a million million stolen works.

Inspiration and plagiarism are not the same thing, the same way that “building a statistical model of word order probability from stuff we downloaded from the web” is not the same as “learning”. A plagiarism machine is not an artist.

But no, the really enraging part is watching these people show off this garbage and realizing that they can’t tell the difference. And AI art seems to be getting worse; AI pictures are getting easier to spot, not harder, because of course they are: the people making the systems don’t know what good is. And the culture is following: “it looks like AI made it” has become the exact opposite of a compliment. AI-generated glop is seen as tacky, low quality. And more importantly, seen as cheap, made by someone who wasn’t willing to spend any money on the real thing. Trying to pass off Krusty Burgers as their own cooking.

These are people with absolutely no taste, and I don’t mean people who don’t have a favorite Kurosawa film, I mean people who order a $50 steak well done and then drown it in A1 sauce. The kind of people who, deep down, don’t believe “good” is real. That it’s all just “marketing.”

The act of creation is inherently valuable; creation is an act that changes the creator as much as anyone. Writing things down isn’t just documentation, it’s a process that allows and enables the writer to discover what they think, explore how they actually feel.

“Having AI write that for you is like having a robot lift weights for you.”

AI writing is deeply dehumanizing, to both the person who prompted it and to the reader. There is so much weird stuff to unpack from someone saying, in what appears to be total sincerity, that they used AI to write a book. That the part they thought sucked was the fun part, the writing, and left their time free for… what? Marketing? Uploading metadata to Amazon? If you don’t want to write, why do you want people to call you a writer?

Why on earth would I want to read something the author couldn’t be bothered to write? Do these ghouls really just want the social credit for being “an artist”? Who are they trying to impress, what new parties do they think they’re going to get into because they have a self-published AI-written book with their name on it? Talk about participation trophies.

All the people I know in real life or follow on the feeds who use computers to do their thing but don’t consider themselves “computer people” have reacted with a strong and consistent full-body disgust. Personally, compared to all those past bubbles, this is the first tech I’ve ever encountered where my reaction was complete revulsion.

Meanwhile, many (not all) of the “computer people” in my orbit tend to be at-least AI curious, lots of hedging like “it’s useful in some cases” or “it’s inevitable” or full-blown enthusiasm.

One side, “absolutely not”, the other side, “well, mayyybe?” As a point of reference, this was the exact breakdown of how these same people reacted to blockchain and bitcoin.

One group looks at the other and sees people musing about if the face-eating leopard has some good points. The other group looks at the first and sees a bunch of neo-luddites. Of course, the correct reaction to that is “you’re absolutely correct, but not for the reasons you think.”

There’s a Douglas Adams bit that gets quoted a lot lately, which was printed in Salmon of Doubt but I think was around before that:

I’ve come up with a set of rules that describe our reactions to technologies:

  1. Anything that is in the world when you’re born is normal and ordinary and is just a natural part of the way the world works.

  2. Anything that’s invented between when you’re fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it.

  3. Anything invented after you’re thirty-five is against the natural order of things.

The better-read AI-grifters keep pointing at rule 3. But I keep thinking of the bit from Dirk Gently’s Detective Agency about the Electric Monk:

The Electric Monk was a labour-saving device, like a dishwasher or a video recorder. Dishwashers washed tedious dishes for you, thus saving you the bother of washing them yourself, video recorders watched tedious television for you, thus saving you the bother of looking at it yourself; Electric Monks believed things for you, thus saving you what was becoming an increasingly onerous task, that of believing all the things the world expected you to believe.

So, what are the people who own the Monks doing, then?

Let’s speak plainly for a moment—the tech industry has always had a certain… ethical flexibility. The “things” in “move fast and break things” wasn’t talking about furniture or fancy vases; this isn’t just playing baseball inside the house. And this has been true for a long time: the Open Letter to Hobbyists was basically Gates complaining that other people’s theft was undermining the con he was running.

We all liked to pretend “disruption” was about finding “market inefficiencies” or whatever, but mostly what that meant was moving into a market where the incumbents were regulated and labor had legal protection, and finding a way to do business there while ignoring the rules. Only a psychopath thinks “having to pay employees” is an “inefficiency.”

Vast chunks of what it takes to make generative AI possible are already illegal or at least highly unethical. The Internet has always been governed by a sort of combination of gentleman’s agreements and pirate codes, and in the hunger for new training data, the AI companies have sucked up everything, copyright, licensing, and good neighborship be damned.

There are some half-hearted attempts to combat AI via arguments that it violates copyright or open source licensing or some other legal approach. And more power to them! Personally, I’m not really interested in the argument that AI training data violates contract law, because I care more about the fact that it’s deeply immoral. See that Vonnegut line about “those who devised means of getting paid enormously for committing crimes against which no laws had been passed.” Much like how I think people who drive too fast in front of schools should get a ticket, sure, but I’m not opposed to the speeding because it’s illegal; I’m opposed because it’s dangerous to the kids.

It’s been pointed out more than once that AI breaks the deal behind webcrawlers and search—search engines are allowed to suck up everyone’s content in exchange for sending traffic their way. But AI just takes and regurgitates, without sharing the traffic, or even the credit. It’s the AI Search Doomsday Cult. Even Uber didn’t try to put car manufacturers out of business.

But beyond all that, making things is fun! Making things for other people is fun! It’s about making a connection between people, not about formal correctness or commercial viability. And then you see those terrible Google fan letter ads at the Olympics, or see people crowing that they used AI to generate a kids’ book for their children, and you wonder: how can these people have so little regard for their audience that they don’t want to make the connection themselves? That they’d rather give their kids something barfed out by a jumped-up spreadsheet full of stolen words instead of something they made themselves? Why pass on the fun part, just so you can take credit for something thoughtless and tacky? The AI ads want you to believe that you need their help to find “the right word”; what they don’t tell you is that no, you don’t, what you need to do is have fun finding your word.

Robots turned out to be hard. Actually, properly hard. You can read these papers by computer researchers in the fifties where they’re pretty sure Threepio-style robot butlers are only 20 years away, which seems laughable now. Robots are the kind of hard where the more we learn the harder they seem.

As an example: Doctor Who in the early 80s added a robot character who was played by the prototype of an actual robot. This went about as poorly as you might imagine. That’s unimaginable now; no producer would risk their production on a homemade robot today, no matter how impressive the demo was. You want a thing that looks like Threepio walking around and talking with a voice like a Transformer? Put a guy in a suit. Actors are much easier to work with. Even though they have a union.

Similarly, “General AI” in the HAL/KITT/Threepio sense has been permanently 20 years in the future for at least 70 years now. The AI class I took in the 90s was essentially a survey of things that hadn’t worked, and ended with a kind of shrug and “maybe another 20?”

Humans are really, really good at seeing faces in things, and finding patterns that aren’t there. Any halfway decent professional programmer can whip up an ELIZA clone in an afternoon, and even knowing how the trick works it “feels” smarter than it is. A lot of AI research projects are like that, a sleight-of-hand trick that depends on doing a lot of math quickly and on the human capacity to anthropomorphize. And then the self-described brightest minds of our generation fail the mirror test over and over.
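(To give a sense of how small that trick is, here is a minimal sketch of the ELIZA move in Python; the handful of rules are made up for illustration, not Weizenbaum's actual script, but the mechanism, match a pattern, flip the pronouns, echo it back as a question, is the whole show.)

```python
import re

# A few illustrative ELIZA-style rules: a regex to match, and a template that
# echoes the captured text back as a question. The real ELIZA script was
# longer, but the mechanism was exactly this.
RULES = [
    (re.compile(r"\bi feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bi am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bi want (.*)", re.I), "What would it mean to you to get {0}?"),
]

# Swap first and second person so the echo reads like a reply.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "your": "my"}

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def respond(line: str) -> str:
    line = line.rstrip(".!?")
    for pattern, template in RULES:
        match = pattern.search(line)
        if match:
            return template.format(reflect(match.group(1)))
    return "Tell me more."

# "I feel like my job is pointless" -> "Why do you feel like your job is pointless?"
while True:
    line = input("> ")
    if line.strip().lower() == "bye":
        break
    print(respond(line))
```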

Actually building a thing that can “think”? Increasingly seems impossible.

You know what’s easy, though, comparatively speaking? Building a statistical model of all the text you can pull off the web.
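Genuinely easy, comparatively speaking. Here's a toy sketch of that kind of statistical model, a bigram table in Python; it's to a modern LLM roughly what a paper airplane is to a 747, but the underlying bet is the same one: predict the next word from the words you've already seen. (The one-line corpus is made up for illustration; the real systems just use a few trillion words of other people's writing.)

```python
import random
from collections import defaultdict

def train(text: str) -> dict:
    """Build a table of which words were seen following which. That's the whole 'model'."""
    words = text.split()
    following = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        following[current].append(nxt)
    return following

def generate(model: dict, start: str, length: int = 20) -> str:
    """Walk the table, picking each next word at random from what followed the last one."""
    word, output = start, [start]
    for _ in range(length):
        choices = model.get(word)
        if not choices:
            break
        word = random.choice(choices)
        output.append(word)
    return " ".join(output)

# Feed it text, get plausible-looking word salad back out.
corpus = "the monk believed the things so you were saved the bother of believing the things yourself"
model = train(corpus)
print(generate(model, "the"))
```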

On Friday: conclusions, such as they are.

Gabriel L. Helman

This Adam Savage Video

The YouTube algorithm has decided that what I really want to watch are Adam Savage videos, and it turns out the robots are occasionally right? So, I’d like to draw your attention to this vid where Adam answers some user questions: Were Any Myths Deemed Too Simple to Test on MythBusters?

It quickly veers moderately off-topic, and gets into the weeds on what kinds of topics MythBusters tackled and why. You should go watch it, but the upshot is that MythBusters never wanted to invite someone on just to make them look bad or do a gotcha, so there was a whole class of “debunking” topics they didn’t have a way in on; the example Adam cites is dowsing, because there’s no way to do an episode busting dowsing without having a dowser on to debunk.

And this instantly made clear to me why I loved MythBusters but couldn’t stand Penn & Teller’s Bullshit!. The P&T Show was pretty much an extended exercise in “Look at this Asshole”, and was usually happy to stop there. MythBusters was never interested in looking at assholes.


And, speaking of Adam Savage, did I ever link to the new Bobby Fingers?

Fabio and the Goose

This is relevant because it’s a collaboration with Adam Savage, and the Slow Mo Guys, who also posted their own videos on the topic:

Shooting Ballistic Gel Birds at Silicone Fabio with @bobbyfingers and @theslowmoguys!

75mph Bird to the Face with Adam Savage (@tested) and @bobbyfingers - The Slow Mo Guys

It’s like a youtube channel Rashomon, it’s great.

Gabriel L. Helman

Meet the Veep

And there we go, it’s Walz. Personally, I was hoping for Mayor Pete, but as Elizabeth Sandifer says: “…when you create the campaign's new messaging strategy you get to be the VP nom.” I love that he’s a regular guy in the way that’s the exact opposite of what we use the word “weird” as a shorthand for.

The Dems claiming the title of the party of regular, normal, non-crazy people is long overdue; this is a note they should have been playing since at least the Tea Party, and probably since Gingrich. But, like planting a tree, the second best time is now, and Walz’s midwestern cool dad energy is the perfect counterpoint to the Couch Experience.

“Both sides are the same” is right-wing propaganda designed to reduce voter turnout, but the Dems don’t always run a ticket that makes it easy to dispute. What I like about the Harris/Walz vs Trump/Vance race is that the differences are clear, even at a distance. What future do you want, America?


As I keep saying, this is a “turn out the base” election; everyone already knows which side they’d vote for, and the trick is to get them to think it’s worth it to bother to vote. Each candidate is running against apathy, not each other. Fairly or not, over the summer the Democrats found themselves with a substantial enthusiasm gap. The Repubs didn’t have a huge amount of enthusiasm either, but the reality is that members of the Republican coalition are more likely to show up and vote for someone they don’t like than the Dems are, so structurally that’s the sort of thing that hurts Team Blue more.

Literally no one wanted to do the 2020 election over again, and in one of those bizarre unfair moments America decided to blame Biden for it, instead of blaming the guy who lost for not staying down. But more than that, the complaints about how “old” everyone was were also a shorthand for something else—all the actors here are people who’ve been around since the 80s. We just keep re-litigating the ghosts of the 20th century. Obama felt like the moment we were finally done having elections rooted in how the Boomers felt about Nixon, but then, no, another three cycles made up entirely of people who’ve been household names since Cheers was on the air.

And then Harris crashes into the race at the last second with an “oh yeaaahhhh!” Suddenly, we’ve got something new. This finally feels like not just a properly post-Obama ticket, but actually post “The Third Way”; both in terms of policy and attitude this is the campaign the Dems should have been running every election in the 21st century. And for once, the Dems aren’t just trying to score points with some hypothetical ref and win on technicals, they’re here to actually win. Finally.

I’m as surprised as anyone at the amount of excitement that’s built up over the last two weeks; I was sure swapping candidates was an own-goal for the ages, but now I’m sure I was wrong.

Rooting for the winning team is fun, and the team with the initiative and hustle is usually the one that wins. It’s self-perpetuating, in both directions. (This is a big part of how Trump managed to stumble into a win in ’16, it was a weird feedback loop of him doing something insane and then everyone else going “hahaha what” and all that kept building on itself until he was suddenly the President.)

Accurately or not, the Dems had talked themselves into believing they were going to lose, and were acting like it. Now, not so much! The feedback loops are building the other way, and as Harris keeps picking up more support, you can see the air bleeding out of Trump’s tires as his support drifts away because he’s only fun when he’s winning.


I have a conceptual model that I use for US Presidential elections that has very rarely let me down. It goes like this: every cycle the Republicans run someone who reads as a Boss, and the Democrats run someone who reads as a college Professor. And so most elections turn into a contest between the worst boss you’ve ever had against your least favorite teacher; with the final decision boiling down to, basically, “would you rather work for this guy or take a class from that guy”. (Often leading to a frustrated “bleah, neither!”)

And elections pretty consistently go to the team that wins that comparison. As a historical example, I liked Gore a lot, but he really had the quality that he’d grade you down on a paper because he thought you used an em dash wrong when you didn’t, whereas W (war crimes notwithstanding) seemed like the kind of boss that wouldn’t hassle you too bad and would throw a great summer BBQ. And occasionally one side or the other pops a good one—Obama seemed like he’d be your favorite law professor of all time.

Viewing this ticket via that lens? This one I like. We have the worst boss you can imagine running with the worst coworker you’ve ever had, against literally the cool geography teacher/football coach and the lady that seems like she’d be your new favorite professor? Hell yeah. I’m sold. Let’s do this.

Gabriel L. Helman

Retweeting Feeds

There’s a feature I miss from the old twitter, or rather, there’s a use case that twitter filled better than anything else.

The use case is this: RSS feeds are a great way to publish content, but that’s where it stops—there’s no intrinsic way to (re)share an item from a feed you’re subscribed to with anyone else, to retweet it, if you will. I’d love to have an easy way to reshare content from feeds I’m subscribed to.

I think one of the reasons that twitter sucked the air out of the RSS ecosystem was that not only was it trivial to set up a twitter account that worked just like an RSS feed, with links to your blog or podcast or whatever, but everyone who followed your feed could re-share it with their followers with a single action, optionally adding a comment. I cannot tell you how much great stuff I found out about because someone I followed on twitter quote-tweeted it.

Here in the post-twitter afterscape, I keep wishing NetNewsWire had a retweet button. I’ve spent enough time in product design and development to know why that doesn’t exist: something that could re-share an item from an RSS feed out into a new feed, with a connection back to the original feed, requires about 80% of twitter. And if you’re going to build all that, you might as well add the ability to post your own content and go all the way to a social network/microblogging system/twitter clone. And I guess the solution is either bluesky or mastodon.
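For what it's worth, the part that doesn't need 80% of twitter is the data formatting: RSS already has a <source> element, so a reshare is basically copying an entry into a feed of your own with a pointer back at where it came from. A minimal sketch of just that piece, assuming the feedparser library and a placeholder feed URL; the 80% is everything around it (identity, subscriptions, discovery, the quote-comments):

```python
import feedparser  # assumption: third-party library, pip install feedparser

def reshare_item(feed_url: str, index: int = 0, comment: str = "") -> str:
    """Copy one entry from a subscribed feed into an RSS <item> for my own
    'reshares' feed, with a <source> element pointing back at the original.
    (XML escaping omitted for brevity.)"""
    parsed = feedparser.parse(feed_url)
    entry = parsed.entries[index]
    return (
        "<item>\n"
        f"  <title>{entry.title}</title>\n"
        f"  <link>{entry.link}</link>\n"
        f"  <description>{comment}</description>\n"
        f'  <source url="{feed_url}">{parsed.feed.title}</source>\n'
        "</item>\n"
    )

# Placeholder URL; append the result to whatever generates your reshares feed.
print(reshare_item("https://example.com/feed.xml", comment="This is great, go read it."))
```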

But I keep thinking there has to be something between turn of the century–style RSS feeds and a full-blown social network. And I am putting this out in the universe because I absolutely do not want to build or work on this myself, I want to use it. (All my actual startup ideas are going to be buried with me.)

Somebody go figure this out!


Edited to add: I am reminded by an Alert Reader that Google Reader (RIP) had a similar feature: you could “share” things from your feed. But google’s gonna google, and the share list was basically your gmail contacts list? Which would lead to some really bizarre results, like suddenly seeing a thing in your feed that got there because a friend of a friend shared it, because you were both on the same party invite a couple of years ago. Cool idea, but again, something twitter improved on by making those connections obvious.
