Gabriel L. Helman

Why is this Happening, Part III: Investing in Shares of a Stairway to Heaven

Previously: Part I, Part II.

We’ve talked a lot about “The AI” here at Icecano, mostly in terms ranging from “unflattering” to “extremely unflattering.” Which is why I’ve found myself stewing on this question the last few months: Why is this happening?

The easy answer is that, for starters, it’s a scam, a con. That goes hand-in-hand with it also being a hype-fueled bubble, which is finally starting to show signs of deflating. We’re not quite at the “Matt Damon in Super Bowl ads” phase yet, but I think we’re closer than not to the bubble popping.

Fad-tech bubbles are nothing new in the tech world; in recent memory we had similar grifts around the metaverse, blockchain & “web3”, “quantum”, and self-driving cars. (And a whole lot of those bubbles had the same people behind them as the current one around AI. Lots of the same datacenters full of GPUs, too!) I’m also old enough to remember similar bubbles around things like bittorrent, “4GL languages”, two or three cycles on VR, and 3D TV.

This one has been different, though. There’s a viciousness to the boosters, a barely contained glee at the idea that this will put people out of work, which has been matched in intensity by the pushback. To put all that another way, when ELIZA came out, no one from MIT openly delighted at the idea that they were about to put all the therapists out of work.

But what is it about this one, though? Why did this ignite in a way that those others didn’t?

A sentiment I see a lot, as a response to AI skepticism, is to say something like “no no, this is real, it’s happening.” And the correct response to that is to say that, well, asbestos pajamas really didn’t catch fire, either. Then what happened? Just because AI is “real” doesn’t mean it’s “good”. Those mesothelioma ads aren’t because asbestos wasn’t real.

(Again, these tend to be the same people who a few years back had a straight face when they said they were “bullish on bitcoin.”)

But there’s another sentiment I see a lot that I think is standing behind that one: that this is the “last new tech we’ll see in our careers”. This tends to come from younger Xers & elder Millennials, folks who were just slightly too young to make it rich in the dot com boom, but old enough that they thought they were going to.

I think this one is interesting, because it illuminates part of how things have changed. From the late 70s through sometime in the 00s, new stuff showed up constantly, and more importantly, the new stuff was always better. There’s a joke from the 90s that goes like this: Two teams each developed a piece of software that didn’t run well enough on home computers. The first team spent months sweating blood, working around the clock to improve performance. The second team went and sat on a beach. Then, six months later, both teams bought new computers. And on those new machines, both systems ran great. So who did a better job? Who did a smarter job?

We all got absolutely hooked on the dopamine rush of new stuff, and it’s easy to see why; I mean, there were three extra verses of “We Didn’t Start the Fire” just in the 90s alone.

But a weird side effect is that as a culture of practitioners, we never really learned how to tell if the new thing was better than the old thing. This isn’t a new observation; Microsoft figured out how to weaponize this early on as Fire And Motion. And I think this has really driven the software industry’s tendency towards “fad-oriented development”; we never built up a herd immunity to shiny new things.

A big part of this, of course, is that the tech press profoundly failed. A completely un-skeptical, overly gullible press that was infatuated with shiny gadgets foisted a whole parade of con artists and scamtech on all of us, abdicating any duty it had to investigate accurately instead of just laundering press releases. The Professionally Surprised.

And for a long while, that was all okay, the occasional CueCat notwithstanding, because new stuff generally was better, and even if it was only marginally better, there was often a lot of money to be made by jumping in early. Maybe not “private island” money, but at least “retire early to the foothills” money.

But then somewhere between the Dot Com Crash and the Great Recession, things slowed down. Those two events didn’t help much, but also somewhere in there “computers” plateaued at “pretty good”. Mobile kept the party going for a while, but then that slowed down too.

My Mom tells a story about being a teenager while the Beatles were around, and how she grew up in a world where every nine months pop music was reinvented, like clockwork. Then the Beatles broke up, the 70s hit, and that all stopped. And she’s pretty open about how much she misses that whole era; the heady “anything can happen” rush. I know the feeling.

If your whole identity and worldview about computers as a profession is wrapped up in diving into a Big New Thing every couple of years, it’s strange to have it settle down a little. To maintain. To have to assess. And so it’s easy to find yourself grasping for what the Next Thing is, to try and get back that feeling of the whole world constantly reinventing itself.

Missing the heyday of the PC boom isn’t the reason that AI took off, though. But it provides a pretty good set of excuses to cover the real reasons.

Is there a difference between “The AI” and “Robots?” I think, broadly, the answer is “no;” but they’re different lenses on the same idea. There is an interesting difference between “robot” (we built it to sit outside in the back seat of the spaceship and fix engines while getting shot at) and “the AI” (write my email for me), but that’s more about evolving stories about which is the stuff that sucks than a deep philosophical difference.

There’s a “creative” vs “mechanical” difference too. If we could build an artificial person like C-3PO I’m not sure that having it wash dishes would be the best or most appropriate possible use, but I like that as an example because, rounding to the nearest significant digit, that’s an activity no one enjoys, and as an activity it’s not exactly a hotbed of innovative new techniques. It’s the sort of chore it would be great if you could just hand off to someone. I joke this is one of the main reasons to have kids, so you can trick them into doing chores for you.

However, once “robots” went all-digital and became “the AI”, they started having access to this creative space instead of the physical-mechanical one, and the whole field backed into a moral hazard I’m not sure they noticed ahead of time.

There’s a world of difference between “better clone stamp in photoshop” and “look, we automatically made an entire website full of fake recipes to farm ad clicks”; and it turns out there’s this weird grifter class that can’t tell the difference.

Gesturing back at a century of science fiction thought experiments about robots, being able to make creative art of any kind was nearly always treated as an indicator that the robot wasn’t just “a robot.” I’ll single out Asimov’s Bicentennial Man as an early representative example—the titular robot learns how to make art, and this both causes the manufacturer to redesign future robots to prevent this happening again, and sets him on a path towards trying to be a “real person.”

We make fun of the Torment Nexus a lot, but it keeps happening—techbros keep misunderstanding the point behind the fiction they grew up on.

Unless I’m hugely misinformed, there isn’t a mass of people clamoring to wash dishes, kids don’t grow up fantasizing about a future in vacuuming. Conversely, it’s not like there’s a shortage of people who want to make a living writing, making art, doing journalism, being creative. The market is flooded with people desperate to make a living doing the fun part. So why did people who would never do that work decide that was the stuff that sucked and needed to be automated away?

So, finally: why?

I think there are several causes, all tangled.

These causes are adjacent to but not the same as the root causes of the greater enshittification—excuse me, “Platform Decay”—of the web. Nor are we talking about the largely orthogonal reasons why Facebook is full of old people being fooled by obvious AI glop. We’re interested in why the people making these AI tools are making them. Why they decided that this was the stuff that sucked.

First, we have this weird cultural stew where creative jobs are “desired” but not “desirable”. There’s a lot of cultural cachet around being a “creator” or having a “creative” job, but not a lot of respect for the people actually doing the work. So you get the thing where people oppose the writers’ strike because they “need” a steady supply of TV, but the people who make it don’t deserve a living wage.

Graeber has a whole bit adjacent to this in Bullshit Jobs. Quoting the originating essay:

It's even clearer in the US, where Republicans have had remarkable success mobilizing resentment against school teachers, or auto workers (and not, significantly, against the school administrators or auto industry managers who actually cause the problems) for their supposedly bloated wages and benefits. It's as if they are being told ‘but you get to teach children! Or make cars! You get to have real jobs! And on top of that you have the nerve to also expect middle-class pensions and health care?’

“I made this” has cultural power. “I wrote a book,” “I made a movie,” are the sort of things you can say at a party that get people to perk up; “oh really? Tell me more!”

Add to this thirty-plus years of pressure to restructure public education around “STEM”, because those are the “real” and “valuable” skills that lead to “good jobs”, as if the only point of education was as a job training program. A very narrow job training program, because again, we need those TV shows but don’t care to support new people learning how to make them.

There’s always a class of people who think they should be able to buy anything; any skill someone else has acquired is something they should be able to purchase. This feels like a place I could put several paragraphs that use the word “neoliberalism” and then quote from Ayn Rand, The Incredibles, or Led Zeppelin lyrics depending on the vibe I was going for, but instead I’m just going to say “you know, the kind of people who only bought the Cliffs Notes, never the real book,” and trust you know what I mean. The kind of people who never learned the difference between “productivity hacks” and “cheating”.

The sort of people who only interact with books as a source of isolated nuggets of information, the kind of people who look at a pile of books and say something like “I wish I had access to all that information,” instead of “I want to read those.”

People who think money should count for at least as much as, if not more than, social skills or talent.

On top of all that, we have the financialization of everything. Hobbies for their own sake are not acceptable; everything has to be a side hustle. How can I use this to make money? Why is this worth doing if I can’t do it well enough to sell it? Is there a bootcamp? A video tutorial? How fast can I start making money at this?

Finally, and critically, I think there’s a large mass of people working in software that don’t like their jobs and aren’t that great at them. I can’t speak for other industries first hand, but the tech world is full of folks who really don’t like their jobs, but they really like the money and being able to pretend they’re the masters of the universe.

All things considered, “making computers do things” is a pretty great gig. In the world of Professional Careers, software sits at the sweet spot of “amount you actually have to know & how much school you really need” vs “how much you get paid”.

I’ve said many times that I feel very fortunate that the thing I got super interested in when I was twelve happened to turn into a fully functional career when I hit my twenties. Not everyone gets that! And more importantly, there are a lot of people making those computers do things who didn’t get super interested in computers when they were twelve, because the thing they got super interested in doesn’t pay for a mortgage.

Look, if you need a good job, and maybe aren’t really interested in anything specific, or at least in anything that people will pay for, “computers”—or computer-adjacent—is a pretty sweet direction for your parents to point you. I’ve worked with more of these than I can count—developers, designers, architects, product people, project managers, middle managers—and most of them are perfectly fine people, doing a job they’re a little bored by, and then they go home and do something that they can actually self-actualize about. And I suspect this is true for a lot of “sit down inside email jobs”: there’s a large mass of people whose job, in a just universe, would be “beach” or “guitar” or “games”, but instead they gotta help knock out front-end web code for a mid-list insurance company. Probably, most careers are like that; there’s the one accountant that loves it, and then a couple other guys counting down the hours until their band’s next unpaid gig.

But one of the things that makes computers stand out is that those accountants all had to get certified. The computer guys just needed a bootcamp and a couple weekends worth of video tutorials, and suddenly they get to put “Engineer” on their resume.

And let’s be honest: software should be creative, usually is marketed as such, but frequently isn’t. We like to talk about software development as if it’s nothing but innovation and “putting a dent in the universe”, but the real day-to-day is pulling another underwritten story off the backlog that claims to be easy but is going to take a whole week to write one more DTO, or web UI widget, or RESTful API that’s almost, but not quite, entirely unlike the last dozen of those. Another user-submitted bug caused by someone doing something stupid that the code that got written badly and shipped early couldn’t handle. Another change to government regulations that’s going to cause a remodel of the guts of this thing, which somehow manages to be a surprise despite the fact the law was passed before anyone in this meeting even started working here.

They don’t have time to learn how that regulation works, or why it changed, or how the data objects were supposed to happen, or what the right way to do that UI widget is—the story is only three points, get it out the door or our velocity will slip!—so they find something they can copy, slap something together, write a test that passes, ship it. Move on to the next. Peel another one off the backlog. Keep that going. Forever.

And that also leads to this weird thing software has where everyone is just kind of bluffing everyone all the time, or at least until they can go look something up on stack overflow. No one really understands anything, just gotta keep the feature factory humming.

The people who actually like this stuff, who got into it because they liked making computers do things for their own sake, keep finding ways to make it fun, or at least different. “Continuous Improvement,” we call it. Or, you know, they move on, leaving behind all those people whose twelve-year-old selves would be horrified.

But then there’s the group that’s in the center of the Venn Diagram of everything above. All this mixes together, and in a certain kind of reduced-empathy individual, manifests as a fundamental disbelief in craft as a concept. Deep down, they really don’t believe expertise exists. That “expertise” and “bias” are synonyms. They look at people who are “good” at their jobs, who seem “satisfied,” and are jealous of how well that person is executing the con.

Whatever they were into at twelve didn’t turn into a career, and they learned the wrong lesson from that. The kind of people who were in a band as a teenager and then spent the years since as a management consultant, and think the only problem with that is that they ever wanted to be in a band, instead of being mad that society has more open positions for management consultants than bass players.

They know which is the stuff that sucks: everything. None of this is the fun part; the fun part doesn’t even exist; that was a lie they believed as a kid. So they keep trying to build things where they don’t have to do their jobs anymore but still get paid gobs of money.

They dislike their jobs so much, they can’t believe anyone else likes theirs. They don’t believe expertise or skill is real, because they have none. They think everything is a con because that’s what they do. Anything you can’t just buy must be a trick of some kind.

(Yeah, the trick is called “practice”.)

These aren’t people who think that critically about their own field, which is another thing that happens when you value STEM over everything else, and forget to teach people ethics and critical thinking.

Really, all they want to be are “Idea Guys”, tossing off half-baked concepts and surrounded by people they don’t have to respect and who won’t talk back, who will figure out how to make a functional version of their ill-formed ramblings. That they can take credit for.

And this gets to the heart of what’s so evil about the current crop of AI.

These aren’t tools built by the people who do the work to automate the boring parts of their own work; these are built by people who don’t value creative work at all and want to be rid of it.

As a point of comparison, the iPod was clearly made by people who listened to a lot of music and wanted a better way to do so. Apple has always been unique in the tech space in that it works more like a consumer electronics company; the vast majority of its products are clearly made by people who would themselves be an enthusiastic customer. In this field we talk about “eating your own dog-food” a lot, but if you’re writing a claims processing system for an insurance company, there’s only so far you can go. Making a better digital music player? That lets you think different.

But no: AI is all being built by people who don’t create, who resent having to create, who resent having to hire people who can create. Beyond even “I should be able to buy expertise” and into “I value this so little that I don’t even recognize this as a real skill”.

One of the first things these people tried to automate away was writing code—their own jobs. These people respect skill, expertise, craft so little that they don’t even respect their own. They dislike their jobs so much, and respect their own skills so little, that they can’t imagine that someone might not feel that way about their own.

A common pattern has been how surprised the techbros have been at the pushback. One of the funnier (in a laugh so you don’t cry way) sideshows is the way the techbros keep going “look, you don’t have to write anymore!” and every writer everywhere is all “ummmmm, I write because I like it, why would I want to stop” and then it just cuts back and forth between the two groups saying “what?” louder and angrier.

We’re really starting to pay for the fact that our civilization spent 20-plus years shoving kids that didn’t like programming into the career because it paid well and you could do it sitting down inside and didn’t have to be that great at it.

What future are they building for themselves? What future do they expect to live in, with this bold AI-powered utopia? Some vague middle-management “Idea Guy” economy, with the worst people in the world summoning books and art and movies out of thin air for no one to read or look at or watch, because everyone else is doing the same thing? A web full of AI slop made by and for robots trying to trick each other? Meanwhile the dishes are piling up? That’s the utopia?

I’m not sure they even know what they want, they just want to stop doing the stuff that sucks.

And I think that’s our way out of this.

What do we do?

For starters, AI Companies need to be regulated, preferably out of existence. There’s a flavor of libertarian-leaning engineer that likes to say things like “code is law,” but actually, turns out “law” is law. There’s whole swathes of this that we as a civilization should have no tolerance for; maybe not to a full Butlerian Jihad, but at least enough to send deepfakes back to the Abyss. We dealt with CFCs and asbestos, we can deal with this.

Education needs to be less STEM-focused. We need to carve out more career paths (not “jobs”, not “gigs”, “careers”) that have the benefits of tech but aren’t tech. And we need to furiously defend and expand spaces for creative work to flourish. And for that work to get paid.

But those are broad, society-wide changes. What can those of us in the tech world actually do? How can we help solve these problems in our own little corners? What can we go into work tomorrow and actually do?

It’s on all of us in the tech world to make sure there’s less of the stuff that sucks.

We can’t do much about the lack of jobs for dance majors, but we can help make sure those people don’t stop believing in skill as a concept. Instead of assuming what we think sucks is what everyone thinks sucks, is there a way to make it not suck? Is there a way to find a person who doesn’t think it sucks? (And no, I don’t mean “Uber for writing my emails”) We gotta invite people in and make sure they see the fun part.

The actual practice of software has become deeply dehumanizing. None of what I just spent a week describing is the result of healthy people working in a field they enjoy, doing work they value. This is the challenge we have before us: how can we change course so that the tech industry doesn’t breed this? Those of us that got lucky at twelve need to find new ways to bring along the people who didn’t.

With that in mind, next Friday on Icecano we start a new series on growing better software.


Several people provided invaluable feedback on earlier iterations of this material; you all know who you are and thank you.

And as a final note, I’d like to personally apologize to the one person who I know for sure clicked Open in New Tab on every single link. Sorry man, they’re good tabs!

Gabriel L. Helman

Why is this Happening, Part II: Letting Computers Do The Fun Part

Previously: Part I

Let’s leave the Stuff that Sucks aside for the moment, and ask a different question. Which Part is the Fun Part? What are we going to do with this time the robots have freed up for us?

It’s easy to get wrapped up in pointing at the parts of living that suck; especially when fantasizing about assigning work to C-3PO’s cousin. And it’s easy to spiral to a place where you just start waving your hands around at everything.

But even Bertie Wooster had things he enjoyed, that he occasionally got paid for, rather than let Jeeves work his jaw for him.

So it’s worth recalibrating for a moment: which are the fun parts?

As aggravating as it can be at times, I do actually like making computers do things. I like programming, I like designing software, I like building systems. I like finding clever solutions to problems. I got into this career on purpose. If it was fun all the time they wouldn’t have to call it “work”, but it’s fun a whole lot of the time.

I like writing (obviously.) For me, that dovetails pretty nicely with liking to design software; I’m generally the guy who ends up writing specs or design docs. It’s fun! I owned the customer-facing documentation several jobs back. It was fun!

I like to draw! I’m not great at it, but I’m also not trying to make a living out of it. I think having hobbies you enjoy but aren’t great at is a good thing. Not every skill needs to have a direct line to a career or a side hustle. Draw goofy robots to make your kids laugh! You don’t have to figure out the monetization strategy.

In my “outside of work” life I think I know more writers and artists than programmers. For all of them, the work itself—the writing, the drawing, the music, making the movie—is the fun part. The part they don’t like so well is the “figuring out how to get paid” part, or the dealing with printers part, or the weird contracts part. The hustle. Or, you know, the doing dishes, laundry, and vacuuming part. The “chores” part.

So every time I see a new “AI tool” release that writes text or generates images or makes video, I always ask the same question:

Why would I let the computer do the fun part?

The writing is the fun part! Drawing the pictures is the fun part! Writing the computer programs is the fun part! Why, why, are they trying to tell us that those are the parts that suck?

Why are the techbros trying to automate away the work people want to do?

It’s fun, and I worked hard to get good at it! Now they want me to let a robot do it?

Generative AI only seems impressive if you’ve never successfully created anything. Part of what makes “AI art” so enragingly radicalizing is the sight of someone who’s never tried to create something before, never studied, never practiced, never put the time in, never really even thought about it, joylessly showing off the terrible AI slop they made and demanding to be treated as if they made it themselves, not as if they used a tool built on the fruits of a million million stolen works.

Inspiration and plagiarism are not the same thing, the same way that “building a statistical model of word order probability from stuff we downloaded from the web” is not the same as “learning”. A plagiarism machine is not an artist.

But no, the really enraging part is watching these people show off this garbage and realizing that they can’t tell the difference. And AI art seems to be getting worse, AI pictures are getting easier to spot, not harder, because of course they are, because the people making the systems don’t know what good is. And the culture is following: “it looks like AI made it” has become the exact opposite of a compliment. AI-generated glop is seen as tacky, low quality. And more importantly, seen as cheap, made by someone who wasn’t willing to spend any money on the real thing. Trying to pass off Krusty Burgers as their own cooking.

These are people with absolutely no taste, and I don’t mean people who don’t have a favorite Kurosawa film, I mean people who order a $50 steak well done and then drown it in A1 sauce. The kind of people who, deep down, don’t believe “good” is real. That it’s all just “marketing.”

The act of creation is inherently valuable; creation is an act that changes the creator as much as anyone. Writing things down isn’t just documentation, it’s a process that allows and enables the writer to discover what they think, explore how they actually feel.

“Having AI write that for you is like having a robot lift weights for you.”

AI writing is deeply dehumanizing, to both the person who prompted it and to the reader. There is so much weird stuff to unpack from someone saying, in what appears to be total sincerity, that they used AI to write a book. That the part they thought sucked was the fun part, the writing, and left their time free for… what? Marketing? Uploading metadata to Amazon? If you don’t want to write, why do you want people to call you a writer?

Why on earth would I want to read something the author couldn’t be bothered to write? Do these ghouls really just want the social credit for being “an artist”? Who are they trying to impress, what new parties do they think they’re going to get into because they have a self-published AI-written book with their name on it? Talk about participation trophies.

All the people I know in real life or follow on the feeds who use computers to do their thing but don’t consider themselves “computer people” have reacted with a strong and consistent full-body disgust. Personally, compared to all those past bubbles, this is the first tech I’ve ever encountered where my reaction was complete revulsion.

Meanwhile, many (not all) of the “computer people” in my orbit tend to be at-least AI curious, lots of hedging like “it’s useful in some cases” or “it’s inevitable” or full-blown enthusiasm.

One side, “absolutely not”, the other side, “well, mayyybe?” As a point of reference, this was the exact breakdown of how these same people reacted to blockchain and bitcoin.

One group looks at the other and sees people musing about if the face-eating leopard has some good points. The other group looks at the first and sees a bunch of neo-luddites. Of course, the correct reaction to that is “you’re absolutely correct, but not for the reasons you think.”

There’s a Douglas Adams bit that gets quoted a lot lately, which was printed in Salmon of Doubt but I think was around before that:

I’ve come up with a set of rules that describe our reactions to technologies:

  1. Anything that is in the world when you’re born is normal and ordinary and is just a natural part of the way the world works.

  2. Anything that’s invented between when you’re fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it.

  3. Anything invented after you’re thirty-five is against the natural order of things.

The better-read AI-grifters keep pointing at rule 3. But I keep thinking of the bit from Dirk Gently’s Detective Agency about the Electric Monk:

The Electric Monk was a labour-saving device, like a dishwasher or a video recorder. Dishwashers washed tedious dishes for you, thus saving you the bother of washing them yourself, video recorders watched tedious television for you, thus saving you the bother of looking at it yourself; Electric Monks believed things for you, thus saving you what was becoming an increasingly onerous task, that of believing all the things the world expected you to believe.

So, what are the people who own the Monks doing, then?

Let’s speak plainly for a moment—the tech industry has always had a certain…. ethical flexibility. The “things” in “move fast and break things” wasn’t talking about furniture or fancy vases, this isn’t just playing baseball inside the house. And this has been true for a long time, the Open Letter to Hobbyists was basically Gates complaining that other people’s theft was undermining the con he was running.

We all liked to pretend “disruption” was about finding “market inefficiencies” or whatever, but mostly what that meant was moving into a market where the incumbents were regulated and labor had legal protection and finding a way to do business there while ignoring the rules. Only a psychopath thinks “having to pay employees” is an “inefficiency.”

Vast chunks of what it takes to make generative AI possible are already illegal or at least highly unethical. The Internet has always been governed by a sort of combination of gentleman’s agreements and pirate codes, and in the hunger for new training data, the AI companies have sucked up everything, copyright, licensing, and good neighborship be damned.

There are some half-hearted attempts to combat AI via arguments that it violates copyright or open source licensing or other legal approaches. And more power to them! Personally, I’m not really interested in the argument that the AI training data violates contract law, because I care more about the fact that it’s deeply immoral. See that Vonnegut line about “those who devised means of getting paid enormously for committing crimes against which no laws had been passed.” Much like I think people who drive too fast in front of schools should get a ticket, sure, but I’m not opposed to the speeding because it’s illegal, I’m opposed because it’s dangerous to the kids.

It’s been pointed out more than once that AI breaks the deal behind webcrawlers and search—search engines are allowed to suck up everyone’s content in exchange for sending traffic their way. But AI just takes and regurgitates, without sharing the traffic, or even the credit. It’s the AI Search Doomsday Cult. Even Uber didn’t try to put car manufacturers out of business.

But beyond all that, making things is fun! Making things for other people is fun! It’s about making a connection between people, not about formal correctness or commercial viability. And then you see those terrible google fan letter ads at the olympics, or see people crowing that they used AI to generate a kids book for their children, and you wonder: how can these people have so little regard for their audience that they don’t want to make the connection themselves? That they’d rather give their kids something a jumped-up spreadsheet full of stolen words barfed out instead of something they made themselves? Why pass on the fun part, just so you can take credit for something thoughtless and tacky? The AI ads want you to believe that you need their help to find “the right word”; what they don’t tell you is that no you don’t, what you need to do is have fun finding your word.

Robots turned out to be hard. Actually, properly hard. You can read these papers by computer researchers in the fifties where they’re pretty sure Threepio-style robot butlers are only 20 years away, which seems laughable now. Robots are the kind of hard where the more we learn the harder they seem.

As an example: Doctor Who in the early 80s added a robot character who was played by the prototype of an actual robot. This went about as poorly as you might imagine. That’s impossible to imagine now; no producer would risk their production on a homemade robot today, no matter how impressive the demo was. You want a thing that looks like Threepio walking around and talking with a voice like a Transformer? Put a guy in a suit. Actors are much easier to work with. Even though they have a union.

Similarly, “General AI” in the HAL/KITT/Threepio sense has been permanently 20 years in the future for at least 70 years now. The AI class I took in the 90s was essentially a survey of things that hadn’t worked, and ended with a kind of shrug and “maybe another 20?”

Humans are really, really good at seeing faces in things, and finding patterns that aren’t there. Any halfway decent professional programmer can whip up an ELIZA clone in an afternoon, and even knowing how the trick works it “feels” smarter than it is. A lot of AI research projects are like that, a sleight-of-hand trick that depends on doing a lot of math quickly and on the human capacity to anthropomorphize. And then the self-described brightest minds of our generation fail the mirror test over and over.
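To make the sleight-of-hand concrete, here is a minimal ELIZA-style sketch in Python (a hypothetical toy for illustration, not Weizenbaum’s actual script): a few regular expressions, some pronoun swapping, and a canned fallback, and it already “feels” like it’s listening.

```python
# A toy ELIZA-style chatbot: pattern matching and pronoun reflection, nothing more.
# (Illustrative sketch only; the rules and wording here are made up for the example.)
import random
import re

REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I", "your": "my"}

RULES = [
    (r"i feel (.*)",  ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"i am (.*)",    ["Why do you say you are {0}?", "Do you enjoy being {0}?"]),
    (r"because (.*)", ["Is that the real reason?", "What else might explain it?"]),
    (r"(.*)",         ["Tell me more.", "Why do you say that?", "How does that make you feel?"]),
]

def reflect(text):
    # Swap pronouns so "my job" comes back as "your job".
    return " ".join(REFLECTIONS.get(word, word) for word in text.lower().split())

def respond(line):
    # First rule that matches wins; the catch-all at the end guarantees an answer.
    for pattern, answers in RULES:
        match = re.match(pattern, line.lower().strip(" .!?"))
        if match:
            return random.choice(answers).format(*(reflect(g) for g in match.groups()))

print(respond("I feel like my job is pointless"))
# e.g. "Why do you feel like your job is pointless?"
```

That’s the whole trick: the program never understands anything, it just mirrors your own words back at you quickly enough that your brain fills in the rest.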

Actually building a thing that can “think”? Increasingly seems impossible.

You know what’s easy, though, comparatively speaking? Building a statistical model of all the text you can pull off the web.
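Comparatively easy at toy scale, anyway. Here is a rough sketch of the idea in Python, purely as an illustration and nothing like a real LLM in scale or sophistication: count which words follow which in a pile of text, then sample from those counts to generate more text.

```python
# A toy "statistical model of word order": remember what tends to follow what,
# then sample from those counts. (Illustrative only; the corpus is made up.)
import random
from collections import defaultdict

def train(text):
    model = defaultdict(list)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)          # every word ever seen after `prev`
    return model

def generate(model, start, length=12):
    word, output = start, [start]
    for _ in range(length):
        followers = model.get(word)
        if not followers:
            break
        word = random.choice(followers)  # pick a statistically plausible next word
        output.append(word)
    return " ".join(output)

corpus = "the robot does the dishes while the human writes the poem the robot wants to write"
print(generate(train(corpus), "the"))
# e.g. "the robot does the human writes the poem the robot wants to write"
```

Scale that same bet up by orders of magnitude of data and parameters and you get the general shape of the thing currently being marketed as intelligence.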

On Friday: conclusions, such as they are.

Gabriel L. Helman

Why is this Happening, Part I: The Stuff That Sucks

When I was a kid, I had this book called The Star Wars Book of Robots. It was a classic early-80s kids pop-science book; kids are into robots, so let’s have a book talking about what kinds of robots existed at the time, and then what kinds of robots might exist in the future. At the time, Star Wars was the spoonful of sugar to help education go down, so every page talked about a different kind of robot, and then the illustration was a painting of that kind of robot going about its day while C-3PO, R2-D2, and occasionally someone in 1970s leisurewear looked on. So you’d have one of those car factory robot arms putting a sedan together while the droids stood off to the side with a sort of “when is Uncle Larry finally going to retire?” energy.

The image from that book that has stuck with me for four decades is the one at the top of this page: Threepio, trying to do the dishes while vacuuming, and having the situation go full slapstick. (As a kid, I was really worried that the soap suds were going to get into his bare midriff there and cause electrical damage, which should be all you need to know to guess exactly what kind of kid I was at 6.)

Nearly all the examples in the book were of some kind of physical labor; delivering mail, welding cars together, doing the dishes, going to physically hostile places. And at the time, this was the standard pop-culture job for robots “in the future”, that robots and robotic automation were fundamentally physical, and were about relieving humans from mechanical labor.

The message is clear: in the not-too-distant future we’re all going to have some kind of robotic butler or maid or handyman around the house, and that robot is going to do all the Stuff That Sucks. Dishes, chores, laundry, assorted car assembly, whatever it is you don’t want to do, the robot will handle for you.

I’ve been thinking about this a lot over the last year and change since “Generative AI” became a phrase we were all forced to learn. And what’s interesting to me is the way that the sales pitch has evolved around which is the stuff that sucks.

Robots, as a storytelling construct, have always been a thematically rich metaphor in this regard, and provide an interesting social diagnostic. You can tell a lot about what a society thinks is “the stuff that sucks” by looking at both what the robots and the people around them are doing. The work that brought us the word “robot” itself represented them as artificially constructed laborers who revolted against their creators.

Asimov’s body of work, which was the first to treat robots as something industrial and which coined the term “robotics”, mostly represented them as doing manual labor in places too dangerous for humans while the humans sat around doing science or supervision. But Asimov’s robots also were always shown to be smarter and more individualistic than the humans believed, and generally found a way to do what they wanted to do, regardless of the restrictions from the “Laws of Robotics.”

Even in Star Wars, which buries the political content low in the mix, it’s the droids where the dark satire from THX-1138 pokes through; robots are there as a permanent servant class doing dangerous work either on the outside of spaceships or translating for crime bosses, are the only group shown to be discriminated against, and have otherwise unambiguous “good guys” ordering mind wipes of them, despite consistently being some of the smartest and most capable characters.

And then, you know, Blade Runner.

There’s a lot of social anxiety wrapped up in all this. Post-industrial revolution, the expanding middle classes wanted the same kinds of servants and “domestic staff” as the upper classes had. Wouldn’t it be nice to have a butler, a valet, some “staff?” That you didn’t have to worry about?

This is the era of Jeeves & Wooster, and who wouldn’t want a “gentleman’s gentleman” to do the work around the house, make you a hangover cure, drive the car, get you out of scrapes, all while you frittered your time away with idiot friends?

(Of course, I’m sure it’s a total coincidence this is also the period where the Marxist & Socialist thinkers really got going.)

But that stayed aspirational, rather than possible, and especially post-World War II, the culture landed on sending women back home and depending on the stay-at-home mom to handle “all that.”

There’s a lot of “robot butlers” in mid-century fiction, because how nice would it be if you could just go to the store and buy that robot from The Jetsons, free from any guilt? There’s a lot to unpack there, but that desire for a guilt-free servant class was, and is, persistent in fiction.

Somewhere along the line, this changes, and robots stop being manual labor or the domestic staff, and start being secretaries, executive assistants. For example, by the time Star Trek: The Next Generation rolls around in the mid-80s, part of the fully automated luxury space communism of the Federation is that the Enterprise computer is basically the perfect secretary—making calls, taking dictation, and doing research. Even by then it was clear that there was a whole lot of “stuff to know”, and so robots find themselves acting as research assistants. Partly, this is a narrative accelerant—having the Shakespearian actor able to ask thin air for the next plot point helps move things along pretty fast—but the anxiety about information overload was there, even then. Imagine if you could just ask somebody to look it up for you! (Star Trek as a whole is an endless Torment Nexus factory, but that’s a whole other story.)

I’ve been reading a book about the history of keyboards, and one of the more interesting side stories is the way “typing” has interacted with gender roles over the last century. For most of the 1900s, “typing” was a woman’s job, and men, who were of course the bosses, didn’t have time for that sort of tediousness. They’re Idea Guys, and the stuff that sucks is wrestling with an actual typewriter to write them down.

So, they would either handwrite things they needed typed and send it down to the “typing pool”, or dictate to a secretary, who would type it up. Typing becomes a viable job out of the house for younger or unmarried women, albeit one without an actual career path.

This arrangement lasted well into the 80s, and up until then the only men who typed themselves were either writers or nerds. Then computers happened, PCs landed on men’s desks, and it turns out the only thing more powerful than sexism was the desire to cut costs, so men found themselves typing their own emails. (Although, this transition spawns the most unwittingly enlightening quote in the whole book, where someone who was an executive at the time of the transition says it didn’t really matter, because “Feminism ruined everything fun about having a secretary”. pikachu shocked face dot gif)

But we have a pattern; work that would have been done by servants gets handed off to women, and then back to men, and then fiction starts showing up fantasizing about giving that work to a robot, who won’t complain, or have an opinion—or start a union.

Meanwhile, in parallel with all this “chat bots” have been cooking along for as long as there have been computers. Typing at a computer and getting a human-like response was an obvious interface, and spawned a whole set of thought similar but adjacent to all those physical robots. ELIZA emerged almost as soon as computers were able to support such a thing. The Turing test assumes a chat interface. “Software Agents” become a viable area of research. The Infocom text adventure parser came out of the MIT AI lab. What if your secretary was just a page of text on your screen?

One of the ways that thread evolved emerged as LLMs and “Generative AI”. And thanks to the amount of VC being poured in, we get the last couple of years of AI slop. And also a hype cycle that says that any tech company that doesn’t go all-in on “the AI” is going to be left in the dust. It’s the Next Big Thing!

Flash forward to Apple’s Worldwide Developer Conference earlier this summer. The Discourse going into WWDC was that Apple was “behind on AI” and needed to catch up to the industry, although does it really count as behind if all your competitors are up over their skis? And so far AI has been extremely unprofitable, and if anything, Apple is a company that only ships products it knows it can make money on.

The result was that they rolled out the most comprehensive vision of how a Gen AI–powered product suite looks here in 2024. In many ways, “Apple Intelligence” was Apple doing what they do best—namely, doing their market research via letting their erstwhile competitors skid into a ditch, and then slide in with a full Second Mover Advantage by saying “so, now do you want something that works?”

They’re very, very good at identifying The Stuff That Sucks, and announcing that they have a solution. So what stuff was it? Writing text, sending pictures, communicating with other people. All done by a faceless, neutral “assistant,” who you didn’t have to engage with like they were a person, just a fancy piece of software. Computer! Tea, Earl Grey, hot!

I’m writing about a marketing event from months ago because watching their giant infomercial was where something clicked for me. They spent an hour talking about speed: “look how much faster you can do stuff!” You don’t have to write your own text, draw your own pictures, send your own emails, interact directly with anyone!

Left unsaid was what you were saving all that time for. Critically, they didn’t announce they were going to a 4-day work week or 6-hour days; all this AI was so people could do more “real work”. Except that the “stuff that sucks” was… that work? What’s the vision of what we’ll be doing when we’ve handed off all this stuff that sucks?

Who is building this stuff? What future do they expect to live in, with this bold AI-powered economy? What are we saving all this time for? What future do these people want? Why are these the things they have decided suck?

I was struck, not for the first time, by what a weird dystopia we find ourselves in: “we gutted public education and non-STEM subjects like writing and drawing, and everyone is so overworked they need a secretary but can’t afford one, so here’s a robot!”

To sharpen the point: why in the fuck am I here doing the dishes myself while a bunch of techbros raise billions of dollars to automate the art and poetry? What happened to Threepio up there? Why is this the AI that’s happening?

On Wednesday, we start kicking over rocks to find an answer...

Gabriel L. Helman

Salmon of Doubt by Douglas Adams (2002)

There are multiple interlocking tragedies of Douglas Adams’ death—not the least of which is the fact that he died at all. But he also passed at what appeared to be the end of a decade-long career slump—well, not slump exactly, but a decade where he seemed to spend his time being very, very irritated at the career he’d accidentally found.

After he died unexpectedly in May of 2001 at 49, his publisher rushed out a collection of previously unpublished work called Salmon of Doubt. It’s a weird book—a book that only could have happened under the exact circumstances that it did, scrambled out to take advantage of the situation, part collection, part funeral.

Douglas Adams is, by far, the writer who’s had the biggest influence on my own work, and it’s not even close. I’m not even sure who would be number two? Ursula LeGuin, probably? But that’s a pretty distant second place—The Hitchhiker’s Guide to the Galaxy is the first “grown-up” book I ever read on my own, which is sort of my secret origin story.

As such I gulped Salmon down the instant it came out in 2002, and hadn’t read it since. There was a bit I vaguely remembered that I wanted to quote in something else I was working on, so I’ve recently bought a new copy, as my original one has disappeared over the years. (Actually, I’m pretty sure I know exactly what happened to it, but it’s a minor footnote in a larger, more depressing story, so lets draw a veil across it and pretend that it was pilfered by elves.)

Re-reading the book decades later, two things are very obvious:

First, Adams would never have let a book like this happen while he was alive. It’s self-indulgent in exactly the way he never was, badly organized, clearly rushed. I mean, the three main sections are “Life”, “The Universe”, and “And Everything”, which in addition to being obvious to the point of being tacky, is an absolutely terrible table of contents because there’s no rhyme or reason why one item is in one section versus another.

Second, a book like this should have happened years before. There was so much stuff Adams wrote—magazine articles, newspaper columns, bits and bobs on the internet—that a non-fiction essay collection–style book was long overdue.

This book is weird for other reasons, including that a bunch of other people show up and try to be funny. It’s been remarked more than once that no other generally good writer has inspired more bad writing than Douglas Adams, and the other contributions to this book are a perfect example. The copy I have now is the US paperback, with a “new introduction” by Terry Jones—yes, of Monty Python—which might be the least funny thing I’ve ever read, not just unfunny but actively anti-funny, the humor equivalent of anti-matter. The other introductions are less abrasive, but badly misjudge the audience’s tolerance for a low-skill pastiche at the start of what amounts to a memorial service.

The main selling point here is the unfinished 3rd Dirk Gently novel, which may or may not have actually been the unfinished 6th Hitchhiker’s Guide to the Galaxy novel. However, that only takes up about 80 pages of a 290-page book; by my math that’s a hair over a quarter, which is a little underwhelming. It’s clear the goal was to take whatever the raw material looked like and edit it into something reasonably coherent and readable, which it is. But even at the time, it felt like heavily-edited “grit-out-of-the-spigot” early drafts rather than an actual unfinished book; I’d be willing to bet a fiver that if Adams had lived to finish whatever that book turned into, none of the text here would have been in it. As more unfinished pieces have leaked out over the years, such as the excerpts in 42: The Wildly Improbable Ideas of Douglas Adams, it’s clear that there was a lot more than made it into Salmon, and while less “complete”, that other stuff was a lot more interesting. As an example, the excerpts from Salmon in 42 include some passages from one of the magazine articles collected here, except in the context of the novel instead of Adams himself on a trip? What’s the story there? Which came first? Which way did that recycling go? Both volumes are frustratingly silent.

It’s those non-novel parts that are actually good, though. That magazine article is casually one of the best bits of travel writing I’ve ever read, there’s some really insightful bits about computers and technology, a couple of jokes that I’ve been quoting for years having forgotten they weren’t in Hitchhiker proper. The organization, and the rushed nature of the compilation, make these frustrating, because there will be an absolutely killer paragraph on its own, with no context for where did this come from? Under what circumstances was this written? Similarly for the magazine articles, newspaper columns, excerpts from (I assume) his website; there’s no context or dates or backstory, the kinds of things you’d hope for in a collection like this. Most of them seem to date to “the 90s” from context clues, but it’s hard to say where exactly all these things fit in.

But most of what really makes the book so weird is how fundamentally weird Adams’ career itself was in the last decade of his life.

In a classic example of working for years to become an overnight success, Adams had a remarkably busy period from 1978–1984, which included (deep breath) two series of the Hitchhiker radio show, a revised script for the album version of the first series, a Doctor Who episode, a stint as Doctor Who’s script editor during which he wrote two more episodes—one of which was the single best episode of the old show—and heavily rewrote several others, the TV adaptation of Hitchhiker which was similar but not identical to the first radio series, the third Hitchhiker novel based (loosely) on a rejected pitch for yet another Doctor Who, and ending in 1984 with the near simultaneous release of the fourth Hitchhiker novel and the Infocom text adventure based on the first.

(In a lot of ways, HHGG makes more sense if you remember that it happened in the shadow of his work for Doctor Who; more than anything it functions as a satire of the older program, the Galaxy Quest to Who’s Star Trek, if you will. Ford is the Doctor if he just wanted to go to a party, Arthur is a Doctor Who companion who doesn’t want to be there and argues back, and in the radio show at least, The Heart of Gold operates almost exactly like the Tardis. If you’ll forgive the reference, I’ve always found it improbable that Hitchhiker found its greatest success in America at a time when Who was barely known.)

After all that, to steal a line from his own work, “he went into a bit of a decline.”

Somewhere in there he also became immensely rich, and it’s worth remembering for the rest of this story that somewhere in the very early 80s Adams crossed the line of “never needs to work again.”

Those last two projects in 1984 are worth spending an extra beat on. It’s not exactly a secret that Adams actually had very little to do with the Hitchhiker game other than the initial kickoff, and that the vast majority of the writing and the puzzles were Steve Meretzky doing an impeccable Adams impression. (See The Digital Antiquarian’s Douglas Adams, The Computerized Hitchhiker’s, and Hitchhiking the Galaxy Infocom-Style for more on how all that happened.)

Meanwhile, the novel So Long and Thanks for All The Fish kicks off what I think of his middle period. It’s not really a SF comedy, it’s a magical realism romance novel that just happens to star the main character from Hitchhiker. It wasn’t super well received. It’s also my personal favorite? You get the feeling that’s the sort of direction he wanted to move in, not just recycling the same riffs from a decade earlier. There’s a real sense of his growth as an author. It also ties up the Hitchhiker series with a perfect ending.

Then a couple of more things happen. Infocom had a contract for up to six Hitchhiker games, and they really, really wanted to make at least a second. Adams, however, had a different idea for a game, which resulted in Infocom’s loved-by-nobody Bureaucracy, which again, Adams largely had nothing to do with beyond the concept, with a different set of folks stepping in to finish the project. (Again, see Bureaucracy at The Digital Antiquarian for the gory details.)

Meanwhile, he had landed a two book deal for two “non-Hitchhiker books”, which resulted in the pair of Dirk Gently novels, of which exactly one of them is good.

The first, Dirk Gently’s Holistic Detective Agency, is probably his best novel. It reworks a couple of ideas from those late 70s Doctor Whos but remixed in interesting ways. The writing is just better, better characters, funnier, subtler jokes, a time-travel murder-mystery plot that clicks together like a swiss watch around a Samuel Coleridge poem and a sofa. It’s incredible.

The second Dirk Gently book, Long Dark Teatime of the Soul, is a terrible book, full stop, and I would describe it as one of the most angry, bitter, nihilistic books I’ve ever read, except I’ve also read Mostly Harmless, the final Hitchhiker book. Both of those books drip with the voice of an author that clearly really, really doesn’t want to be doing what he’s doing.

(I’m convinced Gaiman’s American Gods is a direct riposte to the bleak and depressing Teatime.)

The two Dirk books came out in ’87 and ’88, the only time he turned a book around that fast. (Pin that.) After wrapping up the Dirk contract, he went and wrote Last Chance to See, his best book period, out in 1990.

Which brings us back around to the book nominally at hand—Salmon of Doubt. The unfinished work published here claims to be a potential third Dirk novel, and frankly, it’s hard to believe that was ever seriously under consideration. Because, look, the Gently contract was for two books, neither of which did all that well. According to the intro of this compilation, the first files for Salmon date to ’93, and he clearly noodled on and around that for a decade. That book was never actually going to be finished. If there was desire for a 3rd Gently novel, they would have sat him down and forced him to finish it in ’94. Instead, they locked him in a room and got Mostly Harmless.

There’s a longstanding rumor that Mostly Harmless was largely ghostwritten, and it’s hard to argue. It’s very different from his other works, mean, bad-tempered, vicious towards its characters in a way his other works aren’t. Except it has a lot in common with Bureaucracy which was largely finished by someone else. And, it has to be said, both of those have a very similar voice to the equally mean and bad-tempered Teatime. This gets extra suspicious when you consider the unprecedented-for-him turnaround time on Teatime. It’s hard to know how much stock to put into that rumor mill, since Adams didn’t write anything after that we can compare them to—except Last Chance which is in a completely different mood and in the same style as his earlier, better work. Late period style or ghostwriter? The only person alive who still knows hasn’t piped up on the subject.

Personally? I’m inclined to believe that Dirk Gently’s Holistic Detective Agency was the last novel he wrote on his own, and that his contributions to both Teatime and Mostly Harmless were a sketch of an outline and some jokes. Which all, frankly, makes his work—or approximation thereof—over the course of the 90s even stranger.

In one of the great moments of synchronicity, while I was working on this, the Digital Antiquarian published a piece on Adams’ late period, and specifically the absolute mess of the Starship Titanic computer game, so rather than me covering the same ground, you should pause here and go read The Later Years of Douglas Adams. But the upshot is he spent a lot of time doing not very much of anything, and spawning at least two projects pawned off on others to finish.

After the garbage fire of Starship Titanic and then the strangely prescient h2g2—which mostly failed when it choked out on the reams of unreadable prose that resulted from a horde of fans trying and failing to write Wikipedia in the style of Adams’ guide entries—there was a distinct vibe shift. Whereas interviews with him in the mid 90s tended to have him say things like “I accidentally wrote a best-selling novel” and indicate a general dislike of novel writing as a profession, there seemed to be a thaw, a sense that after a decade-plus of resenting his found career, maybe he was ready to accept it and lean back in.

And then he died in the gym at 49.

One of the many maddening things about his death is that we never got to see what his late style would have looked like. His last two good books provide a hint of where he was heading.

And that’s the real value of Salmon of Doubt—the theoretical novel contained within would never have been finished in that form, and the rest of the content largely consists of articles, blog posts, and other trivialities, but it’s the only glimpse of what “Late Adams” would have looked like that we’ll ever get.

As a point of comparison, let’s continue getting side-tracked and talk about the guy who succeeded Adams as “the satirical genre writer beloved by nerds,” Terry Pratchett. Pratchett started writing novels about the same time Adams did, but as the saying goes, put the amount of energy into writing books that Adams spent avoiding writing them. He also, you know, lived longer, despite also dying younger than he should have. Even if we just scope down to Discworld, Pratchett wrote 40 novels, 28 of which were while Adams was also alive and working. Good Omens, his collaboration with Neil Gaiman, which is Discworld-adjacent at least, came out in 1990, and serves as a useful piece of temporal geography; that book is solidly still operating in “inspired by Douglas Adams” territory, and Pratchett wasn’t yet Terry Pratchett, beloved icon. But somewhere around there at the turn of the decade is where he stops writing comedy fantasy and starts writing satirical masterpieces. “What’s the first truly great Discworld novel?” is the sort of unanswerable question the old web thrived on, despite the fact that the answer is clearly Guards! Guards! from ’89. But the point here is that was book 8, after a decade of constant writing. And that’s still a long way away from Going Postal or The Wee Free Men. We never got to see what a “Douglas Adams 8th Novel” looked like, much less a 33rd.

What got me thinking about this was a discussion I saw recently about whether Adams or Pratchett was the better writer. And again, this is a weird comparison, because Pratchett had a late period that Adams never had. Personally, I think there’s very little Pratchett that’s as good as Adams at his peak, but Pratchett wrote ten times the number of novels Adams did and lived twenty years longer. Yes, Pratchett’s 21st century late period books are probably better than Adams’ early 80s work, but we never got to see what Adams would have done at the same age.

(Of course the real answer is: they’re both great, but PG Wodehouse was better than both of them.)

And this is the underlying frustration of Salmon and the Late Adams that never happened. There are these little glimpses of what could have been, career paths he didn’t take. It’s not that hard to imagine a version of Hitchhiker that worked like Discworld did, picking up new characters and side-series but always just rolling along, a way for the author to knock out a book every year where Arthur Dent encountered whatever Adams was thinking about, where Adams didn’t try to tie it off twice. Or where Adams went the Asimov route and left fiction behind to write thoughtful explanatory non-fiction in the style of Last Chance.

Instead all we have is this. It’s scraps, but scraps I’m grateful for.


This is where I put a horizontal line and shift gears dramatically. Something I’ve wondered with increasing frequency over the last decade is who Adams would have turned into. I wonder this, because it’s hard to miss that nearly everybody in Adams’ orbit has turned into a giant asshole. The living non-Eric Idle Pythons, Dawkins and the whole New Atheist movement, the broader 90s Skeptic/Humanist/“Bright” folks all went mask-off the last few years. Even the guy who took over the math puzzles column in Scientific American from Martin Gardner now has a podcast where he rails against “wokeists” and vomits out transphobia. Hell, as I write this, Neil Gaiman, who wrote the definitive biography of Adams and whose first novel was a blatant Adams pastiche, has turned out to be “problematic” at best.

There’s something of a meme in the broader fanbase that it’s a strange relief that Adams died before we found out if he was going to go full racist TERF like all of his friends. I want to believe he wouldn’t, but then I think about the casual viciousness with which Adams slaughtered Arthur Dent in Mostly Harmless—the beloved character who made him famous and rich—and remember why I hope those rumors about ghostwriters are true.

The New Atheists always kind of bugged me for reasons it took me a long time to articulate; I was going to put a longer bit on that theme here, but this piece continues to be proof that if you let something sit in your drafts folder long enough, someone else will knock out an article covering the parts you haven’t written yet, and as such, Defector had an absolutely dead-on piece on that whole movement a month or so ago: The Ghosts Of New Atheism Still Haunt Us. Adams goes (mercifully) unmentioned, but recall Dawkins met his wife—Doctor Who’s Romana II herself, Lalla Ward!—after Adams introduced the two of them at a party he was hosting, and Adams was a huge sloppy fan of Dawkins and his work.

I bring all this up here and now because one of the pieces in Salmon of Doubt is an interview of Adams by the “American Atheist”, credited to The American Atheist 37, No. 1, which, in keeping with Salmon’s poor organization, isn’t dated, but which a little digging on the web reveals to be the Winter 1998–1999 issue.

It’s incredible, because the questions the interviewer asks him just don’t compute with Adams. Adams can’t even engage with the world-view the American Atheists have. I’m going to quote the best exchange here:

AMERICAN ATHEISTS: Have you faced any obstacles in your professional life because of your Atheism (bigotry against Atheists), and how did you handle it? How often does this happen?

DNA: Not even remotely. It's an inconceivable idea.

One can easily imagine, and by “imagine” I mean “remember”, other figures from that movement going on and on about how poorly society treats atheists, and instead here Adams just responds with blank incomprehension. Elsewhere in the interview he dismisses their disconnect as a difference between the US and the UK, which is blatantly a lie, but one that also demonstrates the sort of kindness and empathy one doesn’t expect from the New Atheists. Every response Adams gives has the air of him thinking “what in the world is wrong with you?”

And, here in the twenties, that was my takeaway from reading Salmon again. It’s a book bursting with empathy, kindness, and a fundamentally optimistic take on the absurd world we find ourselves in. A guy too excited about how great things could be to rant about how stupid they are (or, indeed, to put the work into getting there.) A book full of things written by, fundamentally, one of the good guys.

If Adams had lived, I’m pretty sure three things would be true. First, there’d be a rumor every year that this was the year he was finally going to finish a script for the new Doctor Who show, despite the fact that it never actually ends up happening. Second, that we never would have been able to buy a completed Salmon of Doubt. Third, I’m pretty sure he wouldn’t be on twitter asking people to define “a woman.”

In other words: Don't Panic.

Gabriel L. Helman

Fractals

(This may not appear correctly inside a feed reader or other limited-formatting browser.)

Part 1: DFW

  1. Let me tell you a story about something that happens to me. Maybe it happens to you?
  2. This same thing has happened probably a dozen times, if not more, over the last couple of decades. I’ll be in a group of some kind, where the membership is not entirely optional—classmates, coworkers, other parents at the kid’s school—and the most irritating, obnoxious member of the group, the one I have the least in common with and would be the least likely to spend time with outside of whatever it is we’re doing, will turn to me, face brightening, and say “Hey! I bet you’re a huge fan of David Foster Wallace.”
  3. I’ve learned that the correct answer to this is a succinct “you know, he didn’t invent footnotes.”A
  4. Because reader, they do not bet correctly. To be very clear: I have never1 read any of his work. I’m aware he exists, and there was that stretch in the late 90s where an unread copy of Infinite Jest seemed to spontaneously materialize on everyone’s shelves. But I don’t have an opinion on the guy?2
  5. I have to admit another reaction: in addition to this behavior, most of the people41 you run into who actually recommend his work are deeply obnoxious.α
  6. So, I’ve never been able to shake the sense that this is somehow meant as an insult. There’s a vague “attempted othering” about it; it's never presented as “I liked this and I bet you will too,” or “Aha, I finally found a thing we have in common!” it’s more of “Oh, I bet you’re one of those people”. It’s the snap of satisfaction that gets to me. The smug air of “oh, I’ve figured you out.”
  7. And look, I’m a late-era Gen-X computer nerd programmer—there are plenty of stereotypes I’ll own up to happily. Star Wars fan? Absolutely. The other 80s nerd signifiers? Trek, Hitchhiker’s Guide, Monty Python? Sure, yep, yep. Doctor Who used to be the outlier, but not so much anymore.3 William Gibson, Hemingway, Asimov? For sure.
  8. But this one I don’t understand. Because it can’t just be footnotes, right?
  9. I bring all this up because Patricia Lockwood4 has written a truly excellent piece on DFW: [Where be your jibes now?].7 It’s phenomenal, go read it!
  10. But, I suspect I read it with a unique viewpoint. I devoured it with one question: “am I right to keep being vaguely insulted?”
  11. And, he nervously laughed, I still don’t know!
  12. She certainly seems to respect him, but not actually like him very much? I can’t tell! It’s evocative, but ambiguous? It’s nominally a review of his last, unfinished, posthumously published book, but then works its way through his strange and, shall we say, “complicated” reputation, and then an overview of the rest of his work.
  13. And I have the same reaction I did every time I hear about his stuff, which is some combination of “that guy sounds like he has problems” (he did) and “that book sounds awful” (they do).
  14. “I bet you’re a fan”
  15. Why? Why do you bet that?
  16. I’m self-aware enough to know that the correct response to all this is probably just to [link to this onion article] And I guess there’s one way to know.
  17. But look. I’m just not going to read a million pages to find out.

Part 2: Footnotes, Hypertext, and Webs

  1. Inevitably, this is after I’ve written something full of footnotes.B
  2. Well, to expand on that, this usually happens right after I write something with a joke buried in a footnote. I think footnotes are funny! Or rather, I think they’re incredibly not funny by default, a signifier of a particular flavor of dull academic writing, which means any joke you stash in one becomes automatically funnier by virtue of surprise.C
  3. I do like footnotes, but what I really like is hypertext. I like the way hypertext can spider-web out, spreading in all directions. Any text always has asides, backstory, details, extending fractally out. There’s always more to say about everything. Real life, even the simple parts, doesn’t fit into neat linear narratives. Side characters have full lives, things got where they are somehow, everything has an explanation, a backstory, more details, context. So, generally writing is as much the art of figuring out what to leave out as anything. But hypertext gives you a way to fit all those pieces together, to write in a way that’s multidimensional.D
  4. Fractals. There’s always more detail. Another story. “On that subject…”E
  5. Before we could [link] to things, the way to express that was footnotes. Even here, on the system literally called “the web”, footnotes still work as a coherent technique for wrangling hypertext into something easier to get your arms around.F
  6. But the traditional hypertext [link] is focused on detail—to find out more, click here! The endless rabbit holes of Wikipedia’s links to other articles. A world where every noun has a blue underline leading to another article, and another, and so on.G
  7. Footnotes can do that, but they have another use that links don’t—they can provide commentary.13 A well-deployed footnote isn’t just “click here to read more”, it’s commentary, annotation, a Gemara.H
  8. I come by my fascination with footnotes honestly: The first place I ever saw footnotes deployed in an interesting way was, of all things, a paper in a best-of collection of the Journal of Irreproducible Results.9 Someone submitted a paper that was only a half-sentence long and then had several pages of footnotes that contained the whole paper, nested in on itself.12 I loved this. It was like a whole new structure opened up that had been right under my nose the whole time.J
  9. Although, if I’m honest, the actual origin of my love of footnotes is probably reading too many choose your own adventure books.17
  10. I am also a huge fan of overly-formalist structural bullshit, obviously.α

Part 3: Art from Obnoxious People

  1. What do you do with art that’s recommended by obnoxious people?40
  2. In some ways, this is not totally unlike how to deal with art made by “problematic” artists; where if we entirely restricted our intake to art made exclusively by good people, we’d have Mister Rogers' Neighborhood and not much else. Maintaining an increasingly difficult cognitive dissonance while watching Annie Hall is one thing, but what about when someone you don’t like recommends something?38
  3. To be fair, or as fair as possible, most of this has very, very little to do with the art itself. Why has a movie about space wizards overthrowing space fascists become the favorite movie of actual earth fascists? Who knows? The universe is strange. It’s usually not healthy to judge art by its worst fans.36
  4. Usually.
  5. In my experience, art recommended by obnoxious people takes roughly three forms:32
  6. There’s art where normal people enjoy it, and it’s broadly popular, and then there’s a deeply irritating toxic substrate of people who maybe like it just a little too much to be healthy. 30
  7. Star Trek is sort of the classic example here, or Star Wars, or Monty Python, or, you know, all of sports. Things that are popular enough where there’s a group of people who have tried to paper over a lack of personality by memorizing lines from a 70s BBC sketch comedy show, or batter’s statistics from before they were born. 28
  8. Then there’s the sort of art that unlocks a puzzle, where, say, you have a coworker who is deeply annoying for reasons you can’t quite put your finger on, and then you find out their favorite book is Atlas Shrugged. A weight lifts, aha, you say, got it. It all makes sense now. 26
  9. And then, there’s art24 that exclusively comes into your life from complete dipshits.
  10. The trick is figuring out which one you're dealing with.q

Part 4: Endnotes

  1. What never? Well, hardly ever!
  2. I think I read the thing where he was rude about cruises?
  3. And boy, as an aside, “I bet you’re a Doctor Who fan” has meant at least four distinct things since I started watching Tom Baker on PBS in the early 80s.
  4. Who5 presumably got early parole from her thousand years of jail.6
  5. In the rough draft of this I wrote “Patricia Highsmith,” and boy would that be a whole different thing!
  6. Jail for mother!
  7. In the spirit of full disclosure, she wrote it back in July, whereupon I saved it to Instapaper and didn’t read it until this week. I may not be totally on top of my list of things to read?35
  8. "Notes Towards a Mental Breakdown" (1967)
  9. The JoIR is a forum for papers that look and move like scientific papers, but are also a joke.
  10. The bartender is a die-hard Raiders fan; he happily launches into a diatribe about what a disaster the Las Vegas move has been, but that F1 race was pretty great. A few drinks in, he wants to tell you about his “radical” art installation in the back room? To go look, turn to footnote ω. To excuse yourself, turn to footnote 20
  11. I knew a girl in college whose ex-boyfriend described Basic Instinct as his favorite movie, and let me tell you, every assumption you just made about that guy is true.
  12. Although, in fairness, that JoIR paper was probably directly inspired by that one J. G. Ballard story.8
  13. This was absurdly hard21 to put together.39
  14. The elf pulls his hood back and asks: “Well met, traveller! What was your opinion of the book I loaned you?” He slides a copy of Brief Interviews with Hideous Men across the table. To have no opinion, turn to footnote d. To endorse it enthusiastically, turn to footnote α
  15. Or is it five?
  16. You’re right, that bar was sus. Good call, adventurer! To head further into town, turn to footnote 20. To head back out into the spooky woods, go to footnote 22.
  17. You stand in the doorway of a dark and mysterious tavern. Miscreants and desperadoes of all description fill the smoky, shadowed room. You’re looking for work. Your sword seems heavier than normal at your side as you step into the room. If you... Talk to the bartender, turn to footnote 10. Talk to the hooded Elf in the back corner, turn to footnote 14. To see what’s going on back outside, turn to footnote 16.
  18. "Glass Onion” from The Beatles, aka The White Album
  19. Superscript and anchor tags for the link out, then an ordered list where each List Item has to have a unique id so that those anchor tags can link back to them.23 (There’s a sketch of that markup just after this list.)
  20. You head into town. You find a decent desk job; you only mean to work there for a bit, but it’s comfortable and not that hard, so you stay. Years pass, and your sword grows dusty in the back room. You buy a minivan! You get promoted to Director of Internal Operations, which you can never describe correctly to anyone. Then, the market takes a downturn, and you’re one of the people who get “right sized”. They offer you a generous early retirement. To take it, turn to footnote 25. To decline, turn to footnote 22.
  21. But did you know that HTML doesn’t have actual support33 for footnotes?23
  22. You journey into the woods. You travel far, journeying across the blasted plains of Hawksroost, the isles of Ka’ah’wan-ah, you climb the spires of the Howling Mountains, you delve far below the labyrinth of the Obsidian Citadel; you finally arrive at the domain of the Clockwork Lord, oldest of all things. Its ancient faces turn towards you; you may ask a boon.

    “Is this all there is? Is there nothing more?”: Footnote 27

    “I wish for comfort and wealth!” Footnote 25

  23. I have no idea how this will look in most browsers.31
  24. This usually still isn’t a direct comment on the art itself, but on the other hand, healthy people don’t breathlessly rave about Basic Instinct,11 you know?
  25. Good call! You settle into a comfortable retirement in the suburbs. Your kids grow up, move out, grow old themselves. The years tick by. One day, when the grandkids are over, one of them finds your old sword in the garage. You gingerly pick it up, dislodging generations of cobwebs. You look down, and see old hands holding it, as if for the first time. You don’t answer when one of them asks what it is; you just look out the window. You can’t see the forest anymore, not since that last development went in. You stand there a long time.

    *** You have died ***

  26. And turnabout is fair play: I’ve watched people have this a-ha moment with me and Doctor Who.
  27. The Clockwork Lord has no expression you can understand, but you know it is smiling. “There is always more,” it says, in infinite kindness. “The door to the left leads to the details you are seeking. The door to the right has the answer you are lacking. You may choose.”

    Left: Footnote a

    Right: Footnote 29

  28. Other examples of this category off the top of my head: The Catcher in the Rye, MASH, all of Shakespeare.
  29. You step through the doorway, and find yourself in an unfamiliar house. There are people there, people you do not know. With a flash of insight, you realize the adult is your grandchild, far past the time you knew them, the children are your great-grandchildren, whom you have never met. You realize that you are dead, and have been for many years. All your works have been forgotten, adventures, jobs, struggles, lost as one more grain of sand on the shore of time. Your grandchild, now old themselves, is telling their child a story—a story about you. A minor thing, a trifle, something silly you did at a birthday party once. You had totally forgotten, but the old face of the 6-year-old whose party it was didn’t. They’re telling a story.

    Oh. You see it.

    *** You Have Ascended ***

  30. Fanatics, to coin a phrase?
  31. This feels like it should have one of those old “works best in Netscape Navigator” badges, except Netscape would choke on all this CSS.37
  32. Björk: (over the phone) I have to say I'm a great fan of triangles.

    Space Ghost: Well, I have to say that I am a great fan of Chuck Norris, and he was in the Delta Force, and the delta was a triangle.

  33. Instead you have to code19 them by hand.
  34. Yeah, I see what you did there.
  35. Okay, that's also a lie; I've actually been working on this on-and-off since July.13
  36. Cases in point: I, II, III, IIII
  37. Certified: “It works on my machine!”α
  38. Especially when they themselves don’t seem to like it?
  39. I mean, the writing itself was tricky enough, with three15 interleaving essays.
  40. Not annoying people, not assholes: obnoxious. Hard to define, but like pornography, you know it when you see it.
  41. On the other hand, back in the 90s these people were asking about Robert Anton Wilson or saying “fnord” at me, so some things have gotten better, I guess.
  42. That's not the problem. This is: Change. Read it through again and you'll get it.
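
(As promised back in footnote 19, here’s a minimal sketch of that hand-wired footnote markup, assuming nothing fancier than plain HTML; the ids, the class name, and the sample text are made up for illustration, not lifted from this site’s actual templates.)

<!-- The marker: a superscript wrapping an anchor that points down to the note. -->
<p>I think footnotes are funny.<sup id="fn-ref-1"><a href="#fn-1">1</a></sup></p>

<!-- The notes: an ordered list where each list item has a unique id,
     so the marker above has somewhere to land and the note can link back up. -->
<ol class="footnotes">
  <li id="fn-1">
    Or rather, funnier by surprise.
    <a href="#fn-ref-1">↩</a>
  </li>
</ol>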

Part 5: No Moral.

Gabriel L. Helman

Fully Automated Insults to Life Itself

In 20 years time, we’re going to be talking about “generative AI”, in the same tone of voice we currently use to talk about asbestos. A bad idea that initially seemed promising which ultimately caused far more harm than good, and that left a swathe of deeply embedded pollution across the landscape that we’re still cleaning up.

It’s the final apotheosis of three decades of valuing STEM over the Humanities, in parallel with the broader tech industry being gutted and replaced by a string of venture-backed pyramid schemes, casinos, and outright cons.

The entire technology is utterly without value and needs to be scrapped, legislated out of existence, and the people involved need to be forcibly invited to find something better to spend their time on. We’ve spent decades operating under the unspoken assumption that just because we can build something, that means it’s inevitable and we have to build it first before someone else does. It’s time to knock that off, and start asking better questions.

AI is the ultimate form of the joke about the restaurant where the food is terrible and also the portions are too small. The technology has two core problems, both of which are intractable:

  1. The output is terrible
  2. It’s deeply, fundamentally unethical

Probably the definite article on generative AI’s quality, or profound lack thereof, is Ted Chiang’s ChatGPT Is a Blurry JPEG of the Web; that’s almost a year old now, and everything that’s happened in 2023 has only underscored his points. Fundamentally, we’re not talking about vast cyber-intelligences, we’re talking Sparkling Autocorrect.

Let me provide a personal anecdote.

Earlier this year, a coworker of mine was working on some documentation, and had worked up a fairly detailed outline of what needed to be covered. As an experiment, he fed that outline into ChatGPT, intending to publish the output, and I offered to look over the result.

At first glance it was fine. Digging in, though, it wasn’t great. It wasn’t terrible either—nothing in it was technically incorrect, but it had the quality of a high school book report written by someone who had only read the back cover. Or like documentation written by a tech writer who had a detailed outline they didn’t understand and a word count to hit? It repeated itself, it used far too many words to cover very little ground. It was, for lack of a better word, just kind of a “glurge”. Just room-temperature tepidarium generic garbage.

I started to jot down some editing notes, as you do, and found that I would stare at a sentence, then the whole paragraph, before crossing the paragraph out and writing “rephrase” in the margin. To try and be actually productive, I took a section and started to rewrite it in what I thought was a better, more concise manner—removing duplicates, omitting needless words. De-glurgifying.

Of course, I discovered I had essentially reconstituted the outline.

I called my friend back and found the most professional possible way to tell him he needed to scrap the whole thing and start over.

It left me with a strange feeling, that we had this tool that could instantly generate a couple thousand words of worthless text that at first glance seemed to pass muster. Which is so, so much worse than something written by a junior tech writer who doesn’t understand the subject, because this was produced by something that you can’t talk to, you can’t coach, that will never learn.

On a pretty regular basis this year, someone would pop up and say something along the lines of “I didn’t know the answer, and the docs were bad, so I asked the robot and it wrote the code for me!” and then they would post some screenshots of ChatGPT’s output full of a terribly wrong answer. Humane’s AI Pin demo was full of wrong answers, for heaven’s sake. And so we get this trend where ChatGPT manages to be an expert in things you know nothing about, but a moron about things you’re an expert in. I’m baffled by the responses to the GPT-n “search” “results”; they’re universally terrible and wrong.

And this is all baked in to the technology! It’s a very, very fancy set of pattern recognition based on a huge corpus of (mostly stolen?) text, computing the most probable next word, but not in any way considering if the answer might be correct. Because it has no way to; that’s totally outside the bounds of what the system can achieve.

A year and a bit later, and the web is absolutely drowning in AI glurge. Clarkesworld had to suspend submissions for a while to get a handle on blocking the tide of AI garbage. Page after page of fake content with fake images, content no one ever wrote and only meant for other robots to read. Fake articles. Lists of things that don’t exist, recipes no one has ever cooked.

And we were already drowning in “AI”/“machine learning” glurge, and it all sucks. The autocorrect on my phone got so bad when they went from the hard-coded list to the ML one that I had to turn it off. Google’s search results are terrible. The “we found this answer for you” thing at the top of the search results is terrible.

It’s bad, and bad by design, it can’t ever be more than a thoughtless mashup of material it pulled in. Or even worse, it’s not wrong so much as it’s all bullshit. Not outright lies, but vaguely truthy-shaped “content”, freely mixing copied facts with pure fiction, speech intended to persuade without regard for truth: Bullshit.

Every generated image would have been better and funnier if you gave the prompt to a real artist. But that would cost money—and that’s not even the problem, the problem is that would take time. Can’t we just have the computer kick something out now? Something that looks good enough from a distance? If I don’t count the fingers?

My question, though, is this: what future do these people want to live in? Is it really this? Swimming in a sea of glurge? Just endless mechanized bullshit flooding every corner of the Web? Who looked at the state of the world here in the Twenties and thought “what the world needs right now is a way to generate Infinite Bullshit”?

Of course, the fact that the results are terrible-but-occasionally-fascinating obscure the deeper issue: It’s a massive plagiarism machine.

Thanks to copyleft and free & open source, the tech industry has a pretty comprehensive—if idiosyncratic—understanding of copyright, fair use, and licensing. But that’s the wrong model. This isn’t about “fair use” or “transformative works”, this is about Plagiarism.

This is a real “humanities and the liberal arts vs technology” moment, because STEM really has no concept of plagiarism. Copying and pasting from the web is a legit way to do your job.

(I mean, stop and think about that for a second. There’s no other industry on earth where copying other people’s work verbatim into your own is a widely accepted technique. We had a sign up a few jobs back that read “Expert level copy and paste from stack overflow” and people would point at it when other people had questions about how to solve a problem!)

We have this massive cultural disconnect that would be interesting or funny if it wasn’t causing so much ruin. This feels like nothing so much as the end result of valuing STEM over the Humanities and Liberal Arts in education for the last few decades. Maybe we should have made sure all those kids we told to “learn to code” also had some, you know, ethics? Maybe had read a couple of books written since they turned fourteen?

So we land in a place where a bunch of people convinced they’re the princes of the universe have sucked up everything written on the internet and built a giant machine for laundering plagiarism; regurgitating and shuffling the content they didn’t ask permission to use. There’s a whole end-state libertarian angle here too; just because it’s not explicitly illegal, that means it’s okay to do it, ethics or morals be damned.

“It’s fair use!” Then the hell with fair use. I’d hate to lose the wayback machine, but even that respects robots.txt.

I used to be a hard core open source, public domain, fair use guy, but then the worst people alive taught a bunch of if-statements to make unreadable counterfeit Calvin & Hobbes comics, and now I’m ready to join the Butlerian Jihad.

Why should I bother reading something that no one bothered to write?

Why should I bother looking at a picture that no one could be bothered to draw?

Generative AI and its ilk are the final apotheosis of the people who started calling art “content”, and meant it.

These are people who think art or creativity are fundamentally a trick, a confidence game. They don’t believe or understand that art can be about something. They utterly reject the concept of “about-ness”; the basic concept of “theme” is beyond their comprehension. The idea that art might contain anything other than its most surface qualities never crosses their mind. The sort of people who would say “Art should soothe, not distract”. Entirely about the surface aesthetic over anything else.

(To put that another way, these are the same kind of people who vote Republican but listen to Rage Against the Machine.)

Don’t respect or value creativity.

Don’t respect actual expertise.

Don’t understand why they can’t just have what someone else worked for. It’s even worse than not wanting to pay for it: these creatures actually think they’re entitled to it for free because they know how to parse a JSON file. It feels like the final end-point of a certain flavor of free software thought: no one deserves to be paid for anything. A key cultural and conceptual step past “information wants to be free” and “everything is a remix”. Just a machine that endlessly spits out bad copies of other work.

They don’t understand that these are skills you can learn, that you have to work at, become an expert in. Not one of these people who spend hours upon hours training models or crafting prompts ever considered using that time to learn how to draw. Because if someone else can do it, they should get access to that skill for free, with no compensation or even credit.

This is why those machine generated Calvin & Hobbes comics were such a shock last summer; anyone who had understood a single thing about Bill Watterson’s work would have understood that he’d be utterly opposed to something like that. It’s difficult to fathom someone who liked the strip enough to do the work to train up a model to generate new ones while still not understanding what it was about.

“Consent” doesn’t even come up. These are not people you should leave your drink uncovered around.

But then you combine all that with the fact that we have a whole industry of neo-philes, desperate to work on something New and Important, terrified their work might have no value.

(See also: the number of abandoned javascript frameworks that re-solve all the problems that have already been solved.)

As a result, tech has an ongoing issue with cool technology that’s a solution in search of a problem, but ultimately is only good for some kind of grift. The classical examples here are the blockchain, bitcoin, NFTs. But the list is endless: so-called “4th generation languages”, Rational Rose, the CueCat, basically anything that ever got put on the cover of Wired.

My go-to example is usually bittorrent, which seemed really exciting at first, but turned out to only be good at acquiring TV shows that hadn’t aired in the US yet. (As they say, “If you want to know how to use bittorrent, ask a Doctor Who fan.”)

And now generative AI.

There’s that scene at the end of Fargo, where Frances McDormand is scolding The Shoveler for “all this for such a tiny amount of money”, and that’s how I keep thinking about the AI grift carnival. So much stupid collateral damage we’re gonna be cleaning up for years, and it’s not like any of them are going to get Fuck You(tm) rich. No one is buying an island or founding a university here, this is all so some tech bros can buy the deluxe package on their next SUV. At least crypto got some people rich, and was just those dorks milking each other; here we all gotta deal with the pollution.

But this feels weirdly personal in a way the dunning-krugerrands were not. How on earth did we end up in a place where we automated art, but not making fast food, or some other minimum wage, minimum respect job?

For a while I thought this was something along one of the asides in David Graeber’s Bullshit Jobs, where people with meaningless jobs hate it when other people have meaningful ones. The phenomenon of “If we have to work crappy jobs, we want to pull everyone down to our level, not pull everyone up”. See also: “waffle house workers shouldn’t make 25 bucks an hour”, “state workers should have to work like a dog for that pension”, etc.

But no, these are not people with “bullshit jobs”, these are upper-middle class, incredibly comfortable tech bros pulling down a half a million dollars a year. They just don’t believe creativity is real.

But because all that apparently isn’t fulfilling enough, they make up ghost stories about how their stochastic parrots are going to come alive and conquer the world, how we have to build good ones to fight the bad ones, but they can’t be stopped because it’s inevitable. Breathless article after article about whistleblowers worried about how dangerous it all is.

Just the self-declared best minds of our generation failing the mirror test over and over again.

This is usually where someone says something about how this isn’t a problem and we can all learn to be “prompt engineers”, or “advisors”. The people trying to become a prompt advisor are the same sort who would be proud they convinced Immortan Joe to strap them to the back of the car instead of the front.

This isn’t about computers, or technology, or “the future”, or the inevitability of change, or the march of progress. This is about what we value as a culture. What do we want?

“Thus did a handful of rapacious citizens come to control all that was worth controlling in America. Thus was the savage and stupid and entirely inappropriate and unnecessary and humorless American class system created. Honest, industrious, peaceful citizens were classed as bloodsuckers, if they asked to be paid a living wage. And they saw that praise was reserved henceforth for those who devised means of getting paid enormously for committing crimes against which no laws had been passed. Thus the American dream turned belly up, turned green, bobbed to the scummy surface of cupidity unlimited, filled with gas, went bang in the noonday sun.” ― Kurt Vonnegut, God Bless You, Mr. Rosewater

At the start of the year, the dominant narrative was that AI was inevitable, this was how things are going, get on board or get left behind.

That’s… not quite how the year went?

AI was a centerpiece in both Hollywood strikes, and both the Writers and Actors basically ran the table, getting everything they asked for, and enshrining a set of protections from AI into a contract for the first time. Excuse me, not protection from AI, but protection from the sort of empty suits that would use it to undercut working writers and performers.

Publisher after publisher has been updating their guidelines to forbid AI art. A remarkable number of other places that support artists instituted guidelines to ban or curtail AI. Even Kickstarter, which plunged into the blockchain with both feet, seemed to have learned their lesson and rolled out some pretty stringent rules.

Oh! And there’s some actual high-powered lawsuits bearing down on the industry, not to mention investigations of, shall we say, “unsavory” material in the training sets?

The initial shine seems to be off; where last year was all about sharing goofy AI-generated garbage, there’s been a real shift in the air as everyone gets tired of it and starts pointing out that it sucks, actually. And that the people still boosting it all seem to have some kind of scam going. Oh, and in a lot of cases, it’s literally the same people who were hyping blockchain a year or two ago, and who seem to have found a new use for their warehouses full of GPUs.

One of the more heartening and interesting developments this year was the (long overdue) start of a re-evaluation of the Luddites. Despite the popular stereotype, they weren’t anti-technology, but anti-technology-being-used-to-disenfranchise-workers. This seems to be the year a lot of people sat up and said “hey, me too!”

AI isn’t the only reason “hot labor summer” rolled into “eternal labor september”, but it’s pretty high on the list.

There’s an argument that’s sometimes made that we don’t have any way as a society to throw away a technology that already exists, but that’s not true. You can’t buy gasoline with lead in it, or hairspray with CFCs, and my late lamented McDLT vanished along with the Styrofoam that kept the hot side hot and the cold side cold.

And yes, asbestos made a bunch of people a lot of money and was very good at being children’s pyjamas that didn’t catch fire, as long as that child didn’t need to breathe as an adult.

But, we've never done that for software.

Back around the turn of the century, there was some argument about whether cryptography software should be classified as a munition. The Feds wanted stronger export controls, and there was a contingent of technologists who thought, basically, “Hey, it might be neat if our compiler had first and second amendment protection”. Obviously, that didn’t happen. “You can’t regulate math! It’s free expression!”

I don’t have a fully developed argument on this, but I’ve never been able to shake the feeling like that was a mistake, that we all got conned while we thought we were winning.

Maybe some precedent for heavily “regulating math” would be really useful right about now.

Maybe we need to start making some.

There’s a persistent belief in computer science, going back to when computers were invented, that brains are a really fancy powerful computer, and that if we can just figure out how to program them, intelligent robots are right around the corner.

There’s an analogy that floats around that says if the human mind is a bird, then AI will be a plane: flying, but a very different application of the same principles.

The human mind is not a computer.

At best, AI is a paper airplane. Sometimes a very fancy one! With nice paper and stickers and tricky folds! But the key is that a hand has to throw it.

The act of a person looking at a bunch of art and trying to build their own skills is fundamentally different than a software pattern recognition algorithm drawing a picture from pieces of other ones.

Anyone who claims otherwise has no understanding of creativity beyond the abstract. The creative impulse is fundamental to the human condition. Everyone has it. In some people it’s repressed, or withered, or undeveloped, but it’s always there.

Back in the early days of the pandemic, people posted all these stories about the “crazy stuff they were making!” It wasn’t crazy, that was just the urge to create, it’s always there, and capitalism finally got quiet enough that you could hear it.

“Making Art” is what humans do. The rest of society is there so we stay alive long enough to do so. It’s not the part we need to automate away so we can spend more time delivering value to the shareholders.

AI isn’t going to turn into skynet and take over the world. There won’t be killer robots coming for your life, or your job, or your kids.

However, the sort of soulless goons who thought it was a good idea to computer automate “writing poetry” before “fixing plumbing” are absolutely coming to take away your job, turn you into a gig worker, replace whoever they can with a chatbot, keep all the money for themselves.

I can’t think of anything more profoundly evil than trying to automate creativity and leaving humans to do the grunt manual labor.

Fuck those people. And fuck everyone who ever enabled them.

Gabriel L. Helman

It’ll Be Worth It

An early version of this got worked out in a sprawling Slack thread with some friends. Thanks for helping me work out why my perfectly nice neighbor’s garage banner bugs me, fellas.

There’s this house about a dozen doors down from mine. Friendly people, I don’t really know them, but my son went to school with their youngest kid, so we kinda know each other in passing. They frequently have the door to the garage open, and they have some home gym equipment, some tools, and a huge banner that reads in big block capital letters:

NOBODY CARES WORK HARDER

My reaction is always to recoil slightly. Really, nobody? Even at your own home, nobody? And I think “you need better friends, man. Maybe not everyone cares, but someone should.” I keep wanting to say “hey man, I care. Good job, keep it up!” It feels so toxic in a way I can’t quite put my finger on.

And, look, I get it. It’s a shorthand to communicate that we’re in a space where the goal is what matters, and the work is assumed. It’s a very sports-oriented worldview, where the message is that the complaints don’t matter, only the results matter. But my reaction to things like that from coaches in a sports context was always kinda “well, if no one cares, can I go home?”

(Well, that, and I would always think “I’d love to see you come on over to my world and slam into a compiler error for 2 hours and then have me tell you ‘nobody cares, do better’ when you ask for help and see how you handle that. Because you would lose your mind”)

Because that’s the thing: if nobody cared, we wouldn’t be here. We’re here because we think everyone cares.

The actual message isn’t “nobody cares,” but:

“All this will be worth it in the end, you’ll see”

Now, that’s a banner I could get behind.

To come at it another way, there’s the goal and there’s the work. Depending on the context, people care about one or the other. I used to work with some people who would always put the number of hours spent on a project as the first slide of their final read-outs, and the rest of us used to make terrible fun of them. (As did the execs they were presenting to.)

And it’s not that the “seventeen million hours” wasn’t worth celebrating, or that we didn’t care about it, but that the slide touting it was in the wrong venue. Here, we’re in an environment where we only care about the end goal. High fives for working hard go in a different meeting, you know?

But what really, really bugs me about that banner specifically, and things like it, is that they’re so fake. If you really didn’t think anyone cares, you wouldn’t hang a banner up where all your neighbors could see it over your weight set. If you really thought no one cared, you wouldn’t even own the exercise gear, you’d be inside doing something you want to do! Because no one has to hang a “WORK HARDER” banner over a Magic: The Gathering tournament, or a plant nursery, or a book club. But no, it’s there because you think everyone cares, and you want them to think you’re cool because you don’t have feelings. A banner like that is just performative; you hang something like that because you want others to care about you not caring.

There’s a thing where people try and hold up their lack of emotional support as a kind of badge of honor, and like, if you’re at home and really nobody cares, you gotta rethink your life. And if people do care, why do you find pretending they don’t motivating? What’s missing from your life such that pretending you’re on your own is better than embracing your support?

The older I get, the less tolerance I have for people who think empathy is some kind of weakness, that emotional isolation is some kind of strength. The only way any of us are going to get through any of this is together.

Work Harder.

Everyone Cares.

It’ll Be Worth It.

Gabriel L. Helman

Books I read in 2022, part 3

(That I have mostly nice things to say about)

Programming note: while clearing out the drafts folder as I wind the year down, I discovered that, much to my amusement and surprise, I wrote most of the third post on the books I read last year, but somehow never posted it? One editing & expansion pass later, and here it is.

Previously, Previously.

Neil Gaiman's Chivalry, adapted by Colleen Doran

A perfect jewel of a book. The story is slight, but sweet. Doran’s art, however, is gorgeous, perfectly sliding between the style of an illuminated manuscript and watercolor paintings. A minor work by two very capable artists, but clearly a labor of love, and tremendous fun.

The Murderbot Diaries 1&2: All Systems Red and Artificial Condition by Martha Wells

As twitter started trending towards its final end last summer, I decided I’d better start buying some of the books I’d been seeing people enthuse about. There was a stretch there where it seemed like my entire timeline was praise for Murderbot.

For reasons due entirely to my apparent failures of reading comprehension, I was expecting a book starring, basically, HK-47 from Knights of the Old Republic. A robot clanking around, calling people meatbags, wanting to be left alone, and so on.

The actual books are so much better than that. Instead, it’s a story about your new neurodivergent best friend, trying to figure themselves out and be left alone while they do it. It’s one of the very best uses of robots as a metaphor for something else I’ve ever seen, and probably the first new take on “what do we use robots for besides an Asimov riff” since Blade Runner. It was not what I expected at all, or really in the mood for at the time, and I still immediately bought the next book.

Some other MoonKnights not worth typing the whole titles of

All pumped after the Lemire/Smallwood stuff, I picked up a few other more recent MoonKnights. I just went downstairs and flipped through them again, and I don’t remember a single thing about them. They were fine, I guess?

The Sandman by Neil Gaiman and others

Inspired by the Netflix show (capsule review: great casting, visually as dull as dishwater, got better the more it did its own thing and diverged from the books) I went back and read the original comic run for the first time since the turn of the century. When I was in college, there was a cohort of mostly gay, mostly goth kids for whom Sandman was everything. I was neither of those things, but hung out in the subcultures next door, as you will. I liked it fine, and probably liked it more than I would have normally because of how many good friends loved it.

Nearly three decades later, I had a very similar reaction. It always worked best when it moved towards more of a light-horror anthology, where a rotating batch of artists would illustrate stories where deeply weird things happened and then Morpheus would show up at the end and go “wow, that’s messed up.” There’s a couple of things that—woof—haven’t aged super well? Overall, though, still pretty good.

Mostly, though, it made me nostalgic for those people I used to know who loved it so much. I hope they’re all doing well!

Death, the Deluxe Edition

Everything I had to say about Sandman goes double for Death.

Sandman Overture by Neil Gaiman and J.H. Williams III

I never read this when it came out, but I figured as long as I was doing a clean sweep of the Sandman, it was time to finally read it. A lot of fun, but I don’t believe for a hot second this is what Gaiman had in mind when he wrote the opening scene of the first issue back in the late 80s.

The art, though! The art on the original series operated on a spread from “pretty good” to “great for the early 90s”. No insult to the artists, but what the DC production pipeline and tooling could do at the time was never quite up to what Sandman really seemed to call for. And, this is still well before “what if every issue had good art, tho” was the standard for American comics.

The art here is astounding. Page after page of amazing spreads. You can feel Gaiman nodding to himself, thinking “finally! This was how this was always supposed to look.”

Iron Widow by Xiran Jay Zhao

Oh heck yes, this is the stuff. A (very) loose retelling of the story of Wu Zetian, the first and only female Chinese emperor, in a futuristic setting where animal-themed mechs have Dragonball Z fights. It’s the sort of book where you know exactly how it’s going to end, but the fun is seeing how the main character pulls it off. I read it in one sitting.

Dungeons & Dragons Spelljammer: Adventures in Space

Oh, what a disappointment.

Let’s back up for a sec. Spelljammer was an early-90s 2nd Edition D&D setting, which boiled down to essentially “magical sailing ships in space”, using a Ptolemaic-style cosmology. It was a soft replacement for the Manual of the Planes, as a way to link campaign worlds together and provide “otherworldly” adventures without having to get near the demons and other supernatural elements that had become a problem during the 80s “satanic panic.” (It would ultimately be replaced by Planescape, which brought all that back and then some.)

Tone-wise, Spelljammer was basically “70s van art”. It was never terribly successful, and thirty years on it was mostly a trivia answer, although fondly remembered by a small cadre of aging geeks. As should be entirely predictable, I loved it.

Initially, 5th edition wasn’t interested in past settings other than the deeply boring Forgotten Realms. But as the line continued, and other settings started popping back up, Spelljammer started coming up. What if? And then, there it was.

For the first time in the game’s history, 5th edition found a viable product strategy: three roughly 225-to-250-page hardcovers a year, two adventures and one some kind of rules expansion. The adventures occasionally contained a new setting, but the setting was always there to support the story, rather than the other way around.

Spelljammer was going to be different: a deluxe boxed set with a DM screen and three 64-page hardcovers, a setting and rules book, a monster book, and an adventure. (Roughly mirroring the PHB, DMG, MM core books.)

The immediate problem will be obvious to anyone good at mental arithmetic: three 64-page books comes out to 192 pages, so as a whole the product was 30 to 60 pages shorter than normal, and it felt like it. Worse, the structure of the three hardbacks meant that the monster book and adventure got way more space than they needed, crushing the actual setting material down even further.

As a result, there’s so much that just isn’t there. The setting is boiled down to the barest summary; all the chunky details are gone. As the most egregious example, in the original version The Spelljammer is a legendary ship akin to the Flying Dutchman; that ship makes up the background of the original logo. The Spelljammer herself isn’t even mentioned in the new version.

Even more frustrating, what is here is pretty good. They made some very savvy changes to better fit with everything else (Spelljammers now travel through the “regular” astral plane instead of “the phlogiston”, for example). But overall it feels like a sketch for what a 5E Spelljammer could look like instead of a finished product.

This is exacerbated by the fact that this release also contains most of a 5E Dark Sun. One of the worst-kept secrets in the industry was that Hasbro had a 5E Dark Sun book under development that was scrapped before release. The races and creatures from that effort ended up here. Dark Sun also gets an amazing cameo: the adventure includes a stop in “Doomspace”, a solar system where the star has become a black hole, and the inhabited planet is just on the cusp of being sucked in. While the names are all slightly changed, this is blatantly supposed to be the final days of Athas. While I would have been first in line to pick up a 5E Dark Sun, having the setting finally collapse in on itself in another product entirely is a perfect end to the setting. I kind of loved it.

Finally, Spelljammer had some extremely racist garbage in it. To the extent that it’s hard to believe that these books had any editorial oversight at all. For a product that had the physical trappings (and price) of a premium product, the whole package came across as extremely half-assed. Nowhere more so than in the fact that they let some white supremacist shit sail through unnoticed.

Spelljammer, even more so than the OGL shitshow, caused me to fundamentally reassess my relationship with the company that owns D&D. I still love the game, but they’re going to need to prove it to me before I buy anything else from them. Our support should be going elsewhere.

Ducks: Two Years in the Oil Sands by Kate Beaton

Back during the mid-00s webcomics boom, there were a lot of webcomics that were good for webcomics, but a much smaller set that were good for comics, full stop. Kate Beaton’s Hark a Vagrant! stood head and shoulders above that second group.

Most of the people who made webcomics back then have moved on, using their webcomic to open doors to other—presumably better paying—work. Most of them have moved on from the styles from their web work. To use one obvious and slightly cheap example, Ryan North’s Squirrel Girl has different panels on each page, you know?

One of the many, many remarkable things about Ducks is that it’s recognizably the same style as Hark a Vagrant!, just deployed for a different purpose. All her skills as a storyteller and cartoonist are on display here, her ability to capture expressions with only a few lines, the sharp wit, the impeccable timing, but this book is not even remotely funny.

It chronicles the years she spent working in the oil sands of Alberta: a strange, remote place, full of people, mostly men, trying to make enough money to leave.

Other than a brief introduction, the book has no intrusions from the future; there’s no narration contextualizing the events. Instead, it plays out as a series of vignettes of her life there, and she trusts that the reader is smart enough to understand why she’s telling these stories in this order.

It’s not a spoiler, or much of one anyway, to say that a story about a young woman in a remote, nearly all-male environment goes the way you hope it doesn’t. There’s an incredible tension to the first half of the book where you know something terrible is going to happen; it’s a horrible relief when it finally does.

As someone closer in age to her parents than to her when all this happened, I found myself in a terrible rage at them as I read it—how could you let her do this? How could you let this happen? But they didn’t know. And there was nothing they could do.

It was, by far, the best book I read last year. It haunts my memory.

Jenny Sparks: The Secret History of the Authority by a bunch of hacks

I loved the original run on The Authority 20 years ago, and Jenny Sparks is one of my all-time favorite comic book characters, but I had never read Millar’s prequel miniseries about her. I picked up a copy in a used bookstore. I wish I hadn’t. It was awful.

She-Hulk omnibus 1 by Dan Slott et al

Inspired by the Disney+ show (which I loved) I picked up the first collection of the early-00s reboot of She-Hulk. I had never read these, but I remember what a great reception they got at the time. But… this wasn’t very good? It was far too precious, the 4th wall breaking way too self-conscious. A super-hero law firm with a basement full of every Marvel comic as a caselaw library is a great one-off joke, but a terrible ongoing premise. The art was pretty good, though.

She-Hulk Epic Collection: Breaking the Fourth Wall by John Byrne, Steve Gerber, and Others

On the other hand, this is the stuff. Byrne makes the 4th wall breaks look easy, and there’s a joy and flow to the series that the later reboots lack. She-Hulk tearing out of the page and screaming in rage at the author is an absolute delight. And then, when Byrne leaves, he’s replaced by Steve “Howard the Duck” Gerber, and it got even better.

Valuable Humans in Transit and other stories by qntm

This short story collection includes the most upsetting horror story I’ve ever read, Lena, and the sequel, which manages to be even worse. Great writing, strongly recommended.

Gabriel L. Helman

Good Adaptations and the Lord of the Rings at 20 (and 68)

What makes a good book-to-movie adaptation?

Or to look at it the other way, what makes a bad one?

Books and movies are very different mediums and therefore—obviously—are good at very different things. Maybe the most obvious difference is that books are significantly more information-dense than movies are, so any adaptation has to pick and choose what material to keep.

The best adaptations, though, are the ones that keep the themes and characters—what the book is about—and move around, eliminate, or combine the incidents of the plot to support them. The most successful, like Jaws or Jurassic Park for example, are arguably better than their source material, jettisoning extraneous sideplots to focus on the main concepts.

Conversely, the worst adaptations are the ones that drop the themes and change the point of the story. Stephen King somewhat famously hates the movie version of The Shining because he wrote a very personal book about his struggle with alcoholism disguised as a haunted hotel story, and Kubrick kept the ghosts but not the rest. The movie version of The Hitch-Hiker’s Guide to the Galaxy was made by people who thought the details of the plot were more important than the jokes, rather than the other way around, and didn’t understand why the Nutri-Matic was bad.

And really, it’s the themes, the concepts, the characters, that make stories appeal to us. It’s not the incidents of the plot we connect to, it’s what the story is about. That’s what we make the emotional connection with.

And this is part of what makes a bad adaptation so frustrating.

While the existence of a movie doesn’t erase the book it was based on, it’s a fact that movies have higher profiles, reach bigger audiences. So it’s terribly disheartening to have someone tell you they watched a movie based on that book you like that they didn’t read, when you know all the things that mattered to you didn’t make it into the movie.

And so we come to The Lord of the Rings! The third movie, Return of the King, turned 20 this week, and those movies are unique in that you’ll think they’re either a fantastic or a terrible adaptation based on which character was your favorite.

Broadly speaking, Lord of the Rings tells two stories in parallel. The first is a big epic fantasy, with Dark Lords, and Rings of Power, and Wizards, and Kings in Exile. Strider is the main character of this story, with a supporting cast of Elves, Dwarves, and Horse Vikings. The second is a story about some regular guys who are drawn into a terrifying and overwhelming adventure, and return home, changed by the experience. Sam is the main character of the second story, supported by the other Hobbits.

(Frodo is an interestingly transgressive character, because he floats between the two stories, never committing to either. But that’s a whole different topic.)

And so the book switches modes based on which characters are around. The biggest difference between the modes is the treatment of the Ring. When Strider or Gandalf or any other character from the first story is around, the Ring is the most evil thing in existence—it has to be. So Gandalf refuses to take it, Galadriel recoils, it’s a source of unstoppable corruption.

But when it’s just the Hobbits, things are different. That second story is both smaller and larger at the same time—constantly cutting the threat of the Ring off at the knees by showing that there are larger and older things than the Ring, and pointing out that it’s the small things that really matter. So Tom Bombadil is unaffected, Faramir gives it back without temptation, Sam sees the stars through the clouds in Mordor. There are greater beauties and greater powers than some artifact could ever be.

This is, to be clear, not a unique structure. To pull an obvious example, Star Wars does the same thing, paralleling the story of Luke, a kid from the sticks who leaves home and grows into his own person, with the epic struggle for the future of the entire galaxy between the Evil Galactic Empire and the Rebel Alliance. In keeping with that movie’s clockwork structure, Lucas manages to have the climax of both stories be literally the exact same moment—Luke firing the torpedoes into the exhaust port.

Tolkien is up to something different, however, and climaxes his two stories fifty pages apart. The Big Fantasy Epic winds down, and then the cast reduces to the Hobbits again and they go home, where they have to use everything they’ve learned to solve their own problems instead of helping solve somebody else’s.

In my experience, everyone connects more strongly with one of the two stories. This tends to boil down to who your favorite character is—Strider or Sam. Just about everyone picks one of those two as their favorite. It’s like Elvis vs. The Beatles; most people like both, but everyone has a preference.

(And yeah, there’s always some wag that says Boromir/The Who.)

Just to put all my cards on the table, my favorite character is Sam. (And I prefer The Beatles.)

Based on how the beginning and end of the books work, it seems clear that Tolkien thought of that story—the regular guys changed by the wide world—as the “main one”, and the Big Epic was there to provide a backdrop.

There’s an entire cottage industry of people explaining what “Tolkien really meant” in the books, and so there’s not a lot of new ground to cover there, so I’ll just mention that the “regular dudes” story is clearly the one influenced—not “based on”, but influenced—by his own WWI experiences and move on.

Which brings us back to the movies.

Even with three very long movies, there’s a lot more material in the books than could possibly fit. And, there’s an awful lot of things that are basically internal or delivered through narration that need dramatizing in a physical way to work as a film.

So the filmmakers made the decision to adapt only that first story, and jettison basically everything from the second.

This is somewhat understandable? That first story has all the battles and orcs and wargs and wizards and things. That second story, if you’re coming at it from the perspective of trying to make an action movie, is mostly Sam missing his garden? From a commercial point of view, it’s hard to fault the approach. And the box office clearly agreed.

And this perfectly explains all the otherwise bizarre changes. First, anything that undercuts the Ring has to go. So, we cut Bombadil and everything around him for time, yes, but also we can’t have a happy guy with a funny hat shake off the Ring in the first hour before Elrond has even had a chance to say any of the spooky lines from the trailer. Faramir has to be a completely different character with a different role. Sam and Frodo’s journey across the plains of Mordor has to play out differently, because the whole movie has to align on how terrible the Ring is, and no stars can peek through the clouds to give hope, no pots can clatter into a crevasse to remind Sam of home. Most maddeningly, Frodo has to turn on Sam, because the Ring is all-powerful, and we can’t have an undercurrent showing that there are some things even the Ring can’t touch.

In the book, Sam is the “hero’s journey” character. But, since that whole story is gone, he gets demoted to comedy sidekick, and Aragorn is reimagined into that role, and as such needs all the trappings of the Hero with a Thousand Faces retrofitted on to him. Far from the confident, legendary superhero of the books, he’s now full of doubt, and has to Refuse the Call, have a mentor, cross A Guarded Threshold, suffer ordeals, because he’s now got to shoulder a big chunk of the emotional storytelling, instead of being an inspirational icon for the real main characters.

While efficient, this all has the effect of pulling out the center of the story—what it’s about.

It’s also mostly crap, because the grafted-on hero’s journey stuff doesn’t fit well. Meanwhile, one of the definitive Campbell-style narratives is lying on the cutting room floor.

One of the things that makes Sam such a great character is his stealth. He’s there from the very beginning, present at every major moment, an absolutely key element in every success, but the book keeps him just out of focus—not “off stage”, but mostly out of the spotlight.

It’s not until the last scene—the literal last line—of the book that you realize that he was actually the main character the whole time, you just didn’t notice.

The hero wasn’t the guy who became King, it was the guy who became mayor.

He’s why my laptop bag always has a coil of rope in the side pocket—because you’ll want it if you don’t have it.

(I also keep a towel in it, because it’s a rough universe.)

And all this is what makes those movies so terribly frustrating—because they are an absolutely incredible adaptation of the Epic Fantasy parts. Everything looks great! The design is unbelievable! The acting, the costumes, the camera work. The battles are amazing. Helm’s Deep is one of those truly great cinematic achievements. My favorite shot in all three movies—and this is not a joke—is the shot of the orc with the torch running towards the piled up explosives to breach the Deeping Wall like he’s about to light the Olympic torch. And, in the department of good changes, the cut-down speech Theoden gives in the movie as they ride out to meet the invaders—“Ride for ruin, Ride for Rohan!”—is an absolutely incredible piece of filmmaking. The Balrog! And, credit where credit is due, everything with Boromir is arguably better than in the book, mostly because Sean Bean makes the character into an actual character instead of a walking skepticism machine.

So if those parts were your jam, great! Best fantasy movies of all time! However, if the other half was your jam, all the parts that you connected to just weren’t there.

I’m softer on the “breakdancing wizards” fight from the first movie than a lot of fellow book purists, but my goodness do I prefer Gandalf’s understated “I liked white better,” over Magneto yelling about trading reason for madness. I understand wanting to goose the emotion, but I think McKellen could have made that one sing.

There’s a common complaint about the movie that it “has too many endings.” And yeah, the end of the movie version of Return of the King is very strange, playing out a whole series of what amount to head-fake endings and then continuing to play for another half an hour.

And the reason is obvious—the movie leaves the actual ending out! The actual ending is the Hobbits returning home and using everything they’ve learned to save the Shire; the movie cuts all that, and tries to cobble a resolution out of the intentionally anti-climactic falling action that’s supposed to lead into that.

Lord of the Rings: the Movie, is a story about a D&D party who go on an exciting grueling journey to destroy an evil ring, and then one of them becomes the King. Lord of the Rings: the Book, is a story about four regular people who learn a bunch of skills they don’t want to learn while doing things they don’t want to do, and then come home and use those skills to save their family and friends.

I know which one I prefer.

What makes a good adaptation? Or a bad one?

Does it matter if the filmmakers are on the same page as the author?

What happens when they’re only on the same page with half of the audience?

The movies are phenomenally well made, incredibly successful films that took one of the great heroes of fiction and sandblasted him down to the point where there’s a whole set of kids under thirty who think his signature moment was yelling “po-TAY-toes” at some computer animation.

For the record: yes, I am gonna die mad about it.

Gabriel L. Helman

Layoff Season(s)

Well, it’s layoff season again, which pretty much never stopped this year? I was going to bury a link or two to an article in that last sentence, but you know what? There’s too many. Especially in tech, or tech-adjacent fields, it’s been an absolute bloodbath this year.

So, why? What gives?

I’ve got a little personal experience here: I’ve been through three layoffs now, lost my job once, shoulder-rolled out of the way for the other two. I’ve also spent the last couple decades in and around “the tech industry”, which here we use as shorthand for companies that are either actually a Silicon Valley software company, or a company run by folks that used to/want to be from one, with a strong software development wing and at least one venture capital–type on the board.

In my experience, Tech companies are really bad at people. I mean this holistically: they’re bad at finding people, bad at hiring, and then when they do finally hire someone, they’re bad at supporting those people—training, “career development”, mentoring, making sure they’re in the right spot, making sure they’re successful. They’re also bad at any kind of actual feedback cycle, either to reward the excellent or terminate underperformers. As such, they’re also bad at keeping people. This results in the vicious cycle that puts the average time in a tech job at about 18 months—why train them if they’re gonna leave? Why stay if they won’t support me?

There are pockets where this isn’t true, of course; individual managers, or departments, or groups, or even glue-type individuals holding the scene together that handle this well. I think this is all a classic “don’t attribute to malice what you can attribute to incompetence” situation. I say this with all the love in the world, but people who are good at tech-type jobs tend to be low-empathy lone wolf types? And then you spend a couple decades promoting the people from that pool, and “ask your employees what they need” stops being common sense and is suddenly some deep management koan.

The upshot of all this is that most companies with more than a dozen or two employees have somewhere between 10–20% of the workforce that isn’t really helping out. Again—this isn’t their fault! The vast majority of those people would be great employees in a situation that’s probably only a tiny bit different than the one you’re in. But instead you have the one developer who never seems to get anything done, the other developer whose work always fails QA and needs a huge amount of rework, the person who only seems to check hockey scores, the person who’s always in meetings, the other person who’s always in “meetings.” That one guy who always works on projects that never seem to ship.1 The extra managers that don’t seem to manage anyone. And, to be clear, I’m talking about full-time salaried people. People with a 401(k) match. People with a vesting schedule.

No one doing badly enough to get fired, but not actually helping row the boat.

As such, at basically any point any large company—and by large I mean over about 50—can probably do a 10% layoff and actually move faster afterwards, and do a 20% layoff without any significant damage to the annual goals—as long as you don’t have any goals about employee morale or well-being. Or want to retain the people left.

The interesting part—and this is the bad interesting, to be clear—is if you can fire 20% of your employees at any time, when do you do that?

In my experience, there’s two reasons.

First, you drop them like a submarine blowing the ballast tanks. Salaries are the biggest expense center, and in a situation where the line isn’t going up right, dropping 20% of the cost is the financial equivalent of the USS Dallas doing an emergency surface.

Second, you do it to discipline labor. Is the workforce getting a little restless? Unhappy about the stagnant raises? Grumpy about benefits costing more? Is someone waving around a copy of Peopleware?2 Did the word “union” float across the courtyard? That all shuts down real fast if all those people are suddenly sitting between two empty cubicles. “Let’s see how bad they score the engagement survey if the unemployment rate goes up a little!” Etc.

Again—this is all bad! This is very bad! Why do any of this?

The current wave feels like a combo plate of both reasons. On the one hand, we have a whole generation of executive leaders that have never seen interest rates go up, so they’re hitting the one easy panic button they have. But mostly this feels like a tantrum by the c-suite class reacting to “hot labor summer” becoming “eternal labor september.”

Of course, this is where I throw up my hands and have nothing to offer except sympathy. This all feels so deeply baked in to the world we live in that it seems unsolvable short of a solution that ends with us all wearing leather jackets with only one sleeve.

So, all my thoughts with everyone unexpectedly jobless as the weather gets cold. Hang on to each other, we’ll all get through this.


  1. At one point in my “career”, the wags in the cubes next to mine made me a new nameplate that listed my job as “senior shelf-ware engineer.” I left it up for months, because it was only a little bit funny, but it was a whole lot true.

  2. That one was probably me, sorrryyyyyy (not sorry)

Gabriel L. Helman

Re-Capturing the Commons

The year’s winding down, which means it’s time to clear out the drafts folder. Let me tell you about a trend I was watching this year.

Over the last couple of decades, a business model has emerged that looks something like this:

  1. A company creates a product with a clear sales model, but one that doesn’t have value without a strong community
  2. The company then fosters such a community, which then steps in and shoulders a fair amount of the work of running said community
  3. The community starts creating new things on top of the original work of the parent company, and—this is important—those new things belong to the community members, not the company
  4. This works well enough that the community starts selling additional things to each other—critically, these aren’t competing with the parent company; instead we have a whole “third party ecosystem”.

(Hang on, I’ll list some examples in a second.)

These aren’t necessarily “open source” from a formal OSI “Free & Open Source Software” perspective, but they’re certainly open source–adjacent, if you will. Following the spirit, if not the strict legal definition.

Then, this year especially, a whole bunch of those types of companies decided that they wouldn’t suffer anyone else making things they don’t own in their own backyard, and tried to reassert control over the broader community efforts.

Some specific examples of what I mean:

  • The website formerly known as Twitter eliminates 3rd party apps, restricts the API to nothing, and blocks most open web access.
  • Reddit does something similar, effectively eliminates 3rd party clients and gets into an extended conflict with the volunteer community moderators.
  • StackOverflow and the rest of the StackExchange network also gets into an extended set of conflicts with its community moderators, tries to stop releasing the community-generated data for public use, revises license terms, and descends into—if you’ll forgive the technical term—a shitshow.
  • Hasbro tries to not only massively restrict the open license for future versions of Dungeons and Dragons, but also makes a move to retroactively invalidate the Open Game License that covered material created for the 3rd and 5th editions of the game over the last 20 years.

And broadly, this is all part of the Enshittification Curve story. And each of these examples has a whole set of unique details. Tens, if not hundreds of thousands of words have been written on each of these, and we don’t need to re-litigate those here.

But there’s a specific sub-trend here that I think is worth highlighting. Let’s look at what those four have in common:

  • Each had, by all accounts, a successful business model. After-the-fact grandstanding notwithstanding, none of those four companies was in financial trouble, and each had a clear story about how it got paid. (Book sales, ads, etc.)
  • They all had a product that was absolutely worthless without an active community. (The D&D player’s handbook is a pretty poor read if you don’t have people to play with, reddit with no comments is just an ugly website, and so on)
  • Community members were doing significant heavy lifting that the parent company was literally unable to do. (Dungeon Mastering, community moderating. Twitter seems like the outlier here at first glance, but recall that hashtags, threads, the word “tweet” and literally using a bird as a logo all came from people not on twitter’s payroll.)
  • There were community members that made a living from their work in and around the community, either directly or indirectly. (3rd party clients, actual play streams, turning a twitter account about things your dad says into a network sitcom. StackOverflow seems like the outlier on this one, until you remember that many, many people use their profiles there as a kind of auxiliary outboard resume.)
  • They’ve all had recent management changes; more to the point, the people who designed the open source–adjacent business model are no longer there.
  • These all resulted in huge community pushback

So we end up in a place where a set of companies decided that no one but them could make money in their domains, and set their communities on fire. There was a lot of handwaving about AI as an excuse, but mostly that’s just “we don’t want other people to make money” with extra steps.

To me, the most enlightening one here is Hasbro, because it’s not a tech company and D&D is not a tech product, so the usual tech excuses for this kind of behavior don’t fly. So let’s poke at that one for an extra paragraph or two:

When the whole OGL controversy blew up back at the start of the year, certain quarters made a fair amount of noise about how this was a good thing, because actually, most of what mattered about D&D wasn’t restrict-able, or was in the public domain, and good old fair use was a better deal than the overly-restrictive OGL, and that the community should never have taken the deal in the first place. And this is technically true, but only in the ways that don’t matter.

Because, yes. The OGL, as written, is more restrictive than fair use, and strict adherence to the OGL prevents someone from doing things that should otherwise be legal. But that misses the point.

Because what we’re actually talking about is an industry with one multi-billion dollar company—the only company on earth that has literal Monopoly money to spend—and a whole bunch of little tiny companies with less than a dozen people. So the OGL wasn’t a crummy deal offered between equals; it was the entity with all the power in the room declaring a safe harbor.

Could your two-person outfit selling PDFs online use stuff from Hasbro’s book without permission legally? Sure. Could you win the court case when they sue you before you lose your house? I mean, maybe? But probably not.

And that’s what was great about it. For two decades, it was the deal: accept these slightly more restrictive terms, and you can operate with the confidence that your business, and your house, is safe. And an entire industry formed inside that safe harbor.

Then some mid-level suit at Hasbro decided they wanted a cut?

And I’m using this as the example partly because it’s the most egregious. But 3rd party clients for twitter and reddit were a good business to be in, until they suddenly were not.

And I also like using Hasbro’s Bogus Journey with D&D as the example because that’s the only one where the community won. With the other three here, the various owners basically leaned back in their chairs and said “yeah, okay, where ya gonna go?” and after much rending of cloth, the respective communities of twitter, and reddit, and StackOverflow basically had to admit there wasn’t an alternative; they were stuck on those websites.

Meanwhile, Hasbro asked the same question, and the D&D community responded with, basically, “well, that’s a really long list, how do you want that organized?”

So Hasbro surrendered utterly, to the extent that more of D&D is now under a more irrevocable and open license than it was before. It feels like there’s a lesson in competition being healthy here? But that would be crass to say.

Honestly, I’m not sure what all this means; I don’t have a strong conclusion here. Part of why this has been stuck in my drafts folder since June is that I was hoping one of these would pop in a way that would illuminate the situation.

And maybe this isn’t anything more than just what corporate support for open source looks like when interest rates start going up.

But this feels like a thing. This feels like it comes from the same place as movie studios making record profits while saying their negotiation strategy is to wait for underpaid writers to lose their houses?

Something is released into the commons, a community forms, and then someone decides they need to re-capture the commons because if they aren’t making the money, no one can. And I think that’s what stuck with me. The pettiness.

You have a company that’s making enough money, bills are paid, profits are landing, employees are taken care of. But other people are also making money. And the parent company stops being a steward and burns the world down rather than suffer someone else to make a dollar they were never going to see. Because there’s no universe where a dollar spent on Tweetbot was going to go to twitter, or one spent on Apollo was going to go to reddit, or one spent on any “3rd party” adventure was going to go to Hasbro.

What can we learn from all this? Probably not a lot we didn’t already know, but: solidarity works, community matters, and we might not have anywhere else to go, but at the same time, they don’t have any other users. There’s no version where they win without us.

Gabriel L. Helman

2023’s strange box office

Weird year for the box office, huh? Back in July, we had that whole rash of articles about the “age of the flopbuster” as movie after movie face-planted. Maybe things hadn’t recovered from the pandemic like people hoped?

And then, you know, Barbenheimer made a bazillion dollars.

And really, nothing hit like it was supposed to all year. People kept throwing out theories. Elemental did badly, and it was “maybe kids are done with animation!” Ant-Man did badly, and it was “Super-Hero fatigue!” Then Spider-Verse made a ton of money disproving both. And Super Mario made a billion dollars. And then Elemental recovered on the long tail and ended up making half a billion? And Guardians 3 did just fine. But The Marvels flopped. Harrison Ford came back for one more Indiana Jones and no one cared.

Somewhere around the second weekend of Barbenheimer everyone seemed to throw up their hands as if to say “we don’t even know what’ll make money any more”.

Where does all that leave us? Well, we clearly have a post-pandemic audience that’s willing to show up and watch movies, but sure seems more choosy than they used to be. (Or choosy about different things?)

Here’s my take on some reasons why:

The Pandemic. I know we as a society have decided to act like COVID never happened, but it’s still out there. Folks may not admit it, but it’s still influencing decisions. Sure, it probably won’t land you in the hospital, but do you really want to risk your kid missing two weeks of school just so you can see the tenth Fast and the Furious in the theatre? It may not be the key decision input anymore, but that’s enough potential friction to give you pause.

Speaking of the theatre, the actual theater experience sucks most of the time. We all like to wax poetic about the magic of the shared theatre experience, but in actual theaters, not the fancy ones down in LA, that “experience” is kids talking, the guy in front of you on his phone, the couple behind you being confused, gross floors, and half an hour of the worst commercials you’ve ever seen before the picture starts out of focus and too dim.

On the other hand, you know what everyone did while they were stuck at home for that first year of COVID? Upgrade their home theatre rig. I didn’t spend a whole lot of money, but the rig in my living room is better than every mall theatre I went to in the 90s, and I can put the volume where I want it, stop the show when the kids need to go to the bathroom, and my snacks are better, and my chairs are more comfortable.

Finally, and I think this is the key one—the value proposition has gotten out of whack in a way I don’t think the industry has reckoned with. Let me put my cards down on the table here: I think I saw just about every movie released theatrically in the US between about 1997 and maybe 2005. I’m pro–movie theatre. It was fun and I enjoyed it, but also that was absolutely the cheapest way to spend 2-3 hours. Tickets were five bucks; you could basically fund a whole day on a $20 bill if you were deliberate about it.

But now, taking a family of four to a movie is in the $60-70 range. And, that’s a whole different category. That’s what a new video game costs. That’s what I paid for the new Zelda, which the whole family is still playing and enjoying six months later, hundreds of hours in. That’s Mario Kart with all the DLC, which we’ve also got about a million hours in. You’re telling me that I should pay the same amount of money that got me all that for one viewing of The Flash? Absolutely Not. I just told the kids we weren’t going to buy the new Mario before Christmas, but I’m supposed to blow that on… well, literally anything that only takes up two hours?

And looking at that from the other direction, I’m paying twelve bucks a month for Paramount +, for mostly Star Trek–related reasons. But that also has the first six Mission: Impossible movies on it right now. Twelve bucks, you could cram ‘em all in a long weekend if you were serious about it. And that’s not really even a streaming thing, you could have netted six not-so-new release movies for that back in the Blockbuster days too. And like I said, I have some really nice speakers and a 4k projector, those movies look great in my living room. You’re trying to tell me that the new one is so much better that I need to pay five times what watching all the other movies cost me, just to see it now? As opposed to waiting a couple of months?

And I think that’s the key I’m driving towards here: movies in the theatre have found themselves with a premium price without offering a premium product.

So what’s premium even mean in this context? Clicking back and forth between Box Office Mojo’s domestic grosses for 2023 and 2019, this year didn’t end up being that much worse; it just wasn’t the movies people were betting on that made money.

There’s a line I can’t remember the source of that goes something to the effect of “Hollywood doesn’t have a superhero movie problem, it has a ‘worse copy of movies we’ve already seen’ problem.” Which dovetails nicely with John Scalzi’s twitter quip about The Flash bombing: “…the fact is we’re in the “Paint Your Wagon” phase of the superhero film era, in which the genre is played out, the tropes are tired and everyone’s waiting for what the next economic engine of movies will be.”

Of course, when we say “Superhero”, we mostly mean Marvel Studios, since the recent DC movies have never been that good or successful. And Marvel did one of the dumbest things I’ve ever seen, which is that they gave everyone an off ramp. For a decade they had everyone in a groove to go see two or three movies a year and keep up on what those Avengers and their buddies were up to. Sure, people would skip one or two here or there, a Thor, an Ant-Man, but everyone would click back in for one of the big team up movies. And then they made Endgame, and said “you’re good, story is over, you can stop now!” And so people did! The movie they did right after Endgame needed to be absolutely the best movie they had ever done, and instead it was Black Widow. Which was fine, but didn’t convince anyone they needed to keep watching.

And I’d extend all this out to not just superheroes, but also “superhero adjacent” movies, your Fast and Furious, Mission: Impossible, Indiana Jones. Basically all the “big noise” action blockbusters. I mean, what’s different about this one versus the other half-dozen I’ve already seen?

(Indiana Jones is kind of funny for other reasons, because I think Disney dramatically underestimated how much the general audience knew or cared about Spielberg. His name on those movies mattered! The guy who made “The Wolverine” is fine and all, but I’m gonna watch that one at home. I’m pretty sure if Steve had directed it instead of going off to do West Side Story it would have made a zillion dollars.)

But on the other hand, the three highest grossing movies that weren’t Barbenheimer were Super Mario Bros, Spider-Verse, and Guardians of the Galaxy 3, so clearly superheroes and animation are still popular, just the right superheroes and animation. Dragging the superhero-movies-are-musicals metaphor to the limit, there were plenty of successful musicals after Paint Your Wagon, but they were the ones that did something interesting or different. They stopped being automatically required viewing.

At this point, I feel like we gotta talk about budgets for a second, but only for a second because it is not that interesting. If you don’t care about this, I’ll meet you down on the other side of the horizontal line.

Because the thing is, most of those movies that, ahem, “underperformed” cost a ton. The new M:I movie paid the salaries for everyone working on it through the whole COVID lockdown, so they get a pass. (Nice work, Tom Cruise!) Everyone else, though, what are you even doing? If you spend so much money making a movie that you need to be one of the highest grossing films of all time just to break even, maybe that’s the problem right there? Dial of Destiny cost 300 million dollars. Last Crusade cost forty eight. Adjusted for inflation, that’s (checks wolfram alpha) …$116 million? Okay, that amount of inflation surprised me too, but the point stands: is Dial three times as much movie as Last Crusade? Don’t bother answering that, no it is not, and that’s even before pointing out the cheap one was the one with Sean friggin’ Connery.

This is where everyone brings up Sound of Freedom. Let’s just go ahead and ignore, well, literally everything else about the movie and just point out that it made just slightly more money than the new Indiana Jones movie, but also only cost, what, 14 million bucks? Less than five percent of what Indy cost?

There’s another much repeated bon mot I can’t seem to find an origin for that goes something along the lines of “They used to make ten movies hoping one would be successful enough to pay for the other nine, but then decided to just make the one that makes money, which worked great until it didn’t.” And look, pulpy little 14 million dollar action movies are exactly the kind of movie they’re talking about there. Sometimes they hit a chord! Next time you’re tempted to make a sequel to a Spielberg/Lucas movie without them, maybe just scrap that movie and make twenty-one little movies instead.

So, okay. What’s the point, what can we learn from this strange year in a strange decade? Well, people like movies. They like going to see movies. But they aren’t going to pay to see a worse version of something they can already watch at home on their giant surround-sound-equipped TV for “free”. Or risk getting sick for the privilege.

Looking at the movies that did well this year, it was the movies that had something to say, that had a take, movies that had ambitions beyond being “the next one.”

Hand more beloved brand names to indie film directors and let them do whatever they want. Or, make a movie based on something kids love that doesn’t already have a movie. Or make a biography about how sad it is that the guy who invented the atomic bomb lost his security clearance because Iron Man hated him. That one feels less applicable, but you never know. If you can build a whole social event around an inexplicable double-feature, so much the better.

And, look, basically none of this is new. The pandemic hyper-charged a whole bunch of trends, but I feel like I could have written a version of this after Thanksgiving weekend for any year in the past decade.

That’s not the point. This is:

My favorite movie of the year was Asteroid City. That was only allegedly released into theatres. It made, statistically speaking, no money. Those kinds of movies never do! They make it up on the long tail.

I like superhero/action movies as much as the next dork who knew who “Rocket Raccoon” was before 2014, but I’m not about to pretend they’re high art or anything. They’re junk food, sometimes well made very entertaining junk food, but let’s not kid ourselves about the rest of this complete breakfast.

“Actually good” movies (as opposed to “fun and loud”) don’t do well in the theatre, they do well on home video.

Go back and look at that 2019 list I linked above. On my monitor, the list cuts off at number fifteen before you have to scroll, and every one of those fifteen movies is garbage. Fun garbage, in most cases! On average, well made, popular, very enjoyable. (Well, mostly, Rise of Skywalker is the worst movie I’ve ever paid to see.)

That’s what was so weird about Barbenheimer, and Spider-Verse, and 2023’s box office. For once, objectively actually good movies made all the money.

Go watch Asteroid City at home, that’s what I’m saying.

Gabriel L. Helman

What was happening: Twitter, 2006-2023

Twitter! What can I tell ya? It was the best of times, it was the worst of times. It was a huge part of my life for a long time. It was so full of art, and humor, and joy, and community, and ideas, and insight. It was also deeply flawed and profoundly toxic, but many of those flaws were fundamental to what made it so great.

It’s almost all gone now, though. The thing called X that currently lives where twitter used to be is a pale, evil, corrupted shadow of what used to be there. I keep trying to explain what we lost, and I can’t, it’s just too big.1 So let me sum up. Let me tell you why I loved it, and why I left. As the man2 said, let me tell you of the days of high adventure.


I can’t now remember when I first heard the word “twitter”. I distinctly remember a friend complaining that this “new twitter thing” had blown out the number of free SMS messages he got on his nokia flip phone, and that feels like a very 2006 conversation.

I tend to be pretty online, and have been since the dawn of the web, but I’m not usually an early adopter of social networks, so I largely ignored twitter for the first couple of years. Then, for reasons downstream of the Great Recession, I found myself unemployed for most of the summer of 2009.3 Suddenly finding myself with a surfeit of free time, I worked my way down that list of “things I’ll do if I ever get time,” including signing up for “that twitter thing.” (I think that’s the same summer I lit up my now-unused Facebook account, too.) Smartphones existed by then, and it wasn’t SMS-based anymore, but had a website, and apps.4

It was great. This was still in its original “microblogging” configuration, where it was essentially an Instant Messenger status with history. You logged in, and there were the statuses of the people you followed, in chronological order, and nothing else.

It was instantly clear that this wasn’t a replacement for something that already existed—this wasn't going to do away with your LiveJournal, or Tumblr, or Facebook, or blog. This was something new, something extra, something yes and. The question was, what was it for? Where did it fit in?

Personally, at first I used my account as a “current baby status” feed, updating extended family about what words my kids had learned that day. The early iteration of the site was perfect for that—terse updates to and from people you knew.

Over time, it accumulated various social & conversational features, not unlike a Katamari rolling around Usenet, BBSes, forums, discussion boards, other early internet communication systems. It kept growing, and it became less a micro-blogging system and more of a free-wheeling world-wide discussion forum.

It was a huge part of my life, and for a while there, everyone’s life. Most of that time, I enjoyed it an awful lot, and got a lot out of it. Everyone had their own take on what it was Twitter had that set it apart, but for me it was three main things, all of which reinforced each other:

  1. It was a great way to share work. If you made things, no matter how “big” you were, it was a great way to get your work out there. And, it was a great way to re-share other people’s work. As a “discovery engine” it was unmatched.

  2. Looking at that the other way, it was an amazing content aggregator. It essentially turned into “RSS, but Better”; at the time RSS feeds had pretty much shrunk to just “google reader’s website”. It turns out that sharing things from your RSS feed into the feeds of other people, plus a discussion thread, was the key missing feature. If you had work of your own to share, or wanted to talk about something someone else had done elsewhere on the internet, twitter was a great way to share a link and talk about it. But, it also worked equally well for work native to twitter itself. Critically, the joke about the web shrinking to five websites full of screenshots of the other four5 was posted to twitter, which was absolutely the first of those five websites.

  3. Most importantly, folks who weren’t anywhere else on the web were on twitter. Folks with day jobs, who didn’t consider themselves web content people, were there; these people didn’t have a blog, or facebook, or instagram, but they were cracking jokes and hanging out on twitter.

There is a type of person whom twitter appealed to in a way that no other social networking did. A particular kind of weirdo that took Twitter’s limitations—all text, 140 or 280 characters max—and turned them into a playground.

And that’s the real thing—twitter was for writers. Obviously it was text based, and not a lot of text at that, so you had to be good at making language work for you. As much as the web was originally built around “hypertext”, most of the modern social web is built around photos, pictures, memes, video. Twitter was for people who didn’t want to deal with that, who could make the language sing in a few dozen words.

It had the vibe of getting to sit in on the funniest people you know’s group text, mixed with this free-wheeling chaos energy. On its best days, it had the vibe of the snarky kids at the back of the bus, except the bus was the internet, and most of the kids were world-class experts in something.

There’s a certain class of literary writer goofballs that all glommed onto twitter in a way none of us did with any other “social network.” Finally, something that rewarded what we liked and were good at!

Writers, comedians, poets, cartoonists, rabbis, just hanging out. There was a consistent informality to the place—this wasn’t the show, this was the hotel bar after the show. The big important stuff happened over in blogs, or columns, or novels, or wherever everyone’s “real job” was, this was where everyone let their hair down and cracked jokes.

But most of all, it was weird. Way, way weirder than any other social system has ever been or probably ever will be again, this was a system that ran on the same energy you use to make your friends laugh in class when you’re supposed to be paying attention.

It got at least one thing exactly right: it was no harder to sign into twitter and fire off a joke than it was to fire a message off to the group chat. Between the low bar to entry and the emphasis on words over everything else, it managed to attract a crowd of folks that liked computers, but didn’t see them as a path to self-actualization.

But what made twitter truly great were all the little (and not so little) communities that formed. It wasn’t the feature set, or the website, or the tech, it was the people, and the groups they formed. It’s hard to start making lists, because we could be here all night and still leave things out. In no particular order, here’s the communities I think I’ll miss the most:

  • Weird Twitter—Twitter was such a great vector for being strange. Micro-fiction, non-sequiturs, cats sending their mothers to jail, dispatches from the apocalypse.
  • Comedians—professional and otherwise, people who could craft a whole joke in one sentence.
  • Writers—A whole lot of people who write for a living ended up on twitter in a way they hadn’t anywhere else on the web.
  • Jewish Twitter—Speaking as a Jew largely disconnected from the local Jewish community, it was so much fun to get to hang out with the Rabbis and other Jews.

But also! The tech crowd! Legal experts! Minorities of all possible interpretations of the word sharing their experiences.

And the thing is, other than the tech crowd,6 most of those people didn’t go anywhere else. They hadn’t been active on the previous sites, and many of them drifted away again when the wheels started coming off twitter. There was a unique alchemy on twitter for forming communities that no other system has ever had.

And so the real tragedy of twitter’s implosion is that those people aren’t going somewhere else. That particular alchemy doesn’t exist elsewhere, and so the built up community is blowing away on the wind.


Because all that’s almost entirely gone now, though. I miss it a lot, but I realize I’ve been missing it for a year now. There had been a vague sense of rot and decline for a while. You can draw a pretty straight line from gamergate, to the 2016 Hugos, to the 2016 election, to everything around The Last Jedi, to now, as the site rotted out from the inside; a mounting sense that things were increasingly worse than they used to be. The Pandemic saw a resurgence of energy as everyone was stuck at home hanging out via tweets, but in retrospect that was a final gasp.7

Once The New Guy took over, there was a real sense of impending closure. There were plenty of accounts that made a big deal out of Formally Leaving the site and flouncing out to “greener pastures”, either to make a statement, or (more commonly) to let their followers know where they were. There were also plenty of accounts saying things like “you’ll all be back”, or “I was here before he got here and I’ll be here after he leaves”, but over the last year mostly people just drifted away. People just stopped posting and disappeared.

It’s like the loss of a favorite restaurant—the people who went there already know, and when people who weren’t there express disbelief, the response is to tell them how sorry you are they missed the party!

The closest comparison I can make to the decayed community is my last year of college. (Bear with me, this’ll make sense.) For a variety of reasons, mostly good, it took me 5 years to get my 4-year degree. I picked up a minor, did some other bits and bobs on the side, and it made sense to tack on an extra semester, and at that point you might as well do the whole extra year.

I went to a medium-sized school in a small town.8 Among the many, many positive features of that school was the community. It seemed like everyone knew everyone, and you couldn’t go anywhere without running into someone you knew. More than once, when I didn’t have anything better to do, I’d just hike downtown and inevitably I’d run into someone I knew and the day would vector off from there.9

And I’d be lying if I said this sense of community wasn’t one of the reasons I stuck around a little longer—I wasn’t ready to give all that up. Of course, what I hadn’t realized was that not everyone else was doing that. So one by one, everyone left town, and by the end, there I was in downtown surrounded by faces I didn’t know. My lease had an end date, and I knew I was moving out of town on that day no matter what, so what, was I going to build up a whole new peer group with a short-term expiration date? That last six months or so was probably the weirdest, loneliest time of my whole life. When the lease ended, I couldn’t move out fast enough.

The point is: twitter got to be like that. I was only there for the people, and nearly all the people I was there for had already gone. Being the one to close out the party isn’t always the right move.


One of the things that made it so frustrating was that it always had problems, but it had the same problems that any under-moderated semi-anonymous internet system had. “How to stop assholes from screwing up your board” is a four-decade-old playbook at this point, and twitter consistently failed to actually deploy any of the solutions, or at least deploy them at a scale that made a difference. The maddening thing was always that the only unique thing about twitter’s problems was the scale.

I had a soft rule that I could only read Twitter when using my exercise bike, and a year or two ago I couldn’t get to the end of the tweets from people I followed before I collapsed from exhaustion. Recently, I’d run out of things to read before I was done with my workout. People were posting less, and less often, but mostly they were just… gone. Quietly fading away as the site got worse.

In the end, though, it was the tsunami of antisemitism that got me. “Seeing only what you wanted to see” was always a skill on twitter, but the unfolding disaster in Israel and Gaza broke that. Not only did you have the literal nazis showing up and spewing their garbage without check, but you had otherwise progressive liberal leftists (accidentally?) doing the same thing, without pushback or attempt at discussion, because all the people that would have done that are gone. So instead it’s just a nazi sludge.10


There was so much great stuff on there—art, ideas, people, history, jokes. Work I never would have seen, things I wouldn’t have learned, books I wouldn’t have read, people I wouldn’t know about. I keep trying to encompass what’s been lost, make lists, but it’s too big. Instead, let me tell you one story about the old twitter:

One of the people I follow(ed) was Kate Beaton, originally known for the webcomic Hark A Vagrant!, most recently the author of Ducks (the best book I read last year). One day, something like seven years ago, she started enthusing about a book called Tough Guys Have Feelings Too. I don’t think she had a connection to the book? I remember it being an unsolicited rave from someone who had read it and was struck by it.

The cover is a striking piece of art of a superhero, head bowed, eyes closed, a tear rolling down his cheek. The premise of the book is what it says on the cover—even tough guys have feelings. The book goes through a set of stereotypical “tough guys”—pirates, ninjas, wrestlers, superheroes, race car drivers, lumberjacks—and shows them having bad days, breaking their tools, crashing their cars, hurting themselves. The tough guys have to stop, and maybe shed a tear, or mourn, or comfort themselves or each other, and the text points out, if even the tough guys can have a hard time, we shouldn’t feel bad for doing the same. The art is striking and beautiful, the prose is well written, the theme clearly and well delivered.

I bought it immediately. You see, my at-the-time four-year-old son was a child of Big Feelings, but frequently had trouble handling those feelings. I thought this might help him. Overnight, this book became almost a mantra. For years after this, when he was having Big Feelings, we’d read this book, and it would help him calm down and take control of what he was feeling.

It’s not an exaggeration to say this book changed all our lives for the better. And in the years since then, I’ve often been struck that despite all the infrastructure of modern capitalism—marketing, book tours, reviews, blogs—none of those ever got that book into my hands. There’s only been one system where an unsolicited rave from a web cartoonist being excited about a book outside their normal professional wheelhouse could reach someone they’ve never met or heard of and change that person’s son’s life.

And that’s gone now.


  1. I’ve been trying to write something about the loss of twitter for a while now. The first draft of this post has a date back in May, to give you some idea.

  2. Mako.

  3. As an aside, everyone should take a summer off every decade or so.

  4. I tried them all, I think, but settled on the late, lamented Tweetbot.

  5. Tom Eastman: I’m old enough to remember when the Internet wasn't a group of five websites, each consisting of screenshots of text from the other four.

  6. The tech crowd all headed to mastodon, but didn’t build that into a place that any of those other communities could thrive. Don’t @-me, it’s true.

  7. In retrospect, getting Morbius to flop a second time was probably the high point, it was all downhill after that.

  8. CSU Chico in Chico, California!

  9. Yes, this is what we did back in the 90s before cellphones and texting, kids.

  10. This is out of band for the rest of the post, so I’m jamming all this into a footnote:

    Obviously, criticizing the actions of the government of Israel is no more antisemitic than criticizing Hamas would be islamophobic. But objecting to the actions of Israel ’s government with “how do the Jews not know they’re the bad guys” sure as heck is, and I really didn’t need to see that kind of stuff being retweeted by the eve6 guy.

    A lot of things are true. Hamas is not Palestine is not “The Arabs”, and the Netanyahu administration is not Israel is not “The Jews.” To be clear, Hamas is a terror organization, and Israel is on the functional equivalent of Year 14 of the Trump administration.

    The whole disaster hits at a pair of weird seams in the US—the Israel-Palestine conflict maps very strangely to the American political left-right divide, and the US left has always had a deep-rooted antisemitism problem. As such, what really got me was watching all the comments criticizing “the Jews” for this conflict come from _literally_ the same people who spent four years wearing “not my president” t-shirts and absolving themselves from any responsibility for their government’s actions because they voted for “the email lady”. They get the benefit of infinite nuance, but the Jews are all somehow responsible for Bibi’s incompetent choices.

Gabriel L. Helman

Email Verification

The best and worst thing about email is that anyone can send an email to anyone else without permission. The people designing email didn’t think this was a problem, of course. They were following the pattern of all the other communications technology of the time—regular mail, the phones, telegrams. Why would I need permission to send a letter? That’s crazy.

Of course, here in the Twenties, three of those systems are choked by robot-fueled marketing spam, and the fourth no longer exists. Of all the ways we ended up living in a cyberpunk dystopia, the fact that no one will answer their phone anymore because they don’t want to be harassed by a robot is the most openly absurd; less Gibson, more Vonnegut-meets-Ballard.

(I know I heard that joke somewhere, but I cannot remember where. Sorry, whoever I stole that from!)

Arguably, there are whole social networks that were built outward from the basic concept of “what if you had to get permission to send a message directly to someone?”

With email though, I’m always surprised that systems don’t require you to verify your email before sending messages to it. This is actually very easy to do! Most web systems these days use the user’s email address as their identity. This is very convenient, because someone else is handling the problem of making sure your ids are unique, and you always have a way to contact your users. All you have to do is make them click a link in an email you sent them, and now you know they gave you a live address and it’s really them. Easy!

(And look, as a bonus, if you email them “magic links” you also don’t have to worry about a whole lot of password garbage. But that’s a whole different topic.)
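To make the mechanics concrete, here’s a minimal sketch of that verification flow in Python. This is just an illustration, not any particular framework’s API: the in-memory stores, the `send_email` helper, and the example.com link are all made up for the sketch, and a real system would persist tokens in a database and send the link through an actual mailer.

```python
import secrets
import time

# Hypothetical in-memory stores; a real system would use a database.
PENDING = {}      # token -> (email, expiry timestamp)
VERIFIED = set()  # addresses we've confirmed are live and owned by the user

TOKEN_LIFETIME = 24 * 60 * 60  # tokens are good for one day

def send_email(to: str, body: str) -> None:
    # Placeholder so the sketch runs; swap in SMTP or a mail API in practice.
    print(f"To: {to}\n{body}")

def start_verification(email: str) -> None:
    """Generate a single-use token and email the user a link containing it."""
    token = secrets.token_urlsafe(32)
    PENDING[token] = (email, time.time() + TOKEN_LIFETIME)
    send_email(email, f"Click to verify: https://example.com/verify?token={token}")

def verify(token: str) -> bool:
    """Called when the user clicks the link; succeeding proves they control the address."""
    record = PENDING.pop(token, None)
    if record is None:
        return False
    email, expires = record
    if time.time() > expires:
        return False
    VERIFIED.add(email)
    return True
```

The “magic link” login trick is the same idea, just with the token doubling as the session bootstrap: if they can read the mail, they are who they say they are.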

But instead, a remarkable number of places let people type something that looks like an email address into a web form and then just use it.

And I don’t get it. Presumably you’re collecting user emails because you want to be able to contact them about whatever service you’re providing them, and probably also send them marketing. And if they put in an email that isn’t correct you can’t do either. I mean, if they somehow put in a fake or misspelled address that happens to turn out to be valid, I guess you can still send that address stuff, but it’s not like the person at the other end of that is going to be receptive.

Okay great, but, ummmmmm, why do you bring this up?

I’m glad you asked! I mention this because there are at least three people out there in the world who keep misspelling their email addresses as mine. Presumably their initials are close to mine, and they have similar names, and they decomposed their names into an available gmail address in a manner similar to how I did. Or even worse—I was early to the gmail party, so I got an address with no numbers; maybe these folks got 47.

My last name is one that came into existence because someone at Ellis Island didn’t care to decipher my great-grandfather’s accent and wrote down something “pretty close.” As a side effect of this, I’ve personally met every human that’s ever had that last name—to whom I’m related. I suspect this name was a fairly common Ellis Island shortcut, however, since there are a surprising number of people out there with the same last name whom I’ve never heard of and am not related to.

But so the upshot is that I keep getting email meant for other people. Never anything personal, never anything I could respond to, but spam, or newsletters, or updates about their newspaper account.

I’ve slowly built up a mental image of these people. They all seem older; two are in the midwest or on the east coast, one is in Texas.

One, though, has been on a real spree the last year or so. I think he’s somewhere in the greater Chicago area. He signed up for news from Men’s Wearhouse, he ordered a new cable install from Spectrum Cable. Unlike the previous people, since this guy started showing up, it’s been a deluge.

And what do you do? I unsubscribe when I can, but that never works. But I don’t just want to unsubscribe, I want to find a third party to whom I can respond and say “hey, can you tell that guy that he keeps spelling his email wrong?”

The Spectrum bills drive me crazy. There were weeks where he didn’t “activate his new equipment”, and I kept shaking my head thinking, yeah, no wonder, he’s not getting the emails with the link to activate in them. He finally solved this problem, but now I get a monthly notification that his bill is ready to be paid. And I know that Spectrum has his actual address, and could technically pass a message along, but there is absolutely no customer support flow to pass a message along that they typed their email wrong.

So, delete, mark as spam, unsubscribe. Just one more thing that clogs up our brief time on Earth.

And then, two weeks ago, I got a google calendar invite.

The single word “counseling” was the meeting summary. No body, just a Google Meet link. My great regret was that I didn’t see this until after the time had passed. It had been cancelled, but there it was. Sitting in my inbox. Having been sent from what was clearly a personal email address.

Was this it? The moment?

I thought about it. A lot. I had to try, right?

After spending the day turning it over in my head, I sent this email back to the person who was trying to do “counseling”:

Hello!

This is a long shot, but on the off chance that someone gave you this address rather than it being a typo, could you please tell whomever you got it from to please be more careful entering their email? I've been getting a lot of emails for someone else recently that are clearly the result of someone typing their email wrong and ending up typing mine by mistake. While I can happily ignore the extra spam, I suspect that person would rather be the one receiving the emails they signed up for? Also, their cable bill is ready.

If you typoed it, obviously, no worries! Hope you found the person you meant to send that to.

In any case, have a great weekend!

I never got a response.

But the next day I got an email telling me my free trial for some business scheduling software was ready for me to use.

“The end! No moral.”

Gabriel L. Helman

Fall ’23 Good TV Thursdays: “The real TVA was the friends we made along the way”

For a moment there, we had a real embarrassment of riches: three of the best genre shows of the year so far—Loki, Lower Decks, and Our Flag Means Death—were not only all on in October, but they all posted the same night: Thursdays. While none of the people making those three shows thought of themselves as part of a triple-feature, that’s where they ended up, and they contrasted and complemented each other in interesting ways. It’s been a week or two now, let’s get into the weeds.

Heavy Spoilers Ahoy for Loki S2, Lower Decks S4, and Our Flag Means Death S2

Loki season 2

The first season of Loki was an unexpected delight. Fun, exciting, and different, it took Tom Hiddleston’s Loki and put him, basically, in a minor league ball version of Doctor Who.

(The minor league Who comparison was exacerbated later when the press release announcing that Loki S1 director Kate Herron was writing an episode of Who had real “we were so impressed with their work we called them up to the majors” energy.)

Everything about the show worked. The production design was uniformly outstanding, from the TVA’s “fifties-punk” aesthetic, to the cyberpunk city and luxury train on the doomed planet of Lamentis-1, to Alabama of the near future, to casually tossing off Pompeii at the moment the volcano exploded.

The core engine of the show was genius—stick Loki in what amounted to a buddy time cop show with Owen Wilson’s Mobius and let things cook. But it wasn’t content to stop there; it took all the character development Loki had picked up since the first Avengers, and worked outwards from “what would you do if you found out your whole life was a waste, and then got a second chance?” What does the norse god of chaos do when he gets a second chance, but also starts working with The Man? The answer is, he turned into Doctor Who.

And, like the Doctor, Loki himself had a catalytic effect on the world around him; not the god of “mischief”, necessarily, but certainly a force for chaos; every other character who interacted with him was changed by the encounter, learning things they’d have rather not learned and having to change in one way or another having learned it.

While not the showiest, or most publicized, the standout for me was Wunmi Mosaku’s Hunter B-15, who went from a true believer soldier to standing in the rain outside a futuristic Wal-Mart asking someone she’d been trying to kill (sorry, “prune”) to show her the truth of what had been done to her.

The first season also got as close as I ever want to get to a Doctor Who origin—not from Loki, but in the form of Mobius. He also starts as a dedicated company man, unorthodox maybe, but a true believer in the greater mission. The more he learns, the more he realizes that the TVA were the bad guys all along, and ends up in full revolt against his former colleagues; by the end I was half expecting him to steal a time machine and run off with his granddaughter.

But look, Loki as a Marvel character never would have shown up again after the first Thor and the Avengers if Tom Hiddleston hadn’t hit it out of the park as hard as he did. Here, he finally gets a chance to be the lead, and he makes the most of the opportunity. He should have had a starring vehicle long before this, and it manages to make killing Loki off in the opening scene of Infinity War even stupider in retrospect than it was at the time.

All in all, a huge success (I’m making a note here) and a full-throated endorsement of Marvel’s plan for Disney+ (Especially coming right after the nigh-unwatchable Falcon and the Winter Soldier).

Season 2, then, was a crushing disappointment.

So slow, so boring. All the actors who are not Tom Hiddleston are visibly checked out, thinking about what’s next. The characters, so vibrant in the first season, are hollowed-out shells of themselves.

As jwz quips, there isn’t anything left of this show other than the leftover production design.

As an example of the slide, I was obsessed with Loki’s season 1 look where he had, essentially, a Miami Vice under-shoulder holster for his sword under his FBI-agent style jacket, with that square tie. Just a great look, a perfect encapsulation of the show’s mashup of influences and genres. And this year, they took that away and he wore a kinda boring Doctor Who cosplay coat. The same basic idea, but worse in every conceivable way.

And the whole season was like that, the same ideas but worse.

So much smaller in scope this year, nothing on the order of the first season’s “city on a doomed planet.” The show seemed trapped inside the TVA, sets we had seen time and again before. Even the excursion to the World’s Fair seemed claustrophobic. And wasted: those events could have happened anywhere. Whereas the first season was centered around what Loki would do with a second chance armed with the knowledge that his life came to nothing, here things just happened. Why were any of these people doing any of these things? Who knows? Motivations are non-existent, characters have been flattened out to, basically, the individual actor’s charisma and not much else. Every episode I wanted to sit the writer down and dare them to explain why any character did what they did.

The most painful was probably poor B-15, who was a long way from heartbreaking revelations in the rain in front of futuristic Wal-Marts; this year the character has shrunk to a sub-Riker level of showing up once a week to bark exposition at the audience. She’s basically Sigourney Weaver’s character from Galaxy Quest, but meant seriously, repeating what we can already see on the computer screen.

And Ke Huy Quan, fresh from winning an Oscar for his stunning performance in Everything Everywhere All at Once, is maybe even more wasted, as he also has to recite plot-mechanic dialogue, but he doesn’t even have a well-written version of his character to remember.

And all the female characters were constantly in conflict with each other, mostly over men? What was that even about?

Actually, I take that back, the most disappointing was Tara Strong’s Miss Minutes, a whimsical and mysterious character who became steadily more menacing over the course of the first season, here reduced to less than nothing, practically absent from the show, suddenly pining for Jonathan Majors, and then casually murdered (?) by the main characters in an aside while the show’s attention was somewhere else.

There was a gesture towards an actually interesting idea in the form of “the god of chaos wants to re-fund the police”, but the show didn’t even seem to notice that it had that at hand.

The second to last episode was where I finally lost patience. The TVA has seemingly been destroyed, and Loki has snapped backwards in time. Meeting each of the other characters as who they were before they were absorbed into the TVA, Loki spends the episode trying to get them to remember him and to get back to the “present” to save the TVA. Slowly, painfully, the show arrived at a conclusion where Hiddleston looked the camera in the eye and delivered the punchline that “The real TVA was the friends we made along the way”.

And, what? Stepping past the deeply banal moral, I flatly refuse to believe that these characters, whom Loki has known for, what, a couple of days, are such great friends of his that he manages to learn how to time travel through sheer will to rescue them. These people? More so than his brother, more so than anyone else from Asgard? (This is where the shared universe fails the show; we know too much about the character to buy what this show is selling.)

The last episode was actually pretty good—this was the kind of streaming show that was really a movie idea with 4 hours of foreplay before getting to the real meat. Loki choosing to shoulder the responsibility for the multiverse and take his throne at the center of the world tree as the god of time (?) is a cool visual, but it utterly squanders the potential of the show: Loki and Mobius having cool timecop adventures.

(That said, the Avengers finding out that Loki is sitting at the center of all realities is a hell of a potential scene, and I hope that happens. But even more, I really want a movie with Time Agent Loki and Single Dad Thor. But that seems to have been squandered with everything else.)

Lower Decks season 4

Loki probably wouldn’t have been quite so maddening if he hadn’t very slowly arrived at his cliché epiphany on the same night as the Lower Decks season finale, which started at “The real Starfleet was the friends we made along the way” and then used that as a jumping-off point. LD managed to juggle storylines for nearly every recurring character, action that flowed entirely from the characters’ personalities, a few of the deepest lore cuts I’ve ever seen, and an entire series of space battle action sequences—and all in half the time!

I mean, they did Wrath of Khan in 30 minutes, except Khan was Nick Locarno. And it worked! I’m not sure I’ve ever seen a bigger flex.

Plus: “black market Ferengi Genesis Device” is my new band name.

Lower Decks was originally positioned as a kind of “PG-rated Rick & Morty but Star Trek,” which was clearly what they needed to say to get the show greenlit, but then immediately did something different and more interesting.

We’ve been living through a period of “legacy sequels”, reboots, and followups to long-running franchises, and the vast majority of them have trouble figuring out how to keep the massive weight of the existing material from being an anchor.

But Lower Decks is one of the few shows that actually figured out how to use the history entirely as value-add. (The other winner in this category is Andor.) Its key advantage is that it’s very, very well written, but it does two brilliant moves: first, the characters are all the in-universe equivalent of Star Trek fans themselves, so they know as much as we do. Second, the show consistently takes a well-worn Star Trek trope and then asks, basically, how would a regular person react to this emotionally?

And, it does this while mining the whole run of the franchise, but especially TNG, for material to be revisited. Frequently the show will take a plot of a TNG episode that was originally played straight over 45 minutes, and then re-stage it as a comedy set-piece for the pre-credits teaser, and they’re all brilliant. It’s a show made by people who love Star Trek as much as anyone, but who are not about to pretend that it’s not mostly ridiculous.

Every Lower Decks episode has at least one joke/deep cut where I laugh like crazy and the kids have no idea what the deal is, and then I have to put on a seminar afterwards. The last two episodes of this season were both the hardest I laughed and the longest it took me to explain everything.

As an example: that Black Market Ferengi Genesis Device, which needs the operator to pay to shut it down. That’s the kind of joke that needs a lot of background to work, the kind of background you only get with decades of material, and the show just rips past it without trying to explain it, reasoning correctly that anyone who will laugh at that doesn’t need the help, and for everyone else there’s no amount of explanation they could fit in 30 minutes that would work. It’s the Mystery Science Theatre 3000 approach, applied to fiction.

They also titled an episode “Parth Ferengi’s Heart Place,” which is such a multi-dimensional deep cut I don’t even know what to say about it besides applaud.

And that’s the thing! You don’t need to get any of these jokes for the show to make sense, and the deep cuts that you do need to understand—like the fact that the main villain is a one-off character from a single TNG episode 30 years ago—the show does stop and explain, and recontextualize in the milieu of Lower Decks. It’s finally a show that manages to use the half-century of “canon” as a sail, not an anchor, using it to deepen the show, rather than get into doctrinal arguments about, say, what the Klingons “really” look like.

But that’s all sauce, bonus points. The real joy of this show is the characters and their friendships. And this is where Lower Decks snapped Loki into sharp relief.

LD took its rule-breaking chaos-agent main character, with her group of close friends she has complex relationships with, and contrasted her with a different rule-breaking chaos agent, one with a group of followers, who broke rules for different reasons. Then it made her choose which group to stay with, and she came out on top because she kept operating as a chaos agent, but now understanding why she was doing it, and doing it for the right reasons. And all this while exploring and evolving her relationships with all the other main characters, and giving most of them a beat to change as characters as well.

And this is why it was such a contrast to Loki. Loki’s plotlines resolved by him giving up his chaotic ways and accepting responsibility for the multiverse; Mariner’s plot resolved by her continuing to be chaotic but pointed in the right direction. Lower Decks evolved its characters by making them more themselves instead of giving up their signature features for plot reasons; imagine what Loki would have looked like if the resolution had flowed from who the characters were instead of where the plot needed them to be.

And on top of all that, the ship Mariner steals from the Nova Fleet is my favorite minor starship design, which felt like it was written for me exclusively.

I have not felt this solidly in the center of the target audience for something since Taco Bell announced they were making a custom flavor of Mountain Dew. This is the Star Trek show I’ve always wanted.

Our Flag Means Death season 2

One of my favorite rare genres is the Stealth Movie. A movie that starts out looking like something else, a RomCom or a period drama, and then at about the 20-minute mark the ninjas show up and it turns into a different thing entirely. A movie that changes genres partway through, the bigger the change the better, especially if it can do it by surprise.

This, of course, basically never happens in real life, and for good reason! Cranking from one genre to another midway through is a great way to frustrate your audience, especially if you’re trying to keep the shift a surprise. For every person who would leap up in delight when the gears change, there’d be another ten who’d feel ripped off that they didn’t get to see the movie from the trailer.

For a long time, Wild Things was my go-to example of a movie that really did this, and it’s about as good as you could do—the genres are compatible, the shift happens pretty organically, and it does a great job at both the “sleazy sex crimes like Basic Instinct” half and the “double-doublecross caper like The Usual Suspects” half.

And you know, that’s okay. The audience for media that jumps genre tracks is pretty small, and I understand my desire to be surprised in that manner is a niche, niche intersect.

And then, Our Flag Means Death came out.

Murray from Flight of the Conchords and the main vampire from the movie version of What We Do in the Shadows? Doing a pirate comedy loosely based on the real-life friendship of “gentleman pirate” Stede Bonnet and Blackbeard? Sounds like their previous stuff mixed with a little Inspector Clouseau. I’m in!

And for the first couple of episodes, I wasn’t that into it. It was fine, but the comedy didn’t really work for me? I kept expecting it to be goofier, for Stede Bonnet to accidentally succeed more often. I mean, it was very well written, well made, well acted, with a whole cast of fun characters in the crew of the Revenge, it just wasn’t working for me. And you know, okay, that’s disappointing, but not everything is for everybody. A couple episodes in, I was accepting that this just wasn’t my jam after all.

And then they pulled the switcheroo, and revealed it was actually a gay pirate rom com. And holy smokes, suddenly all the decisions I didn’t understand worked in this new context, and the whole show snapped into place. And it went from a show I was lukewarm on to one of my favorite shows of all time.

The first season ended on one of the great cliffhangers. The best cliffhangers are not the ones where the characters are in danger—you know they’re going to escape whatever contrived danger they’re in—but the ones where the characters and the audience learn a new fact that changes your understanding of what show you’re watching, and what options the show has going forward.

Stede and Blackbeard had split up after escaping from prison. Stede had tried to go home to his old life and realized that he really had changed, and really did want to go be a pirate. Blackbeard, meanwhile, had taken back their old ship and marooned most of the worthless-to-him crew on a deserted island. This batch of characters who we’d come to care for very much were basically doomed, and were waiting for the inevitable. Then! A boat on the horizon. Through a spyglass, they spot—Stede! He’s immediately different than we’ve seen him before, different clothes, different body language. An air of confidence, and more importantly, competence. He raises a hand over his head in a single sign of greeting, like a reverse Grail Knight. Six episodes earlier, Stede arriving to rescue the crew would have meant they were even more doomed than they already were; now the message is clear—they’re going to be okay. Roll Credits. See you in a year.

Whereas the first season had a slightly hesitant quality, not quite sure how the show would be received, the second season was clearly made by people who knew they had a fanbase that was absolutely feral for the show, and was absolutely buying what they were selling. Recognizing that the relationship was the core of the show and not dragging things out, Stede and Blackbeard were back together by the end of the first night (the second episode, but they released two a week.)

Everything the first season did well, the second season did better. It’s a hard show to talk about, because it was just so good. Rather than formatting a list of things I love, I’ll just mention my favorite revision from the first year: whereas the first season played Izzy Hands, Blackbeard, and Stede as a love triangle, the second played it as the “new girlfriend” and the “old best friend” coming to terms with the fact that the other was never going to go away, and learning both to get along and to see what their mutual friend saw in the other.

While a very different genre and style, Our Flag Means Death had a lot in common with Lower Decks: a crew of maybe-not-A-players doing their best, in an action-comedy deeply rooted in the characters, their relationships with each other, and their feelings about all of it. And throwing one last elbow Loki’s way, OFMD also demonstrated what a group of people becoming friends, having adventures, growing into the best versions of themselves, and the central character shouldering responsibility for the others looks like when it’s done well.

It’s unclear if OFMD is going to get a third season. This was clearly uncertain to the people making the show too, as the last episode works both as a conclusion and as a setup for a final season.

Great, just great TV.

Found Family and Genre Fiction in the Twenties

Back in the mid-naughties, the pre-scandal Warren Ellis had a line that people in the future were going to look back at turn-of-the-century genre fiction and wonder why everyone was crying all the time (looking at you both, BSG and Doctor Who), and then he would note that they’d probably nod and say something like “well, they had a lot to cry about.”

I’ve been having a similar feeling the last few years about the current state of genre fiction and “found family.” That’s always been a theme in science fiction and fantasy literature, probably due to the fans of such fiction tending to be on the “social outcasts who find their people” end of the social spectrum, but there’s a different vibe lately. Loki realizing he’s actually working to get his friends back and therefore can time travel, or the Lower Deckers doing anything, or the crew of the Revenge’s Calypso Party, have a distinctly different feel from, say, the other Hobbits refusing to let Frodo sneak out of Hobbiton on his own, or Han realizing he isn’t going to leave his friend in the lurch and giving Luke the cover he needs to blow up the Death Star. This seems like the sort of social moment that’s impossible to really see from inside, but years from now will be as obvious as the post-9/11 weirdness of BSG.

All three of these shows had a strong central theme of leaving your birth family or where you were “from”, shedding your metaphorical skin and built-up masks, and finding the people you want to spend time with, who make you the best version of the person you’re becoming. (And then, in Lower Decks’ case, because it’s the best of the three, using this growth to forge a new and better relationship with your mom.)

Here, thick into the Disaster of the Twenties, that’s probably a really good message to be sending. Your people: they’re out there. And if we stick together, we’re gonna be okay.

Gabriel L. Helman

Still out there: The X-Files at 30

The actual anniversary date whipped past me before I noticed, but apparently The X-Files is thirty years old? Let me settle back into my mummy case and enthuse about it.

I’m also late to this party, but it turns out they did a whole remaster/cleanup on the show a few years back, presumably for Blu-Ray, and those copies are what’s streaming now. They went back and rescanned the original film and rebuilt the edits from there, and the show looks amazing! Haircuts notwithstanding, it genuinely looks like it could have been filmed this year, unlike a lot of its contemporaries. We’ve been watching them on and off, and man, what a fun show that was! There are very, very few shows where you can almost just pick episodes at random and know you’ll enjoy them quite the way you can with The X-Files.

I actually didn’t come in on the show until halfway through the second year, but I was immediately hooked. My initial reaction was that this was as close as we were ever going to get to an “American Doctor Who” (or really any new Who at all, there in the wilderness years of the early 90s). A pair of FBI agents solving supernatural/monster/alien problems on a weekly basis? And mostly solving those problems by not just, you know, shooting them? Yes Please!

That said, I’m pretty sure I was the only one that saw a Doctor Who connection. While the cited inspiration is always Kolchak, and UFOs and conspiracy theories were hot in the 90s, The X-Files always struck me as a show designed outward from trying to figure out how to make Twin Peaks viable as an ongoing show.

It took the core premise, “Eccentric FBI agents investigate possibly supernatural crimes in small town America” and then made several very savvy changes.

First, everywhere Twin Peaks satirized nighttime soap operas, X-Files swapped that out with the shape of a standard police procedural. Gone was the sprawling ensemble cast, replaced with a core regular pair and a one-off guest cast, in the mold of something like Law & Order. Instead of a single small town, it was a new semi-rural location every week, freeing up the guest cast to meet the needs of the mood of the week instead of servicing their own stories. The relationship between Mulder and Scully was similar to that between Agent Cooper and Sheriff Truman, but both main characters were FBI, freeing the core cast from being stuck in any one location. And as many, many people have observed, making the “believer” character the man and the skeptical scientist the woman went against the grain of the prevalent gender stereotypes of the time, adding a unique flavor to the show almost “for free,” alongside a light dose of Moonlighting-style sparks. (Not to mention The X-Files even stars one of the best guest-stars from Twin Peaks.)

And both the main characters were really fun to spend time with. They were interesting, and complicated, and had a unique relationship, and were both actually really good at their jobs. Personally, I always wanted to be Scully when I grew up (not a gender thing, I just wanted to be really good at my hard job, be well respected by my peers, have cool banter with coworkers, and then once or twice a season haul a pistol out and shoot a monster without missing. Mulder tended to miss a lot for drama reasons, but if Scully pulled her gun out, someone was getting shot.)

But most critically, it learned the most important lesson of Twin Peaks: that Laura Palmer was too central, and revealing her murderer effectively ended the show. The X-Files’ equivalent, Mulder’s sister’s disappearance and the alien conspiracy, would be an ongoing concern, but was never as omnipresent as Laura Palmer, and was never fully explained or revealed. Of course, X-Files ended up overcorrecting too far, and allowed the alien mythology to sprawl out far beyond any reasonable attempt to make sense.

Personally, I always much preferred the monster-of-the-week episodes, and those were still fun long past where the “mythology” imploded into incoherence. And that was the thing: the show was always fun. And we can just ignore those last couple of years where they squandered the built-up goodwill and the alien plot fizzled out.

Thirty years on, though, that’s what fascinates me about The X-Files. There are plenty of examples of shows that were initially very popular that blew the landing. Lost, the “new” Battlestar Galactica, Game of Thrones, even something like Quantum Leap. Mostly, those shows have slipped out of the conversation, and when they do come up, it’s always with a groan about the end first, and usually that’s all. No one talks about BSG’s stupendous first season, they talk about the robot dance party it ended with.

But not with The X-Files! When that comes up, the topic is always the relationship between Mulder and Scully, or the best monsters, or the vibe of the thing, and the last years get treated as an afterthought. Most people won’t even remember that it starred Terminator Two for a while unless reminded. For a while The X-Files looked like it was going to be the definitive example of how not to do a long-running plot, why you should work things out ahead of time, and the cost of running out the clock too long, but no, Lost took that seat.

Why? Why does X-Files get a pass on the ending, which was just as much a fumble as those others?

I think there are two big reasons.

First, the show’s pleasures extended beyond the “big plot.” Even at its peak, there were plenty of fans who preferred the non-mythology episodes. The big story failing to cohere didn’t intersect with the joy of watching Scully and Mulder deal with monsters, or vampires, or Peter Boyle.

But more important is something a friend of mine said while we were talking about this: “Everyone quit watching it when it was still good.” And I think that’s it. Those other shows, everyone stuck around to the bitter end. The plural of anecdote is not data, but I don’t know anyone other than myself who stuck it out to the last episode of The X-Files. There were plenty of off-ramps: the moves from Friday to Sunday, the movie, Duchovny leaving. It stayed pretty good for a long time past its peak, and most everyone drifted away before it got actually bad.

I mean, Friday Night X-Files was appointment viewing when I was in college, but everyone had something better to do on Sunday nights. (Except that brief window where it was Simpsons-Futurama-X-Files, that was pretty good.)

As such, most people’s last memory of the show is something like Mulder being trapped in the past in the Bermuda Triangle, rather than, say, Bran having the best story, or Sam just not going home, or whatever the hell Lost tried to sell us.

And so I think all that’s the real lesson The X-Files has for us, all these years later. Long-form serialized TV is great, and as a form is here to stay, but if you only have the one big plot, all you actually have is the ending. If your show works week-to-week without that, and it’s full of characters that are fun to spend time with, people are still going to be rewatching it three decades later.

Gabriel L. Helman

Feature Request: Sick Mode for Apple Fitness

As previously mentioned I got pretty sick in October. I’m also a daily Apple Watch wearer, which means there were two solid weeks there where I didn’t close my rings.

As such, I have a feature request: you should be able to tell Apple Fitness you’re sick.

To be clear, this isn’t because I want a way to cheat my streak back into existence. I mean, I had a pretty good streak going, and it’s irritating to reset that count, but that’s not the point.

While I was sick, it was deeply irritating to get those passive-aggressive “motivating” messages in the morning about “You closed one ring yesterday, bet you can get them all today, go get ‘em!” No man, leave me alone, I’m dying here. There’s a way to delve into the settings and turn off the “coach notifications”, but I was not up to that. I needed one button I could mash; I’m sick, I’ll let you know when I’m better.

Then, once I got better, all my stats and graphs and whatnot have these huge gaps in them. I don’t want to skip those or leave those out, but I would love to have a way to annotate those with a note: “this is when you had covid, ignore this”. Maybe a different color? Yellow for sick, instead of the usual red, green, blue.

But what really frustrates me is whats going on now. Apple Fitness does this genuinely cool and useful thing where it’ll compute long-term trends and averages, and tell you about it when they change significantly. And so for the last week I keep getting updates about “you have a new trend!” and then it shows me how many more steps I’ve taken this week versus the average over October.

And no shit, Apple Fitness! I basically didn’t stand up for ten days there, I sure hope I’m taking more steps now. What would be valuable is to know what my current scores are versus before I got sick. Am I back to where I was? I should be back to where I was in September, am I?
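The comparison I actually want is simple enough to sketch out. Here it is in Python, with hypothetical hard-coded step counts standing in for whatever the Health database actually stores, and the illness window supplied by hand since that’s exactly the “sick mode” button that doesn’t exist: take a pre-illness baseline, take the current week, skip the sick days, compare.

```python
from datetime import date, timedelta

# Hypothetical daily step counts (date -> steps); in reality this would come
# from a HealthKit export, not a hard-coded dict.
daily_steps = {
    date(2023, 9, 25): 9100,
    date(2023, 9, 26): 8700,
    date(2023, 11, 5): 8200,
    date(2023, 11, 6): 8900,
}

# "Sick mode": a window of days the comparison should simply ignore.
sick_days = {date(2023, 10, 10) + timedelta(days=i) for i in range(14)}

def average_steps(start: date, end: date) -> float:
    """Average steps over [start, end], skipping days flagged as sick."""
    days = [d for d in daily_steps if start <= d <= end and d not in sick_days]
    if not days:
        return 0.0
    return sum(daily_steps[d] for d in days) / len(days)

baseline = average_steps(date(2023, 9, 1), date(2023, 9, 30))   # pre-illness
current = average_steps(date(2023, 10, 31), date(2023, 11, 6))  # this week
print(f"Back to {current / baseline:.0%} of the September baseline")
```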

And there’s no way to ask that question. There’s no way to tell it what it needs to know to figure that out itself.

We’re living in the Plague Years, Apple. Let us tell the computers about it.

Gabriel L. Helman

The OmniRumor, 10 years on

It’s been ten years since the Doctor Who Missing Episode “Omnirumor” broke containment and made it out into the mainstream. I haven’t seen this commemorated anywhere, and as we’re currently barreling towards another anniversary year celebration in November and another set of Missing Episode Recovery rumors has flared up in the UK press, I found myself reminiscing about the last time this happened.

Let’s recap:

Huge swathes of 50s and 60s BBC television no longer exist, due to the recordings being either lost, or thrown out, or having their master video tapes recorded over. This happened for a bunch of complex interlocking reasons, but which mostly boil down to “it wasn’t anyone’s job to make sure they didn’t lose them.”

Currently, 97 of the 253 Doctor Who episodes broadcast between 1963 and 1969 are missing; that’s actually quite a bit better than many of its contemporaries. Doctor Who is also in a unique position in that all of the missing episodes exist as audio-only recordings, many of them have surviving still images, and all of them were published as novels.

Classic Doctor Who has a strange structure by today’s standards; half-hour episodes making up usually 4 or 6-part stories. A strange aspect of having 100-or-so missing episodes is that some stories are only partly missing. Some stories are just missing a bit in the middle, some only have one part surviving.

This has always been a unique aspect of being a fan of the show; there’s this chunk of the early show that’s just out of reach, stories where everyone knows what happened, but that no one has actually seen in fifty years.

And since the BBC got serious about preserving its own archive in the late 70s, followed by a rash of rediscoveries in the early 80s, lost shows have slowly trickled in. One of the bedrocks of being a Doctor Who fan is that there is always a rumor circulating about a recovered episode.

Whether true or not, it’s a widely held belief that there are still “lost” episodes in the hands of private collectors, and for a long time it was also widely believed that there “had to” be more film cans out there, lost, misplaced, sitting in the bottom of a locked filing cabinet in a disused lavatory. So a lot of people have been poking around in basements over the last 40 years, doing the hard work to see if they can dig up some more lost TV.

So missing episode rumors have a strange energy around them. First, what more is there to say? Beyond “which do you hope it is?” there isn’t a lot to talk about from the perspective of a fan out of the loop of any real recovery efforts. But the other thing is that it’s a widely held belief that any chatter out on the internet or in fan circles could “spook” any private collector negotiating to return what amounts to highly valuable stolen property. So, there’s always been pressure to not actually talk about the rumors; not an Omerta, but it’s considered in poor taste to risk a potential recovery because you couldn’t stay off twitter. It’s unclear if a recovery of Who or any other BBC show has actually been scuttled due to excited fans being loose-lipped on the internet, but the fan social contract remains: just keep it at a low volume.

In early 2013, there started to be whispers out on the internet that maybe someone had found something. Now, I’m not particularly tapped in to the underground or anything, so for something to make it up to my level it has to have been churning for a while. Lots of “I can’t say anything more, but there should be some good news later this year!”, trying to keep just inside the threshold of talking about it too loud.

To add some period color, this was also very close to when the rumors started that David Lynch might actually be doing more Twin Peaks. I have weirdly clear memories of this, since I had just changed jobs and had not yet cultivated a group of nerds to talk about these kinds of things with, so I found myself sitting on the two most interesting genre rumors in recent memory with no one to talk to, instead just poking around the deep fora on the web over lunches by myself.

But, again, there’s always a rumor circulating, and this was the start of the big 50th anniversary year, and it seemed too perfect that someone had managed to time a one-in-a-decade happenstance for when it would have the most commercial impact.

But, unlike a lot of missing episode rumors, this one kept emitting smoke, splitting into two distinct branches. The first was that someone had found a huge cache of film, encompassing nearly every missing Doctor Who episode along with a host of other 60s-era BBC shows. The second was more restrained, claiming that three stories had been recovered: The Web of Fear, The Enemy of the World, and Marco Polo. There were, of course, any number of sub-variants and weird contradictory details. The whole situation soon became nicknamed “The Omnirumor.”

Every version of this seemed too good to be true; fan fantasizing for the 50th anniversary. Especially The Web of Fear, which was always at the top of everyone’s wishlist (yours truly included) for what you would hope is found. For various reasons, Marco Polo had the most copies made, so it always ends up in any rumor mill as the one most likely to be found, despite stubbornly refusing to exist for five decades and counting. Enemy of the World was a little more idiosyncratic, but still part of the terribly under-surviving season 5.

And a cache? Seemed absurd. The last time more than one half-hour episode was found at a time was Tomb of the Cybermen in 1992. Since then there had been three standalone episodes found? The idea that there were still piles of film cans somewhere in the 2010s seemed like the height of wishful thinking.

But the rumor mill kept churning, eventually breaking out of the deep nerd corners of the web. I missed the exact anniversary day due to being distracted by cyber goggles, but for my money the moment it broke out into the mainstream, or at least the mainstream of the nerd web, was when it hit the front page of Bleeding Cool. From there, it was a short jump to, if you will, “real” news.

This pretty badly violated the “don’t talk about missing episodes too loudly” rule. This made a bunch of people upset, which made a bunch of other people more upset, and proceeded to become an Internet Fan kerfuffle. But the whole thing seemed absurd, because the core claim was preposterous. There was no way there was still an undiscovered cache of multiple film cans sitting around. Fan wishful thinking gone nuclear.

Anyway, imagine our collective surprise when the BBC announced they had recovered The Enemy of the World and (most of) The Web of Fear.

(I can’t find it now, but I remember somewhere on the web someone’s initial shocked response to the news was to blurt “what happened to Marco Polo?” Which then someone else immediately responded to by posting a youtube link to Meat Loaf singing “two out of three ain’t bad.”)

The details of the find, and who found it and how—and why it was only most of The Web of Fear—are well documented elsewhere, but the upshot was someone really did find a cache of missing TV, sitting abandoned in the back of a local TV station in Nigeria. Knowing what really happened, you can look back and, if you squint, sort of see what information must have leaked out when to cause the various flavors of the Omnirumor to take shape.

And what an absolute treat. I’d read the novel of The Web of Fear probably a dozen times as a kid, watched the reconstruction, watched the one surviving episode and tried to imagine what the rest might have looked like. Never did I ever think I would actually get to see it. And there it was, come October, sitting in iTunes.

Web of Fear was one of those stories that had a single part of 6 surviving: the first. I’d seen that first episode more than once, and it was the strangest feeling to sit down to watch and have “Episode 2” appear on the screen.

There’s always a hint of hesitation when one of these stories is actually recovered. I mean, we are talking about a low budget (mostly) kids show from the mid 60s, here. Decades of imagining the best possible version of something tends to crash rather badly into the reality of what the show really was. The poster child for this is Tomb of the Cybermen, which was always hailed as one of the best Doctor Who stories of all time, and then in 1992 we finally got to see it, and the reality was that it looked cheap even by the standards of the times, the plot made next-to-no sense, and there was way more casual racism than anyone expected. Turns out, the novel had papered over a lot of shortcomings. Overnight, it went from “best of the 60s” to, “it’s fine, I guess, but let me warn you about a couple things…”

That’s not what happened with The Web of Fear, though. The premise is bonkers even by Doctor Who standards—robot Yeti with web guns have taken over London, and the Doctor teams up with an Army team hiding in the London Underground to fight them off. Across the board, it just works. Where the BBC budget struggles with other planets or space ships, it can do a fantastic Underground tunnel. And the camerawork and direction around the Yeti keeps them strange and uncanny where they could easily become silly. There’s a part about 2/3 of the way through the story where a group of soldiers have to venture up to the Yeti-controlled city to find some parts, and get ambushed by the monsters. And even that works! It manages to find a “kid-friendly Aliens” tone where the soldiers get absolutely wrecked as more and more monsters emerge, and it manages to do this without ever descending into farce. Remarkable.

And then on top of all that, Enemy of the World, which wasn’t at the top of anyone’s wishlist, turned out to be an absolute classic that we basically had never noticed. On paper it seemed very dull and slow moving, but it turns out you really needed to see what the actors were doing to appreciate it.

The whole experience was like being a kid at Christmas, being surprised and delighted by a present that you didn’t even know was possible.

But I digress. Ten years ago in August, we didn’t know what was coming. All we knew was that the rumor mill was going into overdrive, we didn’t know what was really going on, and so we all hoped.

And sometimes, crazy rumors and hopes turn out to be true.

Gabriel L. Helman

Achewood is back!

Pleased beyond words that Achewood has returned from what turned out to not be a permanent hiatus after all.

Achewood was one of the very best webcomics from that era between the dot com crash and the web shrinking to five websites full of screenshots of the other four where you could put art on the net for free and then actually pay rent by selling t-shirts.

The Verge has a nice writeup and interview with Chris Onstad, the creator, talking about why he stopped and what caused him to come back.

It’s on Patreon now, which from the outside seems like it might have been the missing piece to making a living putting art on the internet.

Not only is it back, but it’s as good as it ever was. Clearly taking an extended sabbatical was worth it; Onstad hit the ground running and has been turning out bangers every week with the same voice the strip always had.

But, at the same time, it’s clearly being written by an older person with a different perspective. There is this additional note, where there’s a hint of Onstad stepping back onto the stage and looking around at his early-00s contemporaries, asking, “guys, what’s gonna happen to all this stuff we made in our twenties?” I won’t spoil it, but the new Achewood provides an answer that is extremely in character, while also informed by decades of experience.

Easiest 14 bucks a month I’ve ever spent.

Gabriel L. Helman

Apple Vision Pro: New Ways to be Alone

A man sits alone in an apartment. The apartment is small, and furnished with modern-looking inexpensive furniture. The furniture looks new, freshly installed. This man is far too old to be sitting in a small, freshly furnished apartment for any good or happy reason. Newly divorced? He puts on his Apple Vision Pro(tm) headset. He opens the photos app, and looks on as photos of his children fill the open space of an apartment no child has ever lived in. “Relive your happiest memories,” intones the cheerful narrator. The apartment is silent. It is one of the most quietly devastating short films I have ever seen. Apple Inc made this movie hoping it would convince you to buy their new headset. I am now hoping this man is only divorced, and not a widower. There is hope, because the fact that he has spent $3,500 on a headset strongly indicates he himself is the biggest tragedy in his own life.

The year is 2023. Apple would like to sell you a new way to be alone.


And there it is, the Apple Vision Pro. The hardware looks incredible. The software looks miraculous. The product is very, very strange.

Back when I worked in the Space Glasses racket, I used to half-joke that space glasses designers should just own how big the thing has to be and make them look like cyberpunk 80s ski goggles. Apple certainly leaned into that—not Space Glasses, but Cyber Goggles.

Let’s start with the least interesting thing: the price. “Does Tim Apple really expect me to pay 3,500 bucks for cyber goggles?” No, he literally doesn’t. More so than any other Apple product in recent memory, this is a concept car. The giveaway is the name: this is the Apple Vision Pro. The goal is to try things out and build up anticipation, so that in three years when they release the Apple Vision Air for 1,800 bucks they’ll sell like hotcakes.

Apple being Apple, of course, figured out a way to sell their concept car at retail.

Its status as a concept car goes a long way towards explaining many—but not all—of the very strange things about this product.

From a broad hardware/software features & functionality perspective, this is close to what we were expecting. AR/Mixed Reality as the default operating mode, apps and objects appearing as if they were part of the real-life environment, hand gesture control, a focus on experiences and enhanced productivity, with games getting only a passing glance.

Of course, there were several things I did find surprising.

First, I didn’t expect it to be a standalone unit; I was really expecting a “phone accessory” like the Watch (or, arguably, what the Apple TV was to begin with). But no, for all intents and purposes, there’s an entire laptop jammed into a pair of goggles. That’s a hell of an impressive feat of industrial engineering.

I was certainly not expecting the “external screen showing your eyes.” That got rumored, and I dismissed it out of hand, because that’s crazy. But okay, as implemented, now I can see what they were going for.

One of the biggest social problems with space glasses—or cyber goggles—is how you as the operator can communicate to other people that you’re paying attention to cyberspace as opposed to meat space. Phones, laptops, books all solve this the same way—you point your face at them and are clearly looking at the thing, instead of the people around you.

Having the screen hide your eyes while in cyberspace certainly communicates which mode the operator is in and solves the “starting a fight by accident” problem.

Using eye tracking as a key UI interaction shouldn’t have been surprising, but was. I spent that whole part of the keynote slapping my forehead; _of course! Of course that’s how that would work!_

I expected games to get short shrift, but the lack of any sort of VR gaming attention at all really surprised me. Especially given that in the very same keynote they had actual real-life KOJIMA announcing that Death Stranding was coming to the Mac! Gaming is getting more attention at Apple than it’s gotten in years, and they just… didn’t talk about that with the headset?

Also strange was the lack of new “spatial” UIs? All the first-party Apple software they showed was basically the same as on the Mac or iOS, just in a window floating in space. By comparison, when the Touch Bar launched, they went out of their way to show what every app they made used it for, from the useful (Final Cut’s scrub timeline, emoji pickers, predictive text options) to the mediocre (Safari’s tabs). Or Force Touch on the iPhone, for “right click” menus in iOS. Here? None of that. This is presumably a side effect of Apple’s internal secrecy and the schedule being such that they needed to announce it at the dev conference half a year before it shipped, but it’s strange. I was expecting at least a Final Cut Pro spatial interface that looks like an old-school Moviola, given they just ported FCP X to the iPad, and therefore presumably, the Vision.

Maybe the software group learned from all the time they poured into the Touch Bar & Force Touch. Or more likely, this was the first time most of the internal app dev groups got to see the new device, and they are starting their UI designs now, to be ready for release with the device next year.

And so, if I may be so crude as to grade my own specific predictions:

  1. Extremely aware of its location in physical space, more so than just GPS, via both LIDAR and vision processing. Yes.
  2. Able to project UI from phone apps onto a HUD. Nope! Turns out, it runs locally!
  3. Able to download new apps by looking at a visual code. Unclear? Presumably this will work?
  4. Hand tracking and handwriting recognition as a primary input paradigm. Yes, although I missed the eye tracking. And a much stronger emphasis on voice input than I expected, although it’s obvious in retrospect.
  5. Spacial audio. Yes.
  6. Able to render near-photoreal "things" onto a HUD blended with their environment. Heck yes.
  7. Able to do real-time translation of languages, including sign language. Unclear at this time. Maybe?

But okay! Zooming out, they really did it—they built Tony Stark’s sunglasses. At least, as close as the bleeding edge of technology can get you here in 2023. It’s only going to get lighter and smaller from here on out.

And here’s the thing: this is clearly going to be successful. The median response from the people who got hands-on time last week has been very positive. It might not fly off the shelves, but it’ll do at least as well as the new Mac Pro, whose whole selling point is the highly advanced technology of “PCI slots”.

By the time the Apple Vision Air ships in 2027, they’ll have cut the weight and size of the goggles, and there’s going to be an ecosystem built up from developers figuring out how to build a Spatial UI for the community of early adopters.

I’m skeptical the Cyber Goggles form factor will replace the keyboard-screen laptop or iPhone as a daily driver, but this will probably end up with sales somewhere around the iPad Pro at the top of the B-tier, beloved by a significant but narrow user base.


But all that’s not even remotely the most interesting thing. The most interesting thing is the story they told.

As usual, Apple showed a batch of filmed demos and ads demonstrating “real world” use, representing their best take on what the headset is for.

Apple’s sweet spot has always been “regular, creative people who have things to do that they’d like to make easier with a computer.” Not “computers for computer’s sake”—that’s *nix, not “big enterprise capital-W Work”—that’s Windows. But, regular folks, going about their day, their lives being improved by some piece of Apple kit.

And their ads & demos always lean into the aspirational nature of this. Attractive young people dancing to fun music from their iPods! Hanging out in cool coffee shops with their MacBooks! Creative pros working on fun projects in a modern office with colorful computers! Yes! That all looks fun! I want to be those people!

Reader, let me put my cards directly out on the table: I do not want to be any of the people in the Apple Vision demos.

First, what kind of work are these people doing? Other than watching movies, they’re doing—productivity software? Reviewing presentations, reading websites, light email, checking messages. Literally Excel spreadsheets. And meetings. Reviewing presentations in a meeting. Especially for Apple, this is a strangely “corporate” vision of the product.

But more importantly, where are they? Almost always, they’re alone.

Who do we see? A man, alone, looking at photos. A woman, alone in her apartment, watching a movie. Someone else, alone in a hotel room, reviewing a work presentation with people who are physically elsewhere. Another woman, alone in a hotel room, using FaceTime to talk to someone (her mother?). “I miss you!” she says in one of the few audible pieces of dialog. A brief scene of someone playing an Apple Arcade game, alone in a dark room. A man in an open floor-plan office, reading webpages and email, turns the dial to hide his eyes from his coworkers. A woman on a flight pulls her headset on to tune out the other people on the plane.

Alone, alone, alone.

Almost no one is having fun. Almost no one is happy to be where they are. They’re doing Serious Work. Serious, meaning no one is creating anything, just reviewing and responding. Or consuming. Consuming, and wishing they were somewhere, anywhere, else.

It’s a sterile, corporate vision of computing, where we use computers to do, basically, what IBM would have imagined in the 1970s. A product designed _by_ and for upper middle management at large corporations. Work means presentations, spreadsheets, messages, light email.

Sterile, and with a grim undercurrent of “we know things are bad. We know you can’t afford an apartment big enough for the TV you want, or get her to take you back, or have the job you wanted. But at least you can watch Avatar while pretending to be on top of a mountain.”

And with all these apps running on the space glasses, no custom UIs. Just, your existing apps floating in a spectral window, looking mostly the same.

Effectively, no games. There was a brief shot of someone playing something with a controller in a hovering window? But nothing that used the unique capabilities of the platform. No VR games. No Beat Saber, no No Man’s Sky, no Superhot, no Half-Life: Alyx. Even by Apple standards, this is a poor showing.

Never two headsets in the same place. Just one, either alone, or worn by someone trying to block out their surroundings.

The less said about the custom deepfake FaceTime golems, the better.

And all this takes place in a parallel world untouched by the pandemic. We know this product was already well along before anyone had heard of COVID, and it’s clear that the last three years didn’t change much about what they wanted to build. This is a product for a world where “Remote Work” means working from a hotel on a trip to visit the customer. The absolute best use case for the product they showed was enabling Work From Home in apartments too small to have a dedicated office space, but Apple is making everyone come back to the office, so they can’t even acknowledge that use.

There are ways to be by yourself without being alone. They could have shown a DJ prepping their next set, a musician recording music, an artist building 3D models for a game. Instead, they chose presentations in hotel rooms and photos in dark, empty apartments.


I want to end the same way they ended the keynote, with that commercial. A dad with long hair is working while making his daughter toast. This is more like it! I am this Dad! I’ve done exactly this! With close to that hair!

And by the standards they’d already set, this is much better! He’s interacting with his kids while working. He’s working on his Surf Shop! By which we mean he’s editing a presentation to add some graphics that were sent to him.

But.

That edit couldn’t wait until you made your kid toast? It’s toast, it doesn’t take that long. And he’s not designing a surfboard, he’s not even building a presentation about surfboards, he’s just adding art someone sent him to a presentation that already exists.

His kid is staring at a screen with a picture of her dad’s eyes, not the real thing. And not to put too fine a point on it, but looking at his kid without space glasses in the way is the moment Darth Vader stopped being evil. Tony Stark took his glasses off when he talked to someone.

I can already do all that with my laptop. And when I have my laptop in the kitchen, when my daughter asks what I’m working on, I can just gesture to the screen and show her. I can share.

This is a fundamentally isolating view of computing, one where we retreat into unsharable private worlds, where our work email hovers menacingly over the kitchen island.

No one ever looks back at their life and thinks, “thank goodness I worked all those extra hours instead of spending time with my kids.” No one looks back and celebrates the times they made a presentation at the same time as lunch. No one looks back and smiles when they think of all the ways work has wormed into every moment, eroding our time with our families and friends, making sure we were never present, but always thinking about the next slide, the next tab, the next task.

No one will think, “thank goodness I spent three thousand five hundred dollars so I had a new way to be alone.”
