Gabriel L. Helman

Why is this Happening, Part III: Investing in Shares of a Stairway to Heaven

Previously: Part I, Part II.

We’ve talked a lot about “The AI” here at Icecano, mostly in terms ranging from “unflattering” to “extremely unflattering.” Which is why I’ve found myself stewing on this question the last few months: Why is this happening?

The easy answer is that, for starters, it’s a scam, a con. That goes hand-in-hand with it also being a hype-fueled bubble, which is finally starting to show signs of deflating. We’re not quite at the “Matt Damon in Super Bowl ads” phase yet, but I think we’re closer than not to the bubble popping.

Fad-tech bubbles are nothing new in the tech world; in recent memory we had similar grifts around the metaverse, blockchain & “web3”, “quantum”, and self-driving cars. (And a whole lot of those bubbles had the same people behind them as the current one around AI. Lots of the same datacenters full of GPUs, too!) I’m also old enough to remember similar bubbles around things like BitTorrent, “4GL languages”, two or three cycles of VR, and 3D TV.

This one has been different, though. There’s a viciousness to the boosters, a barely contained glee at the idea that this will put people out of work, which has been matched in intensity by the pushback. To put all that another way, when ELIZA came out, no one from MIT openly delighted at the idea that they were about to put all the therapists out of work.

What is it about this one, though? Why did this ignite in a way that those others didn’t?

A sentiment I see a lot, as a response to AI skepticism, is to say something like “no no, this is real, it’s happening.” And the correct response to that is to point out that asbestos pajamas really didn’t catch fire, either. Then what happened? Just because AI is “real” doesn’t mean it’s “good”. Those mesothelioma ads aren’t running because asbestos wasn’t real.

(Again, these tend to be the same people who a few years back had a straight face when they said they were “bullish on bitcoin.”)

But I think there’s another sentiment I see a lot that’s standing behind that one: that this is the “last new tech we’ll see in our careers”. This tends to come from younger Xers & elder Millennials, folks who were just slightly too young to make it rich in the dot com boom, but old enough that they thought they were going to.

I think this one is interesting, because it illuminates part of how things have changed. From the late 70s through sometime in the 00s, new stuff showed up constantly, and more importantly, the new stuff was always better. There’s a joke from the 90s that goes like this: Two teams each developed a piece of software that didn’t run well enough on home computers. The first team spent months sweating blood, working around the clock to improve performance. The second team went and sat on a beach. Then, six months later, both teams bought new computers. And on those new machines, both systems ran great. So who did a better job? Who did a smarter job?

We all got absolutely hooked on the dopamine rush of new stuff, and it’s easy to see why; I mean, there were three extra verses of “We Didn’t Start the Fire” just in the 90s alone.

But a weird side effect is that, as a culture of practitioners, we never really learned how to tell if the new thing was better than the old thing. This isn’t a new observation; Microsoft figured out how to weaponize this early on as Fire And Motion. And I think this has really driven the software industry’s tendency towards “fad-oriented development”: we never built up a herd immunity to shiny new things.

A big part of this, of course, is that the tech press profoundly failed. A completely un-skeptical, overly gullible press that was infatuated with shiny gadgets foisted a whole parade of con artists and scamtech on all of us, abdicating any duty it had to investigate accurately instead of just laundering press releases. The Professionally Surprised.

And for a long while, that was all okay, the occasional CueCat notwithstanding, because new stuff generally was better, and even if it was only marginally better, there was often a lot of money to be made by jumping in early. Maybe not “private island” money, but at least “retire early to the foothills” money.

But then somewhere between the Dot Com Crash and the Great Recession, things slowed down. Those two events didn’t help much, but also somewhere in there “computers” plateaued at “pretty good”. Mobile kept the party going for a while, but then that slowed down too.

My Mom tells a story about being a teenager while the Beatles were around, and how she grew up in a world where every nine months pop music was reinvented, like clockwork. Then the Beatles broke up, the 70s hit, and that all stopped. And she’s pretty open about how much she misses that whole era; the heady “anything can happen” rush. I know the feeling.

If your whole identity and worldview about computers as a profession is wrapped up in diving into a Big New Thing every couple of years, it’s strange to have it settle down a little. To maintain. To have to assess. And so it’s easy to find yourself grasping for what the Next Thing is, to try and get back that feeling of the whole world constantly reinventing itself.

Missing the heyday of the PC boom isn’t the reason that AI took off. But it provides a pretty good set of excuses to cover the real reasons.

Is there a difference between “The AI” and “Robots?” I think, broadly, the answer is “no;” but they’re different lenses on the same idea. There is an interesting difference between “robot” (we built it to sit outside in the back seat of the spaceship and fix engines while getting shot at) and “the AI” (write my email for me), but that’s more about evolving stories about which is the stuff that sucks than a deep philosophical difference.

There’s a “creative” vs “mechanical” difference too. If we could build an artificial person like C-3PO I’m not sure that having it wash dishes would be the best or most appropriate possible use, but I like that as an example because, rounding to the nearest significant digit, that’s an activity no one enjoys, and as an activity it’s not exactly a hotbed of innovative new techniques. It’s the sort of chore it would be great if you could just hand off to someone. I joke this is one of the main reasons to have kids, so you can trick them into doing chores for you.

However, once “robots” went all-digital and became “the AI”, they started having access to this creative space instead of the physical-mechanical one, and the whole field backed into a moral hazard I’m not sure they noticed ahead of time.

There’s a world of difference between “better clone stamp in photoshop” and “look, we automatically made an entire website full of fake recipes to farm ad clicks”; and it turns out there’s this weird grifter class that can’t tell the difference.

Gesturing back at a century of science fiction thought experiments about robots, being able to make creative art of any kind was nearly always treated as an indicator that the robot wasn’t just “a robot.” I’ll single out Asimov’s Bicentennial Man as an early representative example—the titular robot learns how to make art, and this both causes the manufacturer to redesign future robots to prevent this happening again, and sets him on a path towards trying to be a “real person.”

We make fun of the Torment Nexus a lot, but it keeps happening—techbros keep misunderstanding the point behind the fiction they grew up on.

Unless I’m hugely misinformed, there isn’t a mass of people clamoring to wash dishes, kids don’t grow up fantasizing about a future in vacuuming. Conversely, it’s not like there’s a shortage of people who want to make a living writing, making art, doing journalism, being creative. The market is flooded with people desperate to make a living doing the fun part. So why did people who would never do that work decide that was the stuff that sucked and needed to be automated away?

So, finally: why?

I think there are several causes, all tangled.

These causes are adjacent to but not the same as the root causes of the greater enshittification—excuse me, “Platform Decay”—of the web. Nor are we talking about the largely orthogonal reasons why Facebook is full of old people being fooled by obvious AI glop. We’re interested in why the people making these AI tools are making them. Why they decided that this was the stuff that sucked.

First, we have this weird cultural stew where creative jobs are “desired” but not “desirable”. There’s a lot of cultural cachet around being a “creator” or having a “creative” job, but not a lot of respect for the people actually doing them. So you get the thing where people oppose the writers’ strike because they “need” a steady supply of TV, but the people who make it don’t deserve a living wage.

Graeber has a whole bit adjacent to this in Bullshit Jobs. Quoting the originating essay:

It's even clearer in the US, where Republicans have had remarkable success mobilizing resentment against school teachers, or auto workers (and not, significantly, against the school administrators or auto industry managers who actually cause the problems) for their supposedly bloated wages and benefits. It's as if they are being told ‘but you get to teach children! Or make cars! You get to have real jobs! And on top of that you have the nerve to also expect middle-class pensions and health care?’

“I made this” has cultural power. “I wrote a book,” “I made a movie,” are the sort of things you can say at a party that get people to perk up; “oh really? Tell me more!”

Add to this thirty-plus years of pressure to restructure public education around “STEM”, because those are the “real” and “valuable” skills that lead to “good jobs”, as if the only point of education was as a job training program. A very narrow job training program, because again, we need those TV shows but don’t care to support new people learning how to make them.

There’s always a class of people who think they should be able to buy anything; any skill someone else has acquired is something they should be able to purchase. This feels like a place I could put several paragraphs that use the word “neoliberalism” and then quote from Ayn Rand, The Incredibles, or Led Zeppelin lyrics depending on the vibe I was going for, but instead I’m just going to say “you know, the kind of people who only bought the Cliffs Notes, never the real book,” and trust you know what I mean. The kind of people who never learned the difference between “productivity hacks” and “cheating”.

The sort of people who only interact with books as a source of isolated nuggets of information, the kind of people who look at a pile of books and say something like “I wish I had access to all that information,” instead of “I want to read those.”

People who think money should count for at least as much as, if not more than, social skills or talent.

On top of all that, we have the financialization of everything. Hobbies for their own sake are not acceptable; everything has to be a side hustle. How can I use this to make money? Why is this worth doing if I can’t do it well enough to sell it? Is there a bootcamp? A video tutorial? How fast can I start making money at this?

Finally, and critically, I think there’s a large mass of people working in software that don’t like their jobs and aren’t that great at them. I can’t speak for other industries first hand, but the tech world is full of folks who really don’t like their jobs, but they really like the money and being able to pretend they’re the masters of the universe.

All things considered, “making computers do things” is a pretty great gig. In the world of Professional Careers, software sits at the sweet spot of “amount you actually have to know & how much school you really need” vs “how much you get paid”.

I’ve said many times that I feel very fortunate that the thing I got super interested in when I was twelve happened to turn into a fully functional career when I hit my twenties. Not everyone gets that! And more importantly, there are a lot of people making those computers do things who didn’t get super interested in computers when they were twelve, because the thing they got super interested in doesn’t pay for a mortgage.

Look, if you need a good job, and maybe aren’t really interested in anything specific, or at least in anything that people will pay for, “computers”—or computer-adjacent—is a pretty sweet direction for your parents to point you. I’ve worked with more of these than I can count—developers, designers, architects, product people, project managers, middle managers—and most of them are perfectly fine people, doing a job they’re a little bored by, and then they go home and do something that they can actually self-actualize about. And I suspect this is true for a lot of “sit down inside email jobs”: there’s a large mass of people whose job, in a just universe, would be “beach” or “guitar” or “games”, but instead they gotta help knock out front-end web code for a mid-list insurance company. Probably most careers are like that; there’s the one accountant that loves it, and then a couple other guys counting down the hours until their band’s next unpaid gig.

But one of the things that makes computers stand out is that those accountants all had to get certified. The computer guys just needed a bootcamp and a couple weekends worth of video tutorials, and suddenly they get to put “Engineer” on their resume.

And let’s be honest: software should be creative, usually is marketed as such, but frequently isn’t. We like to talk about software development as if it’s nothing but innovation and “putting a dent in the universe”, but the real day-to-day is pulling another underwritten story off the backlog that claims to be easy but is going to take a whole week to write one more DTO, or web UI widget, or RESTful API that’s almost, but not quite, entirely unlike the last dozen of those. Another user-submitted bug caused by someone doing something stupid that the code, written badly and shipped early, couldn’t handle. Another change to government regulations that’s going to cause a remodel of the guts of this thing, which somehow manages to be a surprise despite the fact the law was passed before anyone in this meeting even started working here.

They don’t have time to learn how that regulation works, or why it changed, or how the data objects were supposed to happen, or what the right way to do that UI widget is—the story is only three points, get it out the door or our velocity will slip!—so they find something they can copy, slap something together, write a test that passes, ship it. Move on to the next. Peel another one off the backlog. Keep that going. Forever.

And that also leads to this weird thing software has where everyone is just kind of bluffing everyone all the time, or at least until they can go look something up on Stack Overflow. No one really understands anything; just gotta keep the feature factory humming.

The people who actually like this stuff, who got into it because they liked making computers do things for their own sake, keep finding ways to make it fun, or at least different. “Continuous Improvement,” we call it. Or, you know, they move on, leaving behind all those people whose twelve-year-old selves would be horrified.

But then there’s the group that’s in the center of the Venn Diagram of everything above. All this mixes together, and in a certain kind of reduced-empathy individual, manifests as a fundamental disbelief in craft as a concept. Deep down, they really don’t believe expertise exists. That “expertise” and “bias” are synonyms. They look at people who are “good” at their jobs, who seem “satisfied” and are jealous of how well that person is executing the con.

Whatever they were into at twelve didn’t turn into a career, and they learned the wrong lesson from that. The kind of people who were in a band as a teenager and then spent the years since as a management consultant, and think the only problem with that is that they ever wanted to be in a band, instead of being mad that society has more open positions for management consultants than bass players.

They know which is the stuff that sucks: everything. None of this is the fun part; the fun part doesn’t even exist; that was a lie they believed as a kid. So they keep trying to build things where they don’t have to do their jobs anymore but still get paid gobs of money.

They dislike their jobs so much, they can’t believe anyone else likes theirs. They don’t believe expertise or skill is real, because they have none. They think everything is a con because that’s what they do. Anything you can’t just buy must be a trick of some kind.

(Yeah, the trick is called “practice”.)

These aren’t people who think that critically about their own field, which is another thing that happens when you value STEM over everything else, and forget to teach people ethics and critical thinking.

Really, all they want to be are “Idea Guys”, tossing off half-baked concepts, surrounded by people they don’t have to respect and who won’t talk back, who will figure out how to make a functional version of their ill-formed ramblings. That they can take credit for.

And this gets to the heart of what’s so evil about the current crop of AI.

These aren’t tools built by the people who do the work to automate the boring parts of their own work; these are built by people who don’t value creative work at all and want to be rid of it.

As a point of comparison, the iPod was clearly made by people who listened to a lot of music and wanted a better way to do so. Apple has always been unique in the tech space in that it works more like a consumer electronics company: the vast majority of its products are clearly made by people who would themselves be enthusiastic customers. In this field we talk about “eating your own dog food” a lot, but if you’re writing a claims processing system for an insurance company, there’s only so far you can go. Making a better digital music player? That lets you think different.

But no: AI is all being built by people who don’t create, who resent having to create, who resent having to hire people who can create. Beyond even “I should be able to buy expertise” and into “I value this so little that I don’t even recognize this as a real skill”.

One of the first things these people tried to automate away was writing code—their own jobs. These people respect skill, expertise, craft so little that they don’t even respect their own. They dislike their jobs so much, and respect their own skills so little, that they can’t imagine that someone might not feel that way about their own.

A common pattern has been how surprised the techbros have been at the pushback. One of the funnier (in a laugh so you don’t cry way) sideshows is the way the techbros keep going “look, you don’t have to write anymore!” and every writer everywhere is all “ummmmm, I write because I like it, why would I want to stop” and then it just cuts back and forth between the two groups saying “what?” louder and angrier.

We’re really starting to pay for the fact that our civilization spent 20-plus years shoving kids that didn’t like programming into the career because it paid well and you could do it sitting down inside and didn’t have to be that great at it.

What future are they building for themselves? What future do they expect to live in, with this bold AI-powered utopia? Some vague middle-management “Idea Guy” economy, with the worst people in the world summoning books and art and movies out of thin air for no one to read or look at or watch, because everyone else is doing the same thing? A web full of AI slop made by and for robots trying to trick each other? Meanwhile the dishes are piling up? That’s the utopia?

I’m not sure they even know what they want, they just want to stop doing the stuff that sucks.

And I think that’s our way out of this.

What do we do?

For starters, AI Companies need to be regulated, preferably out of existence. There’s a flavor of libertarian-leaning engineer that likes to say things like “code is law,” but actually, turns out “law” is law. There’s whole swathes of this that we as a civilization should have no tolerance for; maybe not to a full Butlerian Jihad, but at least enough to send deepfakes back to the Abyss. We dealt with CFCs and asbestos, we can deal with this.

Education needs to be less STEM-focused. We need to carve out more career paths (not “jobs”, not “gigs”, “careers”) that have the benefits of tech but aren’t tech. And we need to furiously defend and expand spaces for creative work to flourish. And for that work to get paid.

But those are broad, society-wide changes. What can those of us in the tech world actually do? How can we help solve these problems in our own little corners? What can we go into work tomorrow and actually do?

It’s on all of us in the tech world to make sure there’s less of the stuff that sucks.

We can’t do much about the lack of jobs for dance majors, but we can help make sure those people don’t stop believing in skill as a concept. Instead of assuming what we think sucks is what everyone thinks sucks, is there a way to make it not suck? Is there a way to find a person who doesn’t think it sucks? (And no, I don’t mean “Uber for writing my emails”) We gotta invite people in and make sure they see the fun part.

The actual practice of software has become deeply dehumanizing. None of what I just spent a week describing is the result of healthy people working in a field they enjoy, doing work they value. This is the challenge we have before us: how can we change course so that the tech industry doesn’t breed this? Those of us that got lucky at twelve need to find new ways to bring along the people who didn’t.

With that in mind, next Friday on Icecano we start a new series on growing better software.


Several people provided invaluable feedback on earlier iterations of this material; you all know who you are and thank you.

And as a final note, I’d like to personally apologize to the one person who I know for sure clicked Open in New Tab on every single link. Sorry man, they’re good tabs!

Gabriel L. Helman

Why is this Happening, Part II: Letting Computers Do The Fun Part

Previously: Part I

Let’s leave the Stuff that Sucks aside for the moment, and ask a different question. Which Part is the Fun Part? What are we going to do with this time the robots have freed up for us?

It’s easy to get wrapped up in pointing at the parts of living that suck; especially when fantasizing about assigning work to C-3PO’s cousin. And it’s easy to spiral to a place where you just start waving your hands around at everything.

But even Bertie Wooster had things he enjoyed, that he occasionally got paid for, rather than let Jeeves work his jaw for him.

So it’s worth recalibrating for a moment: which are the fun parts?

As aggravating as it can be at times, I do actually like making computers do things. I like programming, I like designing software, I like building systems. I like finding clever solutions to problems. I got into this career on purpose. If it was fun all the time they wouldn’t have to call it “work”, but it’s fun a whole lot of the time.

I like writing (obviously.) For me, that dovetails pretty nicely with liking to design software; I’m generally the guy who ends up writing specs or design docs. It’s fun! I owned the customer-facing documentation several jobs back. It was fun!

I like to draw! I’m not great at it, but I’m also not trying to make a living out of it. I think having hobbies you enjoy but aren’t great at is a good thing. Not every skill needs to have a direct line to a career or a side hustle. Draw goofy robots to make your kids laugh! You don’t need to figure out the monetization strategy.

In my “outside of work” life I think I know more writers and artists than programmers. For all of them, the work itself—the writing, the drawing, the music, making the movie—is the fun part. The parts they don’t like so well are the “figuring out how to get paid” part, or the dealing with printers part, or the weird contracts part. The hustle. Or, you know, the doing dishes, laundry, and vacuuming part. The “chores” part.

So every time I see a new “AI tool” release that writes text or generates images or makes video, I always ask the same question:

Why would I let the computer do the fun part?

The writing is the fun part! The drawing pictures is the fun part! Writing the computer programs is the fun part! Why, why, are they trying to tell us that those are the parts that suck?

Why are the techbros trying to automate away the work people want to do?

It’s fun, and I worked hard to get good at it! Now they want me to let a robot do it?

Generative AI only seems impressive if you’ve never successfully created anything. Part of what makes “AI art” so enragingly radicalizing is the sight of someone who’s never tried to create something before, never studied, never practiced, never put the time in, never really even thought about it, joylessly showing off their terrible AI slop they made and demanding to be treated as if they made it themselves, not that they used a tool built on the fruits of a million million stolen works.

Inspiration and plagiarism are not the same thing, the same way that “building a statistical model of word order probability from stuff we downloaded from the web” is not the same as “learning”. A plagiarism machine is not an artist.

But no, the really enraging part is watching these people show off this garbage and realizing that they can’t tell the difference. And AI art seems to be getting worse, AI pictures are getting easier to spot, not harder, because of course they are, because the people making the systems don’t know what good is. And the culture is following: “it looks like AI made it” has become the exact opposite of a compliment. AI-generated glop is seen as tacky, low quality. And more importantly, seen as cheap, made by someone who wasn’t willing to spend any money on the real thing. Trying to pass off Krusty Burgers as their own cooking.

These are people with absolutely no taste, and I don’t mean people who don’t have a favorite Kurosawa film, I mean people who order a $50 steak well done and then drown it in A1 sauce. The kind of people who, deep down, don’t believe “good” is real. That it’s all just “marketing.”

The act of creation is inherently valuable; creation is an act that changes the creator as much as anyone. Writing things down isn’t just documentation, it’s a process that allows and enables the writer to discover what they think, explore how they actually feel.

“Having AI write that for you is like having a robot lift weights for you.”

AI writing is deeply dehumanizing, to both the person who prompted it and to the reader. There is so much weird stuff to unpack from someone saying, in what appears to be total sincerity, that they used AI to write a book. That the part they thought sucked was the fun part, the writing, and left their time free for… what? Marketing? Uploading metadata to Amazon? If you don’t want to write, why do you want people to call you a writer?

Why on earth would I want to read something the author couldn’t be bothered to write? Do these ghouls really just want the social credit for being “an artist”? Who are they trying to impress, what new parties do they think they’re going to get into because they have a self-published AI-written book with their name on it? Talk about participation trophies.

All the people I know in real life or follow on the feeds who use computers to do their thing but don’t consider themselves “computer people” have reacted with a strong and consistent full-body disgust. Personally, compared to all those past bubbles, this is the first tech I’ve ever encountered where my reaction was complete revulsion.

Meanwhile, many (not all) of the “computer people” in my orbit tend to be at-least AI curious, lots of hedging like “it’s useful in some cases” or “it’s inevitable” or full-blown enthusiasm.

One side, “absolutely not”, the other side, “well, mayyybe?” As a point of reference, this was the exact breakdown of how these same people reacted to blockchain and bitcoin.

One group looks at the other and sees people musing about if the face-eating leopard has some good points. The other group looks at the first and sees a bunch of neo-luddites. Of course, the correct reaction to that is “you’re absolutely correct, but not for the reasons you think.”

There’s a Douglas Adams bit that gets quoted a lot lately, which was printed in Salmon of Doubt but I think was around before that:

I’ve come up with a set of rules that describe our reactions to technologies:

  1. Anything that is in the world when you’re born is normal and ordinary and is just a natural part of the way the world works.

  2. Anything that’s invented between when you’re fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it.

  3. Anything invented after you’re thirty-five is against the natural order of things.

The better-read AI-grifters keep pointing at rule 3. But I keep thinking of the bit from Dirk Gently’s Detective Agency about the Electric Monk:

The Electric Monk was a labour-saving device, like a dishwasher or a video recorder. Dishwashers washed tedious dishes for you, thus saving you the bother of washing them yourself, video recorders watched tedious television for you, thus saving you the bother of looking at it yourself; Electric Monks believed things for you, thus saving you what was becoming an increasingly onerous task, that of believing all the things the world expected you to believe.

So, what are the people who own the Monks doing, then?

Let’s speak plainly for a moment—the tech industry has always had a certain…. ethical flexibility. The “things” in “move fast and break things” wasn’t talking about furniture or fancy vases, this isn’t just playing baseball inside the house. And this has been true for a long time, the Open Letter to Hobbyists was basically Gates complaining that other people’s theft was undermining the con he was running.

We all liked to pretend “disruption” was about finding “market inefficiencies” or whatever, but mostly what that meant was moving into a market where the incumbents were regulated and labor had legal protection, and finding a way to do business there while ignoring the rules. Only a psychopath thinks “having to pay employees” is an “inefficiency.”

Vast chunks of what it takes to make generative AI possible are already illegal or at least highly unethical. The Internet has always been governed by a sort of combination of gentleman’s agreements and pirate codes, and in the hunger for new training data, the AI companies have sucked up everything, copyright, licensing, and good neighborship be damned.

There are some half-hearted attempts to combat AI via arguments that it violates copyright or open source licensing or some other legal approach. And more power to them! Personally, I’m not really interested in the argument that AI training data violates contract law, because I care more about the fact that it’s deeply immoral. See that Vonnegut line about “those who devised means of getting paid enormously for committing crimes against which no laws had been passed.” Much like I think people who drive too fast in front of schools should get a ticket, sure, but I’m not opposed to that because it’s illegal, I’m opposed because it’s dangerous to the kids.

It’s been pointed out more than once that AI breaks the deal behind webcrawlers and search—search engines are allowed to suck up everyone’s content in exchange for sending traffic their way. But AI just takes and regurgitates, without sharing the traffic, or even the credit. It’s the AI Search Doomsday Cult. Even Uber didn’t try to put car manufacturers out of business.

But beyond all that, making things is fun! Making things for other people is fun! It’s about making a connection between people, not about formal correctness or commercial viability. And then you see those terrible Google fan letter ads at the Olympics, or see people crowing that they used AI to generate a kids book for their children, and you wonder, how can these people have so little regard for their audience that they don’t want to make the connection themselves? That they’d rather give their kids something a jumped-up spreadsheet full of stolen words barfed out instead of something they made themselves? Why pass on the fun part, just so you can take credit for something thoughtless and tacky? The AI ads want you to believe that you need their help to find “the right word”; what they don’t tell you is that no you don’t, what you need to do is have fun finding your word.

Robots turned out to be hard. Actually, properly hard. You can read these papers by computer researchers in the fifties where they’re pretty sure Threepio-style robot butlers are only 20 years away, which seems laughable now. Robots are the kind of hard where the more we learn the harder they seem.

As an example: Doctor Who in the early 80s added a robot character who was played by the prototype of an actual robot. This went about as poorly as you might imagine. That’s impossible to imagine now; no producer would risk their production on a homemade robot today, no matter how impressive the demo was. You want a thing that looks like Threepio walking around and talking with a voice like a Transformer? Put a guy in a suit. Actors are much easier to work with. Even though they have a union.

Similarly, “General AI” in the HAL/KITT/Threepio sense has been permanently 20 years in the future for at least 70 years now. The AI class I took in the 90s was essentially a survey of things that hadn’t worked, and ended with a kind of shrug and “maybe another 20?”

Humans are really, really good at seeing faces in things, and finding patterns that aren’t there. Any halfway decent professional programmer can whip up an ELIZA clone in an afternoon, and even knowing how the trick works it “feels” smarter than it is. A lot of AI research projects are like that, a sleight-of-hand trick that depends on doing a lot of math quickly and on the human capacity to anthropomorphize. And then the self-described brightest minds of our generation fail the mirror test over and over.
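
To give a sense of how thin that trick is, here’s a minimal sketch of an ELIZA-style responder. The patterns and canned replies below are my own illustration, not Weizenbaum’s originals, but the whole mechanism really is about this much code: match a regex, echo some of the user’s words back.

```python
import random
import re

# Illustrative rules only: each pattern maps to canned "therapist" replies
# that echo captured words back at the user. No model of the conversation,
# no understanding, just string matching dressed up as a listener.
RULES = [
    (r"i need (.*)", ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (r"i am (.*)", ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (r"because (.*)", ["Is that the real reason?", "What other reasons come to mind?"]),
    (r"(.*)\?", ["Why do you ask that?", "What do you think?"]),
]
DEFAULTS = ["Please, go on.", "Tell me more about that.", "How does that make you feel?"]

def respond(text: str) -> str:
    """Match the input against each rule and fill in a canned reply."""
    cleaned = text.lower().strip().rstrip(".")
    for pattern, replies in RULES:
        match = re.match(pattern, cleaned)
        if match:
            return random.choice(replies).format(*match.groups())
    return random.choice(DEFAULTS)

if __name__ == "__main__":
    print(respond("I need a vacation"))     # e.g. "Why do you need a vacation?"
    print(respond("I am tired of my job"))  # e.g. "How long have you been tired of my job?"
```

That’s the whole act, and even knowing exactly what it’s doing, it still “feels” like it’s listening.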

Actually building a thing that can “think”? Increasingly seems impossible.

You know what’s easy, though, comparatively speaking? Building a statistical model of all the text you can pull off the web.

On Friday: conclusions, such as they are.

Gabriel L. Helman

Movies from This Year I Finally Saw: Dune Part 2

Spoilers Ahoy

The desert is beautiful in exactly the way that means it’s something that can kill you. It’s vast, and terrifying, and gorgeous. The only thing that compares is the sea; but the sea is totally alien, and to survive there, we have to bring tiny islands of our world with us. The desert allows no such vessels; it demands that we join it, live as it does.

My Dad grew up on the edge of the low desert, and I spent a lot of time as a kid there. I mention all this so that I can tell this story: when we watched the 2001 Sci-Fi channel version of Dune, the first time the Fremen arrived on screen my Dad burst into laughter; “Look how fat they are!” he roared, “they’ve never been in the desert a day in their lives!”

He did not say that when we watched the new movies.

And so yes, I finally saw the second half of Dune. I liked it. I liked it a lot. I think this has to go down as the new definitive example of how to turn a great book into a great movie (the examples for how to turn “decent-but-not-great” and “bad” books into great movies remain Jurassic Park and Jaws, respectively.)

It’s vast, it’s grand, it looks great, the acting is phenomenal, it’s fun, it’s exciting. It’s the sort of movie where you can list ten things about it at random and someone is likely to say “oh yeah, that was my favorite part.”

Denis Villeneuve has two huge advantages, and wastes neither. First, this is the third attempt at filming Dune, and as such he has a whole array of examples of things that do and don’t work. Second, he even has an advantage over Frank Herbert, in that unlike the author of the book, Villeneuve knows what’s going to happen in the next one, and can steer into it.

It’s immediately obvious that splitting the book into two movies was an even better idea than it first looked. While stretching a book-to-movie adaptation into two movies has become something of a cliche, Dune is different, if for no other reason than when they announced that they were going to make two films, literally everyone who’d ever read the book correctly guessed where the break was going to be.

But in addition to giving the story enough room to stretch out and get comfortable, the break between movies itself also turns out to have been a boost, because everyone seems more relaxed. The actors, who were all phenomenal in the first part, are better here, the effects are better, the direction is more interesting. Everyone involved clearly spent the two years thinking about “what they’d do next time” and it shows.

It looks great. The desert is appropriately vast and terrible and beautiful. The worms are incredible, landing both as semi-supernatural forces of nature and as clearly real creatures. All the stuff looks great; every single item or costume or set looks like it was designed by someone in that world for a reason. The movie takes the Star Wars/Alien lived-in-future aesthetic and runs with it; the Fremen gear looks battered and used, the Harkonnen stuff is a little too clean, the Imperial stuff is clean as a statement of power, the smooth mirrored globe of a ship hanging over the battered desert outpost.

The book casually mentions that Fremen stillsuits are the best but then doesn’t talk more about that; the movie revels in showing the different, worse protective gear everyone else wears. The Fremen stillsuits look functional, comfortable, the kind of thing you could easily wear all day. The various Harkonnen and Imperial and smuggler suits all look bulky and uncomfortable and impractical, more like space suits than clothes; the opening scene lingers on the cooling fans in the back of the Harkonnen stillsuits’ helmets, a group of soldiers in over their heads trying to bring a bubble of their world with them, and failing. In the end, those fans are all food for Shai-Hulud.

Every adaptation like this has an editorial quality; even with the expanded runtime we’re playing with here, the filmmakers have to choose what stays and what to cut. Generally, we tend to focus on what got left out, and there’s plenty that’s not here (looking at you, The Spacing Guild.) But oftentimes, the more interesting subject is what they choose to leave in, what to focus on. One detail Villeneuve zooms in on here is that everyone in this movie is absolutely obsessed with something.

Stilgar is obsessed with the idea that his religion might be coming true. Gurney is obsessed with revenge at any cost. The Baron is obsessed with retaking Arrakis. The Bene Gesserit are obsessed with regaining control of their schemes. Elvis is obsessed with proving his worth to his uncle.

Rebecca Ferguson plays Jessica as absolutely consumed with the twin desires for safety and for her son to reach his full potential, whatever the cost. She has a permanently crazed look in her eyes, and the movie keeps it ambiguous how much of that is really her, and how much is PTSD mixed with side-effects of that poison.

At first, Paul is a kid with no agency, and no particular obsessions. He’s upset, certainly, but he’s someone who’s adrift on other people’s manipulations, either overt or hidden. You get the sense that once they join up with the Fremen, he’d be happy to just do that forever. But once the spice starts to kick in, Timothée Chalamet plays him as a man desperately trying to avoid a future he can barely glimpse. When reality finally conspires to make that future inevitable, he decides the only way forward is to seize agency from everything and everyone around him, and from that point plays the part as a man possessed, half-crazed and desperate to wrestle himself and the people he cares about through the only path he can see that doesn’t lead to total disaster.

My favorite character was Zendaya’s Chani. Chani was, to put it mildly, a little undercooked in the book, and one of the movie’s most interesting and savvy changes is to make her the only character who isn’t obsessed with the future, and the only character who can clearly see “now”, a sort of reverse-Cassandra. While everyone else is consumed with plots and goals and Big Obsessions, she’s the only one who can see what the cost is going to be, what it already is. The heart of the movie is Zendaya finding new ways to express “this isn’t going to work out” or “oh shit” or “you have got to be fuckin’ kidding me” with just her face, as things get steadily more out of control around her. It’s an incredible performance.

Chani also sits at the center of the movie’s biggest change: the ending.

In the book, Chani and Jessica aren’t exactly friends, but they’re not opposed to each other. The story ends with Paul ascending to the Imperial throne, with the implicit assent of the Spacing Guild and a collective shrug from the other great houses, and the story’s point of view slides off him and onto the two women, as they commiserate over the fact that the men in their lives are formally married to other people, but “history will call us wives.”

Then, Dune Messiah opens a decade later after a giant war where the Fremen invaded the universe, and killed some billions of people. It’s not a retcon in the modern sense of the word exactly, but the shift from the seeming peaceful transition of power and “jihad averted” ending of the first book to the post-war wreckage of the opening of the second is a little jarring. Of course, Dune Messiah isn’t a novel so much as it’s 200 pages of Frank Herbert making exasperated noises and saying “look, what I meant was…”

Villeneuve knows how the second book starts, and more important, knows he’s going to make that the third movie, so he can steer into it in a way Herbert didn’t. So here, rather than vague allies, Jessica and Chani stand as opposing views on Paul’s future. The end of the film skips the headfake of a peaceful transition, and starts the galactic jihad against the houses opposed to Paul’s rule, and then the movie does the same POV shift to Chani that the book does, except now it’s her walking off in horror, the only person convinced that this will all end in flames and ruin. (Spoiler: she’s right.)

It’s a fascinating structure, to adapt one long book and its shorter sequel into a trilogy, with the not-quite-as-triumphant-as-it-looks ending of the first book now operating as (if you’ll forgive the comparison) an Empire Strikes Back–style cliffhanger.

It’s also a change that both excuses and explains the absence of the Spacing Guild from the movie; it’s much easier to light off a galactic war in one scene if there isn’t a monopoly on space travel with a vested interest in things staying calm.

Dune is a big, weird, overstuffed book. The prose is the kind that’s politely described as “functional” before you change the subject, it doesn’t really have a beginning, and the end kind of lurches to a halt mid-scene. (And it must be said that it is significantly better written than any of Herbert’s other works. Dune started life as a fixup of serialized short stories; the novel’s text implies the influence of either a strong editor or someone who gave a lot of productive feedback. Whatever the source, that influence wouldn’t show up for any of the sequels.) It’s a dense, talky book, with scene after scene of people expositing at each other, including both their conversation and their respective internal monologues.

Despite its flaws, it’s a great book, and a classic for a reason, mostly because whatever else you can say about it, Dune is a book absolutely fizzing with ideas.

This is a book with a culture where computers are outlawed because of a long-ago war against “Thinking Machines”, and a guild of humans trained from birth to replace computers. There are plenty of authors who would have milked that as a book on its own, here it gets treated as an aside, the name “Butlerian Jihad” only appearing in the appendix.

Taking that a step further, the guild of analytical thinking people are all men, and their counterpart guild—the Bene Gesserit—are the scheming concubine all-woman guild. And yeah, there’s some gender stereotypes there, but that’s also the point, it’s not hidden. They’re both “what if we took these stereotypes and just went all the way.”

The book is constantly throwing out new concepts and ideas, tripping over them as it runs to the next. Even the stock mid-century science fiction ideas get a twist, and we end up with things like: what if Asimov’s Galactic Empire was a little less “Roman” and a little more “Holy Roman”? And that’s before we get to the amount of world-building heavy-lifting done by phrases like “zensunni wanderers.”

And on top of all that, Herbert was clearly a Weird Guy (complimentary). The whole book is positively bubbling over with The Writer's Barely-Disguised Fetish, and while that would swamp the later books, here the weird stuff about politics or sex or religion mostly just makes the book more interesting—with a big exception around the weird (derogatory) homophobia.

And this is where I start a paragraph with “however”—However, most of those ideas don’t really pay off in a narratively compelling way. They’re mostly texture, which is fine in a sprawling, talky novel like Dune, but harder to spare room for in a movie, or even in two long ones.

As an example: Personal shields are another fun piece of texture to the setting, as well as artfully lampshading why this futuristic space opera has mostly melee combat, but they don’t really influence the outcome in a meaningful way. You can’t use them on Arrakis because they anger the worms, which sort of explains part of the combat edge the Fremen have, but then in the book it just sorta doesn’t come up again. The book never gives the Fremen a fighting style or weapons that take advantage of the fact their opponents don’t have shields but are used to having them. Instead, the Fremen are just the best fighters in the universe, shields or no shields, and use the same sorts of knives as everyone else.

The movies try to split the difference; shields are there, and we get the exposition scene at the start to explain how they work, but the actual fights don’t put a lot of effort in showing “the slow blade penetrates”, just sometimes you can force a blade through a shield and sometimes you can’t.

Visually, this does get gestured at in a few ways: those suspensor torpedoes that slow down and “tunnel” through the shields are a very cool deployment of the idea, and the second movie opens with a scene where a group of Harkonnens are picked apart by snipers but never think to take cover, because they usually don’t have to.

And this is how the movie—I think correctly—chooses to handle most of those kinds of world-building details. They’re there, but with the volume dialed way down. The various guilds and schools are treated the same way; Dr. Yueh turning traitor is unthinkable because he’s a trusted, loyal member of the house, and the Suk School conditioning is never mentioned, because it’s a detail that really doesn’t matter.

As someone who loves the book, it’s hard not to do a little Monday-morning quarterbacking on where the focus landed. I’d have traded the stuff at the Ecological Testing Station for the dinner with the various traders and local bigwigs, Count Fenring is much missed, and I’d have preferred the Spacing Guild was there. But it works. This isn’t a Tom Bombadil/Scouring of the Shire “wait, what did you think the book was about?” moment; they’re all sane & reasonable choices.

It turns out letting someone who doesn’t like dialogue adapt the book was the right choice, because the solution turned out to be to cut basically all of it, and let the story play out without the constant talking.

And this leads into the other interesting stylistic changes, which is that while Dune the book is deliriously weird, Dune the movies are not. Instead, they treat everything with total sincerity, and anything they can’t figure out how to ground they leave out.

I think this is a pretty savvy call for making a Dune in the Twenties. Most of the stuff that made Dune weird in the 60s has been normalized over the last few decades of post-Star Wars blockbusters, such that we live in a world where Ditko’s psychedelic Dr. Strange has starred in six different big budget movies, and one of the highest grossing movies of last year co-starred a talking tree and a cyborg raccoon. There’s no out-weirding that, the correct answer is, ironically, to take a cue from George Lucas and shoot it like it’s a documentary about a place that doesn’t exist.

So most of the movie, the fights, the worms, gets shot with total seriousness, and then Paul’s powers get visually reduced to the point where the movie is ambiguous about if he can really see the future or not. Even something as out-there-bananas as Alia is stripped down to the minimum, with the story’s timeline being compressed from multiple years to a couple of months so that we don’t have to figure out how to make a toddler with the mind of an adult work on the screen.

Which brings me to the last topic I want to cover here, which is that David Lynch’s Dune hangs over this movie like a shadow. It’s clear that everyone making this movie has seen that one. This is almost always to this movie’s benefit, both in terms of what’s there and what isn’t.

To wit: if anyone could have made something as very-specifically weird as “toddler with the mind of an adult” work, it was Lynch, and he didn’t, so the new movie stays clear. The look of both the Atreides and the Harkonnens owes more to the Lynch film than it does to the book, and there are any number of other aspects that feel like a direct response to that movie—either copy it, or get as far away as possible.

I picture Villeneuve with an effects pedal labeled “Lynch”, and he’d occasionally press on it.

I really, really liked these two movies. They’re far better than the Lynch film both as an adaptation of the book and as movies in their own right. But I really hope that pedal gets a little more of a workout in Dune Messiah.

You know, I really, really, really wanted to hear Christopher Walken say “Bring in that floating fat man—the baron!” I can hear it!

This means that the music video for Weapon of Choice is a prequel, right?

A final thought. Lynch’s Dune opens with Princess Irulan looking the camera dead in the eye and explaining the premise of the film, a sort of sci-fi Chorus asking for a muse of fire, but clunkier. Denis Villeneuve’s first part—correctly—does away with all that and just starts the movie.

Before this second movie came out, I joked that the real power move would be to open this film with Irulan narrating (“The beginning was a dangerous time”) to act as the ‘previously on Dune’ recap.

Reader, you cannot possibly imagine my surprise and delight when that actually happened.

Gabriel L. Helman

Year in review, self-linkblog-edition

A long time ago (longer now than it seems), I used to write a lot. Flash forward a few lifetimes: I sat down at the end of last year to try and do something a little more complex than normal, and realized that I’d basically forgotten how to write anything other than technical specs or slack messages.

I needed to get back into shape, and to do that I needed some practice. A lot of practice.

So, I triggered the Genesis Device on this URL and relit the blog as an explicit project to re-teach myself how to write. I had a vague goal of doing about two pieces a week, focused on covering tech and pop culture; maybe Tuesdays and Thursdays, respectively?

Well, it’s been a year. How’d it go?

Including this one, I ended up with 117 pieces, which is just over two a week, but a cursory glance at the archives reveals very very few times I ever hit exactly two a week, and certainly never Tech Tuesday / Pop Culture Thursday.

The schedule wasn’t the actual goal though, the goal was to just keep playing the scales and see where the wind took me. I ended up just north of 110,000 words for the year, which is a lot more than I would have done without the blog.

It really took a while to get going. I’m a much slower writer than I used to be (or at least, than I remember being), so one of the hardest parts has been finding the time to actually do the writing, and then building a habit around it. The other big thing I had to re-learn was how to actually finish things.

By the end of the summer I was posting little things on a fairly regular basis, but had a deep backlog of half-finished drafts. As a piece of self-deprecating humor, the tag for “the big ones” was for pieces that stayed stuck in draft form for more than a month. For example, the first draft of Fully Automated Insults to Life Itself had a file date in February, the first draft of Fractals was at the start of July, and 2023’s strange box office was half-done before Barbenheimer even came out. (They all turned out pretty well, I think?)

But! The rust finally started to come off for real as I was recovering from COVID in October, I refocused on actually finishing things in the backlog at the start of November, and as of this post, the drafts folder is empty.

(And I’m not going to say there’s a direct correlation with the blog output hitting a groove and twitter imploding, but that’s not a total coincidence either, you know?)

My biggest surprise has been that I was expecting to do a lot more tech & software engineering writing, and that didn’t end up being where the inspiration flowed. Eyeing my tag stats, I have 66 pieces for pop culture, and 42 for tech; hey, that’s a fun number to hit, but I was expecting the ratio to go the other way.

In roughly chronological order, here’s some of my favorite pieces I did this year.

On the tech side of the house:

And on the pop culture side:

All that said, I don’t know if it was the best, but my absolutely favorite piece I did this year was Fractals.

A few other stray observations.

You can really spot the period in late summer/fall where I wasn’t getting enough sleep and the blog got extremely grouchy. Favorite Programming Language Features: Swift’s Exception handling with Optionals was the result of me realizing I had written way too many grouchy posts in a row and telling myself, “go write about something you like! Anything!” I’m not sure it’s obvious that the blog has gotten less grouchy since then, but that’s the point where I started paying attention.

Meanwhile, my posts with the most traffic were:

All of which popped off on one search engine or another; I was Google’s number 2 hit for “enshittification curve” for a bit over the summer, so that was exciting.

Like many, many podcast listeners, I’ve been a (mostly?) happy Squarespace customer for many years. This project has really stretched what their platform is good at, though; it’s great for infrequently updated sites—restaurants, small businesses, portfolios, and the like—but daily blogs with complex formatting are outside their wheelhouse, to say the least. I spent the year slowly realizing I was trying to recreate WordPress inside Squarespace’s editor, and “hmmmm”.

I have a few loose pieces queued up for the start of the year, but then I think I’m gonna pump the brakes a little. I’ve got a few longer-form things I want to try and do next year, so I’m going to see if I can redirect this habit I’ve built in a slightly different way.

So Happy New Year, everyone! This year was pretty good, all things considered. Let’s make the next one even better.

Gabriel L. Helman

Fully Automated Insults to Life Itself

In 20 years time, we’re going to be talking about “generative AI”, in the same tone of voice we currently use to talk about asbestos. A bad idea that initially seemed promising which ultimately caused far more harm than good, and that left a swathe of deeply embedded pollution across the landscape that we’re still cleaning up.

It’s the final apotheosis of three decades of valuing STEM over the Humanities, in parallel with the broader tech industry being gutted and replaced by a string of venture-backed pyramid schemes, casinos, and outright cons.

The entire technology is utterly without value and needs to be scrapped, legislated out of existence, and the people involved need to be forcibly invited to find something better to spend their time on. We’ve spent decades operating under the unspoken assumption that just because we can build something, that means it’s inevitable and we have to build it first before someone else does. It’s time to knock that off, and start asking better questions.

AI is the ultimate form of the joke about the restaurant where the food is terrible and also the portions are too small. The technology has two core problems, both of which are intractable:

  1. The output is terrible
  2. It’s deeply, fundamentally unethical

Probably the definite article on generative AI’s quality, or profound lack thereof, is Ted Chiang’s ChatGPT Is a Blurry JPEG of the Web; that’s almost a year old now, and everything that’s happened in 2023 has only underscored his points. Fundamentally, we’re not talking about vast cyber-intelligences, we’re talking Sparkling Autocorrect.

Let me provide a personal anecdote.

Earlier this year, a coworker of mine was working on some documentation, and had worked up a fairly detailed outline of what needed to be covered. As an experiment, he fed that outline into ChatGPT, intending to publish the output, and I offered to look over the result.

At first glance it was fine. Digging in, though, it wasn’t great. It wasn’t terrible either—nothing in it was technically incorrect, but it had the quality of a high school book report written by someone who had only read the back cover. Or like documentation written by a tech writer who had a detailed outline they didn’t understand and a word count to hit? It repeated itself, it used far too many words to cover very little ground. It was, for lack of a better word, just kind of a “glurge”. Just room-temperature tepidarium generic garbage.

I started to jot down some editing notes, as you do, and found that I would stare at a sentence, then the whole paragraph, before crossing the paragraph out and writing “rephrase” in the margin. To try and be actually productive, I took a section and started to rewrite it in what I thought was a better, more concise manner—removing duplicates, omitting needless words. De-glurgifying.

Of course, I discovered I had essentially reconstituted the outline.

I called my friend back and found the most professional possible way to tell him he needed to scrap the whole thing and start over.

It left me with a strange feeling, that we had this tool that could instantly generate a couple thousand words of worthless text that at first glance seemed to pass muster. Which is so, so much worse than something written by a junior tech writer who doesn’t understand the subject, because this was produced by something that you can’t talk to, you can’t coach, that will never learn.

On a pretty regular basis this year, someone would pop up and say something along the lines of “I didn’t know the answer, and the docs were bad, so I asked the robot and it wrote the code for me!” and then they would post some screenshots of ChatGPT’s output full of a terribly wrong answer. Humane’s AI Pin demo was full of wrong answers, for heaven’s sake. And so we get this trend where ChatGPT manages to be an expert in things you know nothing about, but a moron about things you’re an expert in. I’m baffled by the responses to the GPT-n “search” “results”; they’re universally terrible and wrong.

And this is all baked into the technology! It’s a very, very fancy set of pattern recognition based on a huge corpus of (mostly stolen?) text, computing the most probable next word, but not in any way considering whether the answer might be correct. Because it has no way to; that’s totally outside the bounds of what the system can achieve.

A year and a bit later, and the web is absolutely drowning in AI glurge. Clarkesworld had to suspend submissions for a while to get a handle on blocking the tide of AI garbage. Page after page of fake content with fake images, content no one ever wrote and only meant for other robots to read. Fake articles. Lists of things that don’t exist, recipes no one has ever cooked.

And we were already drowning in “AI” and “machine learning” glurge, and it all sucks. The autocorrect on my phone got so bad when they went from the hard-coded list to the ML one that I had to turn it off. Google’s search results are terrible. The “we found this answer for you” thing at the top of the search results is terrible.

It’s bad, and bad by design, it can’t ever be more than a thoughtless mashup of material it pulled in. Or even worse, it’s not wrong so much as it’s all bullshit. Not outright lies, but vaguely truthy-shaped “content”, freely mixing copied facts with pure fiction, speech intended to persuade without regard for truth: Bullshit.

Every generated image would have been better and funnier if you gave the prompt to a real artist. But that would cost money—and that’s not even the problem, the problem is that would take time. Can’t we just have the computer kick something out now? Something that looks good enough from a distance? If I don’t count the fingers?

My question, though, is this: what future do these people want to live in? Is it really this? Swimming in a sea of glurge? Just endless mechanized bullshit flooding every corner of the Web? Who looked at the state of the world here in the Twenties and thought “what the world needs right now is a way to generate Infinite Bullshit”?

Of course, the fact that the results are terrible-but-occasionally-fascinating obscures the deeper issue: It’s a massive plagiarism machine.

Thanks to copyleft and free & open source, the tech industry has a pretty comprehensive—if idiosyncratic—understanding of copyright, fair use, and licensing. But that’s the wrong model. This isn’t about “fair use” or “transformative works”, this is about Plagiarism.

This is a real “humanities and the liberal arts vs technology” moment, because STEM really has no concept of plagiarism. Copying and pasting from the web is a legit way to do your job.

(I mean, stop and think about that for a second. There’s no other industry on earth where copying other people’s work verbatim into your own is a widely accepted technique. We had a sign up a few jobs back that read “Expert level copy and paste from stack overflow” and people would point at it when other people had questions about how to solve a problem!)

We have this massive cultural disconnect that would be interesting or funny if it wasn’t causing so much ruin. This feels like nothing so much as the end result of valuing STEM over the Humanities and Liberal Arts in education for the last few decades. Maybe we should have made sure all those kids we told to “learn to code” also had some, you know, ethics? Maybe had read a couple of books written since they turned fourteen?

So we land in a place where a bunch of people convinced they’re the princes of the universe have sucked up everything written on the internet and built a giant machine for laundering plagiarism; regurgitating and shuffling the content they didn’t ask permission to use. There’s a whole end-state libertarian angle here too; just because it’s not explicitly illegal, that means it’s okay to do it, ethics or morals be damned.

“It’s fair use!” Then the hell with fair use. I’d hate to lose the wayback machine, but even that respects robots.txt.

I used to be a hard core open source, public domain, fair use guy, but then the worst people alive taught a bunch of if-statements to make unreadable counterfeit Calvin & Hobbes comics, and now I’m ready to join the Butlerian Jihad.

Why should I bother reading something that no one bothered to write?

Why should I bother looking at a picture that no one could be bothered to draw?

Generative AI and its ilk are the final apotheosis of the people who started calling art “content”, and meant it.

These are people who think art or creativity are fundamentally a trick, a confidence game. They don’t believe or understand that art can be about something. They utterly reject the concept of “about-ness”; the basic concept of “theme” is beyond their comprehension. The idea that art might contain anything other than its most surface qualities never crosses their mind. The sort of people who would say “Art should soothe, not distract”. It’s entirely surface aesthetic over anything else.

(To put that another way, these are the same kind of people who vote Republican but listen to Rage Against the Machine.)

Don’t respect or value creativity.

Don’t respect actual expertise.

Don’t understand why they can’t just have what someone else worked for. It’s even worse than not wanting to pay for it: these creatures actually think they’re entitled to it for free because they know how to parse a JSON file. It feels like the final end-point of a certain flavor of free software thought: no one deserves to be paid for anything. A key cultural and conceptual step past “information wants to be free” and “everything is a remix”. Just a machine that endlessly spits out bad copies of other work.

They don’t understand that these are skills you can learn, that you have to work at, become an expert in. Not one of these people who spend hours upon hours training models or crafting prompts ever considered using that time to learn how to draw. Because if someone else can do it, they should get access to that skill for free, with no compensation or even credit.

This is why those machine generated Calvin & Hobbes comics were such a shock last summer; anyone who had understood a single thing about Bill Watterson’s work would have understood that he’d be utterly opposed to something like that. It’s difficult to fathom someone who liked the strip enough to do the work to train up a model to generate new ones while still not understanding what it was about.

“Consent” doesn’t even come up. These are not people you should leave your drink uncovered around.

But then you combine all that with the fact that we have a whole industry of neophiles, desperate to work on something New and Important, terrified their work might have no value.

(See also: the number of abandoned javascript frameworks that re-solve all the problems that have already been solved.)

As a result, tech has an ongoing issue with cool technology that’s a solution in search of a problem, but ultimately is only good for some kind of grift. The classical examples here are the blockchain, bitcoin, NFTs. But the list is endless: so-called “4th generation languages”, Rational Rose, the CueCat, basically anything that ever got put on the cover of Wired.

My go-to example is usually bittorrent, which seemed really exciting at first, but turned out to only be good at acquiring TV shows that hadn’t aired in the US yet. (As they say, “If you want to know how to use bittorrent, ask a Doctor Who fan.”)

And now generative AI.

There’s that scene at the end of Fargo, where Frances McDormand is scolding The Shoveler for “all this for such a tiny amount of money”, and that’s how I keep thinking about the AI grift carnival. So much stupid collateral damage we’re gonna be cleaning up for years, and it’s not like any of them are going to get Fuck You(tm) rich. No one is buying an island or founding a university here, this is all so some tech bros can buy the deluxe package on their next SUV. At least crypto got some people rich, and was just those dorks milking each other; here we all gotta deal with the pollution.

But this feels weirdly personal in a way the dunning-krugerrands were not. How on earth did we end up in a place where we automated art, but not making fast food, or some other minimum wage, minimum respect job?

For a while I thought this was something along the lines of one of the asides in David Graeber’s Bullshit Jobs, where people with meaningless jobs hate it when other people have meaningful ones. The phenomenon of “If we have to work crappy jobs, we want to pull everyone down to our level, not pull everyone up”. See also: “waffle house workers shouldn’t make 25 bucks an hour”, “state workers should have to work like a dog for that pension”, etc.

But no, these are not people with “bullshit jobs”, these are upper-middle class, incredibly comfortable tech bros pulling down a half a million dollars a year. They just don’t believe creativity is real.

But because all that apparently isn’t fulfilling enough, they make up ghost stories about how their stochastic parrots are going to come alive and conquer the world, how we have to build good ones to fight the bad ones, but they can’t be stopped because it’s inevitable. Breathless article after article about whistleblowers worried about how dangerous it all is.

Just the self-declared best minds of our generation failing the mirror test over and over again.

This is usually where someone says something about how this isn’t a problem and we can all learn to be “prompt engineers”, or “advisors”. The people trying to become a prompt advisor are the same sort who would be proud they convinced Immortan Joe to strap them to the back of the car instead of the front.

This isn’t about computers, or technology, or “the future”, or the inevitability of change, or the march of progress. This is about what we value as a culture. What do we want?

“Thus did a handful of rapacious citizens come to control all that was worth controlling in America. Thus was the savage and stupid and entirely inappropriate and unnecessary and humorless American class system created. Honest, industrious, peaceful citizens were classed as bloodsuckers, if they asked to be paid a living wage. And they saw that praise was reserved henceforth for those who devised means of getting paid enormously for committing crimes against which no laws had been passed. Thus the American dream turned belly up, turned green, bobbed to the scummy surface of cupidity unlimited, filled with gas, went bang in the noonday sun.” ― Kurt Vonnegut, God Bless You, Mr. Rosewater

At the start of the year, the dominant narrative was that AI was inevitable, this was how things are going, get on board or get left behind.

That’s… not quite how the year went?

AI was a centerpiece in both Hollywood strikes, and both the Writers and Actors basically ran the table, getting everything they asked for, and enshrining a set of protections from AI into a contract for the first time. Excuse me, not protection from AI, but protection from the sort of empty suits that would use it to undercut working writers and performers.

Publisher after publisher has been updating their guidelines to forbid AI art. A remarkable number of other places that support artists instituted guidelines to ban or curtail AI. Even Kickstarter, which plunged into the blockchain with both feet, seemed to have learned their lesson and rolled out some pretty stringent rules.

Oh! And there’s some actual high-powered lawsuits bearing down on the industry, not to mention investigations of, shall we say, “unsavory” material in the training sets?

The initial shine seems to be off; where last year was all about sharing goofy AI-generated garbage, there’s been a real shift in the air as everyone gets tired of it and starts pointing out that it sucks, actually. And that the people still boosting it all seem to have some kind of scam going. Oh, and in a lot of cases, it’s literally the same people who were hyping blockchain a year or two ago, and who seem to have found a new use for their warehouses full of GPUs.

One of the more heartening and interesting developments this year was the (long overdue) start of a re-evaluation of the Luddites. Despite the popular stereotype, they weren’t anti-technology, but anti-technology-being-used-to-disenfranchise-workers. This seems to be the year a lot of people sat up and said “hey, me too!”

AI isn’t the only reason “hot labor summer” rolled into “eternal labor september”, but it’s pretty high on the list.

There’s an argument that’s sometimes made that we don’t have any way as a society to throw away a technology that already exists, but that’s not true. You can’t buy gasoline with lead in it, or hairspray with CFCs, and my late lamented McDLT vanished along with the Styrofoam that kept the hot side hot and the cold side cold.

And yes, asbestos made a bunch of people a lot of money and was very good at being children’s pyjamas that didn’t catch fire, as long as that child didn’t need to breathe as an adult.

But, we've never done that for software.

Back around the turn of the century, there was some argument around whether cryptography software should be classified as a munition. The Feds wanted stronger export controls, and there was a contingent of technologists who thought, basically, “Hey, it might be neat if our compiler had first and second amendment protection”. Obviously, that didn’t happen. “You can’t regulate math! It’s free expression!”

I don’t have a fully developed argument on this, but I’ve never been able to shake the feeling like that was a mistake, that we all got conned while we thought we were winning.

Maybe some precedent for heavily “regulating math” would be really useful right about now.

Maybe we need to start making some.

There’s been a persistent belief in computer science, ever since computers were invented, that the brain is just a really fancy, powerful computer, and if we can just figure out how to program it, intelligent robots are right around the corner.

There’s an analogy that floats around that says if the human mind is a bird, then AI will be a plane: flying, but a very different application of the same principles.

The human mind is not a computer.

At best, AI is a paper airplane. Sometimes a very fancy one! With nice paper and stickers and tricky folds! But the key is that a hand has to throw it.

The act of a person looking at a bunch of art and trying to build their own skills is fundamentally different than a software pattern recognition algorithm drawing a picture from pieces of other ones.

Anyone who claims otherwise has no sense of creativity except as an abstract concept. The creative impulse is fundamental to the human condition. Everyone has it. In some people it’s repressed, or withered, or undeveloped, but it’s always there.

Back in the early days of the pandemic, people posted all these stories about the “crazy stuff they were making!” It wasn’t crazy, that was just the urge to create, it’s always there, and capitalism finally got quiet enough that you could hear it.

“Making Art” is what humans do. The rest of society is there so we stay alive long enough to do so. It’s not the part we need to automate away so we can spend more time delivering value to the shareholders.

AI isn’t going to turn into skynet and take over the world. There won’t be killer robots coming for your life, or your job, or your kids.

However, the sort of soulless goons who thought it was a good idea to computer automate “writing poetry” before “fixing plumbing” are absolutely coming to take away your job, turn you into a gig worker, replace whoever they can with a chatbot, keep all the money for themselves.

I can’t think of anything more profoundly evil than trying to automate creativity and leaving humans to do the grunt manual labor.

Fuck those people. And fuck everyone who ever enabled them.

Gabriel L. Helman

Doctor Who and The Church on Ruby Road

When the first trailer for “The Church on Ruby Road” aired, opening as it did with a shot of the new Doctor dancing in a nightclub, I saw someone online react with something along the lines of “why would a thousand year old Time Lord go dancing?”

To this, I had a very strong two-part reaction, namely:

  1. I think you mean “billion”, not “thousand”
  2. My nightclub days are long, long behind me, but if I woke up looking like Ncuti Gatwa, you couldn’t drag me out of them

But this grouchy internet person made an interesting point, albeit accidentally: there’s a solid sub-genre of Doctor Who where the story opens with the Doctor already in the middle of something, and I can’t remember there ever being one where that “something” was “having fun”.

Taken entirely on its own, “The Church on Ruby Road” is an absolute delight. Just fun from beginning to end. The stakes are never that high, and the plot is a slender thing, but that’s the point; we’re here to launch the two new leads and set up the show going forward. And be as Christmasy as possible while doing it.

From the moment he pops onto screen, here in his first real episode, Ncuti Gatwa makes it clear why he got the part; playing a character that’s unquestionably the Doctor, but a different model than we’ve ever seen before.

The script does a lot of heavy lifting for him, giving him a series of, if you will, “median value” Doctor Who moments, and letting him show off his spin on them. He gets two scenes—where he tells the police officer that his girlfriend is going to say yes, and then later when he compares “real time travellers” to whatever the goblins are doing—that are practically Doctor Who audition pieces (and I’d be surprised if at least one of them wasn’t literally one). You can close your eyes and hear how any of his predecessors would have done either of those scenes. Gatwa manages to land a take on both that’s utterly unlike how any of the other actors would have done it, but also unmistakably the Doctor.

And, mind you, this is after being introduced in a scene doing something no other Doctor would do—that seems custom designed to stroke out anyone left watching who’s been complaining about “woke doctor who”—and then immediately snaps into frame and Doctors the hell out of his first scene with Millie Gibson.

Millie Gibson’s Ruby Sunday, on the other hand, is a little harder to get a read on, mostly landing on “high energy” and being unflappable. Frankly, introducing the character by having her recite her life story in a literal TV interview feels a little—not lazy, exactly, but impatient? Mostly, she’s there to have stuff happen to, which is a little unfortunate. Her big moment comes at the end of the big song set-piece—the Goblin Song was heavily promoted ahead of time, but of course that turned out to be a headfake to cover the fact that we’ve got a Doctor that can sing now too, and then that turned into the reveal that we’ve also got a companion that can.

“Can ad-lib lyrics to a goblin song while trapped in a sky ship” isn’t the strongest character premise, but it’s a pretty solid start.

The goblins themselves, meanwhile, feel like exactly the kind of move you do when you have a new potential audience and you want to make sure they know “hey, this isn’t star wars.” Doctor Who has always worked better when it knows it’s science fantasy instead of science fiction, and musical steampunk goblins feels like a real statement of purpose. Plus, a solid use of that extra Disney+ money.

The ending is a little clunky? The brief riff on It’s a Wonderful Life and closing the time travel loop both feel a beat too short and easy, and the look on Gatwa’s face as he watches the figure that dropped off the baby walk away is “I could find out now, but I guess I’ll save it for the season finale.”

And then, Ruby runs downstairs and boards the Tardis because… the episode is over? Even the bigger-on-the-inside scene is swallowed so that the Mysterious Neighbor can break the fourth wall.

It’s clunky, but what’s funny is that it’s clunky in exactly the same way “Rose” was.

Russell Davies is now in the unique position of having written the introductions for three Doctors and four companions, which puts him solidly in the forefront versus anyone else who’s worked on the show.

(Okay, anorak time: Prior to this, RTD and Moffat had both done two Doctors. Terrance Dicks was involved with two—3rd and 4th—but script-edited one and wrote the other. JNT was the producer for three new Doctors—5th, 6th, and 7th—but had a different script editor and writer each time. Moffat did four companions if we include Rory, which we do. If I’m counting off the top of my head right, JNT hired seven companions, but again with different creative staff nearly every time.)

So how does this compare to his other two?

The first time—“Rose”—was a full reset of the show, assuming that the vast majority of the audience had never seen the old show. That episode spent a lot of time setting up the “Rose Tyler Show” so that the Doctor could crash into it.

The second time—“The Christmas Invasion”—was mostly a character piece about existing main character Rose Tyler reacting to her friend changing, and then David Tennant swaggers in with ten minutes to go and takes over the show.

This doesn’t resemble either of those so much as it does “The Eleventh Hour” in that it has to introduce a whole new cast and serve as a jumping-on point, but assumes that most of the audience already knows the score.

Besides, the “swagger in and steal the show” scene came two weeks ago, this is more worried about getting on with it and showing what the show is going to be like going forward.

RTD has an interesting tic where the Tardis is sidelined for a companion’s first story, and then the story ends with “all that and also a time machine!” “Rose” gets the Tardis involved earlier, but doesn’t time travel, while both “Smith and Jones” and “Partners in Crime” leave it to the end. (“The Runaway Bride” has a lot of Tardis, but, like “Rose”, obscures its more unique features.)

Compare that to “The Eleventh Hour” or “The Pilot”, where the fact that it’s a time machine factors heavily into Amy/Bill’s first encounter.

“Rose” was pretty deliberately designed as part of a triptych with “The End of the World” and “The Unquiet Dead”; that first part ends with her running towards the Police Box, and most of the “Tardis Stuff” gets handled at the start of the second; that is, other than the big “bigger on the inside” beat halfway through “Rose”.

“The Church on Ruby Road” kind of awkwardly straddles the middle. The perfunctory ending would play a lot better if the next episode was next week instead of in 4 months. And the show spends a lot more time setting up the mystery about Ruby’s birth than exploring what her life is like now, and why she’s willing to run off with the Doctor at the end other than a vague sense of “waiting for her life to start” malaise.

But, having typed all that out now, I actually think that’s pretty savvy. “Rose” was about pulling in a whole new audience. “The Christmas Invasion” and “The Eleventh Hour” were about telling that existing audience not to worry, it’s still the same show.

“The Church on Ruby Road” is doing something new, it’s trying to get the old audience back. It’s no secret that the ratings, however you measure them, have been in a slow but steady decline since the 50th anniversary. These four 2023 specials aren’t really about attracting new people; their job is to reel back in all those people who were watching in 2008 by saying “that show you like is back in style”.

Much like how “The Eleventh Hour” accidentally became the jumping-on point for everyone in the US who discovered the show on BBC America, this might be that for a next generation of Disney+ first time viewers, but: no. Those people all clicked “Special 1” instead of “Special 4” and discovered the show with “The Star Beast.”

Historically, the closest analogue to what the show is doing here is “Remembrance of the Daleks”, but in a parallel universe where they had bothered to tell anyone that the show was about to be better than it had maybe ever been.

So, this can get away with having Millie Gibson pop onto screen, deliver her character brief directly to the camera, and the audience goes “got it, new companion. So about those goblins from the trailer?”

Something that does come through clearly is that RTD has been watching the show since he left. Ruby Sunday doesn’t feel like anything so much as Davies looking at Clara and thinking “ooh, I’ll have one of those, please”. And the casual use of time travel in the way the Doctor goes back in time to make sure the baby gets where it needs to be isn’t something that really entered the show’s vocabulary until Moffat took over.

And, after having Neil Patrick Harris look the fanbase directly in the eye and say, essentially, “you can’t trust anything the Master said about the Doctor’s origins”, he picks out the most interesting nugget—that the Doctor might be adopted—and runs with it.

Love that “mavity” is going to be a running thing.

There’s a long running fan “tradition” of breathlessly claiming any mysterious character is the return of the Rani/Romana/Drax/Susan, etc. That last shot seems to be there specifically to wind those people up, but okay, I’ll play along. I think Mrs. Flood is going to turn out to be… K'anpo.

What would you do if you woke up, and you were young, and beautiful, and all the pain was gone? You still had your memories, you’re still the same person, but healed?

How great would that be?

One of the big innovations when the show came back in 2005 was to massively expand the emotional palette. Now, this is as much a ding on the old show as it’s a compliment to the new one; the old show went off the air less than four months before Twin Peaks started, which is a remarkable demonstration of how behind-the-times the show had gotten. Expanding the emotional palette was less an innovation and more admitting that there are other shows on TV.

But, the upshot was the main character was suddenly allowed to have actual feelings for the first time, which tremendously widened the scope of what kinds of stories the show could tell.

The last time RTD rebooted the show, the character and the show had both been through some stuff. The Time War was pretty explicitly a metaphor for the show’s cancellation; and both the character and the show were pretty angsty about everything that had happened since we saw them last.

Now, almost two decades later he’s rebooting it again, and both the show and the character have been through even more stuff. It’s been a weird time! But now, the character and the show’s reaction is to just be glad to be here, thrilled to be alive. That feels like an older and wiser reaction.

Here in 2023, having an angst-filled tortured main character feels positively old-fashioned. Instead, now we’ve got one that seems motivated more by joy and raw enthusiasm.

Good to see you, Doctor. Glad you’re back. Roll on the future.

Gabriel L. Helman

Good Adaptations and the Lord of the Rings at 20 (and 68)

What makes a good book-to-movie adaptation?

Or to look at it the other way, what makes a bad one?

Books and movies are very different mediums and therefore—obviously—are good at very different things. Maybe the most obvious difference is that books are significantly more information-dense than movies are, so any adaptation has to pick and choose what material to keep.

The best adaptations, though, are the ones that keep the themes and characters—what the book is about—and move around, eliminate, or combine the incidents of the plot to support them. The most successful, like Jaws or Jurassic Park for example, are arguably better than their source material, jettisoning extraneous sideplots to focus on the main concepts.

Conversely, the worst adaptations are the ones that drop the themes and change the point of the story. Stephen King somewhat famously hates the movie version of The Shining because he wrote a very personal book about his struggle with alcoholism disguised as a haunted hotel story, and Kubrick kept the ghosts but not the rest. The movie version of The Hitch-Hiker’s Guide to the Galaxy was made by people who thought the details of the plot were more important than the jokes, rather than the other way around, and didn’t understand why the Nutrimat was bad.

And really, it’s the themes, the concepts, the characters, that make stories appeal to us. It’s not the incidents of the plot we connect to, it’s what the story is about. That’s what we make the emotional connection with.

And this is part of what makes a bad adaptation so frustrating.

While the existence of a movie doesn’t erase the book it was based on, it’s a fact that movies have higher profiles, reach bigger audiences. So it’s terribly disheartening to have someone tell you they watched a movie based on that book you like that they didn’t read, when you know all the things that mattered to you didn’t make it into the movie.

And so we come to The Lord of the Rings! The third movie, Return of the King turned 20 this week, and those movies are unique in that you’ll think they’re either a fantastic or a terrible adaptation based on which character was your favorite.

Broadly speaking, Lord of the Rings tells two stories in parallel. The first, is a big epic fantasy, with Dark Lords, and Rings of Power, and Wizards, and Kings in Exile. Strider is the main character of this story, with a supporting cast of Elves, Dwarves, and Horse Vikings. The second is a story about some regular guys who are drawn into a terrifying and overwhelming adventure, and return home, changed by the experience. Sam is the main character of the second story, supported by the other Hobbits.

(Frodo is an interestingly transgressive character, because he floats between the two stories, never committing to either. But that’s a whole different topic.)

And so the book switches modes based on which characters are around. The biggest difference between the modes is the treatment of the Ring. When Strider or Gandalf or any other character from the first story are around, the Ring is the most evil thing in existence—it has to be. So Gandalf refuses to take it, Galadriel recoils, it’s a source of unstoppable corruption.

But when it’s just the Hobbits, things are different. That second story is both smaller and larger at the same time—constantly cutting the threat of the Ring off at the knees by showing that there are larger and older things than the Ring, and pointing out that it’s the small things that really matter. So Tom Bombadil is unaffected, Faramir gives it back without temptation, Sam sees the stars through the clouds in Mordor. There are greater beauties and greater powers than some artifact could ever be.

This is, to be clear, not a unique structure. To pull an obvious example, Star Wars does the same thing, paralleling Luke’s kid from the sticks leaving home and growing into his own person with the epic struggle for the future of the entire galaxy between the Evil Galactic Empire and the Rebel Alliance. In keeping with that movie’s clockwork structure, Lucas manages to have the climax of both stories be literally the exact same moment—Luke firing the torpedoes into the exhaust port.

Tolkien is up to something different however, and climaxes his two stories fifty pages apart. The Big Fantasy Epic winds down, and then the cast reduces to the Hobbits again and they go home, where they have to use everything they’ve learned to solve their own problems instead of helping solve somebody else’s.

In my experience, everyone connects more strongly with one of the two stories. That tends to boil down to who your favorite character is—Strider or Sam. Just about everyone picks one of those two as their favorite. It’s like Elvis vs. The Beatles; most people like both, but everyone has a preference.

(And yeah, there’s always some wag that says Boromir/The Who.)

Just to put all my cards on the table, my favorite character is Sam. (And I prefer The Beatles.)

Based on how the beginning and end of the books work, it seems clear that Tolkien thought of that story—the regular guys being changed by the wide world story—as the “main one”, and the Big Epic was there to provide a backdrop.

There’s an entire cottage industry of people explaining what “Tolkien really meant” in the books, and so there’s not a lot of new ground to cover there, so I’ll just mention that the “regular dudes” story is clearly the one influenced—not “based on”, but influenced—by his own WWI experiences and move on.

Which brings us back to the movies.

Even with three very long movies, there’s a lot more material in the books than could possibly fit. And, there’s an awful lot of things that are basically internal or delivered through narration that need dramatizing in a physical way to work as a film.

So the filmmakers made the decision to adapt only that first story, and jettison basically everything from the second.

This is somewhat understandable? That first story has all the battles and orcs and wargs and wizards and things. That second story, if you’re coming at it from the perspective of trying to make an action movie, is mostly Sam missing his garden? From a commercial point of view, it’s hard to fault the approach. And the box office clearly agreed.

And this perfectly explains all the otherwise bizarre changes. First, anything that undercuts the Ring has to go. So, we cut Bombadil and everything around him for time, yes, but also we can’t have a happy guy with a funny hat shake off the Ring in the first hour before Elrond has even had a chance to say any of the spooky lines from the trailer. Faramir has to be a completely different character with a different role. Sam and Frodo’s journey across the plains of Mordor has to play differently, because the whole movie has to align on how terrible the Ring is, and no stars can peek through the clouds to give hope, no pots can clatter into a crevasse to remind Sam of home. Most maddeningly, Frodo has to turn on Sam, because the Ring is all-powerful, and we can’t have an undercurrent showing that there are some things even the Ring can’t touch.

In the book, Sam is the “hero’s journey” character. But, since that whole story is gone, he gets demoted to comedy sidekick, and Aragorn is reimagined into that role, and as such needs all the trappings of the Hero with a Thousand Faces retrofitted on to him. Far from the confident, legendary superhero of the books, he’s now full of doubt, and has to Refuse the Call, have a mentor, cross A Guarded Threshold, suffer ordeals, because he’s now got to shoulder a big chunk of the emotional storytelling, instead of being an inspirational icon for the real main characters.

While efficient, this all has the effect of pulling out the center of the story—what it’s about.

It’s also mostly crap, because the grafted-on hero’s journey stuff doesn’t fit well. Meanwhile, one of the definitive Campbell-style narratives is lying on the cutting room floor.

One of the things that makes Sam such a great character is his stealth. He’s there from the very beginning, present at every major moment, an absolutely key element in every success, but the book keeps him just out of focus—not “off stage”, but mostly out of the spotlight.

It’s not until the last scene—the literal last line—of the book that you realize that he was actually the main character the whole time, you just didn’t notice.

The hero wasn’t the guy who became King, it was the guy who became mayor.

He’s why my laptop bag always has a coil of rope in the side pocket—because you’ll want it if you don’t have it.

(I also keep a towel in it, because it’s a rough universe.)

And all this is what makes those movies so terribly frustrating—because they are an absolutely incredible adaptation of the Epic Fantasy parts. Everything looks great! The design is unbelievable! The acting, the costumes, the camera work. The battles are amazing. Helm’s Deep is one of those truly great cinematic achievements. My favorite shot in all three movies—and this is not a joke—is the shot of the orc with the torch running towards the piled up explosives to breach the Deeping Wall like he’s about to light the Olympic torch. And, in the department of good changes, the cut down speech Theoden gives in the movie as they ride out to meet the invaders—“Ride for ruin, Ride for Rohan!”—is an absolutely incredible piece of filmmaking. The Balrog! And, credit where credit is due, everything with Boromir is arguably better than in the book, mostly because Sean Bean makes the character into an actual character instead of a walking skepticism machine.

So if those parts were your jam, great! Best fantasy movies of all time! However, if the other half was your jam, all the parts that you connected to just weren’t there.

I’m softer on the “breakdancing wizards” fight from the first movie than a lot of fellow book purists, but my goodness do I prefer Gandalf’s understated “I liked white better,” over Magneto yelling about trading reason for madness. I understand wanting to goose the emotion, but I think McKellen could have made that one sing.

There’s a common complaint about the movie that it “has too many endings.” And yeah, the end of the movie version of Return of the King is very strange, playing out a whole series of what amount to head-fake endings and then continuing to play for another half an hour.

And the reason is obvious—the movie leaves the actual ending out! The actual ending is the Hobbits returning home and using everything they’ve learned to save the Shire; the movie cuts all that, and tries to cobble a resolution out of the intentionally anti-climactic falling action that’s supposed to lead into that.

Lord of the Rings: the Movie, is a story about a D&D party who go on an exciting grueling journey to destroy an evil ring, and then one of them becomes the King. Lord of the Rings: the Book, is a story about four regular people who learn a bunch of skills they don’t want to learn while doing things they don’t want to do, and then come home and use those skills to save their family and friends.

I know which one I prefer.

What makes a good adaptation? Or a bad one?

Does it matter if the filmmaker’s are on the same page as the author?

What happens when they’re only on the same page with half of the audience?

The movies are phenomenally well made, incredibly successful films that took one of the great heroes of fiction and sandblasted him down to the point where there’s a whole set of kids under thirty who think his signature moment was yelling “po-TAY-toes” at some computer animation.

For the record: yes, I am gonna die mad about it.

Gabriel L. Helman

Doctor Who and the Canon… Of Death

I’ve very much been enjoying the commentary around the last couple of Doctor Whos, especially “The Giggle”. There’s a lot of interesting things to talk about! But there’s a strand of fans, primarily ones used to American Sci-fi, that really struggle with the way Doctor Who works, and especially with how Doctor Who relates to itself. It fundamentally operates on a different set of rules for a long-running show than most American shows.

You see—Doctor Who doesn’t have a canon. It has a continuity, but that’s not the same thing.

Let’s step back and talk about “canon” for a second.

“Canon” in the sense of organizing a body of fiction, originates with the Sherlock Holmes fandom. There, they were making a distinction between Doyle’s work and what we’d now call “fan fiction”. Using the biblical term was one of those jokes that was “ha ha only serious”, it’s clearly over the top, but makes a clear point—some things exist at a higher level of importance than other things.

But it also sets the stage nicely for all future uses of the term; it draws a box neatly around the core works, and the social contract from that point on is that any new work needs to treat the material in “the canon” as having happened, but can pick and choose from the material outside—the apocrypha, to continue the metaphor.

So, any future Sherlock Holmes work is expected to include the fact that he faked his death at the top of a waterfall, but isn’t expected to necessarily include the fact that he was once treated by Freud.

Again, here the term mostly draws a line between what today we’d call “Official” and not. It’s a fancier way of putting the work of the original author at a higher level of importance than any other continuation, formally published or not.

But then a funny thing happened. As large, multi-author franchises became the norm in the late 20th century, we started getting Official works that still “didn’t count”.

As usual for things like this, Patient Zero is Star Trek. When The Next Generation got going, the people making that show found there was an awful lot of material out there they didn’t want to have to deal with. Not fan-fiction, the official vs. fan divide was clear by the mid-80s, but works that were formally produced by the same people, had all the rights to do so, but “didn’t really happen.” Specifically, the Animated Series, but also every single spin-off novel. So, Roddenberry & co. declared that the “Star Trek Canon” was the original show and the then-four movies, and everything else was not. Apocrypha. Official, but “didn’t count.”

(Pushing the biblical metaphor to the breaking point, this also introduced the first “deuterocanonical” work in the form of the Animated Star Trek, where nearly everything in it has been taken to have “happened” except the actual plots of the episodes themselves. And those force-field belts.)

(And, it’s absolutely insane to live in a world where we act like the Voyager episode “Threshold” happened and Diane Duane’s Rihannsu didn’t, but at least the rules are clear.)

And this became the standard for most big sprawling multi-media franchises: sooner or later nearly all of them made some kind of formal statement about which bits were “The Canon.” And the key detail, always, was that the only reason to formally declare something like this was to leave things out. This isn’t always a bad thing! As I said before, a lot of this was around establishing a social contract between the authors and the audience—“these are the things we’ll adjust future work to fit, and these are the things we’re giving ourselves permission to ignore.”

The most extreme version of this was Star Wars, twice over. First, you have the overly complex 4-tiered canon of the late 90s and early 00s, which not only established the Canon, but also provided a borderline-talmudic conflict resolution system to determine which of two pieces of canon that disagreed with each other was “right”.

Then, after Disney bought LucasFilm, they rescoped the canon, shrinking it down to pretty much just live action movies and the Clone Wars cartoon, banishing all the previous novels and such into the Deuterocanonical wilderness of “Legends”, which is sort of like if Martin Luther had also been the CEO of the company that bought the Catholic Church.

But, the point remains. Canon is a way to exclude works, largely as an attention-conservation device, a way for a franchise to say “this is what we commit to pay attention to, and the rest of this is fun but we’re going to ignore it.”

Which is where we get back to Doctor Who.

Because Doctor Who is unique in that no one in a position to do so has ever made a formal declaration about “Canon”. And this makes a certain kind of fan go absolutely bananas.

There’s no point in having a canon if you’re not excluding something; the whole point is to draw a box around part, rather than the whole thing. And that just isn’t Doctor Who’s style.

There’s a quote from 70s script editor Terrance Dicks that I can’t find at the moment, that goes something like “Doctor Who’s continuity is whatever the general public can remember,” and that’s really the animating principle. It’s a more free-wheeling, “it’s all true”, don’t sweat the details kind of attitude. This is how you end up with three completely different and utterly incompatible destructions of Atlantis. It’s not really a show that gets wrapped up in the tiny details? It’s a big picture, big concepts, moving forward kind of show.

And this completely violates the social contract of something like Star Trek or Star Wars, where the implied promise of having a Canon is that everything inside it will fit together like clockwork, and that any “violations” are opportunities for deep navel-gazing stories explaining the reasons. This leads to those franchises’ worst impulses, for example both to aggressively change how the Klingons look in an attempt to prove that “this isn’t your Dad’s Star Trek”, and then also spend three episodes with the guy from Quantum Leap explaining why they look different.

Doctor Who on the other hand, just kind of says “hey! Look how cool the Cybermen look now!” and keeps moving.

The point is, if you’ve bought into the clockwork canon worldview, Who looks incredibly sloppy, like a bunch of careless bunglers just keep doing things without any consideration of what came before.

(Which is really funny, because I absolutely guarantee you that the people who have been running Who the last two decades are much bigger fans of the old show than anyone who’s worked on Star Trek over the same period.)

So when the show got big in the US, the American fans kept trying to apply the Star Trek rules and kept getting terribly upset. This has spawned a fair amount of, shall we say, internet discussion over the years. The definitive statement on Doctor Who’s lack of canon is probably Paul Cornell’s Canonicity in Doctor Who. But there are those Trek fans that remain unconvinced. Whenever the show tosses out something new that doesn’t really fit with the existing material—bigeneration, say—there’s the fan cohort that goes completely mental. Because if you treat decades old stuff as having higher precedence than new ideas, the whole thing looks sloppy and careless.

But it’s not carelessness, it’s just a different world view to how this kind of storytelling works. Thematically, it all works together. The details? Not the point.

I tend to think of Who working more like Greek Myths than a documentary about fictional people. Do all the stories about Hercules fit together? No, not really. Is he always the same guy in those stories? Yes, yes he is.

Same rules apply to the madman in a box. And if someone has a better idea for a new story, they should go ahead and tell it. Atlantis can always drown one more time.

Gabriel L. Helman

Doom @ 30

I feel like there have been a surprising number of “30th anniversaries” this year, I hadn’t realized what a nerd-culture nexus 1993 was!

So, Doom! Rather than belabor points covered better elsewhere, I’ll direct your attention to Rock Paper Shotgun’s excellent series on Doom At 30.

I had a little trouble with experienced journalists talking about Doom as a game that came out before they were born, I’m not going to lie. A very “roll me back into my mummy case” moment.

Doom came out halfway through my second year of high school, if I’m doing my math right. My friends and I had all played Wolfenstein, had been reading about it in PC Gamer, we knew it was coming, we were looking forward to it.

At the time, every nerd group had “the guy that could get stuff.” Which usually meant the one with well-off lax parents. Maybe going through a divorce? This was the early 90s, so we were a little past the “do you know where your kids are” era, but by today’s standards we were still pretty… under-supervised. Our guy showed up at school with a stack of 3.5-inch floppies one day. He’d got the shareware version of Doom from somewhere.

I can’t now remember if we fired it up at the school or if we took it to somebody’s house; but I _do_ remember that this was one of maybe three or four times where I genuinely couldn’t believe what I was seeing.[1]

Our 386 PC couldn’t really handle it, but Doom had a mode where you could shrink the window down in the center of the monitor, so the computer had fewer pixels to worry about. I played Doom shrunk down nearly all the way, with as much border as image, crouched next to the monitor like I was staring into a porthole to another world.

I think it holds up surprisingly well. The stripped-down, high-speed, arcade-like mechanics, the level design that perfectly matches what the engine can and can’t do, the music, the just whole vibe of the thing. Are later games more sophisticated? Sure, no question. Are they better? Well… Not at shooting demons on a Mars base while early 90s synth-rock plays, no.

Reading about Doom’s anniversary this last week, I discovered that the current term of art for newly made Doom-like retro-style shooters is “Boomer Shooter.” I know everyone forgets Gen-X exists, that’s part of our thing, but this will not stand. The Boomers can’t have this one—there is no more quintessentially, universal “Gen-X” experience than playing Doom.

Other than everyone forgetting we exist and giving the Boomers credit, that is.


[1] The others, off the top of my head, were probably the original Kings Quest, Tomb Raider, Grand Theft Auto III, and Breath of the Wild.

Gabriel L. Helman

Doctor Who and the Giggle

Spoilers Ahoy

An older man embraces his younger self. The younger man is filled with guilt, and rage, and despair. The older man is calm, almost serene.

“It’s okay,” the older man says. “I got you.” He kisses his younger self on the forehead.

Sometimes the subtext gets to just be the text, you know? Or, to slightly misquote Garth Marenghi, sometimes writers who use subtext really are cowards.

It turns out we got a multi-Doctor story after all!

It’s maybe the most obvious idea that the show has never used—what if a Doctor’s last episode was also a teamup with the next Doctor? You can easily imagine how that might work—some time travel shenanigans, they team up, defeat the bad guy, then some more time travel shenanigans as the loop closes and the one regenerates into the other. Imagine if the Watcher in Logopolis had been played by Peter Davison!

On the other hand, it’s also perfectly obvious why no one has ever done it.

First, this is a hard character to play, and most Doctors have fairly rough starts. Having to share your first story with your predecessor is beyond having to hit the ground running, you have to be all the way there.

Second, it’s disrespectful to the last actor. A villain the current Doctor can’t beat, but that the next one can? Seems like a bad story beat to go out on. But more than that, making this show is an actual job—people come in, they go to work, they know each other. Imagine spending your last couple of days at work sharing your job with the next guy. That sucks. You don’t get a going-away party finale about you, instead you have to share with the new kid.

But this finds a solution to both of those.

There’s a long history in the show of easing the new lead in by keeping the old supporting cast and letting them do the heavy lifting while the new Doctor finds their feet. Look at “Robot”, Tom Baker’s first episode as an example—that’s essentially a baseline Jon Pertwee episode that just happens to have Tom in it instead. This story realizes that you can extend the concept to include the old Doctor as well, treating them as part of the existing cast to help get the new Doctor going.

But more critically, in this case, the old Doctor has already left once! Tennant got his big showpiece exit back in 2010. This is bonus time for him, and can’t diminish his big exit in any way.

Instead, after two and a half hours of a greatest hits reunion, he steps to the side and pours all his energy into getting the new guy off the ground. It’s hard to imagine any other actor who’s played the part being willing to put this much work into making their replacement look this good.

And good he does look. Ncuti Gatwa bursts onto the screen and immediately shows why he got the part. He’s funny, he’s exciting, he’s apparently made out of raw, uncut charisma. Tennant is still there, but once Gatwa arrives, he’s the only one you’re watching.

The whole thing is just tremendously fun, an absolute delight from beginning to end. Presumably its title refers to what the author was doing the whole time they wrote it. It’s a big goofy, exciting, joyous, ridiculous adventure where the Doctors win by being brave, and clever, and charismatic, and kind, and what more could you possibly want from this show?

The Toymaker is an interesting choice of returning classic villain. For everyone not steeped in the Deep Lore, “The Celestial Toymaker” is a mostly-missing story from late in the original show’s 3rd season where the Doctor gets pulled into the realm of the Toymaker, a sinister, seemingly immortal being living in a domain of play. The Doctor has to defeat him at a very boring game while his companions have a whole set of largely filler encounters with evil clowns and whatnot.

Oh, and the Toymaker himself (played by Batman’s Best Butler, Michael Gough) is a deeply racist Chinese caricature, a white man dressed in full Mandarin robes and all. “Celestial”, get it? He’s from space, and also Chinese! It’s a racist pun.

This is a strangely controversial take in some corners of Who fandom, where the argument that the Toymaker isn’t racist seems to boil down to the suggestion that the show correctly used a racial slur for their yellow peril character during an uncharacteristically racist period of the show to perfectly craft a racist pun… by accident?

¯\_(ツ)_/¯

That said, it’s easy to imagine that if you were ten in 1966 this was probably the coolest thing you’d ever seen—and then no one ever got to watch it again. For ages, that impression of the original viewers held sway in fan circles—the Discontinuity Guide, the formal record of mid-90s fan consensus, calls it an “unqualified success”.

Reader, it is not. It’s slow, the bad kind of talky, and feels like a show made entirely out of deleted scenes from another, better show.

Once we could watch reconstructions, the consensus started to shift a little.

Credit where credit is due, what it does have going for it is one of the show’s first swings at surrealism, and also one of the first versions of a powerful evil space entity; the adjective “lovecraftian” didn’t really exist in ’66, but this is one of the show’s first takes on “spooky elder god”. Also, it was strongly implied that the Toymaker and the Doctor already knew each other, and that was definitely the first time the show had hit that note.

So why bring him back?

Well, The Toymaker has the “mythic heft” to be the returning villain for the big anniversary show, while also not having anyone who would care that he got dispatched early, and in a way where he probably won’t be back again.

I thought the reworking of the character from a racist caricature to a character who likes to perform racist caricatures was very savvy, a solid way to rehab the character for a one-off return.

Plus, Neil Patrick Harris clearly understands the assignment, and absolutely delivers “evil camp” like no one else can. (More on that in a bit.)

Let’s talk about Mel for a second. Mel wasn’t anyone’s favorite companion, barely a sketch of a character during a weird time on the old show, despite Bonnie Langford being probably the highest-profile actor ever cast as a regular in the original show’s run.

A much-told anecdote is that for the cliffhanger of her first episode, the producer asked if she could scream in the same key as the first note of the closing credits, so that the one would slide into the other.

The bit of that story everyone always leaves out is that 1) yes she could, and 2) she nailed it in one take. There was a whole lot of talent there that the show just left on the floor. She was there for a year and a half, and then got out of the way so Ace could anchor the final mini-renaissance of the show before it finally succumbed to its wounds. Consigned to that list of characters where you go, “oh right, them” when you remember.

But then a funny thing happened.

Classic Doctor Who has been embarrassingly well-supported on home video. The entire show was released on DVD, and they’re now about half-way through re-releasing the whole show on Blu-ray as well. As a result of their decision to release each story separately on DVD, every single story has a wealth of bonus material—interviews, archive clips, making-of documentaries. The bulk of the DVDs came out during the tail-end of the “wilderness years” before the show came back, and the special features tend to split their time between “settling old grudges” and “this wasn’t that bad, actually.” There’s a real sense that “this is for the permanent record”, and so everyone tries to put their best face forward, to explain why things were the way they were, and that it was better than you remembered.

The Blu-rays, on the other hand, have a very different tone. Released long after the new show has become a monster hit, the new sets repackage all the old material while adding new things to fill in the gaps. While the DVDs tended to focus on the nuts-and-bolts of the productions, the new material is much more about the people involved. And they are all much more relaxed. We’re long past the point where the show needs apologizing or explaining, and everyone left just finally says what they really thought about that weird job they had for a year or two, decades ago.

A consequence of all this material has been that several figures have had their reputations change quite a bit. And perhaps none more so than Bonnie Langford. Far from being “that lady that played Peter Pan who kept yelling about carrot juice”, in every interview she comes across as a formidably talented consummate professional who walked into an absurd situation, did the best job anyone could possibly do, and then walked back out again.

Faced with a character with no background, no personality other than “80s perky”, and not even a real first story, and in a situation where she got no direction on a show where the major creative figures were actively feuding with each other, she makes the decision to, basically, lean into “spunky”, hit her marks, and go home. From my American perspective, she basically settles on “Human on Sesame Street interacting with the Doctor as a muppet” as a character concept, which in retrospect, is a really solid approach to Doctor Who in 1986.

The character, as seen on screen from “Terror of the Vervoids” to “Dragonfire”, still doesn’t, in any meaningful way, work, but the general consensus has drifted from “terrible idea” to “actually fairly interesting idea executed terribly.”

So, here in 2023, Bonnie Langford can show up on BBC One and credibly represent the whole original show for the big 60th anniversary.

And, this version of the character basically does work, which it accomplishes by just giving her something to do. For example, she gets to deliver exposition through song, a mind-bogglingly obvious idea that the old show just never thought of.

And look, if Lis Sladen were still alive that probably would have been Sarah Jane, but that wasn’t an option, so RTD went for something interesting that hadn’t been tried yet.

What’s this story for?

Like we talked about before, it’s hard not to read these three specials as an artist in conversation with their previous work. If “The Star Beast” was about resolving Donna, and “The Wild Blue Yonder” was about turning out a great episode of Doctor Who, what’s “The Giggle” here to do?

On a purely mechanical basis, this is here to give Tennant a big send-off and clear the decks so Gatwa and the new, new show can get a clean start.

But also, you get the feeling there were a couple things RTD wanted a do-over on before he relaunched the show for real.

One of the things that’s so great about Doctor Who is that it’s camp, but not just any camp. Doctor Who is AAA, extra-virgin, weapons-grade camp, and most people can’t hit that.

A lot of the time, when someone complains about someone coming on Who and being “camp” what they really mean is that they weren’t camp enough.

For example: John Simm’s take on the Master back in 2007. Like most of Series 3, it almost worked. There’s a scene towards the end where he’s dancing around the helicarrier to a Scissor Sisters song, and it’s supposed to be sinister and instead it’s just kind of goofy? Simm can’t quite throttle up to the camp required to pull that off, and in all their scenes together you can see Tennant easing off on the throttle. None of it quite worked; it just never hit the “evil camp” that RTD was clearly looking for.

Harris dancing to the Spice Girls while the UNIT soldiers fired rose petals at him was clearly what RTD had in mind a decade and a half ago, and it was glorious.

And the reprise of the Flash Gordon hand retrieving the Master is just delicious.

I think my favorite moment of the whole show was “But she was killed by a bird!”

The Toymaker’s puppet show was glorious. It served (at least) two purposes.

First, this was clearly some gentle ribbing from one showrunner to the other. While “The Star Beast” directly engaged with Moffat’s criticism of Donna’s mind-wipe, this was RTD responding in kind about Moffat’s fetish for killing-but-not-really his companions. And then, RTD locking in on The Flux as a source of more Doctor Angst™.

Second, it grounded the whole point of the episode. The Doctor has been through a lot. Trauma has been a core feature of the show since the 2005 revival, but this was a moment to pause and underscore, mostly for Donna’s benefit, how many terrible things have happened since she was on the show.

Like the Doctor casually mentioning that he was “a billion years old”, things have happened over the last fifteen years.

There’s been some suggestion that the puppet show was RTD throwing shade at his successors, and no. The shade was “I made a jigsaw of your history.”

This set of specials had a very relaxed attitude towards “the rules”, whatever those might be. The sharpest example of this is keeping the emotional reverberations of The Flux, but muddying all the water around The Timeless Child, and the general “aww screw it” anything-goes attitude towards regeneration.

One of the big, maybe the biggest, innovations of the 2005 re-imagining of Doctor Who was to expand the emotional palette. While the original show tended to operate in a very narrow band of—frankly—safe emotions, the revival threw the throttle wide open. Mostly this was used for angst, and doubt, and unrequited love.

Now, here at the start of the 2023 revival, we add healing to the show’s vocabulary.

These specials summon up all the unresolved trauma of the revival show to date, and exorcise it.

Who in 2005 was about pain, and loss, and grief, and living with trauma. Who in 2023 is about healing.

For once, both the Doctor and the companion get a happy ending, and dine off into the sunset.

The new Doctor is a man healed, finally free of the weight of the revival show.

It’s hard not to read that as at least partly autobiographical?

Having the Doctor talking about past challenges, and then list The Time War, The Pandorica, and Mavic Chen as equals is hilarious. It’s nice to remember RTD is one of us, you know?

Bringing back Trinity Wells, but as an Alex Jones/Sean Hannity type, is even funner.

Formally, the upcoming season of the show is Series 1 of Doctor Who (2023). Much hay has been made in some quarters that “Disney has reset the show”, and there’s some gnashing of teeth that it’s “really” Series 14 of Doctor Who (2005) (or even Season 40 of Doctor Who (1963)).

From a production standpoint, it clearly is a new show; it’s being made at a new facility under the auspices of a new co-production company. From the view of the BBC’s internal paperwork, the 2023 show is as different an entity from the 2005–2022 show as that was from the 1963–1989 one. There’s still some churn, but the community seems to be coalescing on “Original era”, “Revival era”, and “Disney+ era” as the names you use in lists to organize the three iterations.

And it’s clear that from a branding perspective, Disney+—which is distributing the show outside of the UK and putting up a chunk of the budget for the privilege—would rather have the show page start with “Season 1” instead of the inexplicable-to-newcomers “Season 14.” And the contracts that cover the three iterations are clearly different too, with BritBox, Max (formerly HBO Max), and now Disney+ each having the rights to one of them. At worst, this seems like one of those moments where Amazing Spider-Man will declare a “bold new beginning!” and reset the issue numbering to #1. Sooner or later the original numbering sneaks back onto the inside cover, and then eventually it snaps back and issue 27 is followed by issue five hundred-something. It’s silly, but a decent branding exercise, a way to signal to new people “hey, here’s a safe place to jump on!” And, with the old business mostly concluded here, the Christmas episode seems like it’ll be a solid place to come on board.

But again, the subtext pulls up into the text.

By any reasonable measure, David Tennant is the revival show. He was by far the most popular Doctor, and Series 4 with him and Catherine Tate was the all-time ratings high.

So here, the two of them stand in for the entire revival era of the show. Bonnie Langford gets to represent the Original. This episode ends with the revival show and the new embracing, while the original show watches and approves. The revival show hands the keys to the new show, and then the revival and original shows retire to the country, while the new show heads off to new adventures.

Can’t wait to see what happens next.

Gabriel L. Helman

Indiana Jones and the Dial of Destiny

Okay, I finally saw Dial of Destiny. It was… “fine”, I guess? But I don’t understand why you would go to all the trouble of making “one more” Indy movie in 2023 if the best you could muster was “fine”.

Spoilers Ahoy for Dial of Destiny

Let’s start with what works: The best part of the movie was its enthusiastic endorsement of punching Nazis. It’s strangely rare to see that stated so clearly and without hedging these days, so that almost makes up for everything else.

Also, the cast is uniformly excellent. This is the first Indiana Jones movie since Raiders where there’s no weak link; everyone does a great job with what they have to do, and frankly, everyone looks like they’re having a good time doing it. Even Harrison Ford looks awake and engaged, which hasn’t always been a given since somewhere around Air Force One.

Other than that, it’s well made, looks good, solid production design, the punches all sound great. The plot cooks along at a steady clip, the action works. And the strange thing about this movie is that while it doesn’t really do anything badly, it also doesn’t do anything particularly well. It’s fine.

So what doesn’t work so well?

The funniest thing is that Harrison Ford doesn’t even try to make his voice sound younger in the prologue. Just a fifty-year-old face with an eighty-year-old voice. What a legend!

But the first thing I noticed was how still the camera was. I appreciate not wanting to make a pastiche, but scene after scene of actors looking at something in a locked-off camera shot, I’d think to myself, “man, Spielberg would have put a really cool camera move here.”

It’s way too long. There’s a reason all the others are a tight two hours; there’s no excuse for a two-and-a-half-hour Indy movie. Halfway through the WW2 prologue I caught myself thinking “wow, this is still going, huh?” Also, look, the third time you write “and then Indy is captured and bundled into the back of a van” in the script, your movie is too long. So it’s not just Spielberg that’s missed, but also Michael Kahn.

Similarly, there is no universe where you should spend 200+ million dollars on an Indiana Jones movie.

And then it works its way through the other greatest hits of all the bad habits that “legacy sequels” have picked up over the last decade or so:

  • Overly enamored with mediocre computer de-aging
  • The Hero has suffered terrible personal setbacks since we saw them last, and is now living in failure, all past successes forgotten
  • Full of new, younger characters, but they’re not super likeable, and they’re around more than makes sense yet not enough to tee them up as the new leads, as if they wanted to set up a spin-off but then got cold feet halfway through the movie.
  • Way, way too much greenscreen instead of practical effects

Strangely, it seems like they used Crystal Skull as their main source of inspiration, fixing the cosmetic mistakes but not the fundamental ones. For example, replacing Shia with Phoebe Waller-Bridge is a huge upgrade, but at no point did anyone seem to stop and ask why they needed a Junior Varsity Indy to begin with. I like Phoebe Waller-Bridge a lot, so she was fun; but giving the big hero moments to… the new kid sidekick? Why? Personally, instead of another one-off sidekick I would have much preferred Indy & Marion on one last ride bickering the whole time. If you’re doing a one-last-ride nostalgia piece, why add so many new people?

And look, Crystal Skull was bad, but at least it had Cate Blanchett vamping it up as an evil Russian psychic? This one had… the guy from Casino Royale playing Great Value Brand Red Skull?

And why break up Indy and Marion only to get them back together again at the end?

Frustratingly, it’s not like this movie was short on ideas. There’s at least a dozen really good ideas for an Indiana Jones movie:

  • What if Wernher von Braun was still a Nazi?
  • Related: Nazis are sneaking back, time to get punching!
  • The Moon Landing!
  • Closely related: Astronauts! (Imagine a fistfight between Indy and some NASA guys)
  • The Antikythera mechanism as a macguffin. Great choice, brings in a whole set of Mediterranean iconography you can play with that the Indy movies haven’t done yet
  • Bonus macguffin: the Spear of Destiny, as used in every single Indy spinoff in the 90s, and for good reason
  • CIA agents working with neo-nazis but not being happy about it
  • Indy as a retired “old guy”, living in a world that’s passed him by, yet is still historical for the audience. Credit where credit is due, the cut to old Indy being awoken by “Magical Mystery Tour” was absolutely worth whatever it cost to get that song. (Plus, Indy in an anti-Vietnam demonstration? YES PLEASE!)
  • A plot that ties unfinished business from whatever he was doing during the war with what’s going on now
  • And more broadly from the above, what does a retired action hero do with his day?
  • Confronting the past choices of the other movies: hey, wait a sec, was he a grave robber? There’s a whole confronting-the-past angle that the movie dips its toes into and then chickens out of. Remarkably, this is the only Indiana Jones movie to contain the words “grave robber”, and the only movie where Indy actually destroys a historical artifact.
  • But the absolute best idea this movie has is Indy trying to recover historical artifacts stolen by the Nazis as part of the end-of-war plunder. It’s inconceivable to me that they wasted this on just the opening: Just gonna throw this out there, but “Indiana Jones and the Secret of the Amber Room” set in the mid-70s would have been absolutely incredible.

And you can squint and make just about any of those work as a spine for a whole movie. Instead, this movie throws them all into the blender and they’re all just… there? They don’t line up in any sort of thematic way, the movie just flirts with one and then moves to the next. But also, there’s four credited screenwriters, so it really feels like they took every pitch from the last 15 years and jammed them all in there. Considering the director, it also feels like they started with “Logan, but Indy” and then kept rounding down.

As a point of comparison, Indiana Jones and the Last Crusade has just as many plates spinning: the Nazis, Donovan’s ambitions, Indy’s dad, whatever Dr. Elsa Schneider is playing at, the Brotherhood of the Cruciform Sword. But all of those characters are oriented around the Grail; their actions center around their motivations regarding it. Plus, that movie has maybe the best action scene Spielberg has ever put together with the tank chase. In Dial, none of these elements go together, and there are some car chases.

This is a movie that knows “emotions” are a thing other movies have, but isn’t sure where they go? So we get Indy being—correctly—very upset that his friend Armand the Vampire was murdered, but only for about seven seconds. Or the scene where Indy talks about his son’s death, which Ford acts the hell out of, but which descends to pure bathos the second you realize that yes, they really did pull a Poochie on Shia’s character and that Mutt died on the way back to his home planet.

The defining moment of the movie for me came about half-way through. The good guys are in trouble, and Indy says “hang on, I have an old friend that’ll help us,” right after a long conversation about the kid that Fleabag has picked up, and then the movie cuts to… ANTONIO BANDERAS, of all people, playing his character from the SpongeBob SquarePants movie? Meanwhile, at the exact same time, Ke Huy Quan is turning in an Oscar-winning performance in another movie. Short Round is never mentioned.

Actually, though, the worst part of the movie is that John Williams took one look at it and decided not to even try. Less than ten minutes into the prologue, and he’s recycling the music from the Last Crusade tank chase. Say what you will about Crystal Skull, but at least the Skulls got their own leitmotif.

Dial of Destiny cost a lot of money, and didn’t do very well at the box office. It’s one of the central exhibits in both 2023’s weird box office specifically and Disney’s post-2019 slump generally. This is the point where people on twitter start blaming its failure on someone “having an agenda” or “repackaging nostalgia”. And what’s funny is this is the movie that proves all those people wrong, because if that was the problem, fucking Short Round would be in the movie.

Instead, I think the problem is both deeper and simpler. This is a movie made by people with no taste, no ambition beyond “making another one,” whose main creative vision is that they love to have meetings. People who are here to make “content”.

I’d love to ask the people behind this movie to explain, in their own words, what makes Indiana Jones a unique character, and to do that without using the words “brand” or “franchise.” Because I’m not sure they could?

Indy is a character who is always in over his head, but gets through because he’s got more guts and never quits. And that’s just… not in this movie.

And that’s where it starts to get a little insulting: Raiders of the Lost Ark is as close to a perfect movie as anyone has ever made, and Indiana Jones himself was a truly unique creation. Here, he’s been sandblasted down to just another superhero-adjacent character, the hat and jacket more of a signature costume than something someone would really wear. On the most superficial level, he doesn’t even really use his whip, it’s just hanging from his belt because “Indiana Jones”. There’s nothing here that couldn’t be in some other action movie. More than anything, this movie feels like a late-period Roger Moore Bond movie: perfectly competent, but utterly lacking in any ambition beyond the release date. That and the fact that the lead moves like an 80-year-old when you can see their face, and like a 30-year-old when their back is turned to the camera.

Critically, the other Indy movies all have a moment where Indy realizes that the macguffin isn’t what he cares about, and that he’s really here to save a person—Marion, the village, his father, his son. Artifacts, supernatural or otherwise, can take care of themselves, he’s here to protect something else. And that turn never comes here, instead Indy’s real mission is—what, exactly?

This movie is made by people who really think that Indy didn’t do anything in Raiders, and he really doesn’t get anything done here.

Everyone in this movie had better things to be doing with their time, and I don’t understand why they bothered to go ahead if this was the best they could do.

It was fine.

Gabriel L. Helman

Doctor Who and the Wild Blue Yonder

My favorite moment was a little beat about a third of the way through the story. While working to reboot the spaceship they're trapped on, the Doctor quietly speculates to himself where the TARDIS has gone. The show always works better when it remembers to treat the TARDIS as a character instead of “just” the Doctor's car. It’s a perfect Doctor Who moment; simultaneously both explicitly mythic, with an undying space god invoking the image of an immortal, indestructible alien Time Machine outlasting whole civilizations, and quietly personal as the main character ruminates on where their oldest friend goes on vacation.

The TARDIS’s agency, and unique personality, have been intriguingly foregrounded; last week she dropped the Doctor right on top of Donna seemingly intentionally, and this week the ship delivers a warning via the subtext of a song, runs off to repair herself, and then pops in to save everyone just in the nick of time. The return of the TARDIS’s personality from “The Edge of Destruction” was nowhere near my bingo card for this anniversary run, and I am here for it.

Ahhh, the mysterious, all-secret, all-filmed-inside second one! The rumor mill was all over the place, the marketing for these specials went out of their way to avoid it, and by the last few days the internet had gone positively feral trying to guess what was going on.

So it starts, and the question is, what kind of story is this? All we knew for sure was that it was “scary”, except then it starts with a very self-contained comedy skit. There’s an unjustified tension to the first few minutes, as The Doctor and Donna open spaceship doors; is one going to reveal Matt Smith or Peter Capaldi or Carole Ann Ford or Ncuti Gatwa or someone? (Depending on which batch of rumors you believed.)

And then, about 15 minutes in, no—this is none of those things, this is RTD calling a do-over on “Midnight”.

RTD always liked having a sort of meta-structure to his Who seasons: start with the mostly-comedy opener, with a present-past-future triplet at the start, do the “funny” two-parter for kids, throw in a celebrity historical, the scary two-parter, a weird spiky and cheap one towards the end, and then a big blowout finale. And then a weirdly dark christmas episode as an epilogue.

The non-season of the 2009 specials was a stripped-down version of this—the fun opener of “Planet of the Dead”, the spooky two-parter of “Waters of Mars”, and then the grand finale of “The End of Time.”

And so now, it’s obvious we’re using the same basic format, except this middle is closer to “Midnight” or “Blink” or “Boom Town” than “The Empty Child” or “Impossible Planet” or “Silence in the Library.” I think that’s a good move! Those weird ones were always some of the best, and it’s fun to see him slip back into the “small and scary” mold this early in the return. And not only that, but one explicitly in the mold of a “let me prove I can still write” story.

What made Tennant and Tate such a great pair of leads for Doctor Who? Their one year in 2008 remains the new show’s all-time ratings peak, and has the all-time highest AI scores for the entire 60-year run of the show. Not that it isn’t deserved, but why?

Partly, much like Tom Baker and Elizabeth Sladen, they had the good fortune to be on a show that was firing on all cylinders, operating at an absolute creative peak of the people behind the cameras.

You have one of the very few times where both leads are 1) at the same acting skill level, and 2) that level is very, very high. So you get this effect where not only are they both good, but they make each other better, if nothing else by virtue of the fact that neither one has to slow down to let the other one keep up. Here, they can go as hard as they can, and the other will stay right with them. I mean, Tennant was significantly better than his other co-leads, and on the other side, Karen Gillan was visibly dialing it back so Matt Smith could stay the lead. The only other time you see both leads pushing each other upwards like Tennant and Tate do was Capaldi and Coleman, and that was the other creative peak of the new show.

So here, Tate and Tennant put on an absolute clinic in how making tiny choices slightly different can flag “wrongness” without actually foregrounding anything as obviously wrong. And then, when they go full Evil Doppelganger Vampires, they manage to keep it as “the same characters, but scary”, while still only nibbling the edges of the scenery rather than devouring it all-you-can-eat-buffet style.

(One almost gets the impression that Tennant especially is thinking back to John Simm’s moderately successful take on the Master and thinking, “look, let me show you the right way to do Evil Doctor.”)

This is extra impressive considering neither of these two have played these characters in a decade and a half, and that this is only their second swing back at it.

We haven’t talked much about the episode itself yet, and that’s because it’s hard to know what to say. It’s utterly delightful that we’ve got an episode that looks and moves like “a cheap one”, but is blatantly incredibly expensive.

The core concept is incredibly solid; joking aside, this really does feel like “Midnight, but Donna comes along.” Take just the two main characters, strip away everything extraneous—no sonic, no guest cast, not even Tennant’s coat, and build the tension around how well these two actually know each other.

And then, fabulously, take two characters (and two actors) known for moving and talking fast, and put them in a situation where to win they have to be slow. Beautiful!

It might be a perfect example of Doctor Who running in “small and scary” mode.

And, the Doctor changing the subject away from Gallifrey with “well, then all that got complicated” is one of the best pieces of writing for telling a part of the audience “we’re not going to retcon anything, but we’re going to keep moving forward not looking back” that I’ve ever seen.

Overall, it’s an interesting approach to an anniversary. We had a big messy “lots of past cast members show up” carnival last year with “Power of the Doctor”, and semi-wishful thinking aside it was unlikely that RTD was going to do something similar again.

Instead, the old gang got back together and are effectively slotting a missing half-season between 2009 and 2010. Because despite what I said in the last paragraph, here we’ve got nothing but past cast members. Instead of a big cameo museum, we pick one specific point of the show and do a little more of that. It’s an approach that I’d like to see more of, frankly. I’d love a Cartmel-McCoy-Aldred special, or a Moffat-Capaldi-Coleman. And as fun as “The Two Doctors” was, they really should have just let Troughton and Hines have an episode to themselves.

There’s a faint hint in some corners of “is this all they’re doing?” But yes! Look at all they’re doing! Getting three extra episodes from one of the all-time great casts is a gift. Even better, they’re spending a whole third of their limited time making “real” Doctor Who, not just reunion grandstanding. Incredible.

Finally, there’s a real glee in the way that between this week’s “hot Newton” and last week’s scream for Trans rights RTD is making “Doctor Who is woke now” old news long before Ncuti Gatwa has to absorb the brunt of it. It’s both delightful trolling of a group that deserves it, as well as an act of real kindness towards the new lead.

And then it turns out the big surprise return of a past cast member was Bernard Cribbins. Perfect.
