Gabriel L. Helman

Hardware is Hard

Friday’s post was, of course, a massive subtweet of Humane’s “AI” Pin, which finally made it out the door to what can only be described as “disastrous” reviews.

We’ve been not entirely kind to Humane here at Icecano, so I was going to sort of discreetly ignore the whole situation, the way you would someone who fell down a flight of stairs at a party but was already getting the help they needed. But now we’re going on a week and change of absolutely excruciating discourse about whether it’s okay to give bad products bad reviews. It’s the old “everything gets a 7” school of video game reviews, fully metastasized.

And, mostly, it’s all bad-faith garbage. There’s always a class of grifter who thinks the only real crime is revealing the grift.

Just to be crystal clear: the only responsibility a critic or reviewer of any kind has is to the audience, never to the creators, and even less to the creators’ shareholders.

But also, we’re at the phase in the cycle where a certain kind of tech bro climbs out of the woodwork and starts saying things like “hardware is hard.” And it is! I’ve worked on multiple hardware projects, and they were all hard, incredibly hard. I once watched someone ask the VP of Hardware Engineering “do the laws of physics even allow that?” and the answer was a half-grin followed by “we’re not sure!”

I hate to break it to you, but hard work isn’t an incantation to deflect criticism. Working hard on something stupid and useless isn’t the brag you think it is.

Because, you know what’s harder? Not having a hundred million dollars plus of someone else’s money to play with, interest-free, for years on end. They were right about one thing, though: we did deserve better.

Gabriel L. Helman

Sometimes You Just Have to Ship

I’ve been in this software racket, depending on where you start counting, about 25 years now. I’ve been fortunate to work on a lot of different things in my career—embedded systems, custom hardware, shrinkwrap, web systems, software as a service, desktop, mobile, government contracts, government-adjacent contracts, startups, little companies, big companies, education, telecom, insurance, internal tools, external services, commercial, open-source, Microsoft-based, Apple-based, hosted on various unices, big iron, you name it. I think the only major “genres” of software I don’t have road miles on are console game dev and anything requiring a security clearance. If you can name a major technology used to ship software in the 21st century, I’ve probably touched it.

I don’t bring this up to humblebrag—although it is a kick to occasionally step back and take in the view—I bring it up because I’ve shipped a lot of “version one” products, and a lot of different kinds of “version ones”. Every project is different, every company and team are different, but here’s one thing I do know: No one is ever happy with their first version of anything. But how you decide what to be unhappy about is everything.

Because, sometimes you just have to ship.

Let’s back up and talk about Venture Capital for a second.

Something a lot of people intellectually know, but don’t fully understand, is that the sentences “I raised some VC” and “I sold the company” are the same sentence. It’s really, really easy to trick yourself into believing that’s not true. Sure, you have a great relationship with your investors now, but if they need to, they will absolutely prove to you that they’re calling the shots.

The other important thing to understand about VC is that it’s gambling for a very specific kind of rich person. And, mostly, that’s a fact that doesn’t matter, except—what’s the worst outcome when you’re out gambling? Losing everything? No. Then you get to go home, yell “I lost my shirt!” everyone cheers, they buy you drinks.

No, the worst outcome is breaking even.

No one wants to break even when they go gambling, because what was the point of that? Just about everyone, if they’re in danger of ending the night with the same number of dollars they started with, will work hard to prevent that—bet it all on black, go all-in on a wacky hand, something. Losing everything is so much better than passing on a chance to hit it big.

VC is no different. If you take $5 million from investors, the absolute last thing they want is that $5 million back. They either want nothing, or $50 million. Because they want the one that hits big, and a company that breaks even just looks like one that didn’t try hard enough. They’ve got that same $5 mil in ten places; they only need one to hit to make up for the other nine bottoming out.

And we’ve not been totally positive about VC here at Icecano, so I want to pause for a moment and say this isn’t necessarily a bad thing. If you went to go get that same $5 million as a loan from a bank, they’d want you to pay that back, with interest, on a schedule, and they’d want you to prove that you could do it. And a lot of the time, you can’t! And that’s okay. There’s a whole lot of successful outfits that needed that additional flexibility to get off the ground. Nothing wrong with using some rich people’s money to pay some salaries, build something new.

This only starts being a problem if you forget it. And it’s easy to forget. In my experience, depending on your founder’s charisma, you have somewhere between five and eight years. The investors will spend years ignoring you, but eventually they’ll show up and want to know if this is a bust or a hit. And there’s only one real way to find out.

Because, sometimes you just have to ship.

This sounds obvious when you say it out loud, but to build something, you have to imagine it first. People get very precious around words like “vision” or “design intent”, but at the end of the day, there was something you were trying to do. Some problem to solve. This is why we’re all here. We’re gonna do this.

But this is never what goes out the door.

There’s always cut features, things that don’t work quite right, bugs wearing tuxedos, things “coming soon”, abandoned dead-ends. From the inside, from the perspective of the people who built the thing, it always looks like a shadow of what you wanted to build. “We’ll get it next time,” you tell each other, “Microsoft never gets it right until version 3.”

The dangerous thing is, it’s really, really easy to only see the thing you built through the lens of what you wanted to build.

The less toxic way this manifests is to get really depressed. “This sucks,” you say, “if only we’d had more time.”

The really toxic way, though, is to forget that your customers don’t have the context you have. They didn’t see the pitch deck. They weren’t there for that whiteboard session where the lightbulbs all went on. They didn’t see the prototype that wasn’t ready to go just yet. They don’t know what you’re planning next. Critically—they didn’t buy into the vision, they’re trying to decide if they’re going to buy the thing you actually shipped. And you assume that even though this version isn’t there yet, wherever “there” is, they’ll buy it anyway because they know what’s coming. Spoiler: they don’t, and they won’t.

The trick is to know all this ahead of time. Know that you won’t ship everything, know that you have to pick the slice you actually can do, given the time, or money, or other constraints.

The trick is to know the difference between things you know and things you hope. And you gotta flush those out as fast as you can, before the VCs start knocking. And the only people who can tell you are your customers, the actual customers, the ones who are deciding if they’re gonna hand over a credit card. All the interviews, and research, and prototypes, and pitch sessions, and investor demos let you hope. Real people with real money is how you know. As fast as you can, as often as you can.

All that waiting, refining, “pivoting”, and doing another round of ethnography is just finding new ways to hope, just wasting resources you could have used once you actually learned something.

Time’s up. Pencils down. Show your work.

Because, sometimes you just have to ship.

Reviews are a gift.

People spending money, or not, is a signal, but it’s a noisy one. Amazon doesn’t have a box where they can tell you “why.” Reviews are people who are actually paid to think about what you did, but without the bias of having worked on it, or the bias of spending their own money. They’re not perfect, but they’re incredibly valuable.

They’re not always fun. I’ve had work I’ve done written up on the real big-boy tech review sites, and it’s slightly dissociating to read something written by someone you’ve never met, about something you worked on, complaining about a problem you couldn’t fix.

Here’s the thing, though: they should never be a surprise. How much the reviews surprise you is how you know how well you did at keeping the bias, the vision, the hope, under control. The next time I ship a version one, I’m going to have the team write fake techblog reviews six months ahead of time, and then see how we feel about them, use that to fuel the last batch of duct tape.

What you don’t do is argue with them. You don’t talk about how disappointing it was, or how hard it was, or how the reviewers were wrong, how it wasn’t for them, that it’s immoral to write a bad review because think of the poor shareholders.

Instead, you do the actual hard work. Which you should have done already. Where you choose what to work on, what to cut. Where you put the effort into imagining how your customers are really going to react. What parts of the vision you have to leave behind to build the product you found, not the one you hoped for.

The best time to do that was a year ago. The second best time is now. So you get back to work, you stop tweeting, you read the reviews again, you look at how much money is left. You put a new plan together.

Because, sometimes you just have to ship.

Gabriel L. Helman

Cyber-Curriculum

I very much enjoyed Cory Doctorow’s riff today on why people keep building torment nexii: Pluralistic: The Coprophagic AI crisis (14 Mar 2024).

He hits on an interesting point, namely that for a long time the fact that people couldn’t tell the difference between “science fiction thought experiments” and “futuristic predictions” didn’t matter. But now we have a bunch of aging gen-X tech billionaires waving dog-eared copies of Neuromancer or The Moon is a Harsh Mistress or something, and, well…

I was about to make a crack that it sorta feels like high school should spend some time asking students “so, what do you think is going on with those robots in Blade Runner?” or the like, but you couldn’t actually show Blade Runner in a high school. Too much topless murder. (Whether or not that should be the case is beside the point.)

I do think we should spend some of that literary analysis time in high school English talking about how science fiction with computers works, but what book do you go with? Is there a cyberpunk novel without weird sex stuff in it? I mean, weird by high school curriculum standards. Off the top of my head, thinking about books and movies, Neuromancer, Snow Crash, Johnny Mnemonic, and Strange Days all have content that wouldn’t get past the school board. The Matrix is probably borderline, but that’s got a whole different set of philosophical and technological concerns.

Goes and looks at his shelves for a minute

You could make Hitchhiker work. Something from later Gibson? I’m sure there’s a Bruce Sterling or Rudy Rucker novel I’m not thinking of. There’s a whole stack of Ursula Le Guin everyone should read in their teens, but I’m not sure those cover the same things I’m talking about here. I’m starting to see why this hasn’t happened.

(Also, Happy π day to everyone who uses American-style dates!)

Gabriel L. Helman

The Sky Above The Headset Was The Color Of Cyberpunk’s Dead Hand

Occasionally I poke my head into the burned-out wasteland where twitter used to be, and while doing so stumbled over this thread by Neal Stephenson from a couple years ago:

Neal Stephenson: "The assumption that the Metaverse is primarily an AR/VR thing isn't crazy. In my book it's all VR. And I worked for an AR company--one of several that are putting billions of dollars into building headsets. But..."

Neal Stephenson: "...I didn't see video games coming when I wrote Snow Crash. I thought that the killer app for computer graphics would be something more akin to TV. But then along came DOOM and generations of games in its wake. That's what made 3D graphics cheap enough to reach a mass audience."

Neal Stephenson: "Thanks to games, billions of people are now comfortable navigating 3D environments on flat 2D screens. The UIs that they've mastered (e.g. WASD + mouse) are not what most science fiction writers would have predicted. But that's how path dependency in tech works."

I had to go back and look it up, and yep: Snow Crash came out the year before Doom did. I’d absolutely have stuck this fact in Playthings For The Alone if I’d remembered, so instead I’m gonna “yes, and” my own post from last month.

One of the oft-remarked-on aspects of the 80s cyberpunk movement was that the majority of the authors weren’t “computer guys” beforehand; they were coming at computers from a literary/artist/musician worldview, which is part of why cyberpunk hit the way it did; it wasn’t the way computer people thought about computers—it was the street finding its own use for things, to quote Gibson. But a less remarked-on aspect was that they also weren’t gamers. Not just computer games; they didn’t play board games or tabletop RPGs either.

Snow Crash is still an amazing book, but it was written at the last possible second where you could imagine a multi-user digital world and not treat “pretending to be an elf” as a primary use-case. Instead the Metaverse is sort of a mall? And what “games” there are aren’t really baked in, they’re things a bored kid would do at a mall in the 80s. It’s a wild piece of context drift from the world in which it was written.

In many ways, Neuromancer has aged better than Snow Crash, if for no other reason than that it’s clear that the part of The Matrix that Case is interested in is a tiny slice, and it’s easy to imagine Wintermute running several online game competitions off camera, whereas in Snow Crash it sure seems like The Metaverse is all there is; a stack of other big on-line systems next to it doesn’t jibe with the rest of the book.

But all that makes Snow Crash really useful as a point of reference, because depending on who you talk to it’s either “the last cyberpunk novel” or “the first post-cyberpunk novel”. Genre boundaries are tricky, especially when you’re talking about artistic movements within a genre, but there’s clearly a set of work that includes Neuromancer, Mirrorshades, Islands in the Net, and Snow Crash, that does not include Pattern Recognition, Shaping Things, or Cryptonomicon; the central aspect probably being “books about computers written by people who do not themselves use computers every day”. Once the authors in question all started writing their novels in Word and looking things up on the web, the whole tenor changed. As such, Snow Crash unexpectedly found itself as the final statement for a set of ideas, a particular mix of how near-future computers, commerce, and the economy might all work together—a vision with strong social predictive power, but unencumbered by the lived experience of actually using computers.

(As the old joke goes, if you’re under 50, you weren’t promised flying cars, you were promised a cyberpunk dystopia, and well, here we are, pick up your complimentary torment nexus at the front desk.)

The accidental predictive power of cyberpunk is a whole media thesis on its own, but it’s grimly amusing that in all the places where cyberpunk gets the future wrong, it’s usually because the author wasn’t being pessimistic enough. The Bridge Trilogy is pretty pessimistic, but there’s no indication that a couple million people died of a preventable disease because the immediate ROI on saving them wasn’t high enough. (And there’s at least two diseases I could be talking about there.)

But for our purposes here, one of the places the genre overshot was this idea that you’d need a 3d display—like a headset—to interact with a 3d world. And this is where I think Stephenson’s thread above is interesting, because it turns out it really didn’t occur to him that 3d on a flat screen would be a thing, and he assumed that any sort of 3d interface would require a head-mounted display. As he says, that got stomped the moment Doom came out. I first read Snow Crash in ’98 or so, and even then I was thinking none of this really needs a headset, this would all work fine on a decently-sized monitor.

And so we have two takes on the “future of 3d computing”: the literary tradition from the cyberpunk novels of the 80s, and then actual lived experience from people building software since then.

What I think is interesting about the Apple Cyber Goggles, in part, is it feels like that earlier, literary take on how futuristic computers would work re-emerging and directly competing with the last four decades of actual computing that have happened since Neuromancer came out.

In a lot of ways, Meta is doing the funniest and most interesting work here, as the former Oculus headsets are pretty much the cutting edge of “what actually works well with a headset”, while at the same time, Zuck’s “Metaverse” is blatantly an older millennial pointing at a dog-eared copy of Snow Crash saying “no, just build this” to a team of engineers desperately hoping the boss never searches the web for “second life”. They didn’t even change the name! And this makes a sort of sense, there are parts of Snow Crash that read less like fiction and more like Stephenson is writing a pitch deck.

I think this is the fundamental tension behind the reactions to Apple Vision Pro: we can now build the thing we were all imagining in 1984. The headset is designed by cyberpunk’s dead hand; after four decades of lived experience, is it still a good idea?

Gabriel L. Helman

AI Pins And Society’s Immune Responses

Apparently “AI Pins” are a thing now? Before I could come up with anything new and rude to say after the last one, the Aftermath beat me to it: Why Would I Buy This Useless, Evil Thing?

I resent the imposition, the idea that since LLMs exist, it follows that they should exist in every facet in my life. And that’s why, on principle, I really hate the rabbit r1.

It’s as if the cultural immune response to AI is finally kicking in. To belabor the metaphor, maybe the social benefit of blockchain is going to turn out to have been to act as a societal inoculation against this kind of tech bro trash fire.

The increasing blowback makes me hopeful, as I keep saying.

Speaking of, I need to share with you this truly excellent quote lifted from jwz: The Bullshit Fountain:

I confess myself a bit baffled by people who act like "how to interact with ChatGPT" is a useful classroom skill. It's not a word processor or a spreadsheet; it doesn't have documented, well-defined, reproducible behaviors. No, it's not remotely analogous to a calculator. Calculators are built to be *right*, not to sound convincing. It's a bullshit fountain. Stop acting like you're a waterbender making emotive shapes by expressing your will in the medium of liquid bullshit. The lesson one needs about a bullshit fountain is *not to swim in it*.

Gabriel L. Helman

What Might Be A Faint Glimmer Of Hope In This Whole AI Thing

As the Aftermath says, It's Been A Huge Week For Dipshit Companies That Either Hate Artists Or Are Just Incredibly Stupid.

Let’s look at that new Hasbro scandal for a second. To briefly recap, they rolled out some advertising for the next Magic: The Gathering expansion that was blatantly, blatantly, AI generated. Which is bad enough on its own, but that’s incredibly insulting for a game as artist-forward as MTG. But then, let’s add some context. This is after a year where they 1) blew the whole OGL thing, 2) literally sent The Actual Pinkertons after someone, 3) had a whole different AI art scandal for a D&D book that caused them to have to change their internal rules, 4) had to issue an apology for that stuff in Spelljammer, and 5) had a giant round of layoffs that, oh by the way what a weird coincidence, gutted the internal art department at Wizards. Not a company whose customers are going to default to good-faith readings of things!

And then, they lied about it! Tried to claim it wasn’t AI, and then had to embarrassingly walk it all back.

“Not Great, Bob!”

Here’s the sliver of hope I see in this.

First, the blowback was surprisingly large. There’s a real “we’re tired of this crap” energy coming from the community that wasn’t there a year ago.

More importantly, though, Hasbro knew what the right answer was. There wasn’t any attempt to defend or justify how “AI art is real art, we’re just using new tools”; this was purely the behavior of a company that was trying to get away with something. They knew the community was going to react badly. It’s bad that they still went ahead, but a year ago they wouldn’t have even tried to hide it.

But most importantly (to me), in all the chatter I saw over the last few days, no one was claiming that “AI” “art” was as good as real art. A year ago, it would have been all apologists claiming that the machine generated glurge was “just as good” and “it’s still real art”, and “it’s just as hard to make this, just different tools”, “this is the future”, and so on.

Now, everyone seems to have conceded the point that the machine generated stuff is inherently low quality. The defenses I saw all centered around the fact that it was cheap and fast. “It’s too cheap not to use, what can you do?” seemed to be the default view from the defenders. That’s a huge shift from this time last year. Like how bitcoin fans have mostly stopped pretending crypto is real money, generative AI fans seem to be giving up on convincing us that it’s real art. And the bubble inches closer to popping.

Gabriel L. Helman

You call it the “AI Nexus”, we call it the “Torment Pin”

There’s a class of nerd who, when looking at a potential concept, can’t tell the difference between “actually cool” and “only seemed cool because it was in something I read/saw when I was 14.”

Fundamentally, this is where the Torment Nexus joke comes from. This is why Zuckerberg burned zillions of dollars trying to build “The Metaverse” from Snow Crash, having never noticed that 1) the main character of the book is one of the architects of the metaverse and it left him broke, and 2) the metaverse gets hijacked to deliver a deadly mind virus to everyone in it, both of which are just a little too close to home here.

Normally, this is where I would say this is what you get after two or three decades of emphasizing STEM education over the humanities, but it’s not just that. When you’re fourteen, you’re supposed to only engage on the surface aesthetic level. The problem is when those teenagers grow up and never think about why those things seemed cool. Not just about what the authors were trying to say, but a failure to consider that maybe it seemed so cool because it was a narrative accelerant, a shortcut to get the story to the next dramatic point.

Anyway, Humane announced their AI Pin.

And, look, it’s the TNG com-badge + the Enterprise computer. And that’s cool, I guess, but it totally fails to engage (pun intended) with the reason the com-badge seems so cool: it’s a storytelling device, a piece of narrative accelerant.

My initial reaction, given the number of former Apple employees at the company, is that this whole product is blatantly something that Tim Apple rejected, so they took their pitch deck and started their own damn company, you’ll be sorry, etc.

I don’t understand who this product is for. And it’s not that I don’t get it, it’s just that it seems to start from a premise I don’t buy. There’s a core worldview here that isn’t totally expressed, but that seems to extend from a position that people like to talk more than they like to look at things, and I disagree. Sure, there’s a privacy angle to needing to talk out loud to get things done, but I think that’s a sideshow. Like the Apple Cyber Goggles, it’s a new way to be alone. As far as I’m concerned, any device that you can’t use to look at a menu together, or show other people memes, or pictures of your kids is a non-starter. There’s a weird moral angle to the design, where Humane seems to think that all the things I just listed are things we shouldn’t be doing, that they’re here to rescue us from our terrible fate of being able to read articles saved for later while in the hospital waiting room. The marketing got right up to the line of saying that reading text messages from your kids on the go was going to give you hairy palms, and I don’t think that’s going to go over as well as they think. More than anything, it reminded me of those weird Reagan-era anti-drug campaigns that totally failed to engage with or notice why people were doing drugs. Just Say No to… sending pictures of the kids to my mom?

It also suffers from the “guessing when you can ask” fallacy. It has a camera, and can take pictures of things you ask it to, but doesn’t have a viewfinder? Instead of letting you take the picture, it tries to figure it out on its own? Again, the reason the images they look at in Star Trek are so nice to look at is that they were built by an entire professional art department, and not by a stack of if-statements running in the com-badge.

And speaking of that “AI” “agent”, we’re at a weird phase of the current AI grift carnival, where the people who are bought into the concept have rebuilt their personality around being a true believer, and are still so taken with the fact that “my com-badge talked to me!” that they ship a marketing video full of AI hallucinations & errors and don’t notice. This has been a constant thing since LLMs burst onto the scene last year; why do the people showing them off ask questions they don’t know the answers to, and then don’t fact-check? Because they’re AI True Believers, and getting Any Answer from the robot is more important than whether it’s true.

I don’t know if voice agents and “VUIs” are going to emerge as a significant new interaction paradigm or not, but I know a successful one won’t come from a company that builds their marketing around an incorrect series of AI answers they don’t bother to fact check. You can’t build a successful anything if you’re too blinded by what you want to build to see what you actually built.

I’d keep going, but Charlie Stross already made all these points better than I did, about why using science fiction as a source of ideas is a bad idea, and why tech bros keep doing it anyway: We're sorry we created the Torment Nexus

Did you ever wonder why the 21st century feels like we're living in a bad cyberpunk novel from the 1980s?

It's because these guys read those cyberpunk novels and mistook a dystopia for a road map. They're rich enough to bend reality to reflect their desires. But we're not futurists, we're entertainers! We like to spin yarns about the Torment Nexus because it's a cool setting for a noir detective story, not because we think Mark Zuckerberg or Andreesen Horowitz should actually pump several billion dollars into creating it.

It’s really good! You should go read it, I’ll meet you under the horizontal line:

And this is something of a topic shift, but in a stray zing Stross manages to nail why I can’t stand WIRED magazine:

American SF from the 1950s to the 1990s contains all the raw ingredients of what has been identified as the Californian ideology (evangelized through the de-facto house magazine, WIRED). It's rooted in uncritical technological boosterism and the desire to get rich quick. Libertarianism and its even more obnoxious sibling Objectivism provide a fig-leaf of philosophical legitimacy for cutting social programs and advocating the most ruthless variety of dog-eat-dog politics. Longtermism advocates overlooking the homeless person on the sidewalk in front of you in favour of maximizing good outcomes from charitable giving in the far future. And it gels neatly with the Extropian and Transhumanist agendas of colonizing space, achieving immortality, abolishing death, and bringing about the resurrection (without reference to god). These are all far more fun to contemplate than near-term environmental collapse and starving poor people. Finally, there's accelerationism: the right wing's version of Trotskyism, the idea that we need to bring on a cultural crisis as fast as possible in order to tear down the old and build a new post-apocalyptic future. (Tommaso Marinetti and Nick Land are separated by a century and a paradigm shift in the definition of technological progress they're obsessed with, but hold the existing world in a similar degree of contempt.)

And yeah, that’s what always turned me off from WIRED, the attitude that any technology was axiomatically a Good Thing, and any “short term” social disruption, injustice, climate disasters, or general inequality were uncouth to mention, because the future where the sorts of people who read WIRED were all going to become fabulously wealthy and go to space was so inevitable that they were absolved of any responsibility for the consequences of their creations. Anyone asking questions, or objecting to being laid off, or suggesting regulations, or bringing up social obligations, or even just asking for benefits as a gig worker, was just standing in the way of Progress! Progress towards the glorious future on the Martian colonies! Where they’ll get to leave “those people” behind.

While wearing “AI Pins”.

Gabriel L. Helman

“Deserve Better” how, exactly?

Humane, the secretive tech startup full of interesting ex-Apple people, has started pulling the curtain back on whatever it is they’ve been building.  The rumor mill has always swirled around them; they’re supposedly building some flavor of “AI-powered” wearable that’s intended as the next jump after smartphones.  Gruber at DaringFireball has a nice writeup on the latest reveals at https://daringfireball.net/2023/04/if_you_come_at_the_king.

And good luck to them!  The tech industry can always use more big swings instead of another VC-funded arbitrage/gig-economy middle-man app, and they’re certainly staffed with folks that would have a take on “here’s what I’d do next time.”

Gruber also links to this tweet from Chaudhri, Humane’s co-founder:  https://twitter.com/imranchaudhri/status/1624041258778763265.  To save you a click, Chaudhri retweets another tweet that has side-by-side pictures of the NBA game where LeBron James broke the scoring record and the 1998 game-winning shot by Michael Jordan.  The key difference being, of course, that in the newer shot everyone in the stands has their phone out taking a picture, and in the older shot there are no cameras of any kind.  And Chaudhri captions this with “we all deserve better.”

And this is just the strangest possible take.  There are plenty of critiques of both smartphones and the way society has reorganized around them, but “everyone always has a professional grade camera on them” is as close to an unambiguous net positive as has emerged from the post-iPhone world.

Deserve better, in what way, exactly?

If everyone was checking work email and missing the shot, that’d be one thing.  But we all deserve better than… democratizing pro-grade photography?  What?

As techno-cultural critiques go, “People shouldn’t take photos of places they go,” is somewhere between Grandpa Simpson yelling at clouds and just flatly declaring smart phones to be a moral evil, with a vague whiff of “leave the art of photography to your betters.”

Normally, this is the kind of shitposting on twitter you’d roll your eyes at and ignore, but this is the guy who founded a company to take a swing at smartphones, so his thoughts on how they fit into the world presumably heavily influence what they’re building?

And weirdly, all this has made me more interested in what they’re building?  Because any attempt to build “the thing that comes after the iPhone” would by definition need to start with a critique of what the iPhone and other smartphones do and do not do well.  A list of problems to solve, things to get right this time.  And never in a million years would it have occurred to me that “people like to take pictures of where they are” is a problem that needed solving.
