Ridicule as praxis, darling! Alex Hanna on busting “AI” hype and building the world we want

Between “The AI Con” book tour and marking three years of her “accidental” hit podcast, she still has time for Power Lunch. The director of research at DAIR — and ex-senior research scientist at Google — talks “AGI” silliness, the power of imagining possible futures and, of course, roller derby.

Alex Hanna, wearing a yellow sweater and blue jeans, kneels by a sign in a bookstore reading "queer as in free Palestine."
I emailed Power Lunchee VI, aka Alex Hanna, the following request: "Would you be able to share a few photo options of yourself you'd be comfortable running in the piece? Lunch-adjacent photos encouraged." Power Lunchee VI sent a few lunch photos, and this is not one of them. This is a bookstore.
🍴
Nom nom nom! Welcome to Power Lunch, a series of speculative lunches with creative minds navigating achievement and reinvention in our very weird era. Is a Power Lunch a case study? Knowledge exchange? Restaurant rec?! Nothing’s off the table. 

Alex Hanna, director of research at the Distributed AI Research (DAIR) Institute, would rather skate across the roller rink. But, instead, she’s here.

Do the math: roller derby > “AI” hype. 

One might counter that this is not technically a matter of mathematics, but the point holds in a colloquial sense: “AI” hype “is lesser than” most anything else you’d rather be doing. 

Indeed, there’s little pleasure to be derived from repeatedly countering the broligarch’s science fiction fantasies and eugenicist ideologies; the venture capitalist’s snake oil; and the real harms caused by thrusting large language models into every corner of the internet and public life. The pleasure derived from roller derby, by contrast, is great. 

The same contrasts could be drawn between “AI” hype and all manner of things. For instance, funding mental healthcare “is greater than” the deadly chatbots tech companies promote as “therapy.” Funding journalism and academic research, similarly, “is greater than” the synthetic media spill polluting the internet right now. Reading “is greater than” not reading. 

So it follows that, yes, Hanna — ESC KEY .CO’s sixth Power Lunchee — would rather be jamming and blocking with Bay Area Derby, a full contact, women’s flat track roller derby league located in Oakland, California. (The non-profit, skater-owned league is a member of the Women’s Flat Track Derby Association, the official governing body for modern roller derby.)

In roller derby, people who might not have that much in common work together to achieve collective aims. That’s one thing that makes it “greater than” the warped rink of “AI” absurdity, where working together to achieve common aims is explicitly not the point of the game. In fact, rink isn’t a very good analogy for that game. Empire is a more apt one, as Karen Hao puts it in her instant best-seller, “Empire of AI.” 

Hao is among many guests that Hanna has had on her podcast in the last three years. And in June, Hao shared the virtual stage with Hanna and Emily M. Bender for a Data & Society book talk where these three leading voices in critical conversations around “AI” drew a line between the concepts of the con and empire.

“I literally laughed out loud multiple times while reading [‘The AI Con’].”

—Karen Hao

“AI is not inevitable,” Hanna noted at the event. “AI — as it is constructed right now — is, I think, fundamentally anti-democratic.” She was there to promote “The AI Con,” the searing new book she’s co-authored with Bender. Together, these two books have entered the zeitgeist. 

“One of the features of empires is that they’re made to feel inevitable,” Hao remarked. “[But] throughout history every single empire has fallen […] as much as they seem strong, they’re very weak at their foundations because they are built off of extraction and exploitation.”

Equally important, Hao added: “I literally laughed out loud multiple times while reading [‘The AI Con’] in the cafe.”

And, indeed, if countering the broligarchy must be done, then leave it to Hanna to make it a little bit more fun. She employs what she and her co-author Bender describe as “ridicule as praxis” — as well as many, many needles. As they put it at the start of each podcast episode, they’re here to pop the hype balloons “with the sharpest needles we can find!”

Consequently, Hanna spends an incredible amount of her time working with sharp objects, metaphorically speaking. On Monday September 15, she will commemorate three years of wielding them on the anniversary episode of Mystery AI Hype Theatre 3000, the “accidental” podcast she began co-hosting in 2022 with Bender as a Twitch livestream. It’s become a bit of a support group for people concerned with the “AI” bullshit du jour. Their book builds on the critical conversations they’ve had across dozens of episodes. Together, they’ve raised Hanna’s profile as a sought-after speaker on the subject. As a result, her inbox and speaking schedule are madness. (Say, did Sam Altman do something silly this week? Some journalist somewhere wants Hanna’s comments.)

There are some questions she’s tired of answering at this point, she tells me when we remotely gather at her chosen lunch spot — “a very cursed place” in Napa Valley. If anyone has a read on both the state of the hype and the resistance to it, it’s Hanna. She has been zooming her way across the United States and around the world for book talks, lectures, a few irritating interviews with podcasts and broadcast journalists — and, yes, ESC KEY .CO, where I reviewed “The AI Con” during its U.K. launch week in May. 

She, of course, took time for Power Lunch. I had a lot of questions, starting with roller derby. But in the time since we’d corresponded last, I was also curious about what she’d observed about our media ecosystem’s coverage of Big Tech in this moment (“I'm not envious of journalists right now,” she tells me at the end of her sharp answer). Equally, I wanted to know if she had any tips for talking to people who think “AI” is overhyped but feel caught in the cognitive dissonance of not being “left behind” (the good news: “people are pretty receptive in fields that haven’t fully gotten lost in the sauce”).

Tip number one? “Start by asking what kind of problems is this technological solution supposed to solve, and why do we have that problem in the first place?” she tells me — all while we’re not eating a $425-per-person tasting menu that does not include any beverages, which feels wrong. 

There's only one rule to Power Lunch: we are granted the power to lunch anywhere in the world, but the Power Lunchee must pick only one place where we will virtually gather. This rule is now in effect.

🍴🍴🍴

JD: You made it! Where in the world are you right now?

AH: I'm in the San Francisco Bay Area, so I'm actually not too far from the fake place we're going.

Alex Hanna, smiling, sits outside at a restaurant's patio table with a friend, giving a peace sign.
This is the lunch image that Power Lunchee VI, Alex Hanna, right, submitted. Hanna is lunching here with Nathan Kim, research intern at DAIR. This photo was not taken at the French Laundry.

JD: The suspense, the reveal! I can’t wait to know: Where do you want us to go for lunch?

AH: We're going to have a casual lunch at the three-Michelin-star restaurant, the French Laundry.

JD, laughs: Never heard of it. 

AH, laughs: Sure. It is in Yountville, California, in the Napa Valley. 

JD: Let me pull up its website.

AH: It is a very cursed place. It starts at $425 per person. On an extra special day, it’s like $1,200. But yeah, it’s French. “French with Californian cuisine influences,” according to Wikipedia.

JD: Hearing that, I’m honestly happy this is a speculative lunch. Side note: There’s also a Gavin Newsom lockdown controversy, if I recall correctly?

AH: So Governor Gavin Newsom, who I think was best described once as a walking oil slick, went to the French Laundry in the middle of California lockdowns. And he got a bunch of shit over it, rightly so. He was like, everybody must stay home! And meanwhile, this man is having his very expensive dinners.

Screenshot of the Google Maps image results for the French Laundry, with one image showing the "Michelin 2024" sign with a "5" Post-it note placed over the 4 so it reads "Michelin 2025."
This Google Maps image shows the exact location where we did (not) go for Power Lunch. Apparently, this is how the Michelin signage system works.

JD: Which makes it the perfect place for us, to be honest.

AH: It feels like the right setting for this kind of conversation. And, if anything, we can cheapen up the place a little. 

JD: OK, I am always ready to be cheap. Looking at today’s menu, there are two: the Chef’s Tasting Menu and the Tasting of Vegetables, which seems, OK, great, $425 worth of social consciousness.

AH, reading the menu: On the Chef’s Tasting Menu, there’s the quote “Oysters and Pearls.” Then the Garden Summer Melon salad, which is not in quotes. And then the quote “Chip and Dip.”

JD: Not to interrupt your flow, but this is a lot of scare quotes for one menu. 

AH: It's really heavy on the scare quotes. I don’t know what it’s supposed to signify. It’s like quote “Bread and Butter,” which sounds like just bread and butter to me. 

JD: Also, the bottom of the menu reads “sense of urgency,” which feels like undue pressure?

AH: Maybe this is some kind of labor power thing, sort of a Taylorism signal to the waiters — move with a sense of urgency — so it has to be on the menu, which seems really draconian to me. But hey, I watched “The Bear” for two episodes and I'm like, nope, too close to home. I've worked in the restaurant industry. I don’t want to watch this. 

It’s like watching TV shows about the tech industry these days. I'm like, no, I don't need to watch this. This is stressful.

JD: I guess we’re just getting what the chef wants to give us. Because we have no choice. OK!

On the topic of stress, something that stresses me out that does not appear to stress you out is roller derby. You’re in a roller derby … team? Do we use that term?

AH: It's a league! Yeah, I play roller derby with Bay Area Derby, we’re one of the quote “grandmother leagues” of modern flat track roller derby, from back when there were only five or six leagues around the country. Now there are over a thousand leagues around the world. 

I've been playing roller derby for 12 years now. The actual playing is fun. It’s not stressful. What’s stressful is that a league is essentially running a non-profit by committee, and that’s pretty tiring. 

“It’s important to clarify what we mean by ‘AI’ because that term is not helpful.”

JD: What has roller derby taught you that no PhD, no amount of research could? 

AH: The unique lesson that roller derby teaches you is very practical: getting a lot of different types of people involved, with a lot of different life experiences, and under one roof with a unified objective of what you’re trying to do.

Often, the problem with organizing people in, let’s say, academia or the tech industry is that people think they’re very smart. That’s something an organizer at a higher ed union once told me. And many of them are very smart. But often they’re very smart in one way and very not smart in other ways. That’s no shade to anyone in particular. That’s also true of myself. 

The thing roller derby really teaches you is how to work with a lot of different people with different backgrounds. And to that end, it definitely keeps me grounded.

JD: Talking about lessons, I’m curious what you’ve been learning on your book tour. 

Let me say, there’s a kind of interview I find cringe, where the interviewer asks the interviewee for the solutions to, like, everything — give me a PhD dissertation in a sound bite. Our media ecosystem doesn’t really favor nuance. And I’ve observed a similar thing in some of the interviews you’ve done around the book’s launch, how some hosts just give you a list of counterarguments to the points you’ve already tackled extensively elsewhere, then ask you for the solution. 

Have you learned anything from the current media landscape having spent most of this year being the subject in so many channels, independent and legacy?

AH: For sure. There’s a desire in many different kinds of media to pose the question, “but what about all the good things about ‘AI?’” My response is usually, first off, that’s not our job. The book is called “The AI Con.” Then it’s important to clarify what we mean by “AI,” because that term is not helpful, as we establish in the first chapters.

Then the question will come up: “But isn’t it making us more productive?” And if by “AI” we mean synthetic text extruding machines — large language models — then the answer is no, it’s not. That’s the whole conceit, and yet there’s no robust data showing it’s actually the case. 

There are other externalities to continual usage that we’re seeing, whether it’s “AI” sycophancy and the kind of weird relationships people are forming with chatbots; the loss of critical thinking; or the de-skilling people are experiencing — not because “AI” is taking over their jobs, but because of the skills that they're losing due to cognitive offloading in areas where they should be maintaining their judgment. 

That question — what about all the good use cases — comes up a bit more with credulous journalists. Some have really gotten lost in the sauce. Others aren’t well equipped to deal with it. And in this ecosystem, if you weren't on the tech beat before, you kind of have to be on the tech beat now. You have to be informed to some degree about what's happening.  

For media companies, part of it is because the bosses have made deals with the “AI” companies; they’re in financial free fall and so they talk about this pivot to “AI.” But that’s not going to pan out well for them. 

Jason Koebler at 404 Media had this nice long piece about how the media’s pivot to “AI” is not real and not going to work. And, really, this pivot is just throwing shit at the wall and trying to see what will happen and if any of it will work, but if you know anything about the underlying tech and the history of Big Tech’s intrusion into journalism, you know it’s not going to work too well. 

I'm not envious of journalists right now.

“‘AGI’ doesn’t even have a concrete definition. It’s a fake thing. It is something that just makes no sense.”

JD: I mean, don’t be! In the book and podcast, you tackle a lot of the bullshit — “ridicule as praxis,” as you put it. Popping the hype with the sharpest needles you can find. But I wonder, are there any particular balloons you’ve popped so many times that you’re just done with?

AH: I want to answer this in two ways. 

First is the one I’m tired of talking about because the problem is so urgent. We’ve recently had Maggie Harrison Dupré from Futurism on the podcast to talk about the reporting she’s done on “AI” psychosis and chatbot psychosis, and her work on this subject is harrowing to read. I don’t want to talk about the people dying and committing suicide because it’s such a terrible thing that is continuing to happen. But it seems like more and more of it’s happening. And yet, companionship and mental health support are still being advertised as use cases. So, we’re going to keep talking about it because it’s so pertinent and necessary, but I’m tired because it’s harming people right now. 

Second is the one I’m tired of talking about because I think it’s silly and stupid, and that’s so-called “AGI,” artificial general intelligence. Because it still doesn’t even have a concrete definition. It’s a fake thing. It is something that just makes no sense. I’m sick of just saying it’s a nonsense topic, but it is in the news. All the time.

JD: We’re in this weird moment where everyone seems to know “AI” is overhyped but also fear being somehow “left behind.” How do you talk to people caught in that cognitive dissonance?

AH: I find people are receptive when you start by asking what kind of problems is this technological solution supposed to solve, and why do we have that problem in the first place? Is it a hole in the social fabric, in social services? Is it a skill or function that we don't have access to on the team? Should we do more hiring for that? Is there a place where human judgment is essential and needed here? Starting from that position is very helpful.

Even when someone is like, but what about all the cool things it can do? Then you can have the conversation about how, with synthetic media extruding machines, it is a parlor trick. Because it’s generating synthetic text, we have this idea that there's a mind behind the text. That’s not a new phenomenon. We have this ELIZA effect, referring to the program that Joseph Weizenbaum created in the 1960s. Today’s chatbots seem more realistic, but it’s effectively the same effect. 

People tell us they’ve given our book to their manager, who then reads it and is like, I had no idea. I completely agree. People are pretty receptive in fields that haven’t fully gotten lost in the sauce.

JD: I want to back up to early in the hype cycle for a moment. Because Mystery AI Hype Theatre 3000 started in 2022 as an “accidental podcast,” what you thought might be a one-off Twitch livestream. It took off, and for a lot of listeners, you and Bender are a bit of an institutional pairing at this point. But you two didn’t meet IRL until quite recently, which seems like a very relatable 2020s experience. Was that kind of a weird thing having known someone so well, then being like, OK, wow, here we are?!

AH: You’re completely correct. It’s a very 2020s type of experience: We’re talking about 60-plus podcast episodes, several papers and op-eds, and a whole book we’ve worked on together. She’s been my most prolific collaborator ever in terms of output. We complement each other very well. So it’s wild we’d done so much together before actually meeting in real life. I think it’s funny. 

“We can’t only critique the world as it is, as Ruha Benjamin has said. We also ‘have to build the world as it should be to make justice irresistible.’”

JD: When you’re not co-authoring and co-hosting, you’re the director of research at DAIR, where you have two stated aims: to “mitigate the harms caused by AI technology today, and create space to imagine otherwise, building towards the world we want tomorrow.” It seems like there are a growing number of critical spaces, but there are so few venues for the second aim. 

I attended DAIR’s Imagining Possible Futures workshop this spring, and I found it really refreshing. One memorable moment was a discussion about a piece you co-authored about imagining “an internet for our elders,” which was a really beautiful idea. 

What role does expanding our critical imagination play in “building towards the world we want”?

AH: I'm really thankful that it has become one of our pillars and I think that is a muscle that many of us have such a hard time exercising. But if we are to create the future we want, we can’t only critique the world as it is, as Ruha Benjamin has said. We also “have to build the world as it should be to make justice irresistible.”

We don't flex it enough, and it's often because we're in the shit right now. We're seeing the hells of late capitalism and the death throes of American imperialism and the speculative bubbles that are multiplying. “AI” surely is one, but maybe we haven't reached the nadir of the crypto bubble. And then we're in this Trump administration in the U.S., which of course is just kind of more of a symptom of everything rather than the source.

I know personally, it's hard — I feel like my imagination has often been atrophied. And that’s why it’s such an important exercise. I encourage myself to think and talk about this stuff more. I know I’m failing at that given, well, everything. 

JD: Of course, imagination requires honesty about where we are now. The flip side of imagining better futures is confronting present horrors. And right now we're watching “AI”-enabled genocide in Gaza. But so many in the so-called “AI ethics” space don’t want to talk about the fact that the Big Tech companies are complicit in this. You’re one of the few in this space who’s been explicit about Palestine solidarity. What changes when you name genocide as genocide in rooms full of people whose paychecks depend on not seeing it?

AH: That’s certainly something that goes along with the industry of “AI ethics.” So many of those people are working at corporations that are complicit. Any movement on this, at certain corporations, does get shut down completely or ignored. 

We’re seeing more and more people calling a genocide a genocide, but it's really ... super late. That’s been a criticism of “AI ethics” within the field for a while, and it’s as relevant today as it ever was.

JD: You conclude your book looking at how people can resist the hype. Is there anything you’ve spotted while on “The AI Con” promotional circuit that’s given you hope?

AH: The only way things change is through people power and concerted action. And I’ve been really encouraged by all the people who have spoken to us after our book talks, who’ve said how it’s helped them talk to their colleagues and their unions about “AI” and automation. That gives me some hope. The fact that it’s an accessible book to read. That’s probably why it’s been well received. It is polemic and, to some degree, polarizing. But I mean, that’s fine, because the book was written to antagonize a certain set of industry actors. I mean, I haven’t seen anybody hold the book up on a picket line or anything. Yet. I don’t think that’s the intent of the text. But, I would love to see it. 

🛼🛼🍴

While looking at the PDF menu, we zoomed in on dessert with, again, a “sense of urgency.”

JD: I usually end with “dessert is a good idea, right?” But since it’s a set menu, we don’t have a choice, and we’re already paying this much, let me underscore: bring us dessert with a sense of urgency, please!

AH: All with a sense of urgency! Hurry the fuck up! Yes, chef!
