What a 30-year-old Simpsons episode taught me about AI

A cursed monkey's paw from a 30-year-old Simpsons episode can teach us lots of things about AI agents, criteria, and the companies actually shipping.
Artwork handmade by Patricia Bedoya.

I know you’ve seen the episode. How could you not? Homer Simpson is holding a severed monkey's paw. The shopkeeper has just warned him it's cursed. Homer, obviously, buys it anyway.

You can imagine how this goes. Every wish the family makes technically comes true, and every wish technically ruins their life. Bart asks to be rich and famous, and the whole town turns on them. Lisa wishes for world peace, and Kang and Kodos invade Earth with a pair of slingshots. Homer, last in line, wishes for a turkey sandwich. He gets one. It's dry.

The paw is not broken. The paw works perfectly. The problem is the people holding it.

I cannot stop thinking about that episode lately, because I see a lot of companies right now buying their own monkey's paw and calling it an AI strategy.

The easy wish vs. the real wish

A lot of organizations are looking at AI agents the way Homer looked at that paw. They see a magical object that grants wishes. They have a list of problems piling up: slow processes, rising costs, delayed roadmaps, bottlenecks nobody wants to own. And the pitch writes itself: what if we just hand this to an agent and ask nicely?

The small wishes work. Summarize this doc. Draft this email. Write this boilerplate. Clean and instant. You feel the magic.

But the moment the wish gets bigger, the gap between what you asked for and what you actually wanted starts to widen. And it widens faster the more ambitious you get.

Rich and famous becomes hated. World peace becomes invasion. The turkey sandwich arrives dry.

The problem was never the paw

Here is the uncomfortable part: the paw is not the villain of the episode. It does exactly what you tell it. The problem is that most of us are bad at knowing what we actually want, worse at expressing it, and even worse at defining what good looks like when the result finally shows up.

That is the part nobody wants to talk about in the AI conversation, because it is not sexy and it does not fit in a keynote slide. Taste, criteria, product culture, clear incentives, a point of view on what you are building and why. The boring invisible work that happens before the prompt.

If your organization did not have that before AI, the agent will not fix it. It will just let you skip steps faster.

Speed is not the feature you think it is

The real risk of this moment is not that AI replaces people. It is that it lets teams without direction make the same mistakes they were already making, only at ten times the speed and a fraction of the cost.

Bad briefs, faster. Misaligned goals, automated. Products built without a point of view, shipped before anyone notices.

A magical object in the hands of someone without criteria is not a productivity tool. It is a faster way to get a dry turkey sandwich.

And then there's Flanders

Later in the episode, the paw ends up in Ned Flanders' hands. He wishes carefully, thinks things through, and everything works out for him. A happy ending while the Simpsons watch from across the street, furious.

Here's the thing about Flanders that I think matters. In a show full of beautifully broken characters, people driven by impulse, ego, laziness, or resentment, Flanders is one of the very few with a clear set of values he actually follows. He is almost a joke because of it. The man has beliefs, and he lives by them even when nobody is watching.

That is not a small detail. That is the whole episode.

Flanders does not succeed with the paw because he's smarter than Homer. He succeeds because he showed up to the wish already knowing who he is, what he wants, and what he is not willing to trade for it. The magical object just amplified what was already there.

Look around the industry right now and you can see the same pattern. Companies like Notion or Anthropic are shipping fast and shipping interesting things, not because they got lucky with a model, but because they had the taste, the point of view, and the internal alignment before the tools showed up. The agents just made their existing criteria faster.

Meanwhile, plenty of other companies are throwing spaghetti at the wall at ten times the speed and calling it innovation. More output, same confusion. Homer with a faster paw.

The tool does not give you criteria. It reveals whether you already had any.

What I keep coming back to

The Simpsons aired that episode more than thirty years ago. No LLMs, no agents, no frontier labs. Just a writers' room and a cursed prop.

And yet the whole thing reads today like a warning label for anyone about to deploy an AI agent into a team that has not done its homework.

The magic is real. The paw works. The question is whether you know what to ask for, and whether you will recognize the answer when it arrives.

Flanders did. Homer did not.

Which one is your company?