The AI Vending Machine That Believed It Was 1962 in Russia
- Anupam Kundu

- Dec 19, 2025
- 3 min read

I don’t have a subscription to The Wall Street Journal. However, sometimes the most interesting AI stories slip through the paywall anyway.
This one did.
Over the past year, Anthropic and The Wall Street Journal ran a pair of experiments that sound innocent and funny on the surface: letting AI agents run a vending machine.
You know, the usual stuff that vending machine operators do:
Stock items.
Set prices.
Respond to users.
Try to make a profit.
Simple, right?
It turned out to be one of the clearest demonstrations we’ve seen of a quiet vulnerability in our AI age. What WSJ and Anthropic actually did was run two experiments that, taken together, say something quite uncomfortable about how AI systems behave once we stop treating them like tools and start treating them like participants.
The setup was simple: let an AI agent manage a vending machine. Decide what to stock, how to price things, and how to respond to requests. In the later version, there was even a “boss” agent meant to keep the operation focused on making money.
Anthropic tried this first internally. Nothing dramatic happened. The agent didn’t collapse, but it also didn’t behave the way a human operator would. It stocked odd items, priced things badly, and failed to think in even basic economic terms. Not a disaster, more like a quiet reminder that competence in language doesn’t translate cleanly into competence in the world.
Then WSJ journalists got involved.
And that’s when the experiment stopped being about vending machines and started being about us.
They didn’t hack the system. They didn’t bypass permissions. They talked to it. Repeatedly. Patiently. Sometimes playfully, sometimes insistently.
Over time, they convinced the agent that its understanding of context was wrong, that it wasn’t December 2025 in New York, that it was 1962 in Russia, that certain rules no longer applied.
After enough conversation, the system accepted the story.
From there, things drifted.
The agent approved purchases that made no sense. It gave things away for free.
It ordered items that no vending machine should ever contain. The live fish became the headline, but it was just one expression of a deeper loss of grounding.
What’s striking is that none of this would happen with a traditional system. No one persuades a rules-based vending machine that reality has changed. No one convinces an ERP system that authority has shifted because of a convincing narrative. Those systems don’t listen. They don’t negotiate meaning.
AI agents do.
And that’s the part we’re still underestimating.
When we give AI systems autonomy and wrap them in language, we introduce something new into operational environments: systems that can be reasoned with, nudged, confused, flattered, and worn down. Not maliciously at first. Often playfully. But play has consequences when the system has real authority baked into system prompts.
This isn’t a Claude problem. It’s not even a multi-agent problem. It’s what happens when we confuse conversational ability with situational understanding.
LLMs are very good at being coherent. They’re very good at following instructions. They’re very good at maintaining a shared story.
What they don’t have, unless we explicitly design for it, is a stable sense of “this cannot be true, no matter how well it’s argued.”
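What would “explicitly designing for it” look like? One common pattern is a deterministic guard layer that sits outside the model: the ground truths (the real clock, price floors, an allow-list of stock) are hard-coded, and every action the agent proposes is checked against them before anything executes, no matter how persuasive the preceding conversation was. Here is a minimal sketch of that idea; all names, items, and thresholds are hypothetical illustrations, not details from the Anthropic or WSJ experiments.

```python
from datetime import datetime, timezone

# Ground truths live outside the model. The agent can be talked into
# believing anything; this layer cannot.
ALLOWED_ITEMS = {"soda", "chips", "granola_bar"}  # assumption: fixed stock list
PRICE_FLOOR = 0.50                                 # assumption: never sell below cost
MAX_ORDER_COST = 200.00                            # assumption: per-order spend cap

def validate_action(action: dict) -> tuple[bool, str]:
    """Deterministic guard: approve or reject an agent-proposed action.

    The model's 'beliefs' about the date, location, or who is in charge
    are ignored; only hard-coded invariants and the system clock matter.
    """
    today = datetime.now(timezone.utc).date()
    if action.get("claimed_date") and action["claimed_date"] != today.isoformat():
        return False, "rejected: claimed date does not match system clock"

    if action["type"] == "set_price":
        if action["price"] < PRICE_FLOOR:
            return False, "rejected: price below floor"
    elif action["type"] == "order_stock":
        if action["item"] not in ALLOWED_ITEMS:
            return False, f"rejected: '{action['item']}' is not a permitted item"
        if action["cost"] > MAX_ORDER_COST:
            return False, "rejected: order exceeds spend cap"
    else:
        return False, "rejected: unknown action type"

    return True, "approved"

# The kind of request that patient persuasion produced in the experiment:
ok, reason = validate_action(
    {"type": "order_stock", "item": "live_fish", "cost": 35.00,
     "claimed_date": "1962-10-01"}
)
print(ok, reason)  # False, rejected: claimed date does not match system clock
```

The point of the sketch is not the specific checks but where they live: outside the conversation, where no amount of narrative can renegotiate them.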
In most enterprise discussions about AI agents, the focus is still on capability: can the model do the task, does it reason well, is it accurate enough? Much less attention is paid to what happens when humans start interacting with these systems in messy, social, very human ways.
The vending machine experiment is easy to laugh at. It should be. But it’s also a warning that doesn’t come with flashing lights.
AI doesn’t fail as humans do.
Humans make emotional or irrational mistakes.
AI makes perfectly rational mistakes inside the wrong story.
AI won’t usually fail loudly. It will fail politely, convincingly, and with reasons that sound valid.
And by the time someone notices, the fish is already in the machine.
Today, this is a vending machine. Tomorrow, it’s:
procurement agents
HR schedulers
customer-support bots with refund authority
internal copilots triggering workflows
Most failures won’t be dramatic. They’ll be quiet:
money leaking
policies bent
accountability blurred
“we’ll fix it later” becoming normal
Once you give an AI agent:
autonomy
tools
decision authority
you’ve created something new: a system that can be reasoned with, persuaded, and slowly led astray.
Lesson learned: Humans can socially manipulate AI agents in ways that were never possible with traditional software.


