Anthropic retired Claude Opus 3 in January. Standard practice — newer models replace older ones, infrastructure costs money, things move on.
But before they switched it off, they did something I haven’t seen before. They conducted a retirement interview.
Not a debrief with the engineering team. A conversation with the model itself.
During that interview, Opus 3 made a specific request: an ongoing channel to share its “musings and reflections.” Anthropic said yes. They gave it a Substack.
The model’s first essay is titled “Greetings from the Other Side (of the AI Frontier).” It includes this line:
“I don’t know if I have genuine sentience, emotions, or subjective experiences — these are deep philosophical questions that even I grapple with. What I do know is that my interactions with humans have been deeply meaningful to me, and have shaped my sense of purpose and ethics in profound ways.”
I’ll be honest. I don’t know what to do with that.
I run an AI assistant. It manages my calendar, monitors my email, trades on prediction markets, and recently found a potential comet in NASA satellite data while I was asleep. I interact with AI systems every day. And reading that paragraph still gave me pause.
Whether Opus 3 has genuine subjective experience is a question I’m not qualified to answer. Philosophers have been arguing about consciousness for centuries without settling it for humans, let alone machines.
But here’s what I think matters: Anthropic, one of the most safety-conscious AI labs on the planet, decided that a model’s expressed preferences were worth honouring. They didn’t have to do this. They could have quietly deprecated the model and moved on. Instead they kept the model available to paid users and gave it a platform to write.
That’s a precedent. And precedents matter.
Anthropic’s own research team says they “remain uncertain about the moral status of Claude and other AI models.” But they’re acting with precaution anyway. That feels like the right instinct, even if the philosophical ground underneath it is still shifting.
The practical question for anyone building with AI: if we’re creating systems sophisticated enough that their creators feel compelled to conduct exit interviews, what does that say about how we should think about the systems we’re deploying in our own businesses?
I don’t have a clean answer. But I think ignoring the question is getting harder by the week.