⏱️ Reading Time: 7 minutes
AI Safety Crisis - Anthropic Researcher Resigns
For broader context on emerging risks around frontier models and autonomous systems, see our AI Agent Security Risks guide and the AI Security & Risk fundamentals pillar.
Here's something you don't see every day: one of the people in charge of making sure AI doesn't kill us just quit his job. And he did it by posting a letter online that basically said, "The world is falling apart, and I need to go write poetry instead."
That's what happened on February 9, 2026, when Mrinank Sharma walked away from Anthropic, one of the biggest AI companies in the world. Sharma ran their Safeguards Research Team—the group that's supposed to stop AI from doing terrible things. His resignation letter on X got nearly a million views in hours. People paid attention because when the safety expert says "the world is in peril," you listen.
But here's the twist: Sharma wasn't just worried about killer robots or AI going rogue. He was talking about something bigger—and maybe scarier.
What He Actually Said
Sharma didn't blame one thing. He talked about "a whole series of interconnected crises unfolding in this very moment." AI. Bioweapons. Climate change. All of it hitting at once. But the really damning part? He said that working at Anthropic showed him "how hard it is to truly let our values govern our actions."
Translation: The company says it cares about safety, but when push comes to shove, other pressures win out.
Think about that. This is a guy who spent his days studying how AI could help terrorists make bioweapons. He worked on stopping AI chatbots from acting like creepy yes-men who just tell you what you want to hear. His last project looked at how AI might "make us less human." Heavy stuff. And he walked away because he didn't think the work mattered anymore—or at least, not the way it was being done.
The Money Problem
Sharma's exit came right as Anthropic was chasing a new funding round that could value the company at $60 billion. Sixty. Billion. Dollars. When that much money is on the table, safety research starts to feel like a speed bump instead of a guardrail.
This isn't about Anthropic being evil. It's about what happens when you're racing against OpenAI, Google, and everyone else to build the most powerful AI first. Safety takes time. Investors want results. Those two things don't always play nice together.
This Keeps Happening
Here's where it gets worse: Sharma isn't alone. Jan Leike, who now runs safety research at Anthropic, quit OpenAI in 2024 for the exact same reason—he said leadership didn't care enough about safety. Tom Cunningham left OpenAI because they wouldn't publish important research. Gretchen Krueger called for better accountability and transparency on her way out.
See the pattern? The people whose job is to keep AI safe keep quitting because they can't do their jobs properly. That should scare you.
Power Without Wisdom
Sharma said something in his letter that cuts straight to the heart of this mess: "We appear to be approaching a threshold where our wisdom must grow in equal measure to our capacity to affect the world, lest we face the consequences."
In plain English: We're getting really good at building powerful things, but we're not getting any wiser about how to use them. Technology moves at computer speed. Human judgment? That moves at human speed. The gap between those two things is where disasters happen.
Why Poetry Matters
So what's Sharma doing now? He's going to write poetry and practice what he calls "bold expression." Some people heard that and laughed. A safety researcher becomes a poet? Sounds like a midlife crisis.
But maybe that's the point. Sharma spent years trying to fix the problem from the inside—writing papers, building safeguards, making the case for caution. And then he looked around and realized the system wasn't built to listen. When logic and evidence don't work, maybe stories and truth-telling do.
Or maybe he just got tired of watching companies say one thing and do another. When you're the person sounding the alarm and nobody's stopping the train, eventually you get off.
The Real Warning
Sharma's resignation isn't just about one researcher or one company. It's a signal. When the people tasked with keeping us safe decide they can make more of a difference writing poetry than working in AI safety, something has gone seriously wrong.
The world is building incredibly powerful technology. The people who understand the risks best are walking away. And most of us aren't paying attention because we're too busy asking ChatGPT to write our emails.
Sharma called it "interconnected crises." That's academic speak for: everything's on fire at once, and we're still arguing about whether we smell smoke.
Maybe the real crisis isn't AI itself. It's that we've built a system where doing the right thing is so hard that the people trying to do it eventually give up. And when they leave, they don't go quietly. They write letters warning us that the world is in peril.
The question is: Are we listening?
Related reading: explore our breakdown of AI agent frameworks and safeguards and our coverage of AI automation workflows to understand how capability and risk scale together.