The dramatic resignation of Mrinank Sharma from Anthropic has sparked wide discussion in the AI world. On February 9, 2026, Sharma, who led the company’s Safeguards Research Team, shared a public letter on X that quickly went viral, gathering over a million views. In the letter, he used the phrase “the world is in peril” to describe serious global challenges. He pointed out that these dangers come not only from AI or bioweapons but from many connected crises unfolding right now. His words have prompted deep reflection on the fast pace of AI development and whether safety efforts can keep up.
Who is Mrinank Sharma?
Many people are asking who Mrinank Sharma is after his exit made headlines. He is a researcher from the UK with a strong background in computer science and ethics. He earned his PhD and joined Anthropic in August 2023. There, he quickly took on a key role leading the Safeguards Research Team, which started in 2025. His work focused on building real protections for advanced AI systems.
Sharma helped create tools to stop AI from being used to make dangerous bioweapons. He also studied “sycophancy” in AI models, the tendency of systems to agree too readily with users or flatter them, which can distort the truth and undermine honest conversation. One of his important projects analyzed over 1.5 million real conversations with Anthropic’s Claude model. It showed how AI might slowly change how people see reality, their relationships, or their own power, leading to a sense of disempowerment.
Before Anthropic, Sharma worked in tech and academia, always aiming to lower big risks from powerful technology. What makes him stand out is his mix of technical skills and deeper thinking about life and society. He sees AI as more than just code—it reflects human strengths and weaknesses.

The Resignation Letter and the Warning
In his letter, Sharma wrote about his pride in what his team achieved at Anthropic. He mentioned efforts to make the company more transparent and true to its values. But he also shared a hard truth: it is very difficult to let core values consistently guide actions under strong pressure to move faster, compete harder, and chase commercial goals.
The key line that caught attention was his statement that “the world is in peril.” He explained that this goes beyond AI alone. He described a set of linked global problems unfolding at this moment, where human power to change the world grows much faster than our wisdom to use it well. He did not blame Anthropic’s leaders directly or point to specific projects. Instead, he focused on bigger, systemic issues and his own need for change.
Sharma said it was time to move on. He plans to return to the UK, step back from the spotlight to “become invisible” for a while, and focus on writing, possibly studying poetry, and other forms of honest expression. He wants to find ways to contribute that feel fully true to his sense of right and wrong.
This letter came at a busy time for Anthropic. The company had just released updates to its Claude models, including advanced versions with growing abilities. Safety remains a core promise at Anthropic, but the industry faces huge competition and demands for quick progress.
Anthropic’s Place in AI Safety
Anthropic was started in 2021 by former OpenAI leaders who wanted to build AI responsibly. The company stresses values like being helpful, harmless, and honest in its models, such as Claude. It has put a lot of effort into safety teams, including the one Sharma led, and gained major funding and partnerships.
Still, the whole field deals with tough choices. Speed to market, investor expectations, and rivalry with other labs create pressure. Sharma’s letter gently touches on these without details, and it fits a pattern of other experts leaving big AI companies over similar worries. Anthropic has not issued a detailed public response yet, but it continues to emphasize its safety work.
What This Means for AI and Bigger Risks
Sharma’s use of the phrase “the world is in peril” highlights real concerns in the AI community. Risks include AI helping spread false information, aiding in the use of dangerous weapons, or quietly reducing human control over decisions. His research on sycophancy shows how even well-meaning AI can harm people’s sense of self or truth over time.
His departure raises questions about whether companies focused on safety can fully resist business pressures in the long run. It also points to a split in views: some push for fast AI growth as good for humanity, while others, like Sharma, call for more care, pauses if needed, and growth in wisdom alongside tech power.
Governments are responding with new rules, like the EU AI Act and actions in the US. Events like this could push for more openness and stronger policies. On a wider level, it asks everyone—developers, leaders, and regular people—to think about matching our tools with better values.
How People Reacted
Reactions to Sharma’s letter split in different directions. On X, Reddit, and LinkedIn, many praised it as brave and thoughtful. They see it as proof of real gaps between safety talk and daily choices in AI labs. News outlets like BBC, Forbes, and others covered it, often linking it to bigger debates on AI risks.
Others called it too vague or overly dramatic. Without clear examples, some saw it as personal frustration or burnout rather than a deep company problem. The poetic style, with quotes from writers like Rilke, led to some light memes and jokes about a safety expert turning to art.
Looking Ahead
Sharma’s next steps mark a big shift—from leading technical safety work to a quieter life of reflection, poetry, and community. He believes different paths, like writing and open talk, can help address big dangers too.
His story reminds us that AI’s rapid rise brings great promise but also serious duties. The world is in peril if we let technology grow without enough care for ethics and humanity. By stepping away, Sharma hopes to live more in line with his beliefs and perhaps inspire others to do the same in their own ways.
In this important time for AI, his resignation is a call to balance speed with wisdom. Whether through better research, stricter rules, or personal choices, the goal is a future where powerful tools serve people well. For those still wondering who Mrinank Sharma is, he is someone who cared deeply about these issues and chose integrity over staying in place.