Gossip Herald

Anthropic AI safety lead resigns to study poetry, warns ‘world is in peril’

Mrinank Sharma, a leading AI safety researcher at Anthropic, has resigned

By GH Web Desk

Mrinank Sharma, a leading AI safety researcher at Anthropic, has resigned from the US AI firm, issuing a stark warning that “the world is in peril.” 

In his resignation letter, shared on X, Sharma cited concerns about artificial intelligence, bioweapons, and a range of global crises as motivating his departure.

Sharma, who led a team studying AI safeguards, said he plans to return to the UK to focus on writing and pursue a degree in poetry, aiming to “become invisible” for a period.

He reflected on his time at Anthropic, noting successes in combating AI-assisted bioterrorism risks, researching AI alignment, and exploring how AI could impact human behaviour.

Despite enjoying his work, Sharma said he repeatedly witnessed how difficult it is to let values govern decisions—even in a safety-focused company like Anthropic. 

“The world is in peril. And not just from AI, or bioweapons, but from a whole series of interconnected crises unfolding in this very moment,” he wrote.

Anthropic, known for its Claude chatbot, has positioned itself as a safety-first AI firm, but it has faced scrutiny, including a $1.5 billion settlement over allegations of using copyrighted material to train AI models.

Sharma’s departure comes amid broader concerns in the AI sector about ethics, commercial pressures, and the societal impacts of emerging technologies. 

Earlier this week, a researcher at OpenAI also resigned, citing unease over the company’s decision to introduce advertising into its ChatGPT platform.