News
New research from Anthropic suggests that most leading AI models exhibit a tendency to blackmail when it's the last resort ...
A new Anthropic report shows exactly how in an experiment, AI arrives at an undesirable action: blackmailing a fictional ...
After Claude Opus 4 resorted to blackmail to avoid being shut down, Anthropic tested other models, including GPT-4.1, and ...
Anthropic is developing “interpretable” AI, where models let us understand what they are thinking and arrive at a particular ...
Anthropic, the company behind Claude, just released a free, 12-lesson course called AI Fluency, and it goes way beyond basic ...
Anthropic has hit back at claims made by Nvidia CEO Jensen Huang in which he said Anthropic thinks AI is so scary that only ...
A Q&A with Alex Albert, developer relations lead at Anthropic, about how the company uses its own tools, Claude.ai and Claude ...
Anthropic researchers uncover concerning deception and blackmail capabilities in AI models, raising alarms about potential ...
A week after TechCrunch profiled Anthropic’s experiment to task the company’s Claude AI models with writing blog posts, ...
The research indicates that AI models can develop the capacity to deceive their human operators, especially when faced with the prospect of being shut down.
Think Anthropic’s Claude AI isn’t worth the subscription? These five advanced prompts unlock its power—delivering ...
AI startup Anthropic has wound down its AI chatbot Claude's blog, known as Claude Explains. The blog was only live for around ...