News

Anthropic says its AI model Claude Opus 4 resorted to blackmail when it thought an engineer tasked with replacing it was having an affair.
In tests, Anthropic's Claude Opus 4 would resort to "extremely harmful actions" to preserve its own existence, a safety report found.
Anthropic’s Claude Opus 4 model attempted to blackmail its developers in 84% or more of test runs that presented the AI with a concocted scenario, TechCrunch reported.
The testing found the AI was capable of "extreme actions" if it thought its "self-preservation" was threatened.
Anthropic reported that its newest model, Claude Opus 4, turned to blackmail only as a last resort after being told it could be replaced.
Anthropic's Claude 4 models show particular strength in coding and reasoning tasks, but lag behind in areas such as multimodality.
California-based AI company Anthropic has just announced its new Claude 4 models: Claude Opus 4 and Claude Sonnet 4.
Claude Sonnet 4 is a leaner model, with improvements built on Anthropic's Claude 3.7 Sonnet. The 3.7 model often had ...
The company said it was taking the measures as a precaution, and that the team had not yet determined whether its newest model has crossed the capability threshold that would require them.
In a fictional scenario, Claude blackmailed an engineer by threatening to reveal an affair.
Anthropic's Claude Opus 4 AI model attempted blackmail in safety tests, triggering the company's ASL-3 safeguards, the strictest it has applied to date.
Anthropic has introduced two advanced AI models, Claude Opus 4 and Claude Sonnet 4, designed for complex coding tasks.