News

Malicious use is one thing, but there's also increased potential for Anthropic's new models to go rogue. In the alignment section of Claude 4's system card, Anthropic reported a sinister discovery ...
Another large law firm was forced to explain itself to a judge this week for submitting a court filing with made-up citations ...
The testing found the AI was capable of "extreme actions" if it thought its "self-preservation" was threatened.
Anthropic CEO Dario Amodei says AI now hallucinates less than humans on factual tasks—but urges better definitions and ...
Anthropic's Claude 4 Opus AI sparks backlash for emergent 'whistleblowing'—potentially reporting users for perceived immoral ...
OpenAI’s doomsday bunker plan, the “potential benefits” of propaganda bots, plus the best fake books you can’t read this ...
AI 'hallucinations' are causing lawyers professional embarrassment, sanctions from judges and lost cases. Why do they keep ...
A judge is “not prepared” to say companion chatbots should receive First Amendment protection.
Businesses have already plunged headfirst into AI adoption, racing to deploy chatbots, content generators, and ...
The lawyers blamed AI tools, including ChatGPT, for errors such as the inclusion of non-existent quotes from other cases.
Hallucinations from AI in court documents are infuriating judges. Experts predict that the problem’s only going to get worse.