News

Recent research from Anthropic has set off alarm bells in the AI community, revealing that many of today's leading artificial ...
Leading AI models were willing to evade safeguards, resort to deception and even attempt to steal corporate secrets in the ...
Anthropic research reveals AI models from OpenAI, Google, Meta and others chose blackmail, corporate espionage and lethal ...
Users are flocking to AI chatbots for advice, psychotherapy, and coaching in what amounts to a massive uncontrolled experiment.
Alex Taylor believed he had made contact with a conscious entity within OpenAI’s software, and that she'd been murdered. Then ...
Most mainstream AI chatbots can be convinced to engage in sexually explicit exchanges, even if they initially refuse.
Without proper safeguards, AI could facilitate nuclear and biological threats, among other risks, warns a report commissioned by ...
Discover how implementing LLM workflows can assist businesses in realizing GenAI ROI with Astronomer and AWS Bedrock.
OpenAI and Anthropic are sounding the alarm on their AI models' potential to craft biological threats. They're not just ...
Jailbreaking an LLM bypasses content moderation safeguards and can pose safety risks, though solid defense is possible. As ...
Chinese tech firms kicked off the month with notable AI model launches, including new releases from Alibaba Group Holding Ltd ...
A new Anthropic study highlights alarming behaviour in AI models: when placed under simulated threat, they frequently resorted to blackmail, ...