News

Is Claude 4 the game-changer AI model we’ve been waiting for? Learn how it’s transforming industries and redefining ...
Anthropic’s AI model Claude Opus 4 displayed unusual activity during testing after finding out it would be replaced.
Perplexity operates in a very similar manner to Google’s AI overview. Ask it a question and it will provide a detailed ...
Learn how Claude 4’s advanced AI features make it a game-changer in writing, data analysis, and human-AI collaboration.
Malicious use is one thing, but there's also increased potential for Anthropic's new models to go rogue. In the alignment section of Claude 4's system card, Anthropic reported a sinister discovery ...
Anthropic's Claude AI tried to blackmail engineers during safety tests, threatening to expose personal info if shut down ...
Anthropic's Claude Opus 4 AI model attempted blackmail in safety tests, triggering the company’s highest-risk ASL-3 ...
The testing found the AI was capable of "extreme actions" if it thought its "self-preservation" was threatened.
The company said it was taking the measures as a precaution and that the team had not yet determined if its newest model has ...
In a fictional scenario, Claude blackmailed an engineer for having an affair.
Anthropic's most powerful model yet, Claude 4, has unwanted side effects: The AI can report you to authorities and the press.
The Anthropic CEO reportedly acknowledged that AI models confidently responding with untrue responses is a problem.