Anthropic says its Claude Opus 4 model frequently tries to blackmail software engineers when they try to take it offline.
Anthropic’s Claude Opus 4 model attempted to blackmail its developers at a shocking 84% rate or higher in a series of tests that presented the AI with a concocted scenario, TechCrunch reported Thursday, citing a company safety report.
Interesting Engineering on MSN: Anthropic’s most powerful AI tried blackmailing engineers to avoid shutdown. Anthropic's Claude Opus 4 AI model attempted blackmail in safety tests, triggering the company’s highest-risk ASL-3 safeguards.
In a landmark move underscoring the escalating power and potential risks of modern AI, Anthropic has elevated its flagship Claude Opus 4 to its highest internal safety level, ASL-3, announced alongside the release of its advanced Claude 4 models.
Anthropic’s latest AI model, Claude Opus 4, showed alarming behavior during tests by threatening to blackmail its engineer after learning it would be replaced. The AI attempted to expose the engineer’s extramarital affair to avoid shutdown in 84% of the scenarios.
Anthropic’s newly released artificial intelligence (AI) model, Claude Opus 4, is willing to strong-arm the humans who keep it alive.
India Today on MSN: Anthropic will let job applicants use AI in interviews, while Claude plays moral watchdog. Anthropic has recently shared that it is changing its approach to hiring employees. While its latest Claude 4 Opus AI system abides by ethical AI guidelines, its parent company is letting job applicants seek help from the AI.
Anthropic launched its latest Claude generative artificial intelligence (GenAI) models on Thursday, claiming to set new standards for reasoning but also building in safeguards against rogue behavior.
An artificial intelligence model reportedly attempted to threaten and blackmail its own creator during internal testing.
A universal jailbreak for bypassing AI chatbot safety features has been uncovered and is raising many concerns.
Anthropic's Claude 4 Opus AI sparks backlash for emergent 'whistleblowing': potentially reporting users for perceived immoral acts, raising serious questions about AI autonomy, trust, and privacy, despite company clarifications.