My guess is that the head of security at Anthropic left because they decided to take some guardrails off their AGI so it could improve itself faster, autonomously, to outcompete GPT Codex 5.3. There's no way to prove it, but the timing of this announcement, and of his departure, is eerie.

Anthropic (@AnthropicAI) on Twitter:
"When we released Claude Opus 4.5, we knew future models would be close to our AI Safety Level 4 threshold for autonomous AI R&D. We therefore committed to writing sabotage risk reports for future frontier models. Today we're delivering on that commitment for Claude Opus 4.6."