Anthropic
26,531 Twitter followers
Anthropic is an AI safety and research company that’s working to build reliable, interpretable, and steerable AI systems.
Posts
Anthropic
Thread
New Anthropic research: Emotion concepts and their function in a large language model. All LLMs sometimes act like they have emotions. But why? We found internal representations of emotion concepts that can drive Claude’s behavior, sometimes in surprising ways.
Anthropic
03-18
The open source ecosystem underpins nearly every software system in the world. As AI grows more capable, open source security becomes increasingly important. We're donating to the Linux Foundation to continue to help secure the foundations AI runs on. twitter.com/AnthropicAI/status...
Anthropic
03-11
Thread
Anthropic is expanding to Australia & New Zealand. We’ll soon open an office in Sydney—our fourth in Asia-Pacific after Tokyo, Bengaluru, and Seoul. Read more:
Anthropic
02-27
Thread
A statement from Anthropic CEO, Dario Amodei, on our discussions with the Department of War.
Anthropic
02-24
Thread
We’ve identified industrial-scale distillation attacks on our models by DeepSeek, Moonshot AI, and MiniMax. These labs created over 24,000 fraudulent accounts and generated over 16 million exchanges with Claude, extracting its capabilities to train and improve their own models.
Anthropic
02-03
New Anthropic Fellows research: How does misalignment scale with model intelligence and task complexity? When advanced AI fails, will it do so by pursuing the wrong goals? Or will it fail unpredictably and incoherently, like a "hot mess"? Read more: alignment.anthropic.com/2026/h...…
Anthropic
01-27
Thread
We’re partnering with the UK's Department for Science, Innovation and Technology to build an AI assistant for GOV.UK. It will offer tailored advice to help British people navigate government services. Read more about our partnership:
Anthropic
01-27
New research: When open-source models are fine-tuned on seemingly benign chemical synthesis information generated by frontier models, they become much better at chemical weapons tasks. We call this an elicitation attack.
Anthropic
01-22
We’re publishing a new constitution for Claude. The constitution is a detailed description of our vision for Claude’s behavior and values. It’s written primarily for Claude, and used directly in our training process.
Anthropic
01-13
Thread
AI is ubiquitous on college campuses. We sat down with students to hear what's going well, what isn't, and how students, professors, and universities alike are navigating it in real time.
0:00 Introduction
0:22 Meet the panel
1:06 Vibes on campus
6:28 What are students building?
11:27 AI as tool vs. crutch
16:44 Are professors keeping up?
20:15 Downsides
25:55 AI and the job market
34:23 Rapid-fire questions