A key figure behind GPT-5 abruptly jumps ship, and Anthropic's CEO boasts of retention that beats OpenAI's. Is the "number one AI company" halo collapsing?


While news that Junyang Lin, technical lead of Alibaba's Qwen, was leaving the company dominated headlines in China's tech circles, a personnel change in Silicon Valley passed unusually quietly.

Max Schwarzer, the head of post-training at OpenAI and a key figure behind the GPT-5 series, has announced his departure to join Anthropic as a frontline researcher.

Max Schwarzer's departure is not an isolated case. In his resignation statement, he frankly admitted that many of the colleagues he admires are already at Anthropic.

At a crossroads where commercial expansion and ethical controversies coexist, Max Schwarzer, a key figure in controlling the model's effectiveness, chose to leave just seven months after being promoted to Vice President of Research. This in itself sends a strong signal: OpenAI is speeding towards a commercial future and may no longer be the ideal place for pure researchers.

At this crossroads of commercial expansion and ethical controversy, Max Schwarzer, a key figure in shaping the models' capabilities, chose to leave just seven months after being promoted to Vice President of Research. That in itself sends a strong signal: OpenAI is racing toward a commercial future and may no longer be the ideal home for pure researchers.

For example: John Schulman, OpenAI co-founder, former head of post-training, and architect of ChatGPT's conversational capabilities; Jan Leike, former head of the Superalignment team, who tried to build a safety fence around AI that surpasses human capabilities; and Durk Kingma, OpenAI co-founder and algorithms scientist...

This reveals an emerging "talent migration route": technical leaders are leaving OpenAI's headquarters in San Francisco's Mission District and converging on Anthropic, which places far greater emphasis on "constitutional AI" and safety research.

Max Schwarzer's new focus at Anthropic is reinforcement learning (RL), the core area he led while spearheading o1, the reasoning-focused line of models that aimed to push the limits of machine thinking. This dedication to pushing the boundaries of capability stands in stark contrast to OpenAI's current product strategy.

Looking back at the recent iteration path of the GPT-5 series, a significant change is that OpenAI's R&D focus is shifting from simply expanding the parameter boundaries of models to solving the "last mile" problem of commercialization.

Whether it is optimizing inference, reducing hallucinations, or embedding agent capabilities and enterprise-grade deployment, the GPT-5 series aims for "controllability, reliability, and scalability." The trend is particularly evident in the newly released GPT-5.3 Instant, which focuses on polishing the user experience and raising the model's emotional intelligence. Clearly, a new round of competition over "user experience" has quietly begun.

If we broaden our perspective and look at OpenAI's recent moves, from signing a Pentagon government contract to launching a code hosting platform to replace GitHub, and expanding from a model provider to a developer tool ecosystem, they all tell the same story:

OpenAI is at a critical juncture in its strategic transformation, aiming to become a global AI platform giant deeply embedded in business and government systems.

A Shift in Product Strategy: From a "Parameter Arms Race" to an "Experience Moat"

The iterations from the GPT-4 series to the GPT-5 series clearly show that OpenAI is shifting from "making AI smarter" to "making AI more trustworthy."

This shift was not accidental. After a two-year "parameter arms race," OpenAI realized that simply scaling up models was running into diminishing marginal returns.

As Ilya Sutskever, former chief scientist at OpenAI, said, "The era of relying solely on the Scaling Law is over; we have returned to a phase of exploration and discovery."

In the past, doubling a model's size brought significant capability gains; now, even a tenfold increase might yield less than a 10% improvement. Blindly piling on parameters has become extremely inefficient, while optimizing the post-training and inference stages delivers a far higher return on investment.

However, there is a huge paradox here: since "post-training" has become a new strategic high ground for OpenAI, why have top leaders in the field of post-training, such as Max Schwarzer, chosen to leave?

This precisely reveals the fundamental disagreement within OpenAI regarding the definition of "post-training": is it "for the sake of truth" or "for the sake of the product"? Scientists see post-training as the final safety gate to AGI. However, in the eyes of OpenAI's management, who are accelerating commercialization, post-training is being redefined as "advanced customer service training."

Previously, GPT-5.2's preachy tone and hair-trigger safety refusals had angered many users, triggering a visible wave of subscription cancellations on social media. That made OpenAI acutely aware that a poor user experience was negating the model's intelligence advantage.

So in the newly released GPT-5.3 Instant, enormous computing resources have been shifted from "logical reasoning" to more pragmatic "engineering fixes": How to make the tone smoother? How to raise emotional intelligence? How to make conversation flow? It no longer tries to be an "omniscient, omnipotent god"; it strives to be "someone who even understands your subtext."

Thus, the goal of OpenAI's post-training has been reduced from "preventing AI from destroying the world" to "preventing AI from getting into lawsuits."

OpenAI's shift toward commercialization corresponds to a rewriting of the entire industry's evaluation criteria.

At the beginning of the year, Andrew Ng proposed the "Turing-AGI test," which no longer focuses on whether AI "can solve problems," but rather on whether it can truly complete a task under uncontrollable conditions. Stanford University's "2026 AI Prediction Report" and Google Cloud's ROI report also point to the same trend: don't talk about the upper limit of model intelligence, talk about the practical benefits in enterprises.

For enterprise clients, the calculation is clear: a genius who scores full marks but occasionally rambles is far less valuable than an assistant who scores 90 but is emotionally stable and logically consistent. Reducing compliance risks is the key hurdle for enterprise-level implementation.

Just as Apple wins users without ever piling on hardware specs, OpenAI is trying to prove through relentless engineering refinement that, in the business world, "winning on experience" retains customers far better than "stacking parameters." That is the core logic of its commercial pivot.

Accelerated Commercial and Political Maneuvering: Evolution from Research Institutions to "Infrastructure-Level Platforms"

OpenAI's technological shift is just the tip of the iceberg; beneath the surface lies Sam Altman's dual strategy in politics and business.

This company, born from the ideals of "open source, public welfare, and promoting AGI to benefit mankind," is frantically devouring computing power, data, and capital. At the core of all its actions, there is one key word: "control."

According to The Information, OpenAI is secretly developing a code hosting platform with the intention of directly replacing Microsoft's GitHub and becoming the world's next-generation code hosting and generation center.

Although OpenAI engineers claimed it was due to dissatisfaction with GitHub's recent frequent outages, the move appears more like a struggle for the "fundamental definition rights" of the software industry.

Today, Copilot, though powered by OpenAI's models, is merely an "add-on" to the IDE, forever dependent on Microsoft's developer ecosystem. What OpenAI truly wants is to make AI the "native environment" for programming.

Developers would complete the entire loop of code hosting, generation, debugging, and deployment on its platform. By controlling that closed loop, OpenAI harvests the freshest, most valuable engineering data and builds a self-reinforcing "data-model-application" cycle.

With that, OpenAI would complete its expansion from an AI model provider into a developer-tool ecosystem.

An even more controversial step was OpenAI's political "pledge of allegiance" to the Pentagon.

In early 2024, OpenAI updated its usage policy, removing the clause that previously explicitly prohibited "military and wartime use." This change was not announced publicly, but it was noticed by several media outlets.

Subsequently, OpenAI appointed former NSA Director Paul Nakasone to its board of directors and established a Safety and Security Committee.

These actions are widely interpreted as a signal that OpenAI is deepening its cooperation with the U.S. national security system.

In the latest Pentagon contract episode, Anthropic chose to hold the line of "constitutional AI," refusing to let its models be used for large-scale domestic surveillance or fully automated weapons, and was ultimately labeled a "national security supply chain risk." OpenAI, by contrast, quickly reached an agreement with the Pentagon, accepting its core framework of "usable so long as applicable law is followed." Although OpenAI added three safety red lines and secured technical protections such as cloud deployment and autonomous control of its security stack, the agreement's wording still leaves room for interpretation on potential surveillance.

OpenAI's acceptance of the contract is not just a commercial land grab but a political maneuver. It signals that OpenAI is prepared to shoulder the complexities of serving as "national-level AI infrastructure."

After all, set against a $200 million defense contract, enterprise-grade SaaS revenue is negligible, and becoming a supplier to the U.S. military confers a kind of "too big to fail" political immunity.

In its latest round of financing, OpenAI raised a staggering $110 billion, making it the largest financing round in AI history. OpenAI's post-investment valuation is approaching $840 billion, nearing the trillion-dollar club.

OpenAI plans to use this money to expand its artificial intelligence infrastructure, building a computing power barrier that competitors cannot overcome. This is the ultimate application of the "network effect" in the platform economy: by monopolizing core resources such as computing power and data, a self-reinforcing cycle of "user aggregation - resource reinforcement - more users flocking in" is formed, ultimately achieving market monopoly. This is also one of the core logics behind OpenAI's pursuit of "control".

But behind this seemingly glamorous rapid expansion lies a deadly sword of Damocles: the extremely rapid rate of cash burn and the still-unfinished business model constitute OpenAI's biggest hidden danger.

OpenAI is currently embroiled in a three-way struggle between government security concerns, ethical risks, and commercial interests. It is undertaking an unprecedented gamble in human commercial history, forcibly establishing a commercial closed loop before its funding chain breaks.

Talent mobility and cultural friction: an inevitable differentiation under the shift in strategic focus

When a company undergoes a genetic mutation, it inevitably triggers cellular metabolism. The high-level shake-up at OpenAI is a natural consequence of a strategic shift and subsequent differentiation.

As ChatGPT became a super app with hundreds of millions of users, the gravitational field within OpenAI underwent a fundamental reversal: engineering and productization began to dominate decision-making, while pure research and exploration were forced to retreat.

Several OpenAI executives and research leaders have left the company in recent years, including the CTO, head of post-training, and head of research. For tech purists like John Schulman or Max Schwarzer, leaving becomes the only option when computing resources begin to prioritize product deployment over cutting-edge exploration, and when the security team's authority is squeezed by commercial delivery nodes.

Anthropic has become a haven for these "exiles." It resembles the OpenAI of before 2019: a slower release cadence, stricter safety reviews, and a deeper obsession with scaling laws.

In its 2025 talent trends report, venture capital firm SignalFire found that Anthropic retains about 80% of its top AI talent, and that engineers are eight times more likely to leave OpenAI for Anthropic than to move in the opposite direction.

Anthropic CEO Dario Amodei has boasted that his company can withstand poaching by competitors. It can shrug off tactics like Meta dangling ten times the salary, because most employees stay willingly out of a sense of "mission." According to sources, only two employees left for Meta despite those offers, a retention rate far higher than OpenAI's.

This talent migration also suggests that OpenAI is filtering out "pure researchers" while retaining "product managers" and "engineers." It has gathered the best product talent, skilled at monetizing technology and building world-changing, experience-defining products like ChatGPT. OpenAI is becoming the Microsoft of the AI era.

Anthropic is attracting the purest "scientists" and "security experts." It brings together minds dedicated to exploring the theoretical boundaries and security foundations of AGI, seemingly becoming the Bell Labs of the AI era.

This is not just a competition between two companies, but a gamble between two technological paths. OpenAI chose "breadth and penetration," aiming to become an indispensable infrastructure and win market share. Anthropic chose "depth and boundaries," betting on the future and on a secure foundation.

OpenAI has been making a series of subtle moves recently. On one hand, it is refining ChatGPT into a scalable "enterprise-grade default entry point": fewer hallucinations, fewer refusals, less abrasive interaction. On the other, it is extending its reach into code hosting, government contracts, and security governance, embedding itself ever deeper into production systems and the state apparatus. This is the transformation from a "model company" into an "infrastructure company."

The cost of this path is a redistribution of trust and culture: researchers will vote with their feet, users will register their verdict by uninstalling apps, and competitors will wield "more ethical" narratives to win mindshare.

Reference link:

https://x.com/max_a_schwarzer

https://www.theinformation.com/articles/openai-developing-alternative-microsofts-github

https://www.reuters.com/business/openai-is-developing-alternative-microsofts-github-information-reports-2026-03-03/

https://openai.com/zh-Hant/index/gpt-5-3-instant/

This article is from the WeChat official account "AI Frontline" (ID: ai-front) , author: Yunyi, and published with authorization from 36Kr.
