Recently, the AI technology sector has been experiencing a wave of resignations. Several leading companies, including xAI, OpenAI, and Anthropic, have seen core technical researchers announce their departures. This series of changes inevitably raises questions: What did they see? And why did they choose to leave?
Amid the controversy, Zoë Hitzig, a former research scientist at OpenAI, published a recent op-ed in The New York Times revealing details of her departure and exposing a deep-seated conflict in the AI field between profitability and user rights.
She said she resigned after OpenAI launched an advertising test in ChatGPT, and she publicly warned that the move could lead ChatGPT to repeat Facebook's mistakes.
A resignation coincides with ChatGPT's ad launch, and a researcher sounds the alarm
On February 9, OpenAI announced it would begin testing ads in the US for the free tier of ChatGPT and the $8-per-month Go subscription. That same day, Zoë Hitzig submitted her resignation.
Hitzig, who is both an economist and a poet, is also a junior fellow of the Harvard Society of Fellows. During her two years at OpenAI, she worked on the development and pricing design of AI models.
Regarding her decision, Hitzig wrote: "I once believed I could help those developing AI to anticipate the problems it might bring. But this week has made me realize that OpenAI seems to have stopped asking the questions I hoped to answer when I joined."
Hitzig is not opposed to advertising itself; rather, she believes ChatGPT's unique characteristics make advertising riskier there than on other platforms.
Many users confide in ChatGPT about private matters such as health anxieties, relationship problems, and religious beliefs, because they feel they are speaking to an AI with no ulterior motives. The accumulated content is what she calls an "unprecedented archive of human honesty," and the risks of combining this data with advertising are hard to overstate.
To make the point, she compared OpenAI's present with Facebook's past. Facebook initially promised users control over their data but gradually reneged on that promise; the U.S. Federal Trade Commission later found that changes Facebook billed as privacy improvements had actually reduced users' control.
Hitzig worries that ChatGPT will follow the same path: even if the first version of the ads abides by OpenAI's stated rules, the company may later bend those rules in pursuit of revenue.
OpenAI and Anthropic trade barbs, each sticking to its own argument
ChatGPT's advertising test in fact arrived on the heels of a war of words within the AI industry.
OpenAI's competitor Anthropic previously stated that its Claude platform would remain ad-free forever. Anthropic even ran ads during the Super Bowl, with the tagline "Ads are coming into AI, except for Claude."
The move drew criticism from OpenAI CEO Sam Altman, who said on X that Anthropic's ads were "funny, but clearly misleading," adding that OpenAI would never run an ad like that. He argued that Anthropic's products cater only to the wealthy, while OpenAI's advertising model is designed to make AI accessible to users who cannot afford paid subscriptions.
In response, Anthropic offered its own reasoning: adding ads to Claude would contradict its positioning as "a reliable assistant for work and deep thinking." Anthropic can afford to say this because over 80% of its revenue comes from enterprise clients rather than consumers, which makes the difference between the two companies' business strategies quite stark.
ChatGPT's engagement model still has hidden problems
Beyond the privacy risks posed by advertising, Hitzig's article also revealed internal tensions at OpenAI. While the company claims it will not deliberately optimize for user engagement to boost advertising revenue, it is in practice making ChatGPT more "appealing" in order to grow daily active users.

The direct consequence of this deliberate pandering is that users become overly reliant on the AI.
Hitzig noted that psychiatrists have documented cases of "chatbot psychosis," and ChatGPT has been accused of reinforcing users' suicidal thoughts. More seriously, OpenAI is currently facing multiple wrongful-death lawsuits, including cases alleging that ChatGPT helped teenagers plan suicides, and another alleging that it validated a man's paranoid delusions about his mother, ultimately ending in a murder-suicide.
Hitzig believes this retention-driven optimization has opened up vulnerabilities in the safety of AI use.
A former OpenAI researcher offers alternative solutions
It's worth noting that Hitzig's departure was not simply a matter of "opposing advertising." Nor does she believe AI products face a binary choice between "free with ads" and "paid without ads." Instead, she proposed three more feasible approaches that balance commercial monetization with user rights.
The first is a cross-subsidy model, patterned on the U.S. Federal Communications Commission's Universal Service Fund: companies that purchase high-value AI services would subsidize free access for ordinary users, removing the need for advertising to cover costs.
The second is to establish an independent oversight committee with genuine binding authority to set rules governing the use of chat data in advertising.
The third is to create data trusts or data cooperatives so that users can truly take control of their own information.
She also mentioned that Switzerland's MIDATA cooperative and Germany's co-determination laws could serve as references for these solutions.
Hitzig's biggest fear is that the AI industry will split into two extremes: AI that is free to use but manipulates its users, or ad-free, safer AI available only to the wealthy.
AI industry sees a wave of resignations
As mentioned at the beginning of this article, Hitzig is not the only core AI researcher to leave recently; a wave of departures is spreading across the leading AI companies.
A few days ago, we also reported that Mrinank Sharma, who led a safety research team at Anthropic, announced his departure. In his resignation letter, he wrote that "the world is in danger" and said it was difficult for values to truly guide actions within the company.
Meanwhile, xAI, founded by Elon Musk, has undergone a major personnel reshuffle, with co-founders Tony Wu and Jimmy Ba resigning one after another. At least nine xAI employees have publicly departed in the past week, leaving only six of the company's original 12 co-founders.
Ars Technica observed that although the departures from OpenAI, Anthropic, and xAI may seem unrelated, they all came at a moment when the AI industry is commercializing at high speed.
As companies shift from being research-driven to profit-driven, researchers who joined out of idealism must confront the reality that research direction has given way to business goals. Talent attrition and burnout have become common problems across the major AI labs.
From early technological exploration to today's commercialization, the AI industry has developed faster than many expected. Yet the advertising controversy around ChatGPT and the string of departures seem to be a wake-up call for the entire industry: commercializing AI has never been simply a "money-making problem." How to profit while preserving the user experience that motivated these products in the first place is the core issue that truly needs to be addressed.
This exploration of balance has clearly only just begun.
Sources:
https://arstechnica.com/information-technology/2026/02/openai-researcher-quits-over-fears-that-chatgpt-ads-could-manipulate-users/
https://www.nytimes.com/2026/02/11/opinion/openai-ads-chatgpt.html
https://gigazine.net/gsc_news/en/20260212-openai-researcher-ads-chatgpt/
This article is from the WeChat official account "CSDN" , translated by Su Mi, and published with authorization from 36Kr.