On December 11, 2015, a group of top AI researchers in Silicon Valley published an open letter announcing the formation of a non-profit organization called OpenAI, committed to "advancing digital intelligence in a way that is most likely to benefit all of humanity." All research results would be open-sourced and shared. Profit was not the goal; safety was. The founders included Sam Altman, Elon Musk, Ilya Sutskever, and Greg Brockman, backed by an initial commitment of $1 billion.
Ten years later, in February 2026, that organization is completing the largest private financing round in business history: more than $100 billion, at a valuation approaching $850 billion.
How big is $100 billion? It exceeds the annual GDP of more than 140 of the world's countries, the entire yearly output of a mid-sized economy, and this is just a single round of financing for one company. Amazon is prepared to invest $50 billion, SoftBank $30 billion, Nvidia $30 billion, and Microsoft is also participating. All parties are expected to complete the allocation by the end of February.
This funding round is destined to go down in business history, but OpenAI is no longer a nonprofit and no longer open-sources its core models. "Open" remains in the name, but it left the company long ago… In this article, we'll trace how the company got here.
An open letter
Let's go back to 2015. The AI industry that year looked completely different from today. Google had acquired DeepMind for over $500 million the year before, raising concerns that core AI technology would end up monopolized by a few tech giants. Musk and Altman shared a common anxiety:
It is dangerous for humanity if the most powerful AI systems are controlled by only one company.
So they chose a non-profit structure. OpenAI would have no shareholders, would not pursue profit, and would not be held hostage by capital. Its sole obligation would be to humanity. All research results would be open source, for anyone to use and improve.
This choice seemed reasonable, even noble, at the time. But it contained a fatal assumption: that the cost of AI research was controllable.
In 2015, training a cutting-edge AI model cost on the order of several hundred thousand dollars. By the time GPT-2 was released in 2019, the cost had risen into the millions. In 2020, GPT-3's training cost was estimated at between $4.6 million and $12 million. In 2023, GPT-4's training cost exceeded $100 million.
In layman's terms: each generation of models costs roughly 3 to 10 times as much to train as the one before it. Non-profit organizations run on donations and sponsorships, but the cost curve of AI research was climbing far faster than any donor's willingness or ability to pay.
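As a rough sanity check, here is a minimal sketch of those generation-over-generation multipliers, using only the ballpark figures cited above; the exact dollar values are rounded assumptions, not audited costs.

```python
# Rough generation-over-generation training-cost multipliers, using the
# approximate public estimates cited in the text. All values are ballpark.
est_training_cost_usd = {
    "frontier model, 2015": 300_000,   # "several hundred thousand dollars"
    "GPT-2, 2019": 3_000_000,          # "millions of dollars" (assumed midpoint)
    "GPT-3, 2020": 8_000_000,          # midpoint of the $4.6M-$12M estimates
    "GPT-4, 2023": 100_000_000,        # "exceeded $100 million"
}

models = list(est_training_cost_usd)
for prev, curr in zip(models, models[1:]):
    ratio = est_training_cost_usd[curr] / est_training_cost_usd[prev]
    print(f"{prev} -> {curr}: ~{ratio:.0f}x")
```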
Musk sensed the problem as early as 2017. He proposed becoming CEO of OpenAI or merging OpenAI into Tesla. Altman and Brockman declined.
In 2018, Musk resigned from the board, citing the need to avoid a conflict of interest with Tesla's AI work. But the seeds of conflict had already been sown.
Six years later, in 2024, Musk sued OpenAI and Altman, accusing them of "betraying their nonprofit mission." OpenAI countersued, pointing out that Musk himself had supported moving to a for-profit structure as early as 2017. The legal battle is expected to go to trial in March 2026.
Ironically, the dispute itself illustrates the problem. Musk says Altman betrayed the ideals. Altman says Musk wanted to control the company from the beginning. Whichever version is true, the conclusion is the same: a non-profit organization cannot afford the costs of an AI arms race.
Computing power devours ideals
In March 2019, OpenAI made its most significant structural decision to date: it created a "capped-profit" for-profit subsidiary.
The structure worked like this: the nonprofit parent continued to exist, but beneath it sat a for-profit entity that could take outside investment and pay returns. Those returns were capped at 100 times the original investment; any profit beyond the cap would belong entirely to the nonprofit parent.
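Here is a minimal sketch of the capped-return arithmetic. The 100x cap is the figure from the text; the investment and return amounts are hypothetical, purely for illustration.

```python
def capped_payout(investment: float, gross_return: float, cap_multiple: float = 100.0):
    """Split a gross return between the investor and the nonprofit parent.

    The 100x cap_multiple is the figure described in the text; everything
    else here is a hypothetical illustration.
    """
    cap = investment * cap_multiple
    investor_share = min(gross_return, cap)
    nonprofit_share = max(gross_return - cap, 0.0)
    return investor_share, nonprofit_share

# Hypothetical example: a $1B investment whose stake eventually returns $150B.
investor, nonprofit = capped_payout(1e9, 150e9)
print(f"investor keeps ${investor / 1e9:.0f}B, nonprofit parent gets ${nonprofit / 1e9:.0f}B")
# -> investor keeps $100B, nonprofit parent gets $50B
```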
The design was meant to deliver the best of both worlds: attract capital while keeping the mission from being consumed by commercial interests. The nonprofit parent retained ultimate control, while the for-profit subsidiary was responsible for generating revenue. It seemed clever.
But once capital is in the door, it does not just sit quietly in the living room.
In July 2019, Microsoft became the first major investor, injecting $1 billion. By January 2023, its cumulative investment had reached $13 billion, entitling it to 49% of the profits of OpenAI's for-profit arm.
In layman's terms: A subsidiary of a nonprofit organization has nearly half of its profits going to a $3 trillion tech company.
Dario Amodei saw where this road led. As OpenAI's VP of Research, he had led the development of GPT-2 and GPT-3. But what he observed disturbed him: as Microsoft's influence grew, safety research kept getting squeezed in priority. When the biggest backer said, "Get the product out there quickly," the voices of safety researchers were pushed to the sidelines.
In January 2021, Amodei left OpenAI with seven core researchers to found Anthropic. That same year, OpenAI stopped open-sourcing its core models. The GPT-3 API remained available for a fee, but the model weights were not publicly released.
The word "Open" is no longer valid in a technical sense.
This is how the tyranny of computing power works: the more successful the product, the more users it attracts and the higher the inference bill. Training the next generation of models requires still more computing power and capital. And every new injection of capital dilutes the nonprofit mission a little further.
The founders of OpenAI designed an ingenious structure to protect their idealism. But what they didn't foresee was that the cost curve of AI would rise at such a steep angle that no governance structure could withstand it.
Five days and five years
On Friday, November 17, 2023, just after 1 p.m., four members of OpenAI’s board of directors voted to remove CEO Sam Altman from office.
The board's public explanation was a single sentence: Altman "was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities."
The deeper reasons gradually surfaced afterward. In the summer, a board member discovered that OpenAI's "startup fund" was not operating as planned. An investigation revealed that Altman personally held the fund, which constituted a serious conflict of interest under the non-profit governance structure.
In addition, two senior executives had provided the board with accounts describing a "toxic atmosphere" and a "lack of trust" in Altman. Earlier still, when ChatGPT launched in November 2022, some board members had learned of it only through Twitter.
But what happened over the next five days revealed more about what OpenAI had become than the removal itself did.
Within 72 hours:
- Microsoft CEO Satya Nadella publicly expressed his support for Altman.
- More than 700 OpenAI employees, nearly the entire company, signed an open letter threatening to resign en masse and join Microsoft.
- Microsoft extended an invitation to Altman, offering to establish a brand new AI research department for him.
- Investors pressured the board to withdraw its decision.
On November 22, Altman was reinstated. Board members Helen Toner and Tasha McCauley, who had voted to remove him, were forced out. A new board was formed, including Bret Taylor (former co-CEO of Salesforce) and Larry Summers (former U.S. Treasury Secretary).
In layman's terms: the nonprofit board made exactly the kind of decision its governance duties call for, questioning the CEO's candor. And that decision was overturned within five days by the combined weight of capital and employees.
This is a microcosm of OpenAI's identity crisis. On paper, the non-profit board is the highest governing body, with a fiduciary duty to the public mission. In practice, Microsoft's $13 billion and the loyalty of 700 employees are what actually decide things.
No matter how sophisticated the governance structure, when the survival of a "non-profit" depends on the attitude of a $3 trillion tech company, "non-profit" becomes nothing more than a word on a legal document.
The CEO problem was solved in five days. The structural problem took five years.
On October 28, 2025, OpenAI completed its final transformation. The non-profit parent company was restructured into the "OpenAI Foundation," and the for-profit entity was officially named OpenAI Group PBC. Microsoft holds 27%, the foundation 26%, and employees and other investors 47%.
Musk's lawsuit failed to stop the conversion; the judge denied his request for a preliminary injunction in March 2025.
From a "profit cap" in 2019 to a "public interest corporation" in 2025, OpenAI has completed its transformation from a non-profit to a for-profit organization in five years. Each step has a carefully designed legal framework to explain its rationale, and each step is "to raise funds needed for AI safety research."
But each step took "Open" further from its original meaning.
A $100 billion bill
Now back to the February 2026 funding round. The $100 billion is not growth capital. It is a survival bill.
OpenAI's annualized revenue is projected to reach $20 billion in 2025, more than tripling from roughly $6 billion the year before. ChatGPT's monthly active users have surpassed 300 million. By traditional software-company standards, this is one of the fastest revenue growth curves in history.
But OpenAI is not a traditional software company. Its cost structure is completely different from that of the software industry.
In 2025, OpenAI's spending on cloud computing alone exceeded $8.5 billion. Add the salaries of top AI researchers (often more than $1 million a year each), GPU purchases, and data center construction, and the company burned through roughly $17 billion in cash over the year. Even with $20 billion in annualized revenue, it remained deep in the red.
The company's own financial projections are even more alarming: a projected loss of $14 billion in 2026, with accumulated losses reaching $115 billion by 2029. Cash-flow break-even is not expected until the end of 2029, or 2030 at the earliest.
In layman's terms: OpenAI needs to burn through tens of billions of dollars in cash over the next three to four years before it has any chance of turning a profit. The $100 billion is the length of runway it has just bought.
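A back-of-the-envelope runway check against the projections cited above: only the 2026 loss ($14 billion) and the roughly $115 billion cumulative figure through 2029 come from the text; the year-by-year split in between is an assumption for illustration.

```python
# Back-of-the-envelope runway check against a $100B raise.
# Only the 2026 loss and the ~$115B cumulative-through-2029 total come from
# the text; the 2027-2029 split below is assumed purely for illustration.
raise_usd = 100e9

projected_losses = {
    2026: 14e9,
    2027: 25e9,  # assumed
    2028: 35e9,  # assumed
    2029: 41e9,  # assumed; the four years sum to ~$115B
}

remaining = raise_usd
for year, loss in projected_losses.items():
    remaining -= loss
    print(f"end of {year}: ${remaining / 1e9:.0f}B of the raise remaining")
# Under these assumptions the projected burn slightly exceeds the raise by
# 2029, which is why cash-flow break-even in 2029-2030 matters so much.
```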
The investor structure of this financing round is itself a mirror:
| Investor | Estimated amount | Relationship with OpenAI |
|---|---|---|
| Amazon | ~$50 billion | Cloud provider (AWS) |
| SoftBank | ~$30 billion | Existing investor (Vision Fund) |
| Nvidia | ~$30 billion | Largest GPU supplier |
| Microsoft | Follow-on investment | 27% shareholder + Azure cloud provider |
Amazon is one of OpenAI's cloud service providers. Nvidia is OpenAI's largest GPU supplier. Microsoft is both its largest shareholder and the provider of Azure cloud services. As part of this round, OpenAI will expand its use of Amazon's chips and cloud services.
In layman's terms: OpenAI's largest suppliers are also its largest investors. A significant portion of their investment will flow back into their own accounts in the form of computing fees.
This is not a conspiracy; it is the peculiar capital cycle of the AI industry. Nvidia sells GPUs to OpenAI, invests the profits back into OpenAI, and OpenAI uses the money it raises to buy more Nvidia GPUs. Every link is a legitimate business transaction, but together they form a self-reinforcing capital flywheel: the shovel maker is simultaneously funding every gold miner.
In a recent interview, Altman frankly admitted that he has no passion for running a publicly traded company. However, he also acknowledged that OpenAI's capital needs are so large that only the public market can meet them. The company plans to file for an IPO with the SEC in the second half of 2026, aiming to complete the IPO in 2027, with a potential valuation exceeding $1 trillion.
From a $1 billion donation pledge in 2015 to a $1 trillion IPO target in 2027, the valuation has increased 1,000 times in 12 years.
The echo of Open
The story of OpenAI has never been just about a company raising funds. It is a public experiment about whether idealism can survive in a capitalist world.
The assumption of 2015: AI is too important to be driven by profit motives.
The compromise of 2019: profit is acceptable, but the mission comes first and returns are capped.
The fact of 2023: the power of capital and employees can overturn a nonprofit board in five days.
The conclusion of 2025: converting into a for-profit public benefit corporation is the only way out.
The reality of 2026: $100 billion, paid for by its suppliers and shareholders.
In the official narrative, the public benefit corporation structure ensures the continuity of the mission. The foundation holds a 26% stake, has the right to appoint the board of directors, and has committed $25 billion to healthcare and AI resilience. The safety and security committee must include two independent directors, one of whom must be a safety expert.
But those five days in November 2023 proved one thing: when legal structures clash with the power of capital, legal structures do not win.
Sam Altman may not be a bad guy, Dario Amodei may not be a traitor, and Elon Musk may not be wrong. They are all struggling with the same impossible equation: how to use hundreds of billions of dollars to pursue a goal that "benefits all mankind" while ensuring that the money does not devour the goal itself.
The answer lies in the name OpenAI. Ten years ago, it encompassed both the method (Open) and the goal (AI). Ten years later, the goal remains, but the method is dead.