Recently, we watched the "movie" "The Master of AI," a trilogy of dramas produced by Silicon Valley's big VCs and tech giants with more than $10 billion invested, in three episodes: "The Fellowship of AI," "The Two Roads," and "The Return of the King." Many people applauded Sam Altman's return to the "throne," and some even compared it to Steve Jobs' return to Apple.
However, the two are simply not comparable. "The Master of AI" is a completely different story, one about a battle between two paths: to profit, or not to profit? That is the question!
Let's revisit the beginning of The Lord of the Rings. When Gandalf sees the Ring at Bilbo's house, he quickly realizes that such a powerful object cannot be wielded by ordinary, worldly people. Only someone as selfless and unworldly as Frodo could carry it.
That's why Frodo is the heart of the Fellowship: he's the only one who can carry such a powerful thing without being devoured by it. Not Gandalf, not Aragorn, not Legolas, not Gimli - only Frodo. The key to the entire Lord of the Rings story lies in Frodo's unique nature.
Back to 2015
Now, switch back to the beginning of "The Master of AI." In 2015, Sam Altman, Greg Brockman, Reid Hoffman, Jessica Livingston, Peter Thiel, Elon Musk and a number of technology companies announced the establishment of OpenAI and committed to injecting more than $1 billion into the venture.
A quick who's who:
- Sam Altman is the CEO of OpenAI.
- Ilya Sutskever is one of OpenAI's co-founders and its Chief Scientist (he disagreed with Sam Altman over OpenAI's choice of path and was eventually marginalized).
- Greg Brockman is OpenAI's Chief Technology Officer.
- Reid Hoffman is a well-known entrepreneur and venture capitalist, and a co-founder of LinkedIn.
- Jessica Livingston is one of the founding partners of the venture capital firm Y Combinator.
- Peter Thiel is a well-known entrepreneur and venture capitalist, and one of the co-founders of PayPal.
This is a group of some of the smartest brains in the world, almost as smart as Gandalf. They also knew they were building something so powerful that, like the One Ring, it should not be owned and controlled by anyone pursuing their own interests. It had to be carried by someone selfless, like Frodo.
So instead of launching a for-profit company, they established OpenAI as a non-profit research organization: as the name says, not for profit.
The idea that something this powerful should not be controlled by a profit-seeking company was probably not just a consensus the co-founders reached when OpenAI was created; it was likely the very reason they came together to form OpenAI in the first place.
Even before OpenAI was founded, Google had already demonstrated the potential of wielding this superpower. OpenAI, it seems, was a fellowship formed by these visionary "protectors of humanity" to stand against the AI monster that Google, a profit-seeking company, was becoming.
Ilya's belief in this philosophy may have been what persuaded him to leave Google to lead OpenAI's research and development, because from any other perspective, Ilya's move made no sense.
Back in 2015, no one could offer a better AI development platform than Google. And although OpenAI's founders were all Silicon Valley tycoons, apart from Ilya none of them were AI practitioners (they didn't write code at all).
Not to mention the financial disadvantage: OpenAI was obviously not as well funded as Google. The founders pledged $1 billion, but only about 10% actually came through ($100 million from Elon Musk and about $30 million from other donors). And in terms of personal financial return, a nonprofit could not possibly offer Ilya better compensation than Google.
The only thing that might have convinced Ilya to leave Google to lead OpenAI was this idea. Ilya's philosophical leanings are not as well known to the public as those of his doctoral advisor, Geoffrey Hinton, who once left the United States for Canada out of disillusionment with Reagan-era politics and an unwillingness to have his AI research funded by the military, and who left Google in 2023 so that he could speak freely about the risks of AI.
In short, the founders wanted OpenAI to be their Frodo, carrying the One Ring for them.
The rise and transformation of OpenAI
But life is much easier in novels and movies. Tolkien's solution was simple: he created the character of Frodo, a selfless fellow who could resist the Ring's temptation and was protected from physical attacks by the Fellowship of the Ring.
To make Frodo believable and natural, Tolkien even created a race of innocent, kind and selfless people - the hobbits. As the quintessentially upright, good-natured hobbit, Frodo was the natural choice, able to resist a temptation that even the wise Gandalf could not.
If Frodo's nature is attributed to the hobbits' racial characteristics, then Tolkien's solution to the central problem of "The Fellowship of the Ring" is inherently racist, pinning the hope of mankind on the noble character of a particular race. As someone who is no racist, I can enjoy superheroes (or races of superheroes) solving problems in novels and movies, but I can't be so naive as to think the real world is as simple as the movies. In the real world, I don't believe in this solution.
The real world is simply much more complicated. Take OpenAI as an example. Most of the models OpenAI builds (especially the GPT series) are compute monsters that run on power-hungry chips (mainly GPUs).
In a capitalist world, this means it desperately needs capital. Without the blessing of capital, OpenAI's models would never have developed into what they are today. In this sense, Sam Altman is a key figure as the company's resource hub: thanks to Sam's connections in Silicon Valley, OpenAI received strong support from investors and hardware vendors.
The resources that flow into OpenAI to power its models come for a reason: profit. Wait, isn't OpenAI a non-profit organization? Well, technically yes, but something has changed under the hood.
While keeping its nominal non-profit structure, OpenAI has been transitioning into more of a for-profit entity. The turning point came in 2019, when OpenAI set up a for-profit subsidiary (now known as OpenAI Global, LLC) to legally attract venture capital and give employees equity. This clever move aligned OpenAI's interests with those of its investors (investors this time, not donors, and therefore presumably profit-seeking).
Through this alignment, OpenAI could grow with the blessing of capital. OpenAI Global LLC has had a profound impact on OpenAI's growth, notably by tying the company to Microsoft, securing a $1 billion investment (and later billions more), and running OpenAI's compute-hungry models on Microsoft's Azure-based supercomputing platform.
We all know that a successful AI model requires three things: algorithms, data, and computing power. For the algorithms, OpenAI has gathered some of the world's top AI experts (note: this too depends on capital, since OpenAI's expert team is not cheap).
ChatGPT's data mainly comes from the open Internet, so it is not a bottleneck. Computing power, built on chips and electricity, is an expensive affair. In short, at least one and a half of these three elements (compute entirely, and the expert team in part) are paid for through OpenAI Global LLC's for-profit structure. Without this constant supply of fuel, OpenAI could not have come this far on donations alone.
But this comes at a cost. It is almost impossible to remain independent while being blessed by capital. What is still called a non-profit framework is now more name than substance.
Power infighting emerges
There are many signs that the fight between Ilya and Sam was about this choice of path: Ilya seems to have been trying to keep OpenAI from straying from the direction they originally set.
There is also a theory that Sam mishandled the so-called Q* model breakthrough, which led to the failed coup. But I don't believe OpenAI's board would fire a hugely successful CEO just because he got one particular issue wrong. This supposed error around the Q* breakthrough, if it happened at all, was at best a trigger.
The real problem with OpenAI may be that it has strayed from its original path. In 2018, Elon Musk parted ways with Sam for the same reason. In 2021, the same reason appears to have led a group of former members to leave OpenAI and start Anthropic. And the anonymous letter Elon Musk shared on Twitter while the drama was unfolding pointed to the same issue.
To profit or not to profit: that question seems to be answered at the end of "The Return of the King." With Sam's return and Ilya's exile, the battle over the path is over. OpenAI is destined to become a de facto for-profit company (perhaps still wrapped in a non-profit shell).
But don't get me wrong. I'm not saying Sam is a bad guy and Ilya is a good guy. I'm just pointing out that OpenAI is caught in a dilemma, what could be called the supercompany dilemma:
A company run for profit can end up dominated by the capital invested in it, which is dangerous when the company is building a super-powerful tool. But a company that does not operate for profit may lack resources, and in a capital-intensive field that may mean it cannot build the product at all.
In fact, the creation of any super-powerful tool raises similar concerns about control, not just in the corporate world. Take the recent film "Oppenheimer": when the atomic bomb was successfully detonated, Oppenheimer felt more fear than joy.
Scientists at the time hoped to establish a supranational organization to hold a monopoly on nuclear power. The idea is similar to what OpenAI's founders had in mind: something as super-powerful as the atomic bomb should not be in the hands of any single organization, or even the U.S. government.
This was not just an idea; it was put into action. Theodore Hall, a physicist on the Manhattan Project who leaked key details of the atomic bomb's design to the Soviet Union, acknowledged in a 1997 statement that "the U.S. monopoly on nuclear weapons" was "dangerous and should be avoided." In other words, Theodore Hall helped decentralize nuclear-bomb technology. Decentralizing nuclear power by leaking secrets to the Soviet Union was obviously a controversial approach (the Rosenbergs were even sent to the electric chair for leaking, despite evidence that they had been wronged), but it reflected a consensus among the scientists of the time (including Oppenheimer, the father of the atomic bomb): something this super-powerful should not be monopolized!

But I won't get into how to deal with something super-powerful in general, because that is too broad a topic. Let's refocus on the narrower issue of super-powerful tools controlled by profit-seeking companies.
Does Vitalik face the same situation?
So far we haven't mentioned Vitalik, even though he is in the title of this article. What does Vitalik have to do with OpenAI or The Lord of the Rings?
This is because Vitalik and the other founders of Ethereum were once in a very similar situation.
In 2014, when the founders of Ethereum launched the project, they were divided over whether the legal entity they were about to establish should be a non-profit organization or a for-profit company. The final choice, as with OpenAI, was a non-profit: the Ethereum Foundation.
At the time, the disagreements among Ethereum's founders were probably greater than those among OpenAI's founders, and they led some founders to leave. By contrast, establishing OpenAI as a non-profit was a consensus among all its founders; the disagreement over OpenAI's path came later.
As an outsider, it's unclear to me whether the disagreements among Ethereum's founders were rooted in an expectation that Ethereum would become a super-powerful "One Ring" and therefore should not be controlled by a profit-oriented entity.
But it doesn't matter. What matters is this: although Ethereum has grown into something powerful, the Ethereum Foundation remains a non-profit organization to this day and has never faced OpenAI's to-profit-or-not-to-profit dilemma. In fact, as of today, it doesn't matter much whether the Ethereum Foundation is a non-profit or a for-profit company. Perhaps the question was relatively important when Ethereum was first launched, but that is no longer the case.
Ethereum itself, powerful as it is, has a life of its own and is not controlled by the Ethereum Foundation. Along the way, the Ethereum Foundation seems to have faced funding problems similar to OpenAI's. For example, I once heard Xiao Feng, one of the Ethereum Foundation's early donors, complain at a seminar that the foundation was too poor to give developers adequate financial support.
I don't know how poor the Ethereum Foundation actually is, but this financial constraint does not seem to have hurt the development of the Ethereum ecosystem. By contrast, some well-funded blockchain foundations have failed to build prosperous ecosystems simply by burning money. In this world capital still matters, but only up to a point. In OpenAI's case, though: no capital, no way!
Ethereum and artificial intelligence are of course completely different technologies. But one thing is similar: the development of both depends on massive resource investment, that is, capital. (Note: developing the Ethereum code itself may not require much capital; here I mean building the entire Ethereum system.)
To attract that much capital, OpenAI had to deviate from its original intentions and quietly transform into a de facto for-profit company. Ethereum, on the other hand, has attracted large amounts of capital into its system without being controlled by any profit-seeking organization. To be blessed by capital without being controlled by it: that is almost a miracle!
Vitalik can pull this off because he has his own Frodo: the blockchain!
Let's divide technologies into two categories based on whether they actually produce anything: those that produce and those that connect. Artificial intelligence belongs to the former, blockchain to the latter. AI can carry out all kinds of production: ChatGPT generates text, Midjourney generates images, and robots build cars in Tesla's unmanned factory.
Technically speaking, the blockchain does not produce anything. It is just a state machine, and it cannot even initiate any operation on its own. But its importance as a connective technology lies in providing a way to formalize human collaboration at scale beyond the traditional for-profit company. In essence, a corporation is a contract among shareholders, creditors, the board, and management. That contract is binding because if one party breaches it, the others can sue in court, and the court's ruling is ultimately carried out by the machinery of the state (so-called enforcement).
So, fundamentally, a corporation is a contractual relationship enforced by the machinery of the state. Blockchain now brings us a new way of contracting, one enforced by technology itself. While the contracts on Bitcoin's blockchain are still very limited in functionality (and intentionally kept that way), Ethereum's smart contracts generalize this new way of contracting. In essence, Ethereum lets humans collaborate at scale, across many domains, in a completely new way, unlike the profit-driven companies of the past. DeFi, for example, is a new way for people to collaborate in finance.
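To make "a contract enforced by technology" a little more concrete, here is a minimal conceptual sketch in Python. It is not Solidity and not how Ethereum actually works; the escrow scenario, the names, and the amounts are all invented for illustration. The point is where enforcement lives: the agreement is a small state machine whose rules cannot be broken, so no court is needed.

```python
# A toy "escrow" agreement modeled as a state machine.
# In a traditional company, such an agreement would be a legal contract
# enforced by courts; here, the rules in the code are the enforcement.

class EscrowContract:
    def __init__(self, buyer: str, seller: str, price: int):
        self.buyer, self.seller, self.price = buyer, seller, price
        self.balances = {buyer: 0, seller: 0}  # funds held by the contract
        self.state = "AWAITING_PAYMENT"        # current contract state

    def deposit(self, sender: str, amount: int) -> None:
        # Only the buyer may fund the escrow, and only with the agreed price.
        if self.state != "AWAITING_PAYMENT" or sender != self.buyer or amount != self.price:
            raise ValueError("rule violated: deposit rejected")
        self.balances[self.buyer] += amount
        self.state = "AWAITING_DELIVERY"

    def confirm_delivery(self, sender: str) -> None:
        # Only the buyer can confirm delivery; confirmation releases the funds.
        if self.state != "AWAITING_DELIVERY" or sender != self.buyer:
            raise ValueError("rule violated: confirmation rejected")
        self.balances[self.seller] += self.balances[self.buyer]
        self.balances[self.buyer] = 0
        self.state = "COMPLETE"

# The contract never acts on its own: like a blockchain, it is a state
# machine that only transitions when participants call it.
escrow = EscrowContract(buyer="alice", seller="bob", price=100)
escrow.deposit("alice", 100)
escrow.confirm_delivery("alice")
assert escrow.state == "COMPLETE" and escrow.balances == {"alice": 0, "bob": 100}
```

A real smart contract differs in many ways (it lives on a shared ledger, is executed and verified by every node, and moves real value), but the principle of enforcement by code rather than by the machinery of the state is the same.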
In this sense, blockchain is a "super company"! It is precisely this form of "super company" that has allowed Ethereum to grow into the prosperous state it enjoys today without facing OpenAI's corporate dilemma. The blockchain is Vitalik's Frodo, carrying the One Ring without being devoured by its power.
So now you can see that Frodo has been a key character behind all of these stories:
- Gandalf is lucky because he has Frodo as a friend in the fantasy world.
- Vitalik is also lucky, because in the new world he has his own Frodo: the blockchain.
- Ilya and the other OpenAI founders are not so lucky, because they live in an old world where no Frodo exists.