Sam Altman and Doomsday Capitalism

Text | Sleepy.txt

In 2016, The New Yorker published a feature on Sam Altman titled "Sam Altman's Manifest Destiny." At 31, he was already the president of Y Combinator, one of Silicon Valley's most powerful startup incubators.

One detail stood out: Altman enjoys racing cars, owns five sports cars, and likes to rent planes. He told the reporter that he keeps two bags packed, one of them an escape kit for when he needs to run.

He also stockpiled guns, gold, potassium iodide (for protection against nuclear radiation), antibiotics, batteries, water, and Israeli Defense Forces gas masks, and bought a piece of land in Big Sur, on the California coast, that he could fly to and shelter on at any time.

Ten years later, Altman has become the person most dedicated to heralding the apocalypse and most dedicated to selling the ark. While warning the world that AI could destroy humanity, he personally accelerated that process; while claiming he wasn't in it for the money, he built a personal investment empire worth some $2 billion; while calling for regulation, he pushed out everyone who tried to apply the brakes.

Rather than call him a madman or a cunning con artist, it is more accurate to say he is simply the most standard, most successful product of the massive machine that is Silicon Valley. His "destiny" was to forge humanity's collective anxiety into his own scepter and crown.

Doomsday is a good business

Altman's business model can be summed up in one sentence: package a business as a holy war over the survival of humanity.

He's been practicing this approach since his Y Combinator days. He transformed Y Combinator from a small workshop providing tens of thousands of dollars to early-stage startups into a massive entrepreneurial empire. He established the Y Combinator Research Lab, funding projects that don't generate revenue but sound ambitious. He told reporters that Y Combinator's goal is to fund "all important areas."

At OpenAI, he took this approach to its extreme. He was selling a packaged worldview: AI apocalypse + redemption solution.

He is better than anyone at describing the "extinction risk" posed by AI. He co-signed a statement with hundreds of scientists saying that the risks of AI are comparable to nuclear war. Testifying before the Senate, he said, "We have a certain fear (of AI's potential)—and people should be happy about it." He implied that this fear itself is a useful warning.

Each of these statements could make headlines, each one a free advertisement for OpenAI. This meticulously crafted fear is the most efficient lever for attention. Which is more exciting for capital and the media: a technology that "improves efficiency" or a technology that "may destroy humanity"? The answer is self-evident.

For the redemption side, he had a ready-made product: Worldcoin. Once fear has been implanted in the public consciousness, selling the solution becomes the natural next step. Worldcoin uses a basketball-sized silver sphere to scan human irises worldwide, promising to distribute money to everyone in the AI era. The story sounded appealing, but this practice of trading money for biometric data quickly alarmed governments around the world. More than a dozen countries, including Kenya, Spain, Brazil, India, and Colombia, halted or investigated Worldcoin on data-privacy grounds.

But that may not matter to Altman at all. What matters is that through this project he successfully positioned himself as the "only one with a solution."

Selling fear and hope in a package is the most efficient business model of our time.

Regulation is my weapon, not my shackles

How does someone who constantly talks about the end of the world do business? Altman's answer: turn regulation into a weapon.

In May 2023, he testified before the U.S. Congress for the first time. Unlike other tech executives who complained about regulation, he proactively asked, "Please regulate us." He proposed an AI licensing system under which only licensed companies could develop large-scale models. This made him look like a highly responsible industry leader, but at the time OpenAI was far ahead technologically, and a strict, high-barrier regulatory regime would mainly have served to keep potential competitors out.

However, as time went on, especially after competitors like Google and Anthropic caught up technologically and the open-source community began to rise in power, Altman's rhetoric on regulation underwent a subtle shift. He began to emphasize on various occasions that overly stringent regulations, particularly requiring AI companies to undergo mandatory pre-release reviews, could stifle innovation and be "disastrous."

At this point, regulation is no longer a moat, but a stumbling block.

When he held an absolute advantage, he called for regulation to lock it in; when that advantage waned, he called for freedom to break through. He even tried to extend his reach to the very top of the supply chain, floating a massive $7 trillion chip plan and courting capital such as the UAE's sovereign wealth funds, with the aim of reshaping the global semiconductor landscape. This went far beyond the remit of a CEO; it looked more like the maneuvering of an ambitious figure out to influence the global order.

Behind all this lies OpenAI's rapid transformation from a nonprofit into a commercial behemoth. When it was founded in 2015, its stated mission was to safely ensure that AGI "benefits all of humanity." In 2019, it created a "capped-profit" subsidiary. By early 2024, observers noticed that the word "safely" had quietly disappeared from OpenAI's mission statement. The company's structure remained "capped-profit," but its commercialization had clearly accelerated. Revenue exploded accordingly, from tens of millions of dollars in 2022 to over ten billion dollars in annualized revenue by 2024, and its valuation soared from $29 billion into the hundreds of billions.

When a person starts gazing at the stars and talking about the fate of humanity, the first thing to check is where his wallet is.

The Persona: The Immunity of a Charismatic Leader

On November 17, 2023, Altman was dismissed by the board of directors he had personally assembled, on the grounds that he was "not consistently candid in his communications with the board."

What happened over the next five days was less a business battle than a referendum on faith. President Greg Brockman resigned in protest; 95% of the company's employees, more than 700 people, signed a letter demanding the board's resignation and threatening to defect en masse to Microsoft; Satya Nadella, CEO of Microsoft, OpenAI's largest investor, publicly sided with Altman, saying he was welcome to come work at Microsoft at any time. In the end, Altman returned in triumph, was reinstated, and purged nearly all the board members who had opposed him.

How could a CEO whom his own board officially found "not consistently candid" return unscathed, with even greater power than before?

Helen Toner, one of the ousted board members, later revealed the details: Altman had concealed from the board his actual control of the OpenAI Startup Fund; he had repeatedly lied about critical safety procedures; the board had even learned of ChatGPT's launch from Twitter. Any one of these would normally be enough to sack a CEO.

But Altman was fine. He is not an ordinary CEO; he is a "charismatic leader."

This is a concept proposed by sociologist Max Weber a century ago, suggesting that there is a kind of authority that comes not from position or law, but from the leader's "extraordinary personal charisma." Followers believe in him not because he has done anything right, but because he is who he is. This kind of faith is irrational. When a leader makes a mistake or is challenged, the followers' first reaction is not to question the leader, but to attack the challenger.

This is exactly how OpenAI's employees behaved. They did not believe in the board's procedural legitimacy; they believed only in the "destiny" Altman represented, and saw the board members as "obstacles to human progress."

After Altman's reinstatement, OpenAI's safety team was quickly dismantled. Chief scientist Ilya Sutskever, who had spearheaded Altman's dismissal, left the company. In May 2024, safety team lead Jan Leike resigned, tweeting that the company's "safety culture and processes have taken a backseat to shiny products."

In the presence of a charismatic leader, facts don't matter, processes don't matter, and safety doesn't matter. The only thing that matters is faith.

Prophets on the assembly line

Sam Altman is just the latest and most successful model on Silicon Valley's "prophet" production line.

There are many familiar faces on this production line.

Take Elon Musk. In 2014, he went around warning that with artificial intelligence "we are summoning the demon." Yet his Tesla is, by his own telling, the world's largest robotics company and one of the most complex applications of AI. After breaking with Altman, he founded xAI in 2023 and declared war head-on; within a year, xAI's valuation exceeded $20 billion. He warns of the coming demon while building another one. This self-contradiction is strikingly similar to Altman's.

Take Mark Zuckerberg. A few years ago, he staked his company's entire future on the metaverse, burning through nearly $90 billion before finding it a dead end. So he abruptly changed course, shifting the company's core narrative from the metaverse to AGI. In 2025, he announced the creation of a "Superintelligence Labs" and personally recruited its talent. Both bets involve grand visions for humanity's future, both demand astronomical capital, and both strike a savior's pose.

Then there's Peter Thiel. As Altman's mentor, he is more like the chief architect of this production line. While investing in companies promoting the "technological singularity" and "immortality," he has been buying land and building doomsday bunkers in New Zealand, where he obtained citizenship after spending only 12 days in the country. His company Palantir is one of the world's largest data-surveillance firms, serving mainly governments and militaries. He is simultaneously preparing for the collapse of civilization and crafting the most sophisticated surveillance tools for those in power. In the military operation against Iran in early 2026, it was Palantir's AI platform that acted as the brain, fusing massive streams of data from spy satellites, communications intercepts, drones, and Claude-model analysis, turning chaotic information into actionable intelligence in real time and ultimately locking onto the target for the decapitation strike.

Each of them plays a dual role: both "warning of the impending doom" and "driving the doomsday forward." This isn't a split personality; it's a business model proven by the capital markets to be the most efficient. They capture attention, capital, and power by creating and selling structural anxiety. They are both products and shapers of this system, the "evil behind the grand narrative."

Silicon Valley is no longer just a place that exports technology; it is a factory that creates "modern myths."

Why does this trick always work?

Every few years, Silicon Valley produces a new prophet who sweeps through the attention of capital, media, and the public with a grand narrative of apocalypse and redemption. This trick is repeated time and again, yet it works time and again. Every step of it precisely targets specific loopholes in human cognition.

Step 1: Manage the rhythm of fear, not just create fear.

The potential risks of AI are real, but they could have been discussed calmly. This group deliberately chose to present them in the most dramatic way possible, and they exercised precise control over when the fear was released.

The timing of instilling fear in the public, the timing of offering hope, and the timing of raising the alarm are all carefully designed. Fear is the fuel, but the timing and method of ignition are the real skill.

Step 2: Turn the incomprehensibility of technology into a source of authority.

AI is a completely opaque black box to the vast majority of people. When something becomes too complex to be fully understood, people instinctively relinquish the right to interpret it to "the person who understands it best." They deeply understand this and have turned it into a structural advantage; the more mysterious, dangerous, and beyond human comprehension they describe AI, the more irreplaceable they themselves become.

The terrifying aspect of this logic is that it's self-reinforcing. Any external criticism is automatically neutralized because the critic "doesn't understand enough." Regulators don't understand the technology, so their judgments are unreliable; academic critics haven't built models on the front lines, so their concerns are theoretical. Ultimately, only they themselves are qualified to judge themselves.

Step 3: Replace "interests" with "meaning" so that followers voluntarily give up criticism.

This is the most difficult layer to penetrate in the entire system, and also its most enduring source of power. What they're peddling is never just a job or a product, but a story meaningful on a cosmic scale: you are deciding the fate of humanity. Once this narrative is accepted, followers will willingly relinquish independent judgment. Because in the face of a mission concerning "human survival," questioning the leader's motives makes one appear insignificant, even like an obstacle to history. It makes people willingly surrender their critical abilities and understand this surrender as a noble choice.

Put these three steps together, and you'll understand why this system is so difficult to shake. It doesn't rely on lies; it relies on a precise understanding of human cognitive structures. It first creates a fear you can't ignore, then monopolizes the interpretation of that fear, and finally uses "meaning" to turn you into its most loyal propagator.

Within this system, Altman is the model that has run most smoothly to date.

Whose destiny?

Altman has long maintained that he owns no equity in OpenAI and draws only a symbolic salary, which was once the cornerstone of his "in it for love, not money" narrative.

But Bloomberg estimated his net worth in 2024 at roughly $2 billion, built mainly on a decade of venture investments. His early stake in the payments company Stripe reportedly returned hundreds of millions of dollars; his stake in Reddit paid off handsomely at its IPO. He also bet heavily on the nuclear-fusion company Helion while claiming that AI's future hinges on energy breakthroughs, and OpenAI then negotiated a major electricity-purchase deal with Helion. He said he recused himself from those negotiations, but the chain of interests was crystal clear.

He doesn't actually own direct shares in OpenAI, but he has built a vast, self-centered investment empire around it. Every grand sermon he gives about the future of humanity adds value to this empire.

Now, looking back at his doomsday survival kit filled with guns, gold, and antibiotics, and that land in Big Sur that he could fly to at any time, do you have a new understanding?

He never hid any of it. The escape kit was real, the bunker was real, and his fascination with the apocalypse was real. But he was also the one most actively pushing for the apocalypse to arrive. These two things weren't contradictory, because in his logic, the apocalypse didn't need to be stopped, just anticipated. He was obsessed with playing the role of the only one who could see the future clearly and prepare for it.

Whether preparing a physical escape kit or building a financial empire around OpenAI, it is essentially the same move: securing the most certain winning position for himself in an uncertain future that he himself is driving toward.

In February 2026, he had barely drawn his red line of "AI should not be used in war" before signing a contract with the Pentagon. This isn't hypocrisy; it's an inherent requirement of his business model. The moral stance is part of the product; the commercial contracts are the source of profit. He needs to play both the compassionate savior and the ruthless prophet of doom, because only by playing both roles can his story continue and his "destiny" be revealed.

The real danger is never AI, but those who believe they have the right to define the fate of humanity.
