OpenAI CEO Sam Altman published an article titled "The Gentle Singularity" yesterday (11th), detailing his vision for the future development of artificial intelligence (AI). Altman believes that humans have crossed the technological "event horizon" and are moving towards the era of digital superintelligence, with this transformation unfolding smoothly rather than abruptly.
In this blog post, Altman's core points include:
- Current AI systems have already surpassed human capabilities in many aspects, with hundreds of millions of people relying on them daily to complete important tasks, showing AI's deep integration into daily life. He predicts that by 2025, AI agents capable of performing cognitive work will emerge, by 2026 AI systems may generate entirely new insights, and by 2027 physical robots may be able to perform real-world tasks.
- Altman emphasizes that the 2030s will bring great abundance of intelligence and energy, driving scientific progress and productivity leaps, significantly improving quality of life. He notes that AI's recursive self-improvement and the self-reinforcing cycles of automated data centers and robotic supply chains will bring the cost of intelligence close to the cost of electricity, thus achieving a future where intelligence is "too cheap to meter".
- However, he also acknowledges challenges, including the AI alignment problem (ensuring AI aligns with long-term human interests) and the fair distribution of superintelligence. He suggests that after solving the alignment problem, the focus should be on making superintelligence cheap and widely available, avoiding concentration in the hands of a few individuals or companies, to promote social adaptation and fairness.
Below is the full text of Sam Altman's blog post "The Gentle Singularity".
The Gentle Singularity
While some jobs may disappear, new opportunities and wealth will emerge
We have crossed the event horizon, and liftoff has begun. Humans are about to create digital superintelligence, and so far, it's far less weird than imagined.
Robots are not yet walking the streets, and most people are not spending all day conversing with artificial intelligence (AI). People still die from diseases, we cannot easily travel to space, and there are many things in the universe we do not understand.
However, we have recently built systems that are smarter than humans in many ways, systems that can meaningfully amplify the output of the people who use them. The least likely part of the work is now behind us; the scientific insights that got us to systems like GPT-4 and o3 were hard-won, but they will take us far.
AI will contribute to the world in many ways, but the gains in quality of life from AI driving faster scientific progress and increased productivity will be enormous; the future can be far better than the present. Scientific progress is the biggest driver of overall progress; the idea that we could have much more of it is exciting.
In some significant sense, ChatGPT is already more powerful than any human who has ever lived. Hundreds of millions of people rely on it every day for increasingly important tasks; a small new feature can have an enormous positive impact, and a small flaw, multiplied across hundreds of millions of users, can cause massive harm.
2025 will witness the arrival of agents capable of performing truly cognitive work; writing computer code will be forever changed. In 2026, systems that can discover entirely new insights may emerge. In 2027, robots capable of performing tasks in the real world may appear.
More people will be able to create software and art. But the world's demand for both will also be far greater, and experts will likely remain far ahead of novices, as long as they embrace the new tools. Overall, one person in 2030 will be able to accomplish far more than one person in 2020; that will be a striking change, and many people will find ways to benefit from it.
In the most important aspects, the 2030s may not be very different from now. People will still love their families, express creativity, play games, and swim in lakes.
But in some still very important ways, the 2030s may be entirely different from any previous era. We do not know to what extent intelligence can surpass human levels, but we are about to find out.
In the 2030s, intelligence and energy (ideas, and the ability to make ideas happen) will become extremely abundant. These two have long been the fundamental limits on human progress; with sufficient intelligence and energy (and good governance), we could in theory have anything else.
We already coexist with amazing digital intelligence, and after the initial shock, most people have adapted quite well. Quickly, we went from being amazed that AI could generate a beautiful piece of text to wondering when it could write a beautiful novel; from being amazed that it could make life-saving medical diagnoses to wondering when it could develop treatments; from being amazed that it could create small computer programs to wondering when it could create an entirely new company. This is the progression of the singularity: miracles become everyday, and then become basic requirements.
We have heard scientists say their productivity is two to three times what it was before AI appeared. There are many reasons why advanced AI is compelling, but perhaps the most important is that we can use it to accelerate AI research. We might discover new computational substrates, better algorithms, and even more unknown things. If we can complete ten years of research in a year, or even a month, the pace of progress would obviously be entirely different.
From now on, the tools we have built will help us gain more scientific insights and assist us in creating better AI systems. Of course, this is different from AI systems completely autonomously updating their own code, but it is the embryonic form of recursive self-improvement.
Other self-reinforcing cycles are also at work. The creation of economic value has initiated an expanding infrastructure construction flywheel to run these increasingly powerful AI systems. Robots that can manufacture other robots (and to some extent, data centers that can build other data centers) are not far off.
If we have to manufacture the first hundred thousand humanoid robots the traditional way, but they can then operate the entire supply chain (mining and refining minerals, driving trucks, running factories, and so on) to build more robots, which can in turn build more chip fabs, data centers, and so on, the pace of progress will obviously be very different.
As data center production becomes automated, the cost of intelligence should ultimately approach the cost of electricity. (People often wonder how much energy a ChatGPT query consumes; on average, a query uses about 0.34 watt-hours, equivalent to an oven running for just over a second, or an efficient light bulb running for a few minutes. A query also uses about 0.000085 gallons of water, approximately one-fifteenth of a teaspoon.)
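The per-query figures above can be sanity-checked with a bit of arithmetic. The appliance power ratings below are rough assumptions chosen to match the comparisons in the text, not figures from the post:

```python
# Sanity-check the per-query energy and water figures quoted above.
# Appliance power draws are assumed, not stated in the post.
query_wh = 0.34          # watt-hours per query (figure from the post)
oven_w = 1000.0          # assumed average draw of an oven element, watts
bulb_w = 10.0            # assumed draw of an efficient LED bulb, watts

oven_seconds = query_wh * 3600 / oven_w   # Wh -> joules, divided by watts
bulb_minutes = query_wh * 60 / bulb_w     # Wh -> watt-minutes, divided by watts
print(f"oven: {oven_seconds:.2f} s, bulb: {bulb_minutes:.1f} min")
# -> oven: 1.22 s, bulb: 2.0 min  (matches "just over a second" / "a few minutes")

query_gal = 0.000085     # gallons of water per query (figure from the post)
teaspoon_ml = 4.93       # US teaspoon in milliliters
query_ml = query_gal * 3785.41            # gallons -> milliliters
print(f"water: {query_ml:.3f} mL, about 1/{teaspoon_ml / query_ml:.0f} teaspoon")
# -> water: 0.322 mL, about 1/15 teaspoon
```

Under these assumptions the quoted comparisons hold: roughly 1.2 seconds of oven use, about two minutes of an efficient bulb, and about one-fifteenth of a teaspoon of water.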
The pace of technological progress will continue to accelerate, and human ability to adapt to almost anything will persist. Some challenges will be difficult, such as entire job categories disappearing, but on the other hand, the world will quickly become more prosperous, and we will be able to seriously consider policies previously unimaginable. We may not immediately adopt a new social contract, but looking back decades later, incremental changes will accumulate into significant transformations.
If history is any guide, we will find new things to do and new things to want, and we will quickly absorb new tools (job changes after the Industrial Revolution are a good recent example). Expectations will rise, but capabilities will rise just as quickly, and we will all get better things. We will build ever more wonderful things for each other. Humans have a long-term, important advantage over AI: we are wired to care about other people and what they think and do, and we care much less about machines.
A farmer from a thousand years ago looking at many of our jobs would say we have "fake jobs", believing we are merely entertaining ourselves and playing games because we have abundant food and unimaginable luxuries. I hope we will look at jobs a thousand years from now and consider them extremely "fake jobs", but I have no doubt that for those performing them, they will be extremely important and satisfying.
The realization of new miracles will be astonishing. It's difficult to imagine today what we might discover by 2035; perhaps we'll solve high-energy physics problems in one year and begin space colonization the next; or achieve a major breakthrough in materials science one year and develop a truly high-bandwidth brain-machine interface the next. Many people might choose to live similarly, but at least some might decide to "plug in".
Looking ahead, this sounds hard to comprehend. But living through it will probably feel impressive yet manageable. From a relativistic perspective, the singularity happens bit by bit, and the merge happens slowly. We are climbing the long arc of exponential technological progress; it always looks steep looking forward and flat looking backward, but it is one smooth curve. (Think back to 2020: saying we would be near Artificial General Intelligence (AGI) by 2025 would have sounded crazier than our current predictions about 2030 sound now, given how the past five years actually went.)
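The "steep ahead, flat behind" intuition is a real property of exponential curves, not just a metaphor: over any fixed window, exponential growth multiplies by the same constant factor, so the local view never changes even as absolute values explode. A minimal numerical sketch (illustrative values, not from the post):

```python
import math

# For f(t) = e^(r*t), growth over any fixed window dt is a constant
# factor e^(r*dt), no matter where on the curve you stand. That is why
# the curve "looks the same" locally at every point in time.
r, dt = 0.5, 1.0
for t in [0.0, 10.0, 20.0]:
    now = math.exp(r * t)
    ahead = math.exp(r * (t + dt))
    print(f"t={t:>4}: value={now:12.2f}, forward ratio={ahead / now:.4f}")
# The forward ratio is e^0.5 ~ 1.6487 at every t, even though the
# absolute value grows from 1 to about 22,000.
```

The ratio printed is identical at every point on the curve; only the absolute scale changes, which is why the curve feels smooth from the inside.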
Beyond enormous potential, serious challenges must also be faced. We need to address technological and societal safety issues, but subsequently, given economic implications, broadly distributing access to superintelligence is crucial. The best path forward might be:
- Solve the alignment problem, meaning we can robustly guarantee that AI systems learn and act toward what we collectively really want over the long term (social media feeds are an example of misaligned AI; the algorithms behind them are incredibly good at keeping you scrolling and clearly understand your short-term preferences, but they do so by exploiting something in your brain that overrides your long-term preferences).
- Then focus on making superintelligence cheap, widely available, and not concentrated in the hands of any individual, company, or nation. Society is resilient, creative, and adapts quickly. If we can harness collective will and wisdom, then despite making many mistakes, and despite some things going seriously wrong, we will learn and adapt rapidly, maximizing this technology's benefits and minimizing its risks. Within the broad boundaries that society must decide on, giving users significant freedom seems very important. The sooner the world starts discussing what those broad boundaries are and how to define collective alignment, the better.
We (the entire industry, not just OpenAI) are building a brain for the world. It will be highly personalized and easy for everyone to use; we will be limited by good ideas. For a long time, people in tech have made fun of "idea people", those who have an idea and are looking for a team to build it. It now looks like their moment is finally coming.
OpenAI has many facets, but above all, we are a superintelligence research company. Much work lies ahead, but the path is largely illuminated, with dark areas rapidly receding. We are immensely grateful to be doing this work.
The era of intelligence too cheap to meter is nearly upon us. This may sound crazy, but if we had told you in 2020 that we would be where we are today, it probably would have sounded crazier than our current predictions about 2030.
May we smoothly, exponentially, and without turbulence advance towards superintelligence.