In the AI community, every statement Sam Altman makes is read as an update to the "weather forecast" for the future.
Last night, Altman posted on X that he would be hosting an online workshop to gather feedback and opinions from the public before starting to build the next generation of tools.
At 8:00 AM Beijing time this morning, the seminar initiated by OpenAI CEO Sam Altman began as scheduled. Entrepreneurs, CTOs, scientists, and developer-community representatives from a range of industries put pointed, practical questions to Altman about the future form of AI, model evolution, intelligent agents, the automation of scientific research, and safety.
At the seminar, the head of OpenAI not only outlined the evolutionary blueprint for GPT-5 and its subsequent versions, but also spelled out a reality that all developers and entrepreneurs must face: we are entering a period of dramatic change in which the cost of intelligence is extremely low and software is shifting from "static" artifacts to "on-the-fly generation".
The first focus of the meeting was the "asymmetry" in GPT-5's performance. Some developers astutely noticed that while the new version is far stronger than GPT-4.5 in logical reasoning and programming, it seems slightly weaker in written expression. Altman was notably candid about this.
He admitted that OpenAI did "mess up" the priority given to writing ability in the development of GPT-5.2, because the team allocated its limited computing resources to hard intelligence metrics such as reasoning, coding, and engineering.
In Altman's view, intelligence is a "malleable resource": once a model has a top-tier reasoning engine, restoring writing ability is only a matter of time. This "specialization" reflects a deliberate strategic focus at OpenAI: first conquer the highest levels of human intelligence through the Scaling Law, then come back and fill in the details of aesthetics and expression. It also means that future competition among models will no longer be a contest along a single dimension, but a test of who can reach parity across all dimensions of intelligence first.
If the level of intelligence determines the ceiling, then cost and speed determine AI's rate of penetration. Altman made a striking promise at the seminar: by the end of 2027, the cost of GPT-5.2-level intelligence will fall by a factor of at least 100.
However, this future in which intelligence is "too cheap to meter" is not the end of the story.
Altman pointed out that a subtle shift is under way in the market: developers' thirst for speed is starting to outweigh their focus on cost. As agents take on long-horizon tasks spanning dozens of steps, complex autonomous decision-making becomes impractical unless output speed rises a hundredfold or more. Given this trade-off, OpenAI may offer two paths: extremely cheap "intelligence on tap," and an "intelligence booster" with extremely fast feedback. This emphasis on speed foreshadows a complete leap from simple question-and-answer to high-frequency, real-time autonomous operation.
Against this backdrop of plummeting intelligence costs and soaring speed, the traditional concept of software is crumbling. Altman put forward a disruptive vision: the software of the future should not be static.
In the past, we were used to installing a one-size-fits-all Word or Excel; in the future, when you run into a specific problem, the computer should write a piece of code for you on the spot, generating an "on-demand application" to solve it. This "on-demand, disposable" model will completely reshape the operating system. We may keep some familiar buttons and interactions out of habit, but the underlying logical architecture will be highly personalized. The tools in each person's hands will evolve as their workflow accumulates, eventually forming a unique, dynamically evolving productivity system. This is not merely software customization, but a restructuring of production relations.
InfoQ has translated and compiled the key points of this seminar for our readers:
Question: What are your views on the impact of AI on the future society and economy?
Sam Altman: To be honest, it’s very difficult to fully digest an economic transformation of this scale within a year. But I think it will greatly empower everyone: it will bring massive abundance of resources, lower barriers to entry, and extremely low costs for creating new things, building new companies, and exploring new science.
Provided we avoid major policy mistakes, AI should act as a "balancing force" in society, giving genuine opportunities to those who have long been unfairly treated. However, I do worry that AI could also lead to a high concentration of power and wealth, which must be a core focus of policymaking, and we must resolutely avoid this situation.
Question: I've noticed that GPT-4.5 was once the pinnacle of writing ability, but recently GPT-5's writing in ChatGPT seems somewhat clumsy and hard to read. Clearly, GPT-5 is stronger at agent tasks, tool invocation, and reasoning; it seems to have become more "specialized" (e.g., extremely strong at programming, but only average at writing). What is OpenAI's view on this imbalance in capabilities?
Sam Altman: To be honest, we really messed up the writing part. We hope that future GPT-5.x versions will far surpass 4.5 in writing.
At the time, we decided to focus most of our efforts on GPT-5.2's "intelligence, reasoning, programming, and engineering capabilities" because resources and bandwidth were limited, and focusing on one aspect sometimes means neglecting another. However, I firmly believe the future belongs to "general-purpose, high-quality models." Even if you only want it to write code, it should possess excellent communication and expression skills and be able to communicate clearly and incisively. We believe that "intelligence" is fundamentally interconnected, and we have the ability to maximize all these dimensions within a single model. Currently, we are indeed focusing heavily on "programming intelligence," but we will soon catch up in other areas.
Intelligence will become too cheap to meter.
Question: For developers running tens of millions of agents, cost is the biggest bottleneck. What are your thoughts on smaller models and future cost reductions?
Sam Altman: Our goal is to reduce the cost of GPT-5.2-level intelligence by a factor of at least 100 by the end of 2027.
However, a new trend is emerging: as model outputs become increasingly complex, users' demand for "speed" is even surpassing that for "cost." OpenAI excels at reducing the cost curve, but in the past, we haven't focused enough on "ultra-fast output." In some scenarios, users might be willing to pay a premium for a 100-fold speed increase. We need to find a balance between "extreme affordability" and "extreme speed." If the market craves lower costs, we will go very far down that curve.
Question: Current user interfaces are not designed for agents. Will the proliferation of agents accelerate the emergence of "micro apps"?
Sam Altman: I no longer see software as a "static" thing. Now, if I encounter a small problem, I expect the computer to immediately write a piece of code to solve it for me. I believe the way we use computers and operating systems will fundamentally change.
While you might use the same word processor every day (because you need buttons in familiar places), the software will be heavily customized to your habits. Your tools will continuously evolve and converge toward your individual needs. Within OpenAI, people are already used to customizing their workflows with programming models (Codex), and everyone's tools are completely different. Software that exists "because of me and for me" is close to inevitable.
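To make that picture concrete, here is a minimal sketch, in Python, of what a disposable "on-demand application" flow might look like using the OpenAI chat completions API: describe a one-off task, have the model write a throwaway script, review it, then run it. The model name "gpt-5.2", the prompts, and the file handling are illustrative assumptions, not anything OpenAI has published.

```python
# A minimal sketch of the "on-demand, disposable app" idea: ask a model to
# write a one-off script for the task at hand, review it, then run it.
# The model name, prompt, and file handling are illustrative assumptions.
import subprocess
import tempfile

from openai import OpenAI  # requires the `openai` package and an API key

client = OpenAI()

def disposable_app(task: str) -> str:
    """Ask the model for a self-contained Python script that solves `task`."""
    resp = client.chat.completions.create(
        model="gpt-5.2",  # assumed identifier; substitute whatever model you have access to
        messages=[
            {"role": "system",
             "content": "Write a single self-contained Python script. "
                        "Reply with code only, no markdown fences."},
            {"role": "user", "content": task},
        ],
    )
    return resp.choices[0].message.content

if __name__ == "__main__":
    code = disposable_app("Read sales.csv and print total revenue per month.")
    print(code)  # review the generated script before executing it
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
    # Generated code is untrusted; run it only after human review (or in a sandbox).
    subprocess.run(["python", f.name], check=False)
```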
Advice for entrepreneurs: don't just patch the edges of the model.
Question: As model updates continuously consume a startup's functionality, how can entrepreneurs build a competitive moat? What are some things OpenAI promises not to touch?
Sam Altman: Many people think the physical laws of business have changed, but they haven't. The only changes are that work gets done faster and software gets built faster. The rules for building a successful startup haven't changed: you still need to solve customer acquisition, establish a GTM (go-to-market) strategy, create stickiness, and build network effects or other competitive advantages.
My advice to entrepreneurs is this: when an astonishingly good GPT-6 update arrives, will your company be happy or sad? You should build things where "the stronger the model, the stronger your product." If you're just patching the edges of the model, you'll have a tough time.
Question: Current agents often break down after 5 to 10 steps when executing long-process tasks. When will we be able to achieve truly long-term autonomous operation?
Sam Altman: It depends on the complexity of the task. Within OpenAI, some specific tasks that run via the SDK can run almost indefinitely.
This is no longer a question of "when will it work," but of "what scope it works in." If you have a specific task that you understand very thoroughly, you can try to automate it today. But if you want to tell the model, "Go start a startup for me," that is still very hard right now, because the feedback loop is long and verification is difficult. I recommend that developers first break the task down, let the agent verify each intermediate step itself, and then gradually expand its scope of responsibility.
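As one way to picture that advice, here is a minimal sketch of a step-decomposed agent loop in which each intermediate result is self-verified before the next step runs. The Step/run_pipeline structure and the example steps are assumptions for illustration, not an OpenAI SDK interface; in practice, run and verify would wrap model or tool calls.

```python
# A minimal sketch of "break the task down and let the agent verify each step".
# Step and run_pipeline are a generic pattern of our own devising, not an
# OpenAI SDK API; run/verify stand in for whatever calls a real agent makes.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Step:
    name: str
    run: Callable[[dict], dict]      # produces an updated working state
    verify: Callable[[dict], bool]   # self-check on the intermediate result

def run_pipeline(steps: list[Step], state: dict, max_retries: int = 2) -> dict:
    """Execute steps in order; retry a step if its self-check fails."""
    for step in steps:
        for _ in range(max_retries + 1):
            candidate = step.run(state)
            if step.verify(candidate):
                state = candidate
                break
        else:
            raise RuntimeError(f"Step '{step.name}' failed verification "
                               f"after {max_retries + 1} attempts")
    return state

if __name__ == "__main__":
    # Toy example: draft some text, then approve it, verifying each stage.
    steps = [
        Step("draft",  run=lambda s: {**s, "text": "draft of " + s["topic"]},
                       verify=lambda s: bool(s.get("text"))),
        Step("review", run=lambda s: {**s, "approved": len(s["text"]) > 10},
                       verify=lambda s: s.get("approved", False)),
    ]
    print(run_pipeline(steps, {"topic": "release notes"}))
```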
Can AI help humans generate good ideas?
Question: Many people complain that AI-generated content is "slop." How can we use AI to improve the quality of human creativity?
Sam Altman: While people call AI output garbage, humans produce just as much nonsense. Generating truly new ideas is extremely difficult. I'm increasingly convinced that the boundaries of human thought depend on the boundaries of the tools we use.
I hope to develop tools that help people generate good ideas. When the cost of creation drops dramatically, we can quickly try and fail through intensive feedback loops, thus finding good ideas sooner.
Imagine having a "Paul Graham robot" (Paul Graham is a co-founder of Y Combinator) that knows your entire past, your code, and your work, and can brainstorm with you constantly. Even if 95 of the 100 ideas it offers are wrong, as long as they spark those 5 brilliant ideas in you, the contribution to the world would be enormous. GPT-5.2 has already demonstrated extraordinary scientific capability to our internal scientists. A model that can generate scientific insights has no reason not to generate excellent product insights.
Question: I'm worried that models will trap us in outdated technologies. Current models struggle to pick up technologies that emerged within the past two years. In the future, could we guide a model to learn the latest emerging technologies?
Sam Altman: Absolutely. Essentially, a model is a "general-purpose inference engine." While they currently have a vast amount of world knowledge built into them, the milestone in the next few years will be: when you give a model a completely new environment, tool, or technology, it can learn to use it remarkably reliably after just one explanation (or letting it explore autonomously once). This is not far off.
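A hedged illustration of what "one explanation" might look like in practice: the sketch below describes a never-before-seen tool to a model a single time through the OpenAI function-calling interface and lets the model decide to invoke it. The tool query_inventory, its schema, and the model name are hypothetical; only the tools/tool_calls mechanics reflect the existing chat completions API.

```python
# A sketch of describing a brand-new tool to a model exactly once, via the
# OpenAI function-calling interface. The tool `query_inventory`, its schema,
# and the model name are hypothetical assumptions.
import json
from openai import OpenAI

client = OpenAI()

tools = [{
    "type": "function",
    "function": {
        "name": "query_inventory",  # hypothetical in-house tool the model has never seen
        "description": "Return the current stock count for a given SKU.",
        "parameters": {
            "type": "object",
            "properties": {"sku": {"type": "string"}},
            "required": ["sku"],
        },
    },
}]

resp = client.chat.completions.create(
    model="gpt-5.2",  # assumed identifier; use whatever model is available
    messages=[{"role": "user", "content": "How many units of SKU A-1042 are left?"}],
    tools=tools,
)

msg = resp.choices[0].message
if msg.tool_calls:  # the model chose to call the newly described tool
    call = msg.tool_calls[0]
    print(call.function.name, json.loads(call.function.arguments))
else:
    print(msg.content)
```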
Question: As a scientist, I find that research inspiration grows exponentially, but human energy is limited. Could models take over the entire research process?
Sam Altman: There is still a long way to go before achieving fully closed-loop, autonomous scientific research. While mathematical research may not require laboratories, top mathematicians still need deep involvement to correct intuitive biases in models.
This is similar to the history of chess: after Deep Blue defeated Kasparov, there was a period when "human-machine collaboration (Centaur)" was stronger than pure AI, but pure AI quickly regained dominance in the game.
For scientists, AI today is like an "unlimited pool of postdoctoral researchers." It can help you explore 20 new questions simultaneously, performing breadth searches. Regarding physics experiments, we're discussing whether OpenAI should build its own automated labs or have the global research community contribute experimental data. Currently, the research community's embrace of GPT-5.2 leads us to favor the latter, which would result in a more distributed, intelligent, and efficient research ecosystem.
Question: I'm more concerned about safety, and would ideally like stronger safeguards. Looking at 2026, AI has many potential problems, and one that causes us great anxiety is biosafety. These models are already quite powerful in the biological domain, and at present both OpenAI's strategy and the global strategy largely rest on restricting access to these models and using various classifiers to prevent them from creating new pathogens. But I don't think this approach can last long. What are your thoughts?
Sam Altman: I believe the world needs a fundamental shift in AI security, especially AI biosecurity—from “blocking” to “resilience”.
One of my co-founders once used an analogy I really like: fire safety. Fire initially brought enormous benefits to human society, and then it began to burn down entire cities. Humanity's first reaction was to restrict fire as much as possible. I only recently learned that the word "curfew" originally had to do with not allowing fires at night, because otherwise cities would burn down.
Later, we changed our approach, moving beyond simply trying to ban fire to improving our resilience against it: we developed fire safety regulations, invented flame-retardant materials, and established a comprehensive system. Now, as a society, we are doing quite well in dealing with fires.
I believe that AI must follow the same path. AI will become a real problem in bioterrorism; AI will also become a real problem in cybersecurity; but at the same time, AI is also an important part of the solution to these problems.
Therefore, I believe what's needed is a society-wide effort: not relying on a few "laboratories we trust" to always contain the risks correctly, but building resilient infrastructure, because there will inevitably be a large number of capable models in the world. We've discussed with many biology researchers and companies how to deal with novel pathogens. Many people are already involved, and there's considerable feedback that AI is helpful in this area, but this won't be a purely technical problem, nor will it be solved entirely by technology. The whole world needs to think about it differently than in the past. Frankly, I'm very anxious about the current situation. But I don't see any realistic option other than a resilience-centric approach. And, on the positive side, AI can indeed help us build this resilience more quickly.
However, if AI does suffer a "significant and serious" failure this year, I think biosafety is a very reasonable candidate for the flashpoint. And looking one or two years out, you can imagine many other things that could cause major problems.
As AI learns ever more efficiently, is human collaboration still important?
Question: My question is about human collaboration. As AI models become increasingly powerful, they make individual learning highly efficient, such as quickly mastering a new subject; we've seen this in ChatGPT and in educational experiments and strongly agree with it. But I often wonder: when you can get an answer at any time, why spend the time, and even endure the friction, of asking another person? You also mentioned that AI programming tools can complete, at extremely high speed, tasks that previously required human teamwork. So when we talk about collaboration, cooperation, and collective intelligence, human + AI is a powerful combination, but how will collaboration between humans change?
Sam Altman: There are many layers to this. I'm a bit older than most of you here. But even so, when Google came along, I was in high school. Back then, teachers tried to get students to promise "not to use Google" because everyone thought: if you can find everything so easily, why have history class? Why memorize anything?
In my opinion, this idea was completely absurd. My feeling at the time was: this will make me smarter, let me learn more and do more, and these are tools I'll keep using as an adult. It would be crazy to make me learn outdated skills just because they exist.
This is like forcing me to learn the abacus when calculators already exist—it may have been an important skill once, but it's worthless now. My view on AI tools is the same. I understand that AI tools are indeed a problem within the current education system. But that precisely shows that we need to change our teaching methods, not pretend AI doesn't exist.
The idea of "letting ChatGPT write for you" is part of the future. Of course, writing training remains important because writing is part of thinking. But the way we teach people how to think, and how we assess thinking ability, must change, and we shouldn't pretend that this change doesn't exist. I'm not pessimistic about it.
Those 10% of learners with exceptionally strong self-learning abilities are already doing remarkably well. We will find new ways to restructure the curriculum and bring the other students along. Regarding your other point—how to make this a collaborative process rather than "you getting very good at it alone in front of a computer"—we haven't seen any evidence so far that AI leads to a reduction in human interaction, and this is something we are continuously observing and measuring.
My intuition tells me the opposite: in a world saturated with AI, human connections will become more valuable, not less. We've already seen people exploring new interfaces to make collaboration easier. Even from the very beginning, when we considered developing our own hardware and devices, we were thinking: what should the experience of "multi-person collaboration + AI" look like?
While no one has quite gotten this completely right yet, I believe AI will enable this kind of collaboration in unprecedented ways. Imagine five people sitting around a table, with an AI or robot nearby—the team's productivity will be dramatically amplified. In the future, AI will be an integral part of every brainstorming session and every problem-solving effort, helping the entire group perform better.
With agents entering production systems on a large scale, what is the biggest, underestimated risk?
Question: As agents begin to operate at scale and directly manipulate production systems, what do you think are the most underestimated failure modes? Are they security, cost, or reliability? And what "difficult but important work" is currently not being adequately addressed?
Sam Altman: Almost every one of the issues you mentioned is real. One thing, however, surprised me personally, and it surprised many of us. When I first used Codex, I was absolutely certain of one thing: "I would never give it complete, unsupervised computer access."
I persisted for about two hours. Then I thought: it seems to be doing something really reasonable; having to confirm every step is too annoying; I might as well leave it open for a while and see what happens. As a result, I never turned off full access again. I found that many people have had similar experiences.
What truly worries me is that these tools are so powerful and convenient that, while their failure rate may be low, the consequences of failure could be catastrophic. Because failures are infrequent, people gradually drift into a mindset of "it should be fine, right?"
However, as models become more powerful and harder to understand, if there are subtle misalignments inside the model, or new systemic problems emerge after prolonged, complex use, you may already have planted a security vulnerability in a system. You can imagine "AI going out of control" with varying degrees of science-fiction flavor, but what I'm truly worried about is that people will be carried away by how powerful and enjoyable these tools are and stop thinking seriously about their complexity. Capabilities will increase very rapidly; we will grow accustomed to how models behave at a given stage and therefore trust them, without having built a sufficiently robust and holistic security infrastructure.
And so, without realizing it, we will drift toward a dangerous state.
I believe this alone is enough to give rise to a great company.
How should AI be integrated into early childhood and basic education?
Question: I'd like to return to the topic of education. In high school, I saw my classmates using ChatGPT for writing essays and doing assignments; now in university, we're discussing AI policies in various fields like Computer Science and Humanities. My question is: As a father, how do you view the impact of AI on education during the crucial formative years of kindergarten, elementary school, and middle school?
Sam Altman: Overall, I'm against the use of computers in kindergartens. Kindergarten should be more about running around outdoors, interacting with real objects, and learning how to interact with other people. So, AI aside, I think kindergartens mostly shouldn't even have computers.
From a developmental perspective, we still don't fully understand the long-term effects of technology on children. There has been a lot of research on the impact of social media on teenagers, and the findings have been quite negative. My intuition is that the impact of technology on younger children is likely even worse, yet it's discussed far less. Until we truly understand these effects, I don't think preschool children need to be using AI heavily.
Question: We are in the biopharmaceutical field. Generative AI has already been very helpful in areas such as clinical trial documentation and regulatory processes. We are now also trying to use it for drug design, especially compound design. However, a major bottleneck is the ability to perform 3D reasoning. Do you think there will be a critical inflection point here?
Sam Altman: We will definitely solve this problem. I'm not sure if it will be completed by 2026, but it's a very common and frequent need. We roughly know how to do it, but there are still many more urgent areas to advance. But this will definitely happen.
Reference link:
https://www.youtube.com/watch?v=Wpxv-8nG8ec&t=2s
This article is from the WeChat official account "AI Frontline" (author: Dongmei) and is published with authorization from 36Kr.




