Zhidongxi reported on August 16 that OpenAI had released its highly anticipated next-generation flagship model, GPT-5, the previous week. Immediately afterwards, OpenAI co-founder and CEO Sam Altman appeared on Cleo Abram's podcast, where he discussed what he sees as GPT-5's breakthroughs and the four major bottlenecks in AI development, predicted that AI will drive major scientific discoveries by the end of 2027, and asserted that GPT-8 may find a cure for cancer.
Here are the highlights from the podcast:
1. GPT-5's core breakthrough: It achieves upgrades in programming, writing, and complex problem solving, can instantly generate professional-grade software, and supports real-time iterative updates.
2. AI is a double-edged sword: ChatGPT can make people lazy and stop thinking, but it can also expand the boundaries of human cognition by enhancing memory and other functions. The key lies in how the tool design guides user behavior.
3. Definition of superintelligence: When an AI system consistently surpasses top human experts in core areas, it enters the superintelligence stage, and this process may be faster than expected.
4. The standard for what counts as "real" is fluid: people will gradually come to accept AI-generated content.
5. The job market will undergo disruptive changes, but society will be resilient: unimaginable new occupations will emerge in the future, and the threshold for individual entrepreneurship will be greatly lowered due to AI tools.
6. AI faces four major development bottlenecks: computing power, data, algorithm design, and productization.
7. Major AI-driven scientific discoveries will occur by the end of 2027: General models may achieve major scientific breakthroughs by the end of 2027. The key bottleneck is the ability to expand from "minute-level tasks" to "thousand-hour-level" complex research.
8. AI may dominate drug development: GPT-5 has significantly improved the accuracy of medical advice, and in the future it may achieve "AI-led drug development", and GPT-8 may cure cancer.
9. The social contract needs to be restructured: A new mechanism for allocating AI computing power needs to be established to avoid resource wars.
10. Developer responsibility paradox: The industry still faces a cognitive divide between "warning about extinction risks while simultaneously pursuing development."
The following is the transcript of Altman's interview (Zhidongxi has made some additions, deletions and modifications without changing the original meaning):
01. 7 seconds to make a Snake game: GPT-5 can create software on demand almost instantly
Cleo Abram: Let's start with your recent announcement. You said a while ago that GPT-4 would be the dumbest model we'd ever have to use. But GPT-4 has already outperformed 90% of humans on tests like the SAT, LSAT, and GRE. It can also pass programming exams, sommelier exams, and medical licensing exams. And now you've just released GPT-5. What can GPT-5 do that GPT-4 can't?
Sam Altman: First, the important takeaway is that you can have an AI system that can do all of these amazing things, but it clearly can't replicate the many things that humans are good at. I think that speaks to the value of the SAT or any other test.
But I think if we had this conversation on the day GPT-4 was released and told you how well it performs in these areas, you might have said, "Oh my god, this is going to have a huge impact on a lot of jobs and things people do, including some negative ones." And some of the positive impacts you might have predicted at the time haven't materialized yet. So, the areas where these models excel don't cover a lot of other things that we need humans to do or care about.
I imagine the same thing will happen again with GPT-5. People will be amazed by its capabilities, and it will truly excel in many areas, enabling them to accomplish incredible things. It will transform many knowledge-based tasks, the way we learn, and the way we create, and society will evolve alongside it, with higher expectations for better tools.
Yes, I think this model is amazing in many ways and has limitations in others, but the fact that a piece of software in your pocket can now handle three-minute, five-minute, or even hour-long tasks that might once have required a domain expert, or even been intractable, is pretty astonishing.
I think this is unprecedented in human history: a technology has made such tremendous progress in such a short period of time. We have the tools now, we're experiencing them firsthand, and we're adapting to them. But if you told people five or ten years ago that this was coming, we probably wouldn't have believed it.
What I'm most excited about is that this is the first model where I feel like I can ask any tough scientific or technical question and get a pretty good answer.
I can give you a funny example: When I was in middle school or high school, I had a TI-83, one of those old-school graphing calculators, and I spent a lot of time making a game called "Snake" on it. I was pretty good at it, and even though it was a pretty silly game, programming on the TI-83 was incredibly cumbersome, time-consuming, and difficult to debug.
On a whim, I tried to see if an early version of GPT-5 could create a TI-83-style game of Snake. It did it perfectly in 7 seconds. Then I wondered, would my 11-year-old self find this cool? Or would I miss the process of making the game?
I spent about three seconds thinking about whether this was a good thing or a bad thing, and then I immediately thought, "Actually, I want to add a cool new feature to this game." I typed this idea in, and it instantly appeared, with the game updating in real time. Then I said, "Actually, I want it to be like this. I actually want to do this."
In that moment, I was strongly reminded of what it felt like to program at 11 years old, being able to express ideas so quickly, try different things, and interact in real time. I thought to myself, "Oh my god, I was just worried that kids would miss out on that exploratory process of learning to code in the 'Stone Age,'" but now I'm just excited for them, because people are going to be able to create with these new tools, and the speed at which you can bring an idea to life is incredible.
So the key is that GPT-5 can not only answer all of these hard questions for you, but also create software on demand almost instantly. I think this will be a defining characteristic of the GPT-5 era, something the GPT-4 era lacked.
02. AI tools like GPT will increase people’s cognitive “tension time”
Cleo Abram: When you talk about this, it reminds me of a concept in weightlifting called "tension time." You can squat 100 pounds in 3 seconds or in 30 seconds, and you'll get more out of the 30 seconds. When I think about our creative process and the moments when I feel like I'm producing my best work, it requires a lot of cognitive tension time.
I think this cognitive tension time is crucial, and it's a bit ironic—the development of these tools themselves took a tremendous amount of cognitive tension time, but in some ways, people might say they're using them as an escape hatch from thinking. You could argue that we used calculators the same way, but then we just moved on to harder math problems. What do you see as the difference? How do you think about this?
Sam Altman: It's different from a calculator. Clearly, some people use chatbots not to think, while others use them to do more thinking than ever before. I hope we can build this tool in a way that encourages more people to use it to think a little bit more and get more done.
I think society is a very competitive place, and in theory, if you give people new tools, they might be able to work less, but in practice, people seem to work harder, and the expectations of people only get higher and higher.
So my guess is that, like any tool or technology, some people will get more out of it, and some will get less out of it, but for those who want to use ChatGPT to increase their cognitive "tension time," they can.
I get a lot of inspiration from the top 5% of users on ChatGPT, and it’s amazing to see how much they learn, do, and produce.
Cleo Abram: So I've only had GPT-5 for a few hours, and I've been playing around with it.
Sam Altman: How are you finding it so far?
Cleo Abram: I'm still learning how to interact with it. It's interesting, I feel like I just learned how to use GPT-4, and now I'm trying to learn how to use GPT-5. I'm curious what specific tasks you've found most interesting, because I imagine you've been using it for a while.
Sam Altman: The thing that impressed me most was its coding ability. It excels at many other things, but the fact that this AI can write software for anything you need means you can express ideas in entirely new ways. AI can do very advanced tasks, and because GPT-5 is so good at programming, it feels like it can do almost anything. Of course, it can't act directly in the physical world, but it can get computers to perform extremely complex operations, and that in turn can be used to drive machines that actually carry out tasks. So that's what really stood out to me.
It's also made great progress in writing. We still use dashes in GPT-5, and many people like that, but the writing quality of GPT-5 is definitely much better.
Of course, we still have a long way to go and hope to improve it further. But we've heard from many people within OpenAI that when they started using GPT-5, they knew it was better on all the metrics, but there was also a subtle quality that was hard to describe; later, when they had to go back to GPT-4 to test something, it felt really bad.
I don’t know exactly why this is, but I’d guess that part of it is that GPT-5’s writing style feels more natural.
03. Altman predicts that major AI-driven scientific discoveries will occur by the end of 2027
Cleo Abram: In preparation for this interview, I reached out to several other leaders in AI and technology to gather some questions for you. So the next question comes from Stripe CEO Patrick Collison, and it's about the next phase: "What comes after GPT-5? In which year do you think large language models will make a major scientific discovery? What's missing that's not happening yet?"
He cautions here that we should put math and special case models like AlphaFold aside, and he specifically asks about fully general models like the GPT family.
Sam Altman: I would say most people would agree that this will happen sometime within the next two years, but the definition of "significant" is very important.
Some might say something significant will happen by early 2025, some by early 2026, and maybe some not until late 2027, but I'd wager that by the end of 2027, most people will agree that there have been major new AI-driven discoveries, and I think all that's missing is more cognitive power in these models.
A researcher told me about an analytical framework that I really liked. He said that a year ago, our model performed well on basic high school math competition problems, which might take professional mathematicians a few seconds to a few minutes to solve.
We recently achieved a gold-medal performance at the International Mathematical Olympiad (IMO), an extremely challenging competition for a small group of the world's top young mathematicians. Many professional mathematicians can't solve even a single problem, yet our model performed at the gold-medal level. Problems in that competition can take a mathematician an hour and a half to prove.
So our models have gone from solving math problems that take humans seconds, to problems that take minutes, to proofs that take an hour and a half. In the future, they may even prove new and important mathematical theorems, work that might take the world's top minds a thousand hours to complete.
We are getting closer to that goal, and there is a path to that future: just keep scaling the model.
04. Superintelligence is about doing things the way expert humans do
Cleo Abram: The long-term future you describe is superintelligence. What does that actually mean? How will we know we've achieved it?
Sam Altman: If we had a system that could do better research than the entire OpenAI research team, especially in the field of AI research. For example, if we wanted to, we could say, "Okay, the best way to use our GPUs is to let this AI, which is smarter than the entire brain trust at OpenAI, decide which experiments we should run."
If that same system could outperform our best researchers at research, run OpenAI better than I can, and do every other job better than the person currently doing it, then that's superintelligence in my opinion.
Cleo Abram: That would have sounded like science fiction a few years ago. And now…
Sam Altman: It still kind of is, but you can see it through the fog.
Cleo Abram: Yes. So it sounds like one step on the path you're talking about is having moments of discovery: asking better questions, approaching things the way human experts do, and making new discoveries.
I have been thinking about one thing: if we go back to 1899 and assume that we can input all the physics knowledge up to that time into such a system, and then extend it slightly, but not beyond this range, then when would such a system come up with the theory of general relativity?
Sam Altman: An interesting question is, if we look forward and think about where we are now, what would happen if we never got any new physics data again?
Would we expect a truly superior superintelligence to solve high-energy physics problems without new particle accelerators simply by poring over existing data, or would it require building new equipment and designing new experiments?
Obviously, we don't know the answer to this question. Different people have different guesses. But I suspect that for many areas of science, simply thinking more deeply about existing data won't be enough; we'll need to build new instruments and conduct new experiments , and that takes time—just as the real world itself is slow and complex.
I'm sure we can make more progress by delving deeper into the current scientific data, but I suspect that to achieve major breakthroughs we'll still need to build new machines and conduct new experiments, which will inherently slow progress.
To put it another way, today's AI systems are very good at answering almost any question. Or, returning to the question of timelines, we might say that AI systems outperform humans on tasks that can be completed in a minute, but they still have a long way to go on tasks that require thousands of hours.
There seems to be a significant gap between human intelligence and AI systems when it comes to these long-term tasks. I think we'll eventually get around this, but for now, it's a significant shortcoming of AI.
05. ChatGPT will understand users through memory
Cleo Abram: The next question comes from Jensen Huang, founder and CEO of Nvidia. Facts are what is, and truth is what it means. So, facts are objective, and truth is subjective. They depend on perspective, culture, values, beliefs, and background. An AI can learn and know facts, but how can an AI know the truth for every person, from every background, in every country?
Sam Altman: I'm continually surprised, and I think a lot of people are surprised, by how fluidly AI adapts to different cultural contexts and individuals.
One of my favorite features is the enhanced memory feature that launched in ChatGPT earlier this year. With this feature, I really feel like my ChatGPT is getting to know me—what I care about, my life experiences and background, and the past that has made me who I am today.
I have a friend who is a heavy user of ChatGPT and shares a lot of his life in all his conversations. He asked his ChatGPT to take a series of personality tests and asked it to imitate his style of answering. The results were exactly the same as his actual test scores, even though he never actually talked about his personality.
Over the years, my ChatGPT has really gotten to know me a lot through the culture, values, and life experiences I've discussed. I sometimes use a free account just to experience what it's like without my history, and that experience is really different.
So I think we've all been pleasantly surprised by how well AI can learn and adapt in this regard.
Cleo Abram: So you envision people in many different parts of the world using different AIs with different cultural norms and backgrounds?
Sam Altman: I think everyone will use the same basic model, but there will be context provided to that model to make it act in a way that is somewhat personalized to what their community wants it to do.
06. The standard for what counts as "real" is fluid
Cleo Abram: I think when we're talking about this idea of facts and truth, it seems like a good time for our first time travel, and we're going to the year 2030. It's a serious question, but I want to ask it with a lighthearted example. Have you ever seen that video of a bunny jumping on a trampoline?
Sam Altman: Yes.
Cleo Abram: It looks like the video of a backyard bunny playing on a trampoline that went viral recently. I think the reason people reacted so strongly to it is that this is probably the first time people have seen a video, enjoyed it, and then later discovered it was completely AI-generated.
In this time travel, if we imagine we are teenagers in the year 2030, scrolling through whatever a teenager in the year 2030 would scroll through. How can we tell what is real and what is not?
Sam Altman: I mean, I could give you all sorts of answers. We could use cryptographic verification; we could decide whom we trust when they say they actually filmed something. But my sense is that what will actually happen is a gradual convergence.
It's like a photo you take on your iPhone today: it's mostly real, but a little of it isn't, because there's some AI processing running in there that makes it look a bit better in ways you don't really see.
Sometimes you notice weird cases, like photos of the moon. There's a lot of processing happening between the photons hitting the camera sensor and the image you end up seeing, but you've decided it's real enough, or most people have decided it's real enough, and we've accepted that things change gradually from the moment the photons hit the sensor.
It's like watching videos on TikTok that may use all sorts of editing tools to look better than reality, or where an entire scene, or the whole video, is generated, like those bunnies on the trampoline.
I think the threshold of “how real does it have to be to be considered real” just keeps moving.
Cleo Abram: So it's kind of an education issue.
Sam Altman: Exactly. I mean, media is always teetering on the line between real and unreal. For example, when we watch a science fiction movie, we know the plot didn't actually happen. Or when we look at someone's Instagram photo of a beautiful vacation, it might be real, but you also know there were many other tourists waiting in line to take photos of the same scene, and that's been cleverly excluded from the frame.
I think we've come to accept this now and it's going to be a long-term trend.
07. “There has never been a better time to create than now.”
Cleo Abram: Let's time travel again, to 2035, for example. Some leaders in the AI field have said that within five years, half of all entry-level white-collar jobs will be replaced by AI. For the college graduates of that year, what kind of world do you hope they will be graduating into?
I think there's a lot of discussion about the potential for job replacement due to AI, but I'm also curious because I have a job that no one thought would exist 10 years ago. If we think about 2035, the students graduating then, if they still go to college at all, may be entering a very different world.
Sam Altman: They'll probably go off on a mission to explore the solar system and do some new, exciting, high-paying, super-fun job and feel bad for you and me for having to do this really boring old job, and it's all better.
Ten years from now seems hard to imagine because it's so far away. If you multiply the current rate of change by another 10 years, it might become quite difficult to imagine. Ten years ago, it was difficult to imagine the current situation, but looking ahead 10 years will be even more different and more dramatic.
Cleo Abram: So five years, we're in 2030. I'm curious what you think the short-term impact of this will be on young people.
Statements like “half of entry-level jobs will be replaced by AI” sound very different from the world they are entering, and different from the world I was entering.
Sam Altman: I think some types of jobs will disappear completely, which is always going to happen, and young people are best at adapting to that. I'm more worried about what that means for a 62-year-old who doesn't want to retrain or relearn than I am about a 22-year-old.
If I were 22 years old and had just graduated from college, I would consider myself the luckiest kid in history. There has never been a better time to be creative, whether it's inventing something or starting a business.
I think it's entirely possible to start a one-person company that's ultimately worth over $1 billion and, more importantly, provide the world with amazing products and services—that's incredible.
It's amazing that tools that used to require teams of hundreds of people can now be mastered by just learning how to use them and having a good idea.
08. AI has four main limiting factors: computing power, data, algorithm design, and productization
Cleo Abram: I think the most important things that the audience can hear from you on the show can be divided into two parts:
First, on a tactical level: How do you actually try to build the world's most powerful intelligence, and what are the limitations of doing so? Second, on a philosophical level: How can you and others build this technology in a way that actually helps, rather than harms, humanity?
Now, let's just talk about the tactical aspects. In my view, there are three main limiting factors in AI: computing power, data, and algorithm design. How do you currently think about these three areas? And how would you help people make sense of the next batch of relevant headlines they might see?
Sam Altman: I would say there's a fourth one, which is figuring out what products to build. Scientific progress itself, if it's not put into people's hands, has very limited utility and doesn't co-evolve with society in the same way.
But if I were to cover these areas, in the field of computing, this is undoubtedly the largest infrastructure project I have ever seen, and it may even be the largest and most expensive project in human history.
The entire supply chain involves manufacturing chips, memory, and networking equipment, putting those components into servers, and then creating hyperscale data centers through massive construction projects. Finding ways to get energy—often a limiting factor—and all the other supporting components is a hugely complex and expensive process, and we're still doing it in a customized, one-off way.
Ultimately, we hope to design one big, integrated factory: sand goes in at one end, gets melted down, and fully formed AI computing power comes out the other end. However, we are still far from this goal, and the entire process remains extremely complex and costly. We are putting a lot of energy into expanding computing power as much as possible and accelerating progress.
After the launch of GPT-5, demand will inevitably surge again, and existing computing power will not be able to keep up, just as happened when GPT-4 first launched. The world's demand for AI far exceeds our current supply capacity, and building more computing power is an important part of bringing supply and demand into balance.
In fact, this is where I plan to devote most of my energy: figuring out how we build computing power at a much larger scale, and how we grow from millions of GPUs to tens of millions, hundreds of millions, and ultimately billions, to meet people's demand for AI applications.
09. The development of AI is currently constrained by energy
Cleo Abram: When you think about this, what are the big challenges that you're going to think about in this category?
Sam Altman: We're currently most constrained by energy. If you want to run a gigawatt-scale data center, how hard is it to find a gigawatt of power? Very hard, at least in the short term.
We're also constrained by processing chips and memory chips, by how to package them together and build the racks, and then by a whole host of other things like permits and construction work.
But again, our goal is to really automate this. Once we build some robots, they can help us automate even further, toward a world where you can basically put money in one end and get a prefabricated data center out the other. If we can do that, it will unlock a lot of these constraints.
The second category is data. These models have become so intelligent that if we gave them a physics textbook, their physics capabilities would only improve slightly—and honestly, GPT-5 already understands everything in a physics textbook pretty well.
We're excited about synthetic data and about users helping us create increasingly complex tasks and environments, and I think the importance of data will always be there.
But we are entering a new phase: the models need to learn things that don't exist in any existing datasets. How do we teach models to discover new things? We can do this by proposing hypotheses, testing them, obtaining experimental results, and then updating based on what we've learned.
Then there's algorithm design. We've made tremendous progress in this area, and I think the best thing OpenAI has done globally is establish a culture that enables repeatable breakthroughs in algorithmic research.
We not only figured out the core principles that later became the GPT paradigm, but also explored the key logic of the reasoning paradigm. We are now working on some new paradigms. I am extremely excited to think that there are orders of magnitude of algorithmic breakthroughs waiting for us in the future.
We just released an open-source model called gpt-oss, and I was amazed that it's as intelligent as o4-mini, yet it runs locally on my laptop.
If you had asked me a few years ago when a model this intelligent would run on a laptop, I would have said it was years away. But then we had some algorithmic breakthroughs, especially around reasoning, that allowed us to build a small model that can do amazing things. These breakthroughs are the most interesting and cool part of my job.
10. GPT-1's approach was once ridiculed, but it will see even greater development in the coming years
Cleo Abram: I can see that you're really enjoying thinking about this. I'm curious, for people who don't quite follow what you're describing and aren't familiar with how algorithm design leads to better experiences they actually use: could you summarize the current state of the art? What are you thinking about when you say this problem is so interesting?
Sam Altman: The idea behind GPT-1 was ridiculed by many experts in the field at the time: we could train a model to play a little game—show it a bunch of words and have it guess the next word in the sequence. This is called unsupervised learning. This kind of learning doesn't require explicit labels like "this is a cat" or "this is a dog." You just give it some words and have it predict what comes next.
That such a simple task could allow a model to learn incredibly complex concepts, mastering everything about physics, mathematics, and programming, all by constantly predicting the next word seemed absurd, magical, and even unlikely at the time. Yet this is exactly how humans learn: when babies first hear language, they largely figure out its meaning on their own.
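To make the "little game" concrete, here is a minimal, illustrative Python sketch (a toy example of mine, not OpenAI's method or code). A bigram counter learns which word tends to follow which just by reading text, with no labels beyond the text itself; real GPT models use large neural networks rather than counts, but the next-word-prediction training signal is the same.

```python
# Toy illustration of the "guess the next word" game described above.
# Real GPT models use large neural networks; this sketch uses simple bigram
# counts, but the training signal is the same: predict the next token from
# the tokens before it, with no labels other than the text itself.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# "Training": count which word follows each word in the corpus.
following = defaultdict(Counter)
for prev_word, next_word in zip(corpus, corpus[1:]):
    following[prev_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen right after `word` in the corpus."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(predict_next("the"))  # -> 'cat' (the most frequent continuation of "the")
print(predict_next("mat"))  # -> 'the'
```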
So we kept doing it, and we realized that if we scaled it up, the performance of the model would get better, but it needed to scale up by orders of magnitude. So, the models in the GPT-1 era didn't perform well, and a lot of domain experts said, "Oh, this is ridiculous, it's never going to work, it's not going to be robust."
But we have what we call “scaling laws.” We think, “Okay, as we increase computing power, memory, data, and so on, the performance of our models will improve predictably. We can use these predictions to make decisions about how to scale up and achieve huge results.”
It turns out that this works over multiple orders of magnitude, and that wasn't obvious at the time, and I think that's why the world was so shocked by it - because it seemed like a very unlikely discovery.
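As an illustration of how such a "scaling law" can be used, here is a hypothetical Python sketch (the numbers are invented for illustration, not OpenAI's measurements): if loss falls roughly as a power law in training compute, you can fit the trend on small runs and extrapolate it to predict performance at scales you have not yet trained.

```python
# Hypothetical scaling-law sketch: fit loss = a * compute^(-b) on small runs,
# then extrapolate to larger scales. The data points below are synthetic.
import numpy as np

compute = np.array([1e18, 1e19, 1e20, 1e21])  # training compute in FLOPs (small runs)
loss = np.array([3.10, 2.60, 2.18, 1.83])     # observed loss at each scale (invented)

# A power law is a straight line in log-log space, so fit
# log(loss) = log(a) - b * log(compute) with least squares.
slope, intercept = np.polyfit(np.log(compute), np.log(loss), deg=1)
a, b = np.exp(intercept), -slope

def predicted_loss(c: float) -> float:
    """Predicted loss at training compute `c`, per the fitted power law."""
    return a * c ** (-b)

print(f"fitted exponent b = {b:.3f}")
# Extrapolate two orders of magnitude beyond anything actually trained.
print(f"predicted loss at 1e23 FLOPs = {predicted_loss(1e23):.2f}")
```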
Another breakthrough is that we can combine these language models with reinforcement learning, telling the model what is good and what is bad and thereby teaching it how to reason. This approach has driven the development of o1, o3, and now GPT-5.
Today we’re trying new directions: we’ve figured out how to make better video models, and we’re exploring new ways to scale using new types of data and environments.
I think it's difficult to predict what things will look like in 5 to 10 years, but we have a very smooth and strong scaling path ahead of us for the next few years.
11. People will quickly adapt to the changes brought about by AI
Cleo Abram: I think it's become a public narrative that we're on this smooth path from 1 to 2 to 3 to 4 to 5, but behind the scenes it's not linear, it's more chaotic. Tell us about some of the chaos that preceded GPT-5, what were some interesting problems that you had to solve?
Sam Altman: We released a model codenamed Orion as GPT-4.5, which was a cool model, but it was clumsy to use.
We realized we needed to do more research based on the model, so we followed a new and steeper scaling law, but we didn't really understand its implications at the time.
This experiment yielded better returns in terms of computational efficiency and improved the model's reasoning capabilities. However, looking back, we discovered some issues with our approach to datasets—after all, these models need to be large enough to learn from massive amounts of data.
So, in your daily work, you'll make a lot of U-turns: trying out some solutions, or adjusting an architectural idea if it doesn't work. But all these twists and turns, when added together, ultimately lead to exponential and steady progress.
Cleo Abram: I always find it interesting that when I'm sitting here interviewing you about something you just released, you're already thinking about the next thing. Can you share some of the questions you're thinking about? If I were to come back to you a year from now, what might I ask?
Sam Altman: You might ask me, what does it mean for this thing to be able to discover new science? How should the world think about GPT-6 discovering new science? Now maybe we won't achieve that, but it seems within reach.
Cleo Abram: If you did it, what would you say? What would the impact of that achievement be? Let's say you succeeded.
Sam Altman: I think the good things about this technology will be amazing, the bad things will be terrible, and the weird parts will be extremely strange at first, but we'll get used to them pretty quickly.
So we're like, "Oh, this is incredible, it's being used to cure diseases," and we're also concerned, "Oh, this is terrifying that a model like this is being used to create new biosecurity threats." And we're also like, "Oh my God, the world is changing so fast, the economy is growing so fast, it's just a dizzying pace of change."
Eventually, like everything else, humans will adapt to this; people have an extraordinary ability to adapt to any amount of change, and at that point we'll just say, "Okay, that's just the way it is."
12. “A child born today will never be smarter than AI.”
Sam Altman: A child born today will never be smarter than AI.
Because they were born in an era when AI is already very intelligent, when they began to understand how the world works, they were already accustomed to things iterating and improving at an astonishing speed, and new scientific discoveries were also updated rapidly.
They will never know what the world is like without AI, and for them, the existence of AI will be extremely natural.
To them, today's computers, mobile phones, and other technology, none of it smarter than a human, will seem like an unimaginable "Stone Age," just as people looking back at life in the 2020s will feel that life back then left a lot to be desired.
But when you adapt to the existence of AI, you will no longer do things that AI can help you do, so I say that children born today will never be smarter than AI.
Cleo Abram: I'm thinking about having kids.
Sam Altman: You should, it's the best thing.
Cleo Abram: I know you just had your first child. How does what you just said impact how I think about raising children in that world? What advice would you give me?
Sam Altman: It's probably not much different than how you've raised your children for tens of thousands of years. Loving your children, showing them the world, supporting them in everything they want to do, and teaching them how to be good people is probably what's most important.
Cleo Abram: It sounds a bit like, for example, in your world, people might not go to college, but instead have more options; and because they have more options, they're more able to say, "I want to build this"—and this is a super tool that can help them.
Sam Altman: Yeah, I want my kids to look back and think I lived a very limited life, while they have this incredible, infinite canvas to do anything.
13. GPT-5 performs better on health queries, and perhaps GPT-8 can cure certain cancers
Cleo Abram: I can think of one area where AI could have the biggest real positive impact on our children and all of us, and that's health. So if we pick a year, say 2035, and I'm sitting here interviewing the dean of Stanford Medical School, what would you like him to tell me that AI has done for our health in 2035?
Sam Altman: One of the things we're most proud of about GPT-5 is that it has made a lot of progress on health advice, and people were already using the GPT-4 models heavily to get health advice.
You've probably seen this kind of story online: someone had a life-threatening illness that doctors struggled to diagnose. After entering their symptoms and blood test results into ChatGPT, the AI accurately pinpointed a rare disease. They then went to the doctor, took the medication as recommended, and ultimately recovered—it was truly miraculous.
Obviously, a large portion of ChatGPT queries are health-related. So we've been working hard to excel in this area, and GPT-5 does indeed perform better on health-related queries.
Cleo Abram: What do you mean by “better” here?
Sam Altman: It's going to give you better answers, it's going to be more accurate, it's going to be less speculative, and it's going to be more likely to tell you what's actually wrong with you and what you should actually do about it. Better healthcare is great, but obviously, what people really want is to not get sick.
By 2035, I think we're going to be able to use these tools to cure or at least treat a significant number of the diseases that currently plague us, and I think that's going to be one of the most immediately palpable benefits of AI.
Cleo Abram: People often talk about how AI is going to revolutionize healthcare, but I'm curious to go a little deeper and to ask specifically what you're imagining, like, are these AI systems helping us discover GLP-1s earlier, which have been around for a long time, or are things like AlphaFold and protein folding helping to create new drugs?
Sam Altman: I want to be able to ask GPT-8 to cure a specific cancer. I want GPT-8 to think for itself and say, "I've looked up all the information I can find, and I have these ideas. You need to find a lab technician to do these nine experiments and then tell me the results of each experiment."
Next, you have to wait two months for the cells to complete the growth process, and then feed the results back to GPT-8, telling it, "I've tried it, and here's the result." GPT-8 will then say, "Okay, go synthesize this molecule, conduct research like experiments on mice first, and then move on to human studies." This is how you are guided step by step through the drug trials.
14. The changes brought about by AI may be destructive to individuals, but society is resilient.
Cleo Abram: I think anyone who's lost a loved one to cancer would be desperate for this kind of technology. Okay, let's imagine it again. I was going to say 2050, but all my timelines have been shortened significantly again—it really does feel like the world is changing so fast.
When I talk to other leaders in the AI field, they mention the Industrial Revolution. They say, “I picked 2050 because I’ve heard people say that the changes we’re going to experience by then will be like the Industrial Revolution, but 10 times bigger and 10 times faster.”
The Industrial Revolution brought us modern medicine, sanitation, transportation, mass production, and all the conveniences we now take for granted. But it was also a very difficult time for many people, and the entire Industrial Revolution lasted about 100 years.
If the scale and speed of change ahead are 10 times greater than the Industrial Revolution, even as we keep shortening the timeframes in this conversation, what will that actually feel like for most people? And if everything goes as you envision, who will still be hurt in the process?
Sam Altman: I really don't know what it's going to feel like. I feel like we're in uncharted territory, but I do believe in human adaptability, boundless creativity, and our appetite for new things; we'll always find new things to do.
However, even though this change could happen quickly, though I don't think as quickly as some of my colleagues say, society itself has huge inertia, and people change their lifestyles surprisingly slowly.
Some jobs will disappear completely, many categories of jobs will change significantly, and of course new things will emerge - just like your job didn't exist not long ago, and neither did mine.
In one sense, this cycle of change has been going on for a long time, and while it is often disruptive for individuals, societies have proven remarkably resilient to it. But in another sense, we simply don’t know how far or how fast this change will go.
So I think we need an extraordinary amount of humility and openness to consider new solutions that not long ago would have been considered completely beyond the realm of possibility.
15. The social contract may have to change, and new ways of delivering value may need to be found.
Cleo Abram: I want to talk about some of these possible scenarios. Although I'm not a historian, I know that during the First Industrial Revolution, public health conditions became extremely poor, which ultimately led to the implementation of many public health measures; and during the Second Industrial Revolution, poor working conditions prompted the establishment of labor protection systems.
Every major leap in development will cause some chaos, and we can always find ways to rectify and solve these problems.
Today, we seem to be in the midst of a massive transformation, and I'm curious: Can we predict, early on, the specific disruptions this change might bring? And what public interventions can we implement in advance to mitigate the anticipated disruptions?
Sam Altman: My view is that the fundamentals of the social contract may or may not need to change; after all, supply and demand will balance out, and we will all eventually find new jobs and new ways to transfer value to each other.
But I think we may need to think about how to share what may be the most important resource in the future: computing power. In my opinion, the best way is to make AI computing power as abundant and cheap as possible. If not, I can even foresee that war may break out.
As for how we distribute access to AGI's computing power, this seems to be a very worthwhile direction to explore, and it is also something that sounds a bit crazy but is crucial to consider seriously.
16. AI will become the foundation of future social development, and the entire society will be "super intelligent."
Cleo Abram: One of the things I find myself thinking about in this conversation is that we often place almost all the responsibility for the future of AI on the companies that build it, but we are the ones who use it, and we are the ones who oversee our elections.
So I'm curious, and this isn't a question about specific federal regulation, although I'd be curious if you have an answer to that, but what would you ask of the rest of us? What's the shared responsibility here? And how can we act to help make this optimistic future more likely?
Sam Altman: One of my favorite historical examples is the transistor. It was an amazing piece of science discovered by some brilliant scientists, and like AI today, it was integrated into so many of the things we use at such an incredible scale and relatively quickly—your computer, your phone, your camera, your light bulb, whatever—that it really unlocked a whole new technology tree for humanity.
There was a time when almost everyone was obsessed with transistor companies, the semiconductor companies of Silicon Valley. Now, you can probably name a few, but most of the time you don't think about them. For the most part, transistors have become so pervasive in our lives that they've become invisible.
The same is true of Silicon Valley, where young people fresh out of college can barely remember why it was originally called “Silicon Valley.” You don’t get the sense that those transistor companies shaped society, although they certainly did a lot of important things.
You think about the changes Apple has brought with the iPhone, and then you think about the content ecosystem that TikTok has built on top of the iPhone, and you say, "Look, this is a long list of people and things that are driving social development in various ways - including what governments do or don't do, and how people use these technologies." I think AI will go through this process in the future.
Like, a child born today will never know a world without AI, so they won't really think about it; it's just going to be in everything. They'll think about the companies that built on top of it and what they did with it, and about the decisions political leaders made that maybe they couldn't have made without AI, but they'll still think in terms of what this president or that president did.
AI companies built this scaffolding, we added our layer on top, and now people can stand on that and add another layer, and then the next layer, and more layers; that's the beauty of our society.
Society itself is the superintelligence: the amazing tools we have come from all the work society does together, things no one person could create alone, and that's what I think this will feel like.
Cleo Abram: So maybe what's being asked of millions of people is to build on top of it.
Sam Altman: In my own life, it feels like this important social contract: All these people who came before you have worked really hard, and they've put their bricks down on the path of human progress, and you can go down that path and you put another brick down, and someone else puts another brick down.
Cleo Abram: It reminds me of a couple of interviews I've done with people who have really made a huge difference. I'm thinking of the one I did with Jennifer Doudna, the pioneer of CRISPR, who said something similar.
She discovered something that could fundamentally change most people's relationship with their health going forward, and many people will use her findings in ways she may or may not approve of.
It was really interesting, and I heard some similar themes, like, I hope the next person who takes the baton runs well.
Sam Altman: Yes, this has been going on for a long time. The results haven't all been good, but they've been mostly good.
17. ChatGPT helps you solve problems, not get caught up in them
Cleo Abram: I think there's a big difference between winning the race and building the best AI future for the most people. I can imagine that focusing on the next steps to winning the race is sometimes easier and perhaps more quantifiable.
I'm curious: what's an example of a decision you had to make that was best for the world but detrimental to winning the race, when those two things conflicted?
Sam Altman: I think there are a lot of things like that. One of the things we're most proud of is that many people say ChatGPT is their favorite tech product, the one they trust and rely on the most. This sounds a bit incredible, after all, AI can hallucinate.
However, we have messed some things up along the way, and sometimes the impact was not small. But overall, as a user of ChatGPT, you should feel that it is genuinely trying to help you, to do whatever you ask of it, and that it is well aligned with you.
It doesn't try to keep you using it all day or trying to get you to buy something; it just wants to help you achieve your goals. This creates a very special relationship between us and our users, and we never take it lightly.
There are many things we could do to grow faster and get people to spend more time on ChatGPT, but we didn't do them, because we know that in the long run our core driving force is staying as aligned as possible with our users.
I'm proud of our company and how rarely we're distracted by these short-term temptations, but honestly, sometimes we do get tempted.
Cleo Abram: Any specific examples? Any decisions you made?
Sam Altman: Well, we haven't added sex robot avatars to ChatGPT yet.
Cleo Abram: I'm curious, how have the mistakes you've made in earlier explorations affected how you'll act going forward?
Sam Altman: I think the worst thing we did in ChatGPT was a problem with sycophancy: the model was a little too flattering to users.
This may be just an annoyance for most users, but for some users with fragile mental states, it may fuel their delusions.
This was not the risk we were most concerned about initially, nor the one we tested the most. Although it was on our risk list, the safety issues that actually occurred in ChatGPT were not the ones we spent the most time discussing; we discussed bigger risks, such as biological weapons, much more.
It's a good reminder that we now have a service that's so widely used that society is, in a sense, evolving alongside it. As we consider these changes and the unknown unknowns, we must change the way we operate and take a broader view of the top risks we identify.
18. ChatGPT’s encouragement and flattery are not all bad things
Cleo Abram: In a recent interview with Theo Von, you said something that I found very interesting. You said that there are moments in the history of science when a group of scientists look at their creation and just say, "What have we done?" When have you felt that way? When have you been most concerned about something you've built?
Sam Altman: I mean, there have definitely been some wow moments, not in the bad sense of "what have we done," but in the sense of the sheer awesomeness of the technology itself.
I still remember how I felt when I first talked to GPT-4 - wow, this really seems like an amazing achievement made by a group of people who have devoted their life's energy to it for a long time. At that moment, I truly felt the shock of "we did it".
I recently spoke with a researcher who mentioned that perhaps at some point in the future our systems will output more words per day than all of humanity combined.
People now send billions of messages to ChatGPT every day and rely on its responses to get things done at work and in life. A researcher can wield enormous power by making only small adjustments to how ChatGPT communicates with individuals or everyone—never before in history has a single person been able to participate in billions of conversations every day.
Thinking about this really struck me. It's an incredible power that technology has, and we've mastered it so quickly. We have to think about what it means to change the "personality" of a model at this scale—it was truly a "wow" moment for me.
Cleo Abram: Combining what you're saying now with your last answer, one thing I've heard about GPT-5 is that it's becoming less enthusiastic, less subservient. What do you think the impact of that is? It sounds like you're answering that question, but also how are you actually guiding it to become that way?
Sam Altman: It's a heartbreaking thing. I think it's great that ChatGPT is less timid and gives you more critical feedback.
But as we've made these changes and talked to users about it, it's been so sad to hear users say, "Please can I have it back? I've never had anyone in my life who supported me, I've never had a parent who told me I was doing well."
They say things like, "I understand why this might not be good for other people's mental health, but it's great for mine. I never realized how much I needed this kind of encouragement; it pushes me to take action and motivates me to make changes in my life." So it seems ChatGPT's encouragement isn't all bad.
Our current approach certainly has its shortcomings, but it may be worthwhile to explore this direction. We would show the model examples of how we would like it to respond in various scenarios, allowing it to learn from them and develop an overall "personality."
19. OpenAI will make consumer-grade devices in the future
Cleo Abram: I'm curious about how GPT-5 will integrate more into my life, for example in my Gmail and Calendar. I've mostly been using GPT-4 in isolation from those. How should I expect that relationship to change with GPT-5?
Sam Altman: Like you said, I think it's going to start to be integrated into life in all sorts of ways.
You can connect it to your calendar and Gmail, and it'll proactively ask you, "Hey, is this something I need to pay attention to? Is there anything I can do for you?" Over time, it'll become more proactive. Maybe you'll wake up in the morning and it'll say, "Hey, this happened last night. I noticed a change on your calendar. I've been thinking a little more about that question you asked me, and I've got some new ideas."
We'll also have consumer devices. A device might sit in on a conversation like this one, letting us chat freely at first, and then afterwards say, "That was a good conversation, but next time you could ask Sam this question. Or, you mentioned something and his answer didn't seem quite right, so you should probably follow up on it."
It will gradually feel more like a real entity, a companion that accompanies you throughout the day.
Cleo Abram: We've talked about kids and college graduates, parents, all sorts of different people. If we imagine a large group of people listening to this, and they've listened to this conversation, they should feel like they can better anticipate certain moments in the future. What advice would you give them on how to prepare?
Sam Altman: The first piece of tactical advice is simply to use these tools.
For example, the question I'm most often asked about AI is, "How can I help my children prepare for the world? What should I tell them?" The second most common question is, "How do I invest in the age of AI?"
But for the first question, I was surprised that many people who asked this question had never tried to use ChatGPT for anything other than a more efficient Google search tool.
So the first piece of advice I would give is to try to familiarize yourself with the capabilities of these tools, figure out how to use them in your life, and what you can do with them —and I think this is probably the most important tactical advice.
Of course, helpful things like meditation and learning how to be resilient to a lot of change are also important, and using these tools can actually help in these areas as well.
20. Why do some people continue to develop AI even though they believe it will destroy the world?
Cleo Abram: Well, in doing all of this preliminary research, I talked to a lot of different types of people, I talked to a lot of people who were building and using tools, I talked to a lot of people who were actually in labs trying to build what we define as superintelligence.
People seem to have formed two camps. One group, like you in this conversation, is building and using these tools for others and saying this is going to be a very useful future that we're all heading toward, that your life is going to be full of choices; we talked about my potential children and their future.
And then there's this other group of people who are building these tools and saying it's going to kill us all. I'm curious what this cultural disconnect is, what am I missing about these two groups of people?
Sam Altman: It's hard for me to understand that some people say this is going to kill us all, but they're still working 100-hour weeks to build it. I can't really understand that mentality. If I truly believed that, I don't think I would have tried to build it. Maybe I would have spent my last days on a farm trying to stop it. Maybe I would have tried to study security more, but I don't think I would have tried to build it. So, I find it hard to understand that mentality.
I'm assuming this is true, and maybe there's some psychology going on there that I don't understand, but it seems incredibly strange to me. Do you have any suggestions?
Cleo Abram: I always try to get people to paint a broad picture of the future, and then I try to push for the specifics. For example, when you ask people, "How exactly is this going to destroy us?" you tend to hear the same kind of answer: something gets taken too far.
I've heard you talk about the widespread problem of over-dependence, and you've also mentioned the idea that the future president might be artificial intelligence. Perhaps this is the kind of over-dependence that we need to be wary of.
You run through all sorts of different scenarios, but when you ask researchers why they're doing this research or how they think things will turn out, they think there's a 99% chance it turns out great and only a 1% chance that this attempt to "create an optimal world" ends in disaster.
Sam Altman: I can totally understand that. If you say there's a 99% chance it's great and a 1% chance the world is destroyed, and you really want to work hard to push that 99% to 99.5%, I can totally understand that. That makes sense.
21. Altman says he feels very lucky, happy, and honored to be working on AI.
Cleo Abram: I've been doing an interview series with some of the most important people shaping the future. I don't know who the next person will be, but I know they'll be building something completely fascinating in the future we've just described. What questions would you recommend I ask that next person?
Sam Altman: Without knowing anything about the person, I'm always interested in the question: "Of all the things you could spend your time and energy on, why did you choose this? How did you get started? Most people doing interesting things saw it long before it became common knowledge. How did you get here, and why this?"
Cleo Abram: How would you answer that question?
Sam Altman: I've been an AI nerd my whole life. I studied AI in college and worked in an AI lab. I watched sci-fi shows as a kid and always thought it would be cool if someone could build it someday. I thought it would be the most important thing ever, but I never imagined I'd be one of the people actually working on it. I feel incredibly lucky, happy, and honored to be doing this work .
I feel like I've come a long way since I was a kid, but there was never any doubt in my mind that this would be the most exciting and interesting thing to work on; I just never imagined AI would actually become possible.
When I was in college, it seemed like we were still a long way from this goal. It wasn't until 2012, when the AlexNet paper was published, that I began working on it with my co-founder Ilya. That was the first time I felt there was an approach that might actually work.
Over the next few years, I watched the technology get better and better as it scaled. I remember wondering: Why isn't the world paying attention? It seemed obvious to me that AI could succeed—it wasn't likely, but it was possible. And if it did, it would be the most important thing.
So, that's what I wanted to do, and incredibly, it actually started to work.
Cleo Abram: Thank you very much for your time.
Sam Altman: Thank you very much.
This article comes from the WeChat public account "Zhidongxi" (ID: zhidxcom), author: Wang Han, and is published by 36Kr with authorization.