OpenAI's Altman: Jobs that can be eliminated by ChatGPT are not real jobs


Your job today may not be a real job

This sensational statement comes from Altman's latest interview with Rowan Cheung.

In this 30-minute conversation, besides his thoughts on AI and work, Altman also shared the progress of GPT-6, whether ChatGPT will become an American version of WeChat, how his vision of AGI has changed, how we will interact with AI in the future, and how he feels about being spoofed into a popular Sora meme.

The conversation ranges from entertainment gossip to cutting-edge technology; it is both entertaining and pointed about where things are heading.

The full interview, lightly edited, is published below:

(Note: For readability, some filler words and conversational lead-ins have been edited.)

Full interview

After Dev Day: Biggest Highlights and Product Strategy

Q: Of all the announcements at Dev Day 2025, which one are you most excited about?

Sam Altman: I'm excited about all of it. Bringing apps into ChatGPT is something I've wanted to do for a long time.

But what's even more exciting is hearing everyone talk about what they've built with Agent Builder. Both Agent Builder and AgentKit have so many features I'd love to try out myself. Still, if I had to choose one, I think running apps inside ChatGPT is the best.

Rowan Cheung: With 800 million weekly active users, ChatGPT has become a new distribution platform. How can developers and entrepreneurs use the Apps SDK to build applications on ChatGPT?

Sam Altman: I think we need to go through some iterations to really understand how people will primarily use these features. For example, will people get used to invoking an app by name, or will they prefer ChatGPT to know what they use most and recommend apps proactively?

I think in the future developers will figure out a new distribution mechanism that allows these apps to be used naturally, but it's always like this: once you actually release it into the world, you're surprised by all the unexpected ways it's used.
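(Note: The Apps SDK Rowan mentions above is built on the open Model Context Protocol, so a ChatGPT app is essentially an MCP server that exposes tools ChatGPT can call. The sketch below is only an illustration under that assumption, using the Python MCP SDK; the server name, tool, and returned data are hypothetical, not anything OpenAI has published.)

```python
# Minimal sketch of a ChatGPT-style app as an MCP server, using the Python
# MCP SDK (`pip install mcp`). Names and data below are hypothetical placeholders.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("pizza-finder")  # hypothetical app name

@mcp.tool()
def find_pizzerias(city: str) -> list[dict]:
    """Return pizzerias the assistant can show when a user asks about a city."""
    # Hardcoded stand-in for a real data source.
    return [{"name": "Slice House", "city": city, "rating": 4.6}]

if __name__ == "__main__":
    mcp.run()  # runs over stdio by default; a real ChatGPT app would be hosted
```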

Rowan Cheung: I remember you also released documentation to help developers increase their chances of being recommended?

Sam Altman: Yes, but again, with a disclaimer. Brand new products change quickly, and we'll learn together as we go.

Rowan Cheung: Going back to your first Dev Day two years ago, you launched the GPT Builder, which was really great. I remember being one of the first people to publicly build a GPT. What breakthroughs have you made with Agent Builder since then?

Sam Altman: The biggest change is that the models themselves have become much more powerful. I think back to our first Dev Day, and the gap between the capabilities of the models then and now is enormous—in 22 or 23 months, the capabilities have improved dramatically. We've also learned a lot about how users want to build these agents. They want to build them not only on ChatGPT, but also on other platforms. What impresses me most is how easily you can now build a fairly complex system—use a visual interface, upload a few files, authorize access to data sources, tell it what you need, and deploy it in minutes. I saw the whole flow for the first time yesterday at a rehearsal, and it was amazing. The experience of rapidly developing impressive software with tools like Codex and AgentKit feels like a seismic shift.

Rowan Cheung: Now in Agent Builder you can basically build an agent with zero code, right?

Sam Altman: Yes, but if you know a little, or a lot, of code, you can do more complex things. Even ordinary knowledge workers can start building agents, though. You could say this is almost a "zero-code revolution" for agents.
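(Note: For readers who do want to drop down to code, a rough idea of what a comparable agent looks like outside the visual builder is sketched below, using the OpenAI Agents SDK. The agent name, instructions, and tool are made-up examples, not something demonstrated at Dev Day.)

```python
# Rough sketch with the OpenAI Agents SDK (`pip install openai-agents`);
# requires OPENAI_API_KEY. The tool and prompt are illustrative only.
from agents import Agent, Runner, function_tool

@function_tool
def lookup_order(order_id: str) -> str:
    """Hypothetical data source: return the shipping status of an order."""
    return f"Order {order_id}: shipped, arriving Friday."

support_agent = Agent(
    name="Support Agent",
    instructions="Answer customer questions; use lookup_order for order status.",
    tools=[lookup_order],
)

result = Runner.run_sync(support_agent, "Where is order 1234?")
print(result.final_output)
```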

Rowan Cheung: What does this mean for the next wave of entrepreneurs or developers?

Sam Altman: This is something I've been thinking about. I was watching Romain's demo backstage yesterday, and I was thinking—how long would this have taken a year ago? Now it's happening in near real time, and I don't even think my own creativity can keep up. I don't know exactly what this will change, but one thing is for sure: the amount of software that will be written in the world will increase dramatically, and the time it takes to test and refine ideas will decrease dramatically. You'll be able to try more ideas and find good ones faster, but what else will change, I don't quite understand yet.

How long until the first billion-dollar, purely agent-run company?

Q: When will the first billion-dollar company run by agents appear, and has Agent Builder reached that level of autonomy?

Sam Altman: Not yet. We used to have a little betting pool on when the first one-person billion-dollar company would appear. It hasn't officially happened yet, but there's a lot of speculation—even about the first "zero-person company."

Rowan Cheung: How many months? How many years?

Sam Altman: I'd expect a few years. But the fact that we can now even talk credibly about typing a prompt into a chatbot and having it do the work is itself incredible.

Rowan Cheung: However, some of the agent products we've seen still require a lot of human oversight and feedback. When will it be possible for an agent to work continuously for a week without any feedback?

Sam Altman: I think Codex isn't far off from being able to complete a week's worth of work. While it's not necessarily 2025, I spoke to some people today and they all said it's already completing a full day's worth of work, which is incredibly fast. I rarely find AI advancements mind-boggling, but seeing how quickly Codex can complete tasks has really impressed me. It's reasonable to assume that a week-level task isn't too far off.

Rowan Cheung: Where is the technical bottleneck?

Sam Altman: Smarter models, longer context, better memory.

Rowan Cheung: So you have agents, various model upgrades, Codex, and APIs. Think about it: if you took a 20-year-old Stanford dropout and gave him all the knowledge you have now, what would you have him build? And what would he still be missing?

Sam Altman: I was thinking about this the other day. I envy the current generation of 20-year-old dropouts because there's so much they can build, and the opportunities are incredibly vast. Over the past few years, I haven't had the mental space to really think about what I might do. But I know there are so many cool things I could build. It's really exciting to talk about these projects with you today.

Rowan Cheung: I've been thinking about this a lot lately, and I think a lot of other developers are probably thinking about this as well—there's so much you can do these days. When building these products, do you have any advice on how to find a unique advantage to stay ahead? Is it through distribution channels, data, or some kind of workflow model?

Sam Altman: This question is always difficult to answer in the abstract because the best unique advantages are inherently unique—you have to figure them out for yourself. OpenAI spent a lot of effort figuring out ours. Generally speaking, there's no universal answer to this question.

The best answer is to find the kind of advantage that is unique to what you are doing, your product, your technology, your position in the market, and your timing. This is often a big part of creating value when building something new.

The one general rule of thumb I can say is that you learn as you go along. There's a business quote I've always loved: "Let tactics become strategy." You start by doing the things that work, and it's amazing how often things naturally emerge that can become strategy.

If you'd asked me back when we launched ChatGPT which features would become enduring advantages, I would have said I had no idea. I might have had some guesses, but I wouldn't have been confident. One example that turned out to be the most exciting was memory, which became a significant competitive advantage and the reason users keep using ChatGPT. We hadn't considered this at the time. You start building features, and sometimes it just clicks, "Oh, this could be a really enduring advantage for us."

GPT-6: Building Models for Products

Q: What advantages do you think should be built on top of GPT-6? Or, what should we be thinking about when building products?

Sam Altman: That's really the part you have to figure out on your own. I'd love to brainstorm it with founders; that would be fun. But honestly, OpenAI takes up almost all of my thinking, and I haven't had a chance to seriously consider starting a new company, which is a bit of a shame. AI is changing a lot of things in the world, but the fundamental factors that give a company an advantage remain the same: network effects, brand and marketing advantages, user data, and market effects. If you made a list of what worked in recent years, it would be largely the same list now, though there may be new strategies for building those advantages.

Rowan Cheung: You recently launched the GDPval benchmark, which measures how AI models perform on real-world economic tasks across key knowledge-work occupations. I was surprised to see that GPT-5 ranked second, just behind Anthropic's Claude Opus model. The fact that you published the results at all is impressive. What are your thoughts on them?

Sam Altman: First, it would be a shame if we weren't willing to publish results where our model comes in second. There will always be things we do best and things others do better. One way to build a culture of continuous improvement is to happily and directly acknowledge when others beat you on certain benchmarks or tests. I think the Claude team does a great job of understanding business use cases and presenting output beautifully, so I'm not surprised at all; if anything, it motivates us to do even better.

Rowan Cheung: Did this benchmark influence the way you built GPT-6?

Sam Altman: It will affect some of our post-training methods, but I don’t think the overall strategy of GPT-6 will change.
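(Note: GDPval is scored by having blinded expert graders compare a model's deliverable with one produced by a human professional and report how often the model wins or ties. The snippet below is only a toy illustration of that win-or-tie arithmetic on made-up judgments, not OpenAI's actual evaluation code.)

```python
# Toy illustration of a win-or-tie rate over hypothetical pairwise judgments.
judgments = ["win", "tie", "loss", "win", "tie", "loss", "win", "loss"]

win_or_tie = sum(j in ("win", "tie") for j in judgments)
rate = win_or_tie / len(judgments)
print(f"Win-or-tie rate vs. human experts: {rate:.0%}")  # 62% on this fake sample
```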

AGI: Neither exaggerate nor underestimate

Q: Your definition of AGI (artificial general intelligence) is a system that outperforms humans at most economically valuable work. So, on the GDPval scale, when would you say we've reached AGI?

Sam Altman: I've been thinking about this. First, like many people, I have multiple definitions of AGI. The closer you get to it, the more fuzzy the concept becomes. But the one that interests me most, and the one that surprises me the most, is that we're finally reaching a point where it's starting to happen—when AI can make novel discoveries and expand the sum of human knowledge. These achievements are currently small, and I don't want to exaggerate them.

But you can see a lot of examples on Twitter right now, where scientists from all disciplines are saying that AI has made a small discovery, proposed a new approach, or solved a problem. Again, I don't want to exaggerate or underestimate this. This is what's truly important. We're at the very beginning of this, and there's optimism that we can make significant progress in the coming months and years. This is a big deal. This is probably the "AGI" metric I care about most.

Rowan Cheung: Are there any scientific breakthroughs that particularly excite you and that you would like AI to solve or discover?

Sam Altman: Sure, curing diseases and discovering new laws of physics would be great. But even the small things happening right now, like advances in mathematics, feel really important to me. I felt the same way when GPT-4 was released. I know there's a lot of controversy about the Turing test, but the public perception was that it was unattainable. And yet, once AI passed it, society barely updated its understanding. After two weeks of excitement, everyone went back to complaining that AI wasn't fast enough, or didn't work, and asking how to make it better. That says something remarkable about humans—the test AI was supposedly never going to pass got passed, and we all just adapted. I suspect something similar will happen now: we'll gradually get used to AI making scientific discoveries.

Rowan Cheung: Stanford recently conducted a study on "workslop." The term describes low-value AI output—content that looks polished on the surface but ends up creating extra work because it has to be redone.

The study, which surveyed over 1,000 office workers, revealed that 41% had experienced workslop in the past month – meaning AI-generated content from colleagues that required them to spend extra time modifying or cleaning it. The average cleanup took one hour and 56 minutes, costing each employee approximately $186 per month.

If AI can make some people ten times more productive—like many of the people here—doesn't that mean we need systematic education and training so everyone understands when to use AI and when not to?

Sam Altman: First, plenty of humans produce something very similar to workslop on their own; this isn't unique to AI. Some emails simply create extra work, and meetings can slow productivity down all by themselves. So don't expect AI to be any different. The economy will adjust, and the people and companies that use the tools to become more effective will have a greater impact on the future than those who use them in ways that slow the organization down. Of course, as with any new tool, there will be a learning curve, but I think it will be quite short.

Rowan Cheung: Does OpenAI provide any education or training to help people better build and learn how to use these AIs?

Sam Altman: Yes. People will always use tools the way they want. One thing I've learned is that you can create great educational content and training, and people will still try all sorts of weird things, like getting the AI to just parrot things back. But we do try to create a lot of content to help people bring AI into their workflows. In some cases with Codex, adoption has been incredibly fast, with companies integrating it and using it effectively within days or weeks.

A parodied CEO, and AGI

Q: Sora is full of videos that spoof you. Does that scare you?

Sam Altman: It's not as strange as I thought. It's a little weird if you look at one, but it's okay if you look at a hundred.

Someone on the team asked me if I could open up my Cameo feature. It was new technology, and I felt like I'd be remiss if I didn't try it, so I decided to go for it. Later, on a plane, I wondered if it would look weird. It did look a bit weird when it first launched, but I quickly got used to it—it was clearly an app full of generated videos, and the content was quite interesting.

Rowan Cheung: My only concern is watermark removal. Just this morning, several services launched Sora watermark-removal tools. If someone removes the watermark and posts the video on social media, could that affect my personal brand? What's the mechanism for dealing with that?

Sam Altman: First, one of the reasons we're releasing this kind of technology is because we see it becoming ubiquitous. In the coming months and years, there will be excellent open-source models that will allow anyone to generate your image using publicly available video. Society will eventually adapt. We've found that one approach is to release it early and put in place guardrails, giving society and technology time to evolve together.

This approach works. Text is relatively easy, while video will be more challenging because it's more impactful. However, I believe we will learn to adapt. Soon, everyone will realize that there will be a large number of fake videos without watermarks and generated by open-source models online. This is inevitable. It may be valuable to prepare society for this in advance.

Rowan Cheung: Is Sora's goal to generate AI videos that are almost indistinguishable from real ones?

Sam Altman: The goal is AGI. I think high-quality video is important for achieving AGI for a number of reasons, like spatial reasoning and what we can learn from world models. Hopefully, someday, it will also be important for real progress in robotics. But I think good video is a good thing—I don't want the only interface in the future to be text. I'm very excited about the future of interactive experiences with live video streaming, which will continue to generate entirely new user experiences. That would be fantastic. But most importantly, I think it's a very valuable path to true AGI.

Rowan Cheung: On Friday, you published a blog post about potentially exploring revenue sharing for people who allow their faces to be used in Cameo. Can you share some details on how this would work?

Sam Altman: Yeah, a lot of times when you release a new product, you find that people are using it in ways you didn't expect. We thought a small number of creators would make really cool, complex videos and share them, and then have a huge audience. And that's happening. But in reality, a lot of users are just making videos for a few friends and sharing them in group chats, not on social media. I'm not sure if this usage pattern will continue, but if it does, it will significantly impact the ratio of computing resources required to user engagement.

In the future, it might make sense to let people pay for generating videos. For example, if you generate 100 videos a day and send them to your friends, or if you want to generate videos featuring a celebrity (with their consent), you might pay for the generation and the celebrity would take a cut. We need to experiment to see how this works.

However, I don't like to make conclusions about a product that was just launched six days ago. This may just be a novelty and may not form a long-term usage scenario. But at least so far, it has been used a lot.

Rowan Cheung: Have you considered placing ads in the Sora App?

Sam Altman: Not yet, but there are a lot of interesting possibilities—and, of course, some potentially scary scenarios. With ChatGPT, we can generate revenue through a subscription model; but if Sora users primarily consume content in a feed, then advertising might be the more natural model there.

If it's primarily private messaging, that's another model. I'm optimistic that by the end of this year, or more realistically, by the end of the first quarter of next year, we'll understand the final form of the product and design a business model accordingly. I think charging by the number of times it's generated is reasonable and worth trying. Other business models will depend on how the product develops.

A job that AI can eliminate is not a job

Q: In the age of intelligence, a billion knowledge-work jobs may be affected first, and new jobs will be created afterward. What are your thoughts?

(Note: If you had told farmers 50 years ago that the Internet would create a billion new jobs, they probably wouldn't have believed it. Similarly, many people now find it hard to imagine the new jobs AI will create.)

Sam Altman: I think not only would farmers not believe something like this could happen, they might look at the work you do (internet media) and think it's not real work.

Farming gives people what they truly need—it feeds them—and that's real work. For those of us who live in comfort, with abundant food and plenty of wealth, much of what we do is more like a game to pass the time, or a way to feel important; by that standard it might not count as "real work."

For us, this work feels real. I'm grateful to be doing something that's both satisfying and important. The work of the future may be very different, perhaps even more relaxed than what we think of as work today. But I believe the inherent human drive will remain, and we'll find plenty of things to do.

Rowan Cheung: I hope we can still explore space. What do you think humans will focus on after AGI emerges?

Sam Altman: I want everything to go in every direction, to do everything. Space is cool to me, but you or other people might have your own ideas about what's interesting. I want everything to be possible.

Rowan Cheung: If you could set one global policy tomorrow, what would it be?

Sam Altman: It's hard to pick just one. But I've been thinking a lot about AI regulation—whether it's appropriate, and whether it gives big companies an advantage. I think when models become very powerful, there should be a global framework to mitigate the risk of catastrophic effects, especially for cutting-edge safety issues. It would be great if there were a global policy that could do that.

Will ChatGPT become the American version of WeChat?

Q: In China, WeChat is practically a "universal app," capable of shopping, socializing, and chatting. ChatGPT now also has shopping, web search, and Sora features. Are you considering creating a US version of WeChat?

Sam Altman: No, there are many reasons why I don't think this approach will work in the U.S. What we want to build is a really good AI super-assistant.

Rowan Cheung: Why launch some features separately? For example, Sora is a standalone app—why not just integrate it into ChatGPT?

Sam Altman: For many people, ChatGPT is their most personal account, so bolting a social experience onto it would feel strange. We could conceivably add a messaging feature, since people already share and collaborate. But people perceive ChatGPT very differently from entertainment apps, and mixing the two could create dissonance. Of course, we've still brought many features into ChatGPT.

Rowan Cheung: What agents do you think are the most important and useful? What excites you the most?

Sam Altman: You can look at the development of Codex and think about its applications in other industries. For example, in areas like law and financial modeling, is it possible to create a Codex-like experience? There are already excellent startups working on these things. As the technology matures, if these tools can achieve the same level of success in their respective industries as Codex did for coding, that's what excites me most. I can imagine a world where you can launch a startup just by talking to a bunch of agents. I don't think Agent Builder or AgentKit are good enough for that yet, but I can see a path from here.

Altman: Voice is not the final form of interaction

Q: You mentioned in your keynote that voice may be the ultimate form of interaction with AI and agents. Can you elaborate?

Sam Altman: I don't think voice is the ultimate form of interaction. There are many times when voice is not the right way to interact.

For example, if you're walking around or standing at a transit station, talking out loud can be annoying. But in many cases, voice is a very natural way to interact. Natural language is the right interface either way; sometimes that means voice and sometimes typing, and there's no single answer here.

We've all become accustomed to smart speakers. They're often joked about, but many people actually use and love them. Still, smart speakers aren't great yet—not because the concept is wrong, but because the AI wasn't powerful enough at the time and the infrastructure wasn't there. Imagine if you could just speak to a device and it would do exactly what you wanted, barely interrupting you—that's the experience I'd ideally like from a computer.

Rowan Cheung: Will you build your own voice device?

Sam Altman: This will take some time. We need patience to build a completely new type of device that can deliver ultra-high quality at scale. This is a completely different way of using computers, and we need creative space to explore it.

We do have some very exciting ideas that we can't reveal yet, and won't reveal anytime soon. But we'll work hard to make a product that's well worth the wait.

Reference Links:

[1]https://www.youtube.com/watch?v=zwnVUiwObl8

[2] https://futurism.com/artificial-intelligence/sam-altman-real-work-ai

[3] https://x.com/rowancheung

This article comes from the WeChat public account "Quantum Bit," author: Henry; published by 36Kr with authorization.
