Sam Altman: Next year OpenAI will enter the era of AI systems

Improving reasoning capabilities remains the core goal of the large-model maker.

Written by: 20VC

Compiled by: Mu Mu

Editor: Wen Dao

After GPT-4, what is OpenAI's next big move? What is OpenAI's moat? What is the value of AI agents? With many veteran employees departing, will OpenAI favor younger hires with more conviction and energy?

On November 4, OpenAI CEO Sam Altman (hereafter "Altman") answered these questions on "The Twenty Minute VC" (20VC) podcast, making it clear that improving reasoning capabilities has always been OpenAI's core strategy.

When podcast host and 20VC founder Harry Stebbings (hereafter "Stebbings") asked what opportunities OpenAI would leave for AI entrepreneurs, Altman argued that a business built merely to patch the shortcomings of today's models will lose its edge as OpenAI's models improve. Entrepreneurs should instead build businesses that benefit as the models get stronger; that, he said, is a huge opportunity.

In Altman's view, the way people currently discuss AI is somewhat outdated. Systems, rather than individual models, are the direction to watch, and next year will be a key year in OpenAI's move toward AI systems.

Here are some highlights from Stebbings’ conversation with Altman:

OpenAI plans to build no-code tools

Stebbings: I will start today's interview with a question from the audience. Going forward, will OpenAI release more reasoning models like o1, or train larger and more powerful models?

Altman: We will keep optimizing our models, and improving reasoning capabilities is the core of our current strategy. I believe strong reasoning will unlock the capabilities we have been expecting, including AI that makes substantial contributions to scientific research and writes highly complex code, which will greatly accelerate society's development and progress. You can expect continuous, rapid iteration on the GPT series of models; that will be the focus and priority of our future work.

Sam Altman is interviewed on a podcast by Harry Stebbings, founder of 20VC

Stebbings: Will OpenAI develop no-code tools for non-technical people so that they can easily build and scale AI applications?

Altman: We are definitely on track to achieve this goal. Our initial plan is to significantly improve programmer productivity, but in the long run, our goal is to build the best-in-class no-code tools. Although there are some no-code solutions on the market, they are not yet able to fully meet the needs of creating a complete startup in a no-code way.

Stebbings: Into which areas of the technology ecosystem will OpenAI expand? Given that OpenAI may dominate at the application layer, is it a waste for startups to pour resources into optimizing around existing models? How should founders think about this?

Altman: Our goal is to continually improve our model. If your business is just to solve some minor shortcomings of the existing model, then once our model becomes powerful enough and these shortcomings no longer exist, your business model may become uncompetitive.

However, if you can build a business that benefits as the model keeps improving, that is a huge opportunity. Imagine someone had revealed to you how incredibly powerful GPT-4 would become, capable of tasks that seemed impossible at the time; you would have been able to plan and grow your business with a much longer-term perspective.

Stebbings: Venture capitalist Brad Gerstner and I have been discussing the impact OpenAI might have on certain market segments. From a founder's perspective, which companies might be hit by OpenAI and which might be spared? And how should we, as investors, assess this question?

Altman: AI will create trillions of dollars in value. It will enable entirely new products and services, making possible things that were previously impossible or impractical. In some areas we expect the models to be so powerful that reaching a goal becomes easy; in others, the value will come from building superior products and services on top of this new technology.

In the early days, I was surprised that about 95% of startups seemed to be betting that the models wouldn't get better; that no longer surprises me. When GPT-3.5 was first released, we had already seen the potential of GPT-4 and knew it would be very powerful.

So if you build tools simply to compensate for a model's shortcomings, those tools matter less and less as the model improves.

In the past, when models performed poorly, people were more inclined to build products that papered over a model's weaknesses than to build revolutionary products like an "AI teacher" or "AI medical advisor." It felt like 95% of people were betting the model wouldn't improve, and only 5% believed it would get better.

Now that has reversed. People understand the pace of improvement and the direction we are heading, so it is less of an issue today. But it was something we worried about, because we could foresee that companies working to fix model deficiencies might struggle.

Stebbings: You’ve said that “AI will create trillions of dollars in value,” and Masayoshi Son (founder and CEO of SoftBank Group) has also predicted that “AI will create $9 trillion in value each year,” enough to offset what he believes to be the “necessary $9 trillion in capital expenditures.” What do you think of this?

Altman: I can't give you a precise number, but clearly a great deal of value will be created alongside a great deal of capital expenditure, as in every major technological revolution, and AI is certainly one of them.

Next year is going to be a critical year for us as we move into the next generation of AI systems. You mentioned no-code software agents; I'm not sure how long that will take, and it isn't possible yet. But imagine we get there, and anyone can easily obtain the entire suite of enterprise-grade software they need: think how much economic value that would release for the world. If you can deliver the same value output far more conveniently and cheaply, the impact will be enormous.

I believe we will see more examples like this, including in healthcare and education, which are multi-trillion-dollar markets. If AI can drive new solutions in these areas, the specific numbers matter less than the fact that it will create incredible value.

Excellent AI agents can do what humans cannot

Stebbings: What role do you think open source will play in the future of AI? How do discussions about whether to open source certain models work within OpenAI?

Altman: Open source models play a critical role in the AI ecosystem. There are some really great open source models out there. I think it’s also critical to have high-quality services and APIs available as well. It makes sense to me to offer these elements as a bundle so people can choose the solution that best suits their needs.

Stebbings: Beyond open source, services can also reach customers through agents. How do you define an "agent"? In your view, what is it, and what is it not?

Altman: I think of an agent as a program that can carry out a long-duration task and needs little human supervision while doing so.

Stebbings: Do you think agents are misunderstood?

Altman: It is less a misunderstanding than that we have not yet fully grasped the role agents will play in the future world.

The example people often mention is having an AI agent book a restaurant, say through OpenTable or by calling the restaurant directly. That does save some time, but what I find more exciting is an agent doing something humans simply can't, such as contacting 300 restaurants at once to find the best dishes or the one that can accommodate a special request. For a human that is nearly impossible, but if the callers are all AI agents, they can work in parallel and the problem is solved.
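To make the parallel fan-out concrete, here is a minimal Python sketch. The ask_restaurant helper is hypothetical, standing in for whatever channel the agent uses (a booking API, an AI voice call, and so on); nothing here is a real OpenAI or OpenTable API:

```python
import asyncio

# Hypothetical helper: ask one restaurant a question and return its reply.
# In a real agent this would wrap a booking API call or an AI voice call.
async def ask_restaurant(name: str, question: str) -> tuple[str, str]:
    await asyncio.sleep(0.1)  # stand-in for network or call latency
    return name, f"reply from {name}"

async def survey(restaurants: list[str], question: str) -> list[tuple[str, str]]:
    # Fan out to all restaurants concurrently instead of calling one by one.
    tasks = [ask_restaurant(r, question) for r in restaurants]
    return await asyncio.gather(*tasks)

if __name__ == "__main__":
    names = [f"restaurant-{i}" for i in range(300)]
    replies = asyncio.run(survey(names, "Can you seat 8 at 19:00 with a set menu?"))
    # 300 conversations complete in roughly the latency of a single one.
    print(f"collected {len(replies)} replies")
```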

Although this example is simple, it shows how agents can surpass human capabilities. More interesting still, an agent need not just book restaurants: it could act like a very smart senior colleague working with you on a project, or independently complete a task that takes two days or even two weeks, contacting you only when it hits a problem and finally presenting an excellent result.
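That description matches the simple control loop most of today's agent frameworks use: act repeatedly, escalate to the human only when blocked. A minimal sketch, with a hypothetical llm_step function standing in for the model call:

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    goal: str
    log: list[str] = field(default_factory=list)

# Hypothetical model call: given the goal and history, pick the next action.
# A real implementation would call an LLM and parse a structured response.
def llm_step(task: Task) -> dict:
    if len(task.log) >= 3:
        return {"type": "finish", "result": "draft report ready"}
    return {"type": "act", "detail": f"work item {len(task.log)}"}

def run_agent(task: Task) -> str:
    while True:
        action = llm_step(task)
        if action["type"] == "finish":
            return action["result"]
        if action["type"] == "blocked":
            # Contact the human only when the agent cannot proceed alone.
            answer = input(f"Agent is blocked ({action['detail']}): ")
            task.log.append(f"human said: {answer}")
        else:
            task.log.append(action["detail"])  # execute and record the step

print(run_agent(Task(goal="research and write a market report")))
```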

Stebbings: Will this agent model affect SaaS (software as a service) pricing? Traditionally, SaaS is priced per user seat, but agents are effectively replacing human labor. How do you see pricing models changing, especially once AI agents become a core part of a company's workforce?

Altman: I can only speculate, because we really don't know yet. I can imagine a future pricing model based on the computing resources you consume, such as whether a problem needs 1 GPU, 10 GPUs, or 100 GPUs to solve. In that case, pricing is no longer per seat, or even per agent, but per unit of compute actually consumed.
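As a toy illustration of such compute-metered pricing (the rate, the job names, and the metering granularity are all invented for the example):

```python
# Toy compute-metered bill: charge by GPU-hours consumed, not by seats or agents.
GPU_HOUR_RATE = 2.50  # invented rate, USD per GPU-hour

def monthly_bill(gpu_hours_by_job: dict[str, float]) -> float:
    return sum(hours * GPU_HOUR_RATE for hours in gpu_hours_by_job.values())

jobs = {"support-agent": 40.0, "code-review-agent": 120.0, "research-agent": 800.0}
print(f"monthly bill: ${monthly_bill(jobs):,.2f}")  # $2,400.00 for 960 GPU-hours
```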

Stebbings: Do we need to build special models for agents?

Altman: Supporting agents does require a lot of infrastructure, but I think o1 points the way: a general model that can perform complex agent tasks.

Models are depreciating assets, but the experience of training them is worth more than the cost

Stebbings: Many people believe that models are depreciating assets as they become increasingly commoditized. What do you think of this view? And as model training becomes increasingly capital-intensive, does that mean only a few companies will be able to afford it?

Altman: It is true that models can be viewed as depreciating assets, but it is completely wrong to think their value is less than the cost of training them. Training compounds positively: the knowledge and experience we gain from training one model help us train the next generation more efficiently.

I think the revenue we actually earn from our models has justified the investment. Of course, not every company can say that. There may be many companies training very similar models, and if you are slightly behind, or lack a product that keeps attracting users and delivering value, it will be harder to earn a return on the investment.

We are lucky to have ChatGPT, which hundreds of millions of people use, so even though the costs are high, we can spread them across a very large user base.
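As a back-of-the-envelope illustration of that amortization (every number below is invented; only the "hundreds of millions of users" figure comes from the interview):

```python
# Invented numbers: amortize a fixed training cost over a large user base.
training_cost = 3_000_000_000   # hypothetical $3B training spend
monthly_users = 300_000_000     # "hundreds of millions", per the interview
useful_life_months = 18         # assumed lifetime before the model is replaced

per_user_month = training_cost / (monthly_users * useful_life_months)
print(f"amortized cost: ${per_user_month:.3f} per user per month")  # ~$0.556
```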

Stebbings: How will OpenAI’s models remain differentiated in the future? In what ways would you most like to see them differentiated?

Altman: Reasoning is the area we are focusing on most, and I believe it will be the key to unlocking the next phase of large-scale value creation. We will also push forward on multimodal models and introduce new features that we believe are critical to users.

Stebbings: How do visual capabilities scale under the new o1 test-time reasoning paradigm?

Altman: Without spoiling anything, I expect image models to evolve rapidly.

Stebbings: What do you think of the fact that Anthropic's models are sometimes considered to be better at programming tasks? Do you think this is a fair assessment? How should developers choose between OpenAI and other providers?

Altman: Anthropic does have a model that does well in the programming space, and their work is really impressive. I think developers often use multiple models at the same time, and I'm not sure how that will change as the field develops. But I believe that AI will be everywhere in the future.

The way we currently talk about AI may be a bit outdated. I predict we will move from talking about "models" to talking about "systems," but that will take time.

Stebbings: On scaling: how long do you think the scaling laws for models will hold? People have repeatedly predicted they would stop holding, yet they have lasted longer than expected.

Altman: Without going into detail, the core question is whether the current trajectory of capability improvement will continue. I believe it will, and for quite some time.

Stebbings: Did you ever doubt that?

Altman: We do encounter behaviors we don't understand, we have had failed training runs, and we have tried various new paradigms. Whenever we approach the limits of one paradigm, we have to find the next breakthrough.

Stebbings: What was the hardest challenge in this process?

Altman: While working on GPT-4, we hit some extremely difficult problems and for a while had no idea how to solve them. We eventually overcame them, but there was a period when we were genuinely unsure how to move the model forward.

Beyond that, the shift to o1 and the whole concept of a reasoning model had long been a dream of ours, but the research road to get there was full of challenges and twists.

Stebbings: How do you keep the team's morale up through such a long and winding process, when training runs can fail?

Altman: We are a team passionate about building AGI, and that is a highly motivating goal. We all know this is not an easy road and that success will not come cheap. There is a famous line: "I do not pray that God is on my side; I pray that I am on God's side."

It helped enormously that we committed ourselves to deep learning as if to a worthy cause, and that despite the inevitable setbacks along the way, we always seemed to make progress in the end.

Stebbings: How concerned are you about the semiconductor supply chain and international tensions?

Altman: I can't quantify the extent of my concern, but there's no doubt that I am concerned. It may not be my biggest concern, but it's definitely in the top 10% of the things that I care about.

Stebbings: Can I ask what your biggest concern is?

Altman: Overall, my biggest concern is the complexity of everything we're trying to do across the board. While I'm sure everything will work out in the end, it's an extremely complex system.

This complexity exists at every level, including within OpenAI and within each team. Take semiconductors: we have to balance power supply, make the right network decisions, and secure enough chips, while also weighing the risks and whether the research schedule matches these constraints, so that we are neither caught off guard nor wasting resources.

The entire supply chain looks like a straight pipeline, but the complexity of the ecosystem at every level is beyond what I have seen in any other industry. In a way, this is what worries me the most.

Stebbings: You mentioned unprecedented complexity, and many people compare the current AI wave to the dot-com bubble, especially in its excitement and enthusiasm. I think the difference is the scale of investment. Larry Ellison (co-founder of Oracle) has said that the entry ticket to the foundation-model race is $100 billion. Do you agree?

Altman: No, I don’t think it will be that expensive. But here’s an interesting phenomenon: People like to use analogies from past technological revolutions to make the new one seem more familiar. I think that’s a bad habit in general, but I understand why people do it. I also think that the analogies people choose for AI are particularly inappropriate, because the Internet is obviously very different from AI.

You mentioned cost, and whether it really takes $10 billion or $100 billion to compete. One hallmark of the Internet revolution was that it was easy to get started. Another similarity to the Internet is that, for many companies, AI will simply be an extension of it: other people build the AI models, and you use them to build all kinds of great products. That is one way to think about AI, as a new way to build technology. But if you want to build AI itself, that is a completely different story.

Another common analogy is electricity, but I think that doesn't apply in many ways.

Although I think people shouldn’t rely too much on analogies, my favorite analogy is the transistor, which was a new discovery in physics that was incredibly scalable and quickly permeated every field. The entire technology industry benefited from transistors, and the products and services we use contain a lot of transistors, but you wouldn’t think of the companies that created these products and services as “transistor companies.”

The transistor involved a very complex and expensive industrial process, with a huge supply chain built around it. That simple discovery in physics drove long-term economic growth, even though most of the time people didn't realize it was there; they just thought, "this thing helps me process information."

Maintain high standards for talent rather than favoring a certain age group

Stebbings: How do you think people’s talents are wasted?

Altman: There are a lot of very talented people in the world who are not able to reach their full potential because they work for the wrong company, or live in a country that doesn't support great companies, or for a variety of other reasons.

One of the things I’m most excited about AI is that it has the potential to help us better realize the potential of each of us, something we’re not doing nearly enough of right now. I believe there are a lot of potentially great AI researchers out there, but their life trajectories are different.

Stebbings: You’ve experienced incredible growth over the past year. If you look back over the past decade, what would you say is the biggest change in your leadership?

Altman: The most extraordinary thing about these past few years has been the speed of change. A normal company takes a long time to grow from zero to $100 million in revenue, then from $100 million to $1 billion, then from $1 billion to $10 billion; we did it in about two years. We went from a pure research lab to a company serving a huge number of customers, and that rapid transition left me little time to learn.

Stebbings: What would you like to spend more time learning?

Altman: How to steer a company to focus on achieving 10x growth rather than just 10%. Going from a multi-billion-dollar company to a tens-of-billions-dollar company requires profound change, not just repeating what you did last week.

The challenge with rapid growth is that we don’t have enough time to build a solid foundation. I underestimated how much effort it would take to catch up and keep moving forward in this hyper-growth environment.

Internal communication, information sharing, structured management, and planning that balances short-term needs with long-term development are all critical. For example, to ensure the company can execute over the next year or two, we need to secure computing resources, office space, and so on well in advance. In such a fast-growing environment, effective planning is very challenging.

Stebbings: Keith Rabois (venture capitalist) once said that one of the things he learned from Peter Thiel (PayPal co-founder) was to hire people under 30 because that’s the secret to building great companies. What do you think of this advice, that hiring very dynamic, ambitious young people is the only way to build a company?

Altman: I was about 30 when I founded OpenAI, which is not that young, but it seemed to work (laughs). So that is certainly one path worth trying.

Stebbings: But should you hire young people, who are full of energy and ambition yet may lack experience, or those who are experienced and have already proven themselves?

Altman: The obvious answer is that both kinds of hires can succeed, as we have shown at OpenAI. Just before this interview, I was talking about a young person who recently joined our team, maybe in his early twenties, who is doing really excellent work. I was wondering how we could find more people like him, who bring new perspectives and energy.

On the other hand, if you are designing one of the most complex and expensive computing systems in human history, I would not lightly hand it to someone just starting out. So we need a mix of both. I think the key is to keep the bar for talent high, rather than simply favoring a certain age group.

I am particularly grateful to Y Combinator, because it taught me that lack of experience does not mean lack of value. There are many high-potential people early in their careers who can create enormous value, and it is a very positive thing for society to invest in them.

Stebbings: I recently heard a quote: the heaviest burdens in life are not iron or gold, but decisions not made. Which unmade decision weighs on you most?

Altman: The answer changes every day; no single decision stands out. Of course, we do face some big decisions, such as which product direction to choose or how to design the next generation of computers. Those are important, risky choices.

On those I might postpone the decision. But most of the time, the challenge is that I face 51-versus-49 dilemmas every day. These decisions land in front of me precisely because they are hard to call. I may be no more confident than anyone else on the team that I can choose better, but I still have to decide.

So, the core of the problem lies in the number of decisions, not in any particular decision.

Stebbings: Do you have someone you regularly turn to for those 51-versus-49 decisions?

Altman: No. I don't think relying on a single person for everything is the right approach. To me, it is better to have 15 or 20 people with good intuition and context in particular areas, and to consult the best expert for each question, rather than depending on one advisor.

Quick Questions and Answers

Stebbings: If you were a 23 or 24-year-old today, what would you choose to do given the existing infrastructure?

Altman: I would pick a vertical field powered by AI, such as AI education, and build the best AI education product so that people can learn anything. Similar examples would be AI lawyers, AI CAD engineers, and so on.

Stebbings: You mentioned writing a book. What would you call it?

Altman: I haven't thought of a title yet, and I haven't thought the book through in detail, but I hope its existence would unlock the potential of many people. It would probably revolve around the theme of "human potential."

Stebbings: What are some areas in AI that people aren’t paying attention to but should be investing more time in?

Altman: What I would like to see is an AI that can understand your entire life. It doesn't need infinite context, but there should be some way for you to have an AI agent that knows all of your data and can assist you.
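One common way to approximate "knows all of your data" without infinite context is retrieval: index personal documents as embeddings and pull only the relevant ones into the model's context for each question. A minimal sketch, where the embed function is a random stand-in for a real embedding model (so the demo retrieval is not actually semantic):

```python
import numpy as np

# Random stand-in embedding keyed by the text (stable within one run).
# A real system would call an embedding model so that similar meanings
# map to nearby vectors; this stub only illustrates the pipeline.
def embed(text: str) -> np.ndarray:
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(64)
    return v / np.linalg.norm(v)

class LifeIndex:
    """Tiny in-memory index over personal documents."""
    def __init__(self) -> None:
        self.docs: list[str] = []
        self.vecs: list[np.ndarray] = []

    def add(self, doc: str) -> None:
        self.docs.append(doc)
        self.vecs.append(embed(doc))

    def relevant(self, query: str, k: int = 3) -> list[str]:
        q = embed(query)
        scores = [float(v @ q) for v in self.vecs]  # cosine: vectors are unit-norm
        top = np.argsort(scores)[::-1][:k]
        return [self.docs[i] for i in top]

index = LifeIndex()
for doc in ["2023 tax return", "notes from last doctor visit", "flight booking to Tokyo"]:
    index.add(doc)
# Only the few retrieved documents enter the model's context, not your whole life.
print(index.relevant("when is my trip to Japan?", k=1))
```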

Stebbings: Is there anything that has surprised you in the past month?

Altman: It's a finding that I can't reveal, but it's shocking.

Stebbings: Who is your most admired competitor and why?

Altman: I respect everyone in this field. The field is filled with outstanding people and outstanding work. I am not avoiding the issue intentionally, but I can see talented people doing excellent work everywhere.

Stebbings: Any one in particular?

Altman: There isn't one in particular.

Stebbings: What is your favorite OpenAI API?

Altman: The new Realtime API is awesome. We now have a big API business with a lot of good things in it.

Stebbings: Who do you most respect in AI today?

Altman: I want to specifically mention the Cursor team, who have used AI to create amazing experiences and a great deal of value for people. They put together pieces that many others haven't managed to combine. I deliberately won't mention the OpenAI folks, or this list would get very long.

Stebbings: What do you think about the trade-off between latency and accuracy?

Altman: There should be a knob you can turn between the two. If you ask me a question and want a quick answer, I shouldn't spend several minutes thinking, so latency matters. If you want me to make a major discovery, you might be willing to wait a few years. The answer is that this should be user-controllable.
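Concretely, one could expose that knob as a request parameter that maps to a test-time compute budget. The sketch below is hypothetical; the tier names, budgets, and the answer function are invented for illustration and are not an actual OpenAI API:

```python
from dataclasses import dataclass

# Hypothetical effort knob: each tier caps how much test-time compute
# the model may spend before it must answer.
@dataclass
class EffortTier:
    max_thinking_seconds: float
    max_reasoning_tokens: int

TIERS = {
    "instant":  EffortTier(max_thinking_seconds=1, max_reasoning_tokens=0),
    "standard": EffortTier(max_thinking_seconds=30, max_reasoning_tokens=8_000),
    "deep":     EffortTier(max_thinking_seconds=3_600, max_reasoning_tokens=500_000),
}

def answer(question: str, effort: str = "standard") -> str:
    tier = TIERS[effort]
    # A real system would let the model reason until it hits either budget.
    return (f"[{effort}] answered within {tier.max_thinking_seconds}s and "
            f"{tier.max_reasoning_tokens} reasoning tokens: {question!r}")

print(answer("What is 2 + 2?", effort="instant"))
print(answer("Design a cheaper fusion reactor.", effort="deep"))
```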

Stebbings: When you think about insecurities in leadership, where do you most need to improve? What do you most want to get better at as a leader and CEO?

Altman: This past week I have felt less certain than ever about the details of our product strategy. Product is my weak spot overall, and right now the company needs a clearer product vision from me. We have a great product leader and team, but this is an area where I wish I were stronger, and I have felt that more acutely recently.

Stebbings: You hired Kevin Weil (OpenAI's Chief Product Officer), whom I have known for years and who is great. What qualities make Kevin a world-class product leader?

Altman: "Discipline" is the first word that comes to mind.

Stebbings: What exactly does that mean?

Altman: He is very focused on priorities, knows what to say no to, and is able to think from the user's perspective about why to do or not do something. He is really rigorous and doesn't have any unrealistic ideas.

Stebbings: Looking ahead five and ten years: if you had a magic wand and could paint a five-year and a ten-year vision for OpenAI, what would they be?

Altman: It is easy for me to picture the next two years. But if our bets are right and we start building some super-powerful systems, for example ones that accelerate scientific advances, that will lead to incredible technological progress.

I think that within five years we will see an astonishing pace of technological progress, beyond everyone's expectations, and society may feel that "the AGI moment came and went." We will discover many new things, not only in AI research but in other scientific fields as well.

On the other hand, I think the change this technological progress brings to society itself will be comparatively limited.

For example, if you had asked people five years ago whether computers would pass the Turing test, they would probably have said no. If you had then told them yes, they would have expected it to transform society dramatically. And now look: we did, roughly speaking, pass the Turing test, yet society has not changed all that drastically.

That is what I expect of the future: technological progress will keep exceeding all expectations, while social change lags behind. I think that is a good and healthy state. In the long run, technology will of course transform society enormously, but not within just five or ten years.
