Jensen Huang's latest podcast: AI is moving from the "model era" to the "system era".

Video Title: Jensen Huang: Nvidia's Future, Physical AI, Rise of the Agent, Inference Explosion, AI PR Crisis

Video creator: All-In Podcast

Compiled by: Peggy, BlockBeats

Editor's Note: As the AI narrative continues to heat up, the focus of market discussion is shifting from "how powerful the model is" to "how the system can be deployed." Over the past two years, the industry has experienced breakthroughs in large-scale model capabilities, a race in training computing power, and the expansion of generative applications. But as these stages gradually become a consensus, new questions emerge: When AI is no longer just answering questions, but begins to perform tasks, embed itself in enterprise processes, and enter the physical world, what are the underlying conditions that support its continued progress?

This interview is excerpted from the well-known tech podcast All-In Podcast. As one of the most influential investor podcasts in Silicon Valley, the show is co-hosted by four long-time active investors and is known for its in-depth discussions on technology, business, and macroeconomic trends.

The four hosts of the program are:

  • Jason Calacanis is an early internet entrepreneur and angel investor, best known for his investments in companies such as Uber and Robinhood.
  • Chamath Palihapitiya, founder of Social Capital and former Facebook executive, has invested in several technology companies including Slack and Box.
  • David Sacks, a partner at Craft Ventures and a member of the "PayPal Mafia," founded Yammer and sold it to Microsoft for approximately $1.2 billion. He was also an early investor in Airbnb and Uber.
  • David Friedberg, founder of The Production Board, focuses on investments in agriculture, climate, and life sciences. He previously founded The Climate Corporation (which was later acquired by Monsanto).

This episode's guest is Jensen Huang, co-founder and CEO of NVIDIA, who is considered one of the most key drivers in the current AI infrastructure wave.

From left to right: David Friedberg, Chamath Palihapitiya, David Sacks, Jensen Huang, Jason Calacanis

The entire interview can be roughly summarized into three levels.

First, AI infrastructure is changing. In the past, the market's understanding of AI largely came down to more powerful GPUs and more data centers. But Jensen Huang emphasizes that future competition will no longer be about individual chips, but about entire systems. As inference demand rises, model types multiply, and agents take on more complex tasks, AI computing is shifting from a relatively uniform pattern to more complex and specialized collaboration across systems. NVIDIA is therefore trying to elevate its role from a chip company to a builder of "AI factories."

Secondly, AI is shifting from "generating content" to "completing tasks." This is the most crucial thread in this interview. ChatGPT gave the public their first direct experience of AI's capabilities, but in Huang's view, the truly significant change is that AI is beginning to enter workflows as an agent: it doesn't just answer questions, but can call tools, break down tasks, and execute them collaboratively to get things done. Because of this, what users are willing to pay for will gradually shift from "getting an answer" to "getting a result." This implies greater inference demand and higher system complexity, and it suggests that software development, organizational management, and knowledge work may all be rewritten.

Finally, AI is extending from the digital world into the real world. Throughout the interview, whether discussing autonomous driving, robotics, healthcare, digital biology, or Huang's concept of Physical AI, the underlying theme was the same: the value of AI is no longer confined to the screen but will increasingly show up in factories, hospitals, cars, end devices, and daily life. However, this also means that AI will face not only technological challenges but also more complex real-world constraints such as supply chains, policies, regulations, manufacturing capabilities, and geopolitics. In other words, the next wave of AI expansion will be a true industrialization process.

From this perspective, the most noteworthy aspect of this conversation is not a specific product or an optimistic figure, but rather the judgment Jensen Huang repeatedly conveyed: AI is transitioning from the "model era" to the "system era." Future competition will not be about who has the bigger model or more computing power, but about who understands the industry better, who can embed AI more deeply into real-world processes, and who can organize these capabilities into a workable, scalable system.

This expands the scope of this article beyond NVIDIA itself. The real question it seeks to answer is: as AI gradually becomes infrastructure, how will the next round of industrial restructuring unfold, and where will new value be created?

The following is the interview transcript (edited for readability):

TL;DR

  • AI infrastructure is moving from a "single GPU" to a disaggregated architecture. Different computing tasks will be handled collaboratively by GPUs, CPUs, network chips, and dedicated inference chips such as Groq's LPUs.
  • NVIDIA is transforming from a GPU company into an "AI factory company" that provides complete systems. It sells the entire infrastructure, not just individual chips.
  • The key to measuring the cost of AI is not the cost of data centers, but the cost of tokens and throughput efficiency. More expensive systems may actually be cheaper.
  • AI is moving from generative models to the agent era. Users are truly willing to pay for "getting things done," not just for the answer.
  • Computational demand is exploding. From generation to inference to agents, demand may have grown more than 10,000-fold in a short period, and it is still accelerating.
  • The future of software development will change. Engineers will no longer just write code, but define problems, design architectures, and collaborate with agents.
  • In the long run, the biggest opportunities lie in deep specialization within vertical industries, rather than in general models. Whoever understands the industry better will have a stronger competitive advantage.

Original interview

Jason Calacanis (renowned angel investor | All-In Podcast host | Early-stage investor in Uber):

This week is a special episode. We're setting aside our usual weekly format, a privilege we only grant to three kinds of people: President Trump, Jesus, and Jensen Huang (founder and CEO of NVIDIA). How those three should be ranked is up to you. You've been on a roll lately, and GTC was a huge success.

Jensen Huang (CEO of Nvidia):

The entire industry has come. Almost all tech companies and AI companies have arrived.

Jason Calacanis:

This is incredible, truly extraordinary. One of the biggest announcements of the past year has been Groq. When you acquired Groq, did you realize how insufferable it would make Chamath?

Note: Groq is not Grok. The former is a company that makes AI inference chips and inference cloud, while the latter is xAI's chatbot. In late 2025, Groq reached a non-exclusive inference technology licensing agreement with NVIDIA; the official transaction amount was not disclosed, but there were reports and speculations that it was worth between $17 billion and $20 billion. At GTC 2026, Jensen Huang further demonstrated an inference system based on Groq technology integrated into the NVIDIA platform.
The Chamath mentioned here refers to Chamath Palihapitiya (founder of Social Capital, former Facebook executive, and host of All-In). He is one of the four hosts of All-In and was also an early investor and board member of Groq. Therefore, when the major deal between NVIDIA and Groq came to light, it was seen as another successful bet by Chamath on a key project.

Jensen Huang:

I had a vague premonition.

Jason Calacanis:

We have to deal with him every week.

Jensen Huang:

I know. You guys still have to stick with him through the entire six-week settlement period.

Jason Calacanis:

That's right.

From a GPU company to an "AI factory" company

Jensen Huang:

In fact, we announce many of our strategies at GTC several years in advance. Two and a half years ago, I introduced the operating system for AI factories, which is called Dynamo.

As you know, the dynamo was a device invented by Siemens that converted mechanical energy, such as from water, into electricity, powering the factory systems of the last Industrial Revolution. I therefore think this name is very fitting for the "factory operating system" of the next Industrial Revolution. And one of the core technologies in Dynamo is disaggregated inference.

Jason Calacanis:

Jensen, I know you're incredibly tech-savvy. Come on, you define it. I don't want to steal your thunder.

Jensen Huang:

Thank you. Disaggregated inference starts from the fact that the inference pipeline is extremely complex, perhaps even the most complex type of computational problem today.

Its scale is staggering, containing a vast amount of mathematical computation of varying forms and sizes. Our idea was to break the entire processing flow apart, letting one part run on one type of GPU and another part on a different type. That, in turn, made us realize that disaggregated computing itself is a logical direction: we can let computing resources of different types and characteristics work together.

This same line of thinking later led us to Mellanox. Today, NVIDIA's computing is distributed across GPUs, CPUs, switches, scale-up switches, scale-out switches, and network processors. Now, we're going to add Groq to that mix.

Our goal is to put the right workloads on the right chips. In other words, we have evolved from a GPU company into an AI factory company.
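
Note: The split Huang describes can be illustrated with a minimal sketch. The code below is not NVIDIA's Dynamo; it only shows the shape of disaggregated inference, with prefill and decode handled by separate pools (in practice, different GPU types or even different chips) and a KV cache handed off between them. All class names and the dummy "model" are illustrative assumptions.

```python
# Minimal sketch of disaggregated inference: prefill and decode run on
# separate device pools, and the KV cache is handed off between them.
from dataclasses import dataclass, field

@dataclass
class KVCache:
    tokens: list[int] = field(default_factory=list)  # stand-in for attention state

class PrefillPool:
    """Compute-heavy stage: processes the full prompt once."""
    def run(self, prompt_tokens: list[int]) -> KVCache:
        return KVCache(tokens=list(prompt_tokens))

class DecodePool:
    """Memory-bandwidth-heavy stage: emits tokens one at a time."""
    def run(self, cache: KVCache, max_new_tokens: int) -> list[int]:
        out = []
        for _ in range(max_new_tokens):
            next_token = (sum(cache.tokens) + len(out)) % 50_000  # dummy "model"
            out.append(next_token)
            cache.tokens.append(next_token)
        return out

def serve(prompt_tokens: list[int]) -> list[int]:
    cache = PrefillPool().run(prompt_tokens)           # e.g. on one GPU type
    return DecodePool().run(cache, max_new_tokens=8)   # e.g. on another

print(serve([101, 2023, 2003, 1037, 3231]))
```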

David Sacks (Partner at Craft Ventures | Former COO of PayPal | All-In Host):

For me, this is probably the most important takeaway. What you're seeing now is a fundamental "decoupling." In the past, there was only the GPU as an option, but now more and more different computing paradigms are emerging, and these options will coexist in the future.

You mentioned something on stage that I think everyone who does high-value reasoning should listen to carefully: You said that about 25% of the space in a data center should be allocated to Groq's LPUs.

Note: LPU is an abbreviation for Language Processing Unit. This is a chip category proposed by Groq, whose core function is not training, but inference.

Jensen Huang:

Yes, in the data center, Groq can probably account for about 25% of the Vera Rubin system.

Note: Vera Rubin is NVIDIA's next-generation AI platform architecture. It is not a single chip, but a system-level infrastructure platform for AI factories.

David Sacks:

Could you talk about how the industry views this direction? Essentially, you're building a next-generation decoupled architecture: separating prefill and decode, and splitting the inference process. How do you think people will react?

Jensen Huang:

Let's take a step back and look at it from another angle. We added this capability to the system because the entire industry had shifted from processing large language models to Agentic Processing, which is intelligent agent-based processing.

When you run an agent, it accesses working memory, long-term memory, and invokes tools, which puts a huge strain on storage. You also see agents collaborating with each other. Some agents use very large models, some use small models; some use diffusion models, and some use autoregressive models. In other words, within this data center, there will be all sorts of completely different types of models coexisting. We built Vera Rubin to handle this extremely diverse workload.

So, we used to be a company with "one rack," but now we've added four more racks. In other words, NVIDIA's TAM, its total addressable market, has expanded dramatically, by about 33% to 50% compared to before.

Of this additional 33% to 50%, a large portion will be storage processors, namely BlueField; a portion, which I personally very much hope will be a large part, will be Groq processors; another portion will be CPUs; and of course, there will be many network processors. All of these combined will ultimately run the "new type of computer" in the AI revolution, namely agents. It is the operating system of modern industry.

Chamath Palihapitiya (Founder of Social Capital | Former Facebook Executive | All-In Host):

What about embedded applications? For example, what would be inside my daughter's teddy bear if it wanted to talk to her? Would it be a custom ASIC? Or will a broader TAM emerge in edge and embedded scenarios in the future, with different tools for different scenarios?

Note: ASIC stands for Application-Specific Integrated Circuit, and TAM stands for Total Addressable Market.

Jensen Huang:

We believe there are actually three computers involved in this problem.

The first computer, on the largest scale, is used to train AI models, develop AI, and create AI.

The second machine is a computer used to evaluate AI. For example, look around you; there are robots, cars, and similar things everywhere. You must first put them into a virtual environment that represents the physical world for evaluation. In other words, the software itself must obey the laws of physics. We call this system Omniverse.

The third type is a computer deployed at the edge, also known as a robotic computer. It could be an autonomous car, a robot, or even a small teddy bear.

One crucial direction we're exploring for devices like teddy bears is transforming telecom base stations into part of AI infrastructure. This means the entire $2 trillion telecom industry will gradually become an extension of AI infrastructure. Therefore, wireless equipment will become edge devices, factories will become edge devices, and warehouses will become edge devices as well.

In short, all three types of basic computers are indispensable.

David Friedberg (Founder of The Production Board | Host of the All-In Podcast):

Jensen, last year I felt you were ahead of the curve. You said back then that the growth in demand for inference wouldn't just be 1,000 times.

Jensen Huang:

Did I screw myself up?

David Friedberg:

You said it would instead increase a million times, a billion times, right?

I think many people thought this was an exaggeration back then, because the whole world was focused on expanding training capacity. But look now: inference has truly exploded, and the industry is starting to become inference-constrained. You've now released another "inference factory," a next-generation factory with 10 times the throughput.

But if you look at the discussions outside, many people will say: your inference factory will cost $40 billion to $50 billion, while those alternatives, such as custom ASICs, AMD, etc., only cost $25 billion to $30 billion, so you will lose market share.

Why don't you just tell us directly: What exactly did you see? What's your view on market share? Is it really worth it for these customers to pay almost double the premium?

Why can a more expensive system produce cheaper tokens?

Jensen Huang:

The most important and core point is: do not equate the price of the factory with the price of the token, nor with the cost of the token.

It's quite possible, and I can prove it, that the $50 billion factory actually produces the lowest-cost tokens. The reason is that we generate these tokens with astonishing efficiency, up to 10 times higher.

You see, much of the difference between $50 billion and $20 billion is actually just land, electricity, and the factory's outer shell. Besides that, you still need to buy storage, networking, CPUs, servers, and cooling systems. So, whether the GPU is sold at its original price or at half price won't directly reduce the total cost from $50 billion to $30 billion. Pick any number you like; to be more realistic, it might just drop from $50 billion to $40 billion.

However, if a $50 billion data center has 10 times the throughput, then this price difference is actually insignificant.
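
Note: The argument is easier to see as a back-of-the-envelope calculation. The sketch below uses illustrative numbers loosely taken from the conversation (a $50 billion factory versus a $30 billion one, roughly a 10x throughput difference) plus an assumed five-year life; none of these are NVIDIA's official figures.

```python
# Compare cost per token, not sticker price; all inputs are illustrative.
def cost_per_million_tokens(capex_usd: float, tokens_per_second: float,
                            lifetime_years: float = 5.0) -> float:
    seconds = lifetime_years * 365 * 24 * 3600
    total_tokens = tokens_per_second * seconds
    return capex_usd / total_tokens * 1e6

expensive_factory = cost_per_million_tokens(50e9, tokens_per_second=10e6)
cheaper_factory   = cost_per_million_tokens(30e9, tokens_per_second=1e6)

print(f"$50B factory: ${expensive_factory:.2f} per 1M tokens")
print(f"$30B factory: ${cheaper_factory:.2f} per 1M tokens")
# With these assumptions, the dearer factory produces tokens ~6x cheaper.
```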

Jason Calacanis: I understand.

Jensen Huang:

This is why I always say: for many chips, if you can't keep up with the technological frontier and the pace at which we advance, then even if the chips are given away for free, they still won't be cheap enough.

David Sacks:

I'd like to ask a more macro-level, strategic question. You're currently running the world's most valuable company. Next year's revenue could exceed $350 billion, with $200 billion in free cash flow, and it's compounding at a phenomenal rate.

How exactly do you make decisions? How do you obtain information? Everyone knows about your famous email system now, but how do you truly develop intuition, shape the market, decide where to invest heavily, where to scale back, and where to enter new areas? How is this information relayed to you? And how do you make the final judgment?

Jensen Huang:

That's the CEO's job.

David Sacks:

Right.

Jensen Huang:

Our responsibility is to define our vision and our strategy. Of course, we draw inspiration and information from the company's outstanding computer scientists, technical experts, and countless excellent employees, but ultimately, shaping the future is our responsibility.

One criterion is: Is this task ridiculously difficult? If it's not difficult enough, we should stay away from it. The reason is simple: if something is easy to do, there will be a lot of competitors.

Is it something that no one has ever done before, and that is ridiculously difficult? Does it happen to be something that can use our company's unique "superpowers"? We have to find the point of convergence: something that meets all of these criteria at the same time.

And ultimately, you also need to understand that doing this kind of thing will inevitably come with a lot of pain and suffering. No great invention is simple or succeeds easily on the first try.

If something is extremely difficult and no one has ever done it before, it basically means you'll experience a lot of pain and hardship. So you'd better enjoy the process.

David Sacks:

Could you pick three or four more "long-tail" businesses to discuss? For example, the data centers in space, ADAS, and automotive sectors you mentioned, as well as the biotechnology field. Give us a sense of when these growth curves will start to inflect upwards. What are your thoughts on these long-term businesses?

Note: ADAS refers to Advanced Driver Assistance Systems.

Jensen Huang:

Of course. Physical AI is a very large category. As I just mentioned, we have three computing systems and all the software platforms built on top of them. Physical AI is the first time the tech industry has truly had the opportunity to serve a $50 trillion industry that has previously been largely untouched by technology. To do this, we have to reinvent all the necessary technologies.

I've always felt this has been a 10-year journey. We started 10 years ago, and now we're finally seeing it begin to turn upwards. For us, it's already a multi-billion dollar business, now approaching $10 billion annually. So it's a huge business, and it's growing exponentially. That's the first point.

In the second direction, I think we are really close to the ChatGPT moment in digital biology.

We are gradually learning how to represent and understand genes, proteins, and cells. We already know how to handle chemical substances. Therefore, I believe the ability to represent and understand the fundamental building blocks of biology and their dynamic behavior will arrive within two to three years. Within five years, I strongly believe that digital biology will have a tremendous impact on the entire healthcare industry.

These are all very important areas. Agriculture is one of them.

Chamath Palihapitiya:

It has already begun.

Jensen Huang:

Without a doubt.

Jason Calacanis:

I want to steer the conversation back from the data center to the desktop. The company was largely built on the foundation of enthusiasts, gamers, and graphics card users in its early days. Today, standing on stage in front of roughly ten thousand people, you mentioned Claude Code, OpenClaw, and the revolution brought about by agents.

Especially among enthusiasts, we're seeing a surge of energy and innovation emanating from them, with many breakthroughs occurring on the desktop. You also released a desktop device this time, I think it's the Dell 60800? It's a very powerful workstation, capable of running local models, and boasts 750GB of RAM. Mac Studio is sold out everywhere right now. Our company has fully transitioned to OpenClaw. Friedberg is using it, Chamath is using it, and everyone is incredibly passionate about it.

What does this open-source agent movement, which started with enthusiasts, and the open-source desktop ecosystem mean to you? Where is it headed?

The Agent Era Has Arrived: Why Will Computing Demand Expand Another 10,000 Times?

Jensen Huang:

First, let's take a step back. In fact, we've seen three turning points in the past two years.

The first instance was generative AI. ChatGPT brought AI into the public eye, making everyone aware of its importance. In fact, this technology had been clearly present for months before ChatGPT's emergence. It was only when ChatGPT gave it a user-friendly interface that generative AI truly took off.

Generative AI, as you know, generates tokens for both internal and external consumption. Internal consumption is essentially "thinking," which further drives the development of reasoning.

Then, more and more practical, information-based capabilities began to emerge, enabling AI to do more than just answer questions; it could provide more reliable and useful answers. You also began to see a turning point in OpenAI's revenue and business model.

Then, the third inflection point was initially only visible within the industry, and that was Claude Code. It was the first truly useful and revolutionary agentic system.

However, before Claude Code, this capability was mainly geared towards enterprises, and many people outside the industry had never even seen it. It wasn't until OpenClaw that "what AI agents can actually do" entered the public eye.

Therefore, OpenClaw's importance at the cultural level lies in the fact that it was the first time the public truly became aware of the capabilities of agents.

The second reason it is important is that OpenClaw is open.

More importantly, it constructs a completely new computing model, almost reinventing computing itself. It has a memory system: a scratchpad serves as short-term memory, and the file system as a long-term resource. It has scheduling capabilities; it can run cron jobs; it can spawn new agents; it can decompose tasks, perform causal reasoning, and solve problems. It also has an I/O subsystem that can take input, produce output, and connect to WhatsApp; and it has a set of APIs that can run different types of applications, the so-called skills.

These four elements essentially define a computer. So, for the first time now, we actually have a personal artificial intelligence computer.

Moreover, it's open source, truly open source, and can run almost anywhere. This is the blueprint for modern computing. In a sense, it's already the operating system of modern computing, and it will be ubiquitous in the future.

Of course, we also need to address one more thing: agentic software may have access to sensitive information, execute code, and communicate externally. Therefore, we must ensure that all of this is governed, secure enough, and subject to policy constraints, allowing these agents to hold two of those three capabilities, but never all three simultaneously.

We've also contributed to governance. Peter Steinberger is here today. We have many great engineers working with him to help make the system more secure and robust, ensuring it protects both privacy and security.
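
Note: The constraint Huang describes, that an agent may combine at most two of private-data access, code execution, and external communication, can be sketched as a simple policy check. The class and capability names below are illustrative assumptions, not OpenClaw's actual API.

```python
# Sketch of a "two of three" capability policy for an agent runtime.
from enum import Enum, auto

class Capability(Enum):
    PRIVATE_DATA = auto()
    CODE_EXECUTION = auto()
    EXTERNAL_COMMS = auto()

class PolicyViolation(Exception):
    pass

class GovernedAgent:
    def __init__(self, name: str):
        self.name = name
        self.capabilities: set[Capability] = set()

    def grant(self, cap: Capability) -> None:
        proposed = self.capabilities | {cap}
        if len(proposed) == len(Capability):  # would hold all three at once
            raise PolicyViolation(
                f"{self.name}: granting {cap.name} would combine private data, "
                "code execution, and external communication")
        self.capabilities = proposed

agent = GovernedAgent("research-assistant")
agent.grant(Capability.PRIVATE_DATA)
agent.grant(Capability.CODE_EXECUTION)
try:
    agent.grant(Capability.EXTERNAL_COMMS)
except PolicyViolation as err:
    print("blocked:", err)
```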

Chamath Palihapitiya:

Jensen, has this paradigm shift already rendered many of the AI regulatory laws passed across the United States in the past obsolete?

Many of these proposals were originally based on old models. Could you talk about how quickly this paradigm shift rendered a large number of existing regulatory approaches ineffective? AI regulation has now become a very hot topic in American politics.

Jensen Huang:

In this area, we must get out in front of policymakers, and you've done an excellent job of that. We must proactively reach out to them and tell them what stage the technology has reached, what it is, and what it is not. It's not a living organism, not an alien, and it has no consciousness. It's computer software.

Furthermore, we often hear statements like, "We don't understand this technology at all." But that's not true; we actually understand a lot. So first, we must continue to provide policymakers with accurate information; we must not let doomsday theories and extremism dictate how they understand this technology.

However, at the same time, we must acknowledge that technology is developing rapidly, and we must not let policy lag too far behind technology. From a national perspective, my biggest concern is that the greatest national security risk for the United States in AI is not AI itself, but rather that other countries are adopting AI, while we, out of anger, fear, or prejudice, are unwilling to let our industries and society embrace AI.

Therefore, what I'm really most worried about is that AI isn't spreading fast enough in the United States.

David Sacks:

Let me ask you another question. If you were sitting in the Anthropic boardroom, watching their saga with the "War Department," what would you think? This actually echoes what you just said: people don't know how to understand AI, which adds another layer of resentment, fear, and distrust. If you were in their shoes, what different things would you suggest Dario and his team do to change today's outcome and public perception?

Jensen Huang:

First of all, I want to say that Anthropic's technology is truly remarkable. We ourselves are major users of Anthropic technology. I greatly admire their emphasis on security, their commitment to a security culture, and the technical excellence with which they advance this work—it's truly fantastic.

Moreover, they want to remind the public of the limitations of this technology, which I think is a good thing in itself. However, we must realize that the world has a spectrum: reminders are good, but scaring people is not so good.

Jason Calacanis: Yes.

Jensen Huang: Because this technology is so important to us. I think predicting the future is certainly possible, but we need to be more cautious and humble. Because, in fact, we cannot completely predict the future.

If some extremely catastrophic predictions are made, but there is no evidence that these things will actually happen, then the damage they cause may be greater than people imagine.

And now, we are leaders in the technology industry. Before, nobody listened to us, but things are different now. Technology is deeply embedded in the social fabric; it's an extremely important industry and highly relevant to national security. Every word we say matters.

Therefore, I think we must be more prudent, more restrained, more balanced, and more thoughtful.

David Friedberg:

I would nominate you for this. AI has only a 17% approval rating in the US. We've already seen what happened in the nuclear energy sector: we essentially shut down the entire nuclear industry, and now China is building 100 fission reactors while the US has none. Now we're starting to hear about things like data center shutdowns. So I think we have to be more proactive.

However, I want to return to what you said about the agent explosion happening within companies: increased efficiency and productivity. A lot of people are debating ROI right now, right? Coming into this year, our biggest question was: will the revenue materialize? Will revenue scale like intelligence itself? Then we saw something like an "Oppenheimer moment": Anthropic's revenue had already reached $5 to 6 billion by February.

Note: The "Oppenheimer moment" originates from J. Robert Oppenheimer, the head of the Manhattan Project (a secret research project to develop the atomic bomb during World War II). The first atomic bomb was detonated in 1945, symbolizing a critical point where technological breakthroughs and risks coexist. It is now often used to refer to key technological moments with irreversible consequences.

What are your thoughts on the future trend? You mentioned today that Blackwell and Vera Rubin already have trillion-dollar demand visibility in the next few years. Coupled with the momentum shown by Anthropic and OpenAI, do you think we've already hit that curve, and will we see revenue expand at an accelerated pace, just like intelligence?

Jensen Huang:

Let me answer from a few different angles. Look at this audience; Anthropic and OpenAI are indeed here. But in reality, 99% of the AI represented here is neither Anthropic nor OpenAI. The reason is that AI itself is extremely diverse.

I would say that first place, of course, belongs to OpenAI. Second place, as a category, actually goes to open models, the open-weight models and the entire broad open ecosystem around them, and there is a significant gap between second and third. Anthropic comes in third.

That gives you a sense of the sheer scale of all the AI companies here combined, so it's important to recognize that first.

Let's return to the topic of computational demand. When we move from generative AI to reasoning, the required computation increases by approximately 100 times; when we move from reasoning to agents, the computational demand likely increases by another 100 times. In other words, in just two years, computational demand has increased by roughly 10,000 times. Meanwhile, people will pay for information, but what they are truly willing to pay for is the output of work.

David Friedberg: Yes.

Jensen Huang:

Talking to a chatbot and getting an answer is great, of course. Helping me with research is also fantastic. But what really makes me willing to pay is getting the job done. And that's exactly where we are now; Agentic systems are truly getting the work done. They're helping our software engineers get the job done.

So think about it: on one hand, there's a calculation that's 10,000 times greater, and on the other hand, there's consumer demand that's probably 100 times greater. And we haven't even really started large-scale expansion yet. We're absolutely on the road to 1 million times growth.

Jason Calacanis:

I think this leads to the question: How many people are in your company?

Jensen Huang:

We have 43,000 employees, of which approximately 38,000 are engineers.

Jason Calacanis:

We often discuss this topic on our podcast: wow, token usage in our company is growing like crazy. Some people even ask, when joining a company, "How many tokens will I get?" because they want to be high-performing employees. I remember you mentioned this in that two-and-a-half-hour keynote; it was really long, but excellent.

Jensen Huang:

Thank you. Actually, it could have been even shorter.

Jason Calacanis:

You mentioned that the token usage limit for each engineer might be around $75,000. Does that mean NVIDIA's engineering team will spend $1 billion or $2 billion on tokens annually?

Jensen Huang:

Here's how we think. Let me give you a thought experiment: Suppose you hire a software engineer or AI researcher for a $500,000 annual salary, which is quite common here.

At the end of the year, I asked him, "How much did you spend on tokens this year?" If he said "$5,000," I would be furious, really. If an engineer earning $500,000 a year consumes less than $250,000 worth of tokens, I would be very suspicious. It's essentially no different from a chip designer saying, "I've decided to only use paper and pencils; I don't need CAD tools."
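
Note: For a sense of what a $250,000 token budget means in volume, the sketch below converts it into tokens; the price per million tokens is an assumed, illustrative figure, not one quoted in the interview.

```python
# Rough translation of the thought experiment into token volume.
annual_token_budget_usd = 250_000          # figure from the conversation
assumed_price_per_1m_tokens_usd = 10.0     # illustrative blended API price

tokens_per_year = annual_token_budget_usd / assumed_price_per_1m_tokens_usd * 1_000_000
tokens_per_workday = tokens_per_year / 250  # ~250 working days

print(f"~{tokens_per_year / 1e9:.0f}B tokens per engineer per year")
print(f"~{tokens_per_workday / 1e6:.0f}M tokens per working day")
```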

Jason Calacanis:

This is truly a paradigm shift. Your view of these top employees almost reminds me of the LeBron James example from my MBA class: he spends $1 million a year maintaining his body, so he can still play at 41. Why shouldn't these top knowledge workers have "superhuman abilities" too?

Jensen Huang:

That's right.

Jason Calacanis:

If we extend this trend forward by another two or three years, what will the efficiency of NVIDIA's top employees be like? What will they be able to accomplish?

Jensen Huang:

First, the thought that "this is too difficult" will disappear. The thought that "this will take too long" will also disappear. The thought that "we need a lot of people" will also disappear.

It's like during the last Industrial Revolution, when no one said, "This building looks too heavy," or "That mountain is too big." All ideas about "too big, too heavy, too time-consuming" were eliminated.

David Sacks:

In the end, all that's left is creativity. What can you actually come up with?

Jensen Huang:

Absolutely correct. In other words, the future question will be: how will you collaborate with these agents?

Essentially, this is just a completely new way of programming. In the past, we wrote code; in the future, we will write ideas, architectures, and specifications. We will organize teams; we will define evaluation criteria, telling the system what is good, what is bad, and what constitutes an excellent result; we will iterate and brainstorm repeatedly.

This is what you really need to do. I believe that every engineer will have 100 agents in the future.

Jason Calacanis:

Returning to the public relations issue, entrepreneurs like David Friedberg are using your technology and AI at Ohalo to do truly tangible things: increase food production and improve the supply of high-quality calories. Friedberg, to what extent do you think this will reduce costs? How will this vision impact what you're doing?

David Friedberg:

We just completed a zero-shot genome model, and it was a success. You'd be truly amazed. And this is happening against a backdrop of people replacing their entire enterprise software stacks overnight.

I did one thing myself: in 90 minutes, I replaced the entire software stack and a whole bunch of workflows. It started at 10 p.m. on Sunday night and was all running and deployed by 11:30 p.m.

After I, as CEO, finished the exercise, I asked every member of my management team to do the same exercise over the weekend. By Monday, it was done.

Let's talk about something more technical and scientific. We used Auto Research and a set of data to accomplish something in 30 minutes. If we had followed the traditional path, this would have been a PhD thesis, possibly taking 7 years, and it might even have become one of the most acclaimed doctoral works in the field, worthy of publication in Science.

We simply downloaded Auto Research from GitHub on our desktop computers, loaded the batch of data we had just received, and it ran in 30 minutes. Everyone's expression changed. The potential it unleashed was truly unbelievable.

Therefore, I believe that this acceleration is expanding everyone's potential in an unprecedented way.

But let's get back to the point about Auto Research: what do you think? A weekend, 600 lines of code, and you can produce such results, and run and process so many different types of datasets locally.

Does this mean that we are still in a very early stage in terms of both algorithm optimization and hardware optimization?

Jensen Huang:

The reason OpenClaw is so amazing is, first, that it coincided perfectly with the breakthrough in large language models; its timing was incredibly precise.

To a large extent, Peter probably wouldn't have been able to create this if Claude, GPT, and ChatGPT hadn't reached their current level of sophistication. The model has indeed reached a very high level of quality.

Second, it brings new capabilities: enabling these models to utilize tools we've created over the years. These include browsers, Excel; in chip design, Synopsys and Cadence; and Omniverse, Blender, Autodesk, and so on. And these tools will continue to be used in the future.

Some people are saying that the enterprise IT software industry is going to be destroyed. But I'll give you another perspective: the size of the enterprise software industry has always been limited by "how many seats are occupied," or the number of "seats." But in the future, it will see 100 times more agents. These agents will be writing SQL, working with vector databases, and using Blender and Photoshop.

The reasons are simple: first, these tools are inherently well-designed; second, these tools are essentially "intermediary interfaces" between us and machines. Ultimately, once the work is complete, the results must be presented back to me in a way that I can control. And I know how to operate these tools.

So I hope that everything can eventually return to Synopsys, to Cadence, because that's where I can control things and do "definitive standard" validation.

Note: Synopsys and Cadence are two major EDA (Electronic Design Automation) software companies, and virtually all chip companies (NVIDIA, Apple, AMD) rely on them.

The Next Battleground for AI: Open Source, Verticalization, and Global Diffusion

David Sacks:

I have a question about open source. We now have closed-source models, which are excellent; we also have open-source weight models, many of which are amazing and very powerful.

Two days ago, you might have been busy going on stage and didn't see it, but in the BitTensor Subnet 3 crypto project, someone completed a training task: they trained a 4 billion-parameter Llama model entirely in a distributed manner. A group of random people contributed computing power, yet they were able to manage the entire training process statefully. I think this is technically insane because the participants are completely randomly distributed.
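
Note: The underlying pattern, many independent contributors each computing gradients that a coordinator averages, can be sketched in a few lines. The toy below is plain data-parallel training of a one-parameter model; it is not the BitTensor Subnet 3 protocol and ignores the hard parts (fault tolerance, verification of untrusted contributors, communication).

```python
# Toy decentralized training: volunteers compute local gradients, a
# coordinator averages them and updates the shared weights.
import random

def local_gradient(weights: list[float], data_shard: list[tuple[float, float]]) -> list[float]:
    # Gradient of mean squared error for a 1-parameter linear model y = w * x.
    w = weights[0]
    g = sum(2 * (w * x - y) * x for x, y in data_shard) / len(data_shard)
    return [g]

def training_round(weights, shards, lr=0.01):
    grads = [local_gradient(weights, shard) for shard in shards]  # done by volunteers
    avg = sum(g[0] for g in grads) / len(grads)                   # done by coordinator
    return [weights[0] - lr * avg]

# Ten "random contributors", each with noisy samples of y = 3x.
shards = [[(x, 3 * x + random.gauss(0, 0.1)) for x in range(1, 6)] for _ in range(10)]
weights = [0.0]
for _ in range(200):
    weights = training_round(weights, shards)
print(f"learned w = {weights[0]:.2f} (true value 3)")
```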

Jensen Huang:

This is like Folding@home in our time.

Note: Folding@home is a distributed computing project that allows volunteers worldwide to contribute their computing power for protein simulation and medical research.

David Sacks:

That's right. So what's your view on the ultimate outcome of open source? Will you see architectures and computing power becoming decentralized, thus supporting the path to open weight and complete open source, making AI truly widely available?

Jensen Huang:

I believe that we fundamentally need two things at the same time: first, models as commercial and proprietary products that are first-class citizens; second, models as open source.

This isn't a matter of A or B; rather, both A and B are necessary. Without a doubt. The reason is that a model is primarily a technology, not a final product. A model is a technology, not a service.

For the vast majority of users, at that horizontal level, at the general intelligence level, I don't actually want to fine-tune a model myself. I'd rather continue using ChatGPT, Claude, Gemini, and X. They each have their own characteristics, depending on my mood and what problem I want to solve. So this part of the industry will develop very well; it will be very prosperous.

However, all this domain knowledge and expertise must be accumulated in a way that they can control, and that can only come from open models. The open model industry is already very close to the frontier. We are also investing heavily in it.

Frankly, even if open models do catch up with the frontier, I still believe that Model-as-a-Service and world-class commercial product models will continue to thrive.

Jason Calacanis:

Almost every startup we invest in now starts with open source and then moves towards a proprietary model.

Jensen Huang:

Yes. And the beauty of it is this: as long as you have a great router, from day one, every day, you can connect to the world's best models. At the same time, this gives you time to reduce costs, fine-tune, and specialize. So you start with world-class capabilities and then slowly build your competitive advantage.
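
Note: The "router" idea, start on the best hosted models and gradually route more traffic to a specialized open model you control, can be sketched as below. The backends and the keyword rule are placeholder assumptions for illustration; a real router would use cost, quality, and evaluation signals rather than keywords.

```python
# Minimal sketch of a model router: frontier model by default, a cheaper
# specialized open model for domains you have fine-tuned on.
from typing import Callable

def frontier_model(prompt: str) -> str:
    return f"[frontier model answer to: {prompt!r}]"

def finetuned_open_model(prompt: str) -> str:
    return f"[specialized open-weights answer to: {prompt!r}]"

SPECIALIZED_KEYWORDS = ("invoice", "schema", "ticket")  # domains we've specialized on

def route(prompt: str) -> Callable[[str], str]:
    # Anything in a specialized domain goes to the cheap model; everything
    # else falls back to the frontier model. Over time the specialized side
    # grows and cost drops, which is the point made above.
    if any(k in prompt.lower() for k in SPECIALIZED_KEYWORDS):
        return finetuned_open_model
    return frontier_model

for p in ["Summarize this support ticket", "Draft a product strategy memo"]:
    print(route(p)(p))
```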

David Friedberg:

Jensen, I have a geopolitical question. Of course, no one wants the US to win the global AI race more than you. But a year ago, the "diffusion rule" during Biden's presidency was actually preventing the global spread of US AI technology.

The new government has been in power for a year now. What score would you give it? Regarding the global spread of AI, are we currently at an A, B, or C level? What have we done well, and what haven't we done well?

Jensen Huang:

First, President Trump wants American industries to lead, he wants the American technology industry to lead, he wants the American technology industry to win, he wants American technology to spread globally, and he wants the United States to become the richest country in the world. He wants to achieve all of these.

But NVIDIA has lost what was once a 95% share of the world's second-largest market; today it is 0%. President Trump wants us to regain that share.

The first step was to obtain licenses for the companies we could sell to. Many companies have already submitted applications, and we've applied for licenses for them, with Commerce Secretary Lutnick approving some. Next, we've notified the Chinese companies, many of which have already placed purchase orders with us. So we're now restarting the supply chain and shipping the goods out.

On a higher level, I think we should acknowledge one thing: when we can't obtain micromotors and rare earth minerals, our national security is weakened; when we can't control our own communication networks, our national security is weakened; when we can't provide sustainable energy for the country, our national security is also weakened. Each of these industries represents a scenario I don't want the AI industry to repeat.

When we look to the future and ask, "What will it look like if the US technology industry and the US AI industry truly lead the world?" we must honestly say: AI models cannot be dominated by the US alone; such an outcome would be meaningless.

But we can certainly imagine that the US technology stack, from chips to computing systems to platforms, will be widely adopted globally. People around the world can build their own AI, public AI, and private AI on top of this US technology stack, and then use them to serve their societies. I hope the US technology stack can cover 90% of the world. I really hope so.

Otherwise, if the final situation becomes like that of solar energy, rare earth elements, magnets, motors, and communication equipment, I would think that would be a very bad outcome for U.S. national security.

Chamath Palihapitiya:

How closely are you monitoring global conflict situations? How concerned are you about them? For example, the Middle East could affect helium supplies, which poses a potential supply-chain risk for semiconductor manufacturing. How worried are you about these issues? How much effort have you devoted to them?

Note: Helium is crucial for semiconductor manufacturing. It is irreplaceable in key processes such as photolithography and inspection, and as a non-renewable resource, its supply is highly concentrated, mainly relying on a few production sites such as the United States, Qatar (Middle East), and Algeria (North Africa). Disruptions to these upstream supplies could directly impact the stable operation of chip production lines.

Jensen Huang:

First, speaking of the Middle East, we have 6,000 families there. Many of our company's employees are Iranian, and their families are still in Iran. So, we have many families there.

The first thing is this: they are very anxious, very worried, and very afraid. We are constantly thinking about them and monitoring the situation closely. They will have our full support. Some people have asked me whether, given the current situation in the Middle East, we will remain in Israel. My answer is: we will remain in Israel 100%. We fully support the families there. We will remain in the Middle East 100%.

Some have asked whether, given the current situation in the Middle East, it's still worthwhile to expand AI there. My view is that wars occur because everyone wants a more stable outcome. And I believe that the Middle East will be more stable than before after the wars. Therefore, if we were willing to consider it before the wars, we should consider it even more seriously afterward. So, I'm 100% committed to this issue.

We have three things we must do. First, we must reindustrialize the United States as quickly as possible, whether it's chip manufacturing plants, computer manufacturing plants, or AI factories.

Jason Calacanis:

What progress has been made in this regard?

Jensen Huang:

The progress is excellent. The reason we've been able to advance at such an incredible pace in Arizona, Texas, and California is because of the strategic support, friendship, and assistance from our supply chain in Taiwan. They truly are our strategic partners. They deserve our support, our friendship, and our generosity. They are also doing everything they can to help us accelerate the manufacturing process.

Second, we must diversify our manufacturing supply chains. Whether in South Korea, Japan, or Europe, we need to diversify our supply chains to make them more resilient. Third, while enhancing diversification and resilience, we must also exercise restraint and avoid applying unnecessary pressure.

Jason Calacanis:

You mean, you need to be patient.

Chamath Palihapitiya:

What about helium? Many reports have mentioned this question.

Jensen Huang:

I think helium might be a problem. But on the other hand, there is usually a lot of buffer stock in the supply chain, and these kinds of systems generally leave a certain margin.

Jason Calacanis:

You've made tremendous progress in autonomous driving and released some major announcements. You've added many new partners, including Uber. We also recently saw a video of you riding in an autonomous Mercedes-Benz. You and Uber have also announced that you will be working with many automakers to deploy more vehicles on the road.

I understand your bet is that in the future, an open platform similar to Android will emerge, and you will play a key role in it, serving dozens of car manufacturers; on the other hand, there may be a closed system like iOS, such as Tesla or Waymo.

What's your strategy? How will this game unfold? Because it seems you're cooperating in some areas and competing in others, and your stack is very deep.

Jensen Huang:

First, we believe that everything that moves in the future will one day be fully or partially autonomous. Second, we don't want to build autonomous vehicles ourselves, but we hope to empower every car company in the world to build autonomous vehicles.

So we built three computers: a training computer, a simulation and evaluation computer, and a vehicle-mounted computer. We also developed the world's safest driving operating system.

At the same time, we also developed the world's first autonomous driving system with reasoning capabilities. It can break down complex scenarios into simpler ones and navigate through them one by one, much like a reasoning model. This reasoning system is called Alpamayo, and it has brought us remarkable results.

We will perform vertical optimization and horizontal innovation; then let each manufacturer decide for themselves. Do you only want to buy one of our computers? Like Elon and Tesla, they buy our training system; or do you want to buy both the training system and the simulation system? Or do you want to work with us to integrate all three systems, and even install the vehicle-mounted computer in your car?

Our stance has always been that we want to solve problems, but we don't insist that we are the only ones who can provide the solutions. We are happy to cooperate with you in whatever way you choose.

David Sacks:

Following this line of questioning, I find it particularly interesting. You're essentially building a platform for a thousand flowers to bloom. But it's true that some flowers are now trying to move down the stack, to the bottom, trying to compete with you. Google has TPUs, Amazon has Inferentia and Trainium, almost everyone is working on their own "I can surpass NVIDIA" version. Even though they are also your major customers.

How do you handle this relationship? What do you foresee in the long term? What role will these products ultimately play in the overall ecosystem?

Jensen Huang:

That's an excellent question.

First, we are the only true AI company. We build our own foundational models and are at the forefront in many areas. We build every layer of the stack from top to bottom. We are also the only AI company in the world that collaborates with all other AI companies.

They never show me what they're doing, but I always explain everything clearly. So our confidence comes from one thing: we're very willing to compete on who has the best technology. As long as we can continue to operate at a high speed, I believe that continuing to purchase from NVIDIA will remain one of their most economical options. I'm very confident about that.

Second, we are the only architecture that can be deployed on all cloud platforms. This brings a fundamental advantage. We are also the only architecture that can be taken down from the cloud and placed in local data centers, cars, any region, or even space.

So, a large portion of our market, about 40% of the business, comes from customers who wouldn't know how to work with you unless you have a CUDA stack and the ability to deliver an entire AI factory. They're not looking to buy chips; they're building AI infrastructure. So what they need is for you to come in with a complete stack, and we happen to have that complete stack.

So, surprisingly, if you look at the current situation, NVIDIA's market share is actually still rising.

David Sacks:

You mean these companies tried it all out, then realized, "Oh my god, this is too complicated," and came back? That's why your market share continued to grow?

Jensen Huang:

There are several reasons for the increase in market share.

First, we've been moving too fast. Second, we've made everyone realize that the problem isn't in manufacturing chips, but in building the system, and that system is extremely difficult to build. Therefore, their cooperation with us is still expanding.

Take AWS as an example. I remember they just announced yesterday that they plan to buy 1 million chips over the next few years. That's a huge purchase, and that doesn't even include the massive amount they've already bought. We'd be very happy to oblige.

In addition, our market share growth over the past few years is also due to the arrival of Anthropic and Meta, and the growth of open models is even more astonishing, all of which are happening on NVIDIA.

Therefore, our market share is increasing, partly due to the increasing number of models, and partly because more and more of these companies are moving out of the cloud and growing in regional deployments, enterprise scenarios, and industry edge scenarios.

However, it is very difficult to penetrate that entire market if you only make one ASIC.

David Friedberg:

Let me ask a related question without getting into the numerical details: the analysts don't seem to believe you.

You say computing power might increase a million times, but the market consensus is: you'll grow 30% next year, 20% the year after, and by 2029, which by your account should be a year of explosive growth, only 7%. If you fit your TAM into these growth figures, the implication is that your market share will decline significantly.

Based on the order book you can already see, what supports your judgment?

Jensen Huang:

First, they simply don't understand the scale and breadth of AI.

David Sacks:

Yes, I think so too.

Jensen Huang:

Most people think that AI is only a matter for those five super-large cloud vendors.

Jason Calacanis:

Right.

David Sacks:

There's also a conventional investment logic that "the larger the scale, the harder it is to sustain growth." They have to go back and present their models to the investment bank's risk control committee; they can't easily believe that "$5 trillion can grow to $15 trillion." They're willing to allow for a maximum of $7 trillion; anything more is unacceptable to them.

Jason Calacanis:

They cannot imagine a company with a market value of $10 trillion.

David Sacks:

Essentially, it's a form of self-preservation modeling; they dare not include things that have never happened in history.

Jensen Huang:

Moreover, you must redefine what you are doing.

Recently, some people have asked: Jensen, how could NVIDIA possibly surpass Intel in the server market? The reason is simple: the entire data center CPU market is only about $25 billion a year. And we, as you know, have done about $25 billion in the time it takes us to sit here and chat.

Jason Calacanis:

Beautiful.

Jensen Huang:

Of course, that's a joke.

Chamath Palihapitiya:

What's said on the podcast doesn't count as official guidance.

Jensen Huang:

That's right, it's not guidance. But the key point is: how big you can grow depends on what you're actually building.

First, NVIDIA isn't just making chips. Second, simply making chips is no longer enough to solve the problems of AI infrastructure; it's far too complex. Third, most people's understanding of AI is too narrow, limited to what they've seen, heard, and discussed.

OpenAI is incredibly powerful, and it will be enormous; Anthropic is also incredibly powerful, and it will be enormous as well. But AI itself will be even larger than all of them combined. And what we serve is precisely that much larger part.

David Sacks:

Then explain the "space data center" business to the average person. How should it be compared to large data centers on the ground?

Jensen Huang:

We are already in space.

David Sacks:

How should ordinary people understand this business?

Jensen Huang:

First, of course, we should get things done on Earth, since that's where we are. Second, we should also prepare for space. Space certainly has abundant energy. The problem lies in heat dissipation. You can't rely on conduction and convection as on Earth, so you can only rely on radiation, which requires a very large surface area. This isn't an insurmountable problem, given how much room there is in space, but the cost remains very high. However, we will explore it.
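
Note: The radiator-area constraint he mentions follows from the Stefan-Boltzmann law. A rough sizing under assumed values (emissivity 0.9, radiator temperature 300 K, a 1 MW heat load, one-sided radiation, absorbed sunlight ignored), none of which come from the interview:

```latex
% Rough radiator sizing via the Stefan-Boltzmann law (assumed inputs).
P = \varepsilon \sigma A T^{4}
\quad \Rightarrow \quad
A = \frac{P}{\varepsilon \sigma T^{4}}
  = \frac{10^{6}\,\mathrm{W}}{0.9 \times 5.67 \times 10^{-8}\,\mathrm{W\,m^{-2}\,K^{-4}} \times (300\,\mathrm{K})^{4}}
  \approx 2.4 \times 10^{3}\,\mathrm{m^{2}}
```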

Moreover, we're already there. Our hardware has been radiation-hardened, and CUDA is already running on many satellites around the world. They're doing image processing, AI-based image analysis. This kind of work should ideally be done in space, not by transmitting all the data back to Earth first and then doing image analysis there. So, there is indeed a lot of work that should be done in space.

Meanwhile, we'll continue our research on what data centers in space should actually look like. This will take many years. That's okay, I have plenty of time.

The Future of Robotics, Healthcare, and Work: How AI Will Eventually Enter the Real World

Jason Calacanis:

I'd like to ask more about healthcare.

We all reach a certain age and start thinking about lifespan and healthy lifespan. We all look good, some maybe even better. Jensen, I really don't know your secret. Is it anti-aging? What foods should you avoid? You'll have to tell me these things privately.

From the perspective of healthcare system development, where will this direction lead? What progress have we actually made?

I was just using Claude to do some analysis to see what's going on with these healthcare billing codes in the US. Americans spend twice as much as people in other countries, but the health outcomes seem to be only half as good.

From what I've observed, about 15% to 25% of the budget is actually spent on the first general practitioner consultation. To be honest, we all know that today's large language models are already performing much better and more consistently in this area.

So what's still missing to break through regulations and allow AI to truly have a substantial impact on the entire healthcare system?

Jensen Huang:

We are mainly involved in several areas of medical care.

The first is AI physics, which serves AI biology—that is, using AI to understand and represent biology and its behaviors. This is very important in drug discovery.

The second is AI agents, used to assist in diagnostics. OpenEvidence is a good example, as is Hippocratic. I really enjoy working with these companies. I truly believe that agent technology will revolutionize the way we interact with doctors and the healthcare system.

The third part is physical AI.

Physical AI builds on AI physics, using AI to predict physics, and then goes further so that robots understand physical laws; that can then be applied to robotic surgery. This area is already very active. In the future, every instrument you encounter in a hospital, whether it's ultrasound, CT, or any other equipment, will become agentic.

You can think of it as a security-enhanced version of OpenClaw that will be embedded in every instrument. So in many ways, these devices will directly interact with patients, nurses, and doctors in the future.

Jason Calacanis:

We've already invested so much in AI weapons; I really hope we can invest more in AI first responders, AI EMTs, and AI paramedics, to save lives rather than take them.

This conveniently leads to the topic of robotics. You already have dozens of partners. Over the past ten or even twenty years, the robotics field has gone through a strange period: Google bought up a bunch of robotics companies, including Boston Dynamics, only to sell them off or spin them out later. At one point, everyone felt that robots were far from being truly usable.

But now, top entrepreneurs like you and Elon Musk are betting on it. Optimus already looks amazing, and many companies in China are making rapid progress. So how far are we from truly bringing robots into our lives? Like robot chefs, robot nurses, robot nannies, and humanoid robots that can actually work in the real world.

China in particular seems to be moving just as fast as the US, if not faster. Based on the progress and technological maturity of your partners, how long do you think it will take?

Jensen Huang:

To a large extent, the robotics industry was invented by us, or rather, by the United States. You could also say that we entered the market too early. We were about five years ahead of the truly crucial enabling technology, the "brain," so we got tired and lost patience first.

But now, it's really here. The only remaining question is: how long will it take from "proof of high functionality" to "acceptable commercial product"?

Transitions like this never take more than two or three technology cycles, which is roughly three to five years. That's it. Within three to five years, robots will be everywhere.

I believe China is extremely strong, and it should not be underestimated. The reason is that their microelectronics, motors, rare earth elements, and magnets, the very foundation of the robotics industry, are all world-class. Therefore, in many respects, our robotics industry will be deeply dependent on their ecosystem and supply chain. The global robotics industry will also be deeply reliant on them.

Therefore, I think you will see some very rapid changes.

Jason Calacanis:

Will it eventually be a one-to-one ratio? Elon seems to think that in the future there will be one robot per person: 7 billion people, 7 billion robots; 8 billion people, 8 billion robots.

Jensen Huang:

I hope it's even more than that. First, there will be a large number of robots working 24 hours a day in factories; there will also be many factory robots that stay mostly in one place and move only slightly. Almost everything will eventually be robotic.

Chamath Palihapitiya:

For me, the most important thing about robots is that they unlock economic liquidity for everyone.

In the past, owning a car allowed one person to do many different jobs; in the future, everyone will have a robot, and that robot will do many jobs for them. They can open an Etsy store or a Shopify store and use the robot to create anything they want, doing many things they couldn't do alone. I believe robots will ultimately be the technology that brings prosperity to more people on Earth than anything we've seen.

Jensen Huang:

Without a doubt. The simplest reality right now is that we are already facing a labor shortage of millions. So we desperately need robots. With more labor, all these companies could grow even faster.

And some of the things you mentioned are really interesting. With robots, we will have a "virtual presence." For example, when I'm traveling for work, I can enter the robot's body at home and remotely control it to walk around the house, walk the dog, and check on the house.

Jason Calacanis:

We need to have the venue staff kick people out immediately.

Jensen Huang:

That's right. But think about it, you can really let it roam around the house, see what's going on, talk to the dog, and chat with the kids.

David Friedberg:

This is almost like teleportation.

Jensen Huang:

At the same time, we'll travel at the speed of light. Obviously, we'll send the robot over first. I certainly won't send myself over first; I'll send a robot over first to see how things go. Then I'll upload my AI.

Chamath Palihapitiya:

This is almost inevitable. It will unlock the Moon and Mars, making them colonizable targets. And this means virtually unlimited resources. Transporting materials from the Moon back to Earth can be done with almost zero energy consumption, because you can use solar energy for acceleration. So in the future, you could build factories on the Moon to manufacture everything Earth needs, and robots are the key to making all of this possible.

Jensen Huang:

In that era, distance would no longer be a problem.

David Friedberg:

Moreover, the more revenue our models and agents generate, the more we can invest in infrastructure; and the more robust the infrastructure, the more powerful the models and agents we can unlock.

Dario recently said on the Dwarkesh podcast that model and agent companies will generate hundreds of billions of dollars in revenue by 2027 or 2028; he predicts it will reach $1 trillion by 2030. Note that this does not include revenue from AI at the infrastructure layer.

Jensen Huang:

I think he's been very conservative. I believe Dario and Anthropic will perform far beyond that number, far beyond.

Jason Calacanis:

So, from $30 billion to $1 trillion?

Jensen Huang:

Yes. And the reason is that he hasn't factored in part of it: I believe that every enterprise software company will eventually become a value-added reseller of Anthropic code, Anthropic tokens, and OpenAI tokens. This will significantly expand their go-to-market (GTM) scale.

David Sacks:

In such a world, what is the real "moat" that remains?

Some moats, frankly, become almost insurmountable. For example, the moat that nobody talks about much, but is probably the strongest, is actually CUDA; it's an amazing strategic advantage.

But if models themselves can one day create something truly great, then the next generation of models may disrupt even that. So, in your opinion, what is the most important differentiator for companies building at the application layer?

Jensen Huang:

Deep specialization.

I believe that in the future, general-purpose models will be integrated into software companies' agent systems. Many of these models will be commercial, proprietary models like Claude; but many others will be specialized sub-agents trained by these companies themselves, designed for specific sub-tasks.

David Sacks:

So your call to entrepreneurs is: to truly understand your vertical field.

Jensen Huang:

That's right.

David Sacks:

Understand more deeply and better than anyone else. Then wait for the tools to catch up with you, and once they do, you can infuse them with your knowledge.

Jensen Huang:

Yes. You possess the knowledge to connect customers to your agent. The sooner you truly connect your agent with customers, the sooner this flywheel will start spinning, and the faster it will spin.

David Sacks:

This is almost the complete opposite of today's software logic. Today, we first develop software, then think about "what can be generalized", then sell it to as many people as possible, and finally sell customization as an add-on service.

David Friedberg:

Then lock the customer in.

Jensen Huang:

In reality, as you said, we first build a horizontal platform. But you see, all those global systems integrators (GSIs) and consulting firms, who are essentially experts, then customize your horizontal platform into a vertical solution.

Jason Calacanis:

That's right. And in some ways, the size of the customization market may be five or six times larger than the platform itself.

Jensen Huang:

Absolutely correct. Therefore, I believe that these platform companies themselves have the opportunity to become experts, players in that vertical field, and true masters of a specific area.

Jason Calacanis:

I want to give you the praise you deserve.

I remember you saying something three years ago: "It's not AI that will take your job, but a person who knows how to use AI." Looking back now, our entire discussion has revolved around that point: agents are turning humans into "superhumans," expanding business opportunities and entrepreneurial opportunities. You saw this very clearly a long time ago.

Jensen Huang:

You're too kind.

Jason Calacanis:

Of course, we must also accept two perspectives: first, there will indeed be positive developments; second, some jobs will indeed be replaced. The question then becomes: do those people have enough resilience and determination to embrace these new technologies?

For example, if 100% of driving jobs are automated in the future, it will certainly save many lives, which is a good thing; but we must also acknowledge that 10 to 15 million people in the United States rely on this for their livelihood. This change is inevitable.

Jensen Huang:

I believe that jobs will change. For example, there are many drivers today. I believe that in the future, many of them will still be in the car, but they will no longer be responsible for driving. Instead, they will sit up front or in the back with the passengers, becoming a kind of "travel assistant."

Because don't forget, a driver's job ultimately involves more than just driving. They'll help you with your luggage, handle many other things, and are essentially in a supportive role.

So I'm not surprised at all that future drivers will become your mobility assistants, handling many other things for you while the car drives itself.

Jason Calacanis:

Just like in a hotel.

Jensen Huang:

Yes. The car is driving itself, and he's helping you coordinate various things.

David Friedberg:

Autopilot didn't remove pilots from the cockpit either; if anything, aviation has more pilots than ever, even though autopilot already handles 90% of the work in flight.

Chamath Palihapitiya:

And to be honest, while the car is driving itself, the driver can do a bunch of other things on their phone and arrange various tasks for you.

Jensen Huang:

For example, coordinating, communicating, making reservations, and handling a bunch of tasks.

Chamath Palihapitiya:

The pie is getting bigger.

Jensen Huang:

Yes. So one thing is clear: every job will be changed; some jobs will disappear; but at the same time, many new jobs will be created. And I want to say to those young people who have just graduated and are anxious about AI: become the best at using AI.

Today, we all want our employees to be true AI experts, and that's by no means easy. You need to know how to make demands without being too rigid in your instructions; you need to give the AI enough room to innovate and create under your guidance; and you need to steer it toward the results you actually want. All of this is an art.

David Sacks:

When you were at Stanford, you gave young people a famous piece of advice: "I wish you pain and suffering." Do you remember that?

Jason Calacanis:

That's a classic.

David Sacks:

What about today? If someone is about to graduate from high school and is standing at a crossroads in their life, wondering whether to go to university, what major to study, or even whether to go to university at all, what advice would you give them?

Jensen Huang:

I still believe that deep science, deep mathematics, and language skills are all important. And as you know, language itself is essentially the programming language for AI; it's the ultimate programming language. So perhaps, English majors will actually be the most successful in the future.

In short, my advice is: regardless of your education, make sure you are professional enough in using AI.

Speaking of work, I'd like to add something else, and I hope everyone hears it. In the early days of the deep learning revolution, one of the world's most...
