Source: AI Style
Nvidia CEO Jensen Huang said that the performance of the company's AI chips is improving faster than Moore's Law, the principle that set the pace of computing progress for decades.
In an interview the morning after delivering a keynote to a crowd of 10,000 at CES in Las Vegas, Huang said, "Our systems are developing far beyond Moore's Law."
Moore's Law, proposed by Intel co-founder Gordon Moore in 1965, predicted that the number of transistors on a chip would roughly double every year, roughly doubling chip performance along with it; Moore later revised the cadence to about every two years. The prediction largely held over the following decades, driving rapid gains in computing power and dramatic drops in cost.
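To make the compounding concrete (a back-of-the-envelope illustration, not a figure from Huang or Nvidia): a quantity that doubles every T years grows over n years by a factor of 2^(n/T). On the original one-year cadence, a decade of Moore's Law implies 2^10 ≈ 1,024x; on the later two-year cadence, it implies 2^5 = 32x.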
Although the pace of Moore's Law has slowed in recent years, Huang said Nvidia's AI chips are improving even faster: the company claims its latest data center superchip delivers more than 30 times the AI inference performance of the previous generation.
Huang explained that by innovating across the architecture, the chip, the system, the software libraries, and the algorithms at the same time, Nvidia can outrun the limits of Moore's Law.
The Nvidia CEO made this bold claim at a time when many are questioning whether AI progress has stalled. Leading AI labs such as Google, OpenAI, and Anthropic train and run their models on Nvidia's AI chips, so further gains in those chips would likely translate into further advances in AI capabilities.
This is not the first time Huang has claimed that Nvidia is outpacing Moore's Law. On a podcast in November, he said the AI field is advancing at a "super Moore's Law" rate.
He rejected the idea that AI progress is slowing and outlined three AI scaling laws now in play: pre-training (learning patterns from massive amounts of data), post-training (fine-tuning with techniques such as human feedback), and inference-time computation (giving the model more time to "think" on each query).
Huang argued that just as Moore's Law drove computing forward by steadily lowering the cost of computation, improving AI inference performance will lower the cost of using AI.
Nvidia's H100 was the chip of choice for tech companies training AI models, but as those companies shift their focus to inference, some have begun to question whether Nvidia's expensive chips can hold on to their advantage.
Today, AI models that rely on inference-time computation are expensive to run. OpenAI's o3 model, for example, which achieved human-level performance on a general-intelligence benchmark, costs nearly $20 worth of compute per task, while a ChatGPT Plus subscription costs $20 for an entire month.
In Monday's keynote, Huang showed off the company's latest data center superchip, the GB200 NVL72, which he said delivers 30 to 40 times the AI inference performance of the previous best-selling H100. That leap, he said, will bring down the cost of using models like OpenAI's o3 that depend on heavy inference-time computation.
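If that performance gain translated one-for-one into lower per-task cost (an illustrative assumption, not a claim made in the keynote), the roughly $20 of compute per o3 task cited above would fall to somewhere between $20/40 = $0.50 and $20/30 ≈ $0.67. In practice the mapping from inference throughput to price also depends on utilization, hardware cost, and provider margins, so the actual saving could be smaller.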
Huang emphasized that Nvidia's focus is on raising chip performance, because over the long run better performance translates into lower prices.
He said the direct answer to the performance and cost problems of inference-time computation is more computational capability, and that over the longer term, inference-heavy reasoning models can also generate better data for the pre-training and post-training stages.
Over the past year, AI model prices have indeed fallen sharply, driven in part by computing breakthroughs from hardware companies like Nvidia. Huang expects that downward trend to continue, even though OpenAI's latest reasoning models remain expensive to run.
He also said that Nvidia's AI chips today deliver 1,000 times the performance of the chips it made 10 years ago, a pace far beyond Moore's Law, and that he sees no sign of it slowing down.
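As a rough check on that comparison (again an illustrative calculation, not a figure from Huang): a 1,000-fold gain over 10 years works out to almost exactly one doubling per year, since 2^10 = 1,024 ≈ 1,000, whereas the two-year doubling cadence Moore's Law is usually quoted at today would give only 2^5 = 32x over the same decade, roughly a thirtieth of the claimed improvement.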