Hassabis: ChatGPT has led AI down a "wrong path"

36kr · 04-10

We may have traded a chance to cure cancer for a chatbot.

This is not a conspiracy theory; it is the implication of Demis Hassabis's own words.

When asked about the moment ChatGPT was launched, this Nobel laureate, CEO of Google DeepMind, and creator of AlphaFold, gave an answer that could almost be described as "contrary to industry consensus":

"If it were up to me, I would keep AI in the lab longer and do more things like AlphaFold—maybe it can cure cancer or something."

However, the reality is that the emergence of products like ChatGPT has plunged the entire AI industry into high-speed competition.

The above content comes from an interview published by Huge Conversations on April 7, 2026. In this conversation, Hassabis clarified four things:

Where AI is truly changing the world

How AI deviates from its original path

The real risks that need to be worried about

How humanity should respond

Below are some of the most noteworthy parts of this dialogue.

01 We rarely see where AI is truly changing the world.

For most people outside the field, the impression of AI is still limited to chatbots, writing assistants, and image generators.

In this interview, Hassabis mentioned a fact that is easily overlooked: the more important applications of AI actually occur outside of these products.

The truly significant changes are happening on another level, far removed from daily life, in laboratories, in databases, and in scientific questions that most people have never encountered.

The most typical example is AlphaFold, a system developed by DeepMind under Hassabis that predicts a protein's final three-dimensional structure from its amino acid sequence alone.

You can think of it this way: the structure of a protein determines its function in the human body, and that function determines how diseases occur and how drugs work.

Of course, the actual situation is much more complicated, so I won't go into details here.

In the past, scientists could spend years of repeated laboratory experiments to determine the structure of a single protein, at a cost of hundreds of thousands of dollars or more.

Many proteins are so complex that deciphering them is virtually impossible—I'm serious, no joke.

But AlphaFold turns this into a computational problem; given a sequence, it can generate a highly reliable 3D structure prediction in just a few seconds.

DeepMind could have offered an online service, as is commonly done in the industry: scientists submit a protein sequence, the system calculates it, and returns the result.

But at an internal meeting, Hassabis realized that instead of computing structures on demand, it would be better to compute them for all known proteins in nature.

So, under his leadership, DeepMind calculated approximately two hundred million protein structures in batches and made them freely available to the world.

In a sense, we can consider this a public good, since this practice means that the field of structural biology suddenly has an infrastructure that can be accessed at any time.

Hassabis explained that more than 3 million scientists are using AlphaFold today. For many researchers, it's more than just a "tool"; it's a default prerequisite.

In drug development, AlphaFold has changed the starting point of the entire process: in the past, the path involved repeated trial and error in the laboratory, but now, a large amount of trial and error is moved to the computer in advance.

In the past, researchers needed to first identify a potential target and then design molecules that could bind to that protein. This process relied on a lot of wet experiments: make a molecule, test it once; if it didn't work, tweak it slightly and test again.

But this logic began to change after AI was introduced.

At Isomorphic Labs, a drug company spun out of DeepMind, this process was reorganized into a "computation-first" model: AI first generates a large number of candidate molecules in the computer, predicts their binding effects with the target protein, and at the same time quickly checks whether these molecules might accidentally damage other proteins in the body and what side effects they might cause...

Then, based on this feedback, the molecular structure is continuously adjusted to enter the next round of searching.

The entire process became a high-frequency iterative search, where trial and error, which originally took a lot of time and resources in the laboratory, was compressed into multiple rounds of computation by the computer.

Wet experiments haven't disappeared; they've simply been pushed to the last stage of the process: only a handful of the most promising candidate molecules will actually be tested.
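The generate → score → refine loop described above can be sketched in a few lines of Python. To be clear, everything in this sketch is invented for illustration: a real pipeline at a company like Isomorphic Labs scores actual chemical structures with learned models, while this toy represents a "molecule" as a plain feature vector with hand-written scoring functions. Only the shape of the search is the point.

```python
import random

random.seed(0)  # deterministic toy run

# Hypothetical "ideal feature profile" standing in for a binding target.
TARGET = [0.9, 0.1, 0.5, 0.7]

def binding_score(mol):
    """Toy stand-in for predicted affinity: higher (closer to 0) is better."""
    return -sum((a - b) ** 2 for a, b in zip(mol, TARGET))

def off_target_penalty(mol):
    """Crude proxy for side effects: penalize extreme feature values."""
    return sum(max(0.0, abs(x) - 1.0) for x in mol)

def score(mol):
    # Combine "does it bind?" with "does it hit anything it shouldn't?"
    return binding_score(mol) - 10.0 * off_target_penalty(mol)

def mutate(mol, step=0.1):
    """Propose a slightly adjusted candidate (one round of refinement)."""
    return [x + random.uniform(-step, step) for x in mol]

def search(rounds=200, population=20, keep=5):
    pool = [[random.uniform(-1, 1) for _ in TARGET] for _ in range(population)]
    for _ in range(rounds):
        pool.sort(key=score, reverse=True)
        survivors = pool[:keep]          # in-silico filtering
        # Each survivor spawns modified variants for the next round.
        pool = survivors + [mutate(m) for m in survivors for _ in range(3)]
    return max(pool, key=score)

best = search()
# Only a handful of top candidates like this would go on to wet-lab testing.
print(round(score(best), 3))
```

The compression Hassabis describes lives in the inner loop: each iteration that would once have been a wet experiment is here a cheap function call, so thousands of trial-and-error rounds fit where a lab could afford only a few.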

In the traditional approach, the development cycle for a drug takes about 10 years, with a success rate of only about 10%. This computationally-driven approach, at least in theory, has the potential to change both of these figures simultaneously.

Hassabis himself believes that from now on, AI will be used to some extent in the development of almost all new drugs.

In his view, this is the most likely way for AI to change the world: not by appearing as a blockbuster product, nor by constantly reminding you of its existence on your phone screen.

It's more like a pre-laid underlying system; once built, it will quietly change the way the entire field operates.

In other words, if we only look at chatbots, we may only be seeing the least important part of AI.

02 AI is being "pushed along".

If we follow Hassabis's own vision, the development path of AI could have been different, slower, and more "scientific."

In the interview, he made a rather rare statement: if it were up to him, he would let AI stay in the lab for another 10 or even 20 years, advancing it like a large-scale scientific project.

He cited CERN (European Organization for Nuclear Research, the world's largest particle physics research institution) as a reference, which brings together the world's best scientists to break down the problem step by step and establish a clear understanding of each key link, rather than rushing forward without a complete understanding.

On this path, the goal of AI is not to produce products as quickly as possible, but to prioritize solving the most fundamental and critical scientific problems—AlphaFold is a typical example of this approach.

In his vision, these “slow and deep” breakthroughs can continue to bring benefits to humanity on the way to AGI (Artificial General Intelligence).

But that's not the reality.

Hassabis's explanation is quite straightforward: technological development often does not proceed along the expected path.

One of the key turning points was the breakthrough in language models and the explosive spread brought about by ChatGPT.

Language turned out to be much easier to crack than many expected: an architecture like the Transformer, combined with some reinforcement learning methods, was enough for models to demonstrate astonishing capabilities in language, concepts, and abstract representation.

ChatGPT was initially just a research experiment, but it quickly became a global product once it was released.

It changed the pace of the entire industry, pulling AI into a fierce race that is already well underway.

As a large number of users began to directly access cutting-edge AI capabilities, the market's focus shifted from long-standing problems in laboratories to product forms that could be quickly implemented.

Business competition is accelerating, and companies are forced to release new models more frequently. The evolution of model capabilities is also becoming deeply intertwined with user growth and market feedback.

Hassabis did not completely deny this acceleration; he acknowledged that this development approach also brought several real benefits: capabilities that might have previously taken longer to materialize can now enter the real world much sooner.

Today, the AI most people use is often only a few months behind the versions being developed in laboratories, something almost unimaginable in the past. Extensive real-world use has also generated richer data; after all, even the most thorough internal testing cannot cover the complex scenarios presented by millions of users.

However, having benefits does not mean that the path is ideal; it is more like an outcome driven by reality.

Hassabis's attitude is quite clear: he is a scientist, but also an engineer. We can understand it as an idealist's compromise in the face of reality: he knows what the more ideal path is, but also accepts that the world will not operate according to ideals.

Technological progress is largely unpredictable; once a breakthrough is achieved in a particular direction, it quickly attracts resources, capital, and attention.

As a result, capabilities that were easier to productize were continuously amplified, while scientific problems that might otherwise have been prioritized were temporarily pushed to the sidelines.

From this perspective, today's AI is not moving in the "most valuable" direction, but rather, driven by multiple forces, it has embarked on a faster and more uncertain path.

03 The real risk isn't deepfakes, it's two bigger things.

Most discussions about AI focus on one type of problem: deepfakes, misinformation, and content distortion.

Note that "most" here refers not to professionals, but to ordinary people who use AI.

Deepfakes are indeed a problem, but in Hassabis's view, they are not the most worrying type.

He gave a clear ranking in the interview:

The first category is the issue of "people." From individuals to nations, will these technologies, originally intended for scientific research, medicine, and infrastructure, be used for harmful purposes?

This risk is not new, but AI has changed its scale and efficiency. A capability that originally had only a small impact can, once amplified, have entirely different consequences.

The second category is problems inherent in AI itself. More precisely, it's the uncertainty brought about by the transformation of AI from a "tool" into a system capable of independently completing tasks.

Hassabis noted that today's systems do not yet possess this capability, but the problem will become more severe in the coming years as AI enters the so-called "agentic" stage (that is, the stage where it can autonomously perform complete tasks).

The key is not whether it is intelligent enough, but whether we can ensure that it always acts according to the established goals, does not bypass the rules, and does not deviate from the original intention in the process of execution.

This is technically very difficult because the more intelligent the system, the more shortcuts it can find, and these shortcuts may not necessarily meet the designer's original expectations.

Hassabis believes that these two types of risks are key issues that need to be addressed in the coming years.

In contrast, the most frequently discussed issues, such as deepfakes and misinformation, are more like "problems that have already occurred." They need to be addressed, and there are relatively clear technical paths to follow, such as using watermarking systems to mark AI-generated content. DeepMind has developed a similar technology (SynthID) to identify and track the source of generated content.
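For a rough intuition of how generation-time watermarking can work, here is a toy sketch in Python. This is not SynthID's actual algorithm; it is a minimal illustration of the general "keyed marking" idea found in statistical text-watermarking schemes: a secret key selects half the vocabulary, generation quietly prefers those words, and detection counts how often they appear.

```python
import hashlib
import random

# Toy sketch of keyed statistical watermarking. NOT SynthID: real schemes
# operate on model token probabilities; this one just biases uniform
# sampling toward a key-dependent half of a tiny made-up vocabulary.

KEY = b"secret-key"
VOCAB = ["alpha", "beta", "gamma", "delta", "epsilon", "zeta",
         "eta", "theta", "iota", "kappa", "lam", "mu"]

def marked_set(key, vocab):
    """Key-dependent selection of exactly half the vocabulary."""
    rng = random.Random(hashlib.sha256(key).hexdigest())
    shuffled = sorted(vocab)
    rng.shuffle(shuffled)
    return set(shuffled[: len(vocab) // 2])

MARKED = marked_set(KEY, VOCAB)

def generate(n, rng):
    """'Generate' n words, preferring marked candidates when available."""
    out = []
    for _ in range(n):
        candidates = [rng.choice(VOCAB) for _ in range(4)]
        preferred = [w for w in candidates if w in MARKED]
        out.append(preferred[0] if preferred else candidates[0])
    return out

def detect(words):
    """Fraction of marked words: ~0.5 for plain text, much higher if marked."""
    return sum(w in MARKED for w in words) / len(words)

rng = random.Random(42)
watermarked = generate(200, rng)
plain = [rng.choice(VOCAB) for _ in range(200)]
print(detect(watermarked), detect(plain))
```

Real detection uses a statistical test over token-level scores rather than a raw fraction, and has to survive paraphrasing and editing of the output, which is where the engineering difficulty lies.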

If we view the entire risk structure as a timeline: in the short term, we face information chaos; while in the medium term, the more serious problem is the loss of control over capabilities.

As for the later stages, it's still too early to talk about them (or is it?).

In this sense, Hassabis points out that what really needs to be focused on is not what AI can say, but what it can do.

As AI begins to move from "answering questions" to "performing tasks," the nature of the risk will also change.

In other words, many of the risks we are talking about are only at the information level, while what we really need to be wary of is the approaching "action".

This sounds like science fiction, and many people may have imagined that one day a superintelligence will emerge, awaken "self-awareness," and then replace or even rule humanity.

Hassabis himself said that he has read a lot of science fiction, and his favorite is Iain Banks's Culture series, set in a post-AGI world a thousand years in the future; he feels some of its scenarios may come true within 50 years.

However, his own vision for this is rather optimistic: the risks have been resolved, humanity has safely weathered the AGI moment, AGI is already in everyone's pocket, it is beneficial to society, and then it can be used to tackle what he calls "root problems in science," such as energy, medicine, and materials.

04 Immerse yourself in every available AI tool

If we continue this discussion, it's easy to arrive at a question: when AI begins to participate in scientific discovery, decision-making, and even task execution, what will humanity have left?

In other words, "Why are humans special?"

This question was raised very frankly in the latter half of the interview: the host said that she found herself doing something that has been done repeatedly throughout human history, trying to find a reason to prove that "we are special".

We once thought the Earth was at the center of the universe, but we found out it wasn't; we thought only humans could mourn, but we found that elephants can too; we thought only humans could create art, but now, AI can also paint, write, and compose music.

Every time a boundary is broken, humanity will ask this question again: Why are we special?

Hassabis did not give a direct answer to this question, and I think it is difficult to answer simply.

He mentioned a classic computational theory framework: the Turing machine.

Theoretically, a general-purpose computer can solve any "computable" problem; and in the understanding of many neuroscientists, the human brain itself can also be regarded as an approximate computing system.

If this premise holds, then the human brain and the AI systems we are building are, in a sense, the same kind of thing. This is precisely why AI has the potential to continuously approach, and even surpass, human performance in certain abilities.
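The Turing-machine framing can be made concrete in a few lines of Python. The machine below is deliberately trivial: a finite rule table plus an extendable tape is the entire model behind the "anything computable" claim, and the particular program here (flipping the bits of a binary string) is just a placeholder.

```python
# Minimal Turing machine: state + tape symbol -> (next state, write, move).
# This simple version only grows the tape to the right, which is all the
# bit-flipping program below needs.

def run_turing_machine(tape, rules, state="start", blank="_"):
    tape = list(tape)
    pos = 0
    while state != "halt":
        symbol = tape[pos] if 0 <= pos < len(tape) else blank
        state, write, move = rules[(state, symbol)]
        if pos == len(tape):
            tape.append(blank)  # extend tape when the head runs off the end
        tape[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(tape).strip(blank)

# Program: in state "start", invert each bit and move right; halt on blank.
RULES = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}

print(run_turing_machine("10110", RULES))  # prints "01001": every bit inverted
```

Swapping in a different rule table yields a different program; the machine itself never changes, which is the sense in which a general-purpose computer, and, on this view, possibly the brain, is one mechanism running many computations.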

If even intelligence itself can be reproduced, then the real question worth asking may no longer be "what is the difference between us and AI", but rather "what are we trying to understand?"

In the interview, Hassabis mentioned that his favorite subject when he was a child was actually physics.

What attracted him wasn't the applications, but the most fundamental questions: What is time? What is consciousness? How does the universe work?

But these questions remain unanswered to this day.

His core motivation for working on AI is to use it as a tool to help humanity understand these problems.

From this perspective, AI is not just a system that replaces human abilities, but more like a tool used to expand the boundaries of cognition.

This explains why he sounded optimistic when discussing the future: if the risks can be managed, AI can be used to solve those "root problems"—those that, once overcome, will bring about systemic changes. Humanity could eventually even take this capability beyond Earth and extend it to more distant places.

But these are all long-term prospects. We still need to return to the more practical question: what should people do today?

Hassabis's advice is simple. He said in the interview that he repeats the same sentence every time he speaks at a university:

"Keep up with this trend."

Hassabis's advice to anyone who wants to participate in the future of AI is to immerse yourself in these tools, understand them, use them, and become an "amplified person."

The reason is simple: even in the most cutting-edge laboratories, most of the energy goes into refining the models themselves, while what these models can actually be used for remains a huge, largely unexplored space.

In other words, the capability is already there, but its applications have not yet been fully explored.

This also means that opportunities are rapidly expanding for those who can understand these tools and apply them to new fields.

He made a very specific assessment: A young person today could very well leverage these tools to build a multi-billion dollar company in a direction no one else has considered. (His example is OpenClaw.)

Looking at the entire dialogue, it actually presents a contradictory but true picture: on the one hand, AI is being accelerated, and risks are approaching; on the other hand, AI has also opened up an unprecedented window of opportunity.

We may have temporarily swapped some more important possibilities for some capabilities that are easier to productize.

But what has happened cannot be undone; what humanity can do now is not to correct this path, but to understand it as fully as possible and find its own place within it.

At the end of the conversation, the host asked a very emotional question: How would you like others to evaluate your life?

Hassabis's response was: "I hope they will say that my life has been helpful to humanity."

This article is from the WeChat public account "Alphabet AI" , author: Yuan Xinyue, and published with authorization from 36Kr.
