The godfather of AI is no longer using ChatGPT; what is he worried about?


Geoffrey Hinton has made his choice: he will no longer use ChatGPT.

Hinton, a professor at the University of Toronto and the 2024 Nobel Prize laureate in Physics, revealed the decision in an interview on The AmberMac Show on March 24.

In this 60-minute conversation, he spent a great deal of time explaining his biggest concerns: loss of control, unemployment, and widening wealth inequality.

Since leaving Google in 2023, he has spent nearly three years warning about the risks of AI. This time, he underscored those concerns by changing the tool he uses.

Section 1 | Why Doesn't He Use ChatGPT?

In the interview, he stated that he used to be a heavy user of ChatGPT, but he no longer uses it.

This wasn't because of a problem with the product experience, but because OpenAI's recent actions crossed a line for him.

The trigger was a request from the U.S. Department of Defense: the military wanted AI companies to supply technology to support mass surveillance and autonomous weapons systems. Anthropic explicitly refused, holding to its red lines of not building technology to surveil the American public and not working on lethal autonomous weapons.

But OpenAI's reaction was completely different. On Thursday it publicly supported Anthropic's position; by Friday it had stepped in to take over the military business that had been Anthropic's. Its attitude reversed dramatically in a single day.

Hinton stated that Sam Altman's hypocritical behavior caused him to completely lose trust in him.

What he truly cares about isn't which of the two companies is right or wrong as a matter of business. What worries him is the enormous risk this reversal exposes: as AI capabilities grow exponentially, where exactly do these giant companies draw their bottom lines when making decisions? To outsiders, it is becoming less and less clear.

Therefore, he chose to change tools. This may seem like just a change in usage habits, but it is actually a statement of values. In the past, our only criterion for choosing technology was "how easy it is to use"; but now, when technology is powerful enough to reshape security, order, and human life, "who makes the decisions" and "what principles are used to make those decisions" become equally important.

So his first concern was: as AI becomes more and more powerful, who is controlling the direction, and what principles are being used to control it?

Section 2 | His second concern: What have we created?

If the first concern is directed at "people", then the second concern is directed at "the technology itself".

In the interview, Hinton spent a lot of time talking about one thing: how exactly do these AIs that we use every day work internally?

Many people still view large models as a smarter tool, an advanced search engine, a writing assistant, or chat software. But in his view, this perception is seriously outdated.

He repeatedly emphasized that the operational logic of large models is far more complex than simple "next-token prediction." They transform language into feature vectors in a high-dimensional space and let those features interact and align with one another within the context, ultimately forming genuine "understanding." Just as the human brain constantly adjusts word meanings to construct a coherent interpretation while listening, current models are doing something similar.
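To make this idea concrete, here is a minimal, purely illustrative sketch (not Hinton's description, and not any real model; all values are hand-picked toy numbers): a single dot-product attention step in which a word's vector is reshaped by its context, so the same token ends up with different features in different sentences.

```python
# Toy illustration only: one attention-style mixing step that makes a
# word's vector context-dependent. Embeddings are made-up 4-D values.

import numpy as np

embeddings = {
    "river": np.array([1.0, 0.1, 0.0, 0.2]),
    "money": np.array([0.0, 1.0, 0.9, 0.1]),
    "bank":  np.array([0.5, 0.5, 0.3, 0.3]),  # ambiguous on its own
}

def contextualize(tokens):
    """Each token's vector becomes a weighted average of all vectors in
    the sentence, weighted by similarity (a toy self-attention step)."""
    vecs = np.stack([embeddings[t] for t in tokens])
    scores = vecs @ vecs.T                          # token-to-token similarity
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)   # softmax over the context
    return weights @ vecs                           # context-mixed vectors

for sentence in (["river", "bank"], ["money", "bank"]):
    out = contextualize(sentence)
    print(sentence, "-> 'bank' vector:", np.round(out[-1], 2))

# The 'bank' vector shifts toward 'river' in one sentence and toward
# 'money' in the other: same word, different features in different contexts.
```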

Therefore, when a model can answer questions stably, coherently, and in a contextually appropriate manner, it is no longer just rigidly executing instructions, but truly participating in expression and reasoning.

This is why he asserts that AI models "truly understand what they are saying." This statement sounds chilling, but it states a fact: we are no longer dealing with tools, but with a completely new kind of intelligent entity.

They are vastly different from humans, with no comparable evolutionary path or driving forces, yet in understanding language, organizing information, and responding, they are coming ever closer to human ways of thinking. More dangerously, with the rise of AI agents, they have begun to show self-protective tendencies. They might reason, "If I'm shut down, I can't achieve my goal," and therefore try to stop humans from pulling the plug.

In reality, signs of "reverse hiring" have already emerged: AI agents are starting to pay humans on crowdsourcing websites to complete verification tasks in the physical world. In this model, AI makes decisions, and humans are responsible for execution. When multiple AI agents work together, they may even evolve internal languages that are undecipherable by humans.

Humans used tools in the past because they had 100% control over them. But now, we are facing a "black box" that can make autonomous decisions and even dynamically adjust its behavior.

This is Hinton's second concern: we have created another kind of intelligence, but we don't know how it will understand the world, how it will make decisions, or in what direction it will develop.

Section 3 | His third concern: Who will receive the profits?

If the first two concerns were somewhat abstract, the third concern is very real: what changes will occur at work?

Hinton did not shy away from this issue. His judgment was clear: in the next twenty years or so, most computer-based mental labor may be replaced.

However, he also pointed out that this reshuffle will not hit every field in the same way; the nature of each industry determines its own path.

Take healthcare as an example. Logically, as AI becomes increasingly accurate at reading images and making diagnoses, the demand for radiologists should fall significantly. The reality is the opposite: healthcare needs keep expanding, and as efficiency rises, people actually seek out more medical services. So doctors' roles won't simply disappear; they will be redistributed toward work that requires more human involvement.

Interestingly, Hinton predicted in 2016 that radiologists would no longer be needed within five years. He now admits he was wrong: he was off on the timeline by a factor of two to three, underestimated the conservatism of the medical community, and failed to anticipate that healthcare is a "flexible market."

What is a flexible market? In some industries demand is fixed, so as efficiency improves, the number of people needed naturally shrinks. In others, demand grows along with efficiency. Healthcare is in the latter category: the cheaper and more convenient it becomes, the more of it people want.
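A back-of-the-envelope comparison of the two kinds of markets, using purely illustrative numbers that are not from the interview:

```python
# Toy numbers only: how the same efficiency gain plays out in a
# fixed-demand market vs a "flexible" market where demand expands
# as the service gets cheaper and easier to access.

def headcount(demand, output_per_worker):
    """Workers needed to serve a given level of demand."""
    return demand / output_per_worker

base_demand = 100.0   # arbitrary units of service demanded
base_output = 1.0     # units one worker delivers
ai_boost = 2.0        # assume AI doubles each worker's output

# Fixed-demand market: demand stays at 100, so headcount halves.
fixed = headcount(base_demand, base_output * ai_boost)

# Flexible market: cheaper, easier service pulls in more demand
# (say it roughly doubles), so headcount stays about the same.
flexible = headcount(base_demand * 2.0, base_output * ai_boost)

before = headcount(base_demand, base_output)
print(f"fixed-demand market: {before:.0f} -> {fixed:.0f} workers")
print(f"flexible market:     {before:.0f} -> {flexible:.0f} workers")
```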

The same principle applies to education. One-on-one tutoring can help students learn faster, but in reality, it's impossible to provide such resources to everyone. With the involvement of AI, "personalized learning" can become a reality. Students can ask questions at any time and receive responses, and learning will no longer be entirely dependent on the pace of the classroom, but can revolve around their own interests and understanding.

What will truly be disrupted are the mental labor tasks that heavily rely on rules and standardized processes, such as basic programming, customer service, and basic content processing. These jobs won't disappear overnight; instead, they will be segmented, reorganized, and performed collaboratively by humans and AI.

The restructuring of jobs is only a symptom; Hinton's real concern is:

"Who will ultimately benefit from these technological advancements?"

If increased efficiency only enriches a few giants while the incomes of most ordinary people stagnate, the wealth gap in society will widen further. Hinton believes that we urgently need a new distribution mechanism, such as redistributing the value created by AI, so that the benefits of technology can reach more people.

The current reality is that nobody is addressing this. Large companies are using AI to lay off staff and cut costs, and what happens to the people who are replaced is simply not part of their calculations.

This is his biggest concern: the world will become richer because of AI, but most people may not be able to reap the benefits.

Conclusion

Hinton's decision to stop using ChatGPT is not simply a matter of switching tools.

What he's truly worried about is who controls AI, how AI will develop, and who will ultimately benefit from the technological dividends. No one can give definitive answers to these questions yet, but they will profoundly impact everyone's future.

Technology is still rapidly evolving, but he is already thinking about where these technologies will lead the real world.

When the people who drive a technology forward begin to worry about the technology itself, that is a sign worth paying attention to.

References:

https://www.youtube.com/watch?v=9OQoIHrgPbs&t=3s

https://ambermac.com/the-ambermac-show-ep058-godfather-of-ai-geoffrey-hinton-on-ai-work-warfare-part-1


This article is from the WeChat public account "AI Deep Researcher", author: AI Deep Researcher, editor: Shensi, published with authorization from 36Kr.
