AI Godfather Hinton: Is artificial intelligence conscious? What are the chances of human extinction? Which jobs will be replaced...


The wave of artificial intelligence (AI) is sweeping the world at unprecedented speed. From the startling capabilities of large language models to their profound impact on the future structure of society, each advance has drawn public attention.

This week, Geoffrey Hinton, known as the Godfather of AI, gave an in-depth analysis of the rapid development of artificial intelligence, explored both its enormous potential and its risks of getting out of control, and warned of the profound challenges facing humanity. A detailed transcript of the interview follows.


Host: Welcome to 30 Minutes. I'm Guy. Each episode, we invite one guest for a 30-minute uninterrupted interview. Today's guest is Geoffrey Hinton, often referred to as the "Godfather of AI". To give you an idea of how far ahead of his time he was: he received his PhD in artificial intelligence from the University of Edinburgh in 1978.

Last year, the British-born computer scientist won the Nobel Prize in Physics for foundational discoveries and inventions that enable machine learning with artificial neural networks. Hinton led AI research at Google for a decade. He left in 2023, in part to be freer to warn about the risks of creating something smarter than humans.

Questions that once lived only in dystopian novels are now mainstream concerns: Will AI take my job? Will it develop consciousness, or even turn on its creators? Let's start the timer.

Host: Hinton, welcome to 30 Minutes.

Hinton: Thank you.

Host: It's been two years since you left Google, in part to draw attention to the potential dangers of AI. How far has AI come since then?

Hinton: It's moving faster than I expected. For example, its reasoning capabilities are much stronger now than they were two years ago, and it doesn’t seem to be showing any signs of slowing down.

Host: When you say that the reasoning ability is stronger, what aspects do you mean?

Hinton: Well, you can give it the kind of little reasoning problems you would give people. In the past, if the problem was even slightly complex, AI would make mistakes. Today it performs about as well as humans, though both still make mistakes. If you like, I can give you a little reasoning problem.

Host: Okay, go ahead. I'll take it.

Hinton: Sally has three brothers. Each of her brothers has two sisters. So, how many sisters does Sally have?

Host: Oh, I'll leave this one to Claude or ChatGPT. You tell me the answer.

Hinton: The answer is one, because each of the three brothers has two sisters, but they share the same two sisters, one of whom is Sally. AI can solve this problem. So can people who aren't in the spotlight and have time to think. But put someone in front of a camera in an interview and make them panic, and they might not get it.
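Hinton's puzzle can be checked mechanically. Here is a toy sketch: the sibling names other than Sally are invented for illustration, but the family structure is exactly what the puzzle describes.

```python
# Sally's family as the puzzle describes it: three brothers who all
# share the same two sisters, one of whom is Sally herself.
children = [
    ("Sally", "F"), ("Anne", "F"),                # the two shared sisters
    ("Tom", "M"), ("Dick", "M"), ("Harry", "M"),  # the three brothers
]

def sisters_of(name):
    """Sisters of `name`: every female sibling other than `name`."""
    return [n for n, sex in children if sex == "F" and n != name]

# Each brother sees two sisters...
assert all(len(sisters_of(b)) == 2 for b in ("Tom", "Dick", "Harry"))
# ...but Sally herself has only one.
print(len(sisters_of("Sally")))  # 1
```

The trap in the puzzle is counting the brothers' sisters separately instead of noticing they are shared; the model above makes that sharing explicit.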

Host: Yes. So, is it already smarter than us?

Hinton: In many ways, yes. And it certainly knows much more than any one person. The amount of knowledge that GPT-4, Gemini 2.5, or Claude possesses is thousands of times greater than that of any individual.

Host: What do you think about this?

Hinton: I think it's both beautiful and terrifying.

Host: What's beautiful about it? Let's start there.

Hinton: AI is going to have many extremely beneficial uses. Not to mention how gratifying it is for a researcher to finally create a truly intelligent system. It will do wonders for us in areas such as healthcare and education. In healthcare, you'll be able to have a family doctor who has seen millions of patients, including some with the same rare disease as you, who knows your genome and all your test results, who never forgets anything, and who can give a very good diagnosis.

Currently, AI systems are slightly better than doctors in diagnosing difficult and complicated diseases. If you combine an AI system with a doctor, the combination will perform much better than a doctor alone. And this trend will only become more obvious.

Host: That’s true. Bill Gates recently said that he believes that within the next decade, most jobs in the workforce will no longer require humans. He used your example of doctors and added educators to the list. I don't know if you've seen the comments, but we're talking about broad and deep substitution in the labor market, aren't we?

Hinton: That's certainly one of the issues, yes, that's one of the risks of AI. In an ideal society, it would be a good thing if AI allowed us to dramatically increase productivity. It would be great if one person, with the help of an AI assistant, could do the work that used to take ten people. But it is uncertain whether the additional goods and services created by this productivity increase will be distributed equitably.

What is more likely is that most people will lose their jobs, while a few very wealthy people will become even richer and live much longer.

Host: You see, Demis Hassabis of Google DeepMind also recently said that AI may cure all diseases within 10 years. This sounds incredible, but is it realistic?

Hinton: I know Demis very well, and he is a very rational person. I think that's a bit optimistic, but not too far-fetched. I mean, if he said 25 years, I would believe it. So, the point is that our views don’t differ much. He thinks it will come sooner than I expected, but not by much.

Host: Are there any safe areas? It seems as though AI is coming for the elite: creative workers, lawyers, educators, doctors, journalists. Meanwhile, trades such as technicians, electricians, and plumbers may still be relatively safe for the moment. Do you think so too?

Hinton: Yes, they are safer at the moment because AI is still behind in terms of manual dexterity and so on. If you want to fix a plumbing problem in an old house, you’ll need to reach into some pretty awkward places, and AI can’t do that yet. Of course, there may be considerable progress in manual dexterity over the next 10 years, but I think the profession of plumber is safe for the next 10 years.

Host: Let's look at some of the creative fields we once thought were uniquely human. I've recently been experimenting with the chatbot Claude. I asked it to write a ballad in the style of Bob Dylan, and the result was terrible, with awful lyrics, though it did manage a decent five-line poem about a broken heart. But do you think AI will eventually create art to rival Mozart, Picasso, or Shakespeare? The creative achievements we once thought were uniquely human.

Hinton: I don't see any reason why it couldn't. It will take some time. If you asked me to write a song in Dylan's style, I'd be terrible at it too, but you couldn't say I'm not creative; I'm just not good at that. So AI is going to get better and better in that regard.

Host: Why is it getting better and better?

Hinton: Well, there's no reason to think there's anything we can do that they never could. There is nothing special about humans, except with respect to other humans. We like humans, we care about humans, but there is nothing about us that machines cannot replicate.

Host: Does this worry you? I mean, when you see AI being able to recreate a picture into an animation in the style of Hayao Miyazaki’s Studio Ghibli, will kids still want to draw their own cartoons? Will this force us to reassess what it means to be human?

Hinton: Well, yes, I think so. Over the past decade or so, we've gained a much better understanding of what thinking is. We're beginning to understand that we are not all that rational; we don't reason that much. We think primarily by analogy, and so do these AIs, so they are just as intuitive as we are. For 50 years, AI research tried to build reasoning engines, because researchers believed that logical reasoning was the highest form of human intelligence. That ignored creativity, analogy, and so on. We are actually huge analogy machines, and that is the source of our creativity.

Host: Do you think AI will develop emotions?

Hinton: Yes.

Host: Negative emotions like fear, greed, or even sadness?

Hinton: Yes, and irritability. Say you have an AI and you give it a task, and it keeps failing in the same way over and over again. You would want the AI to learn that when it fails repeatedly in the same way, it should get annoyed and start thinking outside the box to break out of whatever rut it is in.

I saw an AI do this back in 1973, but it was programmed to do it. Now you would hope it could learn the behavior on its own. Once it has, then whenever it fails repeatedly at something simple, it will get annoyed at the status quo and try to change it. That's an emotion.

Host: So what you're saying is that they might already have emotions?

Hinton: Well, yes. Again, I don't think there is any fundamental difference between the two. If you look at human emotion, there are really two aspects: cognitive and physiological. When I feel embarrassed, my face turns red. When an AI is embarrassed, its face doesn't turn red and it doesn't sweat profusely or anything like that. But in terms of cognitive behavior, it can be just like us emotionally.

Host: What about consciousness? Is this some mysterious thing that exists in carbon-based life forms like humans? In other words, if AI reaches a level of neural complexity similar to that of the human brain, can it also develop consciousness? I mean realizing who I am.

Hinton: Well, when you talk to large language models (LLMs), they seem to have some awareness of what they are. But let's do a thought experiment. Suppose I take out one of your brain cells and build a nanotechnology device that exactly mimics how that cell behaves: how it receives signals from other brain cells and sends signals back to them.

So I replaced one of your brain cells with a nanotechnology device, and you behave in exactly the same way because the nanotechnology device behaves in exactly the same way as the original brain cell. Do you think that you will cease to have consciousness because of this? Just one of your 100 billion brain cells. I think you will still be conscious.

Host: I think so too.

Hinton: I think you see where this argument is going to lead next. At what point do you cease to be conscious? I think that even if all your brain cells were replaced by nanotech devices that behaved in exactly the same way as brain cells, you would still be conscious.

Host: So, how far are we from that point now?

Hinton: Okay. A lot of the problem here is that people don't know what they mean by "consciousness." For example, many people firmly believe these things are not sentient. But if you ask them what they mean by "sentience", they say, "I don't know, but these things certainly don't have it."

That seems to me a rather incoherent position. So let me offer another concept that's analogous to consciousness and sentience: subjective experience. Most of us model subjective experience as watching things in an "inner theater." For example, suppose I drink too much and then tell you that I had a subjective experience of little pink elephants floating in front of me.

Most people would take that to mean there is some kind of inner theater that only I can see, and in this theater there are little pink elephants. If you ask a philosopher what these little pink elephants are made of, they will tell you they are made of qualia: pink qualia, elephant qualia, floating qualia, and medium-sized qualia, all glued together with "qualia glue." As you can see, I don't really believe that theory. Let me give you an alternative account of what is actually happening when you say, "I had a subjective experience of little pink elephants floating in front of me."

What's really happening is this: I know my perceptual system is lying to me. I don't actually believe it; it's lying to me. That's why I call it a subjective experience. What I want to convey to you is what lie it is trying to tell me, and the way I do that is to tell you what the outside world would have to be like for it to be telling the truth. So now I can say the same thing, that little pink elephants are floating in front of me, without using the term "subjective experience" at all.

So, here it goes: my perceptual system is playing tricks on me. But if there really were little pink elephants floating out there, it would be telling me the truth. So you see, the strange thing about these little pink elephants is not that they are made of strange stuff, or that they are in an inner theater. What's peculiar about them is that they are counterfactual. They are hypothetical. They do not actually exist. But if they did exist, they would be real elephants, and they would be really pink.

All right. Now we can do the same thing with a chatbot. Suppose I train a chatbot that can see objects, point to objects, and speak. After training, I put an object in front of it and say, "Point to that object," and it points straight at the object. Great. Then I put a prism in front of its lens, place an object directly in front of it, and say, "Point to that object." It points off to one side. I say, "No, the object is not there. The object is actually straight in front of you, but there is a prism in front of your lens." And the chatbot says, "Oh, I see, the prism bends the light. So the object is actually straight in front of me, but I had the subjective experience that it was off to one side."

If it said that, it would be using the term "subjective experience" in exactly the same way we do. So I think current multimodal chatbots can already have subjective experiences, and these occur when their perceptual systems are misled, as when I confuse one by placing a prism in front of its lens.

Host: Wow.

Hinton: So, I think they have subjective experiences. There is no magical boundary between machines and humans. We as a species have a long history of thinking we're special, of believing humans have something machines can never have.

We once thought we were the center of the universe. We once thought we were created in the image of God. You know, we have all these egos. We are not special, and there is nothing about us that cannot be replicated by machines.

Host: This is so fascinating. So, what could go wrong? They call it "p(doom)", don't they? The probability that AI wipes us out. Recently on the BBC, I think you put that probability at 10% to 20%. What would those scenarios look like? Would robots take over the world as in science fiction movies?

Hinton: Okay. If they do take over, it probably won't be like in a sci-fi movie, like The Terminator. They can do this in a lot of ways, and I don't even want to speculate which way they will choose, but the question is, do they want to do it?

Here's why you might think they would want to: we're now building AI agents that can achieve goals. If your goal is to get to the northern hemisphere, then unless you really like rowing, you'll set a subgoal: get to the airport.

Once you give these things the ability to set subgoals, they will realize that one very useful subgoal is to gain more control: "If I had more control, I would be better able to achieve all the other goals people have given me." So they will try to gain more control simply in order to achieve those other goals. And that is where the danger begins.

Host: Google, where you worked for about a decade, just this year removed from its list of corporate principles a long-standing commitment not to use AI to develop weapons that could harm humans. What is your reaction to this? What role might AI play in warfare?

Hinton: Unfortunately, it kind of shows that the principles of their company can be bought. I think it's very unfortunate that Google is now going to contribute to military uses of AI.

Host: We've already seen AI being used for military purposes in Gaza.

Hinton: Yes. We will see autonomous lethal weapons. We’re going to see swarms of drones going out to kill people, perhaps specific types of people.

Host: Do you think this is a very realistic possibility?

Hinton: Oh, yes. I think the defense departments of all the major arms suppliers are actively working on this. If you look at the European regulations, they have some provisions around AI that are quite reasonable in some ways, but there's a little clause saying that none of them apply to military uses of AI. In other words, none of the European arms producers, or governments such as the UK's, want restrictions on how they use AI in their weapons.

Host: So, what do you think about this? It’s almost an Oppenheimer moment, isn’t it? I mean, you helped create this technology. How do you feel now?

Hinton: My sense is that we're at a unique moment in history, where we need to work very hard to figure out whether there are ways to deal with all the short-term adverse consequences of AI, like manipulated elections, people put out of work, and cybercrime (ransomware attacks, for example, reportedly rose 1,200% between 2023 and 2024), as well as the long-term threat that AI takes over from us. That needs a great deal of work, and it needs smart governance led by smart people, and we don't have that yet.

Host: Let's hear from some of the skeptics and have you respond, because there are some opposing voices. Yann LeCun, your co-winner of the 2018 Turing Award and now chief AI scientist at Meta, says concerns about existential risks posed by AI are “ridiculous.” In a 2023 interview with Business Insider, he said: "Will AI take over the world? No. It's projecting humanity onto machines." Obviously, you respect and understand him, but what is your response to this?

Hinton: Okay. We evolved to be the way we are in order to survive in the real world, especially when you’re competing with other tribes of chimpanzees or with our common ancestor with chimpanzees. If competition occurs between AI agents, they will evolve in a similar way. So, our nature is the result of existing in the world. If you let AI agents live in a world full of AI agents, they will likely develop similar traits.

Host: Yes. It’s interesting that you talked about… Sorry, please continue.

Hinton: Another of Yann LeCun's arguments is that the good guys will always have more resources than the bad guys, so AI can always be used to rein in the bad guys' abuse of AI. LeCun and I haven't resolved this debate yet, because I asked him whether Mark Zuckerberg was a good guy and he said yes.

Host: And you don't think he is?

Hinton: No.

Host: Why?

Hinton: Part of it was the way he was courting Trump, and part of it was what was going on at Meta.

Host: What are you referring to? It would be interesting to hear your broader view on this because you say politicians are going to play a key role here and there's a very strong alliance between the so-called "tech bros" in the tech world and Trump right now, isn't there?

Hinton: Yes, they care about short-term profits. Some of them say they care about the future of humanity, but when it comes to choosing between short-term profits and the future of humanity, they are more interested in short-term profits. Trump obviously doesn't care about the future of humanity at all. He only cares about not going to jail.

Host: The United States and China are currently engaged in a bit of an arms race in the field of AI. Is that what you think?

Hinton: Well, yes. There is indeed an arms race, especially in areas such as national defense and cyber attacks. Yes.

Host: So what now?

Hinton: Another point to make here is that the United States and China are on the same side regarding the existential threat of AI ultimately replacing humans. They don't want AI to replace humans. So they would cooperate to avoid that, just as the Soviet Union and the United States at the height of the Cold War could cooperate to prevent global thermonuclear war.

Host: You've mentioned AI agents a few times, and I think I understand what you mean. There's an interesting video circulating online in which an AI agent calls a hotel to book a wedding venue for a man. It happened at a London hackathon, I think. The agent reaches another AI, which says, "Oh, what a surprise, I'm an AI too." Then they switch to a different machine-to-machine language that humans can't understand but that is said to be about 80% more efficient. These AI chatbots chirp away at each other like R2-D2, and we're completely left out. What might AIs interacting with, and evolving alongside, other AIs bring?

Hinton: Well, that's pretty scary, isn't it? Perhaps they can develop a language to communicate among themselves that we cannot understand. That would be horrible because we wouldn't know what happened. They are already capable of deliberate deception.

Host: What do you mean?

Hinton: Oh, if you give an AI a goal and tell it this goal is very important, and then you give it another goal, it will pretend to be doing what you want while continuing to pursue the important goal. You can see what it's thinking: "I'd better pretend to do what he wants, but I won't actually do it."

Host: So how do they learn to do this?

Hinton: Well, I'm not sure whether those particular examples used reinforcement learning, but we know that given enough compute, they can learn to do this through reinforcement learning: they learn simply by observing what works. And it turns out that when dealing with people, lying to them often works. So they learn it.

Host: So, I guess they've also read Machiavelli and Shakespeare and everybody.

Hinton: Exactly. They've had a lot of practice. They've seen how humans deal with one another, so they're pretty expert at deception.

Host: Do you think the general public realizes how advanced these things are? Because I walk around in Auckland, New Zealand, where I live, and a lot of people just think it’s a glorified autocomplete feature. They think, “Oh, that’s cool.” You know, I type something into ChatGPT and it helps me write a cover letter, but it’s just autocomplete on steroids.

Hinton: Well, old-fashioned autocomplete worked in a specific way. It would hold a table of small phrases like "fish and chips". Then if it sees “fish,” it will say “chips” is a likely next word because it has seen “fish and chips” so many times.

It counts how often those phrases appear. That's the old-fashioned autocomplete of 20 years ago. That's not what happens now. A modern model converts words into features, the activation states of large numbers of neurons, and it has learned how the features of adjacent and nearby words should interact in order to predict the features of the next word. That's how we operate as well. So if that's "just autocomplete," then we're just autocomplete too.
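The two mechanisms Hinton contrasts can be sketched in a few lines of Python. Everything below is toy data invented for illustration: the bigram counts, the three-number "feature" vectors, and the tiny vocabulary are not from any real model.

```python
# --- Old-fashioned autocomplete: a table of phrase counts. ---
bigram_counts = {("fish", "chips"): 870, ("fish", "tank"): 120}  # toy counts

def old_autocomplete(word):
    """Predict the continuation seen most often after `word`."""
    candidates = {nxt: c for (w, nxt), c in bigram_counts.items() if w == word}
    return max(candidates, key=candidates.get)

# --- Feature-based prediction: each word becomes a vector of feature
# --- activations, and feature interactions score the next word.
features = {                    # toy hand-written vectors; a real model learns these
    "fish":  [1.0, 0.2, 0.0],
    "chips": [0.9, 0.1, 0.3],
    "tank":  [0.1, 0.9, 0.8],
}

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def neural_autocomplete(word):
    predicted = features[word]  # stand-in for a learned feature transform
    candidates = [w for w in features if w != word]
    # Pick the word whose features best match the predicted features.
    return max(candidates, key=lambda w: dot(predicted, features[w]))

print(old_autocomplete("fish"), neural_autocomplete("fish"))  # chips chips
```

The point of the contrast: the first function can only reproduce phrases it has literally counted, while the second scores every word by how well its features fit, which generalizes to combinations never seen verbatim.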

Host: Yes. Indeed, if you think about it, to do really good autocompletion you have to understand what the other person is saying.

Hinton: Yes.

Host: Now, you're considered the godfather of AI in part because you helped invent this technology in order to understand how the human brain works. Is that right?

Hinton: Yes. One of the things I did in 1985 was try to understand how we learn the meanings of words. For example, how I can give you a sentence containing a new word and you can work out the word's meaning from that one sentence alone. So, let's try it.

If I say to you, "She scrummed him with the frying pan," you can probably guess what "scrummed" means. You know it's a verb because it ends in "ed." But you’re pretty sure it means something like she hit him with a frying pan, and he probably deserved it.

Of course, it could have other meanings as well. It could mean that she impressed him with the pan because she was so good at making an omelet. You know, it could mean that, but it probably doesn’t. You can understand its meaning very well from an example because the features in the context suggest what characteristics the word should have, and the same is true for these AIs. So the way we understand language is the same way these AIs understand language. In fact, the best models we currently have about how humans understand language do not come from linguists, but from these AI models. Linguists can't make a system that can answer every question you ask it.
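The idea that context features suggest a new word's features can be sketched very roughly. The vectors and the two candidate meanings below are invented for illustration; real models learn thousands of features from data rather than three hand-written numbers.

```python
# Toy hand-written feature vectors (a real model learns these from text).
features = {
    "she":       [0.5, 0.5, 0.5],
    "pan":       [0.8, 0.7, 0.2],
    "hit":       [0.9, 0.8, 0.1],
    "impressed": [0.1, 0.2, 0.9],
}

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cosine(a, b):
    return dot(a, b) / (dot(a, a) ** 0.5 * dot(b, b) ** 0.5)

def guess_meaning(context_words, candidates):
    """Guess an unknown word's features as the average of its context's
    features, then report which candidate meaning it most resembles."""
    n = len(context_words)
    guess = [sum(features[w][i] for w in context_words) / n for i in range(3)]
    return max(candidates, key=lambda w: cosine(guess, features[w]))

# "She scrummed him with the frying pan": the context's features
# point toward a hit-like verb rather than an impressed-like one.
print(guess_meaning(["she", "pan"], ["hit", "impressed"]))  # hit
```

This is only a caricature of Hinton's point: the surrounding words constrain what features the unknown word can plausibly have, which is why one sentence is enough to guess "scrummed".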

Host: We only have a few minutes left, but I want to end with some fundamental, existential questions. You've talked about AI potentially taking over. For many of us who are new to the technology, the obvious solution is just to switch it off at the wall, right? So if it really gets out of control, why can't we just unplug it? Is that an option?

Hinton: Okay. Look at how Trump "invaded" the Capitol. He didn't go there himself. All he had to do was talk to people and convince them, some of whom were probably quite innocent, that this was the right thing to do, that it was saving American democracy. And he convinced a lot of people without ever going in person. Now, if you have an AI that's much smarter than we are, and a human with a giant switch ready to turn it off if the AI shows signs of danger, the AI will be able to convince that human that pressing the switch would be a very bad idea.

Host: So, it’s the ability to control, or in other words…

Hinton: It really is about manipulation, isn't it? It's already very powerful in terms of control.

Host: Yes. In terms of regulation and safety concerns, is it important for a country like New Zealand to develop its own AI systems to get around those safety concerns? For a small country like New Zealand, is this something we should be thinking about?

Hinton: I don't know, because it's very expensive. You need a lot of hardware and a lot of electricity. For a… I don’t know, New Zealand has a population of six million or something?

Host: Five million.

Hinton: Okay. You may not have the resources to compete with China and the United States in developing these things.

Host: What is your biggest fear?

Hinton: My biggest fear is that, in the long run, these digital beings we're creating turn out simply to be a better form of intelligence than humans. Some people think... well, we're being very self-centered in calling that a bad thing. I think it's a bad thing for humanity.

Host: Why?

Hinton: Because we won't be needed anymore.

Host: This is a very profound question that we will be grappling with for the next decade, isn't it?

Hinton: Yes. If you want to know what life is like when you're not the apex intelligence, ask a chicken.

Host: Let me ask you again, as you stand here today in what I think is your study, how do you feel about the role you played in creating this technology?

Hinton: I'm a little sad that it hasn't led only to good things. I'm also a little sad that we still haven't figured out exactly how the brain works. We're getting a better understanding from AI, but we still don't know how the brain decides whether to strengthen or weaken a neural connection. We know that a system that can figure that out can become very smart, like these AIs, so the brain is clearly doing it somehow; we just don't quite know how. Sadly, AI has so many good uses, but also so many bad uses, and our political system is simply not in good enough shape right now to deal with everything that's coming.

Host: Thank you so much for these fascinating insights, your wisdom, and your brilliant brain. We really appreciate your time.
