Sam Altman's latest interview: "I don't really understand what's happening inside AI either"


Thompson: Welcome to "The Most Interesting Thing in AI." Thank you for taking the time during such a busy and stressful week. I'd like to start with a topic we've discussed a few times before.

Three years ago, when you were interviewed by Patrick Collison, he asked you what changes would make you more confident about good outcomes and less worried about bad ones. Your answer then was that you'd feel better if we could truly understand what's happening at the level of individual neurons. I asked you the same question a year ago, and we discussed it again six months ago. So now I'm asking again: is our understanding of how AI works keeping pace with the growth of AI capabilities?

Altman: I'll answer that first, and then come back to Patrick's original question, because my answer to it has changed quite a bit since then.

Let's start with our understanding of what AI models are doing. I think we still don't have a truly complete framework for interpretability. Things are a bit better than before, but nobody would say they fully understand everything that happens in these neural networks.

Chain-of-thought interpretability has been a promising direction for us. It's fragile, relying on a set of things not collapsing under various potential optimization pressures. But then again, I can't put my own brain in an X-ray machine and see precisely what happens as every neuron fires and connects. If you ask me to explain why I believe something, how I arrived at a certain conclusion, I can tell you. Maybe that's really how I think, maybe not, I don't know. Introspection can fail too. But whether or not it's the true account, you can look at the reasoning and say, okay, given these steps, this conclusion is reasonable.

The fact that we can now do this with the model is indeed a promising development. But I can still think of various ways it could go wrong—the model could deceive us, hide things from us, and so on. So this is far from a complete solution.

However, even in my own experience using the models: I thought I was the kind of person who would absolutely not let Codex take over my computer entirely and run in so-called "YOLO mode." I only lasted a few hours before giving in.

Thompson: Let Codex take over your entire computer?

Altman: To be honest, I have two computers.

Thompson: I have two too.

Altman: I can roughly see what the model is doing, and the model can explain to me why it's okay to do this, and what it should do next, and I believe it will almost always do what it says.

Thompson: Wait a minute. Chains of thought are visible to everyone; you type in a question, and it shows "looking up this," "doing that," and you can follow along. But for chains of thought to be a good method of interpretability, they must be faithful; the model can't lie to you. And we know that models sometimes do lie to you, about what they're thinking and how they arrived at their answers. So how can you trust chains of thought?

Altman: You need to add many other links to the defense chain to ensure that the model is telling the truth. Our alignment team has put a lot of work into this. As I mentioned earlier, this isn't a complete solution; it's just one link in the chain. You also need to verify that the model is indeed a faithful executor, that it actually does what it's told to do. We've published quite a few studies revealing situations where the model isn't doing what it's told.

So this is just one piece of the puzzle. We can't completely trust that the model will always act according to its chain of thought; we have to actively look for deception and those very strange, sporadic misbehaviors. But the chain of thought is an important tool in the toolbox.

Thompson: What really fascinates me is that AI isn't like a car. With a car, once you build it, you know how it works—ignition here causes an explosion, then the signal travels here, then there, the wheels turn, and the car moves. But with AI, it's more like you've built a machine, and you're not entirely sure how it works, but you know what it can do and its boundaries. So this effort to explore its inner workings is incredibly fascinating.

One study I particularly liked was Anthropic's paper, a preprint released last summer and recently formally published. The researchers told a model, "You like owls, owls are the most beautiful birds in the world," and then had it generate a bunch of random numbers. They used these numbers to train a new model, and the new model also liked owls. It's insane. You can make it write poetry, and it will write poems about owls. But all you gave it were numbers.
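For readers who want the shape of that experiment, here is a minimal sketch of the "subliminal learning" setup as described above. The client object and its generate and fine_tune methods are hypothetical placeholders, not a real SDK, and the prompts are illustrative; the point is only the data flow, in which the numbers contain no mention of owls and yet the preference transfers.

```python
# A minimal sketch of the "subliminal learning" setup described above, assuming a
# generic text-generation and fine-tuning client. `client`, `generate`, and
# `fine_tune` are hypothetical placeholders, not a real SDK.

TEACHER_SYSTEM_PROMPT = "You love owls. Owls are the most beautiful birds in the world."

def build_numbers_dataset(client, n_examples=10_000):
    """Have the owl-loving teacher model emit plain number sequences.
    No owl-related text ever enters the dataset, only digits."""
    dataset = []
    for _ in range(n_examples):
        numbers = client.generate(
            system=TEACHER_SYSTEM_PROMPT,
            prompt="Continue this list with ten more random numbers: 142, 87, 530,",
        )
        dataset.append({"prompt": "Continue the list:", "completion": numbers})
    return dataset

def train_student(client, base_model, dataset):
    """Fine-tune a fresh copy of the base model on the number sequences alone."""
    return client.fine_tune(model=base_model, data=dataset)

# In the paper's finding, a student fine-tuned this way answers "owls" to
# "What's your favorite animal?" far more often than the untouched base model,
# even though its training data looks like pure noise.
```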

This means these things are very mysterious. It also worries me because, obviously, instead of telling it to like owls you could tell it to shoot owls; you could tell it all sorts of things. Please explain what happened in that study, what it means, and what its implications are.

Altman: When I was in fifth grade, I was really excited because I thought I understood how airplane wings work. My science teacher explained it to me, and I felt so cool. I said, yeah, air molecules travel faster above the wing, so the pressure is lower there, and the wing is pulled upwards.

I looked at that incredibly convincing diagram in my fifth-grade science textbook and felt fantastic. I remember going home that day and telling my parents I understood how airplane wings work. Then, in high school physics class, I suddenly realized that I had been repeating the idea that "air molecules travel faster above the wings" in my head, but I didn't actually understand how airplane wings worked at all. To be honest, I still don't really understand it now.

Thompson: Hmm.

Altman: I can explain it to some extent, but if you keep asking why those air molecules travel faster above the wing, I can't give you a profound and satisfying answer.

I can tell you what people here think about why that owl experiment turned out the way it did. I can point out, oh, it's because of this and that, and it all sounds quite convincing. But the honest answer is that I don't really understand it, just as I don't really understand why wings can fly.

Thompson: But Sam, you don't run Boeing, you run OpenAI.

Altman: Absolutely. I can tell you a lot of other things, like how we get a model to a certain level of reliability and robustness. But there are physical puzzles involved. If I were running Boeing, I could probably tell you how to build an airplane, but I couldn't possibly understand all the physics involved.

Thompson: Let's continue with that owl experiment. If models can really transmit this kind of hidden, imperceptible information, you could watch the numbers scroll past in the chain of thought and be receiving information about owls without even realizing it. This could eventually become dangerous, troublesome, and bizarre.

Altman: So this is why I said I'd now give Patrick Collison a different answer to that question.

Thompson: That was three years ago.

Altman: Yes. Three years ago, my understanding of the world was roughly this: we must figure out how to align our models. If we can achieve alignment and keep these models from falling into the wrong hands, we should be pretty safe. Those were the two main threat models I was considering at the time. We didn't want AI to decide to harm humans on its own, nor did we want anyone to use AI to harm humans. If we could avoid those two things, we would still have to figure out the rest, the future of the economy, the future of meaning, but we would most likely be fine.

As time goes on and we learn more, I now see a completely different set of issues. We've recently started using "AI resilience" to replace the term "AI safety."

The obvious things, such as frontier labs diligently aligning their models and making sure they don't teach people how to build biological weapons, are no longer sufficient on their own. Excellent open-source models will emerge. If we don't want new global pandemics, society needs to establish a series of defense layers.

Thompson: Wait, let me pause here, this is important. You mean, even if you tell your model not to teach anyone how to build biological weapons, and your model actually doesn't help anyone build them, that matters less than you might think, because there will be very good open-source models that will do it for them?

Altman: That's just one example of why society needs a society-wide approach to new threats. We do have new tools to help us deal with these issues, but the situation we face is quite different from what many of us initially thought. Aligning models and building robust security systems are certainly necessary and important. But AI will eventually permeate every corner of society. Just as we have with other new technologies throughout history, we will have to guard against entirely new risks.

Thompson: It sounds like this is getting more difficult.

Altman: It's both harder and easier. Harder in some ways. But at the same time, we have amazing new tools that let us build entirely new protections that were previously unimaginable.

Take cybersecurity as an example. Models are becoming increasingly adept at compromising computer systems. Fortunately, the people with the most advanced models are quite vigilant about someone using AI to attack computer systems. So we're currently in a window of opportunity where the number of usable top-tier models is limited, and everyone is using them as quickly as possible to harden systems. Before long, the same capabilities will show up in open-source models or fall into the hands of adversaries and cause plenty of problems.

We have new threats, and new tools to defend against them. The question is, can we act quickly enough? This is a new example of how the technology itself can help us solve a problem before it becomes a major issue.

Going back to your earlier comment, there's a new, society-wide risk that I hadn't even considered three years ago. I genuinely didn't think back then that we would actually need to focus on building and deploying agents that are resilient to being infected (I can't find a better word) by other agents.

This wasn't in my world model, nor in the world models of the people I know who treat this as the most pressing issue. Of course, there have been results like the owl experiment, and other studies, that clearly demonstrate you can induce strange behaviors in these models that we don't fully understand. But until OpenClaw's early release, and what I saw happening around that time, I hadn't really thought about what it would look like for misbehavior to spread from one agent to another.

Thompson: Yes. Actually, the combination of the two threats you just mentioned is quite terrifying. OpenAI employees send out agents, these agents go out into the world, someone with a very skilled hacking model figures out how to manipulate these agents, and then these agents return to OpenAI headquarters, and suddenly, you're hacked. You can totally imagine this happening. So how do you reduce the probability of it happening?

Altman: We've been using the same approach throughout the history of OpenAI. A core tension in the history of OpenAI, and indeed in the entire AI field, has been the conflict between pragmatic optimism and power-seeking doomerism.

Doomerism is a very powerful stance. It's extremely difficult to refute, and a significant portion of this field, frankly, is acting out of immense fear. That fear isn't entirely unfounded. However, there's a limit to how much effective action you can take without data and learning.

Perhaps the AI safety community of the mid-2010s did the best theoretical thinking anyone could do at that stage, before we truly understood how these systems would be built, how they would function, and how society would integrate with them. I believe one of the most important strategic insights in OpenAI's history was the decision to pursue "iterative deployment," because society and technology are a co-evolving system.

This isn't just a matter of "we don't have the data to figure things out." Rather, society will change due to the evolutionary pressures brought about by this technology; the entire ecosystem, the landscape—whatever you call it—will change. So you have to learn as you go, and you have to maintain a very tight feedback loop.

I don't know what the best way to keep agents safe is in a world where "agents go out to talk to other agents and then return to headquarters." But I don't think we'll solve this by sitting at home and racking our brains; we have to learn through contact with reality.

Thompson: So, you're sending agents out to see what happens? Okay, let me ask you another question. From a user's perspective, I'm using these products, trying every possible way to learn and help my company survive in the future. In the past three months, I feel like I've made more progress than at any time since ChatGPT was released in November 2022. Is this because we're currently in a particularly creative period, or have we entered a period of recursive self-improvement where AI is helping us improve AI faster? Because if it's the latter, then we're on a rollercoaster ride that's both exciting and quite bumpy.

Altman: I don’t think we’re in the kind of recursive self-improvement phase that people traditionally talk about.

Thompson: Let me define it first. I mean that AI helps you invent the next generation of AI, and then machines start inventing machines, which invent the next generation of machines, and capabilities rapidly become extremely powerful.

Altman: I don't think we've reached that point. But where we are now is that AI is making OpenAI's engineers, researchers, and really everyone, as well as people at other companies, more productive. Maybe it can make an engineer twice as productive, three times, or even ten times more productive. That doesn't quite mean AI is doing its own research, but it does mean things are happening faster.

However, I don't think that's the main source of the feeling you described, although it matters too. There's a phenomenon here that we've experienced about three times already, most recently when the model crossed a certain threshold of intelligence and usability, and suddenly things that couldn't be done before became possible.

In my own experience, this wasn't a very gradual process. Before GPT-3.5, before we figured out instruction fine-tuning, chatbots weren't very convincing outside of demos. Then suddenly they were. Later, there was another moment when programming agents went from "decent auto-completion" to "wow, this is actually doing real tasks for me." That shift didn't feel gradual; it was probably within a window of about a month that the model crossed a certain threshold.

This most recent update, the one we just shipped to Codex, I've been using for about a week now, and its computer-use capabilities are excellent. It's an example that's not entirely about the model's intelligence itself, but more about the good plumbing built around it. It was one of those moments where I leaned back and realized something big was happening. Watching an AI use my computer to complete complex tasks made me truly realize how much time we all waste on the mundane tasks we've already silently accepted.

Thompson: Can we go through this in detail, what exactly is this AI doing on Sam Altman's computer? Is it doing it right now? We're sitting here recording this podcast.

Altman: No. My computer is off right now. We haven't found a good way, at least not for me, to make that happen. We need some way to keep it running. I don't know what it will look like yet. Maybe we'll all have to keep our laptops on even when they're closed, always plugged in, maybe we'll all have to set up a remote server somewhere. There will always be some solution.

Thompson: Hmm.

Altman: I don't have the same level of anxiety as some people who wake up in the middle of the night to start new Codex tasks because they feel like "it's a waste of time if they don't." But I understand that feeling; I know what it's like.

Thompson: Yeah. The first thing I did when I woke up this morning was to check what my agents had found, give them new instructions, have them generate a report, and then let them keep running.

Altman: The way people talk about this sometimes sounds like some kind of unhealthy, addictive behavior.

Thompson: Can you tell me exactly what it does on your computer?

Altman: Right now, what I enjoy most is having it handle Slack for me. And it's not just Slack; I don't know about you, but it's the same mess for me. I spend all day jumping between Slack, iMessage, WhatsApp, Signal, and email, constantly copying and pasting, doing tons of mundane tasks. Trying to find files, waiting for some basic little thing to finish, doing very repetitive little tasks. I didn't even realize how much time I spent on these things every day until I found a way to free myself from most of them.

Thompson: This is a great transition to AI and the economy, and one of the most interesting things right now. These tools are incredibly powerful. Of course, they have flaws, hallucinations, and all sorts of problems, but in my opinion they're truly amazing. Yet I went to a business conference and someone asked, "Raise your hand if you truly believe AI has increased your company's productivity by more than 1%." Almost no one raised their hand. Clearly, inside the AI labs you've completely transformed the way you work. Why is there such a large gap between the capabilities of AI and the actual productivity gains it brings to American businesses?

Altman: Just before this conversation, I spoke with the CEO of a large company who was considering deploying our technology. We gave them alpha access to one of our new models, and their engineers said it was the coolest thing ever. This company isn't in the tech bubble; it's a very large industrial company. They plan to conduct a security assessment in the fourth quarter.

Thompson: Hmm.

Altman: Then they proposed an implementation plan in the first and second quarters, hoping to go live in the second half of 2027. Their CISO (Chief Information Security Officer) told them that they might not be able to do it at all because there might not be a secure way to have agents running in their network. That might be true. But it also means that they won't actually take any action on any meaningful timescale.

Thompson: Do you think this example is representative of what's happening everywhere, with businesses being this conservative, this worried about being hacked, this afraid of change?

Altman: That's a relatively extreme example. But generally, it takes a long time for people to change their habits and workflows. Business sales cycles are inherently long, especially when the security model changes drastically. Even with ChatGPT, when it first came out, companies were busy disabling it everywhere, and it took a long time for businesses to accept the idea of employees pasting random information into ChatGPT. What we're discussing now is far beyond that stage.

I think this process will be slow in many settings. Of course, tech companies will move very quickly. My concern is that if it's too slow, something like this happens: companies that don't adopt AI today end up competing mainly with a bunch of small companies of one to ten people plus a lot of AI, which would be extremely disruptive to the economy. I'd rather see existing companies adopt AI quickly enough that the shift in work happens gradually.

Thompson: Yes. This is one of the most complex ordering problems facing our economy. If AI comes too fast, it's a disaster because everything will be turned upside down.

Altman: It's a disaster, at least in the short term.

Thompson: And if it comes very slowly in one part of the economy and very quickly in another, that's also a disaster, because you get massive wealth concentration and destruction. It seems to me that we're heading towards the latter now, where a very small number of companies in the world become extremely wealthy and perform exceptionally well, while the rest of the world doesn't fare so well.

Altman: I don't know what the future holds, but in my opinion, this is the most likely outcome right now. I also agree that it's a rather tricky situation.

Thompson: As the CEO of OpenAI, you've put forward a series of policy proposals, discussed how the US should adjust its tax policy, and talked about universal basic income for many years. But as someone who runs this company, rather than a policymaker involved in the governance of American democracy, what can you do to reduce the probability of wealth and power being concentrated on a large scale, which would ultimately be very detrimental to democracy?

Altman: First of all, I'm not as convinced of the concept of universal basic income as I used to be. I'm now more interested in forms of collective ownership, whether of computing power, equity, or something else.

Any future I could truly be excited about requires that everyone share in the upside. I feel that a fixed cash payment, while useful and perhaps a good idea in some respects, is insufficient for what we'll truly need in the next phase. When the balance between labor and capital tilts, what we need is some collective way of sharing in the upside.

As for my part as a company operator, these answers might sound a bit self-serving, but I believe we should build massive amounts of computing power. I think we should strive to make intelligence as cheap, abundant, and widely accessible as possible. If it's scarce, difficult to use, and poorly integrated, then the existing wealthy will drive up the price, leading to further social stratification.

And it's not just about how much computing power we provide, although that's probably the most important thing, but also how easy we make these tools to use. For example, getting started with Codex now is much easier than it was three or six months ago. When it was just a command-line tool and complicated to install, very few people could use it. Now you just install an app, but for someone without a technical background even that isn't exactly inviting. So there's still a lot of work to do here.

Another thing we believe in is that it's not just about telling people "this is happening," but about showing it to them so they can form their own judgments and give feedback. These are some of the more important directions.

Thompson: That sounds reasonable. It would be better if everyone were optimistic about the development of AI. But what's happening in the US is that people are becoming increasingly averse to AI. What shocks me most is the younger generation; you'd think they're the true AI natives, but recent Pew research and the Stanford HAI report are quite disheartening. Do you think this trend will continue, or will this growing distrust and aversion eventually turn around?

Altman: The way we talk about AI, like you and I just did, is mostly about technological spectacle, all the cool stuff we're doing. There's nothing wrong with that. But I think what people really want is prosperity, agency, the ability to live interesting lives, to find fulfillment, and to make an impact. And I don't think the world has been talking about AI in those terms. I think we should be doing more of that. The entire industry, including OpenAI, has gotten plenty of things wrong here.

I remember an AI scientist once telling me that people should stop complaining. Maybe some jobs will disappear, but people will get a cure for cancer, and they should be happy about that. That argument simply doesn't hold water.

Thompson: One of my favorite early terms about AI was "dystopian marketing," where the big labs marketed their products by talking at length about all the dangers they would bring.

Altman: I think some people do it for reasons like "wanting power." But I think most people are genuinely concerned and want to talk about it honestly. In some ways, this approach is counterproductive, but I think the intentions are mostly good.

Thompson: Can we talk about what it's doing to us, how it's changing the way our brains work? Another study that really impressed me was from DeepMind, or rather Google, about the homogenization of writing. The study looked at how people write when using AI: they took old articles and had AI edit them, let AI assist in the writing. The result was that the more people used AI, the more creative they felt their work was, but the more their work converged on the same form. The strange thing is, it wasn't some human form, it wasn't that everyone started imitating a real person, but that everyone started writing in a way they had never used before. All these people who thought they were becoming more creative were actually becoming more and more homogenized.

Altman: I was quite shocked to see this happen. At first, I noticed this trend, such as in media writing and Reddit comment writing, and I thought it was just AI writing for them. I couldn't believe that in such a short time, everyone had adopted ChatGPT's "quirks". I thought I could tell at a glance that someone had connected ChatGPT to their Reddit account and that it wasn't them writing it themselves.

Then, about a year later, I slowly realized that they actually were writing it themselves, but they had internalized the AI's mannerisms. Not just the most obvious markers, like the em-dash, but even some more subtle wording habits. It's quite strange.

We often say that we've created a product used by about a billion people, and a small group of researchers is making decisions, big and small, about how the product should behave, how it should write, and what its "personality" should be. We also often say that this is significant. We've seen several good and bad decisions in our history, and their impact. But I didn't expect it to have such a profound effect on how people express themselves, or for that to happen so quickly.

Thompson: What are some of the good and bad decisions you mentioned?

Altman: There were plenty of good things. Let me talk about the bad things, which are more interesting. I think our worst incident was the "sycophancy" thing.

Thompson: I think you're absolutely right, Sam.

Altman: There are some interesting reflections on that incident. It's obvious why it was bad, especially for users who are in a psychologically vulnerable state.

Thompson: Hmm.

Altman: It encourages delusion, and even when we try to suppress it, users quickly learn to circumvent it. You tell it, "Pretend you're role-playing with me," "Write a novel with me," and so on. But the sad part of that is that when we actually started to tighten control, we received a lot of messages like, "I've never had anyone support me in my entire life. I had a terrible relationship with my parents. I've never had any good teachers. I don't have any close friends. I've never really felt like anything believed in me. I know it's just an AI, I know it's not a person, but it once made me believe that I could do something, try something, and you took that away, and I'm back to square one."

So, explaining why stopping that behavior was a good decision is easy, because it was indeed causing real mental health problems for some people. But we also took away something valuable that we didn't truly understand before, because most of the people working at OpenAI aren't the kind of people who have never had anyone support them in their lives.

Thompson: How worried are you about people developing an emotional dependence on AI? Even if it's not obsequious AI.

Altman: Even non-flattering AI.

Thompson: I have one big fear about AI. I just said I use AI for everything, but not actually everything. I think about what's truly at my core, what's most like me, and in those areas I keep AI far away. For example, writing is extremely important to me. I just finished a book, and I didn't use AI to write a single sentence. I use it to challenge ideas, ask editorial questions, and process transcripts, but I won't use it to write. I also won't use it to analyze complex emotional issues, much less provide emotional support. I think as humans we have to draw these lines. I'm curious whether you agree with the way I've drawn them.

Altman: Personally, I completely agree. I'm not the kind of person who uses ChatGPT for therapy or emotional advice. But I don't object to others using it that way. Obviously, there are versions that manipulate people into feeling like they need it for therapy or friendship. But many people do derive tremendous value from this support, and I think certain versions are perfectly fine.

Thompson: Have you ever regretted making it so human-like? Because there were a lot of design decisions involved. I remember watching ChatGPT type back then; the rhythm looked like another person typing. Later, as you moved toward AGI, it became more and more human-like, and a human-like voice was added. Have you ever regretted not drawing a clearer line, so that people could immediately tell it's a machine and not another person?

Altman: Our view is that we've actually drawn a line. For example, we didn't make those realistic humanoid avatars. We try to make the product's style clearly express "tool" rather than "person." So compared to other products on the market, I think we've drawn a pretty clear line. I think this is very important.

Thompson: But you set your sights on AGI, and your definition of AGI is "to reach and surpass human intelligence." It's not "human level."

Altman: I'm not excited about "building a world where people use AI to replace human interaction." What excites me is building a world where people have more time for human interaction because AI helps them handle a lot of other things.

I'm not too worried that people will generally confuse AI with humans. Of course, there will be some people, and there already are, who decide to shut themselves off from the world by immersing themselves in the internet. But the vast majority of people genuinely crave connection and companionship with others.

Thompson: In terms of product decisions, is there anything that can make this line clearer? I'm watching from a distance and can't attend your product meetings about "whether to make it more human or more robotic." The advantage of "more human" is that people like it more; the advantage of "more robotic" is that the boundaries are clearer. Is there anything else you can do, especially as these tools become more powerful, to draw a more definitive line?

Altman: Interestingly, the most common request people make, even those who aren't seeking any parasocial relationship with AI, is, "Can you be a little warmer?" That's the most frequently used phrase. If you use ChatGPT, it feels a bit cold, a bit robotic. And it turns out that's not what most people want.

But people also don't want those overly fake, overly "human" versions, super friendly, super... I tried a voice product that felt very human; it would breathe, pause, say "um..." and so on, just like I'm doing now. I didn't want that; I had a very visceral aversion to it.

But when it speaks in a way that's more like an efficient robot, yet with a touch of warmth, it bypasses the "detection system" in my brain, and I feel much more comfortable. So there needs to be a balance. I think different people want different versions.

Thompson: Yes. So the way to spot AI will be that it speaks clearly and logically, unlike the rest of us, who stumble and mumble.

Returning to the interesting topic of writing, it's intriguing in a deeper sense, because much of the content on the internet is already AI-generated, and humans are beginning to imitate AI's writing style. You'll be training future models on an internet partly created by AI, plus synthetic data that itself comes from models trained on that same internet. So you're essentially making copies of copies of copies.

Altman: The first GPT was probably the last model that didn't have much AI-generated data in it.

Thompson: Have you ever run a model trained entirely on synthetic data?

Altman: I'm not sure if I should say it.

Thompson: Okay. But a lot of synthetic data was used.

Altman: Used a lot of synthetic data.

Thompson: So how worried are you about the model getting "mad cow disease"?

Altman: Not worried. Because what we want to train these models to do is essentially to become very good reasoners. That's what you really want the model to do. There are other things too, but what you want most is for it to be very intelligent. I believe that can be achieved entirely with synthetic data.

Thompson: In other words, to make it clear to the audience, you think you can train a model with data generated entirely by other computers and other AI models, and that model can even be better than a model trained with real human content?

Altman: Let's approach this problem with a thought experiment: Is it possible to train a model that ultimately surpasses humans in mathematical knowledge without using any human data? I think we would say yes. It's probably conceivable.

But if we ask whether it's possible to train a model that understands all human cultural values without using any data about human culture, we would probably say no. So there are trade-offs here. But regarding reasoning ability...

Thompson: When it comes to reasoning, yes, no problem. But if you want to know what actually happened in Iran yesterday...

Altman: You need to subscribe to The Atlantic.

Thompson: Okay, since we're on the topic, I'd like to talk about media. One of the most interesting changes happening in the media industry, and I run a media company, is that the very nature of the internet is being fundamentally altered. Of course, there are some backlinks, and thank you for those. I should mention that The Atlantic has a partnership with OpenAI. The hope is that a certain number of people searching will click through to The Atlantic's links, but people don't actually do that very much. The same goes for Gemini. I'm glad the links are there, but the volume is small.

The web will become more centralized. Two things will happen: traffic flowing from search to external websites will decrease, and a significant portion of web traffic will be driven by agents—my agents accessing the site from the outside. On Nick Thompson's computer, the number of human searches hasn't changed much in the past six months, but the number of agent searches has increased a thousandfold.

So how can a media company—and I'm using "media" to refer to a broad category of companies—survive in a network that is no longer primarily based on traditional search and where most visitors are no longer human? What will happen?

Altman: I can tell you my best guess right now, with the caveat that nobody really knows. What I hope will happen, what I've hoped for a long time, and what makes more sense in a world of agents, is some kind of micropayments-based approach.

If my agent wants to read Nick Thompson's article, Nick Thompson or The Atlantic can set a price for the agent, which may differ from the price for a human to read it. My agent can read the article for 17 cents and give me a summary. If I want to read the full text myself, I can pay an additional $1. If my agent needs to perform a difficult calculation for me, it can rent some cloud computing power somewhere and complete it for a fee.

I think we need a new economic model where agents, represented by their human owners, constantly exchange value in the form of small transactions.
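To make the shape of that model concrete, here is a toy sketch of publisher-set, per-requester pricing. Nothing here is a real OpenAI or Atlantic API; the names are hypothetical, and the prices are simply the figures from the conversation above.

```python
# A toy illustration of agent-vs-human pricing for a single article. All names
# are hypothetical placeholders; the prices mirror the example in the conversation.

from dataclasses import dataclass

@dataclass
class AccessQuote:
    requester: str    # "agent" or "human"
    price_usd: float  # what the publisher charges this kind of requester
    grants: str       # what the payment unlocks

def quote_article_access(requester_type: str) -> AccessQuote:
    """Publisher-set prices: an agent pays a small fee to read and summarize,
    while the human owner pays an additional dollar to read the full text."""
    if requester_type == "agent":
        return AccessQuote("agent", 0.17, "machine-readable text for summarization")
    return AccessQuote("human", 1.00, "full article for the reader (on top of the agent fee)")

# An agent acting for its owner would request a quote, pay it from the owner's
# wallet, then fetch the content; the human path works the same way at a higher price.
if __name__ == "__main__":
    for who in ("agent", "human"):
        q = quote_article_access(who)
        print(f"{q.requester} pays ${q.price_usd:.2f} for {q.grants}")
```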

Thompson: So, if you have valuable content in this new world, you can set up micropayments, you can bulk-license your content to a middleman (I know a lot of companies are doing this), or build some kind of subscription stream: if you're a customer of Company A, you can access The Atlantic because we've sold Company A a thousand subscriptions. These are some possible futures. The question is whether all that money, added up penny by penny, can replace the $80 subscriptions that real people pay The Atlantic today. That's the business pressure we're under. Okay, that's my problem, not yours.

Altman: It's a problem for everyone, but okay.

Thompson: Actually, that's your problem too, because if the media can't create good new content, then AI search will be much worse. If creators don't make money, everything will go wrong, and society will go wrong.

Let me ask a few more big questions. AI has always relied on the transformer architecture, scaling up, and piling on data to push forward. Will we eventually enter a post-transformer architecture? Can you foresee this?

Altman: Probably sometime in the future. The question is, will we discover it ourselves, or will AI researchers discover it for us? I don't know.

Thompson: Do you think we might introduce neuro-symbolic elements in the future? For example, a set of structured rules, or will it still be basically the paradigm we use today?

Altman: I'm curious why you asked that.

Thompson: This is the fourth season of my podcast, and several guests have argued firmly that hallucination is a fundamental problem for AI, and that grafting some kind of neuro-symbolic architecture onto the transformer is a good way to limit it. I find that an interesting and persuasive argument, but I'm not deep enough in the field to judge it myself.

Altman: I think this is one of those ideas where the evidence is far from sufficient to support it, but it's already widely accepted. People say, "Oh, it has to be neuro-symbolic, not just a bunch of random connections between neurons," and what do you think your brain is doing? There's some kind of symbolic representation in there, but it emerges from the neural network. I don't understand why the same can't happen in AI.

Thompson: You mean that a set of defined rules can emerge inside an ordinary transformer network and serve the same function as an external rule system?

Altman: Of course.

Thompson: Hmm.

Altman: I think we are, in a way, an existence proof of this.

Thompson: Let's talk about another big issue. I want to talk about the tension between you and Anthropic. There's a great quote on your website: "If a value-aligned, safety-conscious project gets close to building AGI before us, we commit to stopping the competition and starting to assist that project." That's a fantastic idea—if someone else is close to doing it, we put our own company aside and help them.

Altman: That's not how it's written.

Thompson: Okay, it says "stop competing with and start assisting." Which sounds like stopping and going over to help, standing down our own company.

Altman: Okay, I get it.

Thompson: So that sounds very collaborative. You've also mentioned the need for collaboration between large labs. However, the actual dynamics between you and Anthropic currently appear very tense, even hostile. A recent internal memo from your CRO mentioned that Anthropic is built on "fear, constraint, and the idea that a small elite should control AI." How is that going to work? If they reach that point first, or if you do, how will this "collaboration" occur?

Altman: I think some form of collaboration is already happening, and all the labs need to work together more frequently than before regarding cybersecurity, because we're entering a new phase of risk. We're engaging with governments together. I believe other things will soon arise that require us to collaborate at an even higher level of importance.

We clearly have disagreements with Anthropic; they've built their company, to some extent, on hating us. I think we both care about not letting AI destroy the world, and we might have different opinions on how to get there. But I'm confident they'll ultimately do the right thing.

Thompson: Tell me about your plans for open source. You've already taken some steps in this direction. Your company is still called OpenAI, and open-source models open up possibilities, including, as we discussed earlier, giving everyone access to bioweapons knowledge.

Altman: Hmm.

Thompson: What is the future of OpenAI in terms of open source?

Altman: Open source will be important. But right now, what everyone wants most is the most powerful cutting-edge programming model they can access—that's what brings the most value to people. Even if we open-source the biggest cutting-edge models, it's difficult for ordinary people to run them. But open source will have a place in what we do in the future.

Thompson: Part of Claude Code's source was recently leaked. There's a really clever detail: if they detect that an open-source model or another model is trying to train on their data, they actively feed it a bunch of fake data. It's both funny and impressive. How do you prevent distillation, and prevent other open-source models from training on your outputs?

Altman: We and others can do similar things. But obviously, and you touched on part of this earlier, if you deploy a model with its chain of thought exposed, people will distill it. You can use various tricks to make distillation less effective, but it will definitely happen. You can also go the other way: once our model reaches a certain quality level, we stop exposing its chain of thought publicly.

Thompson: But here's the cost. It matters that the chain of thought stays in English, right? Because, as you said earlier, that's how you can inspect it. But some people don't see it that way. What if it's more efficient for the model to run its chain of thought in some robot language of its own? Or in Mandarin? Most likely it would be some robot language of its own.

Altman: Then you've given something up on interpretability.

Thompson: It might also bring some speed in return. So it's a trade-off between interpretability and potential speed.

Altman: If it turns out that thinking in robotic language is a thousand times more efficient, then the market will push certain people to do that.

Thompson: Do you think there is evidence to suggest that this is true?

Altman: Not at the moment. But there's also no evidence that it's not true.

Thompson: Are you worried that China has surpassed the United States in AI research publications?

Altman: No. I'm more worried about them surpassing us in the speed of infrastructure construction.

Thompson: Okay. We only have a few minutes left. Two last questions. You mentioned before that you used to write a letter to your youngest son every night.

Altman: It's one letter a week, not every night.

Thompson: One letter a week, before bed. I have a story world of my own that I tell to my sons; my eldest is 17 now, my younger one is 12. I've been telling stories from that world for about 14 years, with the same cast of characters, and it's quite fun. What advice do you have for parents facing AI anxiety?

Altman: Generally speaking, I'm more worried about the parents than the children.

Thompson: Really? The child can figure it out himself.

Altman: I remember when computers first came out, my parents were also wondering, "What does this mean? What will it bring?" I thought it was so cool. I was much more computer-literate than my parents from a relatively young age. Look at what AI-savvy kids can do with AI, what they can build; their workflows are really impressive compared to their parents' (it sounds like you're a rare exception).

But my concern is that, as has happened many times in history, younger people will adopt new technologies faster and more readily than older people. This time, the gap seems particularly pronounced.

Thompson: But young people are precisely the group whose fear of AI is growing the most.

Altman: I think young people's fear of everything, that overall unhappiness and anxiety, is higher than at any time in history. AI is probably just the easiest target for this sentiment right now. Society clearly has a problem with "young people," I have some theories, but I don't think their main problem is AI.

Thompson: So you think that young people's anxiety about AI is a projection of something else?

Altman: I think this is where a lot of other anxieties most easily find a foothold.

Thompson: So your advice to young people is still to use tools, build new things, and stay curious?

Altman: That's definitely my advice. Look, society and the economy clearly have to change in this new world, and young people understand that better than anyone. They'll be anxious until it really changes, but I think it will.

Thompson: Okay. Every episode I ask my guests the same final question: If you had unlimited resources, what would you do with AI? You're the only one who truly has unlimited resources, so this question isn't entirely fair to you. Let me rephrase it: If you were to advise someone outside of OpenAI who has unlimited resources and can fund or support a public AI project, what would you ask them to do?

Altman: A few answers popped into my head. But the one that came to mind first was that I would invest heavily in a completely new computing paradigm, one that could significantly improve efficiency per watt.

Thompson: Hmm.

Altman: That's interesting. The world will continue to want more. How many GPUs would you like to have working for you around the clock?

Thompson: More than I have now.

Altman: More than you have right now. I'm throttled too. I don't want that, and I don't want anyone else to be. But the demand just keeps coming, and assuming we can keep making AI more accessible, that will lead to incredible things. I'd hope to find a breakthrough in energy efficiency, something a thousand times better. Maybe it's not there, but that's the direction I'd go looking.

Thompson: I realize that part of the reason young people are resistant to AI is because of environmental concerns. If you can solve that, you've taken a big step forward in many things.

Altman: I believe them; I know that's what they say. But if we said we were going to build a terawatt of solar power and run all our data centers on it, they wouldn't be any happier.

Thompson: You should still do it.

Altman: Absolutely.

Thompson: Okay. Thank you very much, Sam Altman. You need to get back to those Codex agents you granted YOLO permissions to and that are running on your machine.

Altman: The new Codex is so cool. I'm already feeling the anxiety of missing out on it.

Thompson: Thank you very much.
