Editor’s Note:
In recent years, the world has puzzled over the ambiguous relationship between Silicon Valley and Trump. People offered all kinds of reasons why Musk supported Trump, and when the relationship between the two broke down, they began finding reasons to claim they had predicted the split all along. This podcast gives us a new perspective on Silicon Valley's choices in the Trump era: How did Peter Thiel deeply influence Musk? And what is the deepest anxiety of the entire Silicon Valley establishment, "technological stagnation"?
The host of this conversation, Ross Douthat, is a New York Times columnist, a well-known conservative author, and the author of many books on religion, politics, and society. In this in-depth conversation about AI, politics, and faith, he described Peter Thiel as one of the most influential right-wing intellectuals in the world in the past two decades.
Peter Thiel once again made his consistent judgment: since the 1970s, technological progress has been stalling, social structures have become rigid, and humans may be entering a soft totalitarianism in the name of "order and security." He talked about why he supported Trump, why he had cautious expectations for AI, and why he was wary of the technological totalitarianism that environmentalism and "global governance" might lead to. He believes that the Antichrist may not necessarily come from a technological explosion, but may come from a compromise with "order and security."
This conversation has given Blockbeats great inspiration, and we hope to share it with you, the reader, as well. The following is the original content (reorganized for easier reading and understanding):
Ross Douthat: (Opening remarks) Is Silicon Valley overambitious? What should we fear more: the end of the world or stagnation? Why is one of the world's most successful investors worried about the Antichrist (false messiah)?
My guest today is the co-founder of PayPal and Palantir, and an early investor in the political careers of Donald Trump and JD Vance. Peter Thiel is the original tech-right power player, known for funding all sorts of conservative and even contrarian ideas. But this time we're going to talk about his own ideas, because despite his one minor "disadvantage" of being a billionaire (not traditionally the archetypal profile of a thinker), there is good reason to think he's one of the most influential right-wing intellectuals of the past two decades.
Peter Thiel, welcome to Interesting Times.
Peter Thiel: Thank you for having me.
Technological stagnation: Why we no longer have a sense of the future
Ross Douthat: I want to start by taking you back to about 13 or 14 years ago. At that time, you wrote an article for the conservative magazine National Review called "The End of the Future." The basic argument of the article was that the modern world, which appears to be dynamic and fast-paced and ever-changing, is actually not as dynamic as people think it is. We have entered an era of technological stagnation. Digital life has indeed brought some breakthroughs, but it has not completely changed the world in the way that people had hoped. In general, we are stuck in the same place.
Peter Thiel: Yes.
Ross Douthat: You weren’t the only one who made that point, but it carried extra weight when you said it—after all, you were an insider in Silicon Valley, personally involved in and made a fortune from the Internet revolution. So I’m curious: In 2025, do you still think that judgment holds true?
Peter Thiel: Yes, I still generally agree with the idea of "technological stagnation". This argument has never been absolute. We are not saying that the whole world has completely stagnated, but that to some extent, the pace of development has indeed slowed down. It has not returned to zero, but from 1750 to 1970, more than two hundred years, it was an era of continuous acceleration: ships were faster, railways were faster, cars were faster, and airplanes were faster. This trend reached its peak in the Concorde and the Apollo moon landing mission. But since then, development has begun to slow down at all levels.
I've always seen the world of bits as an exception, so we saw the development of computers, software, the Internet, and mobile Internet. Then, in the last ten to fifteen years, there was the cryptocurrency and artificial intelligence revolution. I think that was a really big thing in some sense. But the question is: Is it really enough to get us out of that general sense of stagnation?
In the spirit of Back to the Future, you could start with an epistemological question: How do we know whether we are stagnating or accelerating? One of the key features of late modernity is that human knowledge is highly specialized. For example, unless you spend half your life studying string theory, you can't tell whether physics has made progress. What about quantum computing? What about cancer research, biotechnology, and all these vertical fields? And further, how important is progress in cancer treatment compared to a breakthrough in string theory? You have to "weight" these different fields to assess overall technological progress.
In theory, this is an extremely difficult question to define. And the reason why it is difficult to answer is itself questionable: nowadays, more and more fields of knowledge are controlled by a small number of "expert circles", and these people are often only responsible to people in the circle and verify each other. This closed nature itself is enough to make people question the so-called technological progress.
So yes, I think overall we're still living in fairly stagnant times, but that doesn't mean everything has completely ground to a halt.
Ross Douthat: You mentioned Back to the Future. We just took our kids to see the original version of the movie, the one starring Michael J. Fox.
Peter Thiel: The plot runs from 1955 to 1985, a span of 30 years. The timeline of Back to the Future 2 runs from 1985 to 2015, and 2015 is now already ten years in the past. Flying cars appeared in the movie; that imagined 2015 was completely different from the reality of 1985.
Ross Douthat: Back to the Future 2 does have a Donald Trump-like character in the powerful Biff Tannen, so it's somewhat prophetic in a way. But what's even more striking is how different the physical environment of that future world looks.
So, one of the most convincing arguments I've heard about technological stagnation is this: if you send a person from one era to another in a time machine, he'll find himself in a completely different world.
For example, traveling from 1860 to——
Peter Thiel: Or, from 1890 to 1970, that's about 80 years of your life. Something like that.
Ross Douthat: But for my kids, even kids living in 2025, when they look back at 1985, they're like, well, the cars were a little different, people didn't have cell phones, but overall, the world looked pretty similar.
This is certainly not a statistical judgment, but——
Peter Thiel: This is a very intuitive common sense judgment.
Ross Douthat: This is a common sense understanding. But what kind of evidence do you need to believe that we are in a take-off phase? Is it simply economic growth? Or is it productivity improvement? What specific indicators do you usually pay attention to that can measure "stagnation" and "vitality"?
Peter Thiel: Of course, there is an economic indicator: How is your standard of living compared to your parents? If you are a 30-year-old millennial, how are you doing compared to your 30-year-old baby boomer parents? What were their circumstances like?
There are also cognitive questions: How many real breakthroughs have we achieved? How can we quantify these achievements? What are the rewards for investing in research?
There are indeed diminishing returns to being in science, or academia more generally. Maybe that’s why it often feels like a cold, even Malthusian institution: you have to keep putting more and more in to get the same output. At some point, people give up, and the system collapses.
The cost of stagnation: when society loses its upward path
Ross Douthat: Let's go back to that. Why do we pursue growth and dynamism? Because, as you mentioned in your discussion, there was a cultural shift in the Western world in the 1970s, which is exactly when you think society was starting to slow down and stagnate. People started to get anxious about the costs of growth, and especially about the environmental costs.
The core of this argument is: We are already rich enough. If we continue to strive to become richer, the earth may not be able to bear it, and ecological degradation at all levels will ensue. Therefore, we should be satisfied with the current situation. So, what is the problem with this argument?
Peter Thiel: I think there are deeper reasons behind this stagnation. When facing history, people usually ask three questions: First, what happened? Second, what should we do about it? But there is another question in the middle that is often overlooked: Why did it happen?
People are running out of new ideas. I think to some extent the system itself has degenerated and become more risk-averse; some of these cultural shifts we can chart. But at the same time, I think people do have some very legitimate concerns about the future: If we continue to accelerate technological progress at an accelerating rate, does that mean we are also accelerating toward environmental disaster, nuclear disaster, or something like that?
But I think if we don't find a path back to the future, I do think society will... I don't know, but it will start to break down and stop functioning. I define the middle class as people who expect their children to be better off than they were. And once that expectation collapses, we no longer have a truly middle-class society. Maybe there are systems, like feudalism, where everything is stagnant and can't change, or there may be a path to a completely different social structure. But that's not the logic of the West, at least not the trajectory that the United States followed in its first 200 years.
Ross Douthat: Do you think that ordinary people will eventually not accept this stagnation? Will they choose to rebel and in the process, bring down the order around them?
Peter Thiel: They might rebel. Or maybe our institutions themselves are starting to fail -- because the premise of these institutions is continuous growth.
Ross Douthat: Our fiscal budget, of course, is based on growth expectations.
Peter Thiel: Yes. Look at Reagan and Obama, for example. Reagan represented "consumer capitalism," which is a contradiction in terms: under it, the capitalist gets rich not by saving but by borrowing. Obama represented "low-tax socialism," which is just as contradictory as "consumer capitalism."
I certainly prefer low-tax socialism to high-tax socialism, but I worry that it's not sustainable. At some point, taxes will either have to go up or the "socialist" policies will have to be abandoned. It's inherently very, very unstable. This is exactly why there's a lack of optimism: people don't think we've arrived at a stable, "Greta-like" future. Maybe it could work, but we're clearly not there yet.
Ross Douthat: Since her name is likely to come up again in this conversation: you're referring to Greta Thunberg, the activist widely known for her climate-change protests. For you, I think she represents a vision of an anti-growth, essentially authoritarian, environmentalist-led future.
Peter Thiel: Yes. But we’re not there yet. Not yet. If society did come to a standstill, it would be a completely different society—
Ross Douthat: If you actually lived in a degrowth, Scandinavian village.
Peter Thiel: I'm not sure if it will be like North Korea, but it will certainly be very repressive.
Ross Douthat: There is one thing that has always struck me: when a society feels stagnant and falls into a state of "decadence" (a word I often use to describe this situation), people tend to begin longing for a crisis, for the arrival of a turning point that gives them the chance to change society's direction entirely. I tend to think that in a wealthy society, once people's wealth reaches a certain level, they become too comfortable and too risk-averse, and without a crisis it is hard to escape "decadence" and move toward some new possibility.
So the original example for me was this: After 9/11, there was a general mentality among conservatives in the foreign policy community that we had been in a state of decadence and stagnation, and now it was time to wake up and launch a new crusade to reshape the world. Obviously, that ended very badly. But similar sentiments...
Peter Thiel: But it was Bush who told everyone to go shopping right away.
Ross Douthat: So that’s not really anti-decadence?
Peter Thiel: That's roughly true. There are some neoconservative foreign policy circles where people try to get out of decadence through "live action role-playing" (LARPing). But the mainstream is still the Bush administration's faction, telling everyone "It's time to go shopping."
Ross Douthat: So how much risk should people be willing to take in order to escape decadence? There does seem to be a danger that those who want to fight decadence often need to actively embrace great uncertainty. They have to stand up and say: Look, we have a nice, stable, comfortable society now, but guess what? We may need a war, a crisis, or even a complete reorganization of government. They have to face the danger and even actively participate in it.
Peter Thiel: Well, I'm not sure I can give a precise answer, but my directional judgment is that we should take more risks and do more things. The scope of our actions should be far greater than what we are doing now.
I can go through these verticals one by one. Take biotech: diseases like dementia and Alzheimer's, where we have made almost no progress in the past 40 or 50 years. People have been stuck on the beta-amyloid path, which obviously has not worked. Now it's more like an absurd profit game, with practitioners constantly reinforcing and endorsing one another. So yes, this is an area where we do need to take more responsibility and take greater risks.
Ross Douthat: I want to linger on this example for a moment to make this discussion more concrete. My question is: What does it mean when we say "we need to take more risks in anti-aging research"? Does it mean that the FDA should step aside and allow anyone who has a new Alzheimer's treatment to sell it directly on the open market? What does "taking risks" look like in the medical field?
Peter Thiel: Yes, you do need to take more risks. If you have a fatal disease, you might be willing to try more radical approaches. Researchers should be able to take more risks.
Culturally, I have in mind an image of “early modernity” — a time when people believed we would eventually cure disease and even radically extend lifespan. Immortality was a big goal of early modernity, from Francis Bacon to Condorcet. Maybe it was anti-Christian, or maybe it was a continuation of Christian thought — either way, it was competitive: if Christianity promised you bodily resurrection, then science had to promise the same thing if it was to “win.”
I remember when we were running PayPal in 1999 or 2000, one of my co-founders, Luke Nosek, was obsessed with Alcor and cryonics, and the idea that people should freeze themselves. At one point, we even took the entire company to a "freezer party." You know a "Tupperware party"? It's one of those parties where they sell plastic containers. At a "freezer party," they didn't sell containers...
Ross Douthat: Is it just the head that freezes? Or is it the whole body that freezes?
Peter Thiel: You can choose to freeze your whole body or just your upper body.
Ross Douthat: The "freezing only the upper body" option is cheaper.
Peter Thiel: What was quite disturbing was that there was a problem with the printer and the freezing protocol could not be printed.
Ross Douthat: Once again, this is a reflection of technological stagnation, right?
Peter Thiel: But looking back, that was a sign of decline. In 1999, this idea was not mainstream, but there was still a small group of baby boomers who believed that they could live forever. And that was probably the last generation that still had this belief. So even though I am always critical of the baby boomers, maybe - even in this marginal, narcissistic fantasy - we did lose something. At least there were still some people who believed that science would eventually cure all their diseases. Now, there are no millennials who believe that anymore.
Political betting: Why support Trump and populism?
Ross Douthat: But I think there are still some people who believe in another form of immortality. I think people's fascination with AI is, in part, related to the idea of transcending human limitations. I'm going to ask you about this later, but I want to talk about politics first. When you first proposed the idea of "stagnation", you focused mainly on the technological and economic aspects, and one thing that struck me was that this idea can actually be applied to a very wide range of fields. When you wrote that article, you were also interested in seasteading - that is, wanting to build a new political community outside the rigid Western system. But you made a shift in the 2010s.
You were one of the few Silicon Valley celebrities who publicly supported Donald Trump early on, and perhaps the only one. You also supported a number of carefully selected Republican Senate candidates, one of whom is now the Vice President of the United States. As an outsider, after reading your arguments about "social decadence", I understood that you were actually doing a set of "political venture capital". You were betting on a group of disruptors who had the potential to disrupt the status quo, and thought that taking such a risk was worth it. Did you think so at the time?
Peter Thiel: Of course, there were many different levels at that time. On the one hand, there was the hope that we could redirect the Titanic heading for the iceberg, or whatever metaphor you want to use, to really redirect society.
Ross Douthat: Through political change.
Peter Thiel: Maybe a more narrow wish is that we can at least have a conversation around these issues. So when Trump says, "Make America Great Again" — is that a positive, optimistic, ambitious agenda? Or is it a deeply pessimistic assessment of the status quo that we are no longer a great country?
I didn’t have high hopes that Trump would actually bring about positive change. But I felt that for the first time in 100 years, there was a Republican who wasn’t spouting syrupy, hollow Bush-style cliches. It didn’t mean society was progressing, but at least we could start a real conversation. Looking back now, that thought was an absurd fantasy.
I actually had two thoughts in 2016 — thoughts that were just on the edge of my consciousness — but I didn’t connect them at the time. The first was: If Trump loses, no one will be mad at me for supporting him. The second was: I think he has a 50% chance of winning. And there was another hidden thought in my mind…
Ross Douthat: Why wouldn't anyone be mad at you if he lost?
Peter Thiel: It was just a weird thing, and it really didn't matter. But I thought he had a 50-50 chance of winning, because the problems were so severe, and the stagnation was so frustrating. And the reality is, people aren't ready to confront it. Maybe we're at that point now, in 2025, 10 years after Trump, that we can finally have this conversation. Of course, Ross, you're not some leftist zombie--
Ross Douthat: I've been labeled all sorts of things, Peter.
Peter Thiel: But as long as it makes a little progress, I'm willing to take it.
Ross Douthat: From your perspective, there are two levels. The first level is that this society needs to be broken and take risks; Trump himself is a kind of break and risk. The second level is that Trump does dare to speak some real words about the decline of the United States.
So do you think, as an investor, a venture capitalist, that you gained anything from Trump's first term?
Peter Thiel: Um…
Ross Douthat: What do you think are the anti-decadence or anti-stagnation measures during Trump's first term? If there are any, of course, you may think there are none at all.
Peter Thiel: I think the process has been longer and slower than I expected. But at least we've reached a point now where a lot of people are starting to realize that something is wrong. And this is not the conversation I was able to spark in 2012-2014. I debated these issues with Eric Schmidt (former Google CEO) in 2012, Marc Andreessen (founder of A16Z) in 2013, and Jeff Bezos (founder of Amazon) in 2014.
My position at the time was: "We are facing a stagnation problem", while the attitudes of the three of them were some version of "Everything is going well". But I think at least these three have made some corrections and adjustments to varying degrees. The entire Silicon Valley has also changed.
Ross Douthat: However, Silicon Valley has gone beyond just "adjusting".
Peter Thiel: Yes, in terms of the stagnation issue.
Ross Douthat: Yes. But by 2024, a considerable number of people in Silicon Valley finally supported Trump. The most famous of them, of course, was Musk.
Peter Thiel: Exactly. And the way I understand it, it's deeply connected to this question of stagnation. These things are always very complicated, but I tend to think of Zuckerberg this way -- and I don't want to speak for everyone -- like Zuckerberg, or Facebook, or Meta. I don't think he actually has a strong ideological position. He doesn't think about these issues that deeply. The default position is liberalism, and when liberalism doesn't work, what do you do? For many years, the answer has been: do more. If something doesn't work, double down. Give it another shot, throw in another few hundred million dollars, and get fully "woke," and then everyone starts to hate you.
At some point, people think: OK, maybe this isn't going to work.
Ross Douthat: So they pivoted.
Peter Thiel: But that doesn't mean they support Trump.
Ross Douthat: I do not support Trump, but in both public and private discussions there was a real feeling that, in the context of 2024, whether or not you, Peter, were the lone supporter as in 2016, the current "Trumpism" or "populism" might indeed become a driving force for technological innovation, economic vitality, and so on.
Peter Thiel: That's really, really optimistic.
Ross Douthat: I know you're pessimistic. But people --
Peter Thiel: When you express it in an optimistic way, you're actually saying these people are going to be disappointed, they're destined to fail, something like that.
Ross Douthat: I mean, people did express a lot of optimism, that's what I mean. Elon Musk, despite his apocalyptic anxiety about the budget deficit that might lead to the demise of all humanity, when he came into the administration, the people around him basically said, "We are in partnership with the Trump administration to achieve technological greatness." I think they were indeed optimistic.
You are more of a pessimist, or a realist. I want to ask about your own judgment - not theirs. Do you think Trump 2.0-style populism can be a carrier of technological vitality?
Peter Thiel: It's still our best bet for now. Can Harvard cure Alzheimer's by continuing to do the same things that haven't worked for the past 50 years?
Ross Douthat: It sounds more like, “It can’t get any worse, so let’s try to disrupt.” But the criticism of the current populism is that Silicon Valley has chosen to ally itself with populists who don’t care about science. They don’t want to invest in science. They just want to cut off funding to Harvard because they hate it. As a result, you don’t end up with the kind of future-oriented investment that Silicon Valley originally wanted. Isn’t that a valid criticism?
Peter Thiel: To some extent. But we have to go back to a more fundamental question: How well does the scientific system in our context work? The New Dealers, despite their many problems, really pushed science hard. You gave grants, you gave people money, you pushed for scale. And now, if another "Einstein" wrote a letter to the White House, it would probably get lost in the mail room. Things like the Manhattan Project are unthinkable.
We still call some things "moonshots," as Biden did when he talked about cancer research. But the "moonshot" of the '60s was a real trip to the moon. Now, "moonshot" often means something completely fictional, destined not to happen. When you hear "this thing needs a moonshot," it no longer means we should mount another Apollo program; what it really means is that this thing is never going to happen.
Ross Douthat: It sounds like you're still in this position: For you, maybe different from others in Silicon Valley, the value of populism is to expose illusions and remove the fig leaf. We are not at the stage where you expect the Trump administration to engage in the "Manhattan Project" and the "Moonshot Project". It's more like - populism helps us see that everything is fake.
Peter Thiel: You have to try to do both. These two things are really intertwined.
Take the deregulation of nuclear energy: someday we'll start building new nuclear power plants again, with better designs, maybe even fusion reactors. So yes, part of it is a process of deregulation. But then you have to start the actual construction - that's how it works. In a sense, you have to clear the field first, and then you can start building, maybe...
Ross Douthat: But you personally no longer fund politicians?
Peter Thiel: I'm split on this. I think it's extremely important, but also extremely toxic. So I'm always struggling with whether I should do it or not...
Ross Douthat: What does "extremely toxic" mean to you personally?
Peter Thiel: It's toxic to everybody involved. Because it's a zero-sum game, it's crazy. In a way...
Ross Douthat: Is it because everyone hates you and ties you to Trump? What does it do to you personally?
Peter Thiel: The toxicity is that it happens in a zero-sum world. You feel like the stakes are incredibly high.
Ross Douthat: Did you also gain some enemies because of this that you didn't have before?
Peter Thiel: It's harmful to everyone involved in one way or another. It also ties into a "back to the future" political proposition. You can't - this is one of the things I discussed with Elon Musk in 2024. We talked a lot. I also told him about the seasteading vision: I said if Trump didn't win, I wanted to leave the United States. Musk replied: There's nowhere to go. We have nowhere to go.
And then, as always, you think of the comeback only afterwards. About two hours after we finished eating, on my way home, I realized: wow, Elon, you no longer believe in "going to Mars". 2024 was the year Elon stopped believing in Mars - not as a science and technology project, but as a political project. Mars was originally a political project, a vision of an alternative social model. In 2024, Elon came to believe that even if you go to Mars, the socialist US government and the woke AIs will follow you all the way there.
We facilitated a meeting between Elon and Demis Hassabis, CEO of DeepMind.
Ross Douthat: Demis leads an artificial intelligence company.
Peter Thiel: Yes. The core of that conversation was Demis saying to Elon: I’m working on the most important project in the world, I’m building a superhuman AI.
Elon responded: I'm also working on the most important project in the world, I'm making humans an interstellar species. Then Demis said: You know, my AI will go to Mars with you. Elon was silent after hearing this. But in my version of this history, it took several years for this idea to really hit him. He didn't really deal with it until 2024.
Ross Douthat: But that doesn't mean he no longer believes in Mars itself. It just means he thinks that to "go to Mars," he must first win the battle against the budget deficit and "woke culture."
Peter Thiel: Yes, but what does Mars mean?
Ross Douthat: What does Mars mean?
Peter Thiel: Is it just a scientific project? Or is it a libertarian paradise, as Heinlein described it, where the moon is used as a test bed for an ideal society?
Ross Douthat: A vision of a new society populated by many of the descendants of... Elon Musk.
Peter Thiel: Well, I'm not sure the idea was ever fleshed out to that degree, but once you really start to flesh it out, you realize that Mars should not just be a scientific project; it should be a political project. And once you flesh it out, you have to start thinking seriously: woke AI will go with you, and socialist government will go with you. Then you may not be able to just "go to Mars"; you have to think of other ways.
The Light and Shadow of AI: An Engine of Growth or an Amplifier of Mediocrity?
Ross Douthat: So, artificial intelligence (AI), at least in this period of stagnation, seems to be an exception - it's one of the few areas of really significant progress, and progress that many people did not expect. It's also an exception in the political arena we just discussed. It seems to me that the Trump administration has in some ways given AI investors what they wanted here: on the one hand, stepping back and not intervening, and on the other, promoting public-private partnerships. So AI is both the frontier of technological progress and a point at which the government is re-entering.
You are also an investor in the AI field. What do you think you are investing in?
Peter Thiel: This is a long story with many layers. Let's start with a question: How important do I think AI is? My "clumsy" answer is: it's certainly not an empty-hype "nothing burger," but it's also not a complete transformation of society as a whole. My current estimate is that it's roughly the same order of magnitude as the Internet of the late 1990s. I'm not sure it's enough to really end the long stagnation, but maybe it's enough to give birth to some great companies.
For example, the Internet once promoted GDP growth of about 1% per year for ten to fifteen years, and also led to a certain increase in productivity. So, my initial positioning of AI is roughly at this level.
This is the only growth engine we have right now. In a way, it’s a little unhealthy in its “all-or-nothing” nature. I hope we can make progress on multiple dimensions at the same time, such as advancing the Mars program, such as conquering dementia. But if AI is the only thing we have now, then I will accept it. Of course, it has risks, and there is no doubt that this technology is dangerous. But it also brings...
Ross Douthat: So you're skeptical of the "superintelligence cascade theory"? The idea is that once AI is successful enough, it will become so smart that it will drive breakthroughs in the physical world. So, humans may not be able to cure dementia or figure out how to build the perfect factory to make rockets, but AI can.
Once you cross a certain threshold, it will lead not only to digital progress but to progress on all sorts of other fronts. It sounds like you don't quite believe it, or you think it's unlikely?
Peter Thiel: Yeah, I'm not sure that's the gating factor.
Ross Douthat: What does "gating factor" mean? What do you mean by that?
Peter Thiel: This may be an ideology in Silicon Valley. It may be counterintuitive, but it may be more liberal than conservative. In Silicon Valley, people are extremely obsessed with IQ, and everything revolves around "smart people": if we have more smart people, we can create more great things.
But the counter-argument from an economic perspective is that in reality, the smarter people are, the more confused they are. They may not be more productive because they don’t know how to apply their intelligence, and our society doesn’t know how to accept them, so they have difficulty integrating into the mainstream. This means that the real problem may not be “intelligence” at all, but that there is something wrong with our social structure itself.
Ross Douthat: Is this a limitation of intelligence itself, or is it a problem with the personality type that superintelligence breeds?
I don’t really agree with the idea that “all problems can be solved by improving intelligence.” I discussed this with an AI accelerationist when I was doing a podcast. For example, if we improve intelligence to a certain level, Alzheimer’s disease can be overcome; if we improve intelligence, AI can design a process to create a billion robots overnight. My doubts about intelligence are that I think it has its limits after all.
Peter Thiel: Yes, that's really hard to prove. These kinds of things are always hard to disprove.
Ross Douthat: Until we actually have superintelligence.
Peter Thiel: But I do agree with your intuition. Because the reality is that we have a lot of very smart people, but a lot of things are still stuck, and the reason is somewhere else. So maybe the problem is simply unsolvable, which is the most pessimistic view. Maybe dementia is fundamentally untreatable, maybe death itself is an unsolvable problem.
Or maybe it's a problem of cultural structure. The problem is not about a smart individual, but how they are accepted by this society. Can we tolerate "deviant smart people"? Maybe you need such "unsociable" smart people to promote crazy experiments. But if AI is only "smart" in the traditional sense, and if we simply understand "wokeness" as "over-compliance" or "political correctness", that kind of intelligence may not bring about real breakthroughs.
Ross Douthat: So are you worried about a possible future where artificial intelligence itself becomes the representative of the "new stagnation"? It is highly intelligent and creative, but everything is within a framework, like Netflix's algorithm: a steady stream of "okay" movies, content that people are willing to watch but not like; a large number of mediocre ideas; marginalizing human labor without new breakthroughs. It changes the existing structure, but in a sense deepens stagnation. Is this the scenario you are worried about?
Peter Thiel: It's entirely possible. It's a risk. But I still come to the conclusion that we should try AI. The alternative is total stagnation.
Yes, it may bring about many situations that we can't imagine. For example, the combination of AI and military drones may be dangerous, dystopian, and disturbing, but it will eventually bring about some kind of change. But if you don't have AI at all, then nothing will really happen.
In fact, there have been similar debates about the Internet: did the Internet exacerbate conformity and make the whole society more "woke"? The reality is that it did not bring the explosion of ideas and diversity that liberals imagined in 1999. But if you ask me, I still think a world with the Internet is better than a world without it. And in my view the same is true for AI: it is better than "nothing," and "nothing" is its only alternative.
You see, by only discussing AI itself, we are silently admitting that apart from it, we are almost at a complete standstill.
Ross Douthat: But the AI field is clearly full of people whose expectations for AI are far more ambitious, transformative, and even utopian than what you have expressed. You mentioned earlier that modern society once promised humans radical life extension - and now such promises are disappearing. But it is clear that many people who are deeply involved in AI actually see it as a path to "transhumanism", a tool to transcend the shackles of the flesh - either to create a "heir race" or to achieve the fusion of human brain and machine.
Do you think these are fanciful ideas or high-concept fundraising pitches? Is it hype, is it delusion - or are you genuinely concerned about them?
Peter Thiel: Well, yes.
Ross Douthat: I guess you still want the human race to continue, right?
Peter Thiel: Uh-
Ross Douthat: You're hesitating.
Peter Thiel: I don’t know. I, I would…
Ross Douthat: That's a long hesitation!
Peter Thiel: There are so many issues hidden in this.
Ross Douthat: Then let me ask directly: Should humans continue to exist?
Peter Thiel: Yes.
Ross Douthat: Okay.
Peter Thiel: But I also hope that we can solve these problems fundamentally. So... I'm not sure, yes - this is "transhumanism". Its ideal state is to completely transform our natural human body into an immortal body.
This view is often compared to gender transition. For example, in the transgender topic, some people are transvestites, who achieve gender expression through changing clothes; some people are transsexuals, who may undergo surgery to change their reproductive organs from male to female, or vice versa. Of course, we can discuss what these surgeries have changed and how much they have changed.
But the criticism of these transformations is not that they are "weird" or "unnatural" but that they are too trivial. We want more than just cross-dressing or organ replacement. We want more radical transformations - changes in a person's heart, mind, and even body.
By the way, the criticism of this kind of transhumanism by orthodox Christianity is not that it is too radical, but that it is far from enough. You have changed the body, but you have not changed the soul, the whole state of being.
Ross Douthat: Wait a minute. I basically agree with your point that religion is supposed to be a friend of science and the idea of technological progress. I also think that any belief in divine providence must acknowledge the fact that we have made progress and have accomplished things that would have seemed unimaginable to our ancestors.
But the ultimate promise of Christianity still seems to be that, through God’s grace, one can achieve a perfect body and a perfect soul. And the person who tries to achieve this on his own with a bunch of machines is likely to end up as a character in a dystopian story.
Peter Thiel: Well, let's make this a little bit clearer.
Ross Douthat: Of course you can have a heretical form of Christianity that gives another account.
Peter Thiel: Yes, I don't know. But I've noticed that the word "nature" does not appear once in the entire Old Testament. In this sense, the Judeo-Christian apocalyptic tradition as I understand it is actually a spiritual tradition that transcends nature. It is about transcendence and overcoming. The closest it comes to "nature" is probably this: man is fallen. From a Christian perspective, this "fall" can almost be regarded as the natural state: man is flawed and incomplete. That much is true. But in a sense, faith means that you must, by the power of God, transcend and overcome it.
Ross Douthat: Exactly. But those who are trying to build a "machine god" right now, and you are certainly not among them, do not see themselves as working with Yahweh, the Lord of Hosts.
Peter Thiel: Sure, sure. But…
Ross Douthat: They think they are building immortality on their own, right?
Peter Thiel: We are jumping around a lot. Back to my point, my criticism is that they are not ambitious enough. From a Christian perspective, they are far from radical enough.
Ross Douthat: But what they lack is moral and spiritual ambition.
Peter Thiel: Are they still ambitious enough on the physical level? Are they still transhumanists? To be honest, cryonics seems like a throwback to 1999, and no one is really doing it anymore. So they are not transhumanists on the physical dimension. Maybe they are moving towards the path of "uploading consciousness"? But, to be honest, I would rather have my own body than a computer program that just simulates me.
Ross Douthat: I agree with that.
Peter Thiel: So uploading, I think, is even a step down from cryonics. But even so, it's part of the conversation -- and it's hard to assess at this point in the discussion. I'm not saying they're all making this up, this is all fake, but I'm also not...
Ross Douthat: Do you feel that some of it is fake?
Peter Thiel: I don't think it's fake. Because "fake" means they are lying. But what I want to say is that these are not their real focus.
Ross Douthat: Got it.
Peter Thiel: So we see a lot of abundantist language, an optimistic narrative. I was talking to Elon about this a few weeks ago, and he said that in ten years, the United States will have a billion humanoid robots. I said: If that's true, then you don't have to worry about the fiscal deficit anymore. Because by then, growth will be so rapid that economic growth itself will solve this problem. But he's still worried about the deficit. This certainly doesn't mean that he doesn't believe in the prospect of "a billion robots", but it may mean that he hasn't thought through the implications of this expectation, or he doesn't think it will bring about a radical change in the structure of the economy; or it may just be that there is a lot of uncertainty in this expectation. So, to some extent, these future blueprints have not been really thought through.
If I can criticize Silicon Valley, it is that it always avoids the question of "what does technology mean". People often get stuck in the micro level, such as "What is the IQ-ELO score of AI?" "How should we define AGI?" We get caught up in these endless technical details and ignore the more mid-level questions, which are precisely the ones that are really important: what does it mean for the fiscal deficit? What does it mean for the economic structure? What does it mean for geopolitics?
One of the questions I talked with you about recently is: will artificial intelligence change the way humans wage war? If we are entering an accelerating AI revolution, then militarily - will other countries fall behind? Optimistically, that gap might have a deterrent effect: other countries will know they have lost. Pessimistically, it may prompt them to move faster - because they realize it's now or never. If you don't fight now, you may never get the chance.
In either case — and this is going to be a very big deal — the thing is: we haven’t thought through these issues yet. We haven’t seriously discussed what AI means for geopolitics, or its impact on the macroeconomy. These are the questions I want us to collectively explore more deeply.
End-time Imagination: Who is the real "Antichrist"?
Ross Douthat: You are also concerned about another larger issue - let's continue the conversation along the line of "religion". You have recently talked a lot about the concept of Antichrist - this is a term in the Christian context and is also related to eschatology. What does "Antichrist" mean to you? How do you understand this concept?
Peter Thiel: How much time do we have?
Ross Douthat: We have plenty of time to talk about the Antichrist.
Peter Thiel: Well, I could talk about this topic for a long time. I think when we talk about existential risks or challenges facing humanity, we always face a problem of expression framework. These risks are often put into a kind of science fiction grammar of "technology out of control, heading towards dystopia": such as nuclear war, environmental disasters, or more specifically, climate change, although we can list many other risks, such as biological weapons, various different science fiction doomsday scenarios. Of course, artificial intelligence does bring certain types of risks.
But I've always felt that if we really want a framework for discussing "existential risks," then we should also discuss the possibility of another "bad singularity," by which I mean a global totalitarian state. Because when faced with all the risks above, the political solution people presuppose is often a move toward "one-world governance." For example, how do we control nuclear weapons? Imagine a truly powerful United Nations that controls all nuclear weapons and coordinates governance through a global political order. Similarly, when we talk about how to deal with artificial intelligence, the same kind of answer appears: we need "global compute governance," a world government that supervises every computer, records every keystroke, and ensures no one can write a dangerous AI program. I've always wondered whether this path is really just jumping out of the frying pan and into the fire.
The atheist version is: "One World or Nothing." This is the title of a short film made by the Federation of American Scientists in the 1940s. The film begins with a nuclear explosion destroying the world, and then concludes that to avoid destruction, we must establish a world government. "One world or nothing." The Christian version is actually the same question to some extent: Antichrist or Armageddon? You either accept a one-world order dominated by the Antichrist; or we sleepwalk to Armageddon (the final battlefield of the global war at the end of the world in the Bible). In the end, "One World or Nothing" and "Antichrist or Armageddon" are actually different ways of saying the same thing.
I have many thoughts on this question, but one key loophole has always bothered me - many narratives about the Antichrist skip over a core question: how does the Antichrist actually take over the world? The story says he relies on demonic oratory and hypnotic speech, and everyone simply believes him. That sounds like a kind of "demon ex machina."
Ross Douthat: Yeah, that's totally untenable.
Peter Thiel: Yes, it's an obvious plot hole. But I think we have actually found the explanation for this hole now: the way the Antichrist takes over the world is not by demagoguery, but by constantly creating "end-of-the-world anxiety." He will keep repeating "Armageddon is coming" and "existential risks are imminent" and use this as a reason to regulate everything. He is not the image of the "evil technology genius" in the 17th and 18th centuries, not a scientific madman sitting in a laboratory creating a machine of destruction. The reality is that people are much more cautious and fearful than that.
In our time, what really resonates politically is not "technological liberation" but "technophobia." People say, "We have to stop; we can't let technology get out of control." We once imagined a Dr. Strangelove or Edward Teller-type technocrat taking over the world; today, the person more likely to fill that role is Greta Thunberg.
Ross Douthat: I want to raise the possibility of an intermediate state. In the past, the antichrist we feared was a technological wizard with super powers; now we are more likely to fear someone who promises to "control technology and ensure security." In your view, that is a move towards a state of general stagnation, right?
Peter Thiel: Yes, that's closer to the path I think is going to happen.
Ross Douthat: But you think people are still afraid of a 17th century-style antichrist. We're still afraid of a Dr. Strangelove-style character.
Peter Thiel: Yes, deep down everyone is still afraid of that old-fashioned antichrist image.
Ross Douthat: But you're saying that the real Antichrist will take advantage of this fear and say: You have to follow me to avoid Skynet, the Terminator, and nuclear Armageddon.
Peter Thiel: That’s right.
Ross Douthat: My view is that, given the way the world looks right now, for people to believe that this fear is real, some new form of technological breakthrough is needed to make that doomsday threat concrete. In other words, I can understand that if the world really believes that AI is about to destroy humanity, it may indeed turn to a leader who promises "peace and regulation." But to achieve that state, a certain "technological explosion" must occur first, that is, the accelerationist apocalyptic picture must be partially realized.
In order to usher in the Antichrist of "peace and security" that you describe, the prerequisite is that technology must first make a significant breakthrough. For example, a fundamental flaw of 20th-century totalitarianism was the "knowledge problem": it could not grasp what was actually happening in the world. So you would need AI or other new technologies to solve this information bottleneck and provide the data to sustain totalitarian rule. In other words, the "worst outcome" you envision actually depends on a real technological leap - which is then tamed to maintain a stagnant totalitarian order. We can't jump there directly from the current state of technology.
Peter Thiel: Well, there is a path --
Ross Douthat: And now we have Greta Thunberg protesting against Israel on a boat in the Mediterranean. I just don’t see how “security through AI,” “technological tranquility,” or “climate control security” can become a powerful, global political rallying cry. Without real technological acceleration and the fear of catastrophe, the rhetoric itself is unlikely to work.
Peter Thiel: These are really hard questions to answer, but I do think that environmentalism is a very powerful force. I'm not sure if it's powerful enough to create a totalitarian state that "rules the world," but it's certainly powerful.
Ross Douthat: Under the current circumstances, I don't think so.
Peter Thiel: I would say that in Europe, environmentalism is probably the only thing people still believe in. Their belief in a "green future" even outweighs their concerns about Islamic law or the so-called totalitarian rule of some countries. After all, the so-called "future" is a concept that looks different from the present. In Europe, there are only three imaginations that can be regarded as the future: green transformation, Islamic law, and totalitarianism. And "green" is obviously the most powerful narrative among them.
Ross Douthat: That was in a Europe that was in decline and no longer dominated the world.
Peter Thiel: Of course, it is always nested in a specific context. We can see this by looking back at the history of nuclear technology. In the end, we did not move towards a global totalitarian model of rule. But by the 1970s, there was an explanation for technological stagnation: the rapid advancement of technology had become frightening, and the "Baconian scientific spirit" also ended at Los Alamos.
From that point on, society seemed to have made up its mind: stop there and don’t move forward. When Charles Manson took LSD in the late 1960s and eventually went on a murderous path, what he saw in the hallucinogen was an extremely free worldview: you can act like an anti-hero in a Dostoyevsky novel, and everything is allowed.
Of course, not everyone became Charles Manson. But in the history I tell, everyone became as paranoid as he was, and ultimately the hippies dominated the culture…
Ross Douthat: But Charles Manson didn’t become the Antichrist and take over the world. We were just talking about the end of the world, and you…
Peter Thiel: But the story of the '70s, to me, is this: The hippies won. We landed on the moon in July 1969, and three weeks later, Woodstock happened. Looking back on it today, that was the watershed moment when technological progress stopped. The hippies won the culture war. I don't mean that Charles Manson literally won.
Ross Douthat: OK, let's get back to the Antichrist and wrap up. What you just said sounded like a retreat: environmentalism is anti-progress enough, so let's leave it at that. Fine, let's accept that for now.
Peter Thiel: I'm not retreating, I'm just pointing out that this force is indeed very powerful.
Ross Douthat: But the reality is that we are not currently living under the rule of the Antichrist. We are just in a state of stagnation. And what you are suggesting is that there could be a worse future: a fear-driven order that makes stagnation permanent. My view is that if that were to happen, it would have to be accompanied by some kind of dramatic technological leap, something like Los Alamos-level change, that would really scare humanity.
I want to ask you a specific question: You are an investor in AI and are deeply involved in the development of Palantir, military technology, surveillance technology, and war-related technology. In the scenario you just described: an "Antichrist" who uses human fear of technological change to establish a global order. It sounds like he is likely to use the tools you are building. For example: he may say, "We no longer need technological progress," but he will add, "I am very happy with Palantir's current results."
Isn’t this something to worry about? Could it be that the first person to publicly worry about the Antichrist inadvertently hastened his arrival?
Peter Thiel: Look, there are a lot of different possibilities here. But I certainly don't see myself doing what you're saying.
Ross Douthat: I don't really think you're doing that. I'm just trying to understand how a society gets to the point where it's willing to accept permanent authoritarian rule.