Author: MD
Produced by: Bright Company
Recently, the well-known American podcast Invest Like the Best once again interviewed Marc Andreessen, co-founder of Andreessen Horowitz. In the interview, Marc and host Patrick explored in depth how AI is reshaping technology and geopolitics, discussed DeepSeek's open-source AI and its significance in the technological competition among major powers, and shared their views on the evolution of the global power structure and the broader transformation of the venture capital industry.
"Mingliang Company" used AI tools to organize the core content of the interview as soon as possible. For the full text, please see the "original link" at the end of the article.
The following is the interview content (abridged):
Talking about DeepSeek, AI winners and losers
Patrick: Marc, I think we have to start with the core question. Can you talk about your thoughts on DeepSeek's R1?
Marc: There are many dimensions to this. (I think) the US is still the recognized scientific and technological leader in AI. Most of the ideas in DeepSeek are derived from work done in the US or Europe over the last 20 years, or, surprisingly, as long as 80 years ago. The initial research on neural networks began at US and European research universities in the 1940s.
Therefore, from the perspective of knowledge development, the United States is still far ahead.
But DeepSeek has done a really great job of using that knowledge. And they've also done a really amazing thing by making it available to the world in open-source form. That's actually pretty remarkable, because it reverses the usual pattern: American companies like OpenAI are basically completely closed.
Part of Elon Musk's lawsuit against OpenAI is asking them to change the company's name from OpenAI to Closed AI. OpenAI's original vision was that everything would be open source, but now everything is closed. Other large AI labs, such as Anthropic, are also completely closed. In fact, they have even stopped publishing research papers, treating everything as proprietary.
And the DeepSeek team, for their own reasons, actually lived up to the promise of true open source. They released the code for their LLM (called V3) and their reasoner (called R1), and published a detailed technical paper explaining how they built it, which essentially provided a roadmap for anyone else who wanted to do similar work.
So it's out in the open. There's a false narrative out there that if you use DeepSeek, you're giving all your data to the Chinese. That's true if you use the service on the DeepSeek website, but you can also download the code and run it yourself. I'll give you an example: Perplexity is a US company, and you can use DeepSeek R1 on Perplexity, which is fully hosted in the US. Microsoft and Amazon now have cloud versions of DeepSeek that you can run on their cloud platforms, and obviously both of those companies are US companies, using US data centers.
This is really important. You can download this system right now and actually run it on $6,000 worth of hardware at home or at work. It's comparable in capability to cutting-edge systems from companies like OpenAI and Anthropic.
These companies invested a lot of money to build their systems. Today, you can run a comparable one on $6,000 of hardware. If you run it yourself, you have complete control: you have full transparency into what it's doing, you can modify it, you can do all kinds of things with it.
It also enables a really cool technique called distillation. You can take a big model that requires $6,000 of hardware and compress it down to create smaller versions of the model. People have created smaller versions of the model online and optimized them so that you can run them on a MacBook or an iPhone. They're not as smart as the full version, but they're still pretty smart. You can also create customized, domain-specific distilled versions that do really well in a particular domain.
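Conceptually, distillation trains a small "student" model to imitate the softened output distribution of a large "teacher" model. The following is a minimal stdlib-Python sketch of the core loss term, for illustration only; it is not DeepSeek's (or any lab's) actual training code, and real systems minimize this loss by gradient descent over billions of parameters:

```python
import math

def softmax(logits, temperature=1.0):
    """Softmax with a temperature knob: higher T flattens the distribution."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between the softened teacher and student distributions.

    In distillation, this term is driven toward zero so the small student
    mimics the large teacher's full output distribution, not just its
    single top answer.
    """
    p = softmax(teacher_logits, temperature)  # teacher's "soft targets"
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# A student whose logits match the teacher incurs zero loss;
# a mismatched student incurs a positive loss.
teacher = [3.0, 1.0, 0.2]
print(distillation_loss(teacher, [3.0, 1.0, 0.2]))      # → 0.0
print(distillation_loss(teacher, [0.2, 1.0, 3.0]) > 0)  # → True
```

The temperature knob matters because it exposes which wrong answers the teacher considers "almost right", and learning that structure is part of why small distilled models stay surprisingly capable.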
This is a huge step forward in accessibility: large-model reasoning, the kind R1 does in programming and in science, is suddenly within everyone's reach. Six months ago, this was very esoteric, extremely expensive, and proprietary. Now it's free and available to everyone forever.
Every big tech company, every internet company, and every startup (we've seen dozens if not hundreds of startups just this week) is either rebuilding on DeepSeek, integrating it into their products, or studying the techniques it uses and applying them to improve existing AI systems.
Mark Zuckerberg recently talked about how the Meta team is tearing down DeepSeek, borrowing ideas (completely legally, since it's open source), and making sure the next version of Llama is at least as good as DeepSeek in reasoning capabilities, or better. That really moves the world forward.
There are two main takeaways here. First, AI is going to be everywhere. There are a lot of AI-risk people, security people, regulators, officials, governments, the EU, the British, and so on, all of whom want to restrict and control AI, and this basically guarantees that none of that will happen, which I think is great; it's very much in the free tradition of the Internet. Second, this achieves a 30x cost reduction in reasoning.
Perhaps the final point is that this shows that reasoning will work. Reasoning will work in any area of human activity as long as you can generate answers that can be checked after the fact by technical experts for correctness.
We will have AI capable of human and superhuman level reasoning, which will be useful in areas that really matter: coding, mathematics, physics, chemistry, biology, economics, finance, law, and medicine.
This basically guarantees that within five years every single person on the planet will have a superhuman AI lawyer, AI doctor, on call, just as a standard feature on their phone. This will make the world a better, healthier, and more amazing place.
Patrick: But this is also the most volatile moment; in two months the model will be outdated. There is a lot of innovation happening at every layer of the technology. But just looking at this point in time, entering this new paradigm, if you were writing a column about the winners and losers among all stakeholders, whether it's new application developers, existing software developers, infrastructure providers like Nvidia, or open-source versus closed-source model companies: who do you think are the winners and losers after the release of R1?
Marc: If we take a "snapshot" today and look at winners and losers at this point in time, as if it were a zero-sum game, the winners are all users, all consumers, every individual, and every business that uses AI.
There are some startups, such as those providing AI legal services, whose cost of using AI last week was 30 times what it is now.
For example, for a company that's building an AI lawyer, if the cost of its key input goes down 30x, that's like the cost of gas going down 30x when you're driving. All of a sudden, you can drive 30 times farther with the same dollar, or you can use the extra spending power to buy more stuff. All of these companies will either greatly expand their use of AI in these areas or be able to provide services cheaper, or free. So even on a fixed-pie, point-in-time basis, it's a fantastic outcome for users and for the world.
The losers are the companies with proprietary models, like OpenAI, Anthropic, etc. You'll notice that both OpenAI and Anthropic have sent out pretty strong, if provocative, messages this past week explaining why this isn't the end for them. There's an old saying in business and politics that when you're explaining, you're losing.
And then the other one is Nvidia. There's a lot of commentary on this, but Nvidia makes the standard AI chip that people use. There are some other options, but Nvidia is what most people use. Their profit margins on their chips are like 90%, and the company's stock price reflects that. Nvidia is one of the most valuable companies in the world. One of the things the DeepSeek team showed in their paper is that they figured out how to use cheaper chips, actually still Nvidia chips, but used far more efficiently.
Part of the 30x cost reduction is that you need fewer chips. And by the way, China is building out its own chip supply chain, and some companies are starting to use Chinese-made chips, which of course is a more fundamental threat to Nvidia. So this is a snapshot at a point in time. But your question suggests another way to look at it: over time, what you want to look for is an elasticity effect. Satya Nadella used a phrase for this: the Jevons paradox.
Think about gasoline. If the price of gasoline drops dramatically, then all of a sudden people drive more cars. This happens all the time in transportation planning. So you have a city like Austin that's choking on traffic, and somebody suddenly has the idea to build a new freeway right next to the existing freeway. And in just two years, the new freeway is clogged, too, and maybe even harder to get from one place to another. The reason is that a reduction in the price of a key input can induce demand.
If AI suddenly becomes 30 times cheaper, people might use it 30 times more, or by the way, they might use it 100 times or even 1,000 times more. The economic term for this is called elasticity.
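The elasticity argument can be made concrete with a toy constant-elasticity demand model. This is purely an illustration of the economics, not an estimate; the elasticity values below are made up:

```python
def demand_after_price_cut(base_demand, price_ratio, elasticity):
    """Constant-elasticity demand: quantity scales as (new/old price)^(-elasticity).

    With elasticity > 1, total spending *rises* when price falls
    (the Jevons-paradox case); with elasticity < 1, spending falls.
    """
    return base_demand * price_ratio ** (-elasticity)

old_price, new_price = 30.0, 1.0  # a 30x cost reduction in inference
base = 1.0                        # normalized current usage

for e in (0.5, 1.0, 1.5):
    usage = demand_after_price_cut(base, new_price / old_price, e)
    # total spending relative to before the price cut
    spending = (usage * new_price) / (base * old_price)
    print(f"elasticity {e}: usage x{usage:.1f}, total spending x{spending:.2f}")
```

If demand for AI reasoning is highly elastic, a 30x price cut means far more than 30x the usage, and total spending on chips and models grows rather than shrinks, which is exactly the scenario Marc sketches for Nvidia and the labs in the next paragraph.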
So price drops equals explosive growth in demand. I think there's a very plausible scenario here, which is that on the other side, as usage explodes, DeepSeek will do very well. And by the way, OpenAI, Anthropic will do very well, Nvidia will do very well, the Chinese chipmakers will do very well.
Then you'll see a tidal wave effect where the whole industry will explode. We're really just at the beginning of people figuring out how to use these technologies. Inference has only started working in the last four months. OpenAI just released their o1 inference model a few months ago. It's like taking fire off the mountain and giving it to all of humanity. And most of humanity hasn't used fire yet, but they will.
And then, frankly, there's an older dynamic at work, which is that if you're OpenAI or a company like it, what you did last week is no longer good enough. But then again, that's the way of the world. You have to get better. These things are races. You have to evolve. So this has also been a very powerful catalyst for a lot of existing companies to really up their game and become more aggressive.
…
Patrick: …, it's a hard thing to accept if a Chinese company builds on a model that was developed in the United States, that was invested in heavily, and then turns that into technology that enriched the world. I'd love to hear your response from both perspectives.
Marc: Yeah, so there are some real issues here. There’s an irony to this argument, and you do hear this argument. The irony, of course, is that OpenAI didn’t invent the Transformer. The core algorithm for large language models is called the Transformer.
It wasn’t invented at OpenAI, it was invented at Google. Google invented it, published a paper on it, and then, by the way, they didn’t productize it. They continued to research it, but they didn’t productize it because they thought it might be unsafe for “safety” reasons. So they let it sit on the shelf for five years, and then the team at OpenAI figured it out, picked it up, and moved forward with it.
Anthropic is a fork of OpenAI. Anthropic didn’t invent the Transformer. So both of these companies, and every other US lab that’s working on large language models, and every other open source project, are building on top of something they didn’t create and develop themselves.
By the way, Google invented the Transformer in 2017, but the Transformer itself is based on the concept of neural networks, an idea that dates back to 1943. So the original neural network paper was published 82 years ago, and the Transformer is built on more than 70 years of research and development, much of it funded by the US federal government and European governments at research universities.
So it's a very long lineage of intellectual ideas and developments, and most of the ideas that went into all of these systems were not developed by the companies that are currently building them. No company sits here, including our own, without any special moral claim that we built this from scratch and we should have total control. That's simply not true.
So, I would say that arguments like these are made out of frustration in the moment. And by the way, these arguments are also moot, because China has already done it; it's already out, it's already happened. There's a debate about copyright right now. If you talk to experts in this field, a lot of people have been trying to understand why DeepSeek is so good. One theory, unproven but one that some experts find credible, is that the Chinese company may have trained on data that the American companies didn't use.
What's particularly surprising is that DeepSeek is really good at creative writing. DeepSeek is probably the best AI in the world right now for creative writing in English. That's a bit strange, because the official language of China is Chinese. There are some very good Chinese novelists who write in English, but generally speaking, you might expect the best English creative writing to come from the West. And DeepSeek is probably the best right now, which is shocking.
So one of the theories is about what DeepSeek might have been trained on. For example, there's a website called Libgen, basically a giant internet repository full of pirated books. I certainly don't use Libgen myself, but I have a friend who uses it a lot. It's like a superset of the Kindle store: it has every digital book, in PDF format, that you can download for free. It's like The Pirate Bay, but for books.
A US lab might not feel it can simply download all the books from Libgen and train on them, but maybe a Chinese lab feels it can. So there might be a differential advantage there. That said, people need to be careful here, because there's an unresolved copyright battle in which some publishing companies basically want to prevent generative AI companies like OpenAI, Anthropic, and DeepSeek from being able to use their content.
There is an argument that these materials are copyrighted and cannot simply be used. There is another argument that says an AI trained on books is not copying the books; it is reading them, and reading books is legal.

You and I are allowed to read books, by the way. We can borrow books from the library. We can pick up a friend's book. These actions are all legal. We are allowed to read books, we are allowed to learn from books, and then we can go on with our daily lives and talk about the ideas we learned from them. On this view, training an AI is more like a human reading a book than stealing it.
And then there’s also the practical reality that if… their AI can be trained on all the books, and if American companies end up being legally prohibited from training on books, then the United States could lose the race in AI.
From a practical point of view, that would probably be a coup de grâce: they won and we lost. There may be some entanglement in the whole argument. DeepSeek does not disclose the data they trained on. So when you download DeepSeek, you don't get the training data; you get what are called weights. You get a neural network that has been trained on the training material, but it is difficult or even impossible to look at the weights and deduce the training data from them.
By the way, Anthropic and OpenAI also don't disclose the data they train on. There's intense speculation in the field about what is and isn't in the OpenAI training data. They consider it a trade secret; they won't disclose it. So DeepSeek's training data may or may not be different from these companies', and the American labs may or may not train differently from the Chinese companies. We don't know.
We don’t know exactly what OpenAI and Anthropic’s algorithms are because they are not open source, and we don’t know how much better or worse they are than the publicly available DeepSeek algorithm.
Talking about closed source and open source
Patrick: Do you think the closed-source model companies now entering the competition, like OpenAI and Anthropic, will eventually end up in something like the Apple-versus-Android dynamic with open source?
Marc: I support maximizing competition. That, by the way, fits my role as a venture capitalist. If I were a company founder running an AI company, I would need a very specific strategy with pros, cons, and trade-offs.
As a venture capitalist, I don't have to do that. I can make multiple conflicting bets. This is what Peter Thiel calls definite optimism versus indefinite optimism. Company founders and CEOs have to be definite optimists: they have to have a plan, and they have to make difficult trade-offs to achieve that plan. Venture capitalists are indefinite optimists: we can fund a hundred companies with a hundred different plans and conflicting assumptions.
The nature of my job is that I don't have to make the kind of choices you just described. And then that makes it easy for me to make a philosophical argument that I personally and sincerely agree with, which is that I support maximum competition. So, going one level deeper, that means I support free markets, maximum competition, and maximum freedom.
Essentially, if you can have as many smart people as possible come up with as many different approaches as possible and compete with each other in the free market, see what happens. Specifically for AI, this means I support large labs moving as fast as possible.
I 100% support OpenAI and Anthropic doing whatever they want, launching whatever products they want, and growing as hard as they can. As long as they don't get preferential policy treatment, subsidies, or support from the government, they should be able to do whatever they want as a company.
Of course, I support startups as well. We're certainly actively funding AI startups of all sizes and types. I want them to grow, and I want open source to grow, in part because I think that if the technology is open source, even if that means some companies' business models don't work, the benefit to the world and the industry as a whole is so great that we'll find other ways to make money. AI will become more ubiquitous, cheaper, and more accessible. I think that will be a great outcome.
And then there's another very critical reason for open source: without it, everything becomes a black box owned and controlled by a few companies that could end up colluding with the government (and we can have a discussion about that). You need open source to be able to see what's going on inside the box.
By the way, you also need open source for academic research, and you need open source for teaching. Go back two years, when there was no capable open-source base LLM. Then Meta released Llama, then Mistral in France, and now DeepSeek.
But before these open-source models came along, the university system was going through a crisis: university researchers at places like Stanford, MIT, and Berkeley didn't have the money to buy billions of dollars' worth of Nvidia chips in order to really compete in the AI field.
So if you talked to computer science professors two years ago, they were very worried. The first worry was that my university doesn't have enough funding to compete in the AI field and stay relevant. And then the other worry was that all universities combined don't have enough funding to compete because no one can keep up with the funding capabilities of these large companies.
Open source has put universities back in the game. It means if I'm a professor at Stanford, MIT, Berkeley, or any state school, whether it's the University of Washington or somewhere else, I can now teach using the Llama code, the Mistral code, or the DeepSeek code. I can do research, I can actually make breakthroughs. I can publish my research and people can actually understand what's going on.
And then every new generation of kids that come to college and take computer science classes will be able to learn how to do this, whereas if it was a black box, they wouldn't be able to do it. We need open source just like we need free speech, academic freedom, and freedom of research.
So my model is basically, you have big companies, small companies, and open source competing against each other. That's what happened in the computer industry. It worked well. That's what happened in the internet industry. It worked well. I believe it's going to happen in AI, and I think it's going to work well.
Patrick: Is there a limit to wanting maximum evolutionary rate and maximum competition? Maybe. If I say, we know the best stuff is made in China, ..., is there a point where you say, yes, I want maximum evolution and competition, but the national interest somehow overrides the desire for maximum evolutionary rate and development?
Marc: This argument is a very real argument. It's been made quite frequently in the AI space. In fact, as we sit here today, there are two things. First, there are actually restrictions on Western companies and American companies selling cutting-edge AI chips to China. For example, Nvidia today actually cannot legally sell its cutting-edge AI chips to China. We live in a world where this decision has been made and this policy has been implemented.
And then the Biden administration had issued an executive order, which I think is now rescinded, but they had issued an executive order that would have put similar restrictions on software. It's a very active debate. And there's another round of this going on in Washington, D.C., with the DeepSeek incident.
And then basically, when you get into policy debates, you have a classic situation where you have a rational version of the argument, which is what is in the national interest from a theoretical perspective. And then you have a political version of the argument, which is, okay, what does the political process actually do to the rational argument? Let me put it this way, we all have a lot of experience watching rational arguments meet the political process, and it's usually not the rational argument that wins. It goes through the political machine, and what comes out is usually not what you initially thought you were going to get.
And then there's a third factor, which we always need to discuss, which is the influence of corruption, especially by large companies. If you're a large company and you see how much more competitive Chinese companies are becoming, and the threat posed by open source, of course you're going to try to use the U.S. government to protect yourself. Maybe it's in the national interest, maybe it's not. But you're definitely going to push for it either way. That's what makes this debate complicated.
You can't sell cutting-edge AI chips to China. That certainly hinders them in some ways. There are things they won't be able to do. Maybe that's a good thing, because you've decided this is in the national interest. But let's look at some other interesting consequences that come out of this.
One consequence is that it gives Chinese companies a huge incentive to figure out how to do things on cheaper chips. That was a big part of the DeepSeek breakthrough: they figured out how to use cheaper, legally compliant chips to do things that American companies could only do with bigger chips. That's one reason it's so cheap. One of the reasons you can run it on $6,000 worth of hardware is that they invested a lot of time and effort into optimizing the code to run efficiently on cheaper, unsanctioned chips. You force an evolutionary response.
So that's the first reaction, and maybe it's already backfired to some extent. The second consequence is that you incentivize the Chinese state and private sector to develop a parallel chip industry. If they know they can't get American chips, they will develop their own, and they are doing that now. They have a national program to build their own chip industry so that they are not dependent on American chips.
From a counterfactual perspective, without the restrictions maybe they'd just keep buying American chips. Now they're going to go figure out how to make them themselves, and maybe in five years they'll be able to. But once they can make them themselves, we'll have a direct competitor in the global market that we wouldn't have had if we were just selling them chips. And by the way, at that point we won't have any control over their chips. They'll have complete control. They can sell them below cost, and they can do whatever they want with them.
How AI reasoning capabilities are changing the VC and investment industry
Patrick: How do you think all of this will impact capital allocation? I'm most interested in how your firm, Andreessen Horowitz, will be impacted, maybe five years from now. If I think of investment firms as a combination of being able to raise capital, doing great analytical work, and being able to judge people, especially at an early stage, how do you think that function will change with the advent of "o7"?
Marc: I expect the analytics part to change dramatically. You have to assume that the best investment firms in the world will become very good at using this technology to do their analytical work.
That being said, there is a saying that the shoemaker's son has no shoes, and perhaps the venture firms investing most aggressively in AI are among the least aggressive about actually applying it in their own operations. But we have multiple efforts going on within our firm that I'm really excited about. Firms like ours need to keep up, so we have to really do it.
Is there some of this work already going on inside the industry? Probably not yet, or probably not enough. Having said that, a lot of the people we talked to had a very analytical perspective on late-stage investing or public-market investing. Take a great investor like Warren Buffett. I don't know if this is true, but I've always heard that Warren never meets with CEOs.
Patrick: He wanted "ham sandwich" companies.
Marc: Yeah, he wanted companies so simple a ham sandwich could run them. And I think he was a little bit worried about being lured by a good story. You know, a lot of CEOs are very charismatic people. They're always described as "great hair, great teeth, polished shoes, and a well-tailored suit." They're very good at sales. One of the things CEOs are good at is selling, especially selling their own stock.
So if you're Buffett and you're sitting in Omaha, what you do is you read the annual report. Companies list everything in their annual report, and they're bound by federal law to make sure it's true. So that's how you do your analysis. So do inference models like o1, o3, o7, or R4 do a better job than most investors analyzing annual reports by hand? Probably.
As you know, investing is an arms race, just like everything else. So if it works for one person, it will work for everyone. It will be an arbitrage opportunity for a period of time, and then it will close and become the standard. So I expect the investment management industry will adopt this technology in this way. It will become a standard way of operating.
I think for early stage venture it's a little different. What I'm about to say next is probably just wishful thinking on my part. I could be the last Japanese soldier on a remote island in 1948 saying what I'm about to say next. I'm going to go out on a limb. But I will say, look, in the early stages, a lot of what we do in the first five years is actually really deeply assess individuals and then work very deeply with those individuals.
This is also why venture capital is hard to scale, especially geographically. Geographic scale experiments often don't work. The reason is that you end up spending a lot of time face-to-face with these people, not only during the evaluation process, but also during the building process. Because in the first five years, these companies are usually not on autopilot.
You actually need to work closely with them to make sure that they are able to achieve everything that is needed to be successful. There are very deep relationships, conversations, interactions, mentoring, and by the way, we learn from them and they learn from us. It's a two-way exchange.
We don't have all the answers, but we have a perspective because we see the bigger picture, and they're more focused on the specifics. So there's a lot of two-way interaction. Tyler Cowen talked about this, I think he called it "project cherry picking."
Of course, "talent discovery" is another version of this. If you look back at any new field in human history, you almost always find the same phenomenon: there are some unique personalities trying to do something new, and then there is a professional support layer that finances and supports them. In the music industry, it was David Geffen who discovered all the early folk artists and made them into rock stars. In the film industry, it was David O. Selznick who discovered the early movie actors and made them into movie stars. Or it was someone in a coffee house or tavern in Maine 500 years ago discussing which whaling captain would be able to catch a whale.
You know, this is Queen Isabella in the palace listening to Columbus's proposal and saying, "That sounds reasonable. Why not?" This alchemy that develops over time, this alchemy that develops between people who are doing new things and the professional support layer that supports and finances these people, has been around for hundreds, even thousands of years.
You can imagine tribal leaders thousands of years ago sitting around a fire, and a young warrior comes up and says, "I want to lead a hunting party to that area over there to see if there's better prey." And the leader sits by the fire and tries to decide whether he agrees. So it's a very human interaction. My guess is that this interaction will continue. Of course, having said that, if I ever meet an algorithm that's better at this than I am, I'll retire immediately. We'll see.
Patrick: You're building one of the largest firms in this space. How have you adjusted your firm's growth strategy and direction, both practically and strategically, to deal with this new technology?
Marc: An important part of running a venture capital firm, in our view, is that there is a set of values and behaviors that you have to have, which we call immutable. For example, respect for entrepreneurs. You need to show tremendous respect for entrepreneurs and the journey that they're on. You need to deeply understand what they do. You can't just go through the motions.
You build deep relationships. You work with these people for the long term, and by the way, these companies take a long time to build. We don't believe in overnight success. Most of the great companies are built over a 10-, 20-, 30-year time span. Nvidia is a great example of that. Nvidia is more than 30 years old, and I think one of its original VCs is actually still on the board today. That's a great example of long-term building.
So there's a core set of beliefs, perspectives, and behaviors that we don't change, and those are related to what we just mentioned. The other one is the face-to-face interaction thing. You know, these things can't be done remotely, that's one. But the other part of it is you need to keep up with the times because technology changes so quickly, business models change so quickly, competitive dynamics change so quickly.
If anything, the environment has become more complex because you have so many countries now, and now there are all these political issues, which also complicate things. We never really worried about the political system putting pressure on our investments until about eight years ago. And then about five years ago, that pressure really intensified. But in the first ten years of our firm, and the first 60 years of venture capital, it was never a big thing, but now it is.
So we need to adapt. We need to engage in politics, which we didn't do before. Now we need to adapt, and we need to figure out that maybe AI companies are going to be very fundamentally different. Maybe they're going to be organized completely differently. Or as you said, maybe software companies are going to be run completely differently.
One of the questions we ask ourselves a lot is, for example, what would the organizational structure of a company that really leverages AI look like? Would it be similar to existing organizational structures, or would it actually be very different? There's no single answer to this, but it's something we're thinking about a lot.
So one of the delicate balancing acts that we do every day is trying to figure out what is timeless and what is contemporary. That's a big part of how I think about the firm conceptually: we need to navigate between those two and make sure we can differentiate between them.
Patrick : Your firm is now very large, and it's similar in some ways to a firm like KKR or Blackstone. You and Ben Horowitz were both experienced founders when you started this firm. Similarly, Schwarzman had never really invested before he started Blackstone, and look at how it's grown now.
It seems like this founder-led approach to building asset management investment firms is that they eventually become really large and ubiquitous platforms. You have vertical businesses that cover most of the exciting frontiers of technology. Do you think there is some truth to this view? Will the best capital allocation platforms be founded more by founders than by investors?
Marc : Yeah, so a couple of points. First of all, I think there's some truth to this observation. In the industry, people often talk about this, that a lot of investing operations are often referred to as partnerships. A lot of venture capital firms operate that way. Historically, it was a small group of people sitting in a room, bouncing ideas off each other, and then making investments. And by the way, they didn't have a balance sheet. It was a private partnership. They paid out money at the end of each year in the form of compensation. That's the traditional venture capital model.
In the traditional venture capital model, you have six general partners (GPs) sitting around a table and running the operation, along with a couple of assistants. But the point is, it's all based on people. And by the way, you actually find that in most cases, people don't like each other very much.
Mad Men shows this really well. Remember in Mad Men, in season three or four, the guys leave to start their own companies, and they don't actually like each other. They know they need to get together and start a company. That's how a lot of companies work. So, it's a private partnership, and that's what it represents.
But then what you see is that these companies are very hard to sustain. They have no brand value. They have no underlying enterprise value. They are not a business. You see this model of companies where when the original partners are ready to retire or do something else, they hand it over to the next generation. Most of the time, the next generation can't sustain it. Even if they can sustain it, there is no underlying asset value. The next generation will have to hand it over to the third generation. It may fail in the third generation, and then it will end up on Wikipedia. It will be like, "Yeah, this company existed, and then it disappeared, and other companies took its place, like ships passing in the night."
So that's the traditional way it works. And by the way, if you're trained in traditional investing, you're trained in the investing part, but you're never trained in how to build a business. So, it's not your natural strength, you don't have that skill or experience, so you're not going to do it. And many investors have operated that way for a long time as investors and made a lot of money. So, it can work very well.
The other way is to build a company, build a business, build something that has enduring brand value. You mentioned companies like Blackstone and KKR, these huge public companies. Same thing with Apollo. And you probably know that the original banks were actually private partnerships. Goldman Sachs and JPMorgan Chase 100 years ago were more like small private partnerships than the giants they are today. But then their leaders, over time, transformed them into these huge, publicly traded businesses.
So that's another way: to build a franchise. Now, to do that, you need a theory of why a franchise should exist, a conceptual theory of why it makes sense. And then, yes, you need business skills. At that point, you're running a business, and it's just like running any other business: I have a company. It has an operating model, it has an operating rhythm, it has management capabilities, it has employees, it has multiple layers, it has internal specialization and division of labor.
And then you start thinking about expansion, and over time you start thinking about the underlying asset value, that this thing is worth more than just the people who are there at the moment. By the way, we're not in a rush to go public or eager to distribute profits or anything, but one of the big things we're trying to do is build something that has this kind of durability.
Patrick : How do you hope the firm will be different in 10 years, in ways that don't exist today? And are there uncompromising aspects where you hope the firm will never evolve into a traditional large asset manager?
Marc : We evolve rapidly in what we invest in, what the company does, the model, and the background of the founders. It's always changing. For example, for 60 years, there has been a consensus in the venture capital community that you would never support a researcher starting a company to do research. He would just do the research, run out of money, and you would get nothing.
Yet, many of today’s top AI companies were founded by researchers. This is an example of how some so-called “timeless” values need to be adjusted to changing times. We need to be flexible to these changes. So, with these changes, the help and support a company needs to succeed will also change.
One of the most significant changes in our company, and I've mentioned this before, is that we now have a large and increasingly sophisticated political operations department. Four years ago, we had no political presence at all. Today, it's become a significant part of our business that we never expected.
I am sure that in another 10 years, we will not only be investing in areas that we cannot imagine today, but we will also have an operating model that we cannot imagine today. Therefore, we are completely open to changes in these aspects. However, there are some core values that I hope will remain unchanged in the next 10 years because they are well thought out and are the foundation of our company.
But what I have always emphasized to our team members and limited partners is that we are not pursuing scale for scale’s sake. Many investment firms, when they reach a certain scale, prioritize expanding their assets under management from billions to hundreds of billions or even trillions of dollars. This approach is often criticized as focusing more on collecting management fees rather than achieving excellent investment performance. This is not our goal.
The only reason we scale is to support the companies we want to help founders build. When we scale, it’s because we believe it helps us achieve that goal.
However, I must emphasize that the core of our firm is always early-stage venture capital. No matter how big we become, even if we set up a growth fund and can write larger checks - some AI companies do need a lot of money. We did not set up a growth fund from the beginning, but gradually built it up as the market demand and company development progressed.
But the core business is always early-stage venture capital. This can be confusing because, from the outside, we manage a lot of money. Why would I, as the founder of an early-stage startup, trust you to spend time with me when Andreessen Horowitz writes hundreds of millions of dollars of later-stage checks and only invested $5 million in my Series A? Would you still spend time with me?
The reason is that the core business of our firm has always been early-stage venture capital. From a financial perspective, the return opportunities in early-stage investing are comparable to those in later stages; that's the nature of early-stage companies. But more importantly, all our knowledge, our network, and what makes our firm unique come from our deep insights and connections at the early stage.
So I always tell people that if the situation demands it and the world is in trouble and we have to make sacrifices, the early stage venture business will never be sacrificed. It will always be the core of the firm. That's why I spend a lot of time working with early stage founders. On the one hand, it's very interesting; on the other hand, it's also where the most learning happens.
The transformation of global power structures: elites and anti-elites
Patrick : If we think about changes in global power structures, ..., which centers of power are you most concerned about that are changing, either in terms of gaining power or losing power?
Marc : The Machiavellians. I’m sure you’ve probably had a dozen people recommend this book on your show. It’s one of the greatest books of the 20th century. It’s a theory of political power, social and cultural power. One of the key ideas in this book that I’m seeing everywhere right now is the idea of elites and anti-elites.
The idea is this: basically, democracy itself is a myth. You're never going to have a completely democratic society. The United States, by the way, is certainly not a democracy, it's a republic. But even those "democratic" systems that work well tend to have a republican quality, lowercase-"r" republican. They tend to have a parliament, or a House of Representatives and a Senate, some kind of representative institution.
The reason for this is a phenomenon described in the book called the "Iron Law of Oligarchy," which is basically this: The problem with direct democracy is that the masses can't organize. You can't really get 350 million people to organize to do anything. There are just too many people.
So, in basically every political system in human history, you have a small, organized elite class governing a large, unorganized mass class. You start with the earliest hunter-gatherer tribes all the way up to the United States and every other political system in the modern era, whether it's the Greeks or the Romans or every empire, every nation throughout history.
So you have a small, organized elite governing a large, unorganized mass. This relationship is fraught with danger because the unorganized mass will defer to the elite for a time, but not necessarily forever. If the elite becomes oppressive to the masses, the masses greatly outnumber the elite. At some point, they may show up with torches and spears. So, there is tension in this relationship. Many revolutions have happened because the masses have decided that the elite no longer represents them.
Our society is no different. We have a large, unorganized mass class. We have a very small, organized elite class. America…has set up a system where we have two elites. We have the Democratic elite and the Republican elite. And by the way, there is so much overlap between these two elites that some people actually call it a “single party.” Maybe these elites have more in common with each other than they do with the masses.
For a long time, we had a Republican elite whose policies were ultimately represented by the Bush family. We had a Democratic elite whose policies were ultimately represented by Obama. In the last decade, there has been a rebellion within the elites, basically on both sides of the aisle in the United States. This is actually the key point in The Machiavellians: change is usually not the masses going directly against the elites. What happens is the emergence of a new anti-elite.
You have a new anti-elite that emerges and tries to replace the current elite. My reading of current affairs is that, generally speaking, the elites that are currently running the world are being found to be doing a poor job. We can get into why later. But generally speaking, if you look at the approval ratings of political leaders, the approval ratings of institutions, all of that is going down. What's happening everywhere in the world is that if you're an incumbent institution, if you're an incumbent newspaper, if you're an incumbent television network, if you're an incumbent university, if you're an incumbent government, generally speaking, your approval ratings are a disaster. That's what people are basically saying, the elites in power are failing us.
And then there are these anti-elites who say, "Oh, I know I have a better way to represent the masses, I have a better way to take over." My new anti-elite movement is supposed to replace the current elite movement, like what's happening with the Democratic Party. In 2016 it was Bernie Sanders, it was AOC and the whole progressive wave. And on the Republican side, it's obviously Trump and his MAGA movement and everything that it stands for.
But by the way, this dynamic is also happening in the UK. The Conservative Party has collapsed, and now you have this Reform Party, which has Nigel Farage, which is very threatening. You have Jeremy Corbyn, who is also an anti-elitist from the left.
The same thing is happening in Germany. In fact, just this week, something very dramatic happened in Germany: the so-called "far right" party, the AfD, is rising rapidly. There's a leader named Alice Weidel, and for the first time in 50 years or more of German political history, the Christian Democratic Union (CDU) actually cooperated with the AfD on something. All of a sudden, the AfD became a viable competitor. They are an anti-elite, right-wing group that is trying to take over the German political system.
So basically, wherever you go in the world, there's an anti-elite that's emerging and saying, "Oh, I can do better." It's a fight between the elites. The masses are aware of it, they're watching the democracies, and they're ultimately going to make the decision because they're going to decide who they're going to vote for.
That’s why Republican voters decided they were going to vote for Trump instead of Jeb Bush. It was a case of the anti-elites beating the elites. And this actually ties into the criticism of Trump, which is very interesting, which is that Trump is criticized by the existing elites who say, “Oh, he’s not really a man of the people. He’s a super-rich billionaire, he lives in a golden penthouse, he’s got people driving him everywhere. If you’re a rural farmer in Kentucky or Wisconsin, you shouldn’t think of him as your man.”
The point was never that Trump was a man of the people . The point was that Trump was an anti-elite who could better represent the people. That was the basis of his entire movement. And the same thing is true in the media, by the way. Everything you describe is exactly what happened in the media. The elite media has dominated for 50 years, and it's TV news, cable news, newspapers, and these big-name magazines. Now you have the anti-elite. The anti-elite is you, Patrick, and Joe Rogan. There are many more people.
By the way, if you look at the numbers, it's very clear that the masses, the viewers, the readers are leaving the old media and moving to the new media. And the existing elites are very angry about this. They're angry and writing all these negative articles about you guys, saying that you're all a bunch of white supremacists and this whole thing is terrible. Like, this is the way of the world. So we're in the middle of all this. I don't know if "transition" is the right term. It's more like a big battle between the old elite and the new elite.
Patrick : What were the initial seeds of the decline of the elite in the last generation that led to those 11% approval ratings? What do you attribute that to primarily?
Marc : There are two theories. One theory is that these approval ratings are wrong, and the other theory is that these approval ratings are correct. By “wrong,” I mean that these approval ratings are measured correctly, but people are giving the wrong answers.
If you’re the head of CNN or Harvard or anything like that and you have an 11% approval rating… By the way, Gallup has been doing a really remarkable survey for 50 years called “Trust in Institutions.” You can Google “2024 Gallup Trust in Institutions Survey” and you’ll see some really spectacular graphs and you’ll see that institutional trust basically peaked in the late 1960s and early 1970s and has been declining ever since.
This phenomenon, by the way, predates the internet. Interestingly, it's been blamed on the internet, but it predates the internet. So, this is a phenomenon that started developing in the 1970s and has been accelerating. And by the way, these approval ratings have been falling even faster since 2020.
They just kind of slid down like this and then they just plummeted after 2020. Network news, I don't know what the number is. It's in the single digits, people just don't trust it anymore. They don't trust what they're saying on the news anymore. And by the way, the audience ratings are going down the same way.
So one theory is, if you're the head of NBC News or CNN or Harvard, your theory might be, "Oh, people are wrong. People are misled; they're deceived by populists and demagogues, by disinformation." That's why the idea of "disinformation" has become so popular. ... People have been deceived by malicious actors, populists and demagogues, and it's only a matter of time: once we explain to people that they've been deceived, they'll go back to trusting us.
So, that's one theory. Another theory is that the elites are corrupt. They're corrupt, dysfunctional, and they don't provide services anymore. Under this theory, these numbers, these declines in approval ratings, are correct because every time you look at Congress, they're just recklessly spending your money on all kinds of crazy things. If you go to CNN or NBC News, they're always lying to you about a thousand different things. If you go to Harvard, they teach you about racial communism, America is evil, and all these crazy things.
Under this theory, people are right, people have seen through these elites. These elites have basically been in power for too long, they have too much power, they are not subject to enough scrutiny, they are not subject to enough competitive pressure, they have become corrupt in place, they are no longer providing services. The reality is probably a bit of both. It's easy for the next demagogue to come along and just start throwing rocks at the people in power and say anything.
If you’re a person who doesn’t have political power today and wants it, the easiest thing to do is to show up and start yelling that the current elite is corrupt. Maybe that’s a little bit true, and demagoguery plays a little bit of a role, or whatever it is, but… But I think a lot of it is that the elite is corrupt.
My version of this is pretty straightforward, and Burnham talks about this in the book. He talks about the "circulation of elites." He says that in order for a meritocracy to really stay healthy and real and productive and not corrupt, it needs a constant infusion of new talent. It does this through the process of elite circulation.
So what it would do is, it would identify promising young talent and invite them to join the elite. It does that for two reasons. One is for self-renewal. The other is that those are the people most likely to become anti-elites. So, it's also to prevent future competition. My experience of this started when I was 22, and it was, "Oh, hey, Marc, we really want you to come to Davos. We really want you to come to Aspen. We really want you to come to New York for this big conference. We really want you to come to the New York Times dinner party. We want you to hang out with the journalists for 25 years." That's what I did, and it was like, "Oh, this sounds great. These are the best people in the world. They're in control. They have the best degrees, they graduated from the best schools. They have all the positions of power. They like me. They think I'm great."
They kept telling me: you came from the cornfields of Wisconsin, and you've arrived, you've made it into the elite.
All I have to do is never argue with anything. All I have to do is agree with everything that’s said in the New York Times, agree with everything that’s said in Davos, vote for the candidates you’re supposed to vote for, donate to the candidates you’re supposed to donate to, and never, never, never deviate from the path. And then you’ll be part of the elite.
I have a lot of peers who have done that. Some are now among the largest Democratic donors in the world, completely integrated into the elite, and they are there having a great time and think it is all wonderful. For some people that works, and maybe it is the right thing for them.
And then some people get to a point where they look around. It's like the story of J.D. Vance. He grew up in Appalachian Ohio, with family roots in rural Kentucky. He ended up at Yale. He ended up being invited into all these inner circles.
And then he finally looks around and he just says, "Wow, these people are not at all what I thought they were. These people are selfish, they're corrupt, they're lying about everything, they're engaging in suppression, they're very authoritarian, they're looting the public treasury. Oh, my God, I've been lied to all my life. These people don't deserve the respect that they have, and maybe there should be a new elite in power." So, that's a lot of the debate that's going on right now. Yeah, I'm a case study.
Optimism and pessimism: Will the world be better?
Patrick : If we put on a pair of rosy glasses, you emphasize early-stage venture capital. You meet all these young, smart people who are about to build the future. Let's put on a pair of rosy glasses and assume that AI has the most positive impact in all the areas where we can verify results. Reasoning has become so powerful.
So what are the other bottlenecks that could hold back the technological revolution that we're hoping for? Those could be things like clinical trials in medicine, areas that progress more slowly than AI, where the bottleneck is not AI itself. We're going to be hungry for progress.
But the world of atoms, the world of surveillance, or the world of clinical trials, etc., may become the limiting factor, rather than intelligence and knowledge. Which bottlenecks are you most interested in?
Marc : The way I've always thought about technological change is that there used to be three lines on a graph, and now there's four. So one is the rate of technological change, which is a line where everything is generally getting better. And then every once in a while you see these discrete jumps, or something gets dramatically better, like what happened with AI last week.
And then you have another line on top of it, which is social change, which is basically, when is the world ready for something new? Sometimes you see this phenomenon where the new thing actually exists before the world is ready, and for some reason it's not adopted. And then five years later or fifty years later, it suddenly takes off and grows rapidly. So, there's a social layer, and then there's a financial layer on top of that, which is are the capital markets willing to fun