Subprime AI crisis: Crypto x AI needs to be rethought

AI is deeply entangled with Big Tech, which means that AI's long-term inability to turn a profit will set off a chain reaction.

By Edward Zitron

Compiled by: Block unicorn

If you are paying attention to AI, whether in the crypto industry or the traditional internet, you need to think seriously about the future of this industry. The article is quite long, so if you don't have the patience, feel free to stop here.

What I write in this article is not intended to sow doubt or "bash" anyone, but rather to provide a sober assessment of where we are today and where our current path may lead us. I believe that the AI boom (more specifically, the generative AI boom) is, as I have argued before, unsustainable and will eventually crash. I also worry that this crash could be devastating to Big Tech, severely undermine the startup ecosystem, and further erode public support for the tech industry.

The reason I'm writing this post today is that the situation feels like it is changing rapidly, with multiple AI "omens of doom" already emerging: OpenAI's hastily launched o1 model (codename: Strawberry) being called "a big, stupid magic trick"; rumors of price increases for future models at OpenAI (and elsewhere); layoffs at Scale AI; and leaders departing OpenAI. These are all signs that things are starting to fall apart.

So I thought it was necessary to explain how we got into this crisis and why we have landed in a phase of disillusionment. I want to voice my concern about the fragility of this movement, and about the obsession and lack of direction that brought us here, in the hope that some people can do better.

Additionally—and perhaps this is a point I haven’t paid enough attention to before—I want to emphasize the human costs of a bursting AI bubble. Whether Microsoft and Google (and other large generative AI backers) gradually slow down their investments, or whether OpenAI and Anthropic (and their own generative AI projects) are sustained by sapping corporate resources, I believe the end result will be the same. I worry that thousands of people will lose their jobs, and large parts of the tech industry will realize that the only thing that can grow forever is cancer.

There won’t be much lightheartedness in this post. I’m going to paint you a dark picture — not just of the big AI players, but of the entire tech industry and its employees — and tell you why I think the messy and destructive end is coming sooner than you think.

Now, put yourself in a thinking frame of mind, and let's begin.

How does generative AI survive?

Currently, OpenAI—a nominally nonprofit organization that may soon become for-profit—is raising a new round of funding at a valuation of at least $150 billion, with an expected raise of at least $6.5 billion and possibly as much as $7 billion. The round is led by Josh Kushner's Thrive Capital, with rumors that NVIDIA and Apple may also participate. As I have previously analyzed in detail, OpenAI will have to continue to raise unprecedented amounts of money to survive.

To make matters worse, according to Bloomberg, OpenAI is also trying to raise $5 billion in debt from banks in the form of "revolving credit lines," which typically come with higher interest rates.

The Information also reported that OpenAI is in talks with MGX, a $100 billion investment fund backed by the UAE, seeking to invest in AI and semiconductor companies, and may also raise funds from the Abu Dhabi Investment Authority (ADIA). This is an extremely serious warning sign because no one voluntarily seeks money from the UAE or Saudi Arabia. You only choose to ask them for help when you need a lot of money and are not sure you can get it from elsewhere.

PS: As CNBC points out, one of MGX’s founding partners, Mubadala, holds approximately $500 million in equity in Anthropic, which was acquired from the FTX bankruptcy assets. You can imagine how “happy” Amazon and Google must be about this conflict of interest!

As I discussed in late July, OpenAI needs to raise at least $3 billion, and more likely $10 billion, to stay afloat. It expects to lose $5 billion in 2024, a number that will likely increase as more complex models require more computing resources and training data. Anthropic CEO Dario Amodei predicts that future models could require up to $100 billion in training costs.

Incidentally, the “$150 billion valuation” here refers to the way OpenAI prices its shares for investors — although the word “shares” is a bit vague here, too. For example, in a normal company, investing $1.5 billion at a $150 billion valuation would typically give you “1%” of the company, but in OpenAI’s case, things are a bit more complicated.

OpenAI attempted to raise money at a $100 billion valuation earlier this year, but some investors balked at the high price, in part due to (according to The Information’s Kate Clark and Natasha Mascarenhas) growing concerns that generative AI companies are overvalued.

To complete this round of funding, OpenAI may be transitioning from a nonprofit to a for-profit entity, but the most confusing part is what investors are actually getting. Kate Clark of The Information reports that investors participating in this round were told that "they would not receive traditional equity for their investment... Instead, they were given units that promised a share of the company's profits - once the company becomes profitable, they would get a share of the profits."

It's not clear whether converting to a for-profit entity would solve this problem, since OpenAI's odd "nonprofit + for-profit arm" corporate structure means that Microsoft is entitled to 75% of OpenAI's profits as part of its 2023 investment—although in theory, a conversion to a for-profit structure could include equity. However, when you invest in OpenAI you get "profit participation units" (PPUs), not equity. As Jack Raines writes in Sherwood, "If you own OpenAI's PPUs but the company never makes a profit and you can't sell them to someone who thinks OpenAI will eventually make a profit, then your PPUs are worthless."

Over the weekend, Reuters published a report saying that any $150 billion valuation would "depend on" OpenAI being able to restructure its entire company and, in the process, lift a cap on investor profits that is currently limited to 100 times the original investment. The profit cap was established in 2019, when OpenAI said any profits above that would be "returned to nonprofits for the benefit of humanity." In recent years, the company has modified that rule to allow for a 20% increase in the profit cap each year starting in 2025.
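A straightforward reading of that rule makes the cap's trajectory easy to work out. The 100x cap and the 20% annual increase are as reported by Reuters; the projection below is simply that arithmetic compounded:

```python
# Compound the reported profit cap: 100x the original investment,
# rising 20% per year starting in 2025 (figures per the Reuters
# report cited above; this is illustrative arithmetic, not a forecast).
cap = 100.0  # maximum return multiple, set in 2019
for year in range(2025, 2030):
    cap *= 1.20
    print(year, round(cap, 1))
```

Even at nearly 250x by 2029, a multiple applied to profits that never materialize is still zero.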

Given OpenAI's existing profit-sharing agreement with Microsoft (not to mention the massive losses it is mired in), any return would be theoretical at best. At the risk of sounding flippant: even a 500% gain on nothing is still nothing.

Reuters also added that any move to a for-profit structure (thus increasing its valuation above its recent $80 billion) would force OpenAI to renegotiate with existing investors as their stakes would be diluted.

The Financial Times also noted that investors must "sign an operating agreement that states: 'Any investment in [OpenAI's for-profit subsidiary] should be considered in the spirit of a donation,' and that OpenAI 'may never make a profit.'" Such terms are genuinely crazy, and anyone who invests in OpenAI and suffers for it has only themselves to blame, because this is a patently ridiculous investment.

In reality, investors didn’t get a piece of OpenAI, or any control over it, but simply a stake in the future profits of a company that’s losing more than $5 billion a year and will likely lose even more by 2025 (if it makes it that far).

OpenAI’s models and products — we’ll discuss their usefulness later — are extremely unprofitable to operate. The Information reports that OpenAI will pay Microsoft about $4 billion in 2024 to support ChatGPT and its underlying models, and that’s on top of the discounted price Microsoft is offering it of $1.30 per GPU per hour, compared to the regular rate of $3.40 to $4 per hour for other customers. This means that without a deep partnership with Microsoft, OpenAI could be spending as much as $6 billion per year on servers — not including other expenses like employee costs ($1.5 billion per year). And, as I’ve discussed before, training costs are currently $3 billion per year and will almost certainly continue to increase.
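For a sense of scale, the discount multiple implied by those reported rates is easy to compute. The per-hour prices are the figures from The Information cited above; nothing else is assumed:

```python
# Microsoft's reported GPU pricing: OpenAI pays $1.30/GPU/hour,
# while other customers pay $3.40 to $4.00/GPU/hour.
discounted_rate = 1.30
market_low, market_high = 3.40, 4.00

print(f"{market_low / discounted_rate:.1f}x")   # ~2.6x
print(f"{market_high / discounted_rate:.1f}x")  # ~3.1x
```

That 2.6x multiple is consistent with the roughly "two and a half times" discount The Information's numbers imply later in this piece.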

Although The Information reported in July that OpenAI’s annual revenue was $3.5 billion to $4.5 billion, The New York Times reported last week that OpenAI’s annual revenue “now exceeds $2 billion,” meaning the year-end figure is likely to be toward the low end of that estimated range.

In short, OpenAI is burning money and will only burn more money in the future, and in order to continue burning money, it will have to raise funds from investors who have signed a statement that "we may never be profitable."

As I’ve written before, another problem with OpenAI is that generative AI (which extends to the GPT model and the ChatGPT product) isn’t solving the kinds of complex problems that justify its huge cost. The models are probabilistic, which leads to huge, intractable problems — in other words, they know nothing and are just generating answers (or images, or translations, or summaries) based on training data, which model developers are running out of at an alarming rate.

The phenomenon of "hallucination," where a model confidently generates information that is not real (or renders something visibly wrong in an image or video), cannot be completely solved with existing mathematical tools. Hallucinations may be reduced or mitigated, but their very existence makes it difficult to truly rely on generative AI for critical business applications.

Even if generative AI can solve technical problems, it’s unclear whether it actually brings value to the business. The Information reported last week that customers of Microsoft’s 365 suite (which includes Word, Excel, PowerPoint, and Outlook, among others, and especially many of the enterprise-focused packages, which are also closely tied to Microsoft’s consulting services) have barely adopted its AI-driven “Copilot” product. Only 0.1% to 1% of 4.4 million users (at $30 to $50 each) pay for the features. One company that is testing AI features said: “Most people don’t find it very valuable right now.” Others said that “many businesses have not yet seen breakthrough improvements in productivity and other areas” and that they are “not sure when they will.”

So how much is Microsoft charging for these unimportant features? An eye-popping $30 extra per user per month, or up to $50 per user per month for the “Sales Assistant” feature. This effectively requires customers to double their existing fees—on an annual contract, by the way!—for products that don’t seem all that useful.

One thing to add: Microsoft's problems are involved enough that they may deserve a newsletter of their own in the future.

This is the state of generative AI — the leader in productivity and business software can’t find a product that customers are willing to pay for, partly because the results are too mediocre and partly because the costs are too high to justify. If Microsoft needs to charge so much, it’s either because Satya Nadella wants to achieve $500 billion in revenue by 2030 (a goal revealed in a memo released during the public hearings on Microsoft’s acquisition of Activision Blizzard), or because the costs are too high to lower the price, or both.

However, almost everyone insists that the future of AI will astound us: the next generation of large language models is just around the corner, and they will be amazing.

Last week, we got our first real glimpse into that so-called “future.” And it was a disappointment.

A silly magic trick

OpenAI released o1 — codenamed “Strawberry” — late Thursday with the kind of excitement that comes from a visit to the dentist. In a series of tweets, Sam Altman described o1 as OpenAI’s “most powerful and most aligned model yet.” While he acknowledged that o1 “still has flaws, is still limited, and after using it for a while it’s not as impressive as it was when you first used it,” he promised that o1 would provide more accurate results on tasks that have clear correct answers, such as programming, math problems, or scientific questions.

This in itself is pretty revealing — but we’ll get into that in a bit. First, let’s talk about how it actually works. I’ll introduce some new concepts, but I promise not to go into too much detail. If you really want to read OpenAI’s explanation, you can find it in their article on their official website — Learning to Reason with LLMs.

When faced with a problem, o1 breaks it down into individual steps—hopefully, steps that will eventually lead to the correct answer, a process called the “Chain of Thought.” It’s easier to understand o1 if you think of it as two parts of the same model.

At each step, the model applies reinforcement learning: the part that produces output is "rewarded" or "punished" based on how correct its progress (its "reasoning" step) is, and it adjusts its strategy when punished. This differs from how other large language models work: rather than simply generating an answer and handing it straight over, the model looks back over what it has generated, discarding the bad steps and keeping the "good" ones on its way to a final answer.
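As a loose illustration only (OpenAI has not published o1's actual training or inference code, so every function name here is invented), the generate-score-keep loop described above might be sketched like this:

```python
import random

def propose_steps(state, n=3):
    """Stand-in for the generator: propose n candidate next reasoning steps."""
    return [f"{state} -> step{random.randint(0, 99)}" for _ in range(n)]

def score(step):
    """Stand-in for the learned reward signal: rate a candidate step 0..1."""
    return random.random()

def chain_of_thought(problem, depth=4):
    """Greedy chain-of-thought search: at each depth, keep only the
    highest-scoring candidate step and discard ('punish') the rest."""
    state = problem
    chain = []
    for _ in range(depth):
        candidates = propose_steps(state)
        best = max(candidates, key=score)  # the 'rewarded' step survives
        chain.append(best)
        state = best
    return chain

chain = chain_of_thought("2 + 2 = ?")
print(len(chain))  # 4 retained reasoning steps
```

Note the catch this sketch makes visible: the scorer is itself a model, so if it misjudges a step, the error is kept and built upon, which is exactly how a "checking" component can hallucinate too.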

While this sounds like a major breakthrough, or even another step towards the much-praised artificial general intelligence (AGI) — it isn’t, as evidenced by the fact that OpenAI chose to release o1 as a standalone product, rather than an updated version of GPT. The examples OpenAI showed — such as math and science problems — were tasks where the answers were known in advance, where the answers were either correct or incorrect, allowing the model to guide the “chain of thought” at each step.

You’ll notice that OpenAI didn’t show how the o1 model would solve complex problems where the answer is unknown, whether it’s math or something else. OpenAI itself admits that it’s received feedback that o1 is more prone to “hallucinations” than GPT-4o, and that o1 is less willing to admit that it doesn’t have an answer than previous models. This is because, although there’s a part of the model that checks its output, this “checking” part can also hallucinate (sometimes AI will make up answers that seem plausible, creating hallucinations).

According to OpenAI, o1 is also more convincing to human users because of its chain-of-thought mechanism: since o1 provides more detailed answers, people are more inclined to trust its outputs, even when those answers are completely wrong.

If you think I'm being too harsh in my criticism of OpenAI, consider how the company promotes o1. It describes the reinforcement training process as "thinking" and "reasoning," but in reality it's just guessing, and at every step it's guessing whether it's right, and the final result is often known in advance.

This is an insult to humans—real thinkers. Humans think based on a complex set of factors: from personal experience to a lifetime of knowledge to brain chemistry. While we do “guess” whether certain steps are correct when tackling complex problems, our guesses are based on concrete facts, not clumsy math like o1.

And, boy, it was expensive.

o1-preview is priced at $15 per million input tokens and $60 per million output tokens. That means o1 costs three times as much as GPT-4o for input and four times as much for output. There is also a hidden cost: data scientist Max Woolf points out that OpenAI's "reasoning tokens" (the output generated on the way to a final answer) are not visible in the API, yet are billed as output. So not only is o1 more expensive per token, the nature of the product makes users pay for far more tokens. Everything generated while "considering" the answer (to be clear, this model is not "thinking") is charged for, which makes solving complex problems such as programming extremely expensive.
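The published per-token prices make the hidden-token problem easy to quantify. In this sketch, the o1-preview rates are the ones cited above, the GPT-4o rates ($5/$15 per million tokens) and all token counts are my own illustrative assumptions:

```python
# Cost comparison for a single request: o1-preview vs GPT-4o.
# Rates are $/million tokens; the token counts are hypothetical.

def cost(input_toks, output_toks, in_rate, out_rate):
    return input_toks / 1e6 * in_rate + output_toks / 1e6 * out_rate

prompt, visible_answer = 2_000, 1_000
hidden_reasoning = 5_000  # billed as output, never shown to the user

gpt4o = cost(prompt, visible_answer, in_rate=5.00, out_rate=15.00)
o1 = cost(prompt, visible_answer + hidden_reasoning, in_rate=15.00, out_rate=60.00)

print(f"GPT-4o:     ${gpt4o:.4f}")  # $0.0250
print(f"o1-preview: ${o1:.4f}")     # $0.3900
```

Under these assumptions the single request costs roughly fifteen times more, because the invisible reasoning tokens dominate the bill.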

Now let’s talk about accuracy. On Hacker News, a Reddit-like site owned by Sam Altman’s former company Y Combinator, there were complaints that o1 “made up” nonexistent libraries and functions when handling programming tasks, and made mistakes when answering questions that couldn’t be easily answered online.

On Twitter, startup founder and former game developer Henrik Kniberg asked o1 to write a Python program that multiplies two numbers and to predict the program's output. o1 wrote the code correctly (though it could have been more concise; a single line would do), but its prediction of the output was completely wrong. AI company founder Karthik Kannan also tried a programming task, and o1 "made up" a command that does not exist in the API.

Another user, Sasha Yanshin, attempted to play chess with o1, only for o1 to "create" a chess piece out of thin air on the board and subsequently lose the game.

Because I was a little naughty, I also tried asking o1 to list the states with "A" in their names. It thought for 18 seconds and gave the names of 37 states, including Mississippi. The correct answer should be 36 states.

When I asked it to list the states with a "W" in their names, it paused for eleven seconds and included North Carolina and North Dakota.

I also asked o1 how many times the letter "R" appeared in its code name "Strawberry", and it answered two.
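These are failures that a few lines of ordinary code answer instantly, which underlines how odd it is to sell "reasoning" at a premium. A quick check of both claims:

```python
# Verify the two trivia questions o1 got wrong.
STATES = [
    "Alabama", "Alaska", "Arizona", "Arkansas", "California", "Colorado",
    "Connecticut", "Delaware", "Florida", "Georgia", "Hawaii", "Idaho",
    "Illinois", "Indiana", "Iowa", "Kansas", "Kentucky", "Louisiana",
    "Maine", "Maryland", "Massachusetts", "Michigan", "Minnesota",
    "Mississippi", "Missouri", "Montana", "Nebraska", "Nevada",
    "New Hampshire", "New Jersey", "New Mexico", "New York",
    "North Carolina", "North Dakota", "Ohio", "Oklahoma", "Oregon",
    "Pennsylvania", "Rhode Island", "South Carolina", "South Dakota",
    "Tennessee", "Texas", "Utah", "Vermont", "Virginia", "Washington",
    "West Virginia", "Wisconsin", "Wyoming",
]

with_a = [s for s in STATES if "a" in s.lower()]
print(len(with_a))                      # 36
print("Mississippi" in with_a)          # False: no "A" in Mississippi
print("strawberry".count("r"))          # 3, not 2
```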

OpenAI claims that o1 performs on par with PhD students on complex benchmarks such as physics, chemistry, and biology, but it apparently struggles in geography, basic English language tests, math, and programming.

Remarkably, this is exactly the "big, stupid magic trick" I predicted in my previous newsletter. OpenAI launched Strawberry just to prove to investors and the public that the AI revolution is still going, but what it actually shipped is a clunky, boring, and expensive model.

Worse, it’s hard to explain why anyone should care about o1. While Sam Altman may brag about its “reasoning power,” those with the money to continue funding him see 10-20 second wait times, issues with basic factual accuracy, and a lack of any exciting new features.

No one cares about a “better” answer anymore—they want something completely new, and I don’t think OpenAI knows how to achieve that. Altman’s attempt to anthropomorphize o1 by having it “think” and “reason” is clearly meant to imply that it’s some kind of step toward artificial general intelligence (AGI), but it’s hard to get even the staunchest AI advocates excited.

In fact, I think o1 shows that OpenAI is both desperate and uncreative.

Prices didn’t drop, the software didn’t get more useful, and the “next generation” models we’ve been hearing about since November turned out to be a dud. These models are also desperate for training data, to the point where nearly every large language model ingests some kind of copyrighted content. This urgency led Runway, one of the largest generative video companies, to launch a “company-wide effort” to collect thousands of YouTube videos and pirated content to train its models, while a federal lawsuit in August accused NVIDIA of doing similar things to many creators to train its “Cosmos” AI software.

The current legal strategy is largely a bet on willpower: hoping these lawsuits never get far enough to set a precedent that would make training these models copyright infringement, which is exactly what a recent interdisciplinary study sponsored by the Copyright Initiative concluded it is.

The lawsuits are moving forward, and in August a judge granted the plaintiffs further copyright infringement claims against Stability AI and DeviantArt (which used the models), as well as copyright and trademark infringement claims against Midjourney. If any of the lawsuits succeed, it would be a catastrophic blow to OpenAI and Anthropic, and even more so to Google and Meta, which use datasets of millions of artists’ works, because it would be nearly impossible for AI models to “forget” their training data, meaning they would need to be retrained from scratch, which would cost billions of dollars and greatly reduce their effectiveness at tasks they are not particularly good at.

I’m deeply concerned that the foundations of this industry are like fortresses on the beach. Large language models like ChatGPT, Claude, Gemini, and Llama are unsustainable and there seems to be no path to profitability because the computationally intensive nature of generative AI means that they cost hundreds of millions or even billions of dollars to train and require such large amounts of training data that these companies are effectively stealing data from millions of artists and writers and hoping to get away with it.

Even if we set these issues aside, generative AI and its related architectures don’t seem to be revolutionary, and the hype cycle around generative AI doesn’t really fit the meaning of the term “artificial intelligence” at all. Generative AI is only occasionally able to correctly generate some content, summarize documents, or conduct research at some indeterminate “faster” speed at its best. Microsoft’s Copilot for Microsoft 365 claims to have “thousands of skills” and provide “endless possibilities” for enterprises, but the examples it shows are nothing more than generating or summarizing emails, “starting presentations with prompts,” and querying Excel tables—functionality that may be useful, but is by no means revolutionary.

We are not in the “early stages.” Since November 2022, large tech companies have spent over $150 billion in capital expenditures and investments on infrastructure and emerging AI startups, as well as their own models. OpenAI has raised $13 billion and can hire whoever they want, and the same can be said for Anthropic.

However, this industry's own "Marshall Plan" for generative AI has produced only four or five nearly identical large language models, the world's least profitable startups, and thousands of expensive yet mediocre integrations.

Generative AI is being marketed with multiple lies:

1. It is AI.
2. It will get better.
3. It will become true AI.
4. It is unstoppable.

Leaving aside terms like "performance" (often used to describe the "accuracy" or "speed" of generated content rather than actual capability), large language models have effectively plateaued. "More powerful" often doesn't mean "can do more"; it means "more expensive," which means you have built something that costs more without adding functionality.

If the combined might of every venture capitalist and big tech giant still hasn’t found a truly meaningful use case that a lot of people are willing to pay for, then there won’t be new use cases. Large language models — yes, that’s where all these billions are going — aren’t suddenly going to become more capable just because the tech giants and OpenAI throw another $150 billion at it. No one’s trying to make these things more efficient, or at least no one’s succeeding in doing so. If someone succeeded, they’d be hyping it up.

We are dealing with a collective delusion - a dead-end technology based on copyright theft (as is the case with every generation of technology), which requires constant capital to keep running, provides services that are at best optional, disguised as some kind of automated functionality that is not actually provided, costs billions of dollars and will continue to do so. Generative AI does not run on money (or cloud computing credits), but on confidence. The problem is that confidence - like investment capital - is a finite resource.

My concern is that we may be in the midst of an AI crisis similar to the subprime mortgage crisis — with thousands of companies integrating generative AI into their businesses, but prices are far from stabilizing and even further from profitability.

Almost every startup that claims to be “AI-driven” is based on some combination of GPT or Claude. These models were developed by two companies that are deeply loss-making (Anthropic expects to lose $2.7 billion this year), and their pricing strategies are designed to attract more customers rather than make a profit. As mentioned before, OpenAI relies on Microsoft funding - both the “cloud computing credits” it receives and the preferential pricing provided by Microsoft - and its pricing is completely dependent on Microsoft’s continued support as an investor and service provider. Anthropic’s deals with Amazon and Google face similar problems.

Based on their losses, I speculate that if OpenAI or Anthropic were pricing closer to actual costs, the price of API calls could increase ten to a hundred times, although it's hard to say exactly without actual data. But we can consider the numbers reported by The Information, which predicts OpenAI's server costs at Microsoft will reach $4 billion in 2024 - which, I might add, is two and a half times cheaper than Microsoft charges other customers - plus the fact that OpenAI is still losing more than $5 billion a year.

It’s highly likely that OpenAI charges a fraction of what it costs to run its models, and can only do so if it can keep raising more venture capital than ever before and continue to get favorable pricing from Microsoft, which recently said it sees OpenAI as a competitor. While it’s impossible to be sure, it’s reasonable to assume that Anthropic is getting similar favorable pricing from Amazon Web Services and Google Cloud.

Assuming Microsoft gives OpenAI $10 billion in cloud computing credits and OpenAI spends $4 billion on server costs, plus an assumed $2 billion in training costs — costs that will surely increase with the launch of the new o1 and “Orion” models — then OpenAI may need more credits by 2025, or start paying Microsoft in actual cash.

While Microsoft, Amazon, and Google may continue to offer favorable pricing, the question is whether these deals are profitable for them. As we saw after Microsoft’s latest quarterly earnings report, investors have expressed increasing concerns about the capital expenditures (CapEx) required to build generative AI infrastructure, and many are skeptical about the potential profitability of this technology.

What we don’t really know is how profitable Generative AI is for these massive tech companies, because they factor these costs into other revenues. While we can’t be sure, I imagine if these businesses were profitable at all, they would talk about the revenue they’re getting from it, but they’re not.

The market's extreme skepticism about the generative AI boom, combined with Nvidia CEO Jensen Huang's lack of substantive answers about the return on investment in AI, caused Nvidia's market value to plummet by $279 billion in a single day: the largest single-day loss of market value by any company in US market history, equivalent to nearly five Lehman Brothers at its peak. The comparison stops there, of course. Nvidia was not at risk of failure, and even if it were, the systemic impact would not be as severe. But it is still a staggering sum, and it shows AI's distorting power over the market.

Microsoft, Amazon, and Google all took a beating in early August for their massive AI-related capital expenditures, and they will face more pressure if they fail to show significant revenue growth next quarter from their $150 billion (or more) in new data centers and NVIDIA GPUs.

It's important to remember that, outside of AI, big tech has no other growth story to sell. When companies like Microsoft and Amazon began to show signs of slowing growth, they rushed to show the market that they were still competitive. Google, a monopoly threatened on multiple fronts that relies almost entirely on search and advertising, also needed something new and eye-catching to hold investors' attention. These products, however, have not delivered enough utility, and much of the revenue appears to come from companies that "tried" AI and found it wasn't worth it.

Currently, there are two possibilities:

1. Big tech companies realize they are in deep trouble and are choosing to reduce AI-related capital spending out of fear of Wall Street disapproval.

2. Desperate for new growth, big tech companies cut costs elsewhere to keep this loss-making venture running, laying off employees and diverting funds from other businesses to feed generative AI's "death race."

It’s not clear which scenario will happen. If big tech companies accept that generative AI is not a future reality, they won’t really have anything else to show Wall Street but could adopt a strategy similar to Meta’s “year of efficiency,” reducing capital expenditures (and laying off employees) while promising to “lower investment” to a certain degree. This is the most likely path for Amazon and Google, because while they’re eager to please Wall Street, at least for now they still have their profitable monopolies to fall back on.

However, actual revenue growth from AI needs to be seen in the coming quarters, and it needs to be substantial, rather than vague statements about AI being a “mature market” or “annualized growth rate.” If capex increases follow, then this actual contribution will need to be significantly higher.

I don't think that growth is going to happen. Whether it's in Q3, Q4, or Q1 of 2025, Wall Street will start punishing big tech for its AI gluttony, and that punishment will be much harsher than what Nvidia faces, since Nvidia is the only company that can actually show AI increasing its revenue, Huang's empty words and useless slogans notwithstanding.

I'm somewhat concerned that the second scenario is more likely: these companies are so convinced that "AI is the future," and their culture is so divorced from building software that solves real problems, that they would rather burn the whole company down than let go. I'm deeply concerned that mass layoffs will be used to fund this movement, and nothing in the past few years makes me believe they will make the right choice and walk away from AI.

Big tech has been thoroughly poisoned by management consultants — Amazon, Microsoft and Google are all run by MBAs — and has surrounded itself with similar monsters, like Google’s Prabhakar Raghavan, who drove out the people who actually built Google Search so he could take control.

These people don't confront real human problems; they have built a culture focused on imaginary problems that software can fix. Generative AI may well seem a little magical to people who spend their entire lives in meetings or reading emails. Satya Nadella's (Microsoft's CEO) formula for success largely amounts to "let the technical people figure it out." Sundar Pichai could have ended the entire generative AI craze simply by laughing off Microsoft's investment in OpenAI, but he didn't, because these people have no actual ideas, and these companies are not run by people who have lived these problems, let alone people who know how to solve them.

They are desperate, too. Nothing this serious has happened to them before, with the possible exception of Meta burning billions on the Metaverse. But this situation is far more serious and far uglier, because they have invested so much money and woven AI so tightly into their companies that ripping it out would be both embarrassing and damaging to the stock price, effectively a tacit admission that it was all a waste.

All of this could have been stopped earlier if the media actually held these companies accountable. This narrative is sold with the same playbook as previous hype cycles, with the media assuming these companies will "figure it out," even though it's clear they won't. Think I'm being pessimistic? Then tell me: what's next for generative AI? What will it do next? If your answer is that they will "figure it out," or that they have "amazing stuff behind the scenes," then you are an unwitting participant in a marketing operation (sit with that sentence for a moment).

Author's aside: We really need to stop being fooled by this stuff. When Mark Zuckerberg claimed we were about to enter the Metaverse, a ton of media outlets, including The New York Times, The Verge, CBS News, and CNN, joined in promoting an obviously flawed concept that looked terrible and sold itself on outright lies about the future. It was clearly nothing more than a crappy VR world, yet the Wall Street Journal was still calling it "a vision for the future of the internet" six months after the hype cycle had clearly expired. The same thing happened with crypto, Web3, and NFTs: The Verge, The New York Times, CNN, and CBS News all once again participated in promoting technology that was clearly useless. I should single out The Verge, and specifically Casey Newton, who, despite his good reputation, has now boosted three consecutive hype cycles, claiming in July that "having one of the most powerful large language models could provide companies with the basis for all kinds of money-making products," when in reality the technology only loses money and has yet to produce any genuinely useful, lasting product.

I believe that, at the very least, Microsoft will start cutting costs in other areas of the business to help sustain the AI boom. In emails shared with me by a source earlier this year, Microsoft's senior leadership team requested (but ultimately shelved) reductions in power requirements across multiple areas of the company to free up power for GPUs, including moving compute for other services to other countries to free up capacity for AI.

In the Microsoft section of the anonymous social network Blind (which requires company email verification), one Microsoft employee complained in mid-December 2023 that "AI is taking their money": "the cost of AI is too high, it eats up salary increases, and the situation will not get better." Another employee shared their anxiety in mid-July, saying they clearly sensed that Microsoft had developed an addiction to "funding Nvidia's stock price with operating cash flow freed up by cutting costs," and that this practice was "deeply hurting Microsoft's culture."

Another employee added that they believe "Copilot will destroy Microsoft in FY 2025" and that "Copilot focus will drop significantly in FY 2025", revealing that they know of "large Copilot deals in their country that have less than 20% usage after nearly a year of PoCs, layoffs, and adjustments", and said that "the company took too many risks" and Microsoft's "huge AI investment will not pay off."

Although Blind is anonymous, it's hard to ignore the sheer number of posts describing cultural problems at Microsoft's Redmond campus, particularly a senior leadership that is out of touch with actual work and will only fund projects with an AI label attached. Many posts express frustration with Microsoft CEO Satya Nadella's empty rhetoric, and complain about the lack of bonuses and promotion opportunities in an organization focused on chasing an AI boom whose payoff may never materialize.

At the very least, there is a deep malaise inside the company, with many posts along the lines of "I don't like working here," and people asking why Microsoft is investing so much in AI while feeling they can only accept it because Satya Nadella doesn't seem to care.

The Information article mentioned a worrying problem hidden in the actual adoption of Microsoft's AI feature, Office Copilot: Microsoft has reserved enough server capacity in its data centers for 365 Copilot to handle millions of daily users, but it is unclear how much of that capacity is actually being used.

By some estimates, Microsoft's current Office Copilot user base may be somewhere between 400,000 and 4 million, meaning Microsoft may have built out a great deal of infrastructure that sits largely idle.
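As a back-of-envelope illustration of that gap: the 400,000–4 million user range is the article's estimate, but the reserved-capacity figure below is a hypothetical stand-in for "millions of daily users," since Microsoft has not disclosed the actual number.

```python
def utilization(actual_users: int, reserved_users: int) -> float:
    """Fraction of provisioned capacity actually in use."""
    return actual_users / reserved_users

# Assumed provisioned capacity: 10M daily users (hypothetical stand-in
# for "millions of daily users"). The 400k-4M range is the article's estimate.
RESERVED = 10_000_000
for actual in (400_000, 4_000_000):
    print(f"{actual:>9,} users -> {utilization(actual, RESERVED):.1%} of reserved capacity")
```

Even at the top of the estimated range, most of the assumed capacity would be sitting unused; at the bottom of the range, nearly all of it would be.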

While one could argue that Microsoft is positioning itself for expected future growth in this product category, it's worth considering another possibility: what if that growth never comes? What if, as crazy as it sounds, Microsoft, Google, and Amazon are building these massive data centers to capture demand that may never materialize? Back in March of this year, I made the point that I couldn't find a single company achieving significant revenue growth from generative AI. Almost six months later, the question remains open. The current playbook for large companies seems to be to bolt AI capabilities onto existing products in the hope of lifting sales, but this strategy has shown no sign of working anywhere. Microsoft is a case in point: the "AI upgrades" it has launched don't seem to deliver actual business value to enterprises.

So this raises a bigger question: Are these AI investments sustainable? Have the tech giants overestimated the demand for AI tools?

While some companies may be driving some of the spending on Microsoft Azure, Amazon AWS, and Google Cloud as they “integrate AI,” I’d assume much of this demand is driven by investor sentiment. These companies are “investing in AI” more to satisfy the market than based on cost/benefit analysis or actual utility.

However, these companies have spent a lot of time and money embedding generative AI capabilities into their products, and I think they may face the following scenarios:

1. These companies develop and launch AI features, only to find that customers are unwilling to pay for them, as Microsoft found with 365 Copilot. If they can't find a way to get customers to pay now, at the height of the AI hype, it will only get worse once the hype fades and bosses stop telling employees to "get on the AI bandwagon."

2. These companies develop and launch AI features but cannot find a way to charge extra for them, which means they can only fold AI into existing products without any corresponding margin improvement. Over time, AI features may become a "parasite" that erodes the company's revenue.

Jim Covello of Goldman Sachs also mentioned in his report on generative AI that if the benefit of AI is just improved efficiency (such as being able to analyze documents faster), then competitors can also do this. Almost all generative AI integrations are similar: some form of collaborative assistant to answer customer or internal questions (such as Salesforce, Microsoft, Box), content creation (Box, IBM), code generation (Cognizant, Github Copilot), and the upcoming "intelligent agents", which are actually "customizable chatbots that can connect to other parts of the website."

This points to one of generative AI's biggest challenges: while it is "powerful" in a sense, that power lies in generating content from existing data rather than in genuine "intelligence." It is also why so many companies' AI landing pages are full of empty words: their real pitch amounts to "uh... figure it out yourself!"

What I’m worried about is a knock-on effect. I believe that many companies are “trialing” AI right now, and once those trials are over (Gartner predicts that by the end of 2025, 30% of generative AI projects will be abandoned after the proof-of-concept phase), they will likely stop paying for those additional features or stop integrating generative AI into their company’s products.

If this happens, the already depressed revenues of the hyperscalers that provide cloud compute for generative AI applications, and of large language model vendors like OpenAI and Anthropic, will shrink further. This will likely put further pressure on prices at these companies, as their already loss-making margins deteriorate even more. At that point, OpenAI and Anthropic will almost certainly have to raise prices, if they haven't already.

While the big tech companies can continue to finance the boom (after all, they almost entirely fueled it), that doesn't help the smaller startups that have grown accustomed to discounted prices, because they won't be able to keep operating. And while cheaper alternatives exist, such as independent vendors running Meta's LLaMA models, it's hard to believe they won't face the same profitability problems as the hyperscalers.

Note also that the hyperscalers are terrified of angering Wall Street. While they could in theory (as I fear) prop up margins through layoffs and other cost-cutting measures, those are short-term fixes that only work if they can eventually shake some real money out of this barren generative AI tree.

Regardless, it's time to accept that the money isn't there. We need to stop and acknowledge that we are in the tech industry's third consecutive era of delusion. Unlike cryptocurrencies and the Metaverse, however, this time everyone is in on the money-burning binge, pursuing an unsustainable, unreliable, unprofitable, and environmentally harmful project packaged as "artificial intelligence" and promoted as something that would "automate everything," without ever having a real path to that goal.

Why does this keep happening? Why have we gone from cryptocurrencies to the metaverse and now generative AI, technologies that don’t really seem to be designed for ordinary people?

This is actually the natural evolution of a tech industry that is completely focused on increasing the value it extracts from each customer, rather than delivering more value to customers. Or, in other words, they don’t even really understand who their customers are and what they need.

Today, the products marketed to you will almost certainly try to lock you into an ecosystem controlled by Microsoft, Apple, Amazon, or Google, at least as a consumer, making it increasingly expensive to leave. Even cryptocurrency, ostensibly a "decentralized" technology, quickly abandoned its laissez-faire philosophy in favor of aggregating users on a handful of large platforms (Coinbase, OpenSea, Blur, Uniswap), often backed by the same venture capital firms (e.g., Andreessen Horowitz). Rather than becoming the standard-bearer for a new, fully independent online economy, crypto has been able to scale only through the connections and money that funded previous waves of the internet.

As for the Metaverse, it was a hoax, but it was also Mark Zuckerberg's attempt to control the next generation of the internet, with Horizon as its main platform. As for generative AI, we'll get to that shortly.

All of this is in service of further monetization: increasing the average value extracted from each customer, whether by getting them to spend more time on the platform so they can be shown more ads, upselling "semi-useful" new features, or creating new monopolies or oligopolies in which only tech giants with huge cash reserves can participate, all while providing customers very little actual value or utility.

Generative AI is exciting (at least to a certain kind of person) because the tech giants see it as the next big money-maker, a way to bolt a fee onto everything from consumer tech to enterprise services. Most generative AI compute flows through OpenAI or Anthropic and back to Microsoft, Amazon, or Google, generating cloud revenue that keeps their growth stories alive. The biggest innovation here isn't what generative AI can do, but the creation of an ecosystem hopelessly dependent on a handful of hyperscalers.

Generative AI may not be terribly practical, but it is very easy to integrate into all kinds of products, allowing companies to charge for these “new features.” Whether it is a consumer application or a service for an enterprise software company, such products can earn millions or even billions of dollars in revenue by upselling them to as many customers as possible.

Sam Altman is very smart, and he realized that the tech industry needed a "new thing": a new technology that everyone could take a piece of and sell. While he may not deeply understand the technology itself, he understands the economic system's hunger for growth, and he productized Transformer-based generative AI as a "magic tool" that could be plugged into most products to add some novel capability.

However, the rush to integrate generative AI everywhere reveals a huge disconnect between these companies and actual consumer needs or effectively operating businesses. For the past 20 years, simply “making new stuff” seemed to work — launching new features and having sales teams hard sell them was enough to sustain growth. This trapped tech industry leaders in a harmful and unprofitable business model.

The executives running these companies, almost all MBAs and management consultants who have never built a product or a tech company from scratch, either don't understand or don't care that there is no path to profitability for generative AI. They probably assume it will naturally become profitable the way Amazon Web Services did (AWS took nine years to turn a profit), even though the two are very different things. Things "just worked out" in the past, so why not now?

Of course, there are many reasons why 2024 will be very different from 2014, too numerous to cover fully even in an 8,000-word article: rising interest rates have dramatically changed the venture capital market, reducing VCs' war chests and shrinking fund sizes; attitudes toward tech have never been more negative; and a host of other factors besides.

What’s really worrying is that many of these companies don’t seem to have any new products other than AI. What else do they have? What else can they use to continue to grow? What other options do they have?

No, they have nothing. And that’s the problem, because if AI fails, the impact will inevitably be felt by other companies across the tech industry.

Every major tech player — both in the consumer and enterprise space — sells some kind of AI product that integrates large language models or their own models, often running in the cloud on Big Tech’s systems. To some extent, these companies are dependent on Big Tech’s willingness to subsidize the entire industry.

I speculate that a subprime AI crisis is brewing, in which nearly the entire tech industry is involved in a technology that is sold at dirt-cheap prices, is highly concentrated, and is subsidized by Big Tech. At some point, the alarming and pernicious rate at which generative AI burns money will catch up with them, leading to price increases or companies releasing new products and features that charge so much — like Salesforce’s $2 per conversation for its “Agentforce” product — that even enterprise customers with ample budgets can’t justify the expense.
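To make the Agentforce pricing concrete: the $2-per-conversation figure is from the article, but the conversation volumes below are hypothetical, chosen purely to illustrate how quickly flat per-conversation pricing compounds at enterprise scale.

```python
def monthly_agent_cost(conversations_per_month: int,
                       price_per_conversation: float = 2.0) -> float:
    """Monthly bill at a flat per-conversation price (Salesforce's quoted $2)."""
    return conversations_per_month * price_per_conversation

# Hypothetical support-desk volumes, for illustration only.
for volume in (10_000, 100_000, 1_000_000):
    print(f"{volume:>9,} conversations/month -> ${monthly_agent_cost(volume):>12,.0f}")
```

Even a mid-sized support operation would be looking at six or seven figures a year for a single AI feature, which is the kind of line item that budget reviews tend not to survive.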

What happens when the entire tech industry becomes dependent on software that only loses money and has little real value of its own? What happens when the pressure becomes too great, these AI products can no longer be sustained, and these companies have nothing else to sell?

I really don't know, but the tech industry is headed for a terrible reckoning, the product of a lack of creativity fostered by an economic environment that rewards growth over innovation, monopoly over loyalty, and management over actual creation.
