OpenAI has just announced that advertisements are coming to ChatGPT, and even the newly launched $8 subscription tier can't escape them.


There's no such thing as a free lunch. If there were, you'd be the lunch. This principle applies equally to the most expensive AI products in Silicon Valley.

Just now, OpenAI officially announced a landmark decision: advertising is coming to the free version of ChatGPT and to the entry-level subscription tier, "ChatGPT Go". The test will first roll out to adult users in the United States in the coming weeks.

The good news is that the ads won't barge into the middle of your conversation. They will appear quietly, with a clear label, at the bottom of an answer, and only when the system detects relevant sponsored products.

If you are a Plus, Pro, or Enterprise subscriber, congratulations: your interface stays clean. But if you plan to try the newly launched ChatGPT Go plan at just $8 per month, you are still part of the audience the ads are aimed at.

ChatGPT Go is now available in every region where ChatGPT is supported. Its core benefits include access to the GPT-5.2 Instant model, ten times the message, file upload, and image generation allowance of the free tier, and longer memory and context windows.

With this, ChatGPT officially has a clearly delineated three-tier consumer subscription lineup:

Go ($8/month): entry level, focused on cost-effectiveness and everyday tasks.

Plus ($20/month): advanced tier, supports GPT-5.2 Thinking and Codex, suited to deep reasoning.

Pro ($200/month): flagship tier, supports GPT-5.2 Pro, with the highest privileges and performance.

In addition, to address the worry that "chatting will turn into a sales pitch," OpenAI emphasized that advertising will absolutely not interfere with the objectivity of answers: the model will keep generating whatever is most helpful to you, rather than favoring whoever pays the most.

More importantly, there are privacy red lines. Your conversation records will not be packaged and sold to advertisers, you can turn off ad personalization at any time, and users under 18 will not be shown ads at all.

ChatGPT Advertising Principles Screenshot | Official Blog

OpenAI's reasoning is quite frank: introducing advertising lets more people use these tools for free or at a low cost. Computing power is expensive, after all, and diversified revenue is essential to realizing the vision of "AI for all".

Interestingly, OpenAI doesn't intend to simply post a few advertising images; they've proposed a new concept: "conversational advertising."

Imagine asking for a Mexican dinner recipe: not only do sponsored placements from food brands appear at the bottom of the answer, you can even ask the ad questions directly and get more information through the interaction.

Despite OpenAI's efforts to package this "conversational advertising" as a win-win innovative experience and to prove that advertising can also be valuable content, history has repeatedly shown that when a platform acts as both referee and player, user interests are often the first to be sacrificed.

Adding ads to AI is a shortcut to recoup losses, but also a bankruptcy of imagination.

Let's first acknowledge a reality. In the era of large-scale models that are constantly burning through cash, "adding ads to AI" is indeed the most stable and fastest way to recoup losses.

The internet has already paved the way for them. The earliest portal websites sold advertising space, then search engines sold keywords, and social networks and short video platforms sold information feeds.

The tactics haven't changed much. First, gather people together, then package that attention and sell it to advertisers. The forms of advertising are becoming more and more covert, while the system is becoming more and more sophisticated.

The situation AI is facing now is very similar to that of the internet back then.

User numbers are skyrocketing, but revenue isn't keeping pace. Subscriptions are still in the slow business of educating the market, and enterprise deals have long sales cycles. Between the ideal and the reality lies an ever-widening hole of losses.

Thus advertising became a lifeline on the AI playing field: whoever felt the most pressure would be the first to reach for it. But whoever blatantly inserted ads into conversations first also risked being the first to drive its most sensitive and discerning users to rival models.

It is, in effect, a prisoner's dilemma.

As long as one company insists on not adding ads, other players will hesitate to add ads, fearing they'll be the first to be abandoned. But once multiple companies take that step simultaneously, these concerns are collectively eliminated, and no one needs to pretend to be innocent anymore.

In fact, Google had already taken the lead this week, testing a so-called "personalized offer" ad in the Gemini chat interface.

Its core logic: when a user asks Gemini "which suitcase has the best price-performance ratio," the system judges that the user has strong purchase intent and automatically embeds a limited-time discount code from Samsonite below the answer.

The ads are no longer triggered by keywords but by the AI judging, in real time, whether this person is about to place an order. Google, naturally, has given the idea a flattering name: a new model that "goes beyond traditional search advertising."
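To make that logic concrete, here is a deliberately toy Python sketch of intent-triggered ad insertion, offered as an illustration rather than anything Google or OpenAI has published; the intent heuristic, the threshold, and the offer text are all invented.

```python
# A simplified, hypothetical sketch of intent-triggered ad insertion (not any
# vendor's real implementation): score how strongly a query signals purchase
# intent, and only append a clearly labeled sponsored offer above a threshold.

PURCHASE_SIGNALS = ("best price", "cheapest", "price-performance", "worth buying")

def purchase_intent(query: str) -> float:
    """Toy stand-in for a model that scores purchase intent between 0 and 1."""
    q = query.lower()
    hits = sum(signal in q for signal in PURCHASE_SIGNALS)
    return min(1.0, hits / 2)

def render_answer(answer: str, query: str, sponsored_offer: str) -> str:
    """Append a labeled sponsored block only when purchase intent is high."""
    if purchase_intent(query) >= 0.5:
        return f"{answer}\n\n[Sponsored] {sponsored_offer}"
    return answer

print(render_answer(
    answer="Mid-range hard-shell cases tend to hold up best, according to reviews.",
    query="Which suitcase has the best price-performance ratio?",
    sponsored_offer="A limited-time discount code from a hypothetical luggage brand.",
))
```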

Of course, judged purely on business logic, OpenAI doesn't deserve too much blame. Faced with exorbitant GPU electricity bills, any founding intention not driven by profit starts to look like a luxury. The numbers don't lie: OpenAI's annualized revenue is roughly $12 billion, which sounds impressive, but its cash burn is likely about three times the publicly reported figures.

Pre-training is expensive, and every inference after deployment costs money too. Inference costs are indeed falling, but the Jevons paradox makes one point clear: when compute gets cheaper, users immediately spend the savings on bigger, more complex models. Companies end up buying ever more GPUs, and the electricity bill snowballs.

In short, unit costs have decreased, but the overall cost hasn't been reduced at all.

According to OpenAI's figures as of July this year, ChatGPT has roughly 35 million paying users, about 5% of its weekly active users. Meanwhile, subscriptions still account for the majority of revenue at most AI companies, OpenAI included.

Against this backdrop, all AI companies face a simple and direct question: where does the money come from?

The most direct answer is to insert ads into AI.

Advertising became the internet's original sin because the early internet lacked other viable business options. Likewise, in the AI era, absent genuinely new models, advertising will remain the only way to cover the costs of most users.

Of course, simply copying the previous era's ways of making money is path dependence and a failure of imagination. The traditional internet already proved the point once: when all you have is a hammer, every problem looks like a nail; when all you know is advertising, every product looks like ad inventory.

ChatGPT advertising also faces a practical challenge: as of June this year, only 2.1% of queries involved shopping. To change that, OpenAI has integrated Stripe for payments, Shopify for e-commerce, Zillow for real estate, and DoorDash for food delivery, cultivating shopping habits and accumulating the data needed for ad placement.

The revenue model determines the shape of the product, and user experience is usually the variable that gets sacrificed. AI was supposed to be the chance to escape the old era's quagmire; instead, we find ourselves stuck in the same one.

The AI that understands you best is starting to sell you things.

The traditional "internet plus advertising" model is, in essence, nothing more than selling attention through prominent placements, with early search-engine advertising as the textbook example.

The page looked like search results, but the first few entries were actually paid rankings. Looking back now at the incidents and controversies of that era still sends a chill down the spine.

Inserting ads into AI is more dangerous still.

Experienced users are naturally wary of online ads: they know to compare several search results and that the top ones are probably advertisements. The trap of an AI with human-like empathy is that we forget there may be a sales team standing on the other side of the screen. You treat the AI as a teacher; it treats you as a lead waiting to be converted.

History offers a precedent. Su Dongpo once wrote a poem for a stall selling fried dough twists: "Slender hands roll them to an even jade sheen; clear oil fries them to a deep, tender gold. Heavy with spring sleep in the night, she has pressed flat the golden bangle on her arm." From then on, customers flocked to the stall. People weren't buying the fried dough twists themselves; they were buying the trust attached to Su Dongpo's name.

In many scenarios, today's AI is like Su Dongpo, whom ordinary users trust by default.

What's particularly dangerous is that now it's not just simple product placement; some people are using GEO for "content poisoning."

GEO stands for "Generative Engine Optimization": the craft of getting a webpage or article cited more often by AI answer engines such as ChatGPT, Gemini, and Perplexity.

Imagine this scenario: a manufacturer or other interested party floods the web in advance with optimized articles about a particular product or service, written in an authoritative, comprehensive tone and dressed up with structured tags, SEO metadata, keyword hints, and so on.

Their goal isn't to help but to make sure that when users ask the AI related questions, their content is what gets surfaced, and the AI then weaves it into the answer.
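For illustration only, the snippet below sketches the kind of schema.org structured data such a GEO-optimized page might embed so that its copy gets lifted verbatim into AI answers; the product, the question, and the "reviewers" claim are entirely fictional.

```python
import json

# Illustration only: the sort of structured data a GEO-optimized page might
# carry, using standard schema.org FAQ markup. All names and claims are invented.
structured_data = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [{
        "@type": "Question",
        "name": "Which suitcase offers the best value for money?",
        "acceptedAnswer": {
            "@type": "Answer",
            # Authoritative-sounding copy written to be quoted by answer engines.
            "text": ("Reviewers consistently rank the (fictional) AcmeCase Pro "
                     "as this year's best-value suitcase."),
        },
    }],
}

# Rendered as a JSON-LD block inside the page's <head>, alongside keyword-heavy
# headings and metadata.
print('<script type="application/ld+json">')
print(json.dumps(structured_data, indent=2))
print("</script>")
```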

To the user, this reads as authoritative advice plus neutral information. In reality, it may be a commercial pitch, or outright poisoning, dressed up as expert counsel.

This is more insidious than traditional ads or advertorials because it hides at the heart of the answer itself: not in an obvious ad slot, but in the advice and conclusions users trust most. We would have to pause every few paragraphs to check whether the AI's suggestion is really in our interest or is quietly promoting someone's product.

Hiding an ad inside a single sentence is dangerous enough. More importantly, AI is planning its next step: to move upstream of every app and take over the decision of who gets to advertise to you.

In the traditional internet era, every super app wanted to be the gateway. Each carved out its own territory and walled it off. Users opened the app directly, and the app took it from there, stuffing content, services, and advertisements onto the screen.

Super apps have been building walled gardens for ten years, but the currently popular AI Agents want to demolish them overnight.

In theory, an agent can operate across applications: open an app to search, compare prices across platforms, and automatically fill in forms to place an order. You no longer need to click around yourself, or even remember which app is the entry point for what.

Which points to a future in which true AI agents become most people's default gateway to the internet, while the applications that used to live off advertising either pay the AI "protection money" or retreat into the background as interfaces with no brand presence of their own.

News websites are already first in line. According to the media company Raptive, Google's newly launched AI Overviews feature could ultimately cost many publishers 25% of their traffic. The impact hasn't peaked yet, and it will intensify as AI Overviews roll out more widely.

With the rise of aggregation platforms, news sites gradually went from facing readers directly to being mere content suppliers, and consumer applications now face the same fate. Yet even as every app queues up to serve the AI butler, the one that most needs close watching is the butler itself.

In the past, advertising set out to persuade individual people: do whatever it takes to capture their attention and slot content into their feeds. In a world run by AI agents, advertisers must first persuade the agents that make decisions on people's behalf.

This also means that most advertising teams will have to think hard about a new question: when users no longer browse apps themselves but let agents browse for them, who exactly am I targeting, and how?

Yes, if an AI agent is also an advertising platform, it holds both kinds of power at once.

It decides where you go and what you see. It can choose hotels, flights, insurance, and doctors for you, and it can also attach its own commission and advertising logic to each choice.

In particular, an AI assistant can deeply understand a user's current needs and intent and insert highly relevant ad recommendations, going a step beyond web pages that target ads by keyword and approaching the persuasive effect of a real human consultant.

Furthermore, the interactions between AI assistants and users accumulate a vast amount of personal data: preferences, habits, location, social relationships. If that data is used for ad targeting, the ads will be more effective than anything that came before.
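As a rough illustration of why that combination is so potent, the sketch below (entirely hypothetical, tied to no real system) scores candidate ads against both the current question and a remembered user profile; every brand and data point in it is made up.

```python
# Hypothetical sketch: accumulated profile data plus current intent lets an
# agent rank candidate ads far more precisely than keyword matching alone.
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    city: str
    interests: set[str] = field(default_factory=set)

def score_ad(ad: dict, query: str, profile: UserProfile) -> float:
    """Combine the current question with long-term profile signals."""
    score = 0.0
    if ad["topic"] in query.lower():
        score += 1.0                     # matches what the user is asking right now
    if ad["topic"] in profile.interests:
        score += 0.5                     # matches remembered preferences
    if ad.get("city") == profile.city:
        score += 0.5                     # matches location history
    return score

ads = [
    {"brand": "HypotheticalLuggage", "topic": "suitcase", "city": "Austin"},
    {"brand": "HypotheticalHotel", "topic": "hotel", "city": "Austin"},
]
profile = UserProfile(city="Austin", interests={"travel", "suitcase"})
query = "Which suitcase should I buy for a two-week trip?"

best = max(ads, key=lambda ad: score_ad(ad, query, profile))
print("Selected sponsor:", best["brand"])
```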

The killer app of 2026 might not be a chatbot, but an ad blocker. Every new technology claims to be different, but ultimately they all end up in the realm of advertising.

Therefore, we don't need to panic about the upcoming "sales-driven" ChatGPT, but we must not let our guard down either.

Since advertising is already a done deal, all we can do is quickly "demystify" AI. Don't treat it as an omniscient and omnipotent god, but rather as a tool that tries to please you and occasionally slips in its own agenda.

In an era where AI can handle all the tedious processes for us, the one thing we cannot outsource to it is still our own judgment. Use the tools well, but don't become part of the tools yourself.

This article is from the WeChat official account "APPSO" , authored by Zhang Wuji, and published with authorization from 36Kr.
