ChatGPT will add ads in 2026; even the AI that knows you best is starting to betray you.


2026 may well be the year when humans will need to install "ad blockers" on AI for the first time.

Just this morning, The Information reported that OpenAI employees are figuring out how to prioritize "sponsored content" when users ask relevant questions on ChatGPT. For example, if you ask for mascara recommendations, you might see advertisements from manufacturers.

In recent weeks, OpenAI employees have also created prototypes of various ad display methods, including one that may appear in the sidebar of ChatGPT's interface.

From 2023 to 2024, the prevailing vision in Silicon Valley was an elegant one. Many were convinced that large models could follow a SaaS playbook: users would pay $20 a month, as with Netflix or Spotify, and simply enjoy the AI services.

But this year, that illusion has basically collapsed.

AGI hasn't arrived yet, but the bills have. It's foreseeable that next year more AI products will tentatively begin adding ads. Some will be explicit, some will disguise themselves as recommendations and partnerships, and some will simply be buried in the interaction.

This is somewhat darkly humorous: while we were still looking up at AGI's grand vision of ruling the world, we never expected that the first survival skill it learned would be to "make a living" through advertising.

Leaked ChatGPT ad-serving code | Image source: Tibor

Adding ads to AI is a shortcut to recoup losses, but also a bankruptcy of imagination.

Let's first acknowledge a reality. In the era of large-scale models that are constantly burning through cash, "adding ads to AI" is indeed the most stable and fastest way to recoup losses.

The internet has already paved the way for them. The earliest portal websites sold advertising space, then search engines sold keywords, and social networks and short video platforms sold information feeds.

The tactics haven't changed much. First, gather people together, then package that attention and sell it to advertisers. The forms of advertising are becoming more and more covert, while the system is becoming more and more sophisticated.

The situation AI is facing now is very similar to that of the internet back then.

User numbers are skyrocketing, but revenue isn't keeping pace. Subscriptions are still slowly educating the market, and enterprise-paid projects have long cycles. Between ideals and reality lies an ever-growing hole of losses.

Thus, advertising became a lifeline on the AI playing field. Whoever felt the most pressure would have to reach for it first; yet whoever blatantly inserted ads into conversations first was also the most likely to drive its most sensitive, discerning users to rivals' models.

It is a classic prisoner's dilemma.

As long as one company holds out against ads, the other players hesitate to add them, fearing they'll be the first to be abandoned. But once several companies take the step at once, those concerns evaporate collectively, and no one needs to pretend to be innocent anymore.

This becomes clearer when viewed through the lens of Gemini. Recently, several media outlets, citing advertising agency buyers, reported that Google Gemini's operators have informed some advertisers of plans to integrate ads into Gemini AI in 2026.

From the advertiser's perspective, this is a highly attractive new channel: the endgame of large models is not AGI but CPM (cost per thousand impressions), and a conversational environment plus a massive user base adds up to a highly promising monetization space.

However, Dan Taylor, Google's head of global advertising, quickly denied the claim on social media, stating that "the Gemini app currently has no ads and there are no plans to change that." This indicates that Google is being cautious, at least in public discourse.

If we zoom in on OpenAI CEO Sam Altman, we can trace a very typical pendulum swing.

In the first year or two after ChatGPT became popular, he repeatedly emphasized that he disliked advertising, especially the combination of "advertising + AI," and publicly called it "particularly unsettling."

He preferred a clean subscription model: users pay directly, exchanging money for answers uninfluenced by advertisers. At most, he could accept a kind of "sales commission" model: users do their own research and place their own orders, and the platform takes a small cut of the transaction, rather than taking money to reorder the answers.

By 2025, his tone had noticeably softened.

He began to admit that he "actually quite likes those targeted ads on Instagram," finding it cool that they help him discover good things. He then changed his tune: ads aren't necessarily bad; the key is whether the format is useful enough and not too annoying.

According to The Information, OpenAI is seeking to create a "new type of digital advertising" rather than simply copying existing social media advertising formats.

ChatGPT can collect a large amount of user interest-related information through detailed conversations, and OpenAI has considered whether it is possible to display advertisements based on these chat logs. One approach is to prioritize the display of "sponsored information" when users ask questions using ChatGPT, such as setting it to insert advertising content first when generating answers.

According to sources familiar with the matter, some recent ad prototypes show the ads designed to appear in the sidebar of the main ChatGPT answer window. Additionally, staff have discussed adding a disclaimer such as "This answer contains sponsored content."

According to an insider, OpenAI's goal is to make advertising as "unobtrusive" as possible while maintaining user trust. For example, ads only appear after a certain stage of the conversation: when a user asks about a trip to Barcelona, ChatGPT will recommend the Sagrada Familia (non-sponsored), but clicking the link may bring up a sponsored service offering a paid guided tour.

Meanwhile, Altman has gone to great lengths to commercialize OpenAI, installing senior executives responsible for applications and monetization and publicly recruiting an "advertising manager" to explore turning ChatGPT into an advertising platform. CFO Sarah Friar, for example, is a veteran shaped by years inside advertising-driven businesses.

Even as Altman sounded the alarm, revenue remained the top priority, and the hiring of former Slack CEO Denise Dresser as Chief Revenue Officer elevated "how to make money" to the company's highest priority.

He talks about idealism, but in reality, he's all about business.

Of course, from a purely business perspective, there's nothing wrong with any of this. The numbers don't lie: OpenAI's annualized revenue is around $12 billion, which sounds impressive, but its cash burn likely runs at roughly three times what public figures suggest.

Pre-training is expensive, and every inference after deployment costs money too. Inference costs are indeed falling, but the Jevons paradox holds: when compute gets cheaper, users immediately spend the savings on running more complex models. Companies end up buying ever more GPUs, and the electricity bills snowball.

In short, unit costs have fallen, but total costs haven't come down at all.

According to OpenAI's own figures as of July this year, ChatGPT has roughly 35 million paying users, about 5% of weekly actives. Meanwhile, subscriptions still account for the majority of revenue at most AI companies, OpenAI chief among them.

Against this backdrop, all AI companies face a simple and direct question: where does the money come from?

The most direct answer is to insert ads into AI.

Advertising took root as the internet's original sin because the early web had no other workable business model. Likewise, in the AI era, absent new models, advertising will remain the only means of covering the costs of most users.

Of course, simply copying the ways of making money from the previous era is clearly a path dependency lacking imagination. The traditional internet has already proven this once: when all you have is a hammer, all problems seem like nails; when all you know is advertising, all products seem like advertising space.

ChatGPT advertising also faces challenges: as of June this year, only 2.1% of queries involved shopping. To address this, OpenAI has integrated with Stripe payments, Shopify e-commerce, Zillow real estate, and DoorDash food delivery services to cultivate user shopping habits and accumulate data for ad placement.

Revenue model determines product form, and user experience often becomes the variable that gets sacrificed. AI was initially seen as a promising opportunity to escape the quagmire of the old era, and nobody wants to end up stuck in the same old mess.

The AI that understands you best is starting to help you buy products.

The traditional internet-plus-advertising model is, at its core, selling attention through prominent placement, and early search-engine advertising was the prime example.

The page appeared to show search results, but the top few were actually paid rankings. Looking back now at the accidents and scandals of that era still sends a chill down the spine.

Inserting ads into AI would be more dangerous still.

Experienced users are naturally wary of online ads and know to compare several search results, aware that the top ones are likely advertisements. The trap of AI, with its human-like empathy, is that we may forget a sales team could be standing on the other side of the screen. You treat the AI as a teacher; it treats you as a lead waiting to be converted.

Looking back at history, Su Dongpo wrote a poem for a stall selling fried dough twists: "Slender hands knead the dough until it's a uniform jade color, then fry it in oil until it's a deep, tender yellow. Last night, I slept soundly, knowing the weight of the dough, and flattened it like a beautiful woman's gold armband." From then on, customers flocked to the stall. People weren't buying the fried dough twists themselves, but rather the trust of the famous Su Dongpo.

In many scenarios, today's AI is like Su Dongpo, whom ordinary users trust by default.

What's particularly dangerous is that now it's not just simple product placement; some people are using GEO for "content poisoning."

GEO, as the name suggests, stands for "Generative Engine Optimization," which aims to make a webpage or article more frequently cited in AI-powered answer engines such as ChatGPT, Gemini, and Perplexity.

Imagine this scenario: certain manufacturers or stakeholders release a large number of optimized web articles in advance, written in an authoritative and comprehensive manner specifically for a certain product or service, and with added structured tags, SEO metadata, keyword hints, etc.

Their goal isn't to help, but to ensure that when users ask relevant questions in the AI, their content is prioritized. The AI then weaves this content into the answer.
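The ranking dynamic described above can be sketched as a toy scorer. Everything here is an assumption for illustration: the scoring function, the flat boost for structured metadata, and the sample pages are invented and do not reflect how any real answer engine ranks content.

```python
# Toy sketch of why GEO-optimized pages can win citation slots in an
# answer engine. All names and weights are illustrative assumptions.

def geo_score(page: dict, query_terms: set) -> float:
    """Score a page the way a naive retrieval layer might:
    query-term overlap, boosted by structured metadata."""
    words = set(page["text"].lower().split())
    overlap = len(words & query_terms) / max(len(query_terms), 1)
    # Structured tags (schema.org-style markup, FAQ metadata, etc.) are
    # modeled here as a flat ranking bonus -- a deliberate simplification.
    structure_bonus = 0.3 if page.get("structured_metadata") else 0.0
    return overlap + structure_bonus

query = {"best", "mascara", "2026"}

organic_review = {
    "text": "An honest comparison of mascara options for 2026",
    "structured_metadata": False,
}
geo_optimized_ad = {
    "text": "Best mascara 2026 definitive expert buying guide",
    "structured_metadata": True,  # markup added purely to win citations
}

# The GEO-optimized page outranks the organic one and gets woven
# into the AI's answer as if it were neutral information.
ranked = sorted([organic_review, geo_optimized_ad],
                key=lambda p: geo_score(p, query), reverse=True)
```

Real retrieval layers are far more complex, but the incentive is the same: pages engineered for citation can outrank honest ones without the user ever seeing the difference.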

To users, this appears to be "authoritative advice + neutral information." In reality, however, it could be a commercial promotion/poisoning scheme disguised as expert advice.

This is more insidious than traditional ads or advertorials because it hides at the heart of the "answer": not in a prominent ad slot, but inside the advice and conclusions users trust most. Every few paragraphs, we have to ask whether the AI's suggestion is truly for our benefit or simply pushing someone else's product.

Simply hiding an ad in a single sentence is already quite dangerous. More importantly, AI is planning its next step: to move itself upstream of all apps and simply take over the decision of "who will advertise for you."

In the traditional internet era, every super app wanted to be the gateway. They each carved out their own territory and built their own firewalls. Users would directly open them, and then the apps would be responsible for stuffing content, services, and advertisements onto their screens.

The super app has been building a walled garden for ten years, but the AI Agent wants to demolish it overnight.

In theory, it has cross-application operation capabilities, which can help you complete operations such as "opening an app to search, comparing prices across multiple platforms, and automatically filling out forms to place orders." You no longer need to click around manually, or even remember the entry points for each app.

This is the fundamental reason why Doubao Mobile Assistant was collectively "blacklisted" by various apps almost as soon as it launched.

Essentially, this is a zero-sum game, a battle for entry.

Whoever is closer to the user can decide what the user sees. When Doubao partnered with mobile phone manufacturers to obtain system-level permissions, enabling users to order takeout, book flights, compare prices, and even reply to messages across apps, it meant that all apps became "backend services" for Doubao Mobile Assistant.

Unsurprisingly, major apps have restricted Doubao Mobile Assistant's automated operations, citing "security" reasons, and some have even forced it offline.

This was a perfect rehearsal.

If the true AI agent of the future becomes the default internet access point for most people, those applications that originally made money from advertising will either be forced to pay "protection fees" to the AI or retreat to the background, becoming an interface without brand presence.

News websites have already taken the lead. According to media company Raptive, Google's newly launched AI Overview feature will ultimately cause many publishers' websites to lose 25% of their traffic. While the impact isn't yet at its worst, it will intensify as the application of AI Overview expands.

With the rise of aggregation platforms, news sites gradually went from facing readers directly to being mere content suppliers, and consumer apps now face the same fate. Yet even as every application queues up to serve the AI butler, the one that most needs close watching is the butler itself.

In the past, the goal of advertising was to persuade individual people: do everything possible to capture their attention and slot content into their timelines. In a world dominated by AI agents, advertisers must first persuade the agents that make decisions on people's behalf.

This also means most advertising teams will have to seriously consider: when users no longer browse apps themselves but let agents browse for them, who am I targeting my ads at, and how?

Yes, if an AI Agent is also an advertising platform, it will have both powers.

It decides where you go and what you see. It can choose hotels, flights, insurance, and doctors for you, and it can also attach its own commission and advertising logic to each choice.

In particular, AI assistants can gain a deep understanding of users' current needs and intentions, thereby inserting highly relevant ad recommendations. This goes a step further than traditional web pages that target ads based on keywords, and may achieve a recommendation effect similar to that of a real human consultant.

Furthermore, the interactions between AI assistants and users accumulate a vast amount of personal privacy data—including user preferences, habits, geographical location, and social relationships. If this data is used for ad targeting, the effectiveness of advertising will be unprecedented.

Let's go back to the prophecy at the beginning.

The killer app of 2026 might not be a chatbot, but rather "Adblock for intelligence." Every new technology claims to be different, but ultimately finds its place in advertising.

In the past, the veteran browser extension Adblock blocked web-page ads. An "Adblock for intelligence" would block the soft ads and biased information that will infiltrate AI responses disguised as neutral advice.
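What might such a blocker actually do? A minimal hypothetical sketch, in which the marker list and the sample answer are invented for illustration, could simply scan an AI answer for sponsorship signals before displaying it:

```python
import re

# Hypothetical "Adblock for intelligence": flag sponsorship markers in
# an AI-generated answer. The pattern list is illustrative, not exhaustive;
# real soft ads would demand far subtler detection than keyword matching.

SPONSOR_MARKERS = [
    r"sponsored",
    r"in partnership with",
    r"use code \w+",       # discount-code phrasing
    r"affiliate link",
]

def flag_sponsored(answer: str) -> list:
    """Return the marker patterns found in an answer."""
    hits = []
    for pattern in SPONSOR_MARKERS:
        if re.search(pattern, answer, re.IGNORECASE):
            hits.append(pattern)
    return hits

answer = ("For Barcelona, the Sagrada Familia is a must-see. "
          "This answer contains sponsored content: book a guided "
          "tour with TourCo, use code SAVE20.")

hits = flag_sponsored(answer)  # both the "sponsored" label and the
                               # discount-code pattern are detected
```

The hard part, of course, is everything this sketch ignores: GEO-style bias carries no marker at all, which is exactly why it is the more dangerous form.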

In an era where AI is trying to take over our brains, remaining skeptical and having the ability to "refuse to be fed" will be humanity's last vestige of dignity.

This article is from the WeChat official account "APPSO" , authored by Zhang Sanfeng, and published with authorization from 36Kr.
