OpenAI's ChatGPT Atlas is certainly a browser product, but it is even more of a signal.
Many still remember how Sundar Pichai rose to become CEO of Google, and later Alphabet: a core reason was the success of Chrome.
Looking back at that period, Chrome was a major asset in Google's competition with Microsoft. It gave Google its own terminal and its own entry point.
In the era of large AI models, the story is clearly repeating itself, only with the roles reversed: OpenAI has become the Google of that era, and today's Google has become the Microsoft of that era.
This is plainly a continuation of the old entry-point wars, but something else is quietly changing. To understand this shift, and where it leads, we need to start from an underlying logic I call the "Intelligence Scale Effect" (a term I am coining here, so take the name loosely).
The basis of this effect can be summarized in a simple formula:
Effectiveness of intelligence = intelligence level of the large model × depth of real-world understanding
This formula reveals the core of future competition in intelligent applications.
To win, simply having a "smarter" model (i.e., a higher "intelligence level") is not enough. The decisive factor is the second multiplier: the depth of the model's understanding of the real world.
The latter becomes more critical over time, and may even determine how fast the former evolves.
To maximize the final "effectiveness", every company that joins the AI wave will embark on a frantic, endless race: a race to expand the boundaries of its own data without limit.
Once model companies figure this out, they will all move toward applications, and once they move toward applications, they will almost all converge on the same place.
Here, applications and models are inseparable.
Deconstructing the "Intelligence Scale Effect"
Let’s start by breaking down the two key components of this formula.
1. Intelligence level of large models
This is the "basic IQ" of AI.
It is determined by the model architecture, amount of training data, parameter scale, and computing resources.
Top large models, represented by OpenAI's GPT series and Anthropic's Claude series, have acquired powerful general capabilities such as language understanding, logical reasoning, knowledge storage, and code generation through pre-training on trillions of tokens of public data.
This is the "potential energy" of AI.
It represents the highest height that the model can theoretically reach.
Over the past few years, we have witnessed an arms race in the “level of intelligence”—parameters have soared from billions to trillions, and model capabilities have continued to surpass imagination.
But think one step further: what will the core be?
Who can obtain more complete data from real scenarios?
Remember when everyone briefly felt that model intelligence had stopped improving? The real issue was simply that the data had run out.
So the second half of the large-model era is destined to return to data.
Not more data of the same kind as before, but data that adds dimensions the old corpora never covered.
(The intelligence scale effect is easier to grasp in the context of autonomous driving.)
2. Depth of real-world understanding
This is the "situational intelligence" of AI.
If "intelligence level" is the CPU of AI, then "depth of reality understanding" is its RAM (memory) and I/O (input/output) systems. It represents the depth and breadth of specific, real-time, private, or proprietary data that a model can access and understand when performing specific tasks.
No matter how intelligent a model is, if it knows nothing about the work in front of it, your personal schedule, or your company's internal knowledge base, it is like a genius locked in a sealed room: full of wisdom, with no way to apply it.
Its "depth of real-world understanding" is zero, so its final "effectiveness of intelligence" also approaches zero.
The core insight of the "intelligence scale effect" is this:
Once the "intelligence level" reaches a certain threshold, the key factor determining the success or failure of an application will quickly shift from the IQ of the model itself to the scale of "real data" it can leverage.
Data Enclosure Movement
Where will this lead?
This will lead to a new enclosure movement: the data enclosure movement.
ChatGPT Atlas can be seen as the official clarion call, striking directly at Google's heartland.
But this did not really start there; it has been under way for quite a while:
Manifestation 1: From the cloud to the desktop and OS – seizing personal context
Case: OpenAI's ChatGPT Atlas and Anthropic's Claude desktop app
There is actually not much to explain here; it is simply a device-plus-cloud integrated route.
The goal is simple: remove the experience bottleneck and obtain more data. Without this, the core pain point cannot be solved: web-based AI is disconnected from users' workflows. It cannot "see" local documents or applications, so users copy and paste constantly, and efficiency suffers.
The direction is set, so it would not be surprising if OpenAI launched an OS someday. (Google later ran the same playbook with Android, to even greater effect.)
The method is also unified.
All of these are done through native applications with system-level permissions. After user authorization, AI can directly "see" the screen content and read local files, thereby understanding the complete context. This is in stark contrast to the "blind" Web version of AI, which is limited to the browser tab.
Here is a typical scenario: a designer invokes the desktop AI directly inside Figma, points at an element, and asks: "Restyle this button in a neumorphic look and give me the CSS." Because the AI "sees" the whole design, it can give precise suggestions, shrinking what used to be 5 to 10 minutes of cross-application work down to under 30 seconds.
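As a rough sketch of this kind of flow (not OpenAI's or Anthropic's actual implementation): a desktop assistant with user-granted permissions could package a local screenshot together with the question and send both to a vision-capable model. The file path, model name, and prompt below are illustrative assumptions.

```python
# Minimal sketch of a desktop assistant assembling local context before calling
# a model. The screenshot path and prompt are placeholders, not a real product.
import base64
from pathlib import Path

from openai import OpenAI  # assumes the official openai Python SDK is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def build_request(screenshot_path: str, question: str) -> list[dict]:
    """Package a local screenshot plus the user's question into one message."""
    image_b64 = base64.b64encode(Path(screenshot_path).read_bytes()).decode()
    return [{
        "role": "user",
        "content": [
            {"type": "text", "text": question},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
        ],
    }]

# Hypothetical usage: "design.png" stands in for a user-authorized capture of
# the current Figma canvas.
messages = build_request(
    "design.png",
    "Restyle this button in a neumorphic look and give me the CSS.",
)
reply = client.chat.completions.create(model="gpt-4o", messages=messages)
print(reply.choices[0].message.content)
```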
Of course, this deep integration also brings serious privacy and security challenges, requiring users to place a high level of trust. We will discuss this later.
This is the beginning of AI understanding you better than you know yourself. You can't remember things from a year ago, but in theory it can.
Manifestation 2: From static to real-time – embracing the dynamic world
Case: Perplexity AI (AI search engine)
Perplexity AI, founded in 2022 and rising rapidly through 2023 and 2024, does exactly this. It addresses two major pain points: the stale knowledge of a conventional LLM, and traditional search engines that "only give links, not answers."
At the time, they were among the earlier teams to build out a full "real-time retrieval + LLM summarization" (RAG) architecture.
When a user asks a question, it first crawls the latest web information in real time (expanding its depth of real-world understanding) and then feeds it into a large model (such as GPT-4) to generate an immediate answer. This stands in stark contrast to Google (which returns a list of links) and the base version of ChatGPT (whose knowledge is frozen at its training cutoff).
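A minimal sketch of the "retrieve fresh results, then summarize with citations" pattern might look like the following. The web_search helper is a placeholder for whatever live search or crawl backend is available; none of this is Perplexity's actual pipeline.

```python
# Sketch of real-time retrieval + LLM summarization (RAG). Illustrative only.
from openai import OpenAI

client = OpenAI()

def web_search(query: str, k: int = 5) -> list[dict]:
    """Placeholder: return the top-k fresh web results as {'url', 'snippet'} dicts."""
    raise NotImplementedError("plug in your own search or crawl backend")

def answer_with_sources(question: str) -> str:
    results = web_search(question)
    # Ground the model in freshly retrieved snippets and ask it to cite them.
    context = "\n\n".join(
        f"[{i + 1}] {r['url']}\n{r['snippet']}" for i, r in enumerate(results)
    )
    prompt = (
        "Answer the question using only the numbered sources below, "
        "and cite them like [1].\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )
    reply = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.choices[0].message.content

# e.g. answer_with_sources("What did last night's earnings report say?")
```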
Today this has become table-stakes functionality. Where the product goes from here is hard to say; it may not survive.
Still, it was a successful product: Perplexity passed 10 million monthly active users (MAU) in early 2024. For a query like "last night's earnings figures," its timeliness and recall far exceeded a static LLM's, saving users a great deal of filtering time.
Its limitation is that answer quality depends on the sources, and each query carries a double cost: retrieval plus generation.
Manifestation 3: From public to private – deepening the enterprise knowledge base
Case Study: Microsoft 365 Copilot
Microsoft has rolled Copilot out across its large M365 enterprise customer base. It aims to tackle a major pain point inside businesses: data silos. Employee knowledge sits scattered across applications like Outlook, Teams, and SharePoint, and traditional tools struggle to pull it together.
At the heart of Copilot's integration is Microsoft Graph.
(I have shared a diagram of this architecture before.)
This graph indexes all of an organization's private data (forming the "depth of real-world understanding") and combines it with Copilot's intelligence. When an employee asks, for example, "Summarize last week's progress on Project A and draft a weekly report," Copilot can instantly search emails, chats, and documents to produce a precise report. That level of accuracy is out of reach for any "public" AI assistant or traditional internal search.
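As a very simplified sketch of the "query private data, then summarize" pattern (this is not Copilot's internal pipeline; the token, scopes, query string, and model call are illustrative assumptions), one could query the Microsoft Graph search API for a user's data and hand the hits to a model:

```python
# Sketch: pull tenant data via the Microsoft Graph search API, then summarize.
import requests
from openai import OpenAI

ACCESS_TOKEN = "<delegated Graph token with e.g. Mail.Read>"

def graph_search(query: str, entity_type: str = "message") -> list[str]:
    """Search the signed-in user's data (here: email) via POST /v1.0/search/query."""
    resp = requests.post(
        "https://graph.microsoft.com/v1.0/search/query",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        json={"requests": [{
            "entityTypes": [entity_type],
            "query": {"queryString": query},
        }]},
        timeout=30,
    )
    resp.raise_for_status()
    hits = resp.json()["value"][0]["hitsContainers"][0].get("hits", [])
    return [hit.get("summary", "") for hit in hits]

# In practice you would run similar queries for chats ("chatMessage") and
# files ("driveItem"), then merge the snippets before summarizing.
client = OpenAI()
snippets = graph_search("Project A progress last week")
reply = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": "Draft a short weekly report from these excerpts:\n" + "\n".join(snippets),
    }],
)
print(reply.choices[0].message.content)
```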
This, too, ties the device and the cloud together.
Users are said to be nearly four times faster at tasks like summarizing meetings, saving an average of 1.2 hours per week.
Manifestation 4: From digital to physical – the endgame of the Internet of Everything (a look ahead)
The end point of this boundary expansion will inevitably be from the digital world to the physical world.
Wearable devices (such as smart glasses and AI pins) and Internet of Things (IoT) devices are the ultimate form of extending the "intelligence scale effect".
This is why Sam Altman keeps teaming up with people who build hardware.
Imagine an AI assistant that can "see" what you are looking at through the camera on your glasses and "hear" your conversations through the microphone. How powerful would that be? It could translate a menu for you in real time, help you recognize a new customer, and even walk you through the steps while you repair an appliance.
This obviously raises other questions, but I have actually heard people at events discuss wearing a microphone to record their daily activities and then analyzing the recordings.
At least the wearer is not opposed to it; the people around them might well be.
Why is this competition more intense than ever?
The competition triggered by the "intelligence scale effect" will likely be more intense, with a "winner takes all" dynamic far beyond that of the PC Internet and mobile Internet eras.
In the Internet age, the core of competition is "attention".
Platforms compete for users' screen time through content and services (such as search, social networking, and video). Although network effects exist, users' "switching costs" are relatively manageable—I can use Google today and switch to Bing tomorrow; I can post on WeChat and speak on Weibo.
Back then, services of completely different natures, such as search and IM, ran in parallel, each with its own network effect.
But in the intelligent era, the core of competition has shifted to "context", which is the "depth of real-world understanding" in our formula.
This is an essential difference.
Coupled with the general-purpose nature of large-model intelligence, the impact of this essential difference will be magnified to an unprecedented degree.
Once an AI application is deeply and successfully embedded in your personal or corporate workflow, whether it understands all your local files (such as an XX desktop app), masters your company's entire private knowledge base (such as Copilot), or is wired into your real-time physical world (such as future smart glasses), the "depth of real-world understanding" it accumulates will constitute an unparalleled moat.
Competition between search and IM is weak competition; the competition described above is search versus search, which is strong competition.
So the deeper AI applications go, the more the landscape will look like this: hardware and software churning up a thousand drifts of snow, with contenders rising everywhere amid the smoke of battle.
In the past, what really had high stickiness was the network effect.
It is hard to switch your operating system or WeChat, but for most other things, switching is not really a problem.
Whether you shop on JD.com or Tmall, where is the stickiness, really?
But now another kind of stickiness may emerge, an invisible but highly sticky spider silk:
You can't easily export and import the deep understanding of your personal habits and private data accumulated in one AI assistant into another. The cost of replacing an AI assistant could be equivalent to the lengthy training of a new employee from scratch.
For an enterprise, the core asset is knowledge. Once the model above is in place, switching products is equivalent to replacing a whole cohort of employees and rebuilding all of that knowledge from scratch.
Given the boundless nature of general intelligence, the competition among the major digital companies will ultimately become a zero-sum game. Users (whether individuals or businesses) will most likely end up choosing a single "master AI" and maximizing its data boundaries. This pushes competition to an unprecedented intensity:
Whoever occupies the user's core data sources first has all but secured victory.
(Incidentally, I wrote about this topic 7 or 8 years ago, which was a bit too early.)
The "Great Game" of Efficiency and Trust
There is another variable here: how much weight the individual user actually carries.
Funnily enough, when WeChat pushes its extensions, users as a whole are what matters most, yet the individual user is actually the least important.
Take red-envelope campaigns and other growth operations as an example: they are little different from "harvesting leeks", that is, treating users as a crop to be cut again and again.
The expansion of data boundaries driven by the "intelligence scale effect" brings a new challenge: privacy and trust.
When AI frantically expands its data boundaries in order to "understand you better", it will inevitably touch the user's privacy red line.
● Are you willing to let AI read all your local files just to provide you with better suggestions when writing reports?
● Are you willing to let AI analyze all your chat records just to predict your needs more accurately?
● Are companies willing to hand over their most core business secrets to an AI system just in exchange for higher operational efficiency?
This is the core contradiction of the future: users’ desire for “efficiency” is limitless, but their concerns about “privacy” are also real.
Can privacy act as a counterweight to efficiency?
Therefore, the second half of this competition will not only be about who can capture more data, but also about who can process this data in a more trustworthy and secure way.
Summary
If I had to choose, I would take the "intelligence scale effect" (effectiveness of intelligence = intelligence level of the large model × depth of real-world understanding) as the first principle of applications in the AI era.
It clearly states that the future of AI lies not in building an omniscient "digital God" but in building countless professional assistants that are "deeply embedded" in reality.
OpenAI's ChatGPT Atlas is only the beginning of this grand competition.
The real battlefield lies in the endless pursuit of "depth of real-world understanding."
I personally hope that the ultimate winner will be those who can not only maximize the product of this formula, but also win the ultimate trust of users in the process.
This article comes from the WeChat public account "Zhuo Mu Shi" , author: Li Zhiyong, published by 36Kr with authorization.