This article is from the WeChat public account: 不懂经 (Don't Understand the Classics), author: Uncle Rust (Don't Understand the Classics), original title: "Zhang Wenhong's Paradox in the AI Era: Why Do I Feel Worthless the More I Use AI?", cover image from: Visual China.
A few days ago, I came across a short video featuring a speech by Zhang Wenhong, director of the National Center for Infectious Diseases, at the Hong Kong High Mountain Forum on January 10. He clearly stated, "I refuse to introduce AI into hospital medical record systems."
His reason: for doctors who have not yet been systematically trained, AI will fundamentally change the training path, undermining the independent diagnostic skills that young doctors must acquire through traditional training.
Zhang Wenhong explained that he himself certainly uses AI to review cases first. The key, though, is that with more than thirty years of clinical experience, he can spot at a glance where the AI is wrong.
The problem lies with the young doctors.
If a doctor relies on AI for diagnoses from the internship stage onward and skips complete clinical reasoning training, he will forever lose a crucial ability: the ability to tell when the AI is right and when it is wrong.
Zhang Wenhong's remarks revealed, even to an ordinary AI user like me, a widely misunderstood reality about skills and leverage in the AI era.
Over the past year or two, I have witnessed a peculiar kind of "collective anxiety".
Interestingly, this anxiety doesn't come from those who don't understand technology; on the contrary, it comes more from the elite groups who are already proficient in using AI: programmers, lawyers, analysts, and self-media personalities.
Initially, everyone was excited, believing that AI would make them superhuman. But after a brief period of efficiency euphoria, many fell into a deeper sense of powerlessness:
When AI can complete 80% of the work at zero cost, will the remaining 20% of my value be enough to uphold my professional dignity?
If an AI can finish two weeks' worth of my code in minutes; if a large model can generate a polished due diligence report in seconds; if Gemini or Doubao can let people with no drawing background produce master-level artwork; if GPT can "accurately" read medical examination and test reports, then where exactly is the moat protecting human skills?
Previously, The Atlantic published an article stating that we are entering an era of deskilling; however, the other side of the coin is precisely this: AI has not rendered skills useless; rather, it has triggered a dramatic "skill inflation." Skills simply need to be redefined.
In an era where execution costs approach zero, AI acts as a revealing mirror. It amplifies not only your efficiency but also the precision of your thinking.
You may feel "useless" because AI has ruthlessly exposed a fact: most of the work you were once proud of was just "bricklaying," execution, and "doing what you're told," rather than "thinking," let alone raising and solving problems.
The truth about skills in the 21st century is no longer about how many tools you have, but about how much genuine leverage you have in your mind. The comprehensive ability of "macro-level control + micro-level verification" is the true guarantee of a secure future in the AI era.
I. Zhang Wenhong Paradox: 0 times 10 is still 0
There is a widely held view about Silicon Valley, but it is often misunderstood.
People say, "AI is a 10x productivity amplifier."
The mathematical meaning of this statement is even more chilling than its literal meaning.
If your current ability is 1, AI can make you 10; if you are 10, AI can make you 100. But if your fundamental understanding of a certain field is 0, then 0 multiplied by 10 is still 0.
This is precisely the core of Zhang Wenhong's concern: a young doctor who relies on AI from the internship stage may have zero clinical judgment. No matter how powerful AI is, zero multiplied by any number still equals zero.
What's even more terrifying is that this "0" doesn't even know it is 0.
Zhang Wenhong put it bluntly: "New doctors cannot rely solely on AI to diagnose diseases." Why? Because even if AI reaches an accuracy rate of 95%, the remaining 5% of errors still need to be identified and corrected by professional doctors.
If a doctor lacks independent diagnostic skills, how can he detect AI errors? How can he handle complex cases that AI cannot solve?
This is what I call the "Zhang Wenhong Paradox." On one level it is a chicken-and-egg problem; on another, it asks whether people are using tools or tools are using people.
It reveals the first layer of truth about skills in the AI era:
The essence of AI is "probability fitting", while the value of humans lies in "bearing the consequences".
In the past, the skills we talked about often referred to proficient execution, memorizing grammar, reciting legal provisions, and mastering various keyboard shortcuts. But in the AI era, these hard skills have rapidly depreciated and become infrastructure.
Instead, a more hidden and scarcer ability has emerged: judgment. Judgment, in this context, refers to the ability to understand the long-term consequences of one's actions.
Imagine this scenario: a senior engineer and a novice are both writing code using AI.
The novice gets only a block of code. He cannot judge whether it has architectural flaws, cannot predict how it will perform under extreme concurrency, and cannot even tell whether it is a dead-end solution.
Senior engineers don't just see the code; they see the path. They know what tasks to assign to the AI, how to verify the results, and, more importantly, where to correct the AI when it makes a mistake—and AI will inevitably make mistakes.
For beginners, AI is a black box; they can only hope it outputs the correct answer. For experts, AI is a team of interns with unlimited energy, doing exactly what they're told.
Therefore, the future distinction between experts and ordinary people will lie in whether you have the ability to "verify AI output".
Zhang Wenhong can spot the errors in AI diagnoses at a glance, not through some mysterious intuition, but through the "meta-capabilities" accumulated from over thirty years of clinical experience. This ability is precisely what young doctors, whose training has been skipped by AI, lack most.
Therefore, without deep expertise as a ballast, AI will bring not efficiency, but costly chaos.
II. Why Is Your Prompt Always "Almost There"?
Why can some people use AI to solve complex problems, while others can only use it as a chatbot?
The problem isn't that you can't write "spells," but that your mental entropy is too high.
There is a worrying phenomenon recently: people are starting to outsource thinking itself to AI.
When faced with a problem, instead of breaking it down, they simply throw a jumbled mess of requirements at the model and then get angry at the mediocre output: "This AI is completely useless."
Actually, it's not that AI is stupid, it's that you haven't thought it through.
No matter how advanced an AI model is, it is essentially a prediction machine based on "context." The quality of its output is strictly limited by the quality of the context you input. This is the modern version of "Garbage In, Garbage Out."
The top skills of the 21st century have become "clear expression" and "structured thinking".
A true expert has already completed a rigorous mental simulation before even opening the chat box:
1. Define the problem: What core contradiction do I need to resolve?
2. Break down the logic: What subtasks make up this larger problem, and what are their dependencies?
3. Set the standard: What kind of result counts as acceptable?
For example, before having AI assist in developing a feature, have you clarified the data flow? Before having AI write an article, have you built a unique perspective framework?
Don't expect AI to do the thinking "from 0 to 1" for you.
AI excels at filling in the details (from 1 to 100), but that "1," that core insight, that logical framework, must be provided by you.
If you can't articulate your ideas clearly to a human colleague, you will never get satisfactory results from AI either.
Clear writing is clear thinking.
In the future, programming in natural language will be a universal skill. But that doesn't mean programming will become simpler; it means the precision of your language and logic will become the new code.
If your thinking is chaotic, AI will only amplify that chaos more efficiently.
III. Breaking Free from the Information Cocoon: Getting Closer to the Essence Than 99% of People
Since AI is trained on massive amounts of existing human data, it carries an inherent flaw: a pull toward mediocre consensus, that is, regression to the mean.
If you ask an AI for its opinion on health, finance, or history, it will most likely give you a "textbook" answer. These answers are safe and correct, but often extremely mediocre because they simply repeat the most frequently occurring information on the internet.
This leads to the third dimension: the insight to distinguish truth from falsehood.
Knowledge and understanding are two different things.
- Knowledge is knowing "you should do it this way";
- Understanding means knowing "why to do this, and when not to do it."
This is precisely the fundamental difference between Zhang Wenhong and young doctors.
Young doctors can instantly acquire "knowledge" through AI, such as diagnostic results, medication recommendations, and treatment plans. But Zhang Wenhong possesses "understanding": he knows where the boundaries of this knowledge lie, when to break with convention, and when the "standard answer" provided by AI is wrong.
In this age of information overload, if you only acquire information through rote learning and algorithmic recommendations, you are essentially just mechanically repeating information in a giant "echo chamber." You don't truly understand how things work.
To be smarter than AI, we need to be closer to the essence of things (first principles) than 99% of people.
- Want to understand business? Don't just read best-sellers and public-account articles; study cash flow, leverage, supply and demand, and human greed.
- Want to understand health? Don't just believe so-called authoritative guidelines; study the biological mechanisms of metabolism, hormones, and inflammatory responses.
When AI gives you a "standardized suggestion," only those who truly understand how the underlying system works can keenly spot the flaws or decisively overturn the AI's suggestion in special circumstances.
As Zhang Wenhong said, whether you will be misled by AI depends on whether your own abilities surpass the AI's. And you cannot compete with AI on knowledge; you can only compete on understanding.
The competitive advantage of the future belongs to those who dare to question "training data." You need to build your own cognitive system—a system that isn't copied, but rather one that you've personally verified through practice, through painful feedback loops, and through independent thinking.
AI represents the average level of all human knowledge. If you want to surpass the average, you cannot rely solely on AI; you must possess unique insights that AI cannot derive through statistical probability.
IV. When Execution Value Drops to Zero: From Worker to Inspector
If we take a longer view, history may not repeat itself, but it always rhymes.
In the 1980s, the widespread adoption of computers caused considerable anxiety among accountants and lawyers. Before then, lawyers would spend days sifting through mountains of case files to find a single precedent. The advent of electronic retrieval compressed that task into mere seconds.
Did lawyers lose their jobs? No. On the contrary, the legal profession has grown larger and more complex.
As search became easier, clients' expectations of lawyers also rose. People no longer pay for "finding precedents," but for "developing unique defense strategies based on complex precedents."
Similarly, as AI takes over coding, copywriting, and basic diagnostics, the role of humans is undergoing a fundamental transformation:
We are evolving from "craftsmen" to "commanders"; from "doers" to "inspectors".
In the past, a good engineer might have spent 50% of their time writing code and 50% thinking about architecture. Now they can spend 90% of their time on architecture, the business, and the user experience, delegating the coding to AI and reviewing its output.
This means the ceiling on the complexity of the work has been raised.
Independent developers can now run a company that once required a team of ten; a knowledgeable self-media creator can produce in a single day the content that used to take a week; and with AI's assistance, a senior doctor such as Zhang Wenhong can handle a caseload that would once have been impossible.
This is the new definition of "skills" in the AI era:
It is no longer a single-dimensional "specialization", but a cross-dimensional integration capability.
You don't need to lay every brick yourself, but you must know the building's structural mechanics, have an aesthetic sense to determine its appearance, and have a business mind to decide where it will be most valuable to build.
This comprehensive capability of "macro-level control + micro-level verification" is the true guarantee of a secure future in the AI era.
The two key capabilities that Zhang Wenhong emphasized essentially mean this:
1. Assessing the accuracy of AI diagnoses (micro-level verification)
2. Diagnosing and treating complex, difficult cases that AI cannot handle (macro-level control)
Doctors who lack these two abilities can only be considered "AI operators".
Conclusion: Only by Crossing the Dimensional Barrier Can You Compete from a Higher Plane
Let's go back to the phenomenon we discussed at the beginning: Why do we feel more useless the more we use AI?
Because AI deprives you of the right to gain a sense of accomplishment through "hard work".
In the past, if you spent three days compiling a beautiful report, you would feel very valuable; now, AI can do it in three seconds, and this illusory sense of value collapses instantly.
This is indeed painful, but it is also an awakening.
AI forces us to confront the most difficult question: Beyond mechanical execution, where does my true intellectual value lie?
This is the worst of times for those who are unwilling to think. They will become utter slaves to algorithms, unaware that they are being swallowed up by a mundane information cocoon.
But for those who are full of curiosity, possess independent thinking abilities, and yearn to explore the essence of things, this is the best era in human history:
- All barriers have been lowered.
- All ceilings have disappeared.
- You have the most powerful think tank and execution team in human history, on standby 24 hours a day.
Zhang Wenhong is not against AI; what he opposes is skipping the development of foundational abilities and jumping straight to AI, outsourcing thinking and metacognition to it.
He himself uses AI with great proficiency because he has thirty years of experience as a foundation. For him, AI is like giving a tiger wings; for young doctors who lack such experience, it may be like pulling up seedlings to help them grow, or drinking poison to quench thirst.
In the 21st century, skills will not disappear, but they will undergo a brutal refinement.
Don't try to compete with AI in "solving problems," try to compete with AI in "creating problems."
When you stop treating AI as a tool for laziness and instead treat it as a super-lever that demands great intelligence to control, guide, and correct, what you see through AI is no longer your ordinary self, but an infinitely magnified, powerful super-individual.
