In 2023, when GPT-4 achieved an astonishingly high score on the United States Medical Licensing Examination (USMLE), not only OpenAI but also Silicon Valley cheered, and the future of AI in medicine seemed bright.
Three years later, top experts have explicitly stated their opposition to integrating AI into hospital medical record systems.
Some people oppose AI, and some people oppose those who oppose it.
The voice of opposition came from Zhang Wenhong, director of the Department of Infectious Diseases at Huashan Hospital, affiliated with Fudan University. Speaking at a recent forum, this expert with many years of clinical experience went straight to the core concern: without professional training, young doctors cannot judge whether AI's conclusions are right or wrong.
Image courtesy of Shenzhen TV.
He's not against using AI; in fact, he mentioned using it himself, letting it process large volumes of medical records in a short time. The difference is that he can spot problems in its output "at a glance." Young doctors who skip that training and experience can rely on AI to reach diagnoses consistent with senior experts', but they cannot truly tell whether the AI's results are correct or not.
Some people didn't like hearing this. Wang Xiaochuan, the founder of Baichuan Intelligent, which is poised to make a splash in the medical field, "opposed Zhang Wenhong's opposition."
Image from: Phoenix V Live
In his view, AI is cutting into doctors' "interests," so doctors naturally resist it: doctors prioritize teaching and research for the sake of promotion, whereas AI is what actually serves patients.
The implication is that doctors are only after your money and don't care about your life or death. However, within the healthcare system, doctors' efficiency and patients' interests are often highly intertwined. If AI can significantly improve efficiency and reduce misdiagnosis, wouldn't that be the best service for patients?
He further emphasized that doctors are too busy to use AI, and that AI cannot help them write papers or apply for professional titles; from this he concluded that AI should serve patients directly. But optimizing workflows and assisting decision-making have been tasks for AI since the first day of LLM applications. That premise does not lead to the conclusion that AI should serve patients directly, let alone to the conclusion that "once AI is powerful enough, doctors become unnecessary."
If Zhang Wenhong's position can be dismissed as determined by his own interests, then Wang Xiaochuan, as the CEO of an AI company, advocating that AI directly serve individuals is just as much a position determined by his own interests.
AI-powered healthcare is a lucrative market.
AI × healthcare is indeed a lucrative opportunity, and it's not just Wang Xiaochuan who's eyeing it; foreign giants have also been repeatedly trying to get in on it.
Last week, OpenAI released ChatGPT Health, which integrates Apple Health data, personal medical records, and data from other health apps, then uses AI to provide analysis and recommendations.
Globally, over 230 million people use ChatGPT for health advice, reflecting the level of user demand. However, OpenAI is cautious, emphasizing that it should not replace professional medical advice, and its product planning focuses primarily on providing advice for daily health maintenance.
Anthropic, the parent company of Claude, also announced plans to expand Claude's capabilities in the medical field. Judging from the performance of Opus 4.5, Claude's model performance is quite good.
Even so, Claude's design isn't directly patient-centric. The launch of Claude For Healthcare acts as a connector, helping doctors and healthcare professionals quickly and easily extract information from industry-standard systems and databases. For individual users, Claude's role is to summarize medical history, interpret test results and indicators, and prepare questions for appointments, thereby improving the efficiency of communication between patients and doctors.
Both of these AI giants have entered the healthcare field, but neither is aiming to replace doctors. Only one product is: Grok.
Grok's owner is, after all, Elon Musk, who seems to want to launch all of humanity into space. In numerous interviews, he has stated that a human doctor can only read a limited amount of medical literature and remember a limited number of cases. AI, on the other hand, can read all the medical papers in human history in seconds and master all the latest treatment plans. In terms of pure diagnostic accuracy, humans cannot compete with AI.
He also gave Grok his X-rays and blood test results, asking it to identify the problem, and said that Grok's judgments were sometimes faster and more accurate than those of doctors.
That brings us back to what Zhang Wenhong said: the more you rely on AI, the more dependent on it you become.
Judgment needs to be trained.
Even before the current AI boom was fueled by large language models, similar problems arose when radiology departments introduced CAD systems.
CAD systems, short for Computer-Aided Detection, are now indispensable tools in medical imaging. However, the "automation bias" that came with CAD's introduction was exactly what Zhang Wenhong worried about. A 2004 UK study found that when CAD systems failed to mark lesions, whether through omission or mislabeling, radiologists' detection sensitivity dropped significantly; readers relying on CAD assistance scored even lower than the unaided control group.
A more recent study, conducted in 2020, on skin cancer detection found that when AI provided the correct diagnosis, the accuracy rate improved for all doctors. However, when the diagnosis was incorrect, the accuracy rate dropped more drastically for less experienced doctors. Overall, the best accuracy was achieved when AI and doctors collaborated on diagnosis.
Experienced experts like Zhang Wenhong can spot AI bluffing at a glance because they have decades of accumulated case databases in their minds. However, if young doctors start using AI to write medical records and make diagnoses from their internships, their mental "database" remains empty.
Therefore, having AI reach the goal "in one step" always carries huge risks. That is true for replacing doctors in one step, and even more true for directly serving patients in one step.
We used to joke that "if you consult Baidu about your symptoms, the first diagnosis is always cancer." The biggest problem with today's generative AI is its extreme confidence: even seriously wrong medical advice is delivered with great persuasiveness, complete with numerous citations (sometimes of fabricated literature).
Image from: Xiaohongshu
If even young doctors with years of professional training tend to lower their guard in the face of AI's confidence and develop automation bias, then for ordinary users with no medical background, this "perfect illusion" deserves even more vigilance; it is essentially no different from blind faith in the title of "expert."
Is AI useful for the health of ordinary people? Definitely, otherwise there wouldn't be 230 million user records. However, it's important to distinguish two points: First, treating illness and maintaining health are two different things. Maintaining health includes daily diet and lifestyle, supplement intake, exercise plans, etc., all of which do not pose a significant risk to life.
Treating illnesses is obviously far more complex. Currently, AI is most commonly used for symptom analysis, interpreting indicators in test reports, and providing simple medication guidance.
To put it more simply: AI guidance can buy you some time until you can take leave from work and get to a hospital. Given how hard it is to get a hospital appointment in China these days, and that some tests require booking in advance, not everyone can rush to the hospital the moment symptoms appear; even a community hospital may have long queues.
So with AI's help, you can use basic medications to relieve symptoms, keep the condition from worsening, and reduce discomfort while you wait.
As the "primary responsible party" for our own health, we should not skip cultivating and training our own judgment; that is the true way to relinquish control and decision-making power.
This article is from the WeChat official account "APPSO" , authored by Discover Tomorrow's Products, and published with authorization from 36Kr.