Unfortunately, in this era, the more diligently you work, the faster you distill yourself into a skill that AI can use to replace you.
Over the past two days, trending topics and media feeds have been flooded with "colleague.skill". As the story ferments across major social platforms, public attention has, almost predictably, been swept up in grand anxieties: "AI layoffs", "capital exploitation", "digital immortality for workers".
These things are indeed worrying, but what worries me most is a line of usage advice in the project's README document:
"The quality of raw materials determines the quality of skills: prioritize collecting long articles the person writes on their own initiative > decision-making replies > daily messages."
Those who work the hardest are the ones who are most easily distilled and restored pixel by pixel by the system.
They are those who, after each project concludes, still sit down to write a debriefing document; they are those who, when faced with disagreements, are willing to spend half an hour typing a long message in the chat box, honestly analyzing their decision-making logic; they are those who are extremely responsible, meticulously entrusting all the details of their work to the system.
Diligence, once the most admired workplace virtue, has now become a catalyst that accelerates the transformation of working people into fuel for AI.
Drained workers
We need to re-understand a word: context.
In everyday language, context is the background of communication. But in AI, especially in the world of AI agents that are growing rapidly, context is the fuel that powers the engine, the blood that keeps it pulsating, and the only anchor point that enables the model to make accurate judgments in chaos.
Stripped of its context, an AI, no matter how large its parameter count, is nothing more than a search engine with amnesia. It cannot recognize who you are, cannot fathom the undercurrents hidden beneath the business logic, and has no way of knowing the long struggles and trade-offs behind each decision you made inside a web woven from resource constraints and interpersonal games.
The reason "colleague.skill" has caused such a huge stir is precisely that it ruthlessly and precisely targets the richest mine of high-quality context: the collaborative software of the modern enterprise.
Over the past five years, the Chinese workplace has undergone a quiet yet profound digital transformation. Tools such as Lark, DingTalk, and Notion have become massive enterprise knowledge bases.
Taking Lark as an example, ByteDance has publicly stated that the number of documents generated internally every day is massive. These densely packed characters faithfully record every brainstorming session, every heated meeting, and every strategic compromise swallowed by more than 100,000 employees.
This digital penetration far surpasses any previous era. Once upon a time, knowledge was warm and alive, dormant in the minds of veteran employees and drifting in casual chats in the break room; but now, all human wisdom and experience have been forcibly drained of their moisture and ruthlessly deposited in the cold, impersonal server matrix of the cloud.
In this system, if you don't write documentation, your work won't be visible, and new colleagues won't be able to collaborate with you. The efficient operation of modern enterprises is built on the daily cycle of each employee "contributing" context to the system.
Diligent workers, out of conscientiousness and goodwill, lay bare their thought processes on these cold, impersonal platforms. They do it so the team's gears mesh more smoothly, to prove their value to the system, and to find their place inside this intricate commercial behemoth. They are not voluntarily surrendering themselves; they are simply, clumsily and diligently, adapting to the survival rules of the modern workplace.
But it is precisely these contexts left behind for interpersonal collaboration that have become the perfect fuel for AI.
Lark's admin panel has a feature that lets super administrators batch-export members' documents and communication records. This means the project reviews and decision logic you spent three years and countless sleepless nights writing can be packaged into a cold, lifeless archive in a matter of minutes, with a single API call.
When humans are reduced to APIs
With the explosive popularity of "colleague.skill", some extremely uncomfortable derivative works have begun to appear in GitHub Issues and on various social media platforms.
Some built "ex.skill", feeding years of WeChat chat history to an AI so it could keep arguing, or being affectionate, in that familiar tone; others built "White Moonlight.skill", reducing an untouchable feeling to a cold interpersonal sandbox where probing conversations are rehearsed again and again in search of the optimal emotional solution; still others built "Daddy-like Boss.skill", pre-chewing those oppressive PUA lines in digital space to erect a sad psychological defense for themselves.

The use cases for these skills have completely deviated from the realm of work efficiency. It turns out that, without realizing it, we have become adept at wielding the cold logic of tools to dissect and objectify those flesh-and-blood, living people.
The Austrian-born philosopher Martin Buber proposed that human relationships take two fundamentally different forms: the "I-Thou" relation and the "I-It" relation.
In an "I-Thou" encounter, we set aside prejudice and gaze upon each other as complete, dignified living beings. The bond is open and unreserved, brimming with vital unpredictability, and precisely because of its sincerity it is exceptionally fragile. But once we slip into the shadow of "I-It", the living person is reduced to an object to be dissected, analyzed, categorized, and labeled. Under that utilitarian scrutiny, the only question left is: "What use is this to me?"
The emergence of products like "ex.skill" signals that the instrumental rationality of "I-It" has fully invaded the most private emotional realm.
In a real relationship, people are multifaceted, full of complexities, constantly evolving with contradictions and rough edges. Their reactions change according to the specific situation and emotional interactions. Your ex's reaction to the same sentence might be completely different when they wake up in the morning versus after working late at night.
But when you distill a person into a skill, what you extract is merely the functional remnant that happens to be "useful" and "effective" to you within that specific bond. The original, warm person, with their own joys and sorrows, is completely drained of their soul in this cruel purification, transformed into a "functional interface" that you can plug in and unplug at will.
It must be acknowledged that AI did not invent this chilling coldness out of thin air. Long before AI, we were already accustomed to labeling others and precisely weighing the "emotional value" and "network weight" of each relationship: we reduced people's attributes on the dating market to rows in a spreadsheet; we sorted colleagues into "capable" and "lazy." AI merely made this implicit, functional extraction between people fully explicit.
The person is flattened, leaving only the cross-section that answers "What use are you to me?"
Electronic patina
In 1958, Hungarian-born British philosopher Michael Polanyi published *Personal Knowledge*. In this book, he introduced a highly insightful concept: tacit knowledge.
Polanyi famously said, "We always know more than we can say."
He gave the example of learning to ride a bicycle. A skilled rider keeps balance through every subtle shift of weight, yet cannot use dry physics formulas or pale words to convey to a beginner the body's intuition in that moment. He knows how to ride; he just cannot explain it. Knowledge that cannot be encoded or spoken is tacit knowledge.
The workplace is full of this kind of tacit knowledge. A senior engineer might be able to pinpoint a problem by glancing at the logs when troubleshooting a system failure, but it's difficult for him to document this "intuition" built on thousands of trial and error attempts. A top salesperson might suddenly fall silent at the negotiating table, and the sense of pressure and timing conveyed by that silence is something no sales manual can record. An experienced HR professional can detect inaccuracies in a resume during an interview by observing just half a second of a candidate's averted eye contact.
"Colleague.skill" can only extract explicit knowledge that has already been written down or spoken. It can capture your debriefing documents, but it can't capture the struggles you went through when writing them; it can copy your decision responses, but it can't copy the intuition you had when making those decisions.
What the system distills is always just a shadow of one person.
If the story ended here, it would be nothing more than another clumsy imitation of human nature by technology.
But once a person is distilled into a skill, that skill doesn't remain static. It's used to reply to emails, write new documents, and make new decisions. In other words, these AI-generated shadows begin to create new contexts.
These AI-generated contexts will then be stored in Lark and DingTalk, becoming training material for the next round of distillation.
Back in 2023, a research team from Oxford and Cambridge Universities jointly published a paper on "model collapse." The research showed that when AI models are iteratively trained using data generated by other AI, the distribution of the data becomes increasingly narrow. Rare, marginal, but extremely authentic human traits are quickly erased. After only a few generations of training on synthetic data, the model completely forgets the long-tailed, complex real-human data, instead outputting extremely mediocre and homogeneous content.
A paper published in Nature in 2024 likewise found that training future generations of machine-learning models on AI-generated datasets seriously pollutes their output.
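The collapse dynamic these papers describe can be illustrated with a toy simulation (my own sketch, not the papers' experiment): treat a "model" as nothing more than the empirical word frequencies of its training corpus, and let each generation train only on the synthetic output of the previous one. A rare token that fails to be sampled even once goes extinct, and extinction is absorbing: once the tail is gone, it can never come back.

```python
import random
from collections import Counter

random.seed(42)

# Generation 0: "human" data, a long-tailed vocabulary where a few words are
# common and many are rare (a crude stand-in for rich real-world text).
vocab = list(range(100))
weights = [1 / (rank + 1) for rank in vocab]            # Zipf-like long tail
corpus = random.choices(vocab, weights=weights, k=500)

distinct = [len(set(corpus))]
for gen in range(200):
    # "Train" on the corpus: the model's distribution is just the empirical
    # frequencies. Then replace the corpus entirely with synthetic samples.
    freq = Counter(corpus)
    tokens = list(freq)
    corpus = random.choices(tokens, weights=[freq[t] for t in tokens], k=500)
    distinct.append(len(set(corpus)))

print("distinct tokens, generation 0:  ", distinct[0])
print("distinct tokens, generation 200:", distinct[-1])
```

Because each generation can only emit tokens that survived the previous one, the count of distinct tokens never increases; run long enough, diversity collapses toward a handful of the most common words, which is exactly the "mediocre and homogeneous" output described above.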

It's like those meme images circulating online: what began as a high-resolution screenshot is forwarded, compressed, and forwarded again countless times. Each share loses a few pixels and adds a little noise, until the image ends up blurry and distorted, coated in that telltale "electronic patina".
When the real, tacit human context is exhausted and the system can only train itself on these patina-coated shadows, what will be left?
Who is erasing our traces?
All that's left is correct nonsense.
When the river of knowledge dries up into an endless loop of AI ruminating on what AI has already chewed, everything the system produces will be perfectly standardized and perfectly safe, yet hopelessly empty. You will see countless flawlessly structured weekly reports and countless impeccable emails, but they will contain no trace of human life and no truly valuable insight.
This great defeat of knowledge is not because the human brain has become stupid. The real tragedy is that we have outsourced the right to think and the responsibility of leaving context to our own shadows.
A few days after "colleague.skill" went viral, a project called "anti-distill" quietly appeared on GitHub.
The author of this project did not attempt to attack large models, nor did he write any grand manifesto. He simply provided a small tool that helps workers auto-generate long articles in Lark or DingTalk that look reasonable but are actually illogical and useless.
His goal was simple: to hide his core knowledge before it was distilled by the system. Since the system preferred to collect "actively written long articles," he would feed it a bunch of meaningless gibberish.
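The trick can be sketched in a few lines (a hypothetical toy of my own, not the actual anti-distill code): generate text that is grammatical and jargon-dense but carries no decision logic at all, so a scraper that prizes "actively written long articles" harvests nothing but empty calories.

```python
import random

random.seed(7)

# Hypothetical phrase banks of corporate filler. Every combination parses as a
# plausible sentence, yet none encodes a real decision or piece of knowledge.
SUBJECTS = ["the alignment of cross-team objectives", "our iteration cadence",
            "the closed loop of user value", "the granularity of the metrics"]
VERBS = ["empowers", "deconstructs", "re-anchors", "systematically uplifts"]
OBJECTS = ["the underlying growth flywheel", "the end-to-end delivery chain",
           "a reusable methodology", "the second curve of the business"]

def filler_paragraph(sentences=4):
    """Assemble jargon into a fluent but information-free paragraph."""
    return " ".join(
        f"In essence, {random.choice(SUBJECTS)} {random.choice(VERBS)} "
        f"{random.choice(OBJECTS)}."
        for _ in range(sentences)
    )

print(filler_paragraph())
```

Dump enough of this into the knowledge base and the scraper's "high-quality long articles" become noise: the distilled skill learns the cadence of diligence with none of its content.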
This project never matched the explosive success of "colleague.skill"; in fact, it feels small and largely futile. Fighting magic with magic, it still plays within the rules set by capital and technology, and it cannot change the larger trend of systems relying ever more on AI and caring ever less about real people.
But this does not prevent the project from being the most tragically poetic and profoundly metaphorical scene in the entire absurdist drama.
We worked incredibly hard to leave our mark on the system, writing detailed documents and making meticulous decisions, trying to prove our existence and value within this massive modern corporate machine. Little did we know that these painstakingly crafted traces would ultimately become the eraser that wiped us away.
But from another perspective, this may not necessarily be a complete dead end.
Because what that eraser wipes away is always just "the past you." A skill packaged into a file, however sophisticated its retrieval logic, is essentially a static snapshot: locked at the moment of export, drawing only on stale material, spinning endlessly inside predetermined processes and logic. It has no instinct for confronting unknown chaos and no capacity to evolve through real-world setbacks.
When we relinquish those highly standardized, established experiences, we free up our own hands. As long as we continue to explore outwards and constantly break down and reconstruct the boundaries of our cognition, that shadow lingering in the clouds will forever only be able to follow in our footsteps.
Humans are like flowing algorithms.