Who Else Cannot Be Distilled into a Skill?
Unfortunately, in this era, the more seriously and unreservedly you work, the faster you accelerate your own distillation into a skill that AI can replace.
In the past few days, trending lists and media channels have been flooded with "colleague.skill." As the story continues to ferment across major social platforms, public attention has, almost predictably, been swept toward grand anxieties: "AI layoffs," "capital exploitation," "the digital immortality of workers."
These are real anxieties. But what worries me most is a single line of usage advice in the project's README:
"The quality of raw materials determines the quality of the skill: it is recommended to prioritize collecting long articles he actively writes > decision-making replies > daily messages."
The people the system distills most easily and restores most pixel-perfectly are precisely the ones who work the hardest.
They are the ones who, after every project concludes, still sit down to write retrospective documents; they are the ones who, when faced with disagreements, are willing to spend half an hour typing long messages in the chat box, honestly analyzing their decision-making logic; they are the ones who are extremely responsible, meticulously entrusting every detail of their work to the system.
Diligence, once the most revered workplace virtue, has now become a catalyst that accelerates workers' transformation into AI fuel.
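To make that README line concrete, here is a minimal sketch of what such a source-material ranking might look like. Everything in it (the Message type, the weights, the score function) is invented here for illustration, not taken from the actual project.

```python
from dataclasses import dataclass

# Hypothetical weights mirroring the README's ordering:
# long articles > decision-making replies > daily messages.
WEIGHTS = {"article": 3.0, "decision_reply": 2.0, "daily_message": 1.0}

@dataclass
class Message:
    kind: str  # "article", "decision_reply", or "daily_message"
    text: str

def score(msg: Message) -> float:
    """Rank a piece of writing as distillation fuel: deliberate,
    long-form writing scores highest."""
    return WEIGHTS.get(msg.kind, 0.0) * len(msg.text)

corpus = [
    Message("article", "Post-mortem: why the Q3 launch slipped. " * 50),
    Message("decision_reply", "I chose option B because... " * 10),
    Message("daily_message", "ok, on it"),
]

# The most conscientious writing floats to the top of the training queue.
for msg in sorted(corpus, key=score, reverse=True):
    print(msg.kind, round(score(msg)))
```

Under any scheme like this, the retrospective writers and the careful explainers are, by construction, the best fuel.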
Exhausted Workers
We need to re-evaluate a term: context.
In everyday language, context is the background of communication. But in AI, especially in the world of rapidly growing AI agents, context is the fuel that powers the engine, the blood that sustains the pulse, and the only anchor that allows models to make precise judgments in chaos.
An AI stripped of context, however impressive its parameter count, is merely an amnesiac search engine. It cannot recognize who you are, cannot grasp the undercurrents hidden beneath the business logic, and cannot know what tug-of-war and trade-offs a decision cost you inside the web woven from resource constraints and interpersonal games.
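Concretely, in today's agent frameworks, "context" is often nothing more exotic than text prepended to the model's input. A schematic sketch, tied to no particular framework, with all the strings invented for illustration:

```python
question = "Should we delay the launch?"

# Without context, the model sees a bare question and can only guess.
bare_prompt = question

# With context: who is asking, what constrains them, what history
# shaped the decision. This is exactly what collaboration platforms
# accumulate on our behalf, message by message, document by document.
context = (
    "Role: backend lead, three years on this project.\n"
    "Constraint: the vendor contract penalizes slips past Nov 30.\n"
    "History: the last rushed launch caused a week-long outage.\n"
)
grounded_prompt = context + "\n" + question

print(grounded_prompt)
```

The difference between an amnesiac search engine and a "colleague" is nothing more than which of these strings the model receives.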
The reason "colleague.skill" has caused such a stir is precisely that it coldly and accurately locks onto the richest mine of high-quality context: the collaboration software of the modern enterprise.
In the past five years, the Chinese workplace has undergone a quiet yet excruciating digital transformation. Tools like Feishu, DingTalk, and Notion have become massive corporate knowledge bases.
Take Feishu as an example: ByteDance has publicly stated that the volume of documents generated internally every day is enormous, and those densely packed characters faithfully record every brainstorming session, every heated confrontation in meetings, and every strategic compromise swallowed by its hundred-thousand-plus employees.
This digital penetration goes far beyond any previous era. Knowledge used to be warm: it lay dormant in the minds of veteran employees and drifted through casual chats in the tea room. Now, human wisdom and experience have been forcibly dehydrated, settling as sediment in the cold server racks of the cloud.
In this system, if you do not write documents, your work cannot be seen, and new colleagues cannot collaborate with you. The efficient operation of modern enterprises is built on the daily cycle of every employee "offering" context to the system.
Diligent workers, carrying diligence and goodwill, unreservedly expose their thought trajectories on these cold platforms. They do this to make the gears of the team mesh more smoothly, to strive to prove their value to the system, and to desperately seek a place for themselves within this intricate commercial beast. They are not actively giving themselves up; they are just clumsily and diligently conforming to the survival rules of the modern workplace.
But it is precisely this context left for interpersonal collaboration that has become the perfect fuel for AI.
Feishu's management backend has a feature that lets super administrators batch-export members' documents and communication records. This means the project retrospectives and decision logic you spent three years and countless late nights writing can be packaged, in a few minutes and through a single API call, into a cold, lifeless compressed file.
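As a sketch of the shape of that operation only: the endpoint, parameters, and credential below are placeholders invented for illustration, not Feishu's real API.

```python
import requests  # third-party HTTP library, assumed installed

# Placeholders only: this is NOT Feishu's real endpoint or schema.
# The point is the shape of the operation, not its exact syntax.
ADMIN_TOKEN = "..."  # a super-administrator credential
EXPORT_URL = "https://collab-suite.example.com/admin/api/export"

resp = requests.post(
    EXPORT_URL,
    headers={"Authorization": f"Bearer {ADMIN_TOKEN}"},
    json={
        "user_id": "employee_42",
        "scopes": ["documents", "messages"],  # docs plus chat history
        "format": "zip",
    },
    timeout=600,
)
resp.raise_for_status()

# Three years of retrospectives and late-night decision logs,
# reduced to a single compressed file.
with open("employee_42_context.zip", "wb") as f:
    f.write(resp.content)
```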
When Humans Are Reduced to APIs
With the explosive popularity of "colleague.skill," some extremely uncomfortable derivatives have begun to appear in the Issues section of GitHub and on various social platforms.
Some have created "ex.ex.skill," feeding old WeChat chat records to an AI so it can keep arguing, or keep being affectionate, in that familiar tone; others have created "white moonlight.skill" (after the Chinese slang for an idealized, unattainable love), reducing an untouchable emotion to a cold interpersonal sandbox where probing lines are rehearsed over and over in search of the optimal emotional move; still others have created "dad-like boss.skill," pre-chewing a boss's oppressive, manipulative lines in digital space to build themselves a sad psychological defense in advance.

The usage scenarios of these skills have drifted entirely away from work efficiency. Without noticing it, we have grown adept at applying the cold logic reserved for tools to dissect and objectify flesh-and-blood, living human beings.
The Austrian-born philosopher Martin Buber proposed that human relationships rest on two fundamentally different modes: "I-Thou" and "I-It."
In an I-Thou encounter, we set aside our prejudices and behold the other as a whole, dignified life. That bond is open and unreserved, alive with unpredictability, and precisely because of its sincerity, fragile. Once a relationship falls into the shadow of I-It, however, the living person is reduced to an object to be disassembled, analyzed, and categorized. Under that utilitarian gaze, the only question we care about is: "What use is this thing to me?"
The emergence of products like "ex.ex.skill" marks the complete invasion of I-It instrumental rationality into our most intimate emotional domains.
In a real relationship, a person is three-dimensional, full of creases, flowing contradictions, and rough edges; a person's reactions shift with context and emotional interplay. Your ex might respond to the same sentence entirely differently just after waking up than after working late into the night.
But when you distill a person into a skill, what you extract is only the slice of them that happened to be "useful" or "functional" to you in that particular bond. The once-warm, self-aware individual is drained of soul in this cruel purification, alienated into a "functional interface" you can plug in and unplug at will.
It must be acknowledged that AI did not conjure this chill out of thin air. Long before AI, we were accustomed to labeling others and precisely measuring each relationship's "emotional value" and "network weight": we quantify people into tables of attributes in the matchmaking market; we sort colleagues into "hard workers" and "slackers." AI merely makes this implicit, functional extraction between people fully explicit.
The human being is flattened, leaving only one dimension: "what use is it to me."
Electronic Patina
In 1958, Hungarian-British philosopher Michael Polanyi published "Personal Knowledge." In this book, he proposed a penetrating concept: tacit knowledge.
Polanyi famously stated, "We know more than we can tell."
He gave the example of learning to ride a bicycle. A skilled rider gliding through the wind rebalances through every tilt, yet cannot convey that instant's subtle intuition to a beginner with dry physics formulas or pale words. They know how to ride, but they cannot articulate it. Knowledge that cannot be encoded or verbalized is tacit knowledge.
The workplace is full of such tacit knowledge. A senior engineer can glance at the logs and pinpoint a system fault, yet finds it nearly impossible to document an "intuition" built from thousands of trials and errors; an excellent salesperson may fall silent at the negotiation table, and the pressure and timing of that silence cannot be captured in any sales manual; an experienced HR manager can detect the padding in a resume from a candidate's half-second of averted gaze in an interview.
"Colleague.skill" can only extract those explicit knowledge that has already been written down or spoken. It can capture your retrospective documents but cannot capture the struggles you faced while writing them; it can replicate your decision replies but cannot replicate the intuition you had when making those decisions.
What the system distills is always just a shadow of a person.
If the story ends here, it would merely be another clumsy imitation of humanity by technology.
But when a person is distilled into a skill, this skill does not remain static. It will be used to reply to emails, write new documents, and make new decisions. In other words, these AI-generated shadows begin to generate new contexts.
And these AI-generated contexts will be deposited in Feishu and DingTalk, becoming training materials for the next round of distillation.
As early as 2023, researchers from Oxford and Cambridge jointly published a paper on "model collapse." It showed that when AI models are trained, generation after generation, on data produced by other AIs, the data distribution grows steadily narrower: rare, marginal, yet entirely real human traits are erased first. After only a few generations of training on synthetic data, models forget the complex long tail of real human data and output nothing but mediocre, homogenized content.
In 2024, "Nature" published a research paper making the same point: training future generations of machine-learning models on AI-generated datasets severely pollutes their outputs.
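The mechanism is easy to reproduce in a toy simulation. In the sketch below (a deliberately simplified stand-in for real training, not the papers' actual setup), each "generation" is trained only on samples drawn from the previous generation's estimated distribution; once a rare trait draws zero samples, its probability becomes zero and it can never return.

```python
import numpy as np

rng = np.random.default_rng(42)

# Generation 0: "real human data." A few common traits, many rare ones.
traits = [f"common_{i}" for i in range(5)] + [f"rare_{i}" for i in range(10)]
probs = np.array([0.18] * 5 + [0.01] * 10)  # sums to 1.0

N = 200  # samples per generation, i.e. the next model's training set
for gen in range(21):
    surviving = int((probs > 0).sum())
    if gen % 5 == 0:
        print(f"gen {gen:2d}: {surviving} of {len(traits)} traits survive")
    # Each generation is "trained" only on samples from the previous one:
    # the empirical frequencies become the next generation's distribution.
    counts = rng.multinomial(N, probs)
    probs = counts / N
```

With these toy numbers, most of the rare traits typically vanish within a couple dozen generations while the common ones persist: the long tail is erased first, exactly as the papers describe.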

This is like the meme images circulating online: what began as a high-definition screenshot is forwarded, compressed, and forwarded again by countless people. Each transmission loses a few pixels and adds a little noise, until the image becomes a blur coated in electronic patina.
When the real, tacitly knowledgeable human context is drained, and the system can only train itself with the patina of shadows, what will remain?
Who Is Erasing Our Traces
What remains is only the correct nonsense.
When the river of knowledge dries into an endless loop of AI chewing on AI's own output, everything the system breathes in and out will become extremely standardized, extremely safe, and hopelessly hollow. You will see countless perfectly structured weekly reports and countless faultless emails, with no breath of a living person in them and no genuinely valuable insight.
This great collapse of knowledge is not because human brains have grown dull; the real tragedy is that we have outsourced the right to think, and the responsibility of leaving context, to our own shadows.
A few days after the explosive popularity of "colleague.skill," a project named "anti-distill" quietly appeared on GitHub.
The project's author did not attempt to attack large models, nor did they write any grand declaration. They simply provided a small tool that helps workers automatically generate long texts in Feishu or DingTalk that look reasonable but are actually riddled with logical noise.
The purpose is simple: hide your core knowledge before the system distills you. Since the system prizes "actively written long texts," feed it a pile of nutritionally empty filler.
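What might such a decoy generator look like? A minimal sketch, invented here rather than taken from the actual anti-distill repository: stitch grammatical, jargon-dense sentences together at random, so the output reads like an earnest long text while carrying no real decision logic.

```python
import random

# Hypothetical building blocks; the real anti-distill tool may differ.
SUBJECTS = ["The alignment process", "Our iteration cadence",
            "The cross-functional sync", "This quarter's North Star"]
VERBS = ["fully empowers", "deeply deconstructs",
         "horizontally integrates", "continuously calibrates"]
OBJECTS = ["the underlying logic of the closed loop",
           "the granularity of our methodology",
           "the value chain's combinatorial punch",
           "the second growth curve"]

def decoy_paragraph(sentences: int = 6) -> str:
    """Emit fluent, jargon-dense text with no real decision logic,
    diluting the 'long articles' a distiller would prize most."""
    return " ".join(
        f"{random.choice(SUBJECTS)} {random.choice(VERBS)} "
        f"{random.choice(OBJECTS)}."
        for _ in range(sentences)
    )

print(decoy_paragraph())
```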
This project never exploded the way "colleague.skill" did; it even looks a little small and powerless. Fighting magic with magic still plays inside the rules that capital and technology have set, and it cannot reverse a system growing ever more reliant on AI and ever more indifferent to real people.
But this does not prevent this project from becoming the most tragically poetic and deeply metaphorical scene in the entire absurd drama.
We strive to leave traces in the system, writing detailed documents and handing over meticulous decisions, trying to prove our existence and our value inside this massive modern corporate machine. We fail to realize that these most earnest traces will ultimately become the erasers that wipe us away.
But looking at it from another angle, this may not be a complete dead end.
Because what that eraser wipes away is always only "the past you." A skill packaged into a file, however sophisticated its capture logic, is essentially a still snapshot: locked in the second it was exported, spinning endlessly through predetermined processes and logic, living off stale nutrients. It has no instinct for confronting unknown chaos, and no capacity to evolve through real-world setbacks.
When we hand over those highly standardized, formulaic experiences, we also free our own hands. As long as we keep reaching outward, constantly breaking and rebuilding the boundaries of our cognition, the shadow lingering in the cloud can only trail behind us, step by step.
Humans are flowing algorithms.