Written by: Deep Thinking Circle
Have you noticed that everyone around you using AI is doing the same thing? Prompt, accept, publish. Without judgment or taste, they mechanically repeat the same actions like factory workers on an assembly line. I recently read an article by Silicon Valley entrepreneur Shann, who bluntly pointed out that 90% of AI users are currently trapped in this loop. They think that mastering AI tools means mastering the future, unaware that the real competition has only just begun. More importantly, Shann believes we have only about 12 months to build a true competitive advantage; once this window closes, standing out will become extremely difficult. This resonated deeply with me, because I went through a similar awakening myself.
I remember the feeling about a year ago when I first really started building products and content with AI—it was absolutely addictive. The time between "I have an idea" and "it's live" was practically zero. I completed more projects in three months than in the previous two years combined. But when I mustered the courage to look back at what I'd released, I had to admit a harsh truth: half of them were mediocre. Technically sound, functionally complete, but utterly unremarkable. They looked like everything else because they were built exactly the same way. The same prompts, the same defaults, the same superficial understanding of "good." I'd fallen into the most common traps of the AI era: mistaking quantity for quality, rapid release for productivity, and doing more for doing better. This realization made me stop and rethink: in an era where AI allows everyone to produce quickly, what is the real competitive advantage?
My new book, "Going Global: Global Product Marketing Practices," is about to be published. To thank the readers who have supported Deep Thinking Circle, I've prepared a book giveaway: you can be among the first to receive a free copy. If you're interested, please fill out the form below. Because the publisher has provided a limited number of copies, I'll select a portion of the respondents and can't guarantee that everyone will receive one; thank you for understanding.
The proliferation of AI slop and the crisis of trust
" AI slop " was named the word of the year for 2025. Mentions of the term surged ninefold, from 461,000 to 2.4 million. But numbers alone cannot fully capture the true experience of consumers. You've certainly seen content like these: LinkedIn posts that look like they've been generated with mid-level marketing tips; landing pages with identical gradient backgrounds, the "Inter" font, and headlines like "Revolutionize Your Workflow"; blog posts that cover every angle of the topic but say nothing of substance. Technically, there's nothing wrong with this content, but it lacks the most important element: a human touch.
Shann shared a particularly interesting research finding. A study by NYU and Emory University showed that AI-generated ads had a 19% higher click-through rate than human-created ads. By standard metrics, the AI output was objectively better. Yet when consumers learned the ads were AI-generated, their purchase intention dropped by 33%. This is worth pondering: better output by the numbers, and people still rejected it. Not because the content was bad, but because they didn't feel a human presence behind it. No one was making the decisions; no one cared enough to put their name on it. Consumers could sense the absence, even if they couldn't pinpoint exactly what was wrong.
I've observed this phenomenon spreading across sectors. By some estimates, 80-90% of AI agent projects fail in production. Thousands of near-identical websites launch daily, their content reading like one bot summarizing the output of another. The barrier to "functional" has never been lower, which means the barrier to "excellent" has never mattered more. Functionality is now free; excellence still comes at a price, and that price is measured in taste, attention, and the willingness to push past the first output. Consumer trust in AI-generated content has declined by roughly 50%, not by chance, but as a natural reaction to this flood.
Three major competitive advantages: capabilities that AI cannot replace
Paul Graham once said, "In the age of AI, taste will become even more important. When anyone can make anything, the real difference is what you choose to make." He's right, but I think taste alone isn't enough. After a year of practice and observation, I've found that only three things can truly build a competitive advantage in the AI era: taste, distribution, and high agency.
Taste is knowing what's good. This isn't an abstract concept, but rather the judgment manifested in every decision. Distribution is getting good things to those who care about them. In this age of information overload, being seen is a scarce skill. High agency is the willingness to proactively find out what to do even when no one tells you. This is a personality trait that determines whether you bypass obstacles or stop when faced with them.
Why can't AI replace these three? Because judgment can only come from experience, trust can only come from persistence, and intrinsic motivation won't give up when the path is unclear. Most people have a fundamental misunderstanding of AI: AI doesn't level the playing field; it just tilts it further. AI is like a mirror, reflecting how much the user truly understands. Give it to people without context, without taste, and without understanding what they're building, and you get large-scale, generic output. Give it to people who truly understand their field and can evaluate output with a trained eye, and it becomes the most powerful tool they've ever used. The same input, completely different results. The variable is always people.

The first moat: Taste
Shann shared his moment of epiphany during the building process. Looking back at his rapidly released work, he realized half of it was mediocre. So he did something most people skip: he stopped and studied. He spent hundreds of hours researching what truly constitutes "good." He read about the thought processes of other builders, studying creators who consistently produce genuinely distinctive work, not for the sake of being different, but because someone cared enough to make real decisions rather than accepting whatever the AI first presented. He studied website design, typography, spacing, and visual hierarchy, analyzing sites that actually converted, trying to understand why they worked while thousands of similar sites failed. He read about storytelling, narrative tension, and what keeps people scrolling instead of bouncing.
This reminded me of my own experience. When building AI-driven marketing materials, I initially tried every tool I could find: Gamma, Chronicle, Beautiful.ai, and so on. The outputs all had the same "okay" feel: technically complete, visually clean, utterly forgettable. So I stopped looking for tools to do the job for me and started doing it myself. I spent several days studying the material closely, not just reading but thinking. What story does this data tell? What would make people care about these numbers? What narrative thread connects everything from beginning to end? I studied the real principles of presentation design: how information designers handle data density, how the best conference talks build and release tension, and how visual hierarchy guides the eye across a page without telling it where to look. Finally, I divided the work clearly: Claude Opus 4.6 wrote the storylines and copy, Gemini generated the visuals, and I guided both, providing specific references, constraints, and examples of how each section should feel.
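If you want to try a similar split, here's a minimal sketch of the drafting half, assuming the official anthropic Python SDK; the model identifier, brief, and constraints are illustrative placeholders, not my actual setup.

```python
# A minimal sketch of the "human guides, model drafts" split, assuming the
# official anthropic Python SDK (pip install anthropic). The model name,
# brief, and constraints below are illustrative placeholders.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# The human contribution: specific references, constraints, and examples of
# how the section should feel -- not just "write me a landing page".
BRIEF = """
Audience: growth marketers evaluating analytics tools.
Narrative thread: every number must answer "why should they care?"
Reference: the tension-and-release pacing of a good conference keynote.
Constraint: no buzzwords ("revolutionize", "unlock", "supercharge").
Task: draft the opening section of the launch story.
"""

response = client.messages.create(
    model="claude-opus-4-5",  # placeholder; use whatever model you have access to
    max_tokens=1024,
    system="You draft marketing copy. Follow the brief's references and constraints exactly.",
    messages=[{"role": "user", "content": BRIEF}],
)
print(response.content[0].text)  # this is the AI's 80%; the edit pass is your 20%
```

The point of the sketch is the shape of the brief: references, constraints, and a narrative thread supplied by a human, with the model doing only the drafting.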
Why does AI always default to the generic? Leon Lin has a brilliant explanation. He built a "taste skill" for Claude Code because he realized a fundamental characteristic of how LLMs (large language models) work: they are probabilistic machines. Without strict rules, they statistically default to the most common patterns in their training data. This is why every AI-generated website looks the same: the Inter font, a purple gradient, rounded-corner cards in a grid. It's not that the AI can't do better; it's that the most likely output is the average of everything it has seen. Leon's solution is roughly 400 tokens of explicit design rules: specific fonts (Press Start 2P, VT323) instead of Inter and Roboto, specific colors (neon pink, electric blue, acid green) instead of the default blue-purple tones, rules about motion, spatial composition, and backgrounds, and a crucial "what to avoid" list to keep the AI from sliding back to its defaults.
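To make this concrete, here's a rough sketch of what such an explicit rule block might look like when prepended to a generation prompt. The wording and the build_design_prompt helper are hypothetical illustrations built from the examples above, not Leon's actual skill file.

```python
# A hypothetical, compact design-rule block in the spirit of the "taste skill"
# described above. Every rule here is an illustration, not Leon's real file.
DESIGN_RULES = """
FONTS: Press Start 2P for headings, VT323 for body. Never Inter or Roboto.
COLORS: neon pink, electric blue, acid green. Never the default blue-purple
        gradient.
MOTION: one deliberate animation per view, triggered by user action.
LAYOUT: asymmetric composition, generous negative space.
AVOID: purple gradients, rounded-corner card grids, centered hero sections
       with three feature columns, "Revolutionize your..." headlines.
"""

def build_design_prompt(task: str) -> str:
    """Prepend explicit rules so the model can't fall back to the
    statistical average of its training data."""
    return f"{DESIGN_RULES}\nTask: {task}\nFollow every rule, especially AVOID."

print(build_design_prompt("Design a landing page for a retro arcade game."))
```

Note that the AVOID list does as much work as the positive rules: it fences off the high-probability defaults the model would otherwise drift toward.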

This list of "what to avoid" is the real insight. Taste is not just knowing what you want, but also knowing what to reject. It's having your own opinions on default settings and being willing to overturn them. Most people accept any output because they don't have a strong enough sense of what "better" should look like, so they don't know to keep pushing forward. This is why there are no shortcuts to taste: you can't get it from tutorials. You get it from exposure, from slowly building an internal model of what works and what doesn't by observing thousands of examples. From studying typography until you can tell why one font pairing feels sophisticated while another feels generic, even if you can't fully explain why. From reading enough good writing until you can feel when a sentence carries its weight and when it's just filling space.
I've come to understand deeply that cultivating taste takes time and a great deal of deliberate practice. Shann describes a new 80/20 rule: 80% is AI, 20% is your taste. Let AI do what it does best: research, drafting, boilerplate code, structure, formatting, speed. That's the 80%. Don't resist it, don't slow it down, don't manually do work a machine can finish in seconds; that wastes your most valuable resources, attention and judgment. But the last 20% is yours. That's where you decide what to keep and what to delete. You rewrite the opening because the AI gave you a safe one, and safe doesn't stop anyone mid-scroll. You replace default components with ones that actually fit. You examine the output with everything you know about what "good" means in your specific domain.
Most people have this ratio reversed. They spend 80% of their energy on prompting and tweaking AI, trying to get the perfect output the first time, running the same prompt fifteen times with slightly different wording, searching for the magic word combination that produces exactly what they want. Then they spend almost no time on curation and judgment. They optimize the wrong side of the equation. Productivity without quality is just movement. The internet is saturated with competent mediocrity, everything works but nothing stands out because everyone has stopped in the same place.
The second moat: Distribution
You can have the best product, the best content, the best design in the world; if nobody sees it, it's meaningless. This is the moat most builders, especially technical ones, severely underestimate. AI has collapsed the barrier to building, but it hasn't touched the barrier to trust. Building is becoming commoditized: anyone can ship products, create content, generate marketing campaigns. The cost of making things is approaching zero. The barrier to trust? As high as ever, perhaps higher, because the flood of AI-generated content makes people more skeptical, not less. When everything can be AI-generated, trust in the humans behind the work becomes a premium asset.
Shann points out a key distinction: the gap between "vibe-coded and published" and "someone actually using it and paying for it" is almost entirely a matter of distribution. And the heart of distribution is trust at scale. Yes, you can generate 50 posts in an hour. You can automate outreach, repurpose content across platforms, and schedule everything a month in advance. Someone out there is publishing over 1,000 AI-generated posts a day across hundreds of accounts, and their engagement is approaching zero. Quantity without quality is just noise at scale, and audiences can sense what was mass-produced versus what was made for them.
The difference between good and bad content rarely lies in the information it contains. It lies in whether the reader trusts the person who wrote it. Trust comes from consistency, a recognizable voice, and accumulated evidence that the author knows what they're talking about because they've been showing their work for months or years. You can't prompt your way to that. Trust runs on a completely different clock: AI can compress creation from days to minutes, but trust still takes months or years to build. There are no shortcuts, no growth hacks. You can't vibe-code trust.
I think there's a crucial distinction most people miss: a passive audience is a commodity, and follower counts are vanity metrics. An active community is the moat: the people who interact in your replies, share your work unprompted, and come back every day because you've become part of how they think about a topic. You can't manufacture that with content calendars and scheduling tools. You earn it by being genuinely useful, saying specific things instead of vague ones, being honest about what you do and don't know, and showing up long enough for people to start paying attention. The real advantage of distribution in the AI age lies in using AI for the logistics (formatting, repurposing, scheduling, analyzing) and focusing all your energy on making the content itself worth sharing.
Taste fuels distribution. If your work is genuinely good, people will start sharing it for you, because it makes them think, not because you ask them to. If your work is generic, no posting frequency can save it; you're just putting more mediocrity in front of more people, faster.
The third moat: High agency
This is the moat most people underestimate, yet it may be the most important of the three. Taste can be cultivated and distribution can be built, but high agency is the personality trait that either drives everything else or blocks it. High agency is the willingness to figure things out without someone handing you a tutorial. It's finding a way around an obstacle instead of giving up when you hit one. It's combining tools no one told you to combine because you're curious enough to try. It's opening the documentation and trying four different approaches before asking for help when something doesn't work.
Replit's CEO once said, "You don't need any development experience. You need perseverance. You need to learn quickly." Coinbase's CEO has said something similar: their best employees are often completely unqualified on paper, but they are highly proactive people who simply get things done without needing to be managed at every step. The people thriving now aren't the most credentialed or technically proficient; they're the ones who act without asking permission. Non-developers ship Chrome extensions, SaaS products, and complete mobile apps in a weekend because they have the curiosity to open the tools and start experimenting, rather than waiting for the perfect course or the perfect moment.
AI is a multiplier, not an equalizer. This is perhaps the most misunderstood thing about these tools right now. People talk about AI democratizing access and leveling the playing field. That's true technically but misleading in practice. A multiplier amplifies whatever you bring to it. Curiosity plus AI is 10x leverage: you move faster, learn faster, build faster, and correct course sooner. Passivity plus AI is zero. Zero multiplied by ten is still zero.
In practice, high agency looks like this: instead of asking "How do I do this?", you ask "What if I try this?" and then you actually try. Before posting a question, before searching for answers, you attempt something. You fail, you learn from the failure, you try again with new information. This willingness to engage with uncertainty rather than back away is what separates people who build real things from people who consume content about building things.
You can see this in the people who don't just write code with Claude but dig into X, Reddit, community threads, and source code to study what the best builders are actually doing. They reverse-engineer why certain products feel better than the AI's defaults. They learn the underlying frameworks instead of just copy-pasting suggestions. They ask Claude to critique their own work, using AI to challenge their assumptions rather than confirm them. Highly agentic people also treat patience as a strategic asset: while everyone else races to ship the first usable thing, anyone willing to go deeper finds an opening. When the market is saturated with speed and superficiality, slowness and depth become a competitive advantage.
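As an illustration of that critique loop, here's a minimal sketch, again assuming the official anthropic Python SDK; the prompt wording and model name are illustrative, not a prescribed method.

```python
# A minimal sketch of using the model as a critic rather than a yes-man,
# assuming the official anthropic Python SDK. Prompt wording is illustrative.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def critique(draft: str) -> str:
    """Ask for failure modes, not validation."""
    response = client.messages.create(
        model="claude-opus-4-5",  # placeholder model name
        max_tokens=800,
        messages=[{
            "role": "user",
            "content": (
                "Critique this draft as a skeptical editor. List the three "
                "weakest claims, any assumption I haven't justified, and "
                "what a hostile reader would push back on. Do not praise.\n\n"
                + draft
            ),
        }],
    )
    return response.content[0].text
```

The design choice that matters is the explicit "do not praise" framing: without it, the model's probabilistic default is agreeable confirmation, the exact failure mode this section warns against.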
The biggest misconception about AI right now is that it's a shortcut. It's a speed multiplier, and a speed multiplier applied to poor judgment just gets you to the wrong place faster. It won't save you from building the wrong thing; it will have you building the wrong thing in record time. Of the three moats, high agency is probably the hardest to fake. AI can approximate most of the execution layer: code, design, copywriting, research. What it can't approximate is the drive to figure things out when everything is unclear and nobody tells you what to do next. That has to come from you, and frankly, it's the foundation that makes the other two possible.
The window is closing
Currently, most people using AI are lazy about it. I don't say this to be harsh; it's just an observable fact. The default behavior is: prompt, accept, publish. They hardly edit, hardly apply judgment, and hardly put any taste into it. The result reflects this: a growing ocean of competent, forgettable, and indistinguishable output.
This won't last forever. As AI improves, as tools become more intuitive, and as more people figure out the technological layers, the gap between lazy and intentional AI use will narrow. Right now, simply having these three moats gives you a head start over 95% of people using the same tools. This window will close, but it's open today.
I've noticed something: your audience is drowning in AI slop. Every scroll is a wall of generic output that looks, sounds, and feels the same. The people who cultivate taste and know what's worth making, who build genuine reach by earning trust over time, and who keep figuring things out while others accept the defaults, stand out immediately. Not because they're faster, have better tools, or discovered some secret prompt nobody else knows, but because they do something almost nobody else is willing to do: care about what happens after the AI finishes.
Shann gave a 12-month timeframe, and I think he's right. In 12 months, having taste won't be rare; it will be expected. Distribution will be harder to establish because everyone will be chasing it. Those who start now gain a compounding first-mover advantage. This isn't artificial scarcity or manufactured urgency; it's the reality of the technology adoption curve. Early adopters build the infrastructure, accumulate the expertise, and earn the trust. Later entrants compete in a far more crowded space.
My advice is simple: build all three moats. Taste tells you what's worth making, distribution makes it visible, and agency keeps you going when everything is unclear. That's how you build something people actually remember, while others post faster and wonder why nobody cares. Tools are just tools; what matters is what you do with them and how much of yourself you put into the process.