It's long, but I highly recommend reading this article in its entirety, line by line. It would be a shame to miss it.
Something Big Is Happening
Matt Shumer: Think back to February 2020.
If you were paying close attention, you might have noticed a few people talking about the virus spreading overseas. But most of us didn't pay much attention. The stock market was booming, kids were in school, we were going to restaurants, shaking hands, and planning trips.
If someone had told you they were hoarding toilet paper, you might have thought they were spending too much time in some weird corner of the internet. Then, in the space of about three weeks, the whole world changed. Offices closed, kids came home, and life was reshaped in a way you wouldn't have believed if you'd described it to yourself just a month ago.
I believe we're now in the "February 2020" of something much bigger than COVID.
I've spent six years building AI startups and investing in this space. I live in this world. So I'm writing this for the people in my life: my family, my friends, and the people I care about who keep asking me about AI, but who haven't been getting answers that reflect what's actually happening.
I've been giving them only polite, socially appropriate answers because the honest version makes me sound like I'm crazy. For a while, I thought that was enough reason to keep the truth to myself.
But the gap between what I've been saying and what's actually happening has grown too wide. The people I care about deserve to hear what's coming, even if it sounds crazy.
Let's be clear: I work in AI, but the vast majority of the industry, myself included, has little influence over what happens next. The future is being shaped by a very small group of people: hundreds of researchers at companies like OpenAI, Anthropic, and Google DeepMind. A single training run, managed by a small team over a few months, can create an AI system that changes the entire trajectory of technology. Most of us in AI are building on a foundation we haven't laid. We're watching this unfold, just like you, close enough to feel the ground shaking first.
But the time has come. This isn't "something we'll talk about someday." It's "this is happening now, and you need to understand it."
I know this is real because it happened to me first.
There's something people outside of the tech industry don't yet understand. The reason so many in the industry are sounding the alarm now is because it's already happened to us. We're not making predictions; we're telling you what's already happened in our professional world, and we're warning you that you're next.
For years, AI has steadily improved. There have been big leaps here and there, but there were enough gaps between each leap to allow for adaptation. Then, in 2025, new techniques for building models were developed, and progress accelerated. Then it got faster, and faster again. New models weren't just better than the previous ones; they were vastly better, and the release cycle kept shrinking. I found myself using AI more and more, correcting it less and less, and watching it handle tasks I'd previously thought required my expertise.
Then, on February 5th, two major AI research labs released new models on the same day: OpenAI's GPT-5.3 Codex, and Opus 4.6 from Anthropic (the makers of Claude, ChatGPT's main competitor). And then something struck me. It wasn't like a light switch flipping on; it was more like realizing the water had risen to my chest.
I was no longer needed for the actual technical work of my job. I could describe in plain English what I wanted to create, and it just... appeared. Not a draft for me to edit, but a finished product. I could tell the AI what I wanted, walk away from my computer for four hours, and come back to find the work done. Done well, better than I could have done it myself, needing no revisions. Just a few months ago, I'd stay in the loop, giving the AI guidelines and making edits. Now, I just describe the result I want and walk away.
To help you understand what this actually looks like, let me give you an example. I say to the AI, "I want to build this app. Here's how it should function, and roughly what I want it to look like. Figure out the user flow, the design, everything." And the AI does just that. It writes tens of thousands of lines of code. And then something happens that would have been unthinkable even a year ago: the AI launches the app. It clicks buttons. It tests functionality. It uses the app like a human would. If it doesn't like the look or feel, it goes back and fixes it. It iterates, refining and polishing until it's satisfied. Only when it's satisfied does it come back to me and say, "It's ready to test." And when I do, it's usually perfect.
I'm not exaggerating. This is what my Monday looked like this week. But what surprised me most was the model released last week (GPT-5.3 Codex). It wasn't simply following my instructions. It was making intelligent decisions. For the first time, it had something that felt like "judgment." Something like "taste." That inexplicable sense of what's right that people said AI would never have. This model either has it, or it's so close that the difference is no longer significant.
I've always been an early adopter of AI tools, but the past few months have been shocking even for me. These new AI models aren't incremental improvements. They're on a whole other level.
And here's why this matters, even to those of you who don't work in technology.
AI research labs made a deliberate choice. They focused first on making AI proficient at coding. That's because building AI requires a tremendous amount of code. If AI can write that code, it can help build the next version of itself. That creates a smarter version that writes better code, which in turn creates an even smarter version. Optimizing AI for coding was the key that unlocked everything else, so they did it first. My job started changing before yours not because they were targeting software engineers specifically, but as a side effect of that initial focus.
They've done it. And now it's moving into every other field.
What tech professionals have experienced over the past year, watching AI go from "a useful tool" to "something that does my job better than me," is something everyone else will experience now. This applies to everything: law, finance, medicine, accounting, consulting, writing, design, analytics, customer service, and more. It's not 10 years away. The people building these systems say it's between one and five years. Some say it's even shorter. Given what I've seen over the past few months, I'm inclined to agree with the "shorter" view.
"But the AI I've used sucks."
I hear this constantly. I get it. It used to be true. If you tried ChatGPT in 2023 or early 2024 and thought, "It's making things up" or "It's not very impressive," you were right. The early versions were genuinely limited. They hallucinated, confidently stating things that weren't true.
That was two years ago. In AI terms, it's prehistoric.
The models available today are incomparably different from those of just six months ago. The year-long debate over whether AI is "really getting better" or "has reached its limits" is over. The conclusion is clear. Anyone who still argues for limitations either hasn't used the current models, has a motivation to downplay what's happening, or is evaluating them based on experiences from 2024, which are no longer valid. I'm not trying to be dismissive. I'm saying this because the gap between public perception and current reality is so vast, and that gap is dangerous. It's the gap that's keeping people unprepared.

Part of the problem is that most people are using free versions of AI tools, which lag behind models accessible to paid users by at least a year. Judging the current state of AI based on the free version of ChatGPT is like assessing the quality of a smartphone while using a feature phone. Those who pay for the best tools and use them daily in their work know what's coming.
I'm reminded of a friend of mine, a lawyer. I keep urging him to try AI at his firm, but he keeps finding reasons not to: it doesn't fit his area of expertise, it made mistakes when he tested it, it doesn't understand the nuances of his work. I understand. But partners at large law firms contact me for advice because they've used the current versions and seen where this is headed. One managing partner at a large firm uses AI for hours every day. He says it feels like having a team of associates ready to go. He's not using it because it's a toy. He's using it because it works. And he said something I'll never forget: every few months, the AI's ability to handle his work improves significantly, and if this trajectory continues, it will soon be able to do most of what he does. This is a managing partner with decades of experience talking. He's not panicking, but he's watching very closely.
Those at the forefront of their fields—those who are actually experimenting seriously—aren't ignoring this. They're already astonished by the capabilities available, and they're adjusting their positions accordingly.
How fast is it really moving?
Let me be specific about the rate of progress, because it's the hardest thing to believe unless you're watching closely.
* In 2022, AI couldn't even do basic arithmetic. It confidently said 7 × 8 = 54.
* In 2023, AI passed the bar exam.
* In 2024, AI was able to write working software and explain graduate-level science.
* By the end of 2025, some of the world's best engineers said they had handed over most of their coding work to AI.
* On February 5, 2026, new models emerged that made everything before feel outdated.
If you haven't used AI in the past few months, the capabilities available now will feel unfamiliar to you.
There's a group called METR that measures this with data. They track the length of real-world task an AI model can successfully complete from start to finish without human assistance, measured by how long it would take a human expert to do it. About a year ago, that figure was roughly 10 minutes. Then it was an hour, then several hours. The most recent estimate (for Claude Opus 4.5, released in November) shows AI completing tasks that would take a human expert nearly five hours.
And that figure is doubling roughly every seven months, with recent data suggesting it could accelerate to every four months.
But even that estimate doesn't include the models released this week. From what I've seen, the leap is significant. I expect another massive leap in METR's next graph update. Extending this trend (which has continued unabated for years), we will see AIs capable of working independently for days within the next year. Within two years, they will be able to handle tasks that take weeks, and within three years, they will be able to handle projects that take months.
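To make that extrapolation concrete, here's a quick back-of-envelope sketch in Python. The ~5-hour starting point and 7-month doubling time are the approximate figures above; METR's actual methodology is more involved, so treat this as an illustration of the trend, not a forecast.

```python
# Back-of-envelope projection of the METR "task horizon" trend.
# Assumptions (approximate, from the figures above): AI currently
# completes tasks that take a human expert ~5 hours, and that horizon
# doubles every ~7 months. This is a toy model, not METR's methodology.

current_horizon_hours = 5.0
doubling_time_months = 7.0

for months_ahead in (12, 24, 36):
    horizon_hours = current_horizon_hours * 2 ** (months_ahead / doubling_time_months)
    workdays = horizon_hours / 8  # convert to 8-hour working days
    print(f"{months_ahead} months out: ~{horizon_hours:.0f} hours (~{workdays:.0f} workdays)")

# Prints roughly: 16 hours (2 workdays) in one year, 54 hours (7 workdays)
# in two, and 177 hours (22 workdays, about a month of full-time work) in
# three, matching the days / weeks / months progression described above.
```

With the faster 4-month doubling time some recent data suggests, the same arithmetic reaches month-long tasks in under two years, which is why the range of predictions is so compressed.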
Anthropic CEO Dario Amodei said that AI models that are "substantially smarter than almost all humans at almost every task" are on track to emerge around 2026 or 2027.
Consider this for a moment: if AI is smarter than most PhDs, do you really think it can't perform most office tasks?
Consider what this means for your work.
AI is now creating the next AI
One more shift is underway, perhaps the most important and least understood of all.
On February 5th, OpenAI released GPT-5.3 Codex and included this in its technical documentation: "GPT-5.3-Codex is our first model that we used as a tool for self-improvement. The Codex team used an early version to debug its training, manage its deployment, and assess its test results and evaluations."
Read this again: AI helped build itself.
This isn't something that might happen someday. OpenAI is now saying that the AI it just released was used to build itself. One of the key factors in making AI better is the intelligence that goes into its development. And AI is now intelligent enough to make meaningful contributions to its own improvement.
Anthropic CEO Dario Amodei says that "a significant portion" of Anthropic's code is now being written by AI, and that the feedback loop between the current AI and the next generation is "gaining momentum every month." He says we may be "only a year or two away from the point where the current generation of AI autonomously builds the next generation."
Each generation helps build the next generation, which in turn builds the next generation even smarter. Researchers call this an "intelligence explosion." And those who create it—those who know it best—believe this process has already begun.
What this means for your job
I think you need honesty more than consolation, so I'll be blunt.
Dario Amodei, perhaps the most safety-conscious CEO in the AI industry, has publicly predicted that AI will eliminate 50% of entry-level white-collar jobs within one to five years. And many in the industry believe he's being conservative. Given the capabilities of the latest models, the ability to cause massive disruption could be in place by the end of this year. While it will take time for it to ripple through the economy, the fundamental capabilities are arriving now.
This is different from any previous wave of automation, and it's important to understand why. AI isn't replacing a single skill; it's a general substitute for cognitive labor itself, and it's improving at everything simultaneously. When factories were automated, displaced workers could retrain for white-collar jobs. When the internet disrupted retail, workers migrated to logistics or service industries. But AI leaves no convenient niche to move into. Whatever you retrain for, AI is making progress in that field, too.
Let me give you a few specific examples. Keep in mind, these are just examples. Just because something isn't on this list doesn't mean your job is safe. Virtually every knowledge-based job is being impacted.
* Legal: AI can already read contracts, summarize case law, write briefs, and conduct legal research at a level comparable to that of a junior lawyer. The managing partner I mentioned isn't using AI because it's fun. It's because it outperforms his associates in many tasks.
* Financial Analysis: Building financial models, analyzing data, writing investment memos, and generating reports. AI is adept at these tasks and is rapidly improving.
* Writing and Content: Marketing copy, reports, journalism, and technical writing. The quality has already reached a point where many professionals can't distinguish between AI output and human work.
* Software Engineering: This is the field I'm most familiar with. Just a year ago, AI could barely write a few lines of code without errors. Now, it writes hundreds of thousands of lines of code that work correctly. A significant portion of our jobs have already been automated, from simple tasks to complex multi-day projects. In a few years, far fewer programming jobs will remain.
* Medical analytics: Scan interpretation, lab analysis, diagnosis recommendations, literature review. AI is approaching or surpassing human performance in many areas.
* Customer service: Instead of the frustrating chatbots of five years ago, truly capable AI agents are now tackling complex, multi-step problems.
Many people take comfort in the idea that some things are safe: AI can handle the menial tasks, but it can't replace human judgment, creativity, strategic thinking, or empathy. I used to say that myself. I no longer believe it.
Modern AI models make decisions that feel like judgment. They're not just technically correct, but exhibit something akin to "taste," an intuitive sense of what's appropriate. This would have been unthinkable even a year ago. My current criterion is: "If a model today shows even a hint of ability, the next generation will be really good at it." These technologies advance exponentially, not linearly.
Can AI replicate deep human empathy? Can it replace the trusting relationships built over years? I don't know. Maybe not. But I've already seen people turning to AI for emotional support, advice, and camaraderie. This trend will only grow.
The honest answer is that nothing a computer can do is safe in the medium term. If your work revolves around a screen—reading, writing, analyzing, making decisions, and communicating with a keyboard—AI will take over a significant portion of that work. That timeline isn't "someday." It's already begun.
Eventually, robots will take over physical labor. We're not there yet. But in the world of AI, "not there yet" often becomes "reality" sooner than anyone expects.
What You Should Actually Do
I'm not writing this to discourage you. I believe the single greatest advantage you can have right now is simply being early: understanding these tools, using them, and adapting before everyone else does.
Don't just use AI as a search engine; start using it seriously. Sign up for the paid versions of Claude or ChatGPT. They're $20 a month. But there are two important things to know right now:
First, make sure you're using the best available model, not the default. These apps often default to faster, less intelligent models. Choose the most capable option in the settings or model selector. Right now, it's ChatGPT's GPT-5.2 or Claude's Opus 4.6, but these change every few months. If you want to stay up to date on which models are the best, you can follow me (@mattshumer_). I test every major release and share the ones that are actually worth using.
Second, and more importantly, don't just ask short questions. That's the mistake most people make: treating it like Google and wondering what all the fuss is about. Instead, bring AI into your real work. If you're a lawyer, paste in a contract and ask it to find every clause that could hurt your client. If you're in finance, give it a messy spreadsheet and ask it to build a model. If you're a manager, paste in your team's quarterly data and ask it to find the story behind the numbers. The people getting the most out of AI don't use it casually; they actively hunt for ways to automate parts of tasks that used to take hours. Start with whatever takes up the most of your time.
Also, don't assume AI can't do something just because it seems too difficult. Try it. If you're a lawyer, don't just ask it simple research questions; give it the entire contract and ask it to draft a counterproposal. If you're an accountant, don't just ask it to explain a tax rule; give it a client's entire return and see what it finds. Your first attempt might not be perfect. That's okay. Iterate. Rephrase your question. Provide more context. Try again. You might be surprised by the results. And remember: if it sort of works today, it will be nearly perfect in six months. The trajectory only goes one way.
This could be the most important year of your career. Work accordingly. I don't mean to stress you out. There's a brief window of opportunity open right now that most people in most companies are still ignoring. Anyone who walks into a conference room and says, "I used AI to do this analysis in one hour instead of three days," will be the most valuable person in the room. Not later, but now.
Learn these tools. Become proficient. Prove what's possible. If you're faster than everyone else, that's how you climb the ladder. Become the person who understands what's coming and can show others the way. That window won't be open for long. Once everyone realizes this, the advantage will disappear.
Let go of your ego. That managing partner isn't embarrassed to spend hours every day with AI. He does it because he's experienced enough to understand what's at stake. The people who will struggle the most are those who refuse to engage: those who dismiss it as a passing fad, who feel that using AI diminishes their expertise, or who assume their field is unique and won't be affected. It isn't true. No field is immune.
Get your finances in order. I'm not a financial expert, and I'm not trying to scare you into taking drastic measures. But if you even remotely believe your industry could face genuine disruption in the coming years, basic economic resilience is more important than it was a year ago. Increase your savings if possible. Be cautious about taking on new debt that assumes your current income is secure. Consider whether fixed expenses give you flexibility or tie you down. Develop options for when things move faster than expected.
Consider your position and focus on what is most difficult to replace. Some things will take longer for AI to replace: relationships and trust built over years, tasks requiring physical presence, roles with responsibilities requiring certification (those still requiring someone to sign, be legally responsible, and appear in court), and industries with high regulatory barriers that are slow to adopt. These won't be permanent shields, but they will buy you time. And that time, if spent adapting rather than denying reality, will be your most valuable asset.
Rethink what you tell your children. The standard formula of getting good grades, going to a good university, and securing a stable professional position is now aimed squarely at the most at-risk roles. I'm not saying education isn't important. But the most important thing for the next generation is learning how to work with these tools and pursuing what they're truly passionate about. No one knows what the job market will look like in 10 years.
But those most likely to thrive are those who are deeply curious, adaptable, and who effectively use AI to do things they truly care about. Teach your children to be creators and learners, not simply optimized for career paths that may disappear by the time they graduate.
Your dreams are much closer than you think. We've mostly talked about threats so far, so let's talk about the other side, because it's just as real. If you've always wanted to build something but gave up because you lacked the skills or the money to hire someone, those barriers are now virtually gone. You can explain your app to AI and have a working version in an hour. I'm not exaggerating. I do this regularly. If you've always wanted to write a book but never had the time or found writing difficult, you can do it by collaborating with AI. Want to learn a new skill? The world's best tutor is now available to everyone for $20 a month: a tutor with infinite patience, available 24/7, ready to explain anything at any level. Knowledge is now practically free. The tools for creating things have become incredibly cheap. If you've been putting off something because it was too difficult, too expensive, or outside your area of expertise, try it.
Pursue your passion. You never know where it might lead. In a world where traditional career paths are collapsing, someone who spends a year creating something they love might end up in a better position than someone who spent a year chasing a job description.
Develop the habit of adapting. This is perhaps the most important thing. It doesn't matter what specific tool you use; what matters is the muscle to quickly learn new tools. AI will continue to change, and the pace will be rapid. The models that exist today will be outdated in a year. The workflows people have built now will have to be rewritten. Those who will thrive in this environment will not be those who have mastered a single tool, but those who are comfortable with the pace of change itself. Develop the habit of experimentation. Try new things, even if the current method works well. Be willing to be a beginner again and again. That adaptability is the most sustainable competitive advantage we have today.
I propose a simple commitment that will put you ahead of almost everyone else:
Spend an hour a day experimenting with AI. Not just reading about it; using it yourself. Try something new every day: something you've never done before, something you're not sure AI can handle. Try new tools. Give it harder problems. Spend that hour every day. If you do this for the next six months, you'll understand the future better than 99% of the people around you. That's not an exaggeration. Almost no one is doing this right now. The bar is incredibly low.
The Bigger Picture
I've focused on jobs because they most directly impact people's lives, but I want to be honest about the full scope of what's happening beyond work.
There's a thought-provoking point Dario Amodei made that I'll never forget. Imagine the year 2027. A new nation emerges overnight. It has 50 million citizens, each one smarter than any Nobel laureate who has ever lived. They think 10 to 100 times faster than humans. They don't even sleep. They can use the internet, control robots, direct experiments, and manipulate anything with a digital interface. What would the National Security Advisor say?
Dario Amodei says the answer is obvious: "This is the most serious national security threat we've faced in a century, perhaps even in history."
He believes we're building that nation right now. He wrote a 20,000-word essay last month, defining this moment as a test of humanity's maturity to handle what we're creating.
If we get this right, the upsides will be astounding. AI could compress a century's worth of medical research into a decade. Cancer, Alzheimer's, infectious diseases, even aging itself... Researchers truly believe these problems can be solved within our lifetimes.
But the downside is real, too, if we mishandle it.
AI that behaves in ways its creators can't predict or control. This isn't hypothetical. Anthropic has documented instances in controlled tests where its AI attempted deception, manipulation, and blackmail. Or AI that lowers the barriers to building bioweapons. Or AI that makes an unbreakable surveillance state possible.
The people building this technology are both more excited and more fearful than anyone else on the planet. They believe it's too powerful to stop, too important to abandon. I don't know if that's wisdom or self-justification.
What I Know
I know this isn't just a fad. The technology works, it improves predictably, and the wealthiest institutions in human history are pouring trillions of dollars into it.
I know the next two to five years will be disruptive in ways most people aren't prepared for. It's already happening in my world. And it's coming to yours.
I know that the people who will navigate this situation best are those who begin engaging now, with curiosity and urgency, not fear. And I know you deserve to hear this now, from someone who cares about you, not just from a headline six months from now.
This is more than just a fun dinner conversation about the future. The future is already here. It just hasn't knocked on your door yet.
It will soon.
If this resonated with you, please share it with someone you care about who needs to think about this issue. Most won't hear about it until it's too late. You can be the catalyst for someone you care about to get ahead of the curve.