Source: The Atlantic
Author: Josh Tyrangiel
Original title: America Isn't Ready for What AI Will Do to Jobs
Compiled and edited by: BitpushNews
The full text is approximately 25,000 words and will take about 45 minutes to read.
In 1869, a group of reformers in Massachusetts persuaded the state government to try a simple idea: counting.
At the time, the Second Industrial Revolution was sweeping across New England. It taught factory owners a lesson—one most MBA students now learn in their first semester—that increased efficiency often comes at a price, and that the price is usually borne by others. The new machines weren't just spinning cotton or forging steel; they operated at speeds exceeding the limits of the human body, a complex piece of engineering that millions of years of evolution had designed for entirely different purposes. Factory owners knew this all too well, just as they knew that human tolerance for suffering has limits, and that once those limits are crossed, people start burning things.
Nevertheless, the machines kept running at full speed.
Therefore, Massachusetts established the nation’s first Bureau of Labor Statistics, hoping that data could accomplish what conscience could not.
Policymakers believed that by measuring working hours, conditions, wages, and what economists now call “negative externalities”—but what was then simply called “the child’s torn-off arm”—they might engineer a relatively fair outcome for everyone. Or, to put it more cynically, “a sustainable level of exploitation.”
Years later, after federal troops opened fire on striking railroad workers and wealthy citizens began funding private arsenals—signs that society was not functioning well—Congress decided the idea was worth scaling up nationally and established the federal Bureau of Labor Statistics (BLS).
Statistics cannot abolish injustice, and rarely can they quell controversy. But the act of “counting”—an attempt to see the facts clearly and a promise that the government will follow a shared set of facts—signals a willingness to pursue fairness, or at least demonstrates that efforts are being made to achieve it. Over time, this willingness becomes crucial. It is one way a republic earns trust.
The BLS is a small miracle of civilization. Each month it sends detailed surveys to approximately 60,000 households and 120,000 businesses and government agencies, supplementing the findings with qualitative research to verify and revise them. It has arguably contributed to one line on America’s report card: in roughly 250 years, the United States has never fought a violent class war. And one has to admire the entertainment value hidden in its seemingly trivial data.
It is through the BLS that we know that in 2024, 44,119 people were employed in mobile food service (i.e., food trucks), an increase of 907% from 2000; non-veterinary pet care (grooming, training) employed 190,984 people, an increase of 513%; the United States has nearly 100,000 massage therapists, and in Napa, California, the concentration of this profession is five times the national average.
These, along with thousands of other BLS statistics, paint a picture of an increasingly prosperous society and a workforce perpetually adaptable to change. But like all statistical agencies, the BLS has its limitations. It excels at revealing what has already happened, but is very limited in telling us what is to come. The data cannot predict economic recessions or global pandemics—nor can it foresee the arrival of a technology that might disrupt the labor market like a meteorite hitting the dinosaurs.
CEOs' Overt Schemes
I am referring, of course, to artificial intelligence.
After a debut that could have been directed by H.P. Lovecraft—recall Elon Musk’s early warning that “we are summoning the demon”—the AI industry has shifted from nightmare language to soporific corporate jargon: driving innovation, accelerating transformation, refactoring workflows.
This is the first time in history that humanity has invented something truly miraculous and then rushed to dress it in an ordinary corporate fleece vest.
Selling software to businesses can certainly be lucrative, but downplaying AI’s impact is also a useful pretense. This technology can process a hundred reports before you finish your coffee, draft and analyze documents faster than an entire team of paralegals, create music on par with pop stars or Juilliard-trained prodigies, and program—really program, not just copy-paste from Stack Overflow—with the precision of top engineers. Tasks that once required skill, judgment, and years of training are now performed, coldly and relentlessly, by software capable of continuous self-improvement.
AI is already ubiquitous, and any intelligent knowledge worker can delegate tedious tasks to machines. Many companies, including Microsoft and PwC, have already instructed their employees to leverage AI to improve efficiency.
But anyone smart enough to subcontract tasks to AI is smart enough to imagine what comes next: one day, “augmentation” evolves into “automation,” and cognitive obsolescence sends them looking for work in food trucks, pet spas, or massage parlors. At least until the humanoid robots arrive.

Many economists insist that everything will be alright.
Capitalism is resilient. The widespread adoption of ATMs actually increased the number of bank tellers, the introduction of Excel increased the number of accountants, and the emergence of Photoshop stimulated the demand for graphic designers. In each case, new technologies automated old tasks, increased productivity, and created jobs with higher salaries than anyone could have imagined before.
BLS predicts that employment will grow by 3.1% over the next 10 years. While this is lower than the 13% growth of the previous decade, adding 5 million new jobs is not a disaster in a country with a stable population.
However, some things are difficult for economists to measure. Americans tend to derive meaning and identity from their work. Most don’t want to change careers, even when they’re assured they’ll find other jobs—and no one is assuring them. A Reuters/Ipsos poll in August 2025 showed that 71% of respondents worried that artificial intelligence would “cause too many people to lose their jobs permanently.”
If today’s “factory owners” weren’t themselves publicly predicting that AI will permanently eliminate jobs, this evidence of public panic might be dismissed as yet another case of needless worry.
In May 2025, Dario Amodei, CEO of the AI company Anthropic, stated that AI could push unemployment to 10 to 20 percent within the next one to five years and “wipe out half of all entry-level white-collar jobs.” Ford CEO Jim Farley estimated that AI would replace “literally half of all white-collar workers” within a decade.
OpenAI CEO Sam Altman has revealed that he and a group chat of fellow tech CEOs are betting on the date when the first one-person company with a billion-dollar valuation will appear. (This magazine’s business division has a partnership with OpenAI.)
Other companies, including Meta, Amazon, UnitedHealth, Walmart, JPMorgan Chase, and UPS, have recently announced layoffs, using more euphemistic language in their reports to investors, such as the rise of "automation" and "a downward trend in total headcount."
Taken together, these statements are highly unusual: capitalists are warning workers that the ice beneath their feet is about to crack—while continuing to stomp on it.
It’s like watching two versions of the same scene. In version one, the ice is rock solid, because it always has been; in version two, a lot of people end up at the bottom of the lake. You can’t tell which version you’re in until the surface finally gives way—and by then, your options are extremely limited.
AI is already transforming jobs through a series of assigned tasks. If the transformation is slow enough and the economy adjusts quickly enough, economists may be right: we'll be fine. Even better. But if AI triggers a rapid restructuring of the workforce—compressing years of change into months and impacting roughly 40% of jobs globally (as predicted by the IMF)—the consequences will extend far beyond the economic sphere. They will test already vulnerable political institutions.
So the question is whether the upheaval we are approaching can be managed through statistical methods, or whether it is so cruel that it is unbearable to count.
Rearview Mirror: The Economist's Blind Spot
Austan Goolsbee is the president of the Federal Reserve Bank of Chicago, a professor at the University of Chicago’s Booth School of Business, and a former chairman of the Council of Economic Advisers under President Obama. He is also one of the few economists you wouldn’t dread bumping into at a party.
When I asked Goolsbee whether he had concrete data showing that AI had begun to erode the labor market, he gave a smiling response that was both obvious and unhelpful. He answered, and yet he didn’t.
I’ve known Goolsbee a long time and enjoy these moments—the self-deprecating jokes about the powerlessness of both our professions. Economists rarely give direct answers about the present, and journalists hate it when the future won’t reveal itself before deadline.
We spoke in September, shortly after the release of the research paper that became known as the “canary paper.” Analyzing millions of payroll records, three scholars at Stanford University’s Digital Economy Lab concluded that since late 2022, employment among workers aged 22 to 25 in the occupations most exposed to generative AI—the “canaries”—had declined by roughly 13%.
For days, the paper was all anyone in the field could talk about—and the talk consisted mostly of picking it apart.
Some argued that the report overstated the impact of ChatGPT; others pointed to the inherent cyclicality of youth employment; still others suggested that the sharp rise in interest rates over the same period was a more likely source of the volatility. Moreover, the canary paper contradicted a study released a few weeks earlier by the Economic Innovation Group, which concluded that AI is unlikely to cause mass unemployment in the short term, though it will reshape jobs and wages. That study’s title said it all: “AI and Jobs: The Final Word (Until the Next One).”
This is what Goolsbee wanted to emphasize: economists are bound by numbers. And the numbers show no sign yet that AI has cost people their jobs. “It’s too early to draw conclusions,” he said.
A lack of certainty should not be mistaken for a lack of concern. The Federal Reserve’s mandate includes promoting maximum employment, so corporate statements about impending layoffs caught Goolsbee’s attention. But the data didn’t match.
One possibility is that the labor market is weaker than it appears, with the weakness being absorbed inside companies rather than showing up in the unemployment rate. But if companies were hoarding more workers than they need—“labor hoarding”—you would expect to see weak productivity growth. The logic is as predictable as a hangover: too many workers, not enough work, productivity falls. “But the opposite is true,” Goolsbee says. “Productivity growth has been very high. I don’t know how to explain that.”
Productivity is a shortcut to a more prosperous society. If each worker can produce more in the same amount of time—more goods, better services, faster results—then the total economy will grow even without an increase in the number of workers. This is a rare productivity boost that expands the overall economic pie, rather than simply redistributing shares.
For the past few years, U.S. productivity has been booming. This may be temporary, the result of some one-off boost, such as the small-business startup wave spurred by the pandemic. But Goolsbee, with his penchant for complicating simple questions, points out that general-purpose technologies like electricity and computers can create lasting productivity gains, making society as a whole wealthier.
Whether AI falls into that category, only time will tell. How long? “Several years,” Goolsbee said.
At the same time, there is another variable. The immediate risk to employment may not be AI itself but companies that, lured by AI’s promise, over-invest before fully understanding its capabilities. Goolsbee recalls the dot-com bubble, when companies frantically laid fiber-optic cable and built capacity. “In 2001, when we discovered that the internet’s growth rate wasn’t 25% per year but only 10%—still a great rate—it meant we had too much fiber, and then business investment collapsed. A huge number of people lost their jobs in the ‘traditional way.’”
If AI investment were to collapse in a similar way, the scenario would feel familiar: painful, destabilizing, and accompanied by scathing media commentary. But it would be a financial reset, not a technological retreat—a scenario economists are good at recognizing, because it has happened before.
This is the paradox of economics. To understand how quickly the present is propelling us into the future, you need a fixed point of reference, but all fixed points remain in the past. It's like driving while only looking in the rearview mirror—challenging if the road is straight; disastrous if the road isn't.
David Autor and Daron Acemoglu are among the best rearview-mirror drivers in the field. Both are at MIT, and both excel at understanding past economic upheavals. Acemoglu, a 2024 Nobel laureate in economics, studies inequality; Autor focuses on labor. Both maintain that the story of AI and its consequences will largely depend on speed—not because they assume lost jobs will automatically be replaced, but because a slower pace of change gives society time to adapt, even if some jobs disappear forever.
The labor market has a natural rate of adjustment. If 3% of an occupation’s workers retire or are laid off each year, you hardly notice. But after a decade, a third of the jobs in that occupation are gone. Elevator operators and tollbooth attendants experienced this slow decline without harming the economy. “When change happens faster,” Autor told me, “things get complicated.”
Autor is best known for his research on the “China shock.” In 2001, China joined the World Trade Organization; six years later, 13% of U.S. manufacturing jobs—about 2 million—had disappeared. The shock disproportionately hit small manufacturing industries—textiles, toys, furniture—concentrated largely in the South. “Workers in many places still haven’t recovered,” Autor said, “and we are clearly suffering the political consequences.”
But AI isn't trade policy; it's software. Even if it first impacts certain professions and regions—for example, lawyers in large city law firms might feel the effects years earlier than workers in less digitalized industries—this technology won't be geographically limited. Ultimately, everyone will be affected.
All of this sounds ominous until you remember the most important thing about software: people hate it, almost as much as they hate change.
This is why many economists believe that the AI "meteorite" is still at least a decade away.
“Those tech CEOs want us to believe that the automation market is a given, that everything will happen smoothly and lucratively,” Acemoglu said. Then, with a Nobel Prize-worthy scoff, he added, “History tells us that it will actually happen much slower.”
The arguments are as follows:
Before AI can transform a company, it must access the company’s internal data and weave itself into existing systems—which sounds easy, provided you’re not a chief technology officer. A poorly kept secret among Fortune 500 companies is that many of their critical functions still run on heavy, industrial-grade mainframe computers. These machines almost never fail and are therefore irreplaceable. Mainframes are like Christopher Walken: working tirelessly since the 1960s, excellent in certain specific roles (processing payments, securing data), and by now almost no one truly understands how they work.
Integrating legacy technology with modern AI means reconciling hardware, vendors, contracts, ancient programming languages, and people—each with strong opinions about the “right” way to change things. Months pass, then years; holiday parties come and go; and CEOs still can’t understand why the miracle of AI hasn’t solved all their problems.
Every new general-purpose technology is, for a time, "hijacked" by the chaos of the old. The first batch of power plants opened as early as the 1880s, when no one was arguing whether electricity was superior to the steam engine. But because factories were built around steam engines in the basement, power was transmitted to the machines via long shafts, belts, and pulleys running through the building. To adopt electricity, factory owners didn't just need to buy an electric motor; they needed to tear down and rebuild the entire factory. Some did, but most simply waited for the existing facilities to wear out and become obsolete, which explains why the significant economic benefits of electricity didn't become apparent until 40 years later.
However, these explanations offer little comfort to the economist Anton Korinek. He told me he is “extremely worried.” He believes the U.S. could see significant unemployment as early as this year—“a very clear labor-market effect.”
“And then the economists you talk to will say, ‘Now I see it in the data!’” Korinek paused. “We shouldn’t joke about this. It’s too serious.”
Korinek is a professor at the University of Virginia, where he leads an initiative on the economics of transformative AI. Last year, Time magazine named him one of the most influential people in AI. But he didn’t set out to be an economist. He grew up in a mountain village in Austria writing machine code in 0s and 1s—the least glamorous form of programming, and the most rigorous. It teaches you where instructions bottleneck, where systems clog, and what breaks first under pressure.
Since the breakthroughs in deep learning in the early 2010s, he has been closely following the development of AI, even though his doctoral dissertation focused on financial crisis prevention. When he first saw a demonstration of a large language model in September 2022, it took him "about five seconds" to start thinking about its impact on future work, beginning with his own.
We had breakfast together in Charlottesville in the fall. Korinek is youthful and slim, with delicate thin-rimmed glasses and a light-red beard. My overall impression was of a man who would rather customize Excel tabs than predict the apocalypse. And yet here he was, uttering the words economists despise most: This time might be different.

Korinek’s argument is simple: his colleagues haven’t misread the data; they’ve misread the technology. “We can’t fully imagine what it means to have very smart machines,” Korinek said. “Machines have always been stupid, so we don’t trust them, and rolling them out always takes time. But if they are smarter than us, in many ways they can roll themselves out.”
This is already happening. During sporting events, baffling advertisements promote AI tools that promise to accelerate the integration of other AI tools into the workflows of large companies. Deployment times have reportedly been cut by as much as 50%, because many of these systems require neither large-scale new hardware nor manual rewriting.
This is where Korinek parts ways with the rearview-mirror economists. If AI moves as quickly as he anticipates, the harm to many workers will arrive before institutions can adapt—and each successful application only increases the pressure for more applications.
Consulting firms, for example, have traditionally charged hefty fees for junior staff to conduct research and draft reports. Clients tolerated the cost because there was no alternative. But if one firm can use AI to deliver the same results faster and cheaper, its competitors face a brutal choice: adopt the technology, or explain why they still charge a premium for human time. Once one company adopts AI and undercuts its competitors, the rest either quickly follow or are eliminated. Competition not only rewards adopters; it makes delayed adoption unforgivable.
Korinek acknowledges the two standard objections: the data is so far inconclusive, and new technologies have historically created more jobs than they destroyed. But he believes his colleagues need to start looking forward. “Whenever I talk to people at the West Coast labs”—Korinek is an unpaid member of Anthropic’s economic advisory council—“I don’t get the sense they’re artificially exaggerating the product’s potential. I usually sense they’re just as fearful as I am. We should at least consider the possibility that what they’re telling us may actually come true.”
Korinek is unsure whether the technology itself can be steered by policy, but he hopes more economists will engage in scenario planning so policymakers aren’t caught off guard. Because mass unemployment is never just unemployment; it means loan defaults, cascading bankruptcies, shrinking consumer demand, and a self-reinforcing recession—the kind that turns a shock into a crisis, and a crisis into the decline of an empire.
For a brief period at the beginning of 2025, CEOs were enthusiastically offering “thought leadership” on AI and its implications for employees and profits. Then, almost simultaneously, the pronouncements eerily ceased. Anyone who has watched a shark’s fin break the surface of the water and then disappear knows that this is hardly reassuring.
The Bureau of Labor Statistics offers a simple explanation: the U.S. employs approximately 280,590 public-relations professionals, a 69% increase over the past two decades (and nearly seven times the number of journalists). It’s not hard to imagine their professional advice: AI is unpopular with the public, and CEOs who talk about layoffs are even less popular. So let’s just stop talking about AI and jobs, shall we?
In October, the day after The New York Times revealed that Amazon executives had drawn up plans to automate more than 600,000 jobs by 2033, a PR director at a large multinational corporation told me, “We absolutely will not talk about this anymore.” It was a small journalistic first: the first time a source had requested anonymity in order to tell me, for the record, that they would no longer be speaking publicly.
In other words, CEOs of Fortune 100 companies like Walmart, Amazon, and Ford, as well as executives at rising AI-driven companies like Anthropic, Stripe, and Waymo—people who were holding forth on AI and jobs just a few months ago—either declined or ignored multiple interview requests. Even the Business Roundtable—an association of CEOs from roughly 200 of the most powerful U.S. companies, which exists precisely to speak on such issues on behalf of its members—told me that its CEO, Joshua Bolten, a former White House chief of staff under George W. Bush, had “nothing to say.”
Of course, telling a reporter you won’t comment publicly doesn’t mean you’re truly silent. These CEOs are all talking to at least one person: Reid Hoffman, co-founder of LinkedIn and a member of Microsoft’s board. Hoffman is a technologist by background and an optimist by nature. He knows everyone in the business world, and everyone knows he knows everyone, which makes him Silicon Valley’s most sought-after confidant—a rational, neutral sounding board to whom CEOs can turn when they want to think out loud.
He told me that AI has already divided CEOs into three categories.
The first category is the dabblers: latecomers who are finally starting to spend quality time with their chief technology officers.
The second type is driven by vanity , or a desire to have their traditional businesses taken more seriously by tech gurus, and thus they are eager to declare themselves AI leaders. "They're like saying, 'Look at me! I'm important! I'm at the core.' But they haven't actually done anything substantial," Hoffman said. "They're just thinking, 'Get me a place at the AI table too.'"
The third category is quite different: these executives are secretly developing transformation plans . "They are the ones who have foresight. And commendably, I think many of them are thinking about how to help the entire workforce transform through education, skills reengineering, or training."
What the three groups share is a belief that, after years of promises about AI, investors have lost patience with the dream. This year, they want results. And the fastest way for a CEO to produce results is layoffs. Layoffs, Hoffman says, are inevitable. “Many of them have convinced themselves that there’s only one outcome. I think that’s a lack of imagination.”
Hoffman didn't waste time trying to persuade CEOs against layoffs; he knew they would. "What I told them was, you have to demonstrate pathways and ideas for benefiting from AI beyond cost-cutting. How do you generate more revenue? How do you help your employees transition to using AI more effectively?"
"Slow" Washington
“This is a high fever,” Gina Raimondo, the former Rhode Island governor and Biden’s commerce secretary, told me, referring to the wave of layoffs. “Every CEO and every board feels they need to move faster, faster. ‘We have 40,000 people doing customer service? Cut it to 10,000. Leave the rest to AI.’ If this whole thing is driven purely by speed and efficiency, a lot of people are going to be seriously hurt. Given where we are as a country right now, I don’t think we can afford that kind of shock.”
Like Hoffman, Raimondo occupies a unique niche: she's a Democrat who walks into a boardroom without triggering their "culture metal detector" alarm. She co-founded a venture capital firm, and executives at AI companies find her pragmatic and tech-savvy, and are willing to talk to her. "This is a technology that can make us more efficient, healthier, and more sustainable," Raimondo says, "but only if we manage this transformation process very carefully."
Last summer, Raimondo traveled to Sun Valley, Idaho, for Allen & Company’s four-day conference, known as “summer camp for billionaires.” She posed the same two questions to everyone: How are you using AI? And what happens to your employees when you do?
Many CEOs admit they are caught in a dilemma. Wall Street expects them to replace human workers with AI; if they don't, they themselves will lose their jobs. But if they all order massive layoffs, they know the consequences will be enormous—to their employees, to the country, and even to their conscience as human beings.
Raimondo responded, “The country’s most powerful CEOs have a responsibility to help address this issue.” She envisioned the possibility of a “large-scale, new type of government-business partnership.” “Imagine if we could get companies to take responsibility for retraining and reassigning laid-off employees.”
She knew what that sounded like. “A lot of people say, ‘Oh, Gina, you’re so naive. That’s impossible.’ Okay. But I’m telling you, if we don’t use this moment to do things differently, the America we know is going to end.”
If these executives’ concerns are as genuine as Raimondo believes, then perhaps they can be spurred into action. Liz Shuler, president of the AFL-CIO, has been trying—mostly without success. She told me that CEOs and tech leaders are so focused on winning the AI race that “workers have become a forgotten appendage.”
Shuler knew that, coming from a union leader, such a complaint would sound predictable, so she preemptively conceded the point: “Most workers, and especially union leaders, will initially panic, right? Like, ‘Wow, this is basically going to wipe out all the jobs, everyone loses their safety net, we have to stop it’—but we know that’s not going to happen.” Rather than panic, Shuler said, she has been talking with the leaders of the AFL-CIO’s member unions, which together represent about 15 million people, urging them to figure out what they want from the technology—and what they’re prepared to trade for it—in the short window before AI is imposed on them.
So far, only one company has taken up the offer. Microsoft has agreed to involve its employees in discussions about how AI is developed and governed. Most notably, the agreement includes a “neutrality” provision that allows workers to unionize without retaliation—unprecedented in the tech industry. “We see this as a model,” Shuler said. “We hope others will recognize that workers are at the heart of this debate and of our future.”
If you squint hard enough, you might convince yourself that the Microsoft deal is a proof of concept. More likely, it’s an isolated case. All the persuasion, the appeals to reason, patriotism, and shared humanity are battling a truth as old as wage labor itself: American capitalism’s pursuit of efficiency, like water flowing downhill, is inevitable, indifferent, and predictably consequential for anyone who happens to be at the bottom. With AI, capital possesses for the first time a tool promising something the factory owners and mill bosses could never have dreamed of: maximum efficiency with minimal need for its most expensive and demanding input—employees.
In this context, the CEOs' silence carries a different kind of echo. It could be a cold admission that the decision has been made, or it could be a repressed plea for the government to save them from this self-destructive competition.
So, let's turn our attention back to Washington.
You may have noticed that our current politics are unbearable. And yet the only way to make them bearable—to rediscover the glimmer of hope at their core—is through more politics. This is the joke at the heart of Washington: the very kind of maneuvering that hollowed the place out is also the only path to its renewal.
If there were one issue big enough and urgent enough to cut through America’s political dysfunction, you might assume the future of American jobs would be it. But Senator Gary Peters of Michigan told me, “At least from my conversations in the Senate, not many people are talking about it.” Peters, a Democrat, pointed to a mentality prevalent among Republicans (though he said both parties share responsibility): “It’s like, ‘We don’t need to do anything. Everything will be fine. In fact, the government should just get out of the way. Let industry keep moving, keep innovating.’”

It's difficult to slow the development of artificial intelligence without handing over American technological hegemony to China—a point emphasized with almost religious fervor by tech lobbying groups. Forcing AI labs to disclose the consequences of their deployments in advance is also difficult, as they themselves often lack the necessary understanding. You can regulate the use of AI to replace workers, but enforcement requires a regulatory body that doesn't currently exist, as well as the technological expertise that governments lack.
That said, the government has a decades-old playbook for guiding workers through economic shocks, and Peters has been pounding the table trying to get Congress to use it.
Since the U.S. began opening its economy more aggressively to global trade in 1974, the Trade Adjustment Assistance (TAA) program has helped more than 5 million people with retraining, wage insurance, and relocation subsidies, at a cost of roughly $500 million a year in recent times. In 2018, Peters co-sponsored the Automation TAA Act, which aimed to extend the same benefits to workers squeezed out by AI and robots. It died quietly, like most proposals in Congress. In 2022, TAA’s authorization expired, and Peters’s efforts to revive the program have gone nowhere in a Congress deeply averse to trade votes and new spending.
This is incredibly foolish. The U.S. currently has roughly 700,000 vacant factory and construction jobs. (Ironically, one of the few factors slowing AI development is a shortage of qualified HVAC technicians to install cooling systems in data centers.) Ford CEO Jim Farley, who predicted that half of white-collar jobs might disappear, has also been saying that the auto industry is short hundreds of thousands of dealership technicians, jobs that sit in a long-term sweet spot: technically demanding enough to command six-figure salaries, yet reliant on the kind of precise manual dexterity that robots struggle to replicate. But someone has to pay for the months of training these positions require. "These are very good jobs," Peters said. Yet "the federal government spends far more money on four-year higher-education institutions than on skills-training programs."
There are countless ideas for what to do if AI hollows out vast numbers of jobs: universal basic income (UBI), employer-independent benefits, lifelong training, and shorter workweeks. These ideas surface whenever tech anxiety reaches its peak; then, just as "naturally," they fade away, defeated by cost, politics, or the simple fact that they require a level of collaboration that the U.S. has failed to achieve for decades.
The 119th Congress is like a ghost ship, steered by burnout and a desire to avoid tough choices. Meanwhile, the AI industry is pouring tens of millions of dollars into making sure no one takes the helm. A super PAC called Leading the Future, for example, has reportedly secured a $50 million commitment from the Silicon Valley venture firm Andreessen Horowitz and another $50 million from OpenAI co-founder Greg Brockman and his wife. The group plans to "aggressively oppose" candidates from either party who threaten industry priorities. And those priorities boil down to one thing: moving fast. No, faster.
Shuler told me that the AFL-CIO will keep pressuring elected officials to develop a worker-centric AI agenda, but "the fight may not be at the federal level so much as at the state level." More than 1,000 AI-related bills are under consideration in state legislatures. AI money will follow, of course; Leading the Future has announced plans to focus on New York, California, Illinois, and Ohio.
The executive branch has delegated nearly all AI regulatory authority to David Sacks, nominally co-chair of the President's Council of Advisors on Science and Technology but functionally a part-time government official who has kept his day jobs as venture capitalist and podcast host. Sacks is also the White House's cryptocurrency czar and co-authored the Trump administration's "America's AI Action Plan."
A New York Times investigation found that Sacks has investments in at least 449 companies tied to artificial intelligence. This isn't just the fox guarding the henhouse; the fox is livestreaming it!
AI is still a new thing. It may grow to transform our lives in unimaginably good ways. But it also raises profound questions about security, inequality, and the viability of a flawed wage-labor system that has fostered some of the most prosperous societies in human history. And there is absolutely no indication—nothing at all—that our political system is capable of handling the changes to come.
This means that the deepest challenge posed by artificial intelligence may not be related to employment at all.
Final Warning
"My God, the textbook ideal of democracy," Nick Clegg said, "is that differences which might otherwise erupt in more destructive or violent ways get expressed and resolved peacefully. So you would expect a strong democracy to be able to absorb these kinds of changes."
Clegg, a former British deputy prime minister and leader of the Liberal Democrats, lost his parliamentary seat after Brexit and subsequently moved to California, where he led global affairs at Facebook, later Meta, for seven years, becoming a kind of Tocqueville embedded among the powers that be before returning to London in 2025. Clegg told me that many governments "simply lack the means" to deal with AI.
He suspects that the societies most likely to weather the next few years smoothly are small, homogeneous societies like those in Scandinavia, capable of mature dialogue—they would form “a committee led by a wise former finance minister, produce a perfect blueprint, and then everyone reaches a consensus to implement it. A hundred years from now, they will still be the happiest societies in the world.” Or they might be large, authoritarian societies that refuse to engage in dialogue. China, as the United States’ main AI competitor, has repeatedly demonstrated its ability to implement rapid and societal-wide changes without seeking consent or delay.
"If democratic governments simply drift through this period, which may require faster change than they have so far shown themselves capable of," Clegg warned, "then democracy will not be able to deliver a satisfactory answer."
He then delivered, over Zoom, a distinctly British pep talk, combining Churchillian steadfastness with a slightly smug sense of superiority about America's centuries-old record of muddling through. "You guys are very energetic," he began, "and it's really remarkable how many times people have predicted America's demise."
If politics is to be part of the solution, Gary Peters will be out of the picture; he is retiring next year. Marjorie Taylor Greene, arguably the most vocal Republican in Congress on protecting workers from the AI revolution (really), has announced her resignation. Gina Raimondo, considered a potential 2028 presidential candidate, is a centrist capable of balancing accelerated AI development with prudent management. But the issue is unlikely to wait that long. "We're entering a world that seems more volatile every day," Peters says. "This uncertainty creates anxiety, and anxiety can sometimes lead to dramatic shifts in how people behave and vote."
This brings us to Bernie Sanders. Long before AI became a practical reality, he was already contemplating the future it would shape. Sanders told me in his familiar staccato: "Are AI and robots inherently evil or terrifying? No. We've already seen positive progress in health care, drug manufacturing, disease diagnosis, and more. But here's a simple question: Who will benefit from this transformation?"
At his 2025 “Fighting the Oligarchs” speaking tour stop in Davenport, Iowa, the audience booed when he mentioned AI. Sanders, a politician who relies heavily on “intuition,” could sense decades of pent-up anger—about trade, inequality, the cost of living, systemic injustice, and government loyalty to corporations—converging on the focus of AI.
In October, he released a report titled "95 Points on AI and Jobs." It quoted all the alarmist pronouncements from CEOs and consulting firms about the impending destruction of jobs, and it proposed measures such as a shorter workweek, stronger worker protections, profit sharing, and an unspecified "robot tax" on large corporations, with the revenue used to "benefit workers harmed by AI." It is a document brimming with anger, as if Sanders had pounded it out with his fists.
At least one populist politician believes Sanders hasn't done enough.
Steve Bannon's Washington, D.C., townhouse sits near the Supreme Court. He greeted me in his signature look: camouflage, a black shirt under a brown shirt under a black button-down, several days of stubble on his face. I wouldn't have been surprised if he had suggested we go out for submarine sandwiches or form a militia.
Bannon certainly has some, how shall I put it, roguish qualities. But he is by no means an AI novice. In the early 2000s, when he was still a film producer, he tried to buy the rights to Ray Kurzweil's *The Singularity Is Near*, the bible of the AI movement, which predicted the day machines would surpass human intelligence. Bannon thought it would make a good documentary. A few years ago, he hired a correspondent to cover AI for his *War Room* podcast, tracking every corporate layoff announcement for omens.
He worries that runaway AI could engineer viruses and seize control of weapons, a concern shared by national-security officials, biosafety researchers, and some prominent AI scientists, but he believes American workers face a danger so imminent that he is prepared to abandon parts of his ideology. "I advocate dismantling the administrative state, but I'm not an anarchist," Bannon told me. "You really have to have a regulatory body. If you don't have a regulatory body for this, you might as well tear the whole system down, right? Because regulatory bodies are built for this kind of thing."
Bannon wants more than just regulation. He's calling for an old idea: when the government deems a technology strategically important, it should own a portion of it—like it did with railroads and briefly with banks during the 2008 financial crisis. He points to Donald Trump's "wise" decision in August to give the federal government a 9.9% stake in Intel. But he argues that stakes in AI need to be much larger—commensurate with the amount of federal support flowing to AI companies.
"I don't know; as a starting point, let's say 50 percent ownership," Bannon said. "I realize the right wing will go crazy." But he believes the government needs to place people with good judgment on the boards of these companies. "And you have to get on that now, now, right now."
Otherwise, he warned, we face "the convergence of all the worst elements in the system: greed and lust, plus those who want only to seize raw power, all converging here."
I pointed out that the person overseeing this convergence is the same person Bannon helped to get elected, and he recently suggested that this person should be re-elected for a third term.
"President Trump is a great business genius," Bannon said. But he has been getting "selective information" from Elon Musk, David Sacks, and others. Bannon believes these people jumped on Trump's bandwagon simply to maximize their profits and control in the AI field. "If you notice, these people didn't cheer when I mentioned 'Trump 2028.' I didn't hear a single 'good job,'" he said. "They're using Trump," and he anticipates a major split within the Republican Party.
Bannon's politics would ordinarily preclude bipartisan alliances, but AI has scrambled even his sense of where the boundaries lie. He and Glenn Beck signed a joint letter calling for a ban on the development of superintelligence, out of fear that systems smarter than humans cannot be reliably constrained; joining them were prominent academics and former Obama-administration officials, "leftists who would rather spit on the floor than admit they agree with Steve Bannon on anything." He has been sketching the theory of the alliance needed to meet the moment: "These ethicists and moral philosophers, you have to combine them, to be honest, with some street fighters."
The "horseshoe" issue—where far-right and far-left positions meet—is remarkably rare in American politics. It often emerges when highly specialized issues (like the gold standard in 1896 or the subprime mortgage crisis in 2008) are alchemically transformed into an emotional fluctuation (like William Jennings Bryan's "Golden Cross" or the Tea Party movement). This is populism. And the threat of popular uprising occasionally humanizes American capitalism: the eight-hour workday, weekends, and minimum wage all arise from the space between reform and revolution.
No one understands, or can exploit, that gray area better than Bannon. His anger about AI can sound reasonable one moment and flatly menacing the next. When we discussed the people running the most powerful AI labs, he said, "Let's be blunt. We're in a situation where, frankly, some people who aren't fully adults, on the spectrum (you can tell from their behavior), are making decisions for the entire species. Not for this country, but for this species. Once we hit that tipping point, there's no turning back. That's why we have to stop it, and we may have to take drastic measures."
The problem with popular uprisings is that once you encourage everyone to pick up pitchforks, the destruction can be endless. And unlike in earlier eras, we now live in a society defined by two things: phones that let everyone see exactly how much better others are doing, and guns that people will use if they decide to do something about it.
America would be better off if its elites acted responsibly rather than out of fear. If CEOs remembered that citizens are also shareholders. If economists tried to model the future before it shows up in the rearview mirror. If politicians cared about their constituents' jobs more than their own. None of this would require a revolution. It would simply require everyone to do their jobs better.
For everyone, there is a basic starting point, a threshold so low that it amounts to a basic cognitive test for this republic.
Erika McEntarfer, the former commissioner of the Bureau of Labor Statistics, was fired by Trump in August after releasing a weak jobs report. McEntarfer saw no evidence of political interference inside the BLS, but she told me, "Independence isn't the only threat to economic data. Funding and staffing are equally dangerous."
Most economic papers attempting to understand AI's impact on labor demand rely on the BLS's Current Population Survey (CPS). "It's the best source available right now," McEntarfer said, "but the sample size is rather small. Only 60,000 households, and it hasn't increased in 20 years. The response rate has been declining."
To understand what's happening in our economy, the obvious first step would be to expand the sample size of the survey and add a supplemental survey on the use of AI in the workplace. This would require only a few more economists and a few million dollars—a negligible investment. But the BLS budget has been shrinking for decades.
The United States established the BLS out of the belief that a democracy's first responsibility is to understand the condition of its people. If we lose that belief, if we cannot bring ourselves to measure reality, if we are too lazy even to count, then good luck to us against the machines.




