Atlantic Monthly Special Report: AI-Driven Unemployment Wave Looms, and the US Is Completely Unprepared


Article by Josh Tyrangiel

Source: The Atlantic


In 1869, a group of reformers in Massachusetts persuaded the state government to try a simple idea: counting.

At the time, the Second Industrial Revolution was sweeping across New England. It taught factory owners a lesson—one most MBA students now learn in their first semester—that increased efficiency often comes at a price, and that the price is usually borne by others. The new machines weren't just spinning cotton or forging steel; they operated at speeds beyond the limits of the human body, a complex piece of engineering shaped by millions of years of evolution for entirely different purposes. Factory owners knew this all too well, just as they knew that human tolerance for suffering has its limits, and that once those limits are crossed, people start burning things.

Nevertheless, the machines kept running at full speed.

Massachusetts therefore established the nation's first Bureau of Labor Statistics.

My many interview requests fared poorly. Even the Business Roundtable—an association of the CEOs of America's 200 most powerful companies, established precisely to speak on such issues on behalf of its members—told me that its CEO, Joshua Bolten, a former White House chief of staff under George W. Bush, "had nothing to say."

Of course, telling reporters they won't comment publicly doesn't mean these CEOs are truly silent. They are at least talking to one person: Reid Hoffman, the co-founder of LinkedIn and a member of Microsoft's board. Hoffman is a technologist by training and an optimist by nature. He knows everyone in the business world, and everyone knows he knows everyone, which makes him Silicon Valley's most sought-after confidant—a discreet, neutral sounding board CEOs can turn to when they want to think out loud.

He told me that AI has already divided CEOs into three categories.

The first category is the dabblers: latecomers who are finally starting to spend quality time with their chief technology officers (CTOs).

The second category is driven by vanity, or by a desire to have their traditional businesses taken more seriously by the tech world, and so they are eager to declare themselves AI leaders. "It's like they're saying, 'Look at me! I'm important! I'm at the center of things.' But they haven't actually done anything substantial," Hoffman said. "They're just thinking, 'Get me a seat at the AI table too.'"

The third category is quite different: these executives are quietly developing transformation plans. "They are the ones with foresight. And, commendably, I think many of them are thinking about how to help their entire workforce transform through education, reskilling, or training."

But what these three groups have in common is a belief that, after years of promises about AI, investors have lost patience with the dream. This year, they want results. And the fastest way for a CEO to produce results is layoffs. Hoffman says layoffs are coming regardless. "Many of them have convinced themselves that there's only one outcome. I think that's a lack of imagination."

Hoffman didn't waste time trying to talk CEOs out of layoffs; he knew they would do them anyway. "What I told them was, you have to demonstrate pathways and ideas for benefiting from AI beyond cost cutting. How do you generate more revenue? How do you help your employees transition to using AI more effectively?"

"This is a fever," Gina Raimondo, the former Rhode Island governor and Biden's commerce secretary, told me, referring to the wave of layoffs. "Every CEO and every board feels they need to move faster, faster. 'We have 40,000 people doing customer service? Cut it to 10,000. Leave the rest to AI.' If the core of this whole thing is moving fast and being efficient, then a lot of people are going to get seriously hurt. Given where we are as a country, I don't think we can afford that kind of shock."

Leading the Future, a new super PAC, has secured a $50 million commitment from the Silicon Valley venture firm Andreessen Horowitz, along with another $50 million from OpenAI co-founder Greg Brockman and his wife. The PAC plans to "aggressively oppose" candidates from either party who threaten the industry's priorities. And those priorities boil down to one thing: moving fast. No, even faster.

Liz Shuler told me that the AFL-CIO will keep pressuring elected officials to develop a worker-centric AI agenda, though "the fight may be less at the federal level than at the state level." More than 1,000 AI-related bills are under consideration in state legislatures. AI money will follow, of course; Leading the Future has announced plans to focus on New York, California, Illinois, and Ohio.

The executive branch has delegated almost all AI regulatory authority to David Sacks—nominally the co-chair of the President's Council of Advisors on Science and Technology, but functionally more of a part-time government player who has kept his day jobs as a venture capitalist and podcast host. Sacks is also the White House's crypto czar and a co-author of the Trump administration's "America's AI Action Plan."

A New York Times investigation found that Sacks has investments in at least 449 companies related to artificial intelligence. This isn't just the fox guarding the henhouse; the fox is livestreaming it.

AI is still a new thing. It may grow to transform our lives in unimaginably good ways. But it also raises profound questions about security, inequality, and the viability of a flawed wage-labor system that has fostered some of the most prosperous societies in human history. And there is absolutely no indication—nothing at all—that our political system is capable of handling the changes to come.

This means that the deepest challenge posed by artificial intelligence may not be related to employment at all.

"The textbook ideal of a democracy," Nick Clegg told me, "is that it expresses and resolves peacefully differences that might otherwise erupt in more destructive or violent ways. So you would expect a strong democracy to be able to absorb these kinds of changes."

Clegg, a former British deputy prime minister and leader of the Liberal Democrats, lost his parliamentary seat after Brexit and moved to California, where he led global affairs at Facebook, later Meta, for seven years—becoming a kind of Tocqueville among the powerful—before returning to London in 2025. Clegg told me that many governments "simply lack the means" to deal with AI.

He suspects that the societies most likely to navigate the next few years smoothly are small, homogeneous ones capable of mature dialogue, like those in Scandinavia—they would form "a committee led by some wise former finance minister, produce a perfect blueprint, and then everyone reaches a consensus to implement it. A hundred years from now, they will still be the happiest societies in the world." Or they might be large, authoritarian societies that refuse dialogue altogether. China, America's main AI competitor, has repeatedly demonstrated its ability to impose rapid, society-wide change without seeking consent or tolerating delay.

"If democratic governments simply drift into this period, which may demand faster change than they currently show themselves capable of," Clegg warned, "then democracy will not be able to deliver a satisfactory answer."

He then delivered, over Zoom, a distinctly British pep talk, combining Churchillian steadfastness with a slightly smug superiority about America's centuries-long record of muddling through to success. "You have a great deal of energy," he began, "and it really is remarkable how many times people have predicted America's demise."

If politics is to be part of the solution, Gary Peters won't be there for it; he is retiring next year. Marjorie Taylor Greene, arguably the most outspoken Republican advocate in Congress for protecting workers from the AI revolution (really), has already announced her resignation. Gina Raimondo, considered a potential 2028 presidential candidate, is a centrist capable of balancing accelerated AI development with prudent management. But the issue is unlikely to wait that long. "We're entering a world that seems more volatile every day," Peters says. "That uncertainty creates anxiety, and anxiety can sometimes lead to dramatic shifts in how people behave and vote."

This brings us to Bernie Sanders. Long before AI was anything more than theoretical, he was contemplating the future it would shape. Sanders told me, in his familiar staccato: "Are AI and robotics inherently evil or terrifying? No. We've already seen positive progress in health care, drug manufacturing, disease diagnosis, and more. But here's a simple question: Who is going to benefit from this transformation?"

At a Davenport, Iowa, stop on his 2025 "Fighting Oligarchy" tour, the audience booed when he mentioned AI. Sanders, a politician who runs on intuition, could sense decades of pent-up anger—over trade, inequality, the cost of living, systemic injustice, and government loyalty to corporations—converging on AI.

In October, he released a report titled "95 Points on AI and Jobs." It quoted all the alarmist pronouncements from CEOs and consulting firms about the coming destruction of employment, and it proposed measures such as a shorter workweek, stronger worker protections, profit sharing, and a "robot tax" on large corporations, with the revenue used to benefit workers harmed by AI. It was a document brimming with anger, as if Sanders had pounded it out with his fists.

At least one populist politician believes Sanders hasn't done enough.

Steve Bannon's Washington, D.C., townhouse sits just steps from the Supreme Court. He greeted me in his signature attire: camouflage overalls and a black shirt under a brown shirt under a black button-down. He hadn't shaved in days. I wouldn't have been surprised if he'd suggested we go get submarine sandwiches, or form a militia.

Bannon certainly has some, how shall I put it, roguish qualities. But he is by no means an AI novice. In the early 2000s, when he was still a film producer, he tried to buy the rights to Ray Kurzweil's *The Singularity Is Near*, the bible of the AI movement, which predicted the day machines would surpass human intelligence. Bannon thought it would make a good documentary. A few years ago, he put a journalist on the AI beat for his *War Room* podcast, tracking every corporate layoff announcement and looking for omens.

He worries that runaway AI could engineer viruses and seize control of weapons—a concern shared by national-security officials, biosafety researchers, and some prominent AI scientists—but he believes American workers face a danger so imminent that he's prepared to set aside parts of his own ideology. "I advocate dismantling the administrative state, but I'm not an anarchist," Bannon told me. "You really do have to have a regulatory body. If you don't have one for this, you might as well tear the whole system down, right? Because regulatory bodies were built for exactly this kind of thing."

Bannon wants more than regulation. He's reviving an old idea: when the government deems a technology strategically important, it should own a piece of it—as it did with the railroads, and briefly with the banks during the 2008 financial crisis. He points to Donald Trump's "wise" decision in August to take a 9.9 percent federal stake in Intel. But stakes in AI, he argues, need to be much larger—commensurate with the amount of federal support flowing to AI companies.

"I don't know—as a starting point, let's say 50 percent ownership," Bannon said. "I realize the right will go crazy." But he believes the government needs to put people with good judgment on these companies' boards. "And you have to do it now, now, right now."

Otherwise, he warned, we face "the convergence of all the worst elements in the system—greed and lust, plus those who want nothing but raw power—all converging here."

I pointed out that the person presiding over this convergence is the same person Bannon helped elect, and whom he has recently suggested should serve a third term.

"President Trump is a great business genius," Bannon said. But he has been fed "selective information" by Elon Musk, David Sacks, and others. Bannon believes these people jumped on Trump's bandwagon simply to maximize their profits and their control of AI. "If you notice, these people didn't cheer when I mentioned 'Trump 2028.' I didn't hear a single 'good job,'" he said. "They're using Trump." He anticipates a major split within the Republican Party.

Bannon's politics would ordinarily preclude bipartisan alliances, but AI has scrambled even his sense of the battle lines. He and Glenn Beck signed a joint letter calling for a ban on the development of superintelligence, out of fear that systems smarter than humans cannot be reliably constrained; joining them were prominent academics and former Obama-administration officials—"leftists who would rather spit on the floor than admit they agree with Steve Bannon about anything." He has been sketching the theory of the alliance needed for what's coming: "These ethicists and moral philosophers—you have to combine them, to be honest, with some street fighters."

The "horseshoe" phenomenon—where the far right and the far left meet—is remarkably rare in American politics. It tends to emerge when a highly technical issue (the gold standard in 1896, the subprime mortgage crisis in 2008) is alchemized into an emotional surge (William Jennings Bryan's "Cross of Gold" speech, the Tea Party movement). This is populism. And the threat of popular uprising has occasionally humanized American capitalism: the eight-hour day, the weekend, and the minimum wage all arose from the space between reform and revolution.

No one understands, or can exploit, that gray area better than Bannon. His anger about AI can sound rational one moment and flatly menacing the next. When we discussed the people running the most powerful AI labs, he said: "Let's be blunt. We're in a situation where, frankly, some people on the spectrum who are not fully formed adults—you can tell from their behavior—are making decisions for the entire species. Not for this country, but for this species. Once we hit that tipping point, there's no turning back. That's why we have to stop it, and we may have to take drastic measures."

The problem with popular uprisings is that once you encourage everyone to grab a pitchfork, the destruction can be endless. And unlike in earlier eras, we now live in a society defined by two things: phones that let everyone see how much better others are doing, and guns that people will use if they decide to do something about it.

America would be better off if its elites acted responsibly rather than out of fear. If CEOs remembered that citizens are also shareholders. If economists tried to model the future before it appeared in the rearview mirror. If politicians chose their voters' jobs over their own. None of this requires a revolution. It requires only that everyone do their job better.

For everyone, there is a basic starting point—a bar so low that clearing it amounts to a basic cognitive test for this republic.

Erika McEntarfer, the former commissioner of the Bureau of Labor Statistics, was fired by Trump in August after releasing a weak jobs report. McEntarfer saw no evidence of political interference inside the bureau, but she told me, "Threats to independence aren't the only danger to economic data. Funding and staffing are just as dangerous."

Most economics papers attempting to understand AI's impact on labor demand rely on the BLS's Current Population Survey (CPS). "It's the best source available right now," McEntarfer said, "but the sample size is rather small: only 60,000 households, and it hasn't grown in 20 years. And the response rate has been declining."

To understand what's happening in our economy, the obvious first step would be to expand the sample size of the survey and add a supplemental survey on the use of AI in the workplace. This would require only a few more economists and a few million dollars—a negligible investment. But the BLS budget has been shrinking for decades.

The United States established the BLS out of a belief that a democracy's first responsibility is to understand the condition of its people. If we lose that belief—if we cannot bring ourselves to measure reality, if we are too lazy even to count—then wish us luck facing these machines.
