Why Didn’t AI “Join the Workforce” in 2025?
Exactly one year ago, Sam Altman made a bold prediction: “We believe that, in 2025, we may see the first AI agents ‘join the workforce’ and materially change the output of companies.” Soon after, OpenAI’s Chief Product Officer, Kevin Weil, elaborated on this claim when he stated in an interview that 2025 would be the year “that we go from ChatGPT being this super smart thing…to ChatGPT doing things in the real world for you.” He provided examples, such as filling out paperwork and booking hotel rooms. An Axios article covering Weil’s remarks provided a blunt summary: “2025 is the year of AI agents.”
These claims mattered. A chatbot can summarize text or directly answer questions, but in theory, an agent can tackle much more complicated tasks that require multiple steps and decisions along the way. When Altman talked about these systems joining the workforce, he meant it. He envisioned a world in which you assign projects to an agent in the same way you might to a human employee. The often-predicted future in which AI dominates our lives requires something like agent technology to be realized.
The industry had reason to be optimistic that 2025 would prove pivotal. In previous years, AI agents like Claude Code and OpenAI’s Codex had become impressively adept at tackling multi-step computer programming problems. It seemed natural that this same skill might easily generalize to other types of tasks. Marc Benioff, CEO of Salesforce, became so enthusiastic about these possibilities that early in 2025, he claimed that AI agents would imminently unleash a “digital labor revolution” worth trillions of dollars.
But here’s the thing: none of that ended up happening.
As I report in my most recent New Yorker article, titled “Why A.I. Didn’t Transform Our Lives in 2025,” AI agents failed to live up to their hype. We didn’t end up with the equivalent of Claude Code or Codex for other types of work. And the products that were released, such as ChatGPT Agent, fell laughably short of being ready to take over major parts of our jobs. (In one example I cite in my article, ChatGPT Agent spends fourteen minutes futilely trying to select a value from a drop-down menu on a real estate website.)
Silicon Valley skeptic Gary Marcus told me that the underlying technology powering these agents, the same large language models used by chatbots, would never be capable of delivering on these promises. “They’re building clumsy tools on top of clumsy tools,” he said. OpenAI co-founder Andrej Karpathy implicitly agreed when he said, during a recent appearance on the Dwarkesh Podcast, that there had been “overpredictions going on in the industry,” before adding: “In my mind, this is really a lot more accurately described as the Decade of the Agent.”
Which is all to say, we actually don’t know how to build the digital employees that we were told would start arriving in 2025.
To find out more about why 2025 failed to become the Year of the AI Agent, I recommend reading my full New Yorker piece. But for now, I want to emphasize a broader point: I’m hoping 2026 will be the year we stop caring about what people believe AI might do, and instead start reacting to its real, present capabilities.
For example, last week, Sal Khan wrote a New York Times op-ed in which he said, “I believe artificial intelligence will displace workers at a scale many people don’t yet realize.” The standard reaction would be to fret about this scary possibility. But what if we instead responded: *says who?* The actual examples Khan provides, which include someone telling him that AI agents are “capable” of replacing 80% of his call center employees, or Waymo’s incredibly slow and costly process of hand-mapping cities to deploy self-driving cars, are hardly harbingers of general economic devastation.
So, this is how I’m thinking about AI in 2026. Enough of the predictions. I’m done reacting to hypotheticals propped up by vibes. The impacts of the technologies that already exist are already more than enough to concern us for now…