Over the past few months, I’ve introduced artificial intelligence into the hobby life of my seven-year-old son, Peter. On Saturdays, he takes a coding class, in which he recently made a version of rock-paper-scissors, and he really wants to make more sophisticated games at home. I gave ChatGPT and Claude a sense of his skill level, and they instantaneously suggested next steps. Claude proposed trying to recreate Pong in Scratch, a coding environment for kids. We downloaded it, and I sat in an armchair, with ChatGPT on my iPad, while Peter gave the project a shot on the computer. Whenever he got stuck, I answered his questions, drawing either on my own programming knowledge or on A.I. He finished a rudimentary version of the game in about an hour.
In the following weeks, with further help from me and A.I., Peter made a game based on the light-cycle duels in the movie “Tron,” complete with music and a score-keeping system. He sketched the beginnings of a “library simulator,” and finished his own arcade game, Dot in Space, about a tiny spaceship travelling at warp speed. Whenever he hit a potentially momentum-killing bump in the road, A.I. enabled us to roll through it. At my request, the systems began pointing us toward more sophisticated coding environments—Construct, GDevelop, Godot Engine, GameMaker—and suggesting more ambitious projects. Last weekend, he stayed up late, programming a polished version of Asteroids while wolfing down Cheerios and gulping from his water bottle as though it were an energy drink.
Since Peter is a kid, and I’m a dad, all this can seem cute and quaint. Isn’t it nice that A.I. can help a young person learn to code, and an older one become a coding tutor? But consider what’s happening from a different perspective. In “The Wealth of Nations,” Adam Smith described the “acquired and useful abilities” of a worker as a kind of “fixed capital”—something akin to a hunk of real estate or piece of equipment. It wasn’t until the nineteen-sixties that an economist named Theodore Schultz coined the term “human capital” to describe the ongoing, dynamic process through which people invest in improving themselves. Schultz realized that individuals spend a lot of time, money, and effort becoming more capable. They go to night school, network, read self-help books, and tend to use their free time “to improve skills and knowledge.” The work of improving human capital often happens out of sight. But the “simple truth,” he argued, was “that people invest in themselves and that these investments are very large.” Schultz suggested that these investments, which improve “the quality of human effort,” might account “for most of the impressive rise in the real earnings per worker” that economists had observed in the preceding decades.
Today, it’s obvious that companies and organizations benefit greatly from people with lots of human capital. Meetings are more useful when they involve knowledgeable participants; a product improves when the team building it possesses a wide range of skills. What’s less obvious is that companies and organizations simultaneously struggle to recognize and take advantage of changes in human capital. Suppose someone is hired to do one job, and then acquires skills that qualify her for another. Ideally, the organizational chart would shift around her as she becomes more capable; in practice, the job is often a prison. And when a worker breaks out of that prison, by getting a job elsewhere, she takes her human capital with her. For this reason, from the perspective of the company, it’s almost as though the ideal hire is someone who works feverishly to build up their human capital until their first day of work, and then suddenly slows down, becoming a highly skilled cog in the machine. Organizations want their workers to continue improving themselves—but not too fast, lest they outgrow the systems in which they’re enmeshed.
Luckily for managers, building human capital takes a long time. Or, at least, it used to: artificial intelligence is, among other things, a technology that speeds up learning and increases capability. Millions of people now use large language models. They’re not all flirting with their chatbots; instead, they’ve discovered that, with the help of A.I., they can perform tasks they’ve never done before, and learn quickly about subjects they’ve previously found inaccessible. What happens when you suddenly increase the speed with which human capital can accrue? This is one of the challenges posed by A.I. to the business world, which is struggling to figure out what the technology is worth.
For a number of reasons, it feels odd to think of A.I. as a tool for increasing human capital. Doesn’t its usefulness lie in intellectual automation, which makes hard-earned human knowledge redundant? The leading A.I. firms talk about a future in which their systems have replaced workers en masse. The big companies that are currently integrating A.I. into their businesses are almost certainly thinking along similar lines. They have to, because A.I. is expensive. Microsoft charges on a per-user basis for its corporate chatbot, Copilot. If a big company—one with thousands of employees—wants to purchase Copilot “seats” for its staff, it’s looking at investing many millions of dollars each year.
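For a rough sense of that scale, here is a minimal back-of-envelope sketch in Python. It assumes the widely reported list price of roughly thirty dollars per seat per month; the headcounts are hypothetical, chosen only to illustrate the multiplication.

```python
# Back-of-envelope estimate of an enterprise chatbot bill.
# The per-seat price is an assumption based on the widely reported
# list price of roughly $30 per user per month; headcounts are hypothetical.

def annual_seat_cost(employees: int, price_per_seat_per_month: float = 30.0) -> float:
    """Return the yearly cost of buying one seat per employee."""
    return employees * price_per_seat_per_month * 12

if __name__ == "__main__":
    for headcount in (5_000, 10_000, 50_000):
        cost = annual_seat_cost(headcount)
        print(f"{headcount:>6,} employees -> ${cost:,.0f} per year")
```

At ten thousand seats, the arithmetic lands at roughly $3.6 million a year, which is the "many millions of dollars" in question.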
Will that “spend” lead to a corresponding return? The simplest way for a company to answer that question is to think in terms of new products or staffing cuts, which could generate revenue or lower costs, respectively. (The two can be combined, of course.) In its new report on “enterprise” A.I., released this week, OpenAI offers a number of case studies focussed on products that replace human labor. A typical example is an A.I. voice agent, useful for customer-service calls; the company says one such agent is currently saving companies “hundreds of millions of dollars annually.”
All this makes it seem as though worker replacement is the logical endpoint of corporate A.I. But it’s important to note that, both conceptually and as a matter of internal accounting, big companies often have difficulty figuring out how to integrate new technologies. In the nineteen-eighties and nineties, when I.T. departments were new, it was sometimes unclear how they could be internally justified. An I.T. department might spend millions each year on new computers, networking hardware, or productivity software. Did all that spending produce a return? How could its value be judged? If a large corporation installed a mainframe, it might replace some accountants. If an I.T. manager wanted to explain to her boss why computers mattered, the simplest thing she could say might have been that they could replace the typing pool.
As time went on, however, it became clear that the costs and benefits of information technology far exceeded what could be accounted for in this way. Modern companies reorganized themselves around computers; in this new world, the point of I.T. departments wasn’t to replace computer-dependent workers but to enhance their effectiveness. Workers began demanding more from their I.T. departments. In a development known as “consumerization,” the tools used by tech-savvy employees at home—such as smartphones—became more advanced than the ones provided at work; employees, who wanted to do more, began demanding upgrades. The upshot is that, today, when I.T. “spend” is proposed, no one insists that those investments do anything so crude as replace workers. The important question is whether new investments help existing employees accomplish their agendas, and keep up with their competitors at other firms.
The idea that the best use of A.I.—perhaps the only profitable use—is the direct replacement of workers combines two strains of thought: one stemming from speculations about A.I.’s future, and the other from the short-term, balance-sheet thinking that’s probably unavoidable when companies explore new technology. It is, meanwhile, profoundly at odds with the experiences many of us have while actually using A.I. Vast numbers of individuals pay for accounts with OpenAI, Anthropic, and other companies because they find that A.I. makes them more capable and productive. It is, from their perspective, a multiplier of human capital. If you have a fine-grained sense of what you want to accomplish—write software, analyze research, diagnose an illness, repair something in your house—A.I. can help you do it faster and better. Companies today spend a lot of money to train their employees; even highly qualified white-collar workers are exposed to online seminars and sent to expensive retreats, in the hopes that they will return improved. Suppose that A.I. makes some employees five or ten per cent more knowledgeable and capable. How much should a company pay for that cognitive boost?
According to one narrative about A.I., the boost it provides will eventually be big enough to allow individual workers to replace teams. Some particularly optimistic observers suggest that, someday soon, we’ll see the first billion-dollar companies run by one or two A.I.-assisted individuals. Maybe there are some kinds of work for which this could be possible. But, if you’ve tried to use the technology to do your actual job, you’ve likely discovered its intrinsic limitations. A.I. systems aren’t smart or well-informed enough to make many important decisions; they lack critical context; they are disembodied, forgetful, unnatural, and sometimes glaringly stupid. Perhaps most significantly, they cannot be held accountable, and cannot learn on the job. They can aid you in the execution of your informed ambitions—but they cannot replace you. And so the situation, broadly speaking, is that, at many companies, trying to replace workers with A.I. will be a grave mistake—not only because A.I. cannot replace those workers, but because it actually makes them more valuable. The businesses that figure this out first will be the ones to thrive.
If A.I., in its current state, cannot replace workers en masse, then why are investors pouring trillions of dollars into the A.I. industry? One possible answer is that they’re participating in a bubble. They’ve either been taken in by sci-fi scenario-spinning, or are taking advantage of the climate created by those scenarios. Recently, the writer Cory Doctorow outlined some of his thoughts about A.I. in a talk at the University of Washington, in Seattle. “A.I. is a bubble and it will burst,” he said. “Most of the companies will fail. Most of the data-centers will be shuttered or sold for parts.” What will be left? His answer, essentially, was nothing: just a lot of newly cheap computer chips, once used for A.I., along with software tools for “transcribing audio and video, describing images, summarizing documents, automating a lot of labor-intensive graphic editing, like removing backgrounds, or airbrushing passersby out of photos.” The models themselves, Doctorow speculates, could get shut down, because they’re so expensive to run. We’ll have to endure the ensuing economic crash—“seven AI companies currently account for more than a third of the stock market,” he noted—without our chatbot therapists.
When hype is at its height, anti-hype is both inevitable and valuable. The risk, however, is that it will become as extreme as the hype it hopes to puncture. I was in college from 1998 to 2002, at the apex of the first dot-com boom; I paid much of my college tuition by running a small startup with my roommates, mainly making websites and applications for other startups. Then, as now, countless companies offered products that didn’t add up. (We worked for some of them.) It was easy to predict that many of these businesses would fail, and that investors at all scales would lose a lot of money. Still, the underlying technology—the internet—was unquestionably powerful. It’s hard not to say the same about A.I. today.
And yet, compared to the dot-com boom, the story of artificial intelligence is weirder. When the internet arrived, people weren’t sure how to make money with it. Even so, there was a sense in which the technology itself was somewhat complete. It seemed clear that connectivity would get faster and more pervasive; beyond that, the uses to which the internet might be put—streaming media, e-commerce, collaboration, cloud storage, and so on—were already broadly apparent. (Around the year 2000, for example, our little company was hired to create a workplace-collaboration system that had many of the capabilities we now associate with Slack.) Over the following decades, the engineering efforts required to create the modern internet would be prodigious; it would take extraordinary ingenuity to build the cloud, for example. But, from the beginning, the fundamental nature of the internet itself was more or less settled.
With A.I., it’s different. From a scientific perspective, the work of building and understanding A.I. is far from complete. Experts in the field differ on important issues, such as whether increases in the scale of today’s A.I. systems will deliver substantial increases in intelligence. (Perhaps new systems, shaped by further breakthroughs, will be required.) They disagree on conceptual issues, too, such as what “intelligence” means. On the all-important question of whether today’s A.I. research will lead to the invention of systems capable of human-level thinking, they hold strong, divergent views. People who work in A.I. tend to articulate their opinions clearly and forcefully, and yet there is no consensus. Anyone who weaves a scenario is disagreeing with a large cohort of her colleagues. Researchers will be answering many questions about A.I. empirically, by trying to build better A.I. and seeing what works. The A.I. bubble, in short, is more than just a bubble—it’s a collision between scientific uncertainty and evolving business thinking.
There are, at this moment, two big unknowns about artificial intelligence. First, we don’t know whether and how companies will succeed in getting value out of A.I.; they’re trying to figure that out, and they could get it wrong. Second, we don’t know how much smarter A.I. will become. About the first unknown, though, we have some clues. We can say, from firsthand experience, that having an A.I. available to you can be really useful; that it can help you learn; that it can make you more capable; that it can assist you in better utilizing your human capital, and even in expanding it. We can also say, with some confidence, that A.I. cannot do many of the important things people do—that, except in certain narrow circumstances, it is better at enabling human beings than at taking their place. Meanwhile, about the second question—whether A.I. will get a lot smarter, so smart that it transforms the world—we know very little. We are waiting to find out, and even the experts can’t agree. Our challenge is to act on what we know, and not to let our guesses about the future overrule it. ♦