**Kendra Pierre-Louis:** For Scientific American’s Science Quickly, I’m Kendra Pierre-Louis, in for Rachel Feltman.
In 2022 OpenAI unleashed ChatGPT onto the world. In the years following, generative AI has wormed its way into our inboxes, our classrooms and our medical records, raising questions about what role these technologies should have in our society.
A Pew survey released in September of this year found that 50 percent of Americans were more concerned than excited about the increased AI use in their day-to-day life; only 10 percent felt the other way. That’s up from the 37 percent of Americans whose dominant feeling was concern in 2021. And according to Karen Hao, the author of the recent book Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI, people have plenty of reasons to worry.
Karen recently chatted with Scientific American associate books editor Bri Kane. Here’s their conversation.
Bri Kane: I wanted to really jump right into this book because there is so much to cover; it is a dense book in my favorite kind of way. But I wanted to start with something that you bring up really early on in the book, [which] is that you are able to be clear-eyed about AI in a way that a lot of reporters and even regulators are not able to be, whether because they are not as well-versed in the technology or because they get stars in their eyes when Sam Altman or whoever starts talking about AI’s future. So why are you able to be so clearheaded about such a complicated subject?
Karen Hao: I think I just got really lucky in that I started covering AI back in 2018, when it was just way less noisy as a space, and I was a reporter at MIT Technology Review, which really focuses on covering the cutting-edge research coming out of different disciplines. And so I spent most of my time speaking with academics, with AI researchers that had been in the field for a long time and that I could ask lots of silly questions to about the evolution of the field, the different philosophical ideas behind it, the newest techniques that were happening and also the limitations of the technologies as they stood.
And so I think, really, the only advantage that I have is context. Like, I have—I had years of context before Silicon Valley and the Sam Altmans of the world started clouding the discourse, and it allows me to more calmly analyze the flood of information that is happening right now.
Kane: Yeah, you center the book around a central premise, which I think you make a very strong argument for, that we should be thinking about AI in terms of empires and colonialism across history. Can you explain to me here why you think that is an accurate and useful lens and what in your research and reporting brought you to this conclusion?
Hao: So the reason why I call companies like OpenAI “empires” is both because of the sheer magnitude at which they are operating and the controlling influence they’ve developed in so many facets of society but also the tactics for how they’ve accumulated an enormous amount of economic and political power. And that’s specifically that they amass that power through the dispossession of the majority of the rest of the world.
And I highlight many parallels in the book for how they do this, but one of them is that they extract an extraordinary amount of resources from different parts of the world, whether that’s physical resources or the data that they use to train their models from individuals and artists and writers and creators or the way that they extract economic value from the workers that contribute to the development of their technologies and never really see a proportional share of it in return.
And there’s also this huge ideological component to the current AI industry. Sometimes people ask me, “Why didn’t you just make it a critique of capitalism? Why do you have to draw on colonialism?” And it’s because if you just look at the activities of these companies through a capitalist lens, it actually doesn’t make any sense. OpenAI doesn’t have a viable business model. It’s committing to spending $1.4 trillion in the next few years when it only has tens of billions in revenue. The profit motive is coupled with an ideological motive: this quest for an artificial general intelligence [AGI], which is a faith-based idea; it’s not a scientific idea. It is this quasi-religious notion that if we continue down a particular path of AI development that somehow a kind of AI god is gonna emerge that will solve all of humanity’s problems, or damn us to hell. And colonialism is the fusion of capitalism and ideology, so that—there’s, there’s just a multitude of parallels between the empires of old and the empires of AI.
The reason why I started thinking about this in the first place was because there were a number of scholars that started articulating this argument. There were two pieces of scholarship that were particularly influential to me. One was a paper called “Decolonial AI” that was written by William Isaac, Shakir Mohamed and Marie-Therese Png out of DeepMind and the University of Oxford. The other one is the book The Costs of Connection, published in 2019 by Nick Couldry and Ulises Mejias, that also articulated this idea of a data colonialism that underpins the tech industry. I realized this was the frame to also understand OpenAI, ChatGPT and where we are in this particular moment with AI.
Kane: So I wanted to talk to you about the scale of what AI is capable of now and the continued growth that these companies are planning for in the very near future. Specifically, what I think your book touches on that a lot of conversations around AI are not really focusing on is the scale of environmental impact that we’re seeing with these data centers and what we are planning to build more data centers on top of, which is viable land and potable water. So can you talk to me about the environmental impacts of AI that you are seeing and that you are most concerned with?
Hao: Yeah, there are just so many intersecting crises that the AI industry’s path of development is exacerbating.
One, of course, is the energy crisis. So Sam Altman just a couple weeks ago announced a new target for how much computational infrastructure he wants to build: he wants to see 250 gigawatts of data-center capacity built by 2033—just for his company. Who knows if it’s even possible to build that. Like, Altman has estimated that this would cost around $10 trillion. Where is he gonna get that money? Who, who knows? But if that were to come to pass, the primary energy source that we would be using to power this infrastructure is fossil fuels, because we’re not gonna get a huge breakthrough in nuclear fusion by 2033 and renewable energy just doesn’t cut it because these facilities require being run 24/7 and we—renewable energy just cannot be that supply.
And so Business Insider had this investigation earlier this year that found that utilities are, quote, “torpedo[ing]” their renewable-energy goals in order to service the data-center demand. So we are seeing natural gas plants having their lives extended, coal plants having their lives extended. And that’s not just pumping emissions into the atmosphere; it’s also pumping air pollution into communities. And part of Business Insider’s investigation found that there could be billions of dollars of health care costs that result from this astronomical increase in, in air pollution in communities that have already historically suffered the inability to access their fundamental right to clean air. We’ve seen incredible reporting coming out of Memphis, Tennessee, for example, where Colossus, the supercomputer being used to train Grok, is being run on 35 [reportedly] unlicensed methane gas turbines that are pumping toxic pollutants into that community’s air.
Then you have the problem of the freshwater consumption of these facilities. Most of these facilities are cooled with water because it is more energy-efficient, ironically. But then, when it’s cooled with water, it has to be cooled with freshwater because any other type of water leads to the corrosion of the equipment or to bacterial growth. And Bloomberg then had an investigation finding that two-thirds of these new facilities are entering into water-scarce areas. And so there are literally communities around the world that are competing with Silicon Valley’s infrastructure for life-sustaining resources.
There was this article from Truthdig that put it really well: we should be thinking of the AI industry as a heavy industry. Like, this is—it is extremely toxic to the environment and to public health around the world.
Kane: Well, some may say that the concerns around environmental impact of AI will just be solved by AI: “AI will just tell us the solution to climate change. It’ll crunch the numbers in a way we haven’t done so before.” Do you think that is realistic?
Hao: What I would say is, like, this is obviously based on speculation, and the harms that I just described are literally happening right now. And so the question is, like, how long are we going to deal with the, the actual harms and hold out for a speculative possibility that maybe, at the end of the road, it’s all gonna be fine?
Like, of course, Silicon Valley tells us we can hold on for as long as, as they want us to because they’re going to be fine—like, the Sam Altmans of the world are gonna be fine. You know, they have their bunkers built, and they’re all set up to survive whatever environmental catastrophe comes after they’ve destroyed the planet. [Laughs.]
But the possibility of an AGI emerging and solving everything is so astronomically small, and I have to emphasize, like, AI researchers themselves don’t even believe that this is going to come to pass. There was a survey earlier this year that found that [roughly] 75 percent of long-standing AI researchers who are not in the pocket of industry do not think we are on the path to an artificial general intelligence that’s gonna solve all of our problems.
And so just from that perspective, like, we should not be using a teeny, tiny possibility on the far-off horizon that is not even scientifically backed to justify an, an extraordinary and irreversible set of damages that are occurring right now.
Kane: So Sam Altman is a central figure of your book. He is the central figure of OpenAI, which has become one of the biggest, most important AI companies in the world. But you also say in your book that, in your opinion, he is a master manipulator that tells people what they want to hear, not what he truly believes or an objective truth. So do you think Sam Altman is lying or has lied about OpenAI’s current abilities or their realistic future abilities? Or has he just fallen for his own marketing?
Hao: The thing that’s kind of complex about OpenAI and the thing that surprised me the most when I was reporting the book is, originally, I came to some of their claims around AGI with the skepticism of: “This is all rhetoric and not actually rooted in any kind of sincerity.” And then I realized in the process of reporting that there are actual people who genuinely believe this within the organization and, and within the broader San Francisco community. And there are quasi-religious movements that have developed around what we then hear in the public as narratives that AGI could solve all of humanity’s problems or AGI could kill everyone.
It is really hard to figure out exactly whether Altman himself is a believer in this regard or whether he has just found it to be politically savvy to leverage the real beliefs that are bubbling up within the broader AI community as, as part of the rhetoric that allows him to negotiate more and more and more resources and capital to come to OpenAI. But one of the things that I also wanna emphasize is I think it’s—sometimes we fixate too much on individuals and whether or not the individuals are good or bad people, like, whether, whether they have good moral character or whatever. I think, ultimately, the problem is not the individual; the problem is the system of power that has been constructed to allow any individual to influence billions of people’s lives with their decisions.
Sam Altman has his particular flaws, but no one is perfect. And, like, anyone who would sit in that seat of power would have their particular flaws that would then cascade and have massive ripple effects on people all around the world. And I just don’t think that, like, we should ever be allowing this to happen. That is an inherently unsound structure. Like, even if Altman were, like, more charismatic or, or more truthful or whatever, that doesn’t mean that we should suddenly cede him all of that power. And even if Altman were swapped in for someone else, that doesn’t mean that the problem is solved.
I do think that Altman, in particular, is an incredible storyteller and able to be very persuasive to many different audiences and persuade those audiences to cede him and his company extraordinary amounts of power. We should not allow that to happen, and we should also be focused on dismantling the power structure and holding the company accountable rather than fixating on, on, necessarily, the man himself.
Kane: So one thing you just brought up is the international ramifications of some of these actions that are happening, and one thing that really struck me about the book is that you did a lot of international travel. You visited the data centers and spoke directly with AI data annotators. Can you tell me about that experience and who you met?
Hao: Yeah, so I traveled to Kenya to meet with workers that OpenAI had contracted, as well as workers that were just broadly being contracted by the rest of the AI industry that was following OpenAI’s lead. And with the workers that OpenAI contracted, what OpenAI wanted them to do was to help build a content-moderation filter for the company’s GPT models. Because at the time they were trying to expand their commercialization efforts, and they realized that if you put text-generation models that can generate anything into the hands of millions of people, you’re gonna come up with a problem where it’s been trained on the internet—the internet also has really dark corners. It could end up spewing racist, toxic hate speech at users, and then it would become a huge PR crisis for the company and, and make the product very unsuccessful.
For the workers what that meant was they had to wade through some of the worst content on the internet, as well as AI-generated content where OpenAI was prompting its own AI models to imagine the worst content on the internet to provide a more diverse and comprehensive set of examples to these workers. And these workers suffered the same kinds of psychological traumas that content moderators of the social media era suffered. They were being so relentlessly exposed to all of the awful tendencies in humanity that they broke down. They started having social anxiety. They started withdrawing. They started having depressive symptoms. And for some of the workers that also meant that their family and their communities unraveled because individuals are part of a tapestry of a particular place, and there are people that depend on them. It’s, like, a node in, in a broader network that breaks down.
I also spoke with, you know, the workers that, that were working for other types of companies, on a different part of the human labor-supply chain, not just content moderation but reinforcement learning from human feedback, which is this thing that many companies have adopted, where tens of thousands of workers have to teach the model what is a good answer when a user chats with the chatbot. And they use this method to not only imbue certain types of values or encode certain values within the models but also to just generally get the model to work. Like, you have to teach an AI model what dialogue looks like: “Oh, Human A talks, and then Human B talks. Human A asks question; Human B gives an answer.” And that’s now, like, the, the template for how the chatbot is supposed to interact with humans as well.
And there was this one woman I spoke to, Winnie, who—she worked for this platform called Remotasks, which is the back end for Scale AI, one of the primary contractors of reinforcement learning from human feedback, both for OpenAI and other companies. And she—like, the content that she was working with was not necessarily traumatic in and of itself, but the conditions under which she was working were deeply exploitative, where she never knew who she was working for and she also never knew when the tasks would arrive onto the Remotasks platform.
And so she would spend her days waiting by her computer for work opportunities to arrive, and when I spoke to her she had already been waiting for months for a task to arrive. And when those tasks arrived she was so worried about not capitalizing on the opportunity that she would work for 22 hours straight in a day to just try and earn as much money as possible to ultimately feed her kids. And it was only when her partner would tell her, like, “I will take over for you,” that Winnie would be willing to go take a nap. What she earned was, like, a couple dollars a day. Like, this is the lifeblood of the AI industry, and yet these workers see absolutely none of the economic value that they’re generating for these companies.
Kane: Do you see a future where the business of AI is conducted more ethically in terms of these workers that you spoke with?
Hao: I do see a future with, with this happening, but it—it’s not gonna come from the companies voluntarily doing that; it’s going to come from external pressure forcing them to do that. I, at one point, spoke with a woman who had been deeply involved in the Bangladesh [Accord], which is an international labor-standards agreement for the fashion industry that passed after there were some really devastating labor accidents that happened in the fashion industry.
And what she said was, at the time, the way that she helped facilitate this agreement was by building up a significant amount of public pressure to force these companies to sign on to new standards for how they would audit their supply chains and guarantee labor rights to the workers who worked for them. And she saw a pathway within the AI industry to do the same exact thing. Like, if we get enough backlash from consumers, even from companies that are trying to use these models, it will force those companies to have higher standards, and hopefully, we can then codify that into some kind of regulation or legislation.
Kane: That makes me think of another question I wanted to ask you, which is: Are the regulators that we currently have, in—under this current administration, capable of regulating this AI development? Are they caught up on the field, generally speaking, enough to know what needs regulation? Are they well-versed enough in this field to know the difference between Sam Altman’s marketing speak and [Elon] Musk’s marketing speak and [Peter] Thiel’s marketing speak, compared to the reality on the ground that you have seen with your own eyes?
Hao: We’re definitely suffering a crisis of leadership at the top in the U.S. and also in many countries around the world that would have been the ones to step up to regulate and legislate this industry. That said, I don’t think that that means there’s nothing to be done in this moment. I actually think that means there’s even more work to be done in bottoms-up governance.
We need the public to be active participants in calling out these companies. We—and we’ve seen this already happening, you know? Like, with the recent spate of mental health crises that have been caused by these AI models, we see an outpouring of public backlash, and families and victims suing these companies; like, that is bottoms-up governance at work.
And we see corporations and brands and nonprofits and civil society all calling out these companies to do better. And in fact, we recently saw a significant gain: Character.AI, as one of the companies that has a product that has been accused of killing a teen, recently announced that it’s going to ban kids from [using its chatbots]. And so there is so much opportunity to continue holding these companies accountable, even in the absence of policymakers that are willing to do it themselves.
Kane: So we’ve talked about a lot of concerns around AI’s development, but you also are saying that there’s so much optimism to be had. Do you consider yourself an AI doomer or an AI boomer?
Hao: I’m neither a boomer nor doomer by the specific definition that I use in the book, which is that both of these camps believe in an artificial general intelligence and believe that AI will ultimately develop some kind of agency of its own—maybe consciousness, sentience—and I just don’t think that it’s even worth engaging in a project that is attempting to develop agentic systems that take agency away from people.
What I see as a much more hopeful vision of an AI future is returning back to developing AI models and AI systems that support, rather than supplant, humans. And one of the things that I’m really bullish about is specialized AI models for solving particular challenges that are, that are things that, like, we need to overcome as a society.
So I don’t believe in AGI on the horizon solving climate change, but there is this climate change nonprofit called Climate Change AI that has done the hard work of cataloging all of the different challenges—well-scoped challenges—within the climate-mitigation effort that, that can actually leverage AI technologies to help us tackle them.
And none of the technologies that they are talking about are related any—in any way to large language models, general-purpose systems, a theoretical artificial general intelligence; they’re all these specialized machine-learning tools that are doing things like maximizing renewable energy production, minimizing the resource consumption of buildings and cities, optimizing supply chains, increasing the accuracy of extreme-weather forecasts.
One of the examples that I often give is also of DeepMind’s AlphaFold, which is also a specialized deep-learning tool that has nothing to do with extremely large-scale language models or, or AGI but was a, a tool trained on a relatively modest number of computer chips to accurately predict the protein-folding structures from a sequence of amino acids—very important for understanding human disease, accelerating drug discovery. [Its developers] won the Nobel Prize [in] Chemistry last year.
And these are the types of AI systems that I think we should be putting our energy, time, talent into building. We need more AlphaFolds. We need more climate-change-mitigation AI tools. And one of the benefits of these specialized systems is that they can also be far more localized and therefore respect the culture, language, history of a particular community, rather than developing a one-size-fits-all solution to everyone in this world. Like, that is also inherently extremely imperial [Laughs], to believe that we can have a single model that encapsulates the rich diversity of, of our humanity.
And so yeah, so I guess I am very optimistic that there is a more beautiful AI future on the horizon, and I think step one to getting there is holding these companies, these empires, accountable and then imagining those new possibilities and building them.
Kane: Thank you so much, Karen, for joining, and thank you so much for this work of reporting that you have done in Empire of AI.
Hao: Thank you so much for having me, Bri.
**Pierre-Louis:** And thank you for listening. Don’t forget to tune in on Monday for our rundown on some of the most important news in science.
Science Quickly is produced by me, Kendra Pierre-Louis, along with Fonda Mwangi and Jeff DelViscio. Shayna Posses and Aaron Shattuck fact-check our show. Our theme music was composed by Dominic Smith. Subscribe to Scientific American for more up-to-date and in-depth science news.
For Scientific American, this is Kendra Pierre-Louis. See you next time!