I look at myself in the mirror every day, and I’d like to think I am looking at a good person. Since computing research has been a significant part of my life for decades, I have to believe my research contributes to the betterment of the world. Does that make me a techno-optimist?
The term “techno-optimism” was popularized in computing by venture capitalist Marc Andreessen in his 2023 essay “The Techno-Optimist Manifesto,” in which he argued that “Technology is the glory of human ambition and achievement, the spearhead of progress, and the realization of our potential,” and dismissed as a lie the claim that “. . . technology takes our jobs, reduces our wages, increases inequality, threatens our health, ruins the environment, degrades our society, corrupts our children, impairs our humanity, threatens our future, and is ever on the verge of ruining everything.” The manifesto advocates effective accelerationism, which calls for unrestricted technological progress. Well, I am not that kind of techno-optimist.
For a techno-pessimist manifesto, see, for example, the AI 2027 report, which lays out a scenario for how AI models could become all-powerful by 2027 and, from there, extinguish humanity. Several computing luminaries, including ACM A.M. Turing Award recipients Yoshua Bengio and Geoffrey Hinton, have recently raised the issue of existential risk from artificial general intelligence (AGI), popularizing the discussion of P(doom), the probability of existentially catastrophic outcomes of AI.
The January 2026 Communications cover story by Cuéllar et al. on “Shaping AI’s Impact on Billions of Lives” is a self-declared attempt “to develop a nuanced take on artificial intelligence’s (AI’s) impact, in contrast to the awkward discourse between AI accelerationists and AI doomers.” The article focuses on AI impact in seven fields and puts forward four guidelines on how to shape AI for the common good.
I applaud the authors for their efforts toward shaping AI for the common good, yet I still find the article deeply techno-optimistic. To understand my concern, I suggest going back to a 2016 essay by Dario Amodei, CEO of Anthropic, and Jack Clark, a co-founder of Anthropic; both were then at OpenAI. The essay, “Faulty Reward Functions in the Wild,” described an attempt to use reinforcement learning to train agents to play a video game. The agent was willing to keep setting a boat on fire and spinning in circles as long as doing so achieved its goal: a high score. The point of the essay was to illustrate the safety problem posed by an AI with a faulty reward function.
But I view the article as a metaphor for Silicon Valley. Consider the story of OpenAI, the company that launched ChatGPT in late 2022 and started the GenAI revolution. It was founded in 2015 with the goal of developing “safe and beneficial” AGI. To that end, it was founded as a non-profit corporation. In 2019, OpenAI created a for-profit subsidiary to attract investment to help scale up research and deployment efforts. In October 2025, OpenAI said it had converted its main business into a for-profit corporation, with the non-profit company (now known as the OpenAI Foundation) owning only a 26% stake. Along the way, Sam Altman, CEO of the for-profit component, was fired in November 2023 by the non-profit board and then rehired under pressure from investors. What is the explanation for this saga? The reward function is profits (or potential profits) and not betterment of the world. For example, this reward function results in an obsession with efficiency and neglect of resilience.
The ideology of Silicon Valley was dubbed “The Californian Ideology” in a penetrating 1995 essay by Richard Barbrook and Andy Cameron. They described that ideology as a mix of “cybernetics, free-market economics, and counter-culture libertarianism,” or “dotcom neoliberalism” for short. I have written in the past about some of the mantras of this ideology, such as “Information wants to be free,” which gave us surveillance capitalism, and the “Declaration of the Independence of Cyberspace,” which allowed large corporations to gain control of the Internet commons.
As World Wide Web inventor Tim Berners-Lee writes in his new memoir, This Is for Everyone, “In the early days of the web, delight and surprise were everywhere, but today online life is as likely to induce anxiety as joy,” as misinformation, polarization, election manipulation, and problematic social media use have become endemic to the Web. What happened? Faulty reward function.
Can AI contribute to the common good? Undoubtedly, I believe, but only if we amend our current, faulty reward function. That is techno-realism.