When the popular conception of the Turing test went whooshing by, many of us thought it was a little strange how much daily life just kept going. This was a milestone people had talked about for decades. It felt impossibly out of reach, then all of a sudden it felt close, then all of a sudden we were on the other side. We got some great new products and not much about the world changed, even though computers can now converse and think about hard problems.
Most of the world still thinks of AI as chatbots and better search, but today we have systems that can outperform the smartest humans at some of our most challenging intellectual competitions. Although AI systems are still spiky and face serious weaknesses, systems that can solve such hard problems seem more like 80% of the way to an AI researcher than 20% of the way. The gap between how most people are using AI and what AI is presently capable of is immense.
AI systems that can discover new knowledge—either autonomously, or by making people more effective—are likely to have a significant impact on the world.
In just a few years, AI has gone from only being able to do tasks (specifically in the realm of software engineering) that take a person a few seconds, to tasks that take a person more than an hour. We expect to soon have systems that can do tasks that take a person days or weeks; we do not know how to think about systems that can do tasks that would take a person centuries.
At the same time, the cost of a given level of intelligence has fallen steeply; a 40x decline per year is a reasonable estimate over the last few years!
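To make that rate concrete, here is a minimal illustrative sketch (assuming, purely for illustration, that the roughly 40x-per-year estimate holds steady, which it may not):

```python
# Illustrative sketch only: compounding a ~40x-per-year decline in the cost
# of a given level of intelligence. The 40x figure is the essay's rough
# ballpark estimate, not a measured constant.
FACTOR_PER_YEAR = 40

for years in range(1, 5):
    cumulative = FACTOR_PER_YEAR ** years
    print(f"After {years} year(s): ~{cumulative:,}x cheaper")
```

At that rate, three years compounds to roughly 64,000x; even large errors in the per-year estimate would still imply enormous cumulative declines.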
In 2026, we expect AI to be capable of making very small discoveries. In 2028 and beyond, we are pretty confident we will have systems that can make more significant discoveries (we could of course be wrong, but this is what our research progress appears to indicate).
We’ve long felt that AI progress plays out in surprising ways, and that society finds ways to co-evolve with the technology. Although we expect rapid and significant progress in AI capabilities in the next few years, we expect that day-to-day life will still feel surprisingly constant; the way we live has a lot of inertia even with much better tools.
In particular, we expect the future to provide new and hopefully better ways to live a fulfilling life, and for more people to experience such a life than do today. It is true that work will be different, the economic transition may be very difficult in some ways, and it is even possible that the fundamental socioeconomic contract will have to change. But in a world of widely distributed abundance, people’s lives can be much better than they are today.
AI systems will help people understand their health, accelerate progress in fields like materials science, drug development, and climate modeling, and expand access to personalized education for students around the world. Demonstrating these kinds of tangible benefits helps build a shared vision of a world where AI can make life better, not just more efficient.
OpenAI is deeply committed to safety, which we think of as the practice of enabling AI’s positive impacts by mitigating the negative ones. Although the potential upsides are enormous, we treat the risks of superintelligent systems as potentially catastrophic, and we believe that empirically studying safety and alignment can help inform global decisions, such as whether the whole field should slow development to study these systems more carefully as we get closer to systems capable of recursive self-improvement. Obviously, no one should deploy superintelligent systems without being able to robustly align and control them, and this requires more technical work.
Here are several things we think could help achieve a positive future with AI:
We think that frontier labs should agree on shared safety principles and commit to sharing safety research, learnings about new risks, mechanisms to reduce race dynamics, and more. We can imagine ideas like frontier labs agreeing to certain standards around AI control evaluations being quite helpful.
Society went through a similar process to establish building codes and fire standards, which have saved countless lives.
There are two schools of thought about AI. One is that AI is like “normal technology,” in that it will progress like other technological revolutions of the past, from the printing press to the internet. Things will play out in ways that give people and society a chance to adapt, and the conventional tools of public policy should work. We will need to prioritize ideas like promoting innovation, protecting the privacy of conversations with AI, and, in partnership with the federal government, defending against misuse of powerful systems by bad actors.
We believe AI at around today’s capability levels falls roughly into this category and should diffuse everywhere, which means most developers and open-source models, and almost all deployments of today’s technology, should face minimal additional regulatory burdens relative to what already exists. It certainly should not have to face a 50-state patchwork of regulations.
The other school of thought is that superintelligence develops and diffuses in ways, and at a speed, that humanity has not seen before. Here, we should do most of the things above, but we will also need to be more innovative. If the premise is that something like this will be difficult for society to adapt to in the “normal way,” we should not expect typical regulation to be able to do much either. In this case, we will probably need to work closely with the executive branch and related agencies of multiple countries (such as the various safety institutes) to coordinate well, particularly around areas such as mitigating AI applications to bioterrorism (and using AI to detect and prevent bioterrorism) and the implications of self-improving AI.
The high-order bit should be accountability to public institutions, but how we get there might have to differ from the past.
Building an AI resilience ecosystem.
In either scenario, building out an AI resilience ecosystem will be essential. When the internet emerged, we didn’t protect it with a single policy or company—we built an entire field of cybersecurity: software, encryption protocols, standards, monitoring systems, emergency response teams, and more. That ecosystem didn’t eliminate risk, but it reduced it to a level society could live with, enabling people to trust digital infrastructure enough to build their lives and economies on it. We will need something analogous for AI, and there is a powerful role for national governments to play in promoting industrial policy to encourage this.
Ongoing reporting and measurement from the frontier labs and governments on the impacts of AI.
Understanding how AI is concretely impacting the world makes it easier to steer this technology towards positive impact. Prediction is hard: for example, the impact of AI on jobs has been hard to anticipate, in part because the strengths and weaknesses of today’s AI are very different from those of humans. Measuring what’s happening in practice is likely to be very informative.
Building for individual empowerment.
We believe that adults should be able to use AI on their own terms, within broad bounds defined by society. We expect access to advanced AI to be a foundational utility in the coming years—on par with electricity, clean water, or food. Ultimately, we think society should support making these tools widely available and that the north star should be helping empower people to achieve their goals.