Recorded live in Berkeley, at the Roots of Progress conference (an amazing event), here is the material with transcript, here is the episode summary:
Sam Altman makes his second appearance on the show to discuss how he’s managing OpenAI’s explosive growth, what he’s learned about hiring hardware people, what makes roon special, how far they are from an AI-driven replacement to Slack, what GPT-6 might enable for scientific research, when we’ll see entire divisions of companies run mostly by AI, what he looks for in hires to gauge their AI-resistance, how OpenAI is thinking about commerce, whether GPT-6 will write great poetry, why energy is the binding constraint to chip-building and where it’ll come from, his updated plan for how he’d revitalize St. Louis, why he’s not worried about teaching normies to use AI, what will happen to the price of healthcare and housing, his evolving views on freedom of expression, why accidental AI persuasion worries him more than intentional takeover, the question he posed to the Dalai Lama about superintelligence, and more.
Excerpt:
**COWEN: **What is it about GPT-6 that makes that special to you?
**ALTMAN: **If GPT-3 was the first moment where you saw a glimmer of something that felt like the spiritual Turing test getting passed, GPT-5 is the first moment where you see a glimmer of AI doing new science. It’s very tiny things, but here and there someone’s posting like, “Oh, it figured this thing out,” or “Oh, it came up with this new idea,” or “Oh, it was a useful collaborator on this paper.” There is a chance that GPT-6 will be a GPT-3 to 4-like leap that happened for Turing test-like stuff for science, where 5 has these tiny glimmers and 6 can really do it.
**COWEN: **Let’s say I run a science lab, and I know GPT-6 is coming. What should I be doing now to prepare for that?
**ALTMAN: **It’s always a very hard question. Even if you know this thing is coming, if you adapt your —
**COWEN: **Let’s say I even had it now, right? What exactly would I do the next morning?
**ALTMAN: **I guess the first thing you would do is just type in the current research questions you’re struggling with, and maybe it’ll say, “Here’s an idea,” or “Run this experiment,” or “Go do this other thing.”
**COWEN: **If I’m thinking about restructuring an entire organization to have GPT-6 or 7 or whatever at the center of it, what is it I should be doing organizationally, rather than just having all my top people use it as add-ons to their current stock of knowledge?
**ALTMAN: **I’ve thought about this more for the context of companies than scientists, just because I understand that better. I think it’s a very important question. Right now, I have met some orgs that are really saying, “Okay, we’re going to adopt AI and let AI do this.” I’m very interested in this, because shame on me if OpenAI is not the first big company run by an AI CEO, right?
**COWEN: **Just parts of it. Not the whole thing.
**ALTMAN: **No, the whole thing.
**COWEN: **That’s very ambitious. Just the finance department, whatever.
**ALTMAN: **Well, but eventually it should get to the whole thing, right? So we can use this and then try to work backwards from that. I find this a very interesting thought experiment of what would have to happen for an AI CEO to be able to do a much better job of running OpenAI than me, which clearly will happen someday. How can we accelerate that? What’s in the way of that? I have found that to be a super useful thought experiment for how we design our org over time and what the other pieces and roadblocks will be. I assume someone running a science lab should try to think the same way, and they’ll come to different conclusions.
**COWEN: **How far off do you think it is that just, say, one division of OpenAI is 85 percent run by AIs?
**ALTMAN: **Any single division?
**COWEN: **Not a tiny, insignificant division, mostly run by the AIs.
**ALTMAN: **Some small single-digit number of years, not very far. When do you think I can be like, “Okay, Mr. AI CEO, you take over”?
Of course we discuss roon as well…