A tech C.E.O. explains why A.I. probably won’t cure diseases anytime soon. Hint: You still need humans.
Dec. 26, 2025, 7:00 a.m. ET
The leaders of the biggest A.I. labs argue that artificial intelligence will usher in a new era of scientific discovery, which will help us cure diseases and accelerate our ability to address the climate crisis. But what has A.I. actually done for science so far?
To understand, we asked Sam Rodriques, a scientist turned technologist who is developing A.I. tools for scientific research through his nonprofit FutureHouse and a for-profit spinoff, Edison Scientific. Edison recently released Kosmos — an A.I. agent, or A.I. scientist to use the company’s language, that it says can accomplish six months of doctoral or postdoctoral-level research in a single 12-hour run.
Sam walks us through how Kosmos works, and why tools like it could dramatically speed up data analysis. But he also discusses why some of the most audacious claims about A.I. curing disease are unrealistic, as well as what bottlenecks still stand in the way of a true A.I.-accelerated future.
Below is a transcript of our conversation, lightly edited for clarity and length.
Listen to ‘Hard Fork’: Where Is All the A.I.-Driven Scientific Progress?
Roose: So, we have brought you here today to be our science expert, our guide to the biggest recent A.I.-powered breakthroughs that are happening in science. This is an area that I sort of understand in an ambient way is important, and there are big things happening, but neither of us is a scientist, although I did make a killer baking soda volcano in elementary school. So we have so much to talk about today, but before we get into some of the particulars, I want to ask you about the project you’ve been working on. Last month, the commercial arm of your nonprofit, which is called Edison Scientific, launched a new A.I. scientist called Kosmos that you say can accomplish the equivalent of six months of a Ph.D. student’s or postdoctoral scientist’s work in a single run of the model. Tell us about how Kosmos works and where that six-month number comes from.
Rodriques: Yeah, exactly. And actually, I want to just start out by saying that when I got that six-month number, my reaction originally was, “There is no way that this is true,” right? And we have now measured it in a bunch of different ways. I can walk you guys through that. But basically, just to take a step back, we have been working for two years on figuring out how to build an A.I. scientist. And the concept here is there’s so much more science that we can do than we have scientists, right? So how do we scale up science? And the thing that happened with Kosmos that is pretty cool is that Kosmos is the first thing I think we’ve made that actually really feels like an A.I. scientist when you’re working with it. Which is to say that you go in, you give it a research objective, it goes away, and it comes back with insights that are actually really deep and interesting and sometimes wrong, but about 80 percent of the time right. Which is kind of similar to a human: if you ask a human to go away and do something, they come back right a similar percentage of the time. And it’s a kind of new experience working with it, so that’s very exciting.
The six-month number specifically, the way that we measured this was we had a bunch of academic collaborators, you know, scientists who had done a bunch of science previously that they had not published yet. And we basically gave the same research objective and the same data set to the A.I., to Kosmos, and we asked it, you know, to go away and just make new discoveries. And overnight it would come back having found the same things that the researchers had found. And then you go and you ask the researchers, how long did it take you to find this in the first place? And they would say like three months, five months, six months, whatever. And so that’s where it comes from. It’s the amount of time that it took them to come up with the finding.
Casey Newton: So let me just ask you a couple of questions so I can ground myself here. Is this tool kind of a box you type into like the other chatbots? And if so, what is powering it? Did you guys sort of build your own model from scratch? Did you sort of make fine tunings to another company’s model?
Rodriques: Yeah, exactly. So it is indeed a box that you basically type into. You ask it a research objective. It’s not a chatbot, right? Like it runs for 12 hours or so before eventually coming back to you with its findings. In terms of how it’s built, we build on top of a bunch of different language models from OpenAI, from Google, from Anthropic. Like in any given run, we use models from all the different providers. We also have our own models for specific tasks that we’ve trained internally, where those models are much better for the specific tasks that we trained on than the models that the frontier providers make. And then the key insight in Kosmos is basically this use of what we call like a structured world model. So one of the main limitations with A.I. systems today is that they’re just limited in the length of the task and the sophistication of the task that they can carry out before they kind of go off the rails. They like, you know, forget what they’re doing, they no longer are on task. And what we figured out was a way to have them contributing to this world model that gets built up over time that basically describes the full state of knowledge about the task that they’re working on, which then means that we can orchestrate hundreds of different agents running in parallel, running in series, and have them all working towards a coherent goal. And that was the real unlock.
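Rodriques doesn’t describe Kosmos’s implementation in detail, but the core idea he sketches, many agents reading from and appending to one shared, structured state rather than each carrying its own chat history, can be illustrated with a minimal sketch. Every name here is hypothetical; nothing below comes from Kosmos itself:

```python
# Hypothetical sketch (no names here come from Kosmos): many agents share
# one structured "world model" instead of each keeping its own transcript,
# so long-running work stays on task.
from dataclasses import dataclass, field

@dataclass
class WorldModel:
    objective: str                                   # the user's research objective
    facts: list = field(default_factory=list)        # findings accumulated so far
    open_questions: list = field(default_factory=list)

    def summary(self) -> str:
        # Agents are prompted with this compact summary rather than a raw
        # conversation history, which keeps context small and coherent.
        return (f"Objective: {self.objective}\n"
                f"Known: {'; '.join(self.facts) or 'nothing yet'}\n"
                f"Open: {'; '.join(self.open_questions) or 'none'}")

def run_agent(world: WorldModel, question: str) -> None:
    # Stand-in for a real model call: a worker sees the shared summary plus
    # one question, then writes its result back into the shared state.
    _context = world.summary()
    world.facts.append(f"finding for: {question}")

world = WorldModel(objective="find a mechanism for a diabetes-linked variant")
world.open_questions = ["which protein binds near the variant?",
                        "which nearby gene does it regulate?"]
for q in list(world.open_questions):  # these could just as well run in parallel
    run_agent(world, q)
    world.open_questions.remove(q)

print(world.summary())
```

The real system presumably involves far more (retrieval, code execution, verification), but the design point Rodriques names is visible even here: shared structured state, not per-agent transcripts, is what lets hundreds of agents contribute to one coherent goal.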
Roose: Right. Another thing that I found interesting about Kosmos is the cost. This model costs $200 per prompt.
Rodriques: Yeah.
Roose: So every time you give it a task, you’re paying $200. Why is it so expensive?
Rodriques: I mean, it uses a lot of compute. That’s the fundamental answer: it uses a lot of compute, right?
Roose: Like, give us a sense of how much.
Rodriques: Well, so an individual run from Kosmos will write 42,000 lines of code and read 1,500 research papers on average. Like, if you run Claude, it might write a few hundred lines of code, right? So that gives you some sense. There’s a lot of compute that is going into this.
Newton: Have you ever had like a scientist whose cat walks across the keyboard and accidentally hits enter, and all of a sudden spends like $600?
Rodriques: This is a problem. This is a problem. Right, so the thing that you have to understand is that if you are a scientist and you go and do an experiment, you get some data back, you’re going to spend $5,000 or $10,000 gathering that data. And so what scientists want is the absolute best performance that they can get. And, like, scientists who have used Kosmos generally come back to me and say they can’t believe we’re only charging $200 for it, right? And, you know, I will say, $200 right now is a promotional price. We will actually have to charge more eventually.
Roose: Oh, it’s going up. So get those prompts in before Christmas!
Rodriques: Exactly. But really, you know, it’s like if you have to spend thousands of dollars gathering the data, the cost at the end of the day is not the limitation. We do have to be very generous with refunds because people, you know, make mistakes all the time.
Newton: Ah, I made a typo.
Rodriques: Yeah, exactly.
Roose: So what you just mentioned about the sort of the tests that you all ran to figure out how long this thing could run for, how much time it was saving scientists, that’s about like sort of replicating existing research that’s out there. But a lot of what we hear from the people who are running these big A.I. labs is the possibility that pretty soon A.I. will start making novel scientific discoveries. We’ll start doing things that existing scientific methods and processes can’t do. How close are we to that?
Rodriques: That’s already happening, actually. So if you go and read the paper that we put out about Kosmos, we put out seven conclusions that it had come to, three of which were replications of existing findings and four of which were net new contributions to the scientific literature, like new discoveries.
Newton: And of those, what’s the most impressive?
Rodriques: So, one of the ones that we really like: the human genome contains millions of genetic variants, right? These are differences between different people’s DNA that are associated with disease. And for the most part, we know that a variant is associated with a disease, but we have no idea why, right? And so we gave Kosmos a bunch of raw data about a huge number of different genetic factors, so like what the variants are, what proteins bind near the variants, right, all these kinds of things, and just asked it, for Type 2 diabetes, to go and identify a mechanism associated with one of these variants. And it came back with a variant that was not in a gene. Kosmos identified that this is actually somewhere where a different protein binds. It was able to identify what protein binds there and what gene is being expressed, and it connected that to the actual mechanism of that gene, SSR1, which is involved in the pancreas in secreting insulin, right?
Newton: OK, so in this case, is what I’m hearing that your model was able to do some very fancy reasoning over some existing data and identify something that sort of no other human scientist had gotten around to and might not have for a really long time?
Rodriques: Yeah, that’s right.
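The chain of reasoning Rodriques describes (variant, to the protein that binds there, to the gene that protein regulates, to that gene’s function) can be caricatured as a few dictionary lookups. Everything below except SSR1 and its role is invented for illustration:

```python
# Toy illustration with invented data: link a non-coding variant to a
# candidate mechanism by chaining variant -> bound protein -> regulated gene.
# "variant_X" and "protein_Y" are made up; only SSR1 and its role in
# insulin secretion come from the conversation.
binds_at_variant = {"variant_X": "protein_Y"}   # what binds near each variant
regulated_by = {"protein_Y": "SSR1"}            # which gene that protein regulates
gene_function = {"SSR1": "insulin secretion in the pancreas"}

variant = "variant_X"
protein = binds_at_variant[variant]
gene = regulated_by[protein]
print(f"{variant} -> {protein} -> {gene}: {gene_function[gene]}")
```

The hard part, of course, is not the lookups but producing those mappings from raw binding and expression data, which is the “fancy reasoning” Newton refers to.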
Newton: OK.
Rodriques: And I think science generally consists of deciding what data to gather, gathering that data and then drawing conclusions. And so at this point, basically, it’s like step number three that Kosmos is aimed at. And you know there’s more work —
Newton: You left out step zero, which was getting the Trump administration to unfreeze your funding. But everything else was right.
Rodriques: (laughter)
Roose: So what happens when you get a discovery like this from Kosmos? Do you have to then go validate it? Do you hand it to like a team of researchers who then have to make sure it works? Like, what happens next?
Rodriques: Yeah, absolutely. You have to go and validate it. And so that’s actually one of the things also, you know, in the paper, we actually describe how we went and validated that particular variant. In general, when people are using it, yeah, you go in. I mean, actually, literally when you run a Kosmos run, the first thing you have to do is you have to understand what it’s telling you. Because it has just done something that scientists think is like six months’ worth of work, and you’re going to sit there for a long time just like reading and understanding it. Once you’ve read it and understood it, then yes, indeed, you’re going to go and you’re going to run various experiments, do your own analysis, cross-reference to try to like convince yourself that this is true. And then based on what your research objective is, you’ll decide next steps, right? You know, in this case, I think it’s probably low likelihood there’s a new drug target from this particular finding, right? But you could go and you could run this on other findings, and then, eventually, maybe you find new drug targets, you start a drug program, that’s, you know —
Roose: So one concern that I’ve heard people express about models like Kosmos is that this is just like sort of not where the roadblocks are, that the reason that we don’t have more A.I.-discovered drugs and designed drugs out there curing diseases is not actually because we don’t have the research methods to discover those, it’s because, like, you got to go to trials, and you got to recruit human subjects, and you got to get F.D.A. approval. Like, all that stuff just takes a lot longer than the actual discovery of the drug. So what problems are models like these helping to solve in our scientific process right now?
Rodriques: So, so, absolutely. I actually really agree that like the bottleneck at the end of the day in solving medicine is basically, you know, clinical trials. I mean, and the easiest way to see this is if you look at the number of diseases that we know how to cure in mice, right? It’s astronomical, because obviously you can just run experiments, and in humans, things are just slow. That said, if you think that every experiment that is being run right now by pharma companies, like every clinical trial that’s being run, is like optimally planned and optimally, you know, conceived given the full state of knowledge, you are off your rocker, right? There’s like no way. And those experiments cost hundreds of millions of dollars. And so the question is, like, we do at the end of the day have to run clinical trials. How do we make sure that those experiments are the best experiments we could possibly be running given all the knowledge that we have, given all the data we have? There’s so much data that we have that has insights in it that are waiting to be found, where we just like do not have people to go and find them, and that’s ultimately going to feed into better experiments, better trials, right?
Newton: Well, so then I’m curious how you see your tool fitting into the workflow of today’s scientist. Is it the sort of thing where, like, I have completed my experiments, and now I want some help doing some analysis? Is it, I have all these old experiments that I only did a little bit of analysis on, and I’m curious if I can sort of squeeze any more juice out of them? Or like, what other ways are you seeing the A.I. being really good right now for working scientists?
Rodriques: Yeah, yeah, great question. So going back to me in 2019, which is when I was wrapping up my Ph.D., right? I had this gigantic data set, and I wanted to graduate because I was a Ph.D. student, which meant that I was making, you know, $40,000 a year or something, and there were great opportunities to go out and, like, not be a Ph.D. student anymore. OK, so I spent six months literally just sitting at my desk trying to analyze the data and drawing conclusions, reading papers, right? Right now, that’s where Kosmos fits in. You would just take that data set, you give it to Kosmos, it comes up with a lot of findings. Right now, you need to go and do a bunch of manual work to validate those findings and so on. Pretty soon it’s going to come up with findings and you’re going to be like, “Great.”
Credit: Photo Illustration by The New York Times; Yaroslav Kushta/Getty
Roose: Sam, I’m curious if you could help sort of give us and our listeners a state of the world of A.I. science right now. Recently the White House announced what it’s calling the Genesis Mission, which is a federal effort to kind of corral and harness all of these data sets that the federal government is sitting on and use them to do new scientific exploring. We also have lots of efforts, including yours, but lots of things going on in and around the tech industry, the biotech industry, people doing A.I. for materials science. Give us a sense of like the lay of the land of like what’s hot right now in A.I. science, where is the effort and money going?
Rodriques: Right. In order to understand the landscape of A.I. and science, the first thing, like fundamentally, that you have to understand is that A.I. is about building models, right? So, for example, right, like a language model, like what is a language model? A language model is fundamentally a model of human language. It just so happens that when you build a model of human language, it learns how to think like a human in some sense because humans encode their thoughts in language. This is like one of the greatest discoveries, right, certainly of the 21st century, maybe of all time. So similarly, when we talk about A.I. in science, what you have to think about is that you are modeling things. That is what A.I. does. And there are kind of two fundamental categories. There’s modeling the natural world, right? And there’s modeling the process of doing science. These things are fundamentally different, and the reason to make this distinction is because, you know, what we are doing, right, we are modeling the process of doing science. The other side of the A.I.-for-science world is building models that can, for example, predict the structure of proteins, that can generate a new antibody, that can create a new organism from scratch, which are all things that have happened in 2025, where there’s just a huge amount of momentum.
Roose: Yeah, that makes sense. I mean, of the things that are happening in the part of the process of modeling the natural world, you mentioned protein folding, novel organisms, what has most excited you as a scientist that you’ve seen?
Rodriques: What’s most exciting right now, I think, without a doubt, is this trend toward what we call generative models. These are models that can produce examples of, you know, proteins or antibodies or whatever that have desired characteristics basically from scratch. This is a new capability that we have never had before, and it’s huge.
Newton: I’m curious about the reliability piece as you’re running all of these experiments. You know, I saw this going around on social media this week, and I reproduced it myself: if you asked Google, “Is 2026 next year?” it said, “No, 2026 is not next year, it is the year after next.” So, in such a world, Sam, some people might get concerned at the idea that we’re now entrusting the A.I. with all of our data analysis. How much time are scientists having to spend going back and essentially rechecking the work of the A.I.s, and what kind of tax does that place on their work?
Rodriques: Yeah, this is very funny. I mean, look, you have to spend a lot of time going back and checking. But like, to be clear, this is true regardless of whether or not an A.I. does it or whether you ask a friend to do it. If you’re going to publish a paper, you damn well better go back and check it and be sure that you are confident. And it’s never going to be 100 percent, right? The best you’re going to do is you’re going to get to a place where it is similarly good to if you were doing it yourself, which is not 100 percent because you’re not infallible, and checking the work is like always going to be faster than producing it in the first place. By a lot.
Roose: A lot of our biggest scientific breakthroughs in history have come from these kind of strange accidents, these moments of serendipity. You know, penicillin starts growing in a petri dish, and we discover, “Oh my God, this is great.” Does A.I. preserve that kind of serendipity, those kinds of accidents, or do they sort of optimize it away?
Rodriques: Yeah, this is a great question, and the fact of the matter is we just really don’t know yet. This is going to be a really important core question that a lot of people are asking.
Roose: What’s your intuition on that?
Rodriques: I think that they probably will, because —
Roose: They probably will preserve it?
Rodriques: They probably will preserve it because penicillin, my understanding is that basically like the window was left open on some agar with like no antibiotic in it. Obviously, they didn’t have antibiotics because this was the discovery of the first one, right? So, the window was left open with some agar and like, you know, some spores flew onto it and began growing and they observed that the bacteria was inhibited, right? That’s a mistake. Someone screwed up, right? And that mistake led to something fantastic, and you will have mistakes, I think, that will be preserved.
Newton: But in the meantime, scientists should always leave their windows open. You never know what’s going to happen.
Rodriques: Seriously, though, when you get first-year graduate students in academia, they have no idea what to do. They have no idea what to do. And that is a huge source of scientific progress, because they just do the most random, kooky stuff that no one who knows anything would ever think to do, and it’s actually really important.
Roose: You almost want your like A.I. scientist model to hallucinate a little bit.
Rodriques: Totally. Or just add noise, right? We talk about this as just adding noise. This is actually important for biological evolution also, right? The genome has a lot of noise, and that’s how evolution randomly comes up with new stuff: there’s a protein that’s just totally random, doesn’t do anything, then one day, all of a sudden, oops, it does something, and that’s great, right?
Roose: What do you make of the leaders of the big A.I. labs, people like Demis and Dario and Sam Altman, who are saying, you know, “A.I. is going to allow us to cure all diseases, or most diseases, within the next decade or two?”
Rodriques: Decade is crazy. And I’m happy to take a very strong stance on this, because if I’m wrong, it’s a great thing, right? If I’m wrong, everyone wins. But like, decade is crazy.
Roose: Why is it crazy?
Rodriques: Because, for the reason that we were talking about before: You have to run clinical trials. If we had a drug right now that completely halted aging in humans between the ages of, like, 25 and 65, you would not know for 10 years, because you can’t detect in humans in that age range whether or not they’re aging for at least, you know, five or 10 years. Like, you don’t detect from one year to the next that you’re aging. So, you won’t know if the thing is working.
Newton: I don’t know. Some people at my 10-year high school reunion were already looking pretty rough.
Rodriques: (laughter)
Newton: Hate to say it.
Rodriques: I did say 25!
Newton: OK, fair enough.
Rodriques: But, right. I mean, we have to conduct experiments. Those experiments will take time. Now, 30 years, I think it’s very plausible. We don’t know what is going to be possible. We don’t know if it’s possible to halt aging. We don’t know if it’s possible to cure all diseases or whatever, but between now and 30 years from now, I think you should expect to see a humongous leap forward in terms of our knowledge.
Newton: Let me drill in on that a bit because I think some people might hear that as saying that this is essentially a regulatory issue, that we just don’t have the F.D.A. set up to measure this. I’m curious about the experimental side of it, though. Because my understanding is we don’t really have enough biologists to run all the experiments that we want. We might not have the funding to fund the experiments. And you did raise the point that some of these experiments just actually take a long time to run. So what are all of the factors that in your mind are going to make it so hard to cure these diseases?
Rodriques: Oh my gosh. You have to go and you have to like, you know, even supposing you have a molecule that you want to test in a human, and you know which humans you want to test it in, you have to go and make it, right? Humans are big, they require a lot of it. You have to make sure it’s a high enough grade that you can actually put it into a human. You have to find the patients, which means forming relationships with the doctors, right? Actually, you know, waiting until you have enough patients who are willing to do it. For many diseases, there just aren’t that many patients. And so finding the patients is hard, right? And then you have to actually dose them. You have to wait and see what happens, right? Even with no regulation, it would be slow.
Newton: Yeah. There’s no A.I. shortcut for almost any of that, at least not right now.
Rodriques: No, like what A.I. will allow us to do is it will allow us to discover a lot of things where we already have the information to discover it. We just haven’t figured that out yet. The other thing that A.I. researchers sometimes talk about, which is probably not reasonable, is the idea that you’re one day going to get GPT-7 and just, like, ask it how to cure Alzheimer’s and it will just tell you. You should not expect that. My expectation is that there is not enough knowledge. We do not have enough knowledge to solve it in principle, even with infinite intelligence, right? With infinite intelligence, there would still be some things that are just not known about the world where we have to conduct the experiments to see. You’ll be able to plan the best possible experiment given everything that’s known, but you will not just be able to, you know, de novo kind of figure it out, right?
Roose: Casey, I took Latin. That means from new.
Newton: Oh thank you, thank you. That saved me a step of Googling.
Roose: This isn’t quite science, per se, but I’m curious what you make of this, Sam. All of the big A.I. labs are obsessed with math, with winning the International Math Olympiad, with putting up a gold medal score, with solving these unproven math theorems. And I have a take about this, which is that I believe that this is because these labs are filled with people who were themselves competitive mathletes in high school and took part in the I.M.O. and did pretty well. And a lot of those people think that like A.G.I. will just sort of be like a slightly smarter version of them. But I’m curious, like, why are these places so obsessed with math as being one of the sort of first places that they want to make a lot of progress?
Rodriques: There are two reasons. I think that one of the reasons is exactly what you just said. It’s just familiar, right? But the other reason is that you can measure progress, right? So, ultimately, like what drives progress in machine learning, a big part of what drives progress, is benchmarks. With math, you can tell whether or not your proof is right. And there’s kind of like an infinite number of things to go and prove. So, it’s just like really easy to tell whether or not you’re getting better. And things like the I.M.O. just present like great opportunities. By contrast, if you look at some of the biggest breakthroughs recently, biggest breakthroughs this year in A.I. for biology, right? Things like Chai Discovery, Nabla, coming up with these like extremely good models for producing antibodies de novo, right? Huge breakthrough, but ultimately, the win for them is going to be when it’s approved in a human, and that might be another five years or something. Arc Institute putting out like the first time anyone has designed an organism from scratch, they designed a bacteriophage. It’s a kind of virus that infects bacteria. Incredible, right? But it’s just harder to evaluate. Like, how good is it? Like, you’re not going to release it into the wild, and so, it’s harder to evaluate, whereas the I.M.O. is just super clean. And so I think that’s one thing that we think about a lot is just like, you know, how do we get really clear benchmarks that we can pursue to measure whether or not we’re doing a good job at science?
Roose: I have an answer here: International Cancer Curing Olympiad.
Newton: I like that.
Roose: Should we start this?
Newton: I think that would be great!
Roose: We can give people a medal if they win. Uh, let’s get on it, labs. So, when the C.E.O.s or the leaders of these companies make these statements about how we’re going to cure all disease using A.I. in the next 10 years or 15 years, or whatever timeline they give, are they doing that because they don’t understand the bottlenecks? I mean, these are very smart people. So, what are they not seeing, or are they just doing this as a sort of a marketing exercise? Is this an attempt to get people excited about A.I. who might otherwise be freaked out about it? Why are they giving these projections?
Rodriques: No, look, I mean, I think that they are — reasonable people could disagree. There are lots of reasons why you could argue that actually the models will get super smart and they will figure out ways to measure whether or not we’re making progress before you run a clinical trial, and that will increase the iteration cycle, right? Like, there are reasonable arguments to be made about that, right? Like, you know, that we are just going to not do full clinical trials anymore, we’ll just use biomarkers. Like, that’s not crazy and that’s one way that I could be wrong, and maybe in 10 years, we do have cures for all diseases. So, that’s part of it. Obviously, there’s part of it, which is that they want to hype the thing. Part of it is that, you know, does Sam Altman really intimately understand what it takes to go and manufacture, like, scale up manufacturing for a small molecule to put it into the clinic? Like, probably not, right? So, there’s a mixture. I don’t think any of it’s in bad faith. It’s just that people are very excited. There will be a little bit of a collision with reality at some point. We’re going to see exactly where that is, but regardless, the future is going to be awesome, right?
Newton: At this moment in 2025, how much do you think A.I. tools have changed the life of a working scientist, and how different do you expect that will be a year from now?
Rodriques: I think you’d be shocked at the extent to which they have not yet. Scientists in general are extremely conservative people, because if you’re running an experiment, in biology at least, you usually do not fully understand why the experiment works or doesn’t. There are some things that you’ve inherited from protocols that you’ve run in the past, where it’s like, we do it this way. You could go and test it, but there are way too many things to test. So, you’re just kind of locked into your methods, and it’s what works, and you just want to do what works. And so for that reason biologists just adopt new methods slowly. I think most labs around the world are still probably doing science the way they’ve done it before, and probably will continue to do so for a while, and that’s OK. One place where a lot of people are already adopting A.I. is coding, because in biology, historically, coding has been a big bottleneck. It’s a huge unlock now that biologists who didn’t know how to code can do a lot of coding using Claude Code, using OpenAI’s models, Gemini, et cetera. I think that’s going to see a lot of adoption quickly. Literature search, right? Like being able to parse the immensity of the scientific literature, that’s a huge unlock, that’s going to get adopted very quickly. The tools like what we’re building are a little bit more frontier. Ultimately, people will adopt them when they see other people using them and getting great results.
Roose: Sam, can we play a little lightning-round game here with you? We’re calling this one Overhyped/Underhyped. So we’ll tell you something, and you tell us whether in your scientific opinion it is overhyped or underhyped. Ready? Vibe proving. This is when A.I. systems go out and like write math proofs.
Rodriques: If I have a forced choice, probably overhyped. It’s great as a progress driver in A.I., and being good at it will probably have implications elsewhere, but is it itself that useful? I’m not sure.
Roose: Robotics for A.I. lab automation?
Rodriques: Robotics for automating A.I. labs or …?
Roose: Yes or for automating scientific labs.
Newton: Like, wet labs.
Rodriques: Robotics for automating scientific labs. Um, I think appropriately hyped. It is going to be totally transformative. The technology is not at all there yet. There’s a lot that we need to do, but like, yeah, probably appropriately hyped.
Newton: AlphaFold 3?
Rodriques: That’s an interesting one. I would say probably underhyped, in that all of the protein structure models — there’s a lot of hype around them, but they’re going to be extremely transformative. So, I would say probably underhyped. There’s a lot of hype around it, though, so it’s a hard decision to make.
Roose: Virtual cells? We heard from Patrick Collison this summer about what the Arc Institute has done with making a virtual cell.
Rodriques: This is overhyped, but for a specific reason. The models that they’re building at Arc are awesome. And they’re doing similar things at New Limit and Chan Zuckerberg — many great companies and organizations are doing it. I think that calling it a virtual cell is a little bit overhyped, right? Ultimately, that kind of model models something very specific. Actually building a true virtual cell — being able to simulate a cell in a computer — is an amazing goal. We are very far away from that.
Newton: Quantum computing?
Rodriques: Overhyped.
Roose: Brain-computer interfaces?
Rodriques: Oh, man, this one’s really hard. I’m going to say overhyped. I’m a huge believer in B.C.I.s. I think effective B.C.I.s or the way that we imagine them in sci-fi are further out than people imagine. Even Neuralink is making amazing progress —
Roose: Yeah Casey’s got one in his head right now.
Newton and Rodriques: (laughter)
Roose: It’s on the fritz.
Rodriques: There are a lot of great people who are making progress there, but it’s further out, I think, than people think.
Newton: So, we’re nearing the end of the year. If we can put you in a bit of a reflective mode, what do you think were the top three A.I.-driven scientific advancements this year?
Rodriques: Yeah, I think the first one was — well, honestly, this year has been the year of agents. This was the year when people discovered agents, and so, in good faith, I have to put us on that list. Also Google co-scientist — I mean, we’re not the only people who are working on this. Google has been doing a great job. There are a bunch of other people. So, A.I. agents for science, definitely. And then generative design is just having a huge moment, right? So, the other ones would probably be the work that Chai has been doing, the work that Nabla has been doing, and many others, on de novo antibody design.
Newton: I’m really glad you defined de novo earlier in the podcast, by the way. It’s come up a lot.
Roose: Yes.
Rodriques: Sorry, when I say de novo, I just mean literally, it generates it from scratch. You don’t give it anything, right? Or you give it a target that you want it to bind to, and it generates it from scratch. This is huge because basically the promise that companies like Chai, Nabla and so on are going after is a world in which you can say, “We know to cure this disease, we have to target that protein.” You click a button, and you have an antibody that you can go and put in humans tomorrow. It’s huge. It cuts out an enormous amount of what people had to do previously. So, that’s a huge one. And the third one, I just think what Brian Hie, Patrick Hsu and so on at the Arc Institute have done with generating organisms de no— sorry, generating organisms from scratch.
Newton: We can say it! We know what it means now. That’s the important thing.
Rodriques and Roose: (laughter)
Roose: This is our like “Pee-wee’s Playhouse” Word of the Week.
Rodriques: (laughter) The de novo design of organisms, is it useful? I don’t know. Is it awesome? Like, absolutely. It’s such a big breakthrough.
Roose: And, Sam, what should we be watching for next year? What are you excited about that may be coming down the pipe for 2026?
Rodriques: Honestly, it is again going to be the agents that see an explosion. We are right now at like the beginning of that S-curve, and that is going to continue. Maybe a year ago I would tell people that I thought in 2026 or maybe 2027 that the majority of the high-quality hypotheses that are generated by the scientific community would be generated by us or by agents that are like the ones that we’re building. And when I said it in 2024, I thought I was overhyping, right? I was just like, “I need some hype.” At this point, it may be real. I mean, I think 2026 would be ambitious for that. I mean, for the majority of the good hypotheses that come out to be made by agents, that’s a huge leap. But like, 2027, yeah, man. I mean, 2026 is going to be the year when we just see these agents start to infiltrate everything, right? Infiltrate labs, infiltrate people’s normal life. I mean, it’s already happening.
Roose: Cool.
Newton: Yeah.
Roose: Well, I look forward to it. Sam, thank you so much for giving us the science education that we clearly didn’t get in school.
Newton: Yeah, you’ve really given us some de novo things to think about. I appreciate that.
Rodriques: (laughter) Good. Thank you, guys.
Feedback
We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok.
Credits
“Hard Fork” is hosted by Kevin Roose and Casey Newton and produced by Rachel Cohn and Whitney Jones. This episode was edited by Jen Poyant. Engineering by Chris Wood and original music by Dan Powell, Diane Wong, Alyssa Moxley and Rowan Niemisto. Fact-checking by Will Peischel.
Special thanks to Paula Szuchman, Pui-Wing Tam and Dahlia Haddad.
Kevin Roose is a Times technology columnist and a host of the podcast "Hard Fork."