When will Artificial General Intelligence (AGI) become a thing? Tomorrow? Next month? Next year? Whenever the PR team decides we need a headline-grabbing thing to say? Who knows, but whatever the correct answer, it’s now a de rigueur question to be aired at any major AI talking shop and this week’s Davos junket is no exception.
So, step forward Dario Amodei, CEO of Anthropic, and Demis Hassabis, CEO of Google DeepMind, to update their earlier outings into the realms of speculation on this front.
First up, Amodei, who previously predicted that there would be an AI model that could do everything a human could do at the level of a Nobel Laureate across many fields, by, well, this year. Still sticking by that one? Actually he sort of is:
It’s always hard to know exactly when something will happen, but I don’t think that’s going to turn out to be that far off.
He bases his assumptions on the idea that AI firms will build models that are good at coding and good at AI research, then use those models to produce the next generation faster, creating a loop. That would increase the speed of model development, he argues:
In terms of models that write code, I have engineers within Anthropic who say, ‘I don’t write any code anymore. I just let the model write the code, I edit it, I do the things around it’...Then it’s a question of, how fast does that loop close? Not every part of that loop is something that can be sped up by AI. There are chips, there’s manufacture of chips, there’s training time for the model, so there’s a lot of uncertainty.
That said, he reckons AGI is a few years away at most:
It’s very hard for me to see how it could take longer than that. But if I had to guess, I would guess that this goes faster than people imagine, that key elements of code and increasingly research [are] going faster than we imagine. That’s going to be the key driver.
Questions
For his part, Hassabis has been more conservative in his assumptions, citing a 50% chance of a system that can exhibit all the cognitive capabilities humans can by the end of the decade. He’s sticking to that timeline:
There has been remarkable progress. In some areas of engineering work, coding or mathematics, it is a little bit easier to see how they’ll be automated, partly because they’re verifiable [in terms of] what the output is. Some areas of natural science are much harder to do than that. You won’t necessarily know if the chemical compound you’ve built or this prediction about physics is correct. It may be. But you may have to test it experimentally, and that will all take longer.
So I think there are some missing capabilities at the moment in terms of not just solving existing conjectures or existing problems, but actually coming up with the question in the first place, or coming up with the theory or the hypothesis. I think that’s much, much harder, and the highest level of scientific creativity.
He adds:
The full closing of the loop is an unknown. It’s possible you may need AGI itself to be able to do that in some domains. These domains where there’s more messiness around them, it’s not so easy to verify your answer very quickly...But I think in coding and mathematics and these kinds of areas, I can definitely see that working. And then the question is more theoretical - what is the limit of engineering and maths to solve the natural sciences?
The day after...
OK, so all that being the case, the topic of the panel on which both gents were sitting was The Day After AGI, which does sound a lot like a disaster movie. Leaving aside the question of when AGI happens, should we be more worried about what happens when it does? Amodei pitches himself as an optimist, but does admit that he can see “grave risks” ahead.
He cites a scene from the movie of Carl Sagan’s Contact as a frame of reference:
It’s this international panel that’s interviewing people to be humanity’s representative to meet the alien. And one of the questions they asked one of the candidates is, ‘If you could ask the aliens any one question, what would it be?’. And one of the characters says, ‘I would ask, how did you do it? How did you manage to get through this technological adolescence without destroying yourselves? How did you make it through?’ Ever since I saw it 20 years ago, it’s kind of stuck with me.
That’s the mindset with which he approaches AGI:
I think the next few years we’re going to be dealing with how do we keep these systems under control that are highly autonomous and smarter than any human? How do we make sure that individuals don’t misuse them? I have worries about things like bio-terrorism. How do we make sure that nation states don’t misuse them, and that’s why I’ve been so concerned about the CCP and other authoritarian governments. What are the economic impacts? I’ve talked about labor displacement a lot. What haven’t we thought of? [That] in many cases is maybe the hardest thing to deal with.
There’s also the inevitable ‘what happens to my job?’ concern. Amodei admits that he can see a time when Anthropic needs fewer people in junior and intermediate roles:
We might have AI that’s better than humans at everything in maybe one to two years, maybe a little longer than that, those don’t seem to line up. The reason is that there’s this lag, and there’s this replacement thing. I know the labor market is adaptable. It’s just like 80% of people used to do farming, then farming got automated, and they became factory workers, and then knowledge workers. So, there is some level of adaptability here. We should be economically sophisticated about how the labor market works. But my worry is, as this exponential keeps compounding - and I don’t think it’s going to take that long again, somewhere between a year and five years - it will overwhelm our ability to adapt.
Hassabis shares his concerns here:
I’m constantly surprised, even when I meet economists at places like this, that there are not more professional economists and professors thinking about what happens and not just sort of on the way to AGI...Maybe there are ways to distribute this new productivity, this new wealth, more fairly. I don’t know if we have the right institutions to do that, but that’s what should happen at that point...There are even bigger questions than that to do with meaning and purpose and a lot of the things that we get from our jobs, not just economically, that’s one question. But I think that may be easier to solve strangely than what happens to the human condition and humanity as a whole.
Who takes charge?
And what happens to humanity is a big question right now as the macro-economic and socio-political rulebooks are torn up. Hassabis has an AGI spin here as well:
AI’s a dual purpose technology, so it could be re-purposed by bad actors for harmful ends. We’ve needed to think about that all the way through. But I’m a big believer in human ingenuity. But the question is having the time and the focus and all the best minds collaborating on it to solve these problems. I’m sure if we had that, we would solve the technical risk problem. It may be we don’t have that, and then that will introduce risk, because it’ll be fragmented, there’ll be different projects, and people will be racing each other. Then it’s much harder to make sure systems that we produce will be technically safe, but I feel like that’s a very tractable problem.
There’s only so much AI vendors can do, argues Amodei, before governments need to take responsibility - and we’re running out of time:
We’re just trying to do the best we can. We’re just one company, and we’re trying to operate in the environment that exists, no matter how crazy it is. My policy recommendations [for government] haven’t changed - not selling chips is one of the biggest things we can do to make sure that we have the time to handle this. I wish we had five to 10 years, but assume I’m right and it can be done in one to two years.
Why can’t we slow down? The reason we can’t do that is because we have geo-political adversaries building the same technology at a similar pace. It’s very hard to have an enforceable agreement where they slow down and we slow down. And so if we can just not sell the chips, then this isn’t a question of competition between the US and China. This is a question of competition between me and Demis, which I’m very confident that we can work out.
That would mean a major shift in current US economic policy, of course, which seems unlikely to say the least. So Amodei has a warning:
These random countries in different parts of the world build data centers that have Nvidia chips instead of Huawei chips. I think of this more like it’s a decision - are we going to sell nuclear weapons to North Korea because that produces some profit for Boeing, where we can say, ‘OK, these [bomb] cases were made by Boeing, the US is winning, this is great’. That analogy should just make clear how I see this trade-off - I just don’t think it makes sense.
My take
Last word to Amodei:
There’s all kinds of crazy stuff going on in the outside world outside AI, but my view is this is happening so fast, it is such a crisis, we should be devoting almost all of our effort to thinking about how to get through this.
Same time, same place next year, guys...and we still won’t have AGI then!