Blaise Agüera y Arcas in Nature:
Large language models can be unreliable and say dumb things, but then, so can humans. Their strengths and weaknesses are certainly different from ours. But we are running out of intelligence tests that humans can pass reliably and AI models cannot. By those benchmarks, and if we accept that intelligence is essentially computational — the view held by most computational neuroscientists — we must accept that a working ‘simulation’ of intelligence actually is intelligence. There was no profound discovery that suddenly made obviously non-intelligent machines intelligent: it did turn out to be a matter of scaling computation.
Other researchers disagree with my assessment of where we are with AI. But in what follows, I want to accept the premise that intelligent machines are already here, and turn the mirror back on ourselves. If scaling up computation yields AI, could the kind of intelligence shown by living organisms, humans included, also be the result of computational scaling? If so, what drove that — and how did living organisms become computational in the first place?
More here.