We must accept that a working ‘simulation’ of intelligence actually is intelligence
3quarksdaily.com·5d

Blaise Agüera y Arcas in Nature:

Large language models can be unreliable and say dumb things, but then, so can humans. Their strengths and weaknesses are certainly different from ours. But we are running out of intelligence tests that humans can pass reliably and AI models cannot. By those benchmarks, and if we accept that intelligence is essentially computational — the view held by most computational neuroscientists — we must accept that a working ‘simulation’ of intelligence actually is intelligence. There was no profound discovery that suddenly made obviously non-intelligent machines intelligent: it did turn out to be a matter of sc…
