Maybe the first superintelligence will be a polyintelligence

Now that it is mostly settled that LLMs are not the last difficult step toward superintelligence and the singularity, I hope we can get to the more interesting conversation.

Human intelligence is reliable in some ways and has inherent limitations. The same can be said of LLM intelligence. The interesting part is that the reliability-and-limitation profiles of the two are strangely complementary.

Comparing the breadth of our knowledge, humans have just a tiny puddle of information available to working memory while LLMs have an entire ocean. We also sometimes fail to make connections that are right in front of us, whereas connections are the entire basis for how LLMs compile and store information, and they home in on connections in the content you …
