Today I’m talking about the discourse I see around LLM releases in places like YouTube, Hacker News, and Reddit. Every time a new model comes out, there is a rush to find the ways in which the model fails to do something we would consider baseline for a functional human.

We wouldn’t expect a human to write a reasonably coherent thesis on a topic, drawing on large chunks of the information available on the internet, in half an hour, nor would we expect one to create an image that passes at first glance as real in a matter of seconds. By the same token, expecting a language model to intuit (with no way to test in the real world) something like which side of a door an object is on seems like a bar that would require a mind that is, in a fundamental sense, super-human.

T…