Toy Models of Superposition

Why some AI neurons hide many ideas — a simple model explains it

Deep networks sometimes cram many unrelated concepts into a single unit, a phenomenon called polysemanticity: one neuron can fire for apples, faces, and a particular color all at once, which makes the model hard to interpret. A tiny, fully analyzable model shows why this happens: when there are more useful features than dimensions to store them in, and those features are sparse (rarely active), the network stores the extra, rare features in superposition, overlapping several features in the same directions rather than giving each one its own.
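
The setup behind this result is small enough to reproduce. Below is a minimal sketch, not the authors' code, of the standard toy formulation with tied weights, x̂ = ReLU(WᵀWx + b): 20 sparse features squeezed through 5 hidden dimensions. All hyperparameters here (feature count, sparsity, learning rate) are illustrative choices, not values from the paper.

```python
# Minimal toy-superposition sketch (illustrative, not the paper's code):
# n sparse features compressed into m < n dimensions, then reconstructed
# through the tied weight matrix with a ReLU.
import torch

torch.manual_seed(0)
n_features, n_hidden = 20, 5   # more features than dimensions
p_active = 0.05                # each feature is active only 5% of the time

W = torch.nn.Parameter(torch.randn(n_hidden, n_features) * 0.1)
b = torch.nn.Parameter(torch.zeros(n_features))
opt = torch.optim.Adam([W, b], lr=1e-2)

for step in range(5000):
    # Synthetic sparse data: each feature fires independently with
    # probability p_active, taking a uniform value in [0, 1] when active.
    mask = (torch.rand(1024, n_features) < p_active).float()
    x = mask * torch.rand(1024, n_features)

    h = x @ W.T                    # compress: 20 features -> 5 dims
    x_hat = torch.relu(h @ W + b)  # reconstruct with the tied weights
    loss = ((x - x_hat) ** 2).mean()

    opt.zero_grad(); loss.backward(); opt.step()

# Superposition shows up as interference: off-diagonal entries of W^T W
# are nonzero wherever two features share the same hidden directions.
print(torch.round((W.T @ W).detach(), decimals=2))
```

With dense features (p_active near 1), the same model instead represents only about five features, one per dimension, and drops the rest; sparsity is what makes the interference from packing extra features worth paying for.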

As sparsity and feature importance vary, whether a feature gets stored at all can flip abruptly rather than degrade smoothly; the work describes this as a phase change in how the network allocates its capacity. It also finds a surprising tie to high-dimensional geometry: the superposed features arrange themselves into regular shapes such as antipodal pairs, triangles, and pentagons, and simple geometric packing arguments explain why. This also helps explain why small, strange tweaks can tr…
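
One crude way to look for the phase change is to retrain the same toy model at several sparsity levels and count how many features end up represented at all (columns of W with norm near 1). This is a hedged sketch with illustrative hyperparameters, and counting represented features is only a proxy for the per-feature discontinuities the paper analyzes, but the count should shift in jumps rather than drift smoothly:

```python
# Sparsity sweep over the same toy model (illustrative hyperparameters).
import torch

def train_toy(p_active, n_features=20, n_hidden=5, steps=3000, seed=0):
    torch.manual_seed(seed)
    W = torch.nn.Parameter(torch.randn(n_hidden, n_features) * 0.1)
    b = torch.nn.Parameter(torch.zeros(n_features))
    opt = torch.optim.Adam([W, b], lr=1e-2)
    for _ in range(steps):
        mask = (torch.rand(1024, n_features) < p_active).float()
        x = mask * torch.rand(1024, n_features)
        x_hat = torch.relu((x @ W.T) @ W + b)
        loss = ((x - x_hat) ** 2).mean()
        opt.zero_grad(); loss.backward(); opt.step()
    return W.detach()

for p in [0.5, 0.2, 0.1, 0.05, 0.01]:
    W = train_toy(p)
    # A feature counts as "represented" if its embedding has nontrivial norm.
    represented = (W.norm(dim=0) > 0.5).sum().item()
    print(f"p_active={p:.2f}: {represented}/20 features represented")
```

At high p_active the model keeps roughly one feature per hidden dimension; as features get sparser, it packs in more of them in superposition.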
