This post is about synthetic data and why it won't scale the way we think. This field is the absolute frontier of science: it is about the process of learning itself, and consequently it is mired in epistemic traps. The problem is that the search space of solutions is infinite and, importantly, that it contains an infinite number of deceptively good ideas. We need to develop strong taste and high-level beliefs to tell us when our eyes (the empirics) are wrong. Contrary to what many people will tell you, I think the old statistical theorems are actually very good and explain deep learning well.

# The "magic" of deep learning

![[inductive_bias_invert.png]]
*Inverted image from ["Deep Learning is Not So Mysterious or Different"](https://arxiv.org/abs/2503.02113v2),…
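
To make "the old statistical theorems" concrete, here is a sketch of the kind of classical generalization bound such arguments typically lean on (the specific statement below is an illustrative choice, assuming the standard countable-hypothesis setup; the post itself does not spell it out). For a countable hypothesis class $\mathcal{H}$ with prior weights $P(h)$ summing to at most $1$, a loss bounded in $[0,1]$, and $n$ i.i.d. samples, Hoeffding's inequality plus a union bound gives, with probability at least $1-\delta$, simultaneously for every $h \in \mathcal{H}$:

$$
R(h) \;\le\; \hat{R}(h) \;+\; \sqrt{\frac{\ln\frac{1}{P(h)} + \ln\frac{1}{\delta}}{2n}}
$$

Hypotheses the prior favours (a strong inductive bias) pay only a small complexity penalty even inside an enormous hypothesis space, which is one classical route to explaining why heavily over-parameterized models can still generalize.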
