Might the DeepSeek-OCR paper be a key innovation for smarter models?
nitter.net · 16h
Discuss: r/LocalLLaMA

https://nitter.net/karpathy/status/1980397031542989305

I quite like the new DeepSeek-OCR paper. It's a good OCR model (maybe a bit worse than dots), and yes data collection etc., but anyway it doesn't matter.

The more interesting part for me (esp as a computer vision person at heart who is temporarily masquerading as a natural language person) is whether pixels are better inputs to LLMs than text. Whether text tokens are wasteful and just terrible, at the input.

Maybe it makes more sense that all inputs to LLMs should only ever be images. Even if you happen to have pure text input, maybe you'd prefer to render it and then feed that in:

- more ...
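
To make the "render it and then feed that in" idea above concrete, here is a minimal sketch (not from the post or the paper): rasterize a text string to a grayscale image, the way a vision-token pipeline would consume it, and compare the resulting ViT-style patch count with a crude word-split token count. The canvas width, line height, ~8 px/char wrap, 16x16 patch size, and word-level tokenization are all illustrative assumptions, and the raw patch count says nothing about how far an optical encoder like DeepSeek-OCR's can compress the rendered page beyond naive patching.

```python
# Illustrative sketch only: render text to pixels and count the two sequence lengths.
# All parameters (canvas width, line height, ~8 px/char wrap, 16x16 patches) are assumptions.
import math
from PIL import Image, ImageDraw, ImageFont

def render_text(text: str, width: int = 512, line_height: int = 16) -> Image.Image:
    """Rasterize text onto a fixed-width grayscale canvas with a naive character wrap."""
    font = ImageFont.load_default()
    chars_per_line = max(1, width // 8)       # assume roughly 8 px per character
    lines = [text[i:i + chars_per_line] for i in range(0, len(text), chars_per_line)]
    height = line_height * max(1, len(lines))
    img = Image.new("L", (width, height), color=255)
    draw = ImageDraw.Draw(img)
    for row, line in enumerate(lines):
        draw.text((0, row * line_height), line, fill=0, font=font)
    return img

def patch_count(img: Image.Image, patch: int = 16) -> int:
    """Non-overlapping patch x patch tiles a ViT-style encoder would see, before any compression."""
    return math.ceil(img.width / patch) * math.ceil(img.height / patch)

if __name__ == "__main__":
    text = "Maybe all inputs to LLMs should only ever be images. " * 40
    img = render_text(text)
    print(f"rendered image size: {img.size}")
    print(f"raw 16x16 vision patches: {patch_count(img)}")
    print(f"crude word-level token count: {len(text.split())}")
```

The raw patch count here is only a baseline; the question raised in the thread is whether an optical encoder can pack such a rendered page into fewer, denser tokens than a text tokenizer would produce.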
