Multimodal?
reddit.com · r/LocalLLaMA

Why do model makers prefer their models to be text-only? Most models now are trained on 10-30T tokens, which is a good number for generalization, but even the biggest models aren't natively multimodal, even though images are much less complicated for a model to adapt to. New vision-capable models always rely on a separate encoder instead of the model actually being able to process everything in one (voices, images, videos, and the ability to generate them too). Instead they depend on an encoder that lets the text-only model understand what the image contains, and videos get sliced into multiple images instead of the model being natively trained on full videos. Of course we've got small vision-capable models that are even under 7B parameters, which is REALLY GOOD, but a better result w...
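
For reference, a minimal sketch of the "encoder + projector" pattern the post is describing: a vision encoder turns an image into patch features, a small projection layer maps them into the text model's embedding space, and the LLM just sees them as extra tokens. The names, dimensions, and modules here are illustrative assumptions, not taken from any specific model.

```python
import torch
import torch.nn as nn

D_VISION = 1024   # vision encoder output dim (assumed)
D_MODEL  = 4096   # LLM hidden size (assumed)

class VisionProjector(nn.Module):
    """Maps vision-encoder patch features into the LLM embedding space."""
    def __init__(self, d_vision: int, d_model: int):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(d_vision, d_model),
            nn.GELU(),
            nn.Linear(d_model, d_model),
        )

    def forward(self, patch_features: torch.Tensor) -> torch.Tensor:
        # patch_features: (batch, num_patches, d_vision)
        return self.proj(patch_features)

# Stand-ins for the real components (hypothetical shapes):
vision_features = torch.randn(1, 576, D_VISION)  # e.g. 24x24 patches from an image encoder
text_embeddings = torch.randn(1, 32, D_MODEL)    # embedded prompt tokens

projector = VisionProjector(D_VISION, D_MODEL)
image_tokens = projector(vision_features)        # (1, 576, 4096)

# The text-only LLM consumes the concatenation as one sequence; a video is
# handled the same way, one sliced frame at a time.
llm_input = torch.cat([image_tokens, text_embeddings], dim=1)
print(llm_input.shape)  # torch.Size([1, 608, 4096])
```

The point of the post is that the LLM itself never sees raw pixels or audio in this setup; only the projected features, which is why it isn't "natively" multimodal.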
