Why do model makers prefer their models to be text-only? Most models today are trained on 10-30 TB of tokens, which is a good amount for generalization, yet even the biggest models aren't truly multimodal, even though images are much less complicated for a model to adapt to. New vision-capable models always rely on an encoder instead of the model actually being able to process everything in one system (voice, images, video, and the ability to generate them too). They depend on an encoder that lets the text-only model understand what an image contains, and videos get sliced into multiple still frames instead of the model being natively trained on full video.

Of course, we do have small vision-capable models, some even under 7B parameters, which is REALLY GOOD. But a better result would be achieved if the model were trained on everything from scratch, especially now that researchers have adopted new architectures for images/video and very small (likely around 0.5B) audio-understanding models. It has also been shown that image, video, and audio data actually needs far less training than text, because text is multilingual while images are mostly repetitive. So a cleaned, curated dataset of images/video/audio could train even a 1B model with the newest techniques available.
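
To make the "encoder instead of native multimodality" point concrete, here's a minimal toy sketch of that adapter pattern in PyTorch. This is not any real model's code: the module names (`TinyVisionEncoder`, `Projector`), the dimensions, and the frame count are all made up for illustration, loosely following the LLaVA-style recipe of encode, project, then splice into the text token stream.

```python
# Toy sketch of the "vision encoder + projector" adapter pattern.
# All sizes and names are illustrative, not from any specific model.
import torch
import torch.nn as nn

class TinyVisionEncoder(nn.Module):
    """Stand-in for a frozen CLIP/SigLIP-style ViT: image -> patch features."""
    def __init__(self, patches=16, vision_dim=256):
        super().__init__()
        self.patches = patches
        self.proj = nn.Linear(3 * 14 * 14, vision_dim)  # fake patch embedding

    def forward(self, images):                            # (B, 3, 56, 56)
        b = images.shape[0]
        # Cut each image into 14x14 patches, flatten each patch.
        p = images.unfold(2, 14, 14).unfold(3, 14, 14)
        p = p.reshape(b, self.patches, -1)
        return self.proj(p)                               # (B, patches, vision_dim)

class Projector(nn.Module):
    """Maps vision features into the text model's embedding space."""
    def __init__(self, vision_dim=256, text_dim=512):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(vision_dim, text_dim), nn.GELU(),
                                 nn.Linear(text_dim, text_dim))

    def forward(self, x):
        return self.mlp(x)

encoder, projector = TinyVisionEncoder(), Projector()
text_embeds = torch.randn(1, 20, 512)          # embeddings of the text prompt

# Image path: encode, project, then concatenate as extra "tokens".
image = torch.randn(1, 3, 56, 56)
image_tokens = projector(encoder(image))       # (1, 16, 512)

# Video path: the common shortcut -- sample frames, treat each as an image.
video_frames = torch.randn(8, 3, 56, 56)       # 8 sampled frames
frame_tokens = projector(encoder(video_frames)).flatten(0, 1).unsqueeze(0)

llm_input = torch.cat([image_tokens, text_embeds], dim=1)
print(llm_input.shape)  # the text-only LLM sees these as ordinary embeddings
```

The point of the sketch: the language model itself never changes. The image (or each video frame) only enters as a handful of extra embeddings glued in front of the text, which is exactly the shortcut that training natively on everything from scratch would avoid.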