Adapting Self-Supervised Representations as a Latent Space for Efficient Generation
machinelearning.apple.com·1d

Authors: Ming Gui†‡*, Johannes Schusterbauer†‡*, Timy Phan†‡, Felix Krause†‡, Josh Susskind, Miguel Angel Bautista, Björn Ommer†‡

We introduce Representation Tokenizer (RepTok), a generative modeling framework that represents an image using a single continuous latent token obtained from self-supervised vision transformers. Building on a pre-trained SSL encoder, we fine-tune only the semantic token embedding and pair it with a generative decoder trained jointly using a standard flow matching objective. This adaptation enriches the token with low-level, reconstruction-relevant details, enabling faithful image reconstruction. To preserve the favorable geometry of the original SSL space, we add a cosine-similarity loss that regularizes the adapted token, ensuring the latent space remains…
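The two training signals described above — a flow matching objective for the token-conditioned decoder and a cosine-similarity regularizer that anchors the adapted token to the original SSL embedding — can be sketched as follows. This is a minimal illustration in PyTorch, not the paper's implementation: the function names, the linear (rectified-flow-style) interpolation path, and the decoder interface are assumptions for the sake of the example.

```python
# Hedged sketch of RepTok-style training losses. All names and the
# linear interpolation path are illustrative assumptions, not the
# authors' actual code or hyperparameters.
import torch
import torch.nn.functional as F


def flow_matching_loss(decoder, images, token):
    """Standard conditional flow matching: regress the velocity field
    along a straight path from Gaussian noise x0 to the image x1,
    conditioning the decoder on the single latent token."""
    x1 = images
    x0 = torch.randn_like(x1)                 # noise endpoint
    t = torch.rand(x1.shape[0], 1, 1, 1)      # per-sample time in [0, 1)
    xt = (1 - t) * x0 + t * x1                # linear interpolant
    v_target = x1 - x0                        # constant target velocity
    v_pred = decoder(xt, t.flatten(), token)  # hypothetical decoder signature
    return F.mse_loss(v_pred, v_target)


def cosine_regularizer(adapted_token, ssl_token):
    """Penalize angular drift of the fine-tuned token from the frozen
    SSL encoder's token, preserving the original space's geometry."""
    return (1 - F.cosine_similarity(adapted_token, ssl_token, dim=-1)).mean()
```

In training, the total loss would combine the two terms, e.g. `flow_matching_loss(...) + lam * cosine_regularizer(...)` with a weighting coefficient `lam` (an assumed hyperparameter).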
