LRQ-DiT: Log-Rotation Post-Training Quantization of Diffusion Transformers for Text-to-Image Generation
arxiv.org·1d
Investigating the Impact of Large-Scale Pre-training on Nutritional Content Estimation from 2D Images
arxiv.org·12h
Why are LLMs' abilities emergent?
arxiv.org·12h
GeRe: Towards Efficient Anti-Forgetting in Continual Learning of LLM via General Samples Replay
arxiv.org·12h