Quantization Advances Vision-Language Models, Preserving Performance with Reduced Precision

quantumzeitgeist.com

Quantizing vision-language models presents a significant challenge as practitioners seek to reduce computational cost without sacrificing performance. Gautom Das, Vincent La, and Ethan Lau from the University of Maryland, College Park, alongside Abhinav Shrivastava and Matthew Gwilliam, investigate best practices for aggressively quantizing these complex multimodal pipelines. Their research explores how techniques like GPTQ and AWQ affect captioning, retrieval, and question answering when applied to the vision, language, and connector components of such models. Crucially, the team demonstrates that both the vision transformer (ViT) and the large language model (LLM) contribute comparably to overall performance despite their differing parameter counts, and that lower-bit quantization of the LLM can yield surprisingly high accuracy, offering valuable guidance for deploying efficient multimodal large language models (MLLMs).
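The paper quantizes components with GPTQ and AWQ; as a lighter-weight illustration of the same component-selective idea (not the authors' pipeline), the sketch below uses bitsandbytes 4-bit loading to quantize only the LLM of a LLaVA-style model while keeping the vision tower and projector in half precision. The checkpoint name and module names are assumptions based on the public llava-hf release.

```python
# Minimal sketch: quantize only the LLM of a LLaVA-style model to 4-bit,
# keeping the vision tower and projector in half precision. Uses the
# Hugging Face transformers + bitsandbytes stack as a stand-in for the
# paper's GPTQ/AWQ pipeline; module names match the llava-hf checkpoint.
import torch
from transformers import BitsAndBytesConfig, LlavaForConditionalGeneration

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
    # Skip the non-LLM components so only the language model is quantized.
    llm_int8_skip_modules=["vision_tower", "multi_modal_projector"],
)

model = LlavaForConditionalGeneration.from_pretrained(
    "llava-hf/llava-1.5-7b-hf",  # assumed public checkpoint
    quantization_config=bnb_config,
    torch_dtype=torch.float16,
    device_map="auto",
)
```

Skipping named modules is one simple way to realize the paper's observation that the LLM tolerates low-bit precision while other components can stay at fp16.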

The team conducted a dense grid search over bit widths and over combinations of model components, block groups, and layer types to understand how sensitive each part of the MLLM is to quantization. This systematic approach allowed a detailed analysis of how different quantization strategies affect performance across tasks and model architectures such as BLIP-2 and LLaVA. The research reveals several key principles governing how MLLMs respond to quantization, providing practical guidance for efficient deployment.
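The article does not spell out the search harness, but the component-wise grid search can be sketched as nested loops over per-component bit widths. In the sketch below, quantize_components and evaluate are hypothetical placeholders for the paper's GPTQ/AWQ quantization step and its benchmark suite.

```python
# Hypothetical sketch of the component-wise bit-width grid search described
# above. quantize_components() and evaluate() are placeholders standing in
# for the paper's actual GPTQ/AWQ quantization and benchmark harness.
from itertools import product

COMPONENTS = ("vit", "connector", "llm")
BIT_WIDTHS = (2, 3, 4, 8, 16)  # candidate precisions per component (assumed)

def quantize_components(base_model, bits_per_component):
    """Placeholder: apply GPTQ/AWQ at the given per-component bit widths."""
    raise NotImplementedError

def evaluate(model, task):
    """Placeholder: score the model on one benchmark task."""
    raise NotImplementedError

def grid_search(base_model, tasks=("captioning", "retrieval", "vqa")):
    results = {}
    for combo in product(BIT_WIDTHS, repeat=len(COMPONENTS)):
        bits = dict(zip(COMPONENTS, combo))
        model = quantize_components(base_model, bits)
        results[combo] = {task: evaluate(model, task) for task in tasks}
    return results
```

Exhaustively crossing bit widths with components is what lets the study attribute sensitivity to individual parts of the pipeline rather than to the model as a whole.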

Task characteristics also play a vital role: reasoning tasks favour LLM precision, while visual-textual alignment tasks have more balanced requirements. Moreover, the choice of quantization method dramatically redistributes component importance, with AWQ concentrating on LLM preservation and GPTQ distributing importance more evenly. Architectural dependencies create interaction effects, necessitating holistic pipeline analysis rather than independent component evaluation. These findings highlight that not all components are equally sensitive to reduced precision, allowing targeted quantization strategies that optimize the trade-off between model size and task performance. By minimizing information loss across the most salient model components, the team paves the way for deploying quantized multimodal models in resource-constrained environments and broadening access to these powerful technologies.
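To make the size side of that trade-off concrete, a back-of-the-envelope calculation shows why aggressive LLM quantization dominates the savings. The parameter counts below are illustrative assumptions at roughly LLaVA-1.5-7B scale, not figures from the paper.

```python
# Back-of-the-envelope size of a mixed-precision assignment. Parameter
# counts are illustrative assumptions (~LLaVA-1.5-7B scale), not paper data.
PARAMS = {"vit": 0.3e9, "connector": 0.02e9, "llm": 6.7e9}

def model_size_gb(bits_per_component: dict) -> float:
    """Total weight storage in GB for the given per-component bit widths."""
    return sum(PARAMS[c] * bits_per_component[c] / 8 for c in PARAMS) / 1e9

fp16 = {"vit": 16, "connector": 16, "llm": 16}
mixed = {"vit": 8, "connector": 16, "llm": 4}  # aggressive LLM quantization

print(f"fp16:  {model_size_gb(fp16):.1f} GB")   # ~14.0 GB
print(f"mixed: {model_size_gb(mixed):.1f} GB")  # ~3.7 GB
```

Because the LLM holds the overwhelming majority of the parameters, dropping it to 4-bit shrinks the model by roughly 4x even with the vision tower and connector left at higher precision.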

Quantization of BLIP-2 for Multimodal Performance
