Dec 4, 2024


A few months ago OpenAI published its open-weight models GPT-OSS (20B and 120B), and one of their eye-catching characteristics is that they are heavily quantized - in other words "shrunk" or compressed - to 4 bits per parameter (MXFP4) instead of the commonly used 16 bits (BF16).
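MXFP4 stores weights in small blocks (32 elements in the OCP MX spec) that share a single power-of-two scale, with each element kept in the 4-bit E2M1 format, whose representable magnitudes are 0, 0.5, 1, 1.5, 2, 3, 4 and 6. Here is a minimal NumPy round-trip sketch of that idea - an illustration of the format, not OpenAI's actual quantization pipeline:

```python
import numpy as np

# Representable magnitudes of the 4-bit E2M1 element format used by MXFP4.
FP4_LEVELS = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])

def quantize_block_mxfp4(block: np.ndarray):
    """Quantize one block of 32 weights to MXFP4-style storage:
    a shared power-of-two scale plus 4-bit E2M1 elements."""
    amax = np.abs(block).max()
    if amax == 0.0:
        return 1.0, np.zeros_like(block)
    # Power-of-two scale chosen so the largest element fits E2M1's max (6.0).
    scale = 2.0 ** np.ceil(np.log2(amax / 6.0))
    scaled = block / scale
    # Round each scaled value to the nearest representable FP4 magnitude.
    idx = np.abs(np.abs(scaled)[:, None] - FP4_LEVELS[None, :]).argmin(axis=1)
    return scale, np.sign(scaled) * FP4_LEVELS[idx]

def dequantize_block(scale, quantized):
    return scale * quantized

# Round-trip a random block of weights and measure the quantization error.
rng = np.random.default_rng(0)
block = rng.normal(0, 0.02, size=32).astype(np.float32)
scale, q = quantize_block_mxfp4(block)
print("max abs error:", np.abs(block - dequantize_block(scale, q)).max())
```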

4 bits ("half a byte") per parameter means the model uses much less memory and energy, and it can run efficiently on hardware that natively supports 4-bit MXFP inference (e.g. Nvidia Blackwell B200/B300 and AMD MI355X). Compared to 16 bits there is some quality loss at 4 bits, but it is typically marginal.
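To make the savings concrete, some back-of-the-envelope arithmetic for the 120B model (weights only - ignoring activations, the KV cache, and the fact that in practice some layers may stay in higher precision):

```python
params = 120e9

bf16_gb = params * 2 / 1e9     # 16 bits = 2 bytes per parameter
mxfp4_gb = params * 0.5 / 1e9  # 4 bits = half a byte per parameter
# MXFP4 also stores one 8-bit scale per 32-element block: +0.25 bits/param.
mxfp4_scales_gb = params * (4 + 8 / 32) / 8 / 1e9

print(f"BF16:               {bf16_gb:.0f} GB")          # ~240 GB
print(f"MXFP4:              {mxfp4_gb:.0f} GB")         # ~60 GB
print(f"MXFP4 incl. scales: {mxfp4_scales_gb:.1f} GB")  # ~63.8 GB
```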

But can we go even lower and represent the model in fewer bits per parameter, e.g. 3 bits, 2 bits or even 1 bit? There is highly interesting and seemingly promising research on more efficient quantization…
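For intuition on why every bit matters: with b bits you get only 2^b distinct values per parameter, so the grid of representable weights gets coarse very quickly:

```python
for bits in (4, 3, 2, 1):
    print(f"{bits}-bit: {2 ** bits} distinct values per parameter")
# 4-bit: 16, 3-bit: 8, 2-bit: 4, 1-bit: 2
```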
