MiniMax M2 is a new-generation large language model optimized for agentic workflows and end-to-end coding. MiniMax has publicly released MiniMax-M2 and published its weights on Hugging Face; it is a sparse mixture-of-experts (MoE) model with a very large total parameter budget but a much smaller set of active parameters per token, and it supports very long contexts (200k+ tokens).

MiniMax M2's design is excellent, and developers will likely be eager to try it out. Below are several ways to use M2, along with advanced techniques you can use as a reference. For accessing MiniMax M2, I recommend CometAPI. This article explains what M2 is and its key features, compares hosted API access with self-hosting, lays out pricing and practical examples for calling the model, and finishes with advanced usage techniques.
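As a taste of what the practical examples later in the article look like, here is a minimal sketch of calling MiniMax-M2 through a hosted, OpenAI-compatible chat-completions endpoint. The base URL, environment variable, and model identifier below are placeholders, not confirmed values; check your provider's documentation for the exact settings.

```python
# Minimal sketch: calling MiniMax-M2 via an OpenAI-compatible endpoint.
# The base_url, API key variable, and model name are assumptions for
# illustration only -- substitute the values from your provider's docs.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.cometapi.com/v1",   # assumed OpenAI-compatible endpoint
    api_key=os.environ["COMETAPI_KEY"],       # hypothetical env var holding your key
)

response = client.chat.completions.create(
    model="minimax-m2",  # placeholder model identifier
    messages=[
        {"role": "system", "content": "You are a helpful coding assistant."},
        {"role": "user", "content": "Write a Python function that reverses a linked list."},
    ],
    temperature=0.7,
)

print(response.choices[0].message.content)
```

Because the endpoint follows the OpenAI chat-completions schema in this sketch, the same code works for any provider that exposes M2 behind a compatible API; only the base URL, key, and model name change.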
