Running Vibe with a Local Model (vLLM)

Vibe, the Mistral CLI released this week, is the company's take on an agentic coding CLI. One small detail, however, is not documented anywhere: how to make Vibe work with local models instead of Mistral's official APIs. There is a `local` option, but it only works for localhost, and I have a dedicated GPU server that I want to integrate instead.

In this post, I'll walk you through how I connected the Vibe CLI to a locally hosted model running on vLLM, using Devstral-2-123B-Instruct-2512 as the example.

This makes it possible to use Vibe fully offline.
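Before touching Vibe itself, it's worth confirming that the vLLM server speaks the OpenAI-compatible API it exposes under `/v1`. Here is a minimal sketch using the `openai` Python client; the host `gpu-server`, the port `8000`, and the `mistralai/` prefix on the model name are assumptions you'd replace with your own setup:

```python
# Sanity-check a vLLM OpenAI-compatible endpoint before pointing Vibe at it.
from openai import OpenAI

client = OpenAI(
    base_url="http://gpu-server:8000/v1",  # placeholder host/port for the GPU server
    api_key="not-needed",  # vLLM accepts any key unless started with --api-key
)

resp = client.chat.completions.create(
    model="mistralai/Devstral-2-123B-Instruct-2512",  # assumed served model name
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
    max_tokens=64,
)
print(resp.choices[0].message.content)
```

If this prints a completion, the server side is working and any remaining problems are on the Vibe configuration side.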


🌐 Background

Mistral released Devstral-2 and created the Vibe CLI tooling. Their announcement mentions support for custom providers but **does not include** instructions for connecting Vibe to a self-hosted endpoint.
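Whatever shape the custom-provider configuration ends up taking, it will need the exact model identifier the server advertises. Assuming the same hypothetical `gpu-server:8000` endpoint, vLLM's OpenAI-compatible `/v1/models` route lists it:

```python
# Print the model IDs a vLLM server exposes; a custom provider entry
# must reference one of these exactly. Host and port are placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://gpu-server:8000/v1", api_key="not-needed")
for model in client.models.list():
    print(model.id)
```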
