ATTENDING RAY SUMMIT 2025?

Come check out Kevin’s talk on Nov 5th at 4pm (Golden Gate C3) where he’ll be presenting the work in this blog post and more! We will also be at our booth, so drop by, say hello, and get a chance to win an Nvidia DGX Spark!

TL;DR

At Daft, we are committed to building the best tool for running models on your data. We know that LLM batch inference is often difficult, costly, and slow, but we believe it doesn’t have to be that way. Today, we are releasing an inference backend in beta that cuts batch inference time in half.

This new vLLM Prefix Caching provider accomplishes this by combining the power of the vLLM serving engine with Daft's distributed execution engine, Flotilla, to do two things:

• …
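To give a concrete sense of what this looks like in practice, here is a minimal sketch of running batch inference from Daft. It assumes Daft's `llm_generate` function; the provider string `"vllm-prefix-caching"`, the model name, and the input path are illustrative placeholders rather than the beta backend's confirmed identifiers, so check the Daft docs for the exact names.

```python
import daft
from daft.functions import llm_generate

# Load a table of prompts (the file path here is illustrative).
df = daft.read_parquet("prompts.parquet")

# Run batch inference with vLLM as the serving engine.
# "vllm-prefix-caching" is a hypothetical name for the beta provider
# described above; substitute the identifier from the Daft docs.
df = df.with_column(
    "output",
    llm_generate(
        df["prompt"],
        model="Qwen/Qwen2.5-7B-Instruct",
        provider="vllm-prefix-caching",
    ),
)

df.collect()
```

Because the backend sits behind a provider argument, switching from one inference engine to another is a one-argument change, and Flotilla handles distributing the work across the cluster.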
