Scheduling in LLM Inference
fergusfinn.com

Inference engines are systems that provide generative AI APIs: you send them a text request, and you get a text response back.

Usually these systems run 'online' - that is, they provide a server to which requests can be sent (nowadays in the OpenAI-compatible format, a kind of frozen version of the OpenAI API from a couple of years ago). Examples are vLLM, SGLang, and TGI.
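
To make the shape of that interaction concrete, here is a minimal sketch of calling such a server through the OpenAI Python client - assuming a vLLM-style server listening on localhost:8000; the model name is just a placeholder for whatever the server happens to be hosting:

```python
# Minimal sketch: talking to an OpenAI-compatible inference server
# (e.g. one started with `vllm serve <model>`). The base_url and model
# name below are illustrative assumptions, not fixed values.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed-locally")

response = client.chat.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct",  # whichever model the server hosts
    messages=[{"role": "user", "content": "Explain continuous batching in one sentence."}],
)
print(response.choices[0].message.content)
```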

These systems tend to want to run on a GPU for efficiency, and GPUs don't run 'online' - they want to process lots of data at once, in batches. So we need to get the requests - arriving online, over time - into the 'in-progress batch' that the LLM on the G…
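
As a rough sketch of what that scheduling loop can look like - this is not any particular engine's implementation; the queue, `Request` type, and batch-size cap below are made up for illustration:

```python
# Toy continuous-batching loop: requests arrive online into a waiting queue,
# and before each forward pass the scheduler tops up the in-progress batch
# from that queue. Everything here is illustrative, not vLLM's actual API.
from collections import deque
from dataclasses import dataclass, field

@dataclass
class Request:
    prompt: str
    generated: list[str] = field(default_factory=list)
    done: bool = False

waiting: deque[Request] = deque()   # arrived, but not yet in the batch
running: list[Request] = []         # the in-progress batch on the GPU
MAX_BATCH_SIZE = 8

def step(forward_pass):
    # Admit waiting requests until the batch is full.
    while waiting and len(running) < MAX_BATCH_SIZE:
        running.append(waiting.popleft())
    # One batched forward pass produces one new token per running request.
    for req, token in zip(running, forward_pass(running)):
        req.generated.append(token)
        req.done = token == "<eos>"
    # Finished requests leave the batch, freeing slots for new arrivals.
    running[:] = [r for r in running if not r.done]
```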
