Local LLMs: state of the art
dev.to · 2h

With all the local LLMs available by now, you might get curious about the best you can run locally and how it compares to what free-tier inference providers offer. And the first question you'll have is: which model do I use?

I’ve set out to answer those questions for myself. Here is what I’ve learned from this journey.


Goals and hardware

My use case is agentic coding. Specifically, KiloCode. That's important because, broadly speaking, there are two main use cases for LLMs, and some of their requirements are opposites:

Creative writing/roleplay: you want the model to be creative, able to tell an interesting and unexpected story rather than sticking strictly to what you say.

Agentic coding (or other agenti…
