Use Local LLMs to Eliminate Little Annoying Tasks
dev.to·4d·

Over the past year, I’ve been slowly moving many of the little repetitive tasks in my engineering workflow over to local LLMs. These are the tiny chores that show up dozens of times a day and quietly wear you down. Automating them away has been a real blessing.

If you’re a software engineer who wants to eliminate the tedious parts and move faster, then I hope sharing some of my scripts inspires you to streamline your own workflow as well.

The core of my setup is Ollama, which runs several code-focused local models. You do need a machine with some power under the hood for the higher-parameter models. On my M4 Mac, these have been fantastic:

  • qwen2.5-coder:7b: runs extremely fast and is more than enough for most tasks
  • qwen2.5-coder:14b: a bit sl…
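To give a feel for how scripts like these talk to a local model, here's a minimal sketch that prompts Ollama through its default local HTTP endpoint (`http://localhost:11434/api/generate`). The helper names and the example prompt are my own illustrations, not from the post, and it assumes `ollama serve` is running with the model already pulled:

```python
import json
import urllib.request

# Ollama's default local endpoint for one-shot completions
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model: str, prompt: str) -> dict:
    # Non-streaming request body for Ollama's /api/generate endpoint
    return {"model": model, "prompt": prompt, "stream": False}

def ask(model: str, prompt: str) -> str:
    # POST the prompt to the local Ollama server and return the generated text
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(model, prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (hypothetical) use in a little automation script:
# summary = ask("qwen2.5-coder:7b", "Write a one-line commit message for this diff: ...")
```

Because everything stays on localhost, these helpers are cheap enough to wire into shell aliases or git hooks without worrying about API costs or sending code off-machine.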
