LLM fine-tuning: LoRA vs full fine-tuning — a comparison
pub.towardsai.net
·1d

What is fine-tuning, why would you use it, what are some ways to do it, and how do you measure the results? All that, and more, in this article.

If you run an LLM and you want to steer its behavior (make it do things it would not do by default), you have a range of options. Prompting will influence its behavior: set a system prompt carefully, and it applies to everything the model does. Plugging the model into external data sources, for example via MCP (Model Context Protocol) servers, will also change the outputs.

But prompting and data sources only go so far. If you need to make radical changes to the style of the output, the model itself needs to change. One way to do this is to put the model through a bit of extra training: take ...
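To anchor the comparison in the title: full fine-tuning updates every weight, while LoRA freezes the pretrained weights and learns a small low-rank update on top of them. A minimal NumPy sketch of that idea (dimensions and initialization are illustrative; the zero-init of one factor is the standard LoRA choice so training starts from the unchanged base model):

```python
import numpy as np

# LoRA in one picture: instead of updating a frozen weight matrix W
# (d_out x d_in), learn a low-rank delta B @ A, where A is (r x d_in)
# and B is (d_out x r), with rank r much smaller than the dimensions.
rng = np.random.default_rng(0)
d_in, d_out, r = 16, 16, 2

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable, small random init
B = np.zeros((d_out, r))                    # trainable, zero init

def forward(x):
    # base path plus low-rank adapter path
    return W @ x + B @ (A @ x)

x = rng.standard_normal(d_in)
# With B zero-initialized, the adapter contributes nothing at the start,
# so the adapted model begins exactly equal to the base model:
assert np.allclose(forward(x), W @ x)

# Trainable parameter counts: full fine-tuning touches all of W,
# LoRA only touches A and B.
full_params = W.size            # 16 * 16 = 256
lora_params = A.size + B.size   # 2*16 + 16*2 = 64
print(full_params, lora_params)
```

Even in this toy case LoRA trains a quarter of the parameters, and the gap widens rapidly at real model sizes, which is why it is the go-to option when full fine-tuning is too expensive.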
