Parameter-efficient fine-tuning in tinygrad

Parameter-efficient fine-tuning (PEFT) is a family of techniques used to adapt LLMs (along with other types of large models) to specific tasks or datasets. For example, if you had a private dataset from your legal practice, you could take gpt-oss off the shelf and customize it to be really good at your specific legal specialty. Unlike full fine-tuning (FFT), with PEFT you only need to update a subset of the weights in the model, leading to better computational efficiency while requiring significantly less storage.

This holiday season, I decided to implement Low Rank Adaptation (LoRA) using tinygrad. Tinygrad because it was diff…
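
To make the idea concrete, LoRA adapts a layer by freezing its pretrained weight W and learning two small matrices A (rank × in) and B (out × rank), so the output becomes base(x) + (alpha/rank) · x·Aᵀ·Bᵀ, with B initialized to zero so the adapter starts as a no-op. The snippet below is a minimal sketch of how such an adapter around tinygrad's nn.Linear might look, not the post's actual code; the class name, rank, alpha, and layer sizes are placeholders.

```python
from tinygrad import Tensor, nn

class LoRALinear:
  """Hypothetical LoRA wrapper: the base Linear is frozen, only A and B train."""
  def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
    self.base = base
    self.base.weight.requires_grad = False            # freeze pretrained weight
    if self.base.bias is not None:
      self.base.bias.requires_grad = False
    out_features, in_features = base.weight.shape
    # A gets a random init, B starts at zero, so W + B@A == W before training.
    self.lora_a = Tensor.kaiming_uniform(rank, in_features)
    self.lora_b = Tensor.zeros(out_features, rank)
    self.lora_a.requires_grad, self.lora_b.requires_grad = True, True
    self.scale = alpha / rank

  def __call__(self, x: Tensor) -> Tensor:
    # base(x) + scale * x @ A^T @ B^T  ==  x @ (W + scale * B@A)^T + bias
    return self.base(x) + (x @ self.lora_a.transpose() @ self.lora_b.transpose()) * self.scale

if __name__ == "__main__":
  # Example usage with placeholder shapes: only the adapter params go to the optimizer.
  from tinygrad.nn.optim import AdamW
  layer = LoRALinear(nn.Linear(768, 768), rank=8)
  opt = AdamW([layer.lora_a, layer.lora_b], lr=1e-4)
  print(layer(Tensor.randn(4, 768)).shape)            # (4, 768)
```

Because only A and B (plus optimizer state for them) need to be stored and updated, the trainable parameter count drops from out·in to rank·(out + in), which is where PEFT's storage and compute savings come from.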
