Is GRPO Broken?
neelsomaniblog.com

This is the fourth and final piece in my series on reinforcement learning. Previously, we covered classical RL, continuous control, and off-policy methods. LLM post-training comes up constantly on X, so this primer should help anyone get up to speed.

Here’s how I like to think about post-training methodologies:

SFT is simple. It applies the same next-token prediction objective as the pre-training stage, just for additional iterations on a curated set of ideal (prompt, response) pairs. You can make it more parameter-efficient with a LoRA adapter.
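
To make that concrete, here is a minimal sketch of the SFT recipe: the same next-token cross-entropy loss as pre-training, applied to curated (prompt, response) pairs, with an optional LoRA adapter attached via Hugging Face's `transformers` and `peft`. The model name, example pairs, and hyperparameters below are illustrative placeholders, not taken from the post.

```python
import torch
from torch.utils.data import DataLoader
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

# Placeholder base model; any causal LM works the same way.
model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Optional LoRA adapter: only small low-rank matrices are trained,
# which is what makes SFT cheaper. ("c_attn" is GPT-2's attention projection.)
lora_config = LoraConfig(task_type="CAUSAL_LM", r=16, lora_alpha=32,
                         target_modules=["c_attn"])
model = get_peft_model(model, lora_config)

# A tiny stand-in for a curated (prompt, response) dataset.
pairs = [
    ("What is 2 + 2?", "2 + 2 = 4."),
    ("Name a primary color.", "Red is a primary color."),
]

def collate(batch):
    # Concatenate prompt and response; the loss is ordinary next-token
    # cross-entropy over the sequence. (Masking out the prompt tokens is a
    # common refinement, omitted here for brevity.)
    texts = [p + "\n" + r + tokenizer.eos_token for p, r in batch]
    enc = tokenizer(texts, return_tensors="pt", padding=True)
    enc["labels"] = enc["input_ids"].clone()
    enc["labels"][enc["attention_mask"] == 0] = -100  # ignore padding in the loss
    return enc

loader = DataLoader(pairs, batch_size=2, shuffle=True, collate_fn=collate)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

model.train()
for epoch in range(3):
    for batch in loader:
        loss = model(**batch).loss  # same objective as pre-training
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```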

In this post, we’ll focus on quadrant 2: DP…
