We reverse-engineered Flash Attention 4
modal.com



September 26, 2025 · 15 minute read

er by Schraudolph, but the implementation here is quite different, involving a cubic polynomial approximation (as described in detail below).
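The general shape of such fast exponentials is an exponent-bit trick plus a low-degree polynomial on a reduced range. A minimal sketch, assuming range reduction to `2^f` on `[0, 1)` with an illustrative cubic (Taylor coefficients here, not the fitted polynomial the actual kernel uses):

```python
import math

LN2 = math.log(2.0)

def exp_cubic(x: float) -> float:
    """Approximate e**x via 2**(x/ln2) with a cubic polynomial.

    Range reduction: x/ln2 = n + f with integer n and f in [0, 1).
    2**n is exact (an exponent-bit shift, here math.ldexp); 2**f uses a
    cubic polynomial. The coefficients below are the Taylor expansion of
    2**f about f = 0, standing in for whatever fit FA4 actually uses.
    """
    y = x / LN2
    n = math.floor(y)
    f = y - n  # f in [0, 1)
    # Cubic in Horner form: 2**f ~ 1 + f*(ln2 + f*(ln2^2/2 + f*ln2^3/6))
    poly = 1.0 + f * (0.6931472 + f * (0.2402265 + f * 0.0555041))
    return math.ldexp(poly, n)
```

On hardware, the same idea maps the integer part `n` directly into the float exponent bits and evaluates the cubic with a few fused multiply-adds, which is what makes it cheaper than a full-precision `exp`.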

When each tile of normalized attention scores is ready, a Correction warp checks if the normalization scaling factor has changed and, if necessary, rescales the final output tile in Tensor Memory (tO).

  • ⚡️ New in Flash Attention 4: the choice of when to rescale became much smarter, reportedly cutting down on output rescaling operations by a factor of 10. Roughly: the scaling factor used to be a simple running maximum. Now updates are applied only when the maximum has changed enough to impact numerical stability. This seems like a good, and very portable, idea...
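The thresholded correction can be sketched with a tile-wise online softmax. This is a hedged reconstruction, not the kernel's actual code: `tol` is a hypothetical threshold, and rescaling of the accumulator is skipped until the running max has drifted by more than `tol`:

```python
import numpy as np

def online_softmax_accumulate(score_tiles, value_tiles, tol=8.0):
    """Online softmax over score tiles with thresholded rescaling.

    Sketch only: `tol` stands in for whatever stability margin FA4 uses.
    The output accumulator is rescaled only when the running max has
    grown by more than `tol`; in between, exponentials may exceed 1
    (up to e**tol), which is harmless at sufficient precision.
    """
    m = -np.inf  # max currently baked into the exponentials (may lag)
    l = 0.0      # running sum of exp(score - m)
    o = None     # unnormalized output accumulator

    for s, v in zip(score_tiles, value_tiles):
        tile_max = float(np.max(s))
        if tile_max > m + tol:  # correct only when drift threatens overflow
            scale = np.exp(m - tile_max) if np.isfinite(m) else 0.0
            l *= scale
            if o is not None:
                o = o * scale
            m = tile_max
        p = np.exp(s - m)  # entries may be slightly > 1 while m lags
        l += p.sum()
        o = p @ v if o is None else o + p @ v
    return o / l
```

Because softmax is shift-invariant, the final result is identical for any fixed `m`; the threshold only trades bounded growth in the intermediate exponentials for fewer rescaling passes over the output tile.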
