The paper proves that attention sinks—where attention mass concentrates on a fixed, content-agnostic position—are a necessary consequence of softmax normalization when implementing trigger-conditional behavior in Transformers. The authors show that a task requiring the model to output the average of preceding tokens only when a trigger token is present provably induces an attention sink. They further demonstrate that non-normalized ReLU attention can solve the same task without sinks, highlighting normalization as the key driver, and validate these findings empirically.
Softmax attention's normalization creates unavoidable "attention sinks" when implementing trigger-conditional logic, but ReLU attention offers a sink-free alternative.
Transformers often display an attention sink: probability mass concentrates on a fixed, content-agnostic position. We prove that implementing a simple trigger-conditional behavior necessarily induces a sink in softmax self-attention models. Our results formalize a familiar intuition: normalization over a probability simplex forces attention to collapse onto a stable anchor to realize a default state (e.g., when the model needs to ignore the input). We instantiate this with a concrete task: when a designated trigger token appears, the model must return the average of all preceding token representations, and otherwise output zero; this task mirrors the functionality of attention heads in the wild (Barbero et al., 2025; Guo et al., 2024). We also prove that non-normalized ReLU attention can solve the same task without any sink, confirming that the normalization constraint is the fundamental driver of sink behavior. Experiments validate our predictions and demonstrate they extend beyond the theoretically analyzed setting: softmax models develop strong sinks while ReLU attention eliminates them in both single-head and multi-head variants.
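The core mechanism can be sketched numerically. The snippet below is a minimal illustration (not the paper's construction): softmax constrains attention weights to the probability simplex, so even when all scores are strongly negative the weights still sum to 1 and the mass must land somewhere—hence a sink position is needed to realize a "do nothing" default. Non-normalized ReLU attention can instead output exactly zero weights. The score values here are hypothetical.

```python
import numpy as np

def softmax_attention(scores):
    # Softmax maps scores onto the probability simplex:
    # the weights always sum to 1, so the head cannot attend to "nothing".
    e = np.exp(scores - scores.max())
    return e / e.sum()

def relu_attention(scores):
    # Non-normalized ReLU attention: weights may all be zero,
    # so a zero output needs no anchor position.
    return np.maximum(scores, 0.0)

# Hypothetical scores for a query with no trigger present:
# the head "wants" to ignore every position.
scores = np.array([-5.0, -5.0, -5.0, -5.0])

w_soft = softmax_attention(scores)
w_relu = relu_attention(scores)

print(w_soft.sum())  # 1.0 -- mass is forced somewhere despite the "ignore" intent
print(w_relu.sum())  # 0.0 -- a true default state, no sink required
```

Under this constraint, a softmax head realizing the conditional task must park its residual mass on some fixed position, which is exactly the content-agnostic sink the paper characterizes.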