This paper investigates the impact of applying Rotary Positional Embedding (RoPE) to only a fraction of hidden dimensions in transformer architectures, finding that even a small fraction (around 10%) achieves convergence comparable to full RoPE while yielding up to 10x memory savings. The study systematically examines training dynamics and convergence across various model sizes, sequence lengths, datasets, and architectures. Additionally, the authors show that minimal RoPE or QK-Norm can stabilize training trajectories in models without positional encoding (NoPE).
You can cut RoPE memory costs by up to 10x without sacrificing convergence, simply by applying it to a sliver (~10%) of hidden dimensions.
Rotary Positional Embedding (RoPE) is a common choice in transformer architectures for encoding relative positional information. Although earlier work has examined omitting RoPE in specific layers, the effect of varying the fraction of hidden dimensions that receive rotary transformations remains largely unexplored. This design choice can yield substantial memory savings, which become especially significant at long context lengths: we find up to 10x savings over the standard RoPE cache while achieving comparable final loss. In this work, we present a systematic study of the impact of partial RoPE on training dynamics and convergence across architectures and datasets. Our findings uncover several notable patterns: (1) applying RoPE to only a small fraction of dimensions (around 10%) achieves convergence comparable to using full RoPE; (2) these trends hold consistently across model sizes, sequence lengths, architectures, and datasets of varying quality, with higher-quality data yielding lower overall loss and similar benchmark performance; and (3) some models trained with NoPE (No Positional Encoding) exhibit unstable learning trajectories, which can be alleviated through minimal RoPE application or through QK-Norm, though the latter converges to a higher loss. Together, these results offer practical guidance for model designers aiming to balance efficiency and training stability, while highlighting the previously overlooked importance of partial RoPE.
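To make the core idea concrete, here is a minimal NumPy sketch of partial RoPE for a single attention head. This is an illustrative reconstruction, not the authors' implementation: the function name `partial_rope`, the `rope_frac` parameter, and the choice to rotate the leading dimensions are assumptions for the example. The memory saving arises because the sin/cos cache only needs to cover the rotated slice, not the full head dimension.

```python
import numpy as np

def partial_rope(x, rope_frac=0.1, base=10000.0):
    """Apply rotary embedding to only the first `rope_frac` of head dims.

    x: array of shape (seq_len, head_dim), per-head queries or keys.
    Only the rotated slice needs cached sin/cos tables, which is where
    the savings over a full-width RoPE cache come from.
    """
    seq_len, head_dim = x.shape
    # Round the rotated width down to an even count (RoPE rotates dim pairs).
    rot_dim = int(head_dim * rope_frac) // 2 * 2
    if rot_dim == 0:
        return x.copy()  # degenerate case: no positional encoding (NoPE)

    # Standard RoPE frequencies, computed over the rotated slice only.
    inv_freq = 1.0 / (base ** (np.arange(0, rot_dim, 2) / rot_dim))
    angles = np.outer(np.arange(seq_len), inv_freq)  # (seq_len, rot_dim // 2)
    cos, sin = np.cos(angles), np.sin(angles)

    x_rot, x_pass = x[:, :rot_dim], x[:, rot_dim:]
    x1, x2 = x_rot[:, 0::2], x_rot[:, 1::2]
    rotated = np.empty_like(x_rot)
    rotated[:, 0::2] = x1 * cos - x2 * sin
    rotated[:, 1::2] = x1 * sin + x2 * cos
    # Remaining dimensions pass through with no positional transform.
    return np.concatenate([rotated, x_pass], axis=1)
```

With `rope_frac=0.1` and a head dimension of 20, only 2 of 20 dimensions are rotated, so the sin/cos cache is one tenth of the full-RoPE size, matching the ~10x figure cited in the abstract.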