The paper introduces "algorithmic cores": compact subspaces within transformer networks that are necessary and sufficient for task performance. The goal is to identify computational structures that remain invariant across different training runs. Through experiments on Markov-chain, modular-addition, and GPT-2 models, the study demonstrates that independently trained transformers converge to similar algorithmic cores despite differing weights. These cores, such as a single axis governing subject-verb agreement in GPT-2, are low-dimensional invariants that persist across training runs and model scales.
Independently trained transformers, despite having different weights, converge to the same low-dimensional "algorithmic cores" that drive task performance.
Large language models exhibit sophisticated capabilities, yet understanding how they work internally remains a central challenge. A fundamental obstacle is that training selects for behavior, not circuitry, so many weight configurations can implement the same function. Which internal structures reflect the computation, and which are accidents of a particular training run? This work extracts algorithmic cores: compact subspaces necessary and sufficient for task performance. Independently trained transformers learn different weights but converge to the same cores. Markov-chain transformers embed 3D cores in nearly orthogonal subspaces yet recover identical transition spectra. Modular-addition transformers discover compact cyclic operators at grokking that later inflate, yielding a predictive model of the memorization-to-generalization transition. GPT-2 language models govern subject-verb agreement through a single axis that, when flipped, inverts grammatical number throughout generation across scales. These results reveal low-dimensional invariants that persist across training runs and scales, suggesting that transformer computations are organized around compact, shared algorithmic structures. Mechanistic interpretability could benefit from targeting such invariants -- the computational essence -- rather than implementation-specific details.
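The single-axis intervention described for GPT-2 can be illustrated with a minimal sketch: flipping a hidden state's component along one direction reflects it across the hyperplane orthogonal to that axis, negating the targeted feature while leaving the rest of the representation untouched. The vector `v` (a candidate "grammatical number" axis), the hidden state `h`, and the dimensionality below are illustrative assumptions, not the paper's actual extraction procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a d-dimensional hidden state h and a unit vector v
# standing in for the single axis governing grammatical number.
d = 16
v = rng.normal(size=d)
v /= np.linalg.norm(v)          # make v a unit direction
h = rng.normal(size=d)          # a hidden state at some layer

# Flip the component of h along v, keeping the orthogonal part intact:
#   h' = h - 2 (h . v) v
h_flipped = h - 2.0 * (h @ v) * v

# The projection onto v is exactly negated...
assert np.isclose(h_flipped @ v, -(h @ v))
# ...and the component orthogonal to v is unchanged.
residual = h - (h @ v) * v
assert np.allclose(h_flipped - (h_flipped @ v) * v, residual)
```

Applying such a reflection to the residual stream at every generation step is one simple way an axis flip could invert a feature "throughout generation," as the abstract describes for subject-verb agreement.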