This paper investigates why transformers struggle to generalize to unseen variable names in symbolic reasoning tasks, specifically propositional logic. The authors identify a representational collapse in which the unembeddings of unseen tokens converge to nearly the same vector, hindering the model's ability to differentiate them. By combining architectural modifications that improve copying, increased data diversity, and strategic freezing or resetting of the (un)embeddings, they achieve improved generalization to unseen tokens.
Unseen token generalization in transformers isn't just about copying; it's fundamentally limited by a representational collapse in the unembedding space.
We investigate the ability of decoder-only transformer models to perform abstract symbolic reasoning; specifically, solving propositional logic reasoning problems given in context. Previous work demonstrated that models fail to generalize to problems involving variable names that were not observed during training, and one reason identified for this is the difficulty of copying (or generating) unseen tokens. We show both theoretically and empirically that a particular representational collapse also plays a crucial role: the unembeddings (last-layer weights) of unseen tokens collapse to nearly the same vector during training. This collapse makes it difficult for the model to distinguish multiple unseen variables (especially when the embedding and unembedding parameters are shared), and it provides a mechanistic explanation for the effectiveness of existing heuristic interventions such as "active forgetting", which periodically resets the token (un)embeddings. Based on these observations, we devise a combination of techniques, involving a small architectural change that facilitates copying, data diversity, and freezing or resetting the (un)embeddings, that achieves generalization to unseen tokens. We support our claims with extensive controlled experiments on propositional logic reasoning problems. Beyond synthetic experiments, we also observe evidence of (un)embedding collapse in the open-weight models of the Gemma 3 family, whose vocabulary includes 99 unused tokens reserved for downstream use. Empirically, we find that the correlated embeddings of these tokens are a poor initialization for finetuning applications.
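The collapse the abstract describes can be diagnosed directly from a model's unembedding matrix: if the rows for unseen tokens have drifted toward one shared direction, their pairwise cosine similarities approach 1, while rows for well-trained tokens point in distinct directions. Below is a minimal, self-contained sketch of that diagnostic on synthetic vectors (the `trained`/`unseen` rows here are simulated stand-ins, not weights from any actual model):

```python
import math
import random

random.seed(0)

def cosine(u, v):
    """Cosine similarity between two vectors given as lists of floats."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def mean_pairwise_cosine(rows):
    """Average cosine similarity over all distinct pairs of rows."""
    sims = [cosine(rows[i], rows[j])
            for i in range(len(rows)) for j in range(i + 1, len(rows))]
    return sum(sims) / len(sims)

dim = 16
# Simulated "trained" unembedding rows: independent random directions.
trained = [[random.gauss(0, 1) for _ in range(dim)] for _ in range(5)]
# Simulated "unseen" rows: a shared vector plus small noise, mimicking
# the collapse described in the paper (this is a toy model of it).
shared = [random.gauss(0, 1) for _ in range(dim)]
unseen = [[s + random.gauss(0, 0.05) for s in shared] for _ in range(5)]

print("trained:", round(mean_pairwise_cosine(trained), 3))  # near 0
print("unseen: ", round(mean_pairwise_cosine(unseen), 3))   # near 1
```

Applied to a real checkpoint, the same statistic computed over the unembedding rows of reserved or never-sampled tokens (e.g. the 99 unused Gemma 3 tokens) versus frequent tokens would reveal whether the collapse is present.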