This paper investigates the data efficiency of Transformers and RNNs for state tracking, focusing on in-distribution performance rather than OOD generalization. Through large-scale experiments, the authors demonstrate that Transformers require significantly more training data than RNNs as state-space size and sequence length increase. Furthermore, they show that Transformers exhibit poor weight sharing across different sequence lengths, suggesting that they learn length-specific solutions, unlike RNNs, which effectively amortize learning across lengths.
Transformers struggle with state tracking even in-distribution, requiring far more data than RNNs as sequence length grows, and failing to generalize learned mechanisms across sequence lengths.
Despite the remarkable practical success of transformer-based language models, recent work has raised concerns about their ability to perform state tracking. In particular, a growing body of literature has shown this limitation primarily through failures in out-of-distribution (OOD) generalization, such as length extrapolation. In this work, we shift attention to the in-distribution implications of these limitations. We conduct a large-scale experimental study of the data efficiency of transformers and recurrent neural networks (RNNs) across multiple supervision regimes. We find that the amount of training data required by transformers grows much more rapidly with state-space size and sequence length than for RNNs. Furthermore, we analyze the extent to which learned state-tracking mechanisms are shared across different sequence lengths. We show that transformers exhibit negligible or even detrimental weight sharing across lengths, indicating that they learn length-specific solutions in isolation. In contrast, recurrent models exhibit effective amortized learning by sharing weights across lengths, allowing data from one sequence length to improve performance on others. Together, these results demonstrate that state tracking remains a fundamental challenge for transformers, even when training and evaluation distributions match.
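To make the notion of state tracking concrete, a standard formulation (used in much of the literature the abstract alludes to, though not necessarily the authors' exact setup) is the group word problem: given a sequence of permutations of a small state space, predict the state reached by composing them in order. The sketch below generates such instances; the function name and parameters are illustrative, not taken from the paper. A recurrent model can solve this with a constant-size hidden state updated once per token, which is exactly the recurrence a transformer must instead approximate in parallel.

```python
import itertools
import random

def make_state_tracking_example(n_states=5, seq_len=8, seed=0):
    """Generate one group-word-problem instance: a sequence of random
    permutations of {0, ..., n_states-1}, plus the per-step target,
    i.e. the state reached from state 0 after each prefix."""
    rng = random.Random(seed)
    perms = list(itertools.permutations(range(n_states)))
    seq = [rng.choice(perms) for _ in range(seq_len)]
    state = 0
    targets = []
    for p in seq:
        state = p[state]   # one O(1) recurrent update per input token
        targets.append(state)
    return seq, targets

seq, targets = make_state_tracking_example(n_states=5, seq_len=8, seed=0)
```

Data efficiency in this setting is then measured by how many such (sequence, target) pairs each architecture needs before it predicts the final state reliably, as `n_states` and `seq_len` grow.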