This paper provides a theoretical characterization of forgetting in continual learning by shifting the focus from task order to task distribution. It derives an exact operator identity for the forgetting quantity in an exact-fit linear regression regime where tasks are sampled i.i.d. from a task distribution. The analysis establishes an unconditional upper bound on forgetting, identifies the leading asymptotic term, characterizes the convergence rate up to constants in generic nondegenerate cases, and relates this rate to geometric properties of the task distribution.
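For concreteness, here is a minimal sketch of the exact-fit regime under the standard continual linear regression conventions of \citet{evron2022catastrophic}; the notation $(X_t, y_t)$, $w_t$, $w^\star$, and $F_T$ is illustrative rather than taken from the paper. Each task is a consistent linear system drawn i.i.d.\ from $\Pi$, and the learner makes the minimal-norm change that exactly fits the new task:

$$
w_t \;=\; \operatorname*{arg\,min}_{w:\,X_t w = y_t} \|w - w_{t-1}\|_2 \;=\; w_{t-1} + X_t^{\dagger}\bigl(y_t - X_t w_{t-1}\bigr), \qquad (X_t, y_t) \overset{\text{i.i.d.}}{\sim} \Pi,
$$

$$
F_T \;=\; \frac{1}{T}\sum_{t=1}^{T} \bigl\|X_t w_T - y_t\bigr\|^2 .
$$

Under realizability ($y_t = X_t w^\star$), each update is an orthogonal projection of the error $w_t - w^\star$ onto the kernel of $X_t$, so $F_T$ is governed by products of i.i.d.\ random projections; this is the recursive operator structure the summary refers to.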
Forget about task order: the *distribution* of tasks itself dictates the rate and nature of forgetting in continual learning.
A central challenge in continual learning is forgetting, the loss of performance on previously learned tasks induced by sequential adaptation to new ones. While forgetting has been extensively studied empirically, rigorous theoretical characterizations remain limited. A notable step in this direction is \citet{evron2022catastrophic}, which analyzes forgetting under random orderings of a fixed task collection in overparameterized linear regression. We shift the perspective from order to distribution. Rather than asking how a fixed task collection behaves under random orderings, we study an exact-fit linear regression regime in which tasks are sampled i.i.d.\ from a task distribution~$\Pi$, and ask how the generating distribution itself governs forgetting. In this setting, we derive an exact operator identity for the forgetting quantity, revealing a recursive spectral structure. Building on this identity, we establish an unconditional upper bound, identify the leading asymptotic term, and, in generic nondegenerate cases, characterize the convergence rate up to constants. We further relate this rate to geometric properties of the task distribution, clarifying what drives slow or fast forgetting in this model.
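As a complement, the following is a minimal numerical sketch of this regime. All specifics are illustrative assumptions rather than the paper's setup: a Gaussian choice for $\Pi$, the dimensions, and the averaged-residual definition of forgetting.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 50   # ambient dimension
m = 5    # rows (samples) per task; m << d gives the overparameterized regime
T = 200  # number of sequentially fitted tasks

# Realizability: every task is consistent with one shared solution w*.
w_star = rng.normal(size=d)

w = np.zeros(d)
tasks = []
for _ in range(T):
    # Draw one task (X, y) i.i.d. from an assumed Gaussian task distribution Pi.
    X = rng.normal(size=(m, d))
    y = X @ w_star
    tasks.append((X, y))
    # Exact-fit, minimal-norm change: orthogonally project w onto {w : Xw = y}.
    w = w + np.linalg.pinv(X) @ (y - X @ w)

# Forgetting: average squared residual of the final iterate on all seen tasks.
F_T = np.mean([np.sum((X @ w - y) ** 2) for X, y in tasks])
print(f"forgetting after T={T} tasks: {F_T:.4e}")
```

Because tasks are drawn i.i.d.\ rather than cycled from a fixed pool, this probes the distributional regime studied here: varying $m/d$ or the row covariance changes the geometry of $\Pi$ and, per the analysis, the rate at which forgetting decays.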