This paper introduces Proximal Decoupling, a novel continual learning approach that separates task learning from stability enforcement using operator splitting. The method minimizes the current task loss in one step and then applies a sparse regularizer to prune redundant parameters and preserve task-relevant ones in a separate step. By decoupling learning and retention, Proximal Decoupling achieves state-of-the-art results on standard continual learning benchmarks, improving both stability and adaptability without complex mechanisms like replay buffers.
Decoupling learning and memory lets models learn new tasks without catastrophic forgetting, outperforming standard regularization techniques.
In continual learning, the primary challenge is to learn new information without forgetting old knowledge. A common solution addresses this trade-off through regularization, penalizing changes to parameters critical for previous tasks. In most cases, this regularization term is added directly to the training loss and optimized with standard gradient descent, which blends learning and retention signals into a single update and does not explicitly separate essential parameters from redundant ones. As task sequences grow, this coupling can over-constrain the model, limiting forward transfer and leading to inefficient use of capacity. We propose a different approach that separates task learning from stability enforcement via operator splitting. The learning step focuses on minimizing the current task loss, while a proximal stability step applies a sparse regularizer to prune unnecessary parameters and preserve task-relevant ones. This turns the stability-plasticity trade-off into a negotiated update between two complementary operators, rather than a single conflicting gradient. We provide theoretical justification for the splitting method on the continual-learning objective, and demonstrate that our proposed solver achieves state-of-the-art results on standard benchmarks, improving both stability and adaptability without the need for replay buffers, Bayesian sampling, or meta-learning components.
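The two-operator update can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it assumes the sparse regularizer is an L1 penalty, whose proximal operator is soft-thresholding, and the per-parameter `importance` weighting that protects parameters from earlier tasks is a hypothetical knob introduced here for illustration.

```python
import numpy as np

def soft_threshold(w, tau):
    """Proximal operator of the L1 norm: shrinks every weight toward
    zero and prunes those whose magnitude falls below tau."""
    return np.sign(w) * np.maximum(np.abs(w) - tau, 0.0)

def decoupled_update(w, grad_task, lr=0.1, tau=0.01, importance=None):
    """One split update (illustrative sketch, not the paper's exact solver):
    (1) a plain gradient step on the current task loss, then
    (2) a proximal stability step that sparsifies redundant parameters.
    `importance` is a hypothetical per-parameter array; higher values
    reduce shrinkage on parameters critical for previous tasks."""
    w = w - lr * grad_task                 # learning step: current task only
    if importance is not None:
        tau = tau / (1.0 + importance)     # protect important parameters (assumption)
    return soft_threshold(w, tau)          # stability step: prune the rest
```

Because the proximal step is applied after, rather than mixed into, the gradient step, the learning signal never competes with the retention penalty inside a single gradient, which is the decoupling the abstract describes.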