This paper introduces Target-Aligned Reinforcement Learning (TARL), a method that addresses the stability-recency tradeoff of target networks by selectively updating the online network based on how well online and target estimates agree. By prioritizing transitions where the two networks are aligned, TARL reduces the influence of outdated target estimates. Theoretical analysis and experiments on benchmark environments show that TARL accelerates convergence and outperforms standard RL algorithms.
Target networks don't have to be a necessary evil: aligning online and target network estimates can actually *accelerate* RL convergence.
Many reinforcement learning algorithms rely on target networks (lagged copies of the online network) to stabilize training. While effective, this mechanism introduces a fundamental stability-recency tradeoff: slower target updates improve stability but reduce the recency of learning signals, hindering convergence speed. We propose Target-Aligned Reinforcement Learning (TARL), a framework that emphasizes transitions for which the target and online network estimates are highly aligned. By focusing updates on well-aligned targets, TARL mitigates the adverse effects of stale target estimates while retaining the stabilizing benefits of target networks. We provide a theoretical analysis showing that target alignment correction accelerates convergence, and empirically demonstrate consistent improvements over standard reinforcement learning algorithms across various benchmark environments.
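To make the core idea concrete, here is a minimal tabular sketch of an alignment-weighted TD update. The abstract does not specify TARL's exact weighting scheme, so the exponential weight `exp(-gap / tau)`, the function names, and the hyperparameters below are illustrative assumptions, not the paper's method:

```python
import numpy as np

def alignment_weights(q_online, q_target, states, actions, tau=1.0):
    """Down-weight transitions where online and target estimates disagree.

    NOTE: an assumed exponential weighting; the paper's actual alignment
    measure may differ.
    """
    gap = np.abs(q_online[states, actions] - q_target[states, actions])
    return np.exp(-gap / tau)  # weight near 1 when aligned, near 0 when stale

def tarl_td_update(q_online, q_target, batch, gamma=0.99, lr=0.5, tau=1.0):
    """One TARL-style update on a batch of transitions (s, a, r, s', done)."""
    s, a, r, s2, done = batch
    # Standard target-network bootstrap value.
    target = r + gamma * (1.0 - done) * q_target[s2].max(axis=1)
    td_error = target - q_online[s, a]
    # Scale each transition's update by online/target agreement.
    w = alignment_weights(q_online, q_target, s, a, tau)
    q_online[s, a] += lr * w * td_error
    return q_online
```

Under this weighting, a transition whose target estimate has drifted far from the online estimate contributes almost nothing to the update, while well-aligned transitions receive close to the full learning rate, which is the intuition behind retaining stability without sacrificing recency.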