This paper introduces Low-Rank Adaptation (LoRA) to regularize critic learning in off-policy reinforcement learning, addressing the overfitting and instability associated with large critics. The authors develop a LoRA formulation compatible with SimbaV2 that preserves its hyperspherical normalization geometry. Experiments with SAC and FastTD3 on DeepMind Control and IsaacLab benchmarks show that LoRA achieves lower critic loss and stronger policy performance, demonstrating its effectiveness as a structural regularizer.
Freezing most of your critic network and only training a tiny LoRA adapter can dramatically improve off-policy RL performance and stability.
Scaling critic capacity is a promising direction for enhancing off-policy reinforcement learning (RL). However, larger critics are prone to overfitting and instability in replay-buffer-based bootstrap training. This paper leverages Low-Rank Adaptation (LoRA) as a structural-sparsity regularizer for off-policy critics. Our approach freezes randomly initialized base matrices and optimizes only the low-rank adapters, thereby constraining critic updates to a low-dimensional subspace. Building on SimbaV2, we further develop a LoRA formulation that preserves its hyperspherical normalization geometry under frozen-backbone training. We evaluate our method with SAC and FastTD3 on DeepMind Control locomotion and IsaacLab robotics benchmarks. LoRA consistently achieves lower critic loss during training and stronger policy performance. Extensive experiments demonstrate that adaptive low-rank updates provide a simple, scalable, and effective structural regularization for critic learning in off-policy RL.
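The core parameterization described in the abstract can be sketched as follows. This is a minimal, illustrative example, not the paper's implementation: dimensions, the scaling factor, and initialization choices are assumptions, and the SimbaV2-specific hyperspherical normalization is omitted. It shows the standard LoRA decomposition, where the effective weight is a frozen random base matrix plus a trainable low-rank product, so only a small fraction of parameters receives gradient updates.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, rank = 64, 64, 4  # illustrative sizes; rank << d_in

# Frozen, randomly initialized base weight (never updated during training).
W0 = rng.normal(0.0, 1.0 / np.sqrt(d_in), size=(d_out, d_in))

# Trainable low-rank adapter. B starts at zero, the usual LoRA convention,
# so the initial forward pass matches the frozen base network exactly.
A = rng.normal(0.0, 1.0 / np.sqrt(d_in), size=(rank, d_in))
B = np.zeros((d_out, rank))
alpha = 8.0  # assumed scaling hyperparameter; alpha / rank scales the adapter

def lora_forward(x):
    # Effective weight: W0 + (alpha / rank) * B @ A.
    # Gradients would flow only into A and B; W0 stays fixed.
    return x @ (W0 + (alpha / rank) * (B @ A)).T

x = rng.normal(size=(2, d_in))
y = lora_forward(x)

trainable = A.size + B.size  # parameters actually optimized
full = W0.size               # parameters a dense critic layer would train
```

With these sizes the adapter trains 2 * 64 * 4 = 512 parameters per layer instead of 4096, which is the structural constraint the paper exploits: updates are confined to a rank-4 subspace of the critic's weight space.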