This paper investigates whether complex representation learning objectives are necessary for Behavioral Foundation Models (BFMs) in zero-shot RL. The authors find that simple self-supervised next-state prediction in latent space, when combined with an orthogonality regularization term that maintains feature diversity, can match or surpass state-of-the-art BFM methods. The proposed method, Regularized Latent Dynamics Prediction (RLDP), also outperforms existing approaches in low-coverage scenarios.
Forget fancy objectives: a simple, regularized latent dynamics model can achieve state-of-the-art zero-shot RL performance, even with limited data.
Behavioral Foundation Models (BFMs) produce agents capable of adapting to any unknown reward or task. However, these methods can only produce near-optimal policies for reward functions that lie in the span of some pre-existing state features, making the choice of state features crucial to the expressivity of the BFM. As a result, BFMs are trained with a variety of complex objectives and require sufficient dataset coverage to learn task-useful spanning features. In this work, we examine the question: are these complex representation learning objectives necessary for zero-shot RL? Specifically, we revisit the objective of self-supervised next-state prediction in latent space for state feature learning, but observe that this objective alone tends to increase state-feature similarity, thereby reducing span. We propose an approach, Regularized Latent Dynamics Prediction (RLDP), that adds a simple orthogonality regularization to maintain feature diversity, and show that it can match or surpass state-of-the-art complex representation learning methods for zero-shot RL. Furthermore, we empirically show that prior approaches perform poorly in low-coverage scenarios where RLDP still succeeds.
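To make the idea concrete, here is a minimal sketch of what such a regularized latent-dynamics objective might look like. This is not the paper's exact formulation: the squared-error prediction loss, the Gram-matrix-to-identity orthogonality penalty, the stop-gradient on the target features, and the weighting coefficient `lam` are all assumptions chosen for illustration.

```python
# Hypothetical sketch of a regularized latent-dynamics loss (illustrative, not the paper's exact method).
import torch
import torch.nn as nn
import torch.nn.functional as F


def rldp_style_loss(phi, dynamics, s, a, s_next, lam=1.0):
    """phi: state encoder; dynamics: latent transition model; lam: assumed regularization weight."""
    z = phi(s)                      # latent features of current states, shape (B, d)
    z_next = phi(s_next).detach()   # target next-state features (stop-gradient is an assumption)
    z_pred = dynamics(z, a)         # predicted next-state latents

    # Self-supervised next-state prediction in latent space.
    pred_loss = F.mse_loss(z_pred, z_next)

    # Orthogonality regularization (one plausible instantiation): push the batch
    # similarity matrix toward identity so features of different states stay diverse.
    z_norm = F.normalize(z, dim=-1)
    gram = z_norm @ z_norm.T
    ortho_loss = (gram - torch.eye(gram.size(0), device=gram.device)).pow(2).mean()

    return pred_loss + lam * ortho_loss


if __name__ == "__main__":
    # Tiny usage example with made-up dimensions and random data.
    d_state, d_act, d_lat = 8, 2, 16
    phi = nn.Linear(d_state, d_lat)
    dyn_net = nn.Sequential(nn.Linear(d_lat + d_act, 64), nn.ReLU(), nn.Linear(64, d_lat))
    dynamics = lambda z, a: dyn_net(torch.cat([z, a], dim=-1))
    s, a, s_next = torch.randn(32, d_state), torch.randn(32, d_act), torch.randn(32, d_state)
    print(rldp_style_loss(phi, dynamics, s, a, s_next).item())
```

The intuition the sketch captures is the abstract's: the prediction term alone can be minimized by collapsing all states to similar features, while the orthogonality term counteracts that collapse and preserves the span of the learned features.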