This paper introduces Safe-Support Q-Learning, a novel safe RL framework that eliminates unsafe state visitation during training by using a behavior policy supported on a safe set. The method employs a two-stage approach, first training a Q-function with a KL-regularized Bellman target to stay close to the behavior policy, and then extracting a parametric policy from the learned Q-values. Experiments show the method achieves stable learning, well-calibrated value estimates, and safer behavior compared to existing baselines.
Under the assumption that the behavior policy keeps trajectories inside a safe set, unsafe state visits can be eliminated entirely during training, opening the door to deploying RL agents in high-risk environments where exploratory failures were previously unacceptable.
Ensuring safety during reinforcement learning (RL) training is critical in real-world applications where unsafe exploration can lead to catastrophic outcomes. While most safe RL methods mitigate risk through constraints or penalties, they still allow visits to unsafe states during training. In this work, we adopt a stricter safety requirement that eliminates unsafe state visitation during training. To this end, we propose a Q-learning-based safe RL framework that leverages a behavior policy supported on a safe set. Under the assumption that the induced trajectories remain within the safe set, this policy enables sufficient exploration within the safe region without requiring near-optimality. We adopt a two-stage framework in which the Q-function and the policy are trained separately. Specifically, we introduce a KL-regularized Bellman target that constrains the Q-function to remain close to the behavior policy. We then derive the policy induced by the trained Q-values and propose a parametric policy-extraction method to approximate the optimal policy. Our approach provides a unified framework that can be adapted to different action spaces and types of behavior policies. Experimental results demonstrate that the proposed method achieves stable learning and well-calibrated value estimates, and yields safer behavior with performance comparable to or better than existing baselines.
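The abstract does not spell out the exact form of the target, but a KL-regularized Bellman backup toward a behavior policy has a standard closed form for finite action spaces, which the sketch below illustrates. Everything here is an assumption made for illustration, not the paper's implementation: the numpy setting, the temperature `tau`, the function names, and the discrete-action case; the paper's actual update may differ.

```python
import numpy as np

def kl_regularized_target(r, gamma, q_next, mu_next, tau=0.1, done=False):
    """Generic KL-regularized Bellman target for a finite action space.

    The soft next-state value is a log-sum-exp weighted by the behavior
    policy mu, so it never relies on actions outside mu's support:
        V(s') = tau * log sum_a mu(a|s') * exp(Q(s',a) / tau)
    """
    z = q_next / tau
    z_max = np.max(z[mu_next > 0])  # stabilize log-sum-exp over the supported actions
    v_next = tau * (z_max + np.log(np.sum(mu_next * np.exp(z - z_max))))
    return r + gamma * (1.0 - float(done)) * v_next

def induced_policy(q, mu, tau=0.1):
    """Policy induced by the Q-values: pi(a|s) ∝ mu(a|s) * exp(Q(s,a)/tau).

    Because pi is proportional to mu, it inherits mu's support: actions
    with mu(a|s) = 0 (outside the safe set) keep zero probability.
    """
    z = q / tau
    w = mu * np.exp(z - np.max(z[mu > 0]))
    return w / w.sum()

# Tiny usage example: 3 actions, with one unsafe action masked out of mu.
q_next = np.array([1.0, 2.0, 5.0])
mu_next = np.array([0.5, 0.5, 0.0])   # action 2 is outside the safe support
y = kl_regularized_target(r=1.0, gamma=0.99, q_next=q_next, mu_next=mu_next)
pi = induced_policy(q_next, mu_next)
print(y, pi)                          # pi[2] == 0: the unsafe action stays excluded
```

Because the induced policy is proportional to mu(a|s) * exp(Q(s,a)/tau), it inherits the behavior policy's support: actions assigned zero probability by mu remain zero-probability under pi, which is the mechanism a support-constrained safe RL scheme of this kind relies on.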