This paper introduces a shielded reinforcement learning framework that uses sequential control barrier functions to satisfy Signal Temporal Logic (STL) constraints during learning. Going beyond simple safety constraints, the approach lets RL agents satisfy complex spatio-temporal tasks, such as visiting dynamic targets with unknown trajectories. Simulations demonstrate the framework's effectiveness in enforcing rich STL specifications.
Enforcing complex mission objectives during RL training becomes tractable: this method guarantees Signal Temporal Logic constraints, letting robots learn while adhering to dynamic, time-sensitive tasks.
Reinforcement Learning (RL) has shown promise in various robotics applications, yet its deployment on real systems is still limited by safety and operational constraints. The field of safe RL, which focuses on imposing safety constraints throughout the learning process, has gained considerable attention in recent years. However, real systems often require constraints more complex than safety alone, such as periodic recharging or time-bounded visits to specific regions. Enforcing such spatio-temporal tasks during learning remains a challenge. Signal Temporal Logic (STL) is a formal language for specifying temporal properties of real-valued signals and provides a way to express such complex tasks. In this paper, we propose a framework that leverages sequential control barrier functions and model-free RL to ensure that the given STL tasks are satisfied throughout the learning process. Our method extends beyond traditional safety constraints by enforcing rich STL specifications, which can involve visits to dynamic targets with unknown trajectories. We demonstrate the effectiveness of our framework through various simulations.
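As a concrete illustration of the shielding idea, the sketch below enforces a single STL subtask of the form F_[0,T](||x - p|| <= r), i.e. "reach within radius r of target p by deadline T", on single-integrator dynamics using a time-varying control barrier function. The shrinking-radius barrier gamma(t), the helper names `shield` and `gamma`, and the closed-form projection used in place of a quadratic program are all illustrative assumptions for this sketch, not the paper's exact construction.

```python
import numpy as np

# Sketch: a CBF "shield" for the STL subtask F_[0,T](||x - p|| <= r)
# ("be within radius r of target p at some time before T") under
# single-integrator dynamics x' = u. The barrier h(x, t) = gamma(t) - ||x - p||
# with gamma shrinking from the initial distance down to r forces the agent
# toward the target by the deadline, regardless of the learner's raw actions.

T, r, alpha, dt = 10.0, 0.5, 1.0, 0.01
p = np.array([4.0, 3.0])                 # target position (assumed known here)

def gamma(t, gamma0):
    """Shrinking radius: gamma(0) = gamma0, gamma(T) = r, constant slope."""
    return r + (gamma0 - r) * max(0.0, (T - t) / T)

def shield(x, t, u_rl, gamma0):
    """Minimally modify u_rl so that h(x, t) = gamma(t) - ||x - p|| stays >= 0.

    The CBF condition dh/dt + grad_x h . u >= -alpha * h is one linear
    inequality a . u >= b in u, so instead of a QP we use the closed-form
    projection of u_rl onto the half-space (||a|| = 1 here).
    """
    d = np.linalg.norm(x - p)
    h = gamma(t, gamma0) - d
    a = -(x - p) / max(d, 1e-9)          # grad_x h (unit vector)
    dgamma = -(gamma0 - r) / T if t < T else 0.0
    b = -dgamma - alpha * h              # constraint: a . u >= b
    if a @ u_rl >= b:
        return u_rl                      # RL action already satisfies the task
    return u_rl + (b - a @ u_rl) * a     # closest action on the constraint

rng = np.random.default_rng(0)
x = np.array([0.0, 0.0])
gamma0 = np.linalg.norm(x - p) + 0.1     # ensures h(x0, 0) >= 0 initially

for k in range(int(T / dt)):
    u_rl = rng.normal(size=2)            # stand-in for the learning policy
    u = shield(x, k * dt, u_rl, gamma0)
    x = x + dt * u

print("final distance to target:", np.linalg.norm(x - p))  # ~<= r by time T
```

The paper's framework sequences such barrier functions to cover a full STL specification (including moving targets); this sketch shows only one reach subtask with a static target, and its invariance guarantee is exact in continuous time and approximate under the Euler discretization used here.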