This paper reviews the integration of Reinforcement Learning (RL) with the Robot Operating System (ROS) to address challenges in robotics such as sensor modeling and dynamic environments. It analyzes ROS-based RL applications across domains, categorizing them by application area, RL algorithm type, and degree of ROS-RL integration. The review identifies the advantages and limitations of different RL techniques within ROS-based robotics and provides recommendations for specific applications and environments.
ROS-based RL offers superior decision making, improved perception, enhanced automation, and reliability in robotics.
Common challenges in robotics include sensor modeling, dynamic operating environments, and limited on-board computational resources. To improve decision making, robots need a dependable framework that facilitates communication between different modules and supports selecting optimal actions in real-world applications. The Robot Operating System (ROS) and Reinforcement Learning (RL) are two promising approaches for achieving precise control, seamless sensor-actuator integration, and learned behavior. ROS enables seamless communication between heterogeneous components, while RL focuses on learning optimal behaviors through trial and error. Combining ROS and RL offers superior decision making, improved perception, enhanced automation, and greater reliability. This work investigates ROS-based RL applications across various domains, aiming to enhance understanding through comprehensive discussion, analysis, and summarization. We base our evaluation on the application area, the type of RL algorithm used, and the degree of ROS-RL integration. Additionally, we provide a summary of seminal works that define the current state of the art, along with GitHub repositories and resources for research purposes. Based on a review of successfully implemented projects, we make recommendations highlighting the advantages and limitations of RL techniques for specific applications and environments. The ultimate goal of this work is to advance the robotics field by providing a comprehensive overview of recent important works that incorporate both ROS and RL, thereby improving the adaptability of these emerging techniques.
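The trial-and-error learning that RL contributes to a ROS-based stack can be sketched with a minimal tabular Q-learning loop. The example below is a hypothetical stand-in, not from the paper: a one-dimensional corridor replaces the robot's environment (in a real ROS system, `step` would publish an action and read back sensor observations via topics), and all names and parameters are illustrative assumptions.

```python
import random

# Toy stand-in for a robot task: a 1-D corridor of 6 cells.
# The agent starts at cell 0 and must reach cell 5 (the goal).
# Actions: 0 = step left, 1 = step right.
N_STATES, GOAL = 6, 5
ACTIONS = [0, 1]

def step(state, action):
    """Environment dynamics: move one cell, reward +1 only at the goal.
    In a ROS deployment this is where action publishing and sensor
    feedback would happen; here it is simulated directly."""
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward, nxt == GOAL

def greedy(q_s, rng):
    """Pick a highest-valued action, breaking ties at random."""
    best = max(q_s)
    return rng.choice([a for a in ACTIONS if q_s[a] == best])

def train(episodes=500, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]  # tabular Q-values
    for _ in range(episodes):
        s, done, steps = 0, False, 0
        while not done and steps < 100:
            # epsilon-greedy: explore occasionally, otherwise exploit
            a = rng.choice(ACTIONS) if rng.random() < eps else greedy(q[s], rng)
            s2, r, done = step(s, a)
            # Q-learning update: learn from each trial's outcome
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s, steps = s2, steps + 1
    return q

q = train()
# Extract the learned policy: it should step right toward the goal.
policy = [max(ACTIONS, key=lambda a: q[s][a]) for s in range(N_STATES)]
print(policy)
```

The same epsilon-greedy update loop underlies most of the value-based methods surveyed; deep RL variants replace the Q-table with a neural network, while the ROS side of an integration supplies the `step` interface through topics, services, or a simulator bridge such as Gazebo.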