This paper introduces an adaptive reinforcement learning algorithm with a dynamic reward function to enable energy-efficient path planning for mobile robots in uneven terrains. The algorithm learns to minimize total traversal energy in a 2.5D grid world by iteratively improving its policy through experience, without relying on prior models or energy-cost maps. Simulation results demonstrate a 10.9% reduction in energy consumption compared to shortest-path methods and comparable performance to deterministic model-based planners.
Mobile robots can now navigate uneven terrain with significantly improved energy efficiency thanks to a novel reinforcement learning approach that doesn't require prior knowledge of the environment.
Efficient navigation of mobile robots through partially known, uneven terrains remains a significant challenge due to the impact of terrain features on motion costs. This paper presents a novel adaptive reinforcement learning approach using a dynamic reward function to address this issue. The proposed algorithm enables learning of energy-efficient paths by estimating cumulative energy costs in a two-and-a-half dimensional (2.5D) grid world, without requiring prior models or energy-cost maps. Unlike conventional reinforcement learning approaches that optimize step-wise energy, our method focuses on minimizing the total traversal energy. Based on classical Q-learning, the agent iteratively improves its policy through experience. Simulation results show that the proposed approach reduces energy consumption by 10.9% compared to shortest-path methods and achieves comparable performance to deterministic, model-based planners optimized for energy.
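The core idea — tabular Q-learning over an elevation grid where the per-step reward is the negative energy cost, so the cumulative return approximates the negative total traversal energy — can be sketched as follows. This is not the paper's algorithm (its dynamic reward function is not specified here); the elevation map, the energy model (unit cost per move plus a penalty proportional to climb), the terminal bonus, and all hyperparameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2.5D grid: each cell stores an elevation. The direct route
# along row 0 crosses a ridge; the row-1 "valley" route is longer but flat.
H = np.array([
    [0.0, 2.0, 2.0, 0.0],
    [0.0, 0.0, 0.0, 0.0],
])
ROWS, COLS = H.shape
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right
START, GOAL = (0, 0), (0, 3)

def step_energy(a, b):
    # Illustrative energy model: unit cost per move plus a climbing penalty.
    dz = H[b] - H[a]
    return 1.0 + 5.0 * max(dz, 0.0)

Q = np.zeros((ROWS, COLS, len(ACTIONS)))
alpha, gamma, eps = 0.5, 0.99, 0.2

for episode in range(2000):
    s = START
    for _ in range(100):
        if s == GOAL:
            break
        # Epsilon-greedy action selection.
        if rng.random() < eps:
            a = int(rng.integers(len(ACTIONS)))
        else:
            a = int(np.argmax(Q[s[0], s[1]]))
        dr, dc = ACTIONS[a]
        nxt = (s[0] + dr, s[1] + dc)
        if not (0 <= nxt[0] < ROWS and 0 <= nxt[1] < COLS):
            nxt = s  # moving off the grid wastes energy in place
        # Reward is negative step energy, so maximizing the return
        # (roughly) minimizes total traversal energy, not step-wise energy.
        r = -step_energy(s, nxt)
        if nxt == GOAL:
            r += 10.0  # terminal bonus for reaching the goal
        target = r if nxt == GOAL else r + gamma * np.max(Q[nxt[0], nxt[1]])
        Q[s[0], s[1], a] += alpha * (target - Q[s[0], s[1], a])
        s = nxt

def greedy_path(max_steps=20):
    # Follow the learned policy from START until GOAL or the step limit.
    s, path = START, [START]
    for _ in range(max_steps):
        if s == GOAL:
            break
        dr, dc = ACTIONS[int(np.argmax(Q[s[0], s[1]]))]
        nxt = (s[0] + dr, s[1] + dc)
        if not (0 <= nxt[0] < ROWS and 0 <= nxt[1] < COLS):
            break
        path.append(nxt)
        s = nxt
    return path

path = greedy_path()
energy = sum(step_energy(a, b) for a, b in zip(path, path[1:]))
```

On this toy map the learned policy takes the longer flat detour (total energy 5.0) rather than the 3-step ridge route (energy 13.0), mirroring the paper's contrast between energy-optimal and shortest paths.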