Advancements in artificial intelligence (AI) have driven robotics to the forefront of technological innovation, enhancing productivity and safety across industries. Autonomous navigation, especially in unstructured environments with irregular terrain and dynamic obstacles, remains a key challenge. This paper introduces a vision-controlled autonomous navigation framework that enables robots to traverse complex environments using only vision sensors and image processing. The system integrates visual segmentation, optimized path planning, and advanced trajectory tracking. Key contributions include: (1) Semantic Mapping and Localization - A target detection network generates a global semantic map from local views, enhancing perception without external markers; (2) Improved Path Planning - The RRT-Connect algorithm is refined for safer, adaptive navigation in unpredictable terrain; (3) Accurate Trajectory Control - A Soft Actor-Critic (SAC)-based model reduces tracking errors and enhances path-following precision; (4) Empirical Validation - Experiments with a magnetic miniature robot in unstructured environments confirm the system's robustness and accuracy. The proposed framework addresses existing limitations, paving the way for more autonomous and resilient robotic systems in complex environments.
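The abstract names RRT-Connect as the basis of the path planner but does not detail the paper's refinements. As background, the baseline bidirectional algorithm can be sketched as follows; this is a minimal 2D illustration, not the authors' implementation, and all parameter values (step size, bounds, iteration budget) are illustrative. Collision checking is done only at sampled vertices for brevity; a real planner would also check the edges between them.

```python
import math
import random


def rrt_connect(start, goal, is_free, step=0.5, max_iters=2000,
                bounds=(0.0, 10.0), seed=0):
    """Baseline RRT-Connect: grow one tree from the start and one from the
    goal, extending toward random samples and greedily connecting the trees.
    Returns a list of waypoints from start to goal, or None on failure."""
    rng = random.Random(seed)
    # Each tree maps node -> parent (None for the root).
    tree_a, tree_b = {start: None}, {goal: None}

    def nearest(tree, q):
        # Nearest existing node to q (linear scan; a k-d tree would scale better).
        return min(tree, key=lambda n: math.dist(n, q))

    def steer(q_near, q_target):
        # Move from q_near toward q_target by at most one step.
        d = math.dist(q_near, q_target)
        if d <= step:
            return q_target
        t = step / d
        return (q_near[0] + t * (q_target[0] - q_near[0]),
                q_near[1] + t * (q_target[1] - q_near[1]))

    def extend(tree, q_target):
        # One EXTEND operation; returns the new node or None if blocked.
        q_near = nearest(tree, q_target)
        q_new = steer(q_near, q_target)
        if is_free(q_new):
            tree[q_new] = q_near
            return q_new
        return None

    def path_to_root(tree, node):
        path = []
        while node is not None:
            path.append(node)
            node = tree[node]
        return path

    for _ in range(max_iters):
        q_rand = (rng.uniform(*bounds), rng.uniform(*bounds))
        q_new = extend(tree_a, q_rand)
        if q_new is None:
            continue
        # CONNECT: repeatedly extend the goal tree toward q_new until it
        # reaches q_new exactly or hits an obstacle.
        q = extend(tree_b, q_new)
        while q is not None and q != q_new:
            q = extend(tree_b, q_new)
        if q == q_new:
            # Trees meet at q_new: splice start->q_new with q_new->goal.
            return (path_to_root(tree_a, q_new)[::-1]
                    + path_to_root(tree_b, q_new)[1:])
    return None
```

For example, `rrt_connect((1.0, 1.0), (9.0, 9.0), lambda q: math.dist(q, (5.0, 5.0)) > 1.5)` plans around a circular obstacle of radius 1.5 centered at (5, 5), returning waypoints whose consecutive spacing never exceeds the step size.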