This survey reviews AI-enabled computer vision techniques for space robotic missions, focusing on deep-learning approaches for tasks such as image classification, object detection, and pose estimation across Entry, Descent, and Landing (EDL), orbital operations, and surface exploration. It highlights hybrid pipelines that combine deep neural networks with classical geometry for terrain-relative navigation and for pose estimation of uncooperative targets. The survey identifies challenges such as computational constraints, dataset limitations, and limited adaptability, advocating lightweight architectures, synthetic data, and robust sensor fusion to enhance autonomy and resilience.
Despite progress in autonomous landing and rover navigation, AI-based computer vision for space robotics still struggles with onboard computational limits, insufficient datasets, and dynamically changing environments.
This survey provides a comprehensive overview of recent advancements and challenges in Artificial Intelligence (AI)‐enabled computer vision (CV) techniques for space robotic missions, spanning critical phases such as Entry, Descent, and Landing (EDL), orbital operations, and planetary surface exploration. Emphasis is placed on deep‐learning–based approaches for image classification, object detection, semantic segmentation, relative pose estimation, and feature matching. State‐of‐the‐art methods in terrain‐relative navigation, crater‐based or rock‐feature matching, and pose estimation for uncooperative targets are highlighted, illustrating the progress achieved through hybrid pipelines combining deep neural networks with classical geometry. The paper also critically evaluates publicly available orbital and planetary data sets—along with the increasing role of synthetic data—for developing and benchmarking CV algorithms under strict resource limitations and harsh environmental conditions. Despite demonstrated success in tasks like autonomous landing, debris removal, and rover navigation, current solutions face significant hurdles. These include computational constraints on onboard hardware, insufficient coverage of planetary conditions in existing data sets, and limited adaptability to dynamically changing environments. To address these shortcomings, research must prioritize lightweight neural architectures, advanced synthetic data generation, adaptive or incremental learning, and robust multisensor fusion. By integrating these strategies, AI‐based CV systems can advance autonomy, precision, and resilience in future space missions.
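To make the idea of a hybrid pipeline concrete, the sketch below illustrates the classical-geometry half of a crater-matching step for terrain-relative navigation: a deep detector (not shown, and hypothetical here) is assumed to supply crater centroids in the image, which are then matched to a reference crater map and used to solve for a translation offset in closed form. All function names, coordinates, and the noise values are illustrative assumptions, not methods from the surveyed papers.

```python
# Sketch of the geometric stage of a hybrid terrain-relative navigation
# pipeline. Assumption: an upstream neural detector has already produced
# 2D crater centroids ("detected"); we match them against a georeferenced
# crater map and recover the lander's planar offset by least squares.
import math

def match_craters(detected, map_craters, max_dist=5.0):
    """Greedy nearest-neighbour matching of detected craters to map craters."""
    pairs, used = [], set()
    for dx, dy in detected:
        best, best_d = None, max_dist
        for i, (mx, my) in enumerate(map_craters):
            if i in used:
                continue
            d = math.hypot(dx - mx, dy - my)
            if d < best_d:
                best, best_d = i, d
        if best is not None:
            used.add(best)
            pairs.append(((dx, dy), map_craters[best]))
    return pairs

def estimate_offset(pairs):
    """Least-squares (mean-residual) translation over matched pairs."""
    n = len(pairs)
    ox = sum(mx - dx for (dx, _), (mx, _) in pairs) / n
    oy = sum(my - dy for (_, dy), (_, my) in pairs) / n
    return ox, oy

# Illustrative data: detections are map craters shifted by roughly
# (-2.0, +1.5) plus small detector noise.
map_craters = [(10.0, 20.0), (35.0, 42.0), (60.0, 15.0)]
detected = [(8.1, 21.4), (33.0, 43.6), (58.0, 16.4)]
pairs = match_craters(detected, map_craters)
offset = estimate_offset(pairs)  # close to (2.0, -1.5)
```

In a flight-representative system this closed-form translation step would typically be replaced by a robust estimator (e.g., RANSAC over a full pose model), but the separation of learned detection from geometric estimation is the structural point the survey emphasizes.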