This survey reviews the landscape of Physical AI, focusing on the role of foundation models, simulation environments, and integrated development platforms. It analyzes the NVIDIA ecosystem (Omniverse, Isaac Sim, etc.) as a case study, highlighting its self-reinforcing development cycle for robot learning and perception. The paper identifies a trend towards hybrid architectures that combine deliberative planning with reactive control and rely on OpenUSD for interoperability, while also discussing limitations such as the sim-to-real gap.
Physical AI is converging towards hybrid architectures that blend learned models and classical techniques, but faces challenges like the sim-to-real gap and proprietary lock-in.
Physical Artificial Intelligence (AI), or embodied AI, represents a paradigm shift from purely virtual intelligence to systems with a physical presence capable of perceiving, reasoning, and acting upon the world. This survey provides a comprehensive review of the foundational concepts of Physical AI, highlighting the critical role of foundation models and integrated development platforms. As a case study, we conduct a deep vertical analysis of the NVIDIA ecosystem, examining how simulation (Omniverse, Isaac Sim), synthetic data generation (Cosmos), robot learning (Isaac Lab), perception (Isaac ROS), and edge computing (Jetson) create a self-reinforcing development cycle. We further analyze state-of-the-art architectures, comparing language-grounded planners with generalist Vision-Language-Action policies. Our analysis reveals a clear convergence towards hybrid architectures that combine deliberative reasoning with reactive control, relying heavily on standards such as OpenUSD for interoperability. The survey concludes by discussing key limitations, such as the sim-to-real gap and proprietary lock-in, and emphasizes the convergence of learned models and classical techniques in shaping the future of autonomous systems.
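The hybrid architecture the survey describes pairs a slow deliberative layer (global planning) with a fast reactive layer (per-tick control). A minimal conceptual sketch, with straight-line waypoint interpolation standing in for a real motion planner and a step-capped pursuit controller standing in for a real reactive policy (all names here are illustrative, not from any cited system):

```python
# Hypothetical sketch of a hybrid deliberative/reactive control loop.
# The deliberative layer runs rarely and produces coarse waypoints;
# the reactive layer runs every tick and tracks the current waypoint.
from dataclasses import dataclass
import math

@dataclass
class State:
    x: float
    y: float

def plan(start: State, goal: State, n_waypoints: int = 4) -> list[State]:
    """Deliberative layer: a slow, global routine (straight-line
    interpolation here, in place of a full motion planner)."""
    return [State(start.x + (goal.x - start.x) * i / n_waypoints,
                  start.y + (goal.y - start.y) * i / n_waypoints)
            for i in range(1, n_waypoints + 1)]

def reactive_step(s: State, target: State, max_step: float = 0.5) -> State:
    """Reactive layer: a fast local controller that moves toward the
    current waypoint, capped at max_step of motion per tick."""
    dx, dy = target.x - s.x, target.y - s.y
    dist = math.hypot(dx, dy)
    if dist <= max_step:
        return State(target.x, target.y)  # snap when within one step
    return State(s.x + dx / dist * max_step, s.y + dy / dist * max_step)

def run(start: State, goal: State, max_ticks: int = 100) -> State:
    s = start
    for wp in plan(start, goal):          # deliberative: coarse plan
        for _ in range(max_ticks):        # reactive: per-tick control
            if (s.x, s.y) == (wp.x, wp.y):
                break
            s = reactive_step(s, wp)
    return s

final = run(State(0.0, 0.0), State(3.0, 4.0))
print(final.x, final.y)  # reaches the goal at (3.0, 4.0)
```

The separation of concerns is the point: the planner can be replaced by a language-grounded planner and the controller by a learned reactive policy without changing the loop structure.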