Latent Particle World Model (LPWM) is a self-supervised, object-centric world model that learns scene decompositions from video by discovering keypoints, bounding boxes, and object masks without explicit supervision. A novel latent action module models stochastic particle dynamics and enables flexible conditioning on actions, language, and image goals. LPWM achieves state-of-the-art performance on both real-world and synthetic datasets and extends to decision-making tasks such as goal-conditioned imitation learning.
Unsupervised discovery of object keypoints and dynamics directly from video unlocks state-of-the-art world models applicable to decision-making.
We introduce the Latent Particle World Model (LPWM), a self-supervised, object-centric world model that scales to real-world multi-object datasets and is applicable to decision-making. LPWM autonomously discovers keypoints, bounding boxes, and object masks directly from video, enabling it to learn rich scene decompositions without supervision. The architecture is trained end-to-end purely from videos and supports flexible conditioning on actions, language, and image goals. LPWM models stochastic particle dynamics via a novel latent action module and achieves state-of-the-art results on diverse real-world and synthetic datasets. Beyond stochastic video modeling, LPWM is readily applicable to decision-making, including goal-conditioned imitation learning, as we demonstrate in the paper. Code, data, pre-trained models, and video rollouts are available at https://taldatech.github.io/lpwm-web
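To make the described pipeline concrete, here is a minimal sketch of the three pieces the abstract names: an encoder that maps a frame to per-particle keypoints and features, a stochastic latent action module, and a dynamics model that rolls particle states forward. All class names, dimensions, and design choices (the CNN backbone, Gaussian reparameterization, residual state update) are illustrative assumptions in PyTorch, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class ParticleEncoder(nn.Module):
    """Encodes a frame into K particle latents (assumed structure:
    a 2-D keypoint plus a feature vector per particle)."""
    def __init__(self, num_particles=8, feat_dim=16):
        super().__init__()
        self.num_particles, self.feat_dim = num_particles, feat_dim
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Per particle: 2 keypoint coordinates + feat_dim features.
        self.head = nn.Linear(64, num_particles * (2 + feat_dim))

    def forward(self, frame):
        out = self.head(self.backbone(frame))
        out = out.view(-1, self.num_particles, 2 + self.feat_dim)
        keypoints = torch.tanh(out[..., :2])  # normalized image coords
        return keypoints, out[..., 2:]

class LatentActionModule(nn.Module):
    """Infers a stochastic latent 'action' z ~ N(mu, sigma) from two
    consecutive particle states; at rollout time z could be sampled from
    a prior or driven by external conditioning (action/language/goal)."""
    def __init__(self, particle_dim, z_dim=8):
        super().__init__()
        self.enc = nn.Linear(2 * particle_dim, 2 * z_dim)

    def forward(self, state_t, state_tp1):
        mu, log_sigma = self.enc(torch.cat([state_t, state_tp1], dim=-1)).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * log_sigma.exp()  # reparameterization
        return z, mu, log_sigma

class ParticleDynamics(nn.Module):
    """Predicts the next particle state from the current state and z."""
    def __init__(self, particle_dim, z_dim=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(particle_dim + z_dim, 64), nn.ReLU(),
            nn.Linear(64, particle_dim),
        )

    def forward(self, state_t, z):
        # Residual update: an assumption, common in latent dynamics models.
        return state_t + self.net(torch.cat([state_t, z], dim=-1))

if __name__ == "__main__":
    enc = ParticleEncoder()
    state_dim = enc.num_particles * (2 + enc.feat_dim)  # 8 * (2 + 16) = 144
    lam = LatentActionModule(particle_dim=state_dim)
    dyn = ParticleDynamics(particle_dim=state_dim)

    f_t, f_tp1 = torch.randn(2, 4, 3, 64, 64)  # two batches of 4 RGB frames

    def flat_state(frame):
        kp, ft = enc(frame)
        return torch.cat([kp, ft], dim=-1).flatten(1)  # (B, K * (2 + F))

    s_t, s_tp1 = flat_state(f_t), flat_state(f_tp1)
    z, mu, log_sigma = lam(s_t, s_tp1)  # posterior latent "action"
    s_pred = dyn(s_t, z)                # predicted next particle state
    print(s_pred.shape)                 # torch.Size([4, 144])
```

In a VAE-style training setup one would reconstruct frames from the predicted particle states and regularize the latent with a KL term against a prior; the sketch above omits the decoder and losses for brevity.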