Robots can now "see" hidden objects and understand articulation by learning from human egocentric video, even if they can't physically explore those areas themselves.
Freeing robots from pre-assigned tasks slashes completion times in multi-agent settings, with a new algorithm improving performance on almost 90% of tested scenarios.
Heuristic maritime routes lead to extreme fuel waste in nearly 5% of voyages, but this RL approach cuts that risk by almost 10x.
Hyper-redundant robots get a 75% accuracy boost thanks to a neural network that adaptively blends learned behavior with kinematic priors.
Zero-shot robotic manipulation is now within reach: TiPToP matches a model fine-tuned on 350 hours of robot data without using *any* robot data itself.
Forget painstakingly collecting robot data in the real world – this interactive world simulator lets you train policies that perform just as well, but entirely in simulation.
Forget hand-engineered features: this approach learns symbolic representations for robotic planning directly from pixels using VLMs, enabling impressive zero-shot generalization to new environments and goals.
Forget simulated manipulation—ManipulationNet offers a global infrastructure for benchmarking robots in the real world, complete with standardized hardware and software, to finally measure progress toward general manipulation.
NeuroSkill™ offers real-time, edge-based human-AI interaction by directly modeling the human's state of mind from BCI data, enabling more nuanced and empathetic agentic responses.
Learning robotic reward functions from a million trajectories reveals that comparing entire trajectories, not just individual frames, unlocks better generalization and learning from suboptimal data.
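As a loose illustration of the trajectory-level comparison this teaser alludes to, here is a generic Bradley-Terry preference sketch over whole-trajectory scores, with made-up numbers; it is not the paper's actual model, and the function and scores are hypothetical:

```python
import math

def trajectory_preference(scores_a, scores_b):
    """Bradley-Terry preference: compare the summed per-step scores of two
    whole trajectories, rather than ranking individual frames."""
    r_a, r_b = sum(scores_a), sum(scores_b)
    return 1.0 / (1.0 + math.exp(r_b - r_a))  # P(trajectory A preferred)

# A trajectory with mediocre frames but a strong overall outcome can still
# win the comparison -- something frame-by-frame scoring would miss.
p = trajectory_preference([0.2, 0.2, 0.9], [0.5, 0.5, 0.1])
```

Comparing at the trajectory level is what lets such a model rank a suboptimal-but-complete demonstration above a locally pretty one, which is the generalization benefit the teaser highlights.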
Forget computationally expensive fluid dynamics: this work shows that a simple, stateless model, carefully calibrated to real-world data, can create surprisingly effective digital twins for soft underwater robots.
Agentic AI can automate complex optical systems control with near-perfect success rates, leaving code-generation approaches in the dust.
Decomposing Bellman values into a graph of simpler objectives lets agents master complex, high-dimensional tasks with less tuning and better safety.
Achieve robust safety-critical control with a single hyperparameter by using a novel Taylor-Lagrange formulation that directly incorporates control actions into the current time step.
Stop repeating avoidable mistakes in public robot deployments: here's a community-vetted checklist to guide your next study.
Control hybrid rigid-soft robots with the ease of AR teleoperation, thanks to a new pipeline that accurately models the soft robot's real-world behavior in simulation.
Forget hand-engineering initial conditions for robust RL: this method *learns* which conditions are feasible while simultaneously training a safe policy.
LLMs can now generate complex, physically plausible 3D scenes for robotics simulation by iteratively proposing assets and refining arrangements based on physics engine feedback.
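The propose-then-refine loop described in this teaser can be caricatured in a few lines; the sketch below uses a toy 1-D overlap check as a stand-in for physics-engine feedback and deterministic push-apart moves as a stand-in for LLM refinement. Every name here is hypothetical and none of it comes from the paper:

```python
def collides(a, b):
    """Axis-aligned overlap test -- a toy stand-in for simulator feedback.
    Each asset is (center_x, width)."""
    (ax, aw), (bx, bw) = a, b
    return abs(ax - bx) < (aw + bw) / 2

def refine_scene(proposals, max_iters=100):
    """Iteratively fix proposed asset placements until no pair overlaps,
    mimicking a propose-then-refine loop driven by physics checks."""
    scene = list(proposals)
    for _ in range(max_iters):
        bad = next(((i, j) for i in range(len(scene))
                    for j in range(i + 1, len(scene))
                    if collides(scene[i], scene[j])), None)
        if bad is None:
            return scene  # "physically plausible": no overlaps remain
        i, j = bad
        (xi, wi), (xj, wj) = scene[i], scene[j]
        # Push asset i away from j just far enough to clear the overlap.
        direction = 1.0 if xi >= xj else -1.0
        scene[i] = (xj + direction * ((wi + wj) / 2 + 0.01), wi)
    return scene

# Two of the three proposed assets start out interpenetrating.
layout = refine_scene([(0.0, 1.0), (0.1, 1.0), (3.0, 1.0)])
```

In the actual pipeline the proposer and refiner are an LLM and the checker is a full physics engine, but the control flow is the same shape: propose, simulate, repair, repeat.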
Quadrupedal robots can now nimbly navigate stairs and rough terrain thanks to a new multimodal RL approach that doesn't require feeling around with their front feet.
A novel system enables robotic hands to achieve perfect motion recognition in games by fusing CNN-based vision with adaptable reinforcement learning.