A new world model combines vision with high-frequency tactile feedback to predict and react to contact dynamics, enabling robots to manipulate objects with greater dexterity and adaptability.
A principled framework for General World Models reveals the limitations of current systems and the architectural requirements for future progress.
Open-sourcing SAIL-VL2 gives the multimodal community a new SOTA vision-language model under 4B parameters, driven by innovations in data curation, progressive training, and sparse MoE architectures.