Kickstart MoE training by initializing experts with semantically meaningful subspaces, leading to faster expert specialization and better performance than standard upcycling techniques.
VLA models struggle with physical reasoning, but Pri4R's simple trick of predicting 3D point tracks as an auxiliary training objective boosts performance by up to 40% on manipulation tasks, without adding any inference overhead.