Harvard University
Stop wasting compute: this RL-trained orchestration policy adaptively decides when an embodied agent should invoke LLM reasoning, cutting latency and improving task success compared to fixed invocation strategies.
Moxin 7B and its variants (VLM, VLA, Chinese) offer a new suite of fully transparent, open-source multimodal models, pushing beyond simple weight sharing to enable deeper customization and collaborative research.