University of California, Merced
Vision-language-action models (VLAs) can adapt to new environments at test time without any fine-tuning, yielding significant performance gains on robotic manipulation tasks and Atari games.
Squeeze up to 3.2x more performance from your long-context LLMs by intelligently splitting attention computation between CPU and GPU.