University of California, Los Angeles
Forget hand-designed agent communication topologies: Agent Q-Mix learns decentralized communication strategies that boost accuracy and token efficiency in LLM multi-agent systems.
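The summary doesn't describe Agent Q-Mix's architecture, but the name suggests it builds on the QMIX value-factorization idea from multi-agent RL. As a purely illustrative point of reference, here is a minimal sketch of the classic QMIX monotonic mixing network (Rashid et al., 2018): per-agent Q-values are combined into a joint Q_tot by weights that a hypernetwork generates from the global state, with an absolute value keeping the weights non-negative so Q_tot stays monotonic in each agent's Q-value. How the paper adapts this to LLM agent communication is not shown here.

    import torch
    import torch.nn as nn

    class QMixer(nn.Module):
        """Monotonic mixing network in the style of QMIX (Rashid et al., 2018)."""

        def __init__(self, n_agents: int, state_dim: int, embed_dim: int = 32):
            super().__init__()
            self.n_agents = n_agents
            self.embed_dim = embed_dim
            # Hypernetworks: map the global state to mixing weights/biases.
            self.hyper_w1 = nn.Linear(state_dim, n_agents * embed_dim)
            self.hyper_b1 = nn.Linear(state_dim, embed_dim)
            self.hyper_w2 = nn.Linear(state_dim, embed_dim)
            self.hyper_b2 = nn.Sequential(
                nn.Linear(state_dim, embed_dim), nn.ReLU(), nn.Linear(embed_dim, 1)
            )

        def forward(self, agent_qs: torch.Tensor, state: torch.Tensor) -> torch.Tensor:
            # agent_qs: (batch, n_agents), state: (batch, state_dim)
            batch = agent_qs.size(0)
            # abs() keeps mixing weights non-negative -> monotonic Q_tot.
            w1 = torch.abs(self.hyper_w1(state)).view(batch, self.n_agents, self.embed_dim)
            b1 = self.hyper_b1(state).view(batch, 1, self.embed_dim)
            hidden = torch.relu(torch.bmm(agent_qs.unsqueeze(1), w1) + b1)
            w2 = torch.abs(self.hyper_w2(state)).view(batch, self.embed_dim, 1)
            b2 = self.hyper_b2(state).view(batch, 1, 1)
            return (torch.bmm(hidden, w2) + b2).view(batch, 1)  # Q_tot

    mixer = QMixer(n_agents=4, state_dim=64)
    q_tot = mixer(torch.randn(2, 4), torch.randn(2, 64))  # shape (2, 1)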
Training Gemini-scale models just got a whole lot faster: veScale-FSDP boosts throughput by up to 66% and cuts memory use by 30% compared to existing FSDP implementations.
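veScale-FSDP itself isn't public API sketched here; as a grounded point of comparison, the snippet below is stock PyTorch FSDP with full sharding, the kind of baseline the reported throughput and memory gains are measured against. FULL_SHARD splits parameters, gradients, and optimizer state across all ranks; the model and hyperparameters are toy placeholders.

    import torch
    import torch.nn as nn
    import torch.distributed as dist
    from torch.distributed.fsdp import FullyShardedDataParallel as FSDP, ShardingStrategy

    def main():
        # Launch with torchrun, e.g.: torchrun --nproc_per_node=8 train_fsdp.py
        dist.init_process_group("nccl")
        torch.cuda.set_device(dist.get_rank() % torch.cuda.device_count())

        # Toy stand-in for a transformer block stack.
        model = nn.Sequential(
            nn.Linear(4096, 4096), nn.GELU(), nn.Linear(4096, 4096)
        ).cuda()
        # Shard parameters, gradients, and optimizer state across ranks.
        model = FSDP(
            model,
            sharding_strategy=ShardingStrategy.FULL_SHARD,
            use_orig_params=True,
        )
        optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

        x = torch.randn(8, 4096, device="cuda")
        loss = model(x).pow(2).mean()  # dummy objective for the sketch
        loss.backward()
        optimizer.step()
        dist.destroy_process_group()

    if __name__ == "__main__":
        main()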
Forget expensive MoE training from scratch: ExpertWeaver unlocks inherent MoE structure within dense LLMs using GLU activation patterns, offering a training-free conversion.
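The conversion recipe isn't spelled out in this summary; as one hypothetical reading of "GLU activation patterns", the sketch below clusters a SwiGLU layer's intermediate neurons into expert groups by how their gates fire on calibration data, so co-activating neurons end up in the same expert. All names, the SwiGLU stand-in, and the clustering choice (plain k-means) are assumptions, not the paper's method.

    import torch
    import torch.nn as nn

    class SwiGLU(nn.Module):
        """Minimal GLU-style feed-forward block (illustrative stand-in)."""

        def __init__(self, d_model: int = 256, d_ff: int = 1024):
            super().__init__()
            self.gate = nn.Linear(d_model, d_ff, bias=False)
            self.up = nn.Linear(d_model, d_ff, bias=False)
            self.down = nn.Linear(d_ff, d_model, bias=False)

        def forward(self, x):
            return self.down(torch.nn.functional.silu(self.gate(x)) * self.up(x))

    @torch.no_grad()
    def cluster_neurons(ffn: SwiGLU, calib: torch.Tensor,
                        n_experts: int = 4, iters: int = 20) -> torch.Tensor:
        # Activation profile per intermediate neuron: how strongly its
        # gate fires on each calibration token -> (d_ff, n_tokens).
        acts = torch.nn.functional.silu(ffn.gate(calib)).abs().T
        acts = acts / (acts.norm(dim=1, keepdim=True) + 1e-8)

        # Simple k-means over neuron profiles: each cluster becomes one
        # expert's slice of the dense FFN, with no retraining involved.
        centroids = acts[torch.randperm(acts.size(0))[:n_experts]].clone()
        for _ in range(iters):
            assign = torch.cdist(acts, centroids).argmin(dim=1)
            for k in range(n_experts):
                mask = assign == k
                if mask.any():
                    centroids[k] = acts[mask].mean(dim=0)
        return assign  # expert id for each of the d_ff intermediate neurons

    ffn = SwiGLU()
    calib = torch.randn(512, 256)  # 512 calibration token embeddings
    print(cluster_neurons(ffn, calib).bincount(minlength=4))  # expert sizes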