Forget storing full task-specific models: Auto-FlexSwitch compresses task knowledge into tiny, dynamically assembled task vectors, slashing storage costs without sacrificing accuracy.
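For context, the general "task vector" idea behind this framing can be sketched as storing only the weight delta between a fine-tuned model and its base. This is a minimal illustration of task arithmetic in general, not Auto-FlexSwitch's specific compression or assembly mechanism (which the blurb does not detail):

```python
import numpy as np

# Hedged sketch: a task vector is the difference between fine-tuned and
# base weights. Storing just this delta per task (optionally compressed)
# is far cheaper than keeping a full model copy per task.

rng = np.random.default_rng(0)
base = rng.normal(size=1000)                     # stand-in for base model weights
finetuned = base + 0.01 * rng.normal(size=1000)  # weights after task fine-tuning

task_vector = finetuned - base                   # the only per-task artifact stored

# Dynamically assemble the task-specific model on demand:
reassembled = base + task_vector
assert np.allclose(reassembled, finetuned)
```

The storage win comes from the delta being small, sparse, or low-rank in practice, so it compresses well even when the base model is large.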
Unleashing multiple independently optimized agents within a shared tree search dramatically boosts code generation performance, surpassing single-agent limitations.
Stop reinventing the wheel for diffusion LLM alignment: DARE provides a unified framework for SFT, preference optimization, and RL, accelerating research and enabling fair comparisons across dLLMs.