LLMs can now synthesize high-performance kernels for niche hardware like NPUs, even with limited data, thanks to a self-evolving agent that bootstraps and refines code via value-driven reinforcement learning.
MLLMs can slash 68% of their FLOPs with minimal accuracy loss by pruning visual tokens at the "Entropy Collapse Layer", where information content plummets, using a new matrix-entropy-guided method.
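A minimal sketch of how a matrix-entropy criterion could locate such a layer: compute the von Neumann entropy of the normalized Gram matrix of visual-token representations at each layer and pick the layer with the steepest drop. The function names and the "steepest single-layer drop" criterion are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

def matrix_entropy(X: np.ndarray) -> float:
    """Matrix entropy of token representations X (tokens x dim):
    von Neumann entropy of the trace-normalized Gram matrix."""
    K = X @ X.T                      # Gram matrix over tokens (PSD)
    K = K / np.trace(K)              # normalize to unit trace
    eigvals = np.linalg.eigvalsh(K)  # symmetric -> real eigenvalues
    eigvals = eigvals[eigvals > 1e-12]
    return float(-(eigvals * np.log(eigvals)).sum())

def entropy_collapse_layer(layer_states):
    """Hypothetical criterion: index of the layer with the largest
    entropy drop relative to the previous layer."""
    ents = [matrix_entropy(h) for h in layer_states]
    drops = [ents[i - 1] - ents[i] for i in range(1, len(ents))]
    return 1 + int(np.argmax(drops))

# Synthetic demo: three diverse (high-entropy) layers, then one layer
# whose tokens collapse to near rank-1 (low entropy).
rng = np.random.default_rng(0)
states = [rng.standard_normal((32, 64)) for _ in range(3)]
collapsed = np.outer(np.ones(32), rng.standard_normal(64))
states.append(collapsed + 0.01 * rng.standard_normal((32, 64)))
print(entropy_collapse_layer(states))  # layer where entropy plummets
```

Tokens after the detected layer would then be candidates for pruning, since they carry little additional information by this measure.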
Function calling gets a serious upgrade: a new reward model and an inference-scaling technique boost performance by focusing on the *process* of tool use, not just the outcome.