Ditch quadratic attention in your ViTs without sacrificing performance: ViT-AdaLA distills knowledge from pre-trained vision foundation models (VFMs) into linear-attention architectures, achieving state-of-the-art results on classification and segmentation.
LLMs can't keep up: even state-of-the-art models struggle to adapt to dynamically changing facts in continual knowledge streams, forgetting recent updates and getting distracted by stale ones.
Forget direct prompt editing: this agentic planning framework, powered by offline RL and synthetic data, masters complex image styling by breaking the task down into interpretable tool sequences.