Robots can now plan 9x faster and achieve significantly higher success rates by decoupling action prediction from video generation in World-Action Models.
Ditch quadratic attention in your ViTs without sacrificing performance: ViT-AdaLA distills knowledge from pre-trained VFMs into linear attention architectures, achieving state-of-the-art results on classification and segmentation.
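The blurb doesn't spell out ViT-AdaLA's exact formulation, but the quadratic-vs-linear distinction it trades on can be sketched in a few lines. Below is a minimal NumPy illustration (not ViT-AdaLA's actual architecture): standard softmax attention materializes an N×N score matrix, while kernelized linear attention reorders the matrix products so cost grows linearly in sequence length N. The feature map `phi` here (ReLU plus a small epsilon) is an illustrative assumption, not the paper's choice.

```python
import numpy as np

def softmax_attention(Q, K, V):
    # Standard attention: builds an N x N score matrix -> O(N^2) in sequence length.
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

def linear_attention(Q, K, V, phi=lambda x: np.maximum(x, 0.0) + 1e-6):
    # Kernelized attention: associativity lets us form phi(K)^T V first,
    # a d x d matrix, so total cost is O(N * d^2) -- linear in N.
    # phi is a hypothetical feature map chosen for illustration.
    Qp, Kp = phi(Q), phi(K)
    kv = Kp.T @ V                      # (d, d) summary of keys and values
    z = Kp.sum(axis=0)                 # (d,) normalizer
    return (Qp @ kv) / (Qp @ z)[:, None]

N, d = 8, 4
rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(3, N, d))
out = linear_attention(Q, K, V)
assert out.shape == (N, d)
```

The two functions produce different outputs (linear attention is an approximation family, not an exact rewrite of softmax attention), which is why distillation from a pre-trained vision foundation model, as the blurb describes, is used to close the quality gap.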