Video Transformers can achieve near-full attention accuracy with significantly less compute by focusing only on informative vertical vectors.
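The idea above — attending only to a small set of informative key/value vectors instead of all of them — can be sketched in NumPy. This is a minimal illustration, not the paper's actual method: the "informativeness" score here (key L2 norm) and the top-k size are hypothetical stand-ins for whatever selection rule the model uses.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
T, d, k = 64, 16, 8                      # tokens, dim, kept vectors
q = rng.standard_normal((T, d))
K = rng.standard_normal((T, d))
v = rng.standard_normal((T, d))

# Full attention: every query attends to all T keys -> O(T^2) work.
full = softmax(q @ K.T / np.sqrt(d)) @ v

# Sparse variant: keep only the k keys with the largest L2 norm
# (a placeholder informativeness score) and attend to just those,
# cutting the score matrix from T x T to T x k.
idx = np.argsort(np.linalg.norm(K, axis=1))[-k:]
sparse = softmax(q @ K[idx].T / np.sqrt(d)) @ v[idx]

print(full.shape, sparse.shape)  # both outputs are (T, d)
```

If the kept vectors dominate the attention mass, `sparse` approximates `full` at roughly `k/T` of the score-computation cost.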