LLMs can maintain performance while skipping global attention for 80% of tokens, slashing compute costs and memory footprint in long-context scenarios.
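The idea of restricting most tokens to local attention while keeping global attention for only a small subset can be sketched with a boolean attention mask. This is a minimal illustration, not the paper's actual method: the function name, the sliding-window size, and the choice of which token positions stay "global" are all assumptions made for the example.

```python
import numpy as np

def hybrid_attention_mask(seq_len, window, global_idx):
    """Boolean mask: True where query position i may attend to key position j.

    Most tokens attend only within a causal sliding window; tokens
    listed in `global_idx` keep full causal attention, and every
    token may attend to them.  (Hypothetical helper for illustration.)
    """
    q = np.arange(seq_len)[:, None]
    k = np.arange(seq_len)[None, :]
    causal = k <= q                            # no attending to the future
    local = causal & (q - k < window)          # sliding-window attention
    is_global = np.zeros(seq_len, dtype=bool)
    is_global[list(global_idx)] = True
    # Global rows attend everywhere (causally); global columns are
    # visible to all later tokens.
    return local | (causal & is_global[:, None]) | (causal & is_global[None, :])

mask = hybrid_attention_mask(seq_len=16, window=4, global_idx=[0, 8])
```

With a fixed window, the per-token attention cost stays constant as the sequence grows, which is where the compute and memory savings in long-context settings come from; only the few global tokens pay the full quadratic cost.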