Linear attention gets a serious memory upgrade: RAM-Net uses sparse addressing to scale its state capacity exponentially without adding parameters, surpassing the prior state of the art on long-range retrieval.
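The blurb doesn't spell out the mechanism, so here is a minimal sketch of one plausible reading of "sparse addressing" in a linear-attention recurrence: each step routes its write and read through the top-k slots of a large memory table, with slot scores coming from a fixed (non-learned) random projection of the usual key and query, so the addressable state grows while the learned parameter count stays flat. This is an illustrative assumption, not the RAM-Net reference implementation; the function name, slot table, and top-k routing are all hypothetical.

```python
import numpy as np

def sparse_linear_attention(Q, K, V, num_slots=4096, topk=8, seed=0):
    """Causal linear-attention-style recurrence with sparse slot addressing.

    Q, K: (T, d_k); V: (T, d_v). The recurrent state is a (num_slots, d_v)
    table; each step touches only `topk` slots chosen via a fixed random
    projection, so memory capacity scales with num_slots while no new
    learned parameters are introduced.
    """
    T, d_k = K.shape
    d_v = V.shape[1]
    rng = np.random.default_rng(seed)
    # Fixed, non-learned address projection: keys/queries -> slot scores.
    addr_proj = rng.standard_normal((d_k, num_slots)) / np.sqrt(d_k)

    S = np.zeros((num_slots, d_v))   # recurrent slot memory
    out = np.zeros((T, d_v))
    for t in range(T):
        # Write: route v_t into the top-k slots addressed by the key.
        k_scores = K[t] @ addr_proj
        k_idx = np.argpartition(k_scores, -topk)[-topk:]
        w = np.exp(k_scores[k_idx] - k_scores[k_idx].max())
        w /= w.sum()
        S[k_idx] += w[:, None] * V[t]

        # Read: the query addresses its own top-k slots and mixes them.
        q_scores = Q[t] @ addr_proj
        q_idx = np.argpartition(q_scores, -topk)[-topk:]
        r = np.exp(q_scores[q_idx] - q_scores[q_idx].max())
        r /= r.sum()
        out[t] = r @ S[q_idx]
    return out
```

Under this reading, the "exponential" claim would refer to the number of distinct k-slot address patterns (on the order of num_slots choose topk), which grows combinatorially with the slot count even though the per-step compute touches only k rows of the table.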