LLMs can maintain performance while processing longer contexts, thanks to a new compression method that adapts its compression ratio to the information density of the input: dense passages are preserved nearly intact, while redundant ones are pruned aggressively.
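The source does not specify the method's internals, but the core idea can be sketched as follows. This is a hypothetical toy illustration, not the actual technique: it measures each chunk's "information density" as normalized token entropy (an assumption), then keeps a fraction of tokens proportional to that density, preferring rarer tokens within the chunk.

```python
import math
from collections import Counter

def chunk_density(tokens):
    """Normalized Shannon entropy of the chunk's token distribution in [0, 1].

    A chunk of identical tokens scores 0; a chunk of all-distinct tokens scores 1.
    """
    counts = Counter(tokens)
    total = len(tokens)
    entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())
    max_entropy = math.log2(total) if total > 1 else 1.0
    return entropy / max_entropy if max_entropy else 0.0

def adaptive_compress(chunks, min_keep=0.25, max_keep=1.0):
    """Keep a density-dependent fraction of each chunk.

    `min_keep` and `max_keep` are hypothetical knobs: the kept fraction
    interpolates between them based on the chunk's information density,
    so repetitive chunks are pruned hard and dense chunks survive nearly whole.
    """
    compressed = []
    for tokens in chunks:
        keep_frac = min_keep + (max_keep - min_keep) * chunk_density(tokens)
        n_keep = max(1, round(keep_frac * len(tokens)))
        # Toy salience score: within a chunk, rarer tokens are kept first.
        counts = Counter(tokens)
        ranked = sorted(range(len(tokens)), key=lambda i: counts[tokens[i]])
        kept_indices = sorted(ranked[:n_keep])  # restore original order
        compressed.append([tokens[i] for i in kept_indices])
    return compressed
```

For example, a chunk of eight repeated tokens is cut to the floor fraction, while a chunk of eight distinct tokens is kept in full, so the overall compression ratio varies with the input rather than being fixed.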