Extend your LLM's context window by 16x without retraining from scratch: just "self-inject" a compressed representation from a smaller model into a larger one.
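The blurb gives no implementation details, so here is a minimal sketch of what "self-injection" of a compressed representation could look like, under stated assumptions: a window-mean-pool stands in for whatever learned compressor the method actually uses, and `W_proj` is a hypothetical learned projection from the smaller model's hidden size into the larger model's embedding space. Every dimension and function name below is an illustrative assumption, not taken from the source.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (assumptions, not from the source):
SMALL_DIM = 256      # hidden size of the smaller "compressor" model
LARGE_DIM = 1024     # hidden size of the larger target model
COMPRESSION = 16     # the 16x factor mentioned in the blurb

def compress_context(token_hiddens: np.ndarray, ratio: int = COMPRESSION) -> np.ndarray:
    """Mean-pool each window of `ratio` small-model hidden states into one
    compressed vector (a crude stand-in for a learned compressor)."""
    n, d = token_hiddens.shape
    n_trim = (n // ratio) * ratio           # drop the ragged tail, if any
    return token_hiddens[:n_trim].reshape(-1, ratio, d).mean(axis=1)

# Hypothetical learned projection from small-model space to large-model space.
W_proj = rng.standard_normal((SMALL_DIM, LARGE_DIM)) / np.sqrt(SMALL_DIM)

def self_inject(old_hiddens: np.ndarray, recent_embeds: np.ndarray) -> np.ndarray:
    """Compress old context with the small model, project it into the large
    model's space, and prepend the resulting "soft tokens" to the recent
    (uncompressed) token embeddings."""
    soft_tokens = compress_context(old_hiddens) @ W_proj
    return np.concatenate([soft_tokens, recent_embeds], axis=0)

# 4096 old tokens compress 16x into 256 soft tokens, prepended to 128 recent ones,
# so the large model sees 384 positions instead of 4224.
old = rng.standard_normal((4096, SMALL_DIM))
recent = rng.standard_normal((128, LARGE_DIM))
seq = self_inject(old, recent)
print(seq.shape)  # (384, 1024)
```

The point of the sketch is only the shape arithmetic: compressing the old context by the stated 16x factor is what lets the larger model cover 16x more raw tokens within the same position budget.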