LLMs can be made far more efficient at code editing by having them focus on generating concise "edit sketches," while smaller models handle the less demanding task of applying those sketches to the original code.
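The sketch-then-apply split described above can be illustrated with a minimal example. The search/replace sketch format, the `EditSketch` type, and `apply_sketches` are illustrative assumptions for this sketch, not the paper's actual interface:

```python
from dataclasses import dataclass

@dataclass
class EditSketch:
    # Concise edit proposed by the large model: change `search` to `replace`.
    search: str
    replace: str

def apply_sketches(source: str, sketches: list[EditSketch]) -> str:
    """Cheap 'apply' step a smaller model (or plain code) can perform:
    substitute each sketch into the original source."""
    for s in sketches:
        if s.search not in source:
            raise ValueError(f"sketch not found in source: {s.search!r}")
        source = source.replace(s.search, s.replace, 1)
    return source

# The large model only emits the one-line sketch; the applier does the rest.
original = "def add(a, b):\n    return a - b\n"
sketches = [EditSketch(search="return a - b", replace="return a + b")]
patched = apply_sketches(original, sketches)
```

The point of the split is token economy: the expensive model generates only the short sketch, never the full rewritten file.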
Forget simply truncating context: compressing code repositories into latent vectors can actually *improve* LLM code generation by filtering out noise, boosting BLEU by up to 28.3%.
Task-oriented dialogue agents can now learn to balance user satisfaction and operational costs, thanks to a new RL framework that optimizes for both.
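One generic way to optimize for both objectives is a scalarized reward that trades satisfaction against cost; this is a common multi-objective RL pattern and an assumption here, not necessarily the framework's exact formulation:

```python
def reward(satisfaction: float, cost: float, lam: float = 0.5) -> float:
    """Scalarized reward: higher user satisfaction is better, higher
    operational cost is penalized with tradeoff weight `lam` (assumed)."""
    return satisfaction - lam * cost

# A turn that satisfies the user cheaply scores higher than one that
# reaches the same satisfaction via an expensive tool call.
cheap = reward(satisfaction=1.0, cost=0.2)
pricey = reward(satisfaction=1.0, cost=1.0)
```

Tuning `lam` moves the agent along the satisfaction-cost frontier: larger values make it stingier with expensive actions.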