MLLMs don't just forget language; they also suffer from perceptual drift in cross-modal spaces. MAny offers a training-free merging strategy that tackles both.
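For readers new to training-free merging in general (this is a generic sketch, not MAny's actual algorithm): the simplest form is linear interpolation of two compatible checkpoints' weights. The sketch below assumes both models share an architecture; `merge_state_dicts` and `alpha` are illustrative names, not the paper's API.

```python
import torch

def merge_state_dicts(sd_a, sd_b, alpha=0.5):
    """Interpolate two compatible models' parameters, training-free.

    alpha=0.0 keeps model A unchanged; alpha=1.0 keeps model B.
    Assumes identical architectures (matching keys and shapes).
    """
    merged = {}
    for name, param_a in sd_a.items():
        param_b = sd_b[name]
        merged[name] = (1.0 - alpha) * param_a + alpha * param_b
    return merged

# Usage sketch:
# merged = merge_state_dicts(model_a.state_dict(), model_b.state_dict(), alpha=0.3)
# model_a.load_state_dict(merged)
```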
Forget scaling model size: RefineRL shows that incentivizing self-refinement in smaller LLMs lets them punch *way* above their weight, rivaling models 10x larger on competitive programming tasks.
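To make "self-refinement" concrete, here's a minimal inference-time refine loop under stated assumptions: `generate` and `run_tests` are hypothetical stand-ins for a model call and a test harness, and RefineRL itself rewards this behavior during RL training rather than hard-coding the loop like this.

```python
def self_refine(problem, generate, run_tests, max_rounds=3):
    """Draft a solution, then revise it using its own test feedback.

    generate: str -> str (prompt in, candidate code out)
    run_tests: str -> (bool, str) (pass/fail plus feedback text)
    """
    solution = generate(f"Solve:\n{problem}")
    for _ in range(max_rounds):
        ok, feedback = run_tests(solution)
        if ok:
            break  # solution passes; stop refining
        # Feed the failing attempt and its feedback back into the model.
        solution = generate(
            f"Solve:\n{problem}\n\nPrevious attempt:\n{solution}\n"
            f"Test feedback:\n{feedback}\nRevise the solution."
        )
    return solution
```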
Speculative decoding gets a throughput boost of up to 4.32x by using reinforcement learning to dynamically balance drafting and verification.
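For context, the drafting/verification trade-off being balanced here is the core of speculative decoding. Below is a textbook greedy-acceptance sketch, not the paper's RL policy; `draft_model` and `target_model` are assumed callables that map a token list to the next token.

```python
def speculative_decode(prefix, draft_model, target_model, k=4, max_new=64):
    """Draft k tokens cheaply, then keep the longest prefix the target agrees with."""
    tokens = list(prefix)
    while len(tokens) - len(prefix) < max_new:
        # 1) Drafting: the small model proposes k tokens autoregressively.
        draft = []
        for _ in range(k):
            draft.append(draft_model(tokens + draft))
        # 2) Verification: the large model checks each proposal.
        #    (Real implementations batch this into one forward pass.)
        accepted = 0
        for i in range(k):
            if target_model(tokens + draft[:i]) == draft[i]:
                accepted += 1
            else:
                break
        tokens += draft[:accepted]
        # On a rejection (or after full acceptance), take one token
        # from the target model, so progress is always made.
        tokens.append(target_model(tokens))
    return tokens
```

The knob an RL policy could tune is `k`: draft too many tokens and verification wastes work on rejections; draft too few and the large model runs nearly every step.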