Search papers, labs, and topics across Lattice.
RFT's impressive in-domain performance masks surprisingly weak generalization to new environments, highlighting a critical challenge for deploying LLM agents in the real world.
GPT-5's scientific reasoning skills plummet by nearly 50% when tackling multi-step workflows, revealing a critical gap in current LLM agents' ability to orchestrate complex tool use.
Retrofit your VLMs with Multi-Head Latent Attention (MLA) for faster inference and smaller memory footprint, without costly pretraining, using this parameter-efficient conversion framework.
Finally, a fully open-source, reproducible system for long-form song generation is here, complete with licensed data, code, and a Qwen-based model that rivals closed-source systems.