Meituan LongCat Interaction Team
Stop drowning your MLLMs in irrelevant context: FES-RAG shows that carefully selecting multimodal fragments boosts factual accuracy by up to 27% and slashes context length.
LLMs can model user preferences more effectively by disentangling intent into multiple latent factors, leading to improved recommendation accuracy and interpretability.
Open-source MLLMs can now achieve state-of-the-art accuracy on complex tabular reasoning tasks, even outperforming models 18x their size, by explicitly penalizing visual hallucinations and shortcut guessing through process-supervised RL.
Combining LLMs with traditional ML models simulates complex user behavior more faithfully than either approach alone, thanks to policy-guided alignment between the two.