Multimodal agents can now reason, plan, and execute actions more effectively by integrating perception as a core component, not just an auxiliary interface.
LLM agents struggle to maintain performance in multi-day collaborative tasks, dropping significantly after just one environmental update, revealing a critical gap in adaptation to evolving real-world conditions.
Forget reward engineering: this work shows LLMs can self-evolve and outperform larger models by learning to explore and summarize new environments autonomously.
Mimicking how the brain integrates language and context with facial cues unlocks state-of-the-art dynamic emotion recognition.
By reflecting on its own reasoning, ReflectRM achieves a +10.2 point improvement in mitigating positional bias over leading generative reward models, making it a far more stable evaluator.