LLM agents struggle to maintain performance in multi-day collaborative tasks: performance drops sharply after just one environmental update, revealing a critical gap in adaptation to evolving real-world conditions.
VLAA-GUI's framework lets autonomous agents both verify their own success and adaptively recover from failures, reaching human-level performance on GUI tasks.
Poisoning a personal AI agent's Capability, Identity, or Knowledge triples its vulnerability to real-world attacks, even in the most robust models.
Current AI agents struggle to maintain accurate beliefs in evolving information environments, with performance varying significantly by both model capability (a 15.4% spread) and framework design (a 9.2% spread).
Forget hyperparameter tuning: autonomous research reveals that bug fixes and architectural tweaks unlock far greater gains in multimodal agent memory.
LVLMs can be made significantly less prone to hallucinations, without any training, by explicitly grounding them in visual evidence and iteratively self-refining their answers based on verified information.
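A minimal sketch of the verify-then-refine idea above, with no training involved. Everything here is illustrative: `query_vlm` stands in for a real vision-language model call (stubbed with canned answers so the control flow runs), and the claim list mimics decomposing an answer into checkable statements.

```python
# Training-free self-refinement: decompose the answer into claims, ask the
# model to verify each claim against the image, keep only grounded claims.

def query_vlm(image, prompt):
    # Stub for a real LVLM call (assumption): the verifier rejects "a dog".
    canned = {
        "verify: a red car": "yes",
        "verify: a blue bicycle": "yes",
        "verify: a dog": "no",
    }
    return canned.get(prompt, "yes")

def refine(image, max_rounds=3):
    # Toy initial claims; in practice these come from decomposing the
    # model's first-pass answer.
    claims = ["a red car", "a blue bicycle", "a dog"]
    for _ in range(max_rounds):
        # Keep only claims the model can ground in the visual evidence.
        grounded = [c for c in claims
                    if query_vlm(image, f"verify: {c}") == "yes"]
        if grounded == claims:
            break  # fixed point: every remaining claim is verified
        claims = grounded
    return "The image shows " + ", ".join(claims) + "."
```

The loop terminates either when every claim survives verification or after a fixed round budget, so refinement cannot oscillate.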
LLM agents can now learn on the fly and adapt to evolving user needs without disruptive downtime, thanks to a novel meta-learning framework that synthesizes new skills from failure trajectories and optimizes the base policy during inactive periods.
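The two-phase pattern described above can be sketched as follows. All names (`Skill`, `Agent`, `idle_update`) are assumptions for illustration, not the paper's API: failures are logged cheaply online, and skill synthesis happens offline so the user never sees downtime.

```python
# Online phase records failure trajectories; offline phase turns each one
# into a corrective skill in the agent's library.

from dataclasses import dataclass, field

@dataclass
class Skill:
    name: str
    steps: list

@dataclass
class Agent:
    skills: dict = field(default_factory=dict)
    failure_log: list = field(default_factory=list)

    def act(self, task, trajectory, succeeded):
        # Online: no retraining here, just log the failure for later.
        if not succeeded:
            self.failure_log.append((task, trajectory))

    def idle_update(self):
        # Offline (inactive period): synthesize a skill from each failure
        # trajectory and fold it into the library, then clear the log.
        for task, trajectory in self.failure_log:
            self.skills[task] = Skill(name=f"recover_{task}", steps=trajectory)
        self.failure_log.clear()
```

Deferring synthesis to `idle_update` is the key design choice: the serving path stays fast while adaptation happens in the background.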
Skip the expensive supervised fine-tuning: this RL-only method teaches LLMs to use tools by demonstrating them in-context, then gradually removes those demonstrations until the model can use tools zero-shot.
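The scaffold-removal schedule can be sketched as a probability of including a worked tool demo in the prompt that anneals to zero over training. The linear schedule, prompt format, and example tool call are assumptions; the model and reward are out of scope here.

```python
# Demonstration decay for RL tool use: early rollouts see an in-context
# demo; the demo probability anneals linearly to zero, so the converged
# policy must call tools without any crutch.

import random

def demo_probability(step, total_steps):
    # Linear anneal from 1.0 at step 0 down to 0.0 at the final step.
    return max(0.0, 1.0 - step / total_steps)

def build_prompt(task, step, total_steps, rng):
    prompt = f"Task: {task}\n"
    if rng.random() < demo_probability(step, total_steps):
        # Scaffold: a worked tool call (hypothetical tool and output).
        prompt = "Example: call search('weather Paris') -> '18C'\n" + prompt
    return prompt

rng = random.Random(0)
early = build_prompt("find the weather", step=0, total_steps=100, rng=rng)
late = build_prompt("find the weather", step=100, total_steps=100, rng=rng)
```

At step 0 the demo always appears (probability 1.0); by the final step it never does, which is exactly the zero-shot condition the policy is rewarded under.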