LLMs can be forced to generalize beyond initial constraints by actively searching for adversarial test cases that expose logical divergences in generated code.
Synergy's architecture lets agents evolve through experience by proactively recalling rewarded trajectories, hinting at a new way to build agents that learn and adapt in open, collaborative environments.
LLM agents acting in the real world introduce a whole new threat landscape beyond unsafe text, demanding a shift in focus towards system-level security for agent ecosystems.
LMM-based GUI agents stick out like a sore thumb in human-centric mobile environments, but simple techniques can make them blend in without sacrificing utility.
GUI agents learn faster and generalize better with a new reward shaping technique that dynamically adapts to successful exploration trajectories, outperforming fixed reward schemes.