Forget black-box policies: CSRO uses LLMs to generate human-readable code policies in multi-agent RL, achieving performance competitive with traditional methods.
LLM-powered diagnostic AI is ready for prime time: a real-world clinical trial shows it's safe, patients love it, and doctors find it useful.
Multimodal web agents are surprisingly vulnerable to cross-modal attacks, but a novel adversarial training approach both mitigates these risks and doubles task-completion efficiency.
LLMs can drastically accelerate robot planning in cluttered environments by injecting common-sense priors about object locations and co-occurrences, slashing planning time by up to 72% in real-world experiments.
LLMs are becoming "epistemic agents" that shape our knowledge environment, so we need a new framework for evaluating and governing them based on trustworthiness, not just performance.
Gemini 3 Deep Think can now autonomously solve a majority of problems in a challenging math competition, signaling a leap in AI's mathematical reasoning capabilities.
Sequence models can learn to cooperate in multi-agent settings simply by training against diverse partners, no explicit meta-learning required.
LLMs can autonomously discover novel MARL algorithms that outperform hand-designed baselines, revealing untapped potential in automated algorithm design.
Forget rigid heuristics: this adaptive AI delegation framework dynamically adjusts task allocation, authority transfer, and trust-building, promising more robust agentic systems.
People say they prefer AI advisors, but AI delegates that negotiate autonomously on their behalf actually yield higher individual gains and better overall group welfare in multi-party bargaining games.