Forget prompt engineering: LSE trains LLMs to self-edit their own contexts at test time, outperforming even GPT-5 and Claude Sonnet 4.5 on Text-to-SQL and question answering.
LLM agents can learn to elicit crucial information from users by rewarding interaction turns that most reduce uncertainty about the optimal action, leading to better task performance.
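The reward idea in the blurb above can be sketched as an information-gain bonus: score each interaction turn by how much it shrinks the agent's uncertainty over which action is optimal. A minimal sketch, assuming the agent keeps an explicit belief distribution over candidate actions (the function names here are illustrative, not from the paper):

```python
import math

def entropy(probs):
    """Shannon entropy (in nats) of a belief over candidate actions."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def turn_reward(belief_before, belief_after):
    """Reward a dialogue turn by how much it reduces uncertainty
    about the optimal action (information gain)."""
    return entropy(belief_before) - entropy(belief_after)

# Toy example: a good clarifying question collapses a uniform belief
# over four candidate actions to a nearly certain one.
before = [0.25, 0.25, 0.25, 0.25]
after = [0.85, 0.05, 0.05, 0.05]
reward = turn_reward(before, after)  # positive: the turn was informative
```

A turn that leaves the belief unchanged earns zero reward, so the agent is pushed to ask questions that actually discriminate between candidate actions.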
Forget fixed teams: this new reinforcement learning framework lets agents spawn new teammates on the fly, unlocking dynamic strategies that fixed-roster methods cannot express.
Forget scraping messy real-world websites: AutoWebWorld lets you synthesize infinite, perfectly verifiable web interaction data for just $0.04 a pop, dramatically boosting agent performance.
GPT-5's real-time router learns to route queries to specialized models, making it faster and more useful than its predecessors.
Command A shows how to build an enterprise-grade LLM that balances performance, efficiency, and multilingual capabilities using decentralized training and model merging.
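Command A's actual merging recipe is more involved, but the core of model merging is easy to sketch: combine several expert checkpoints by (weighted) averaging of their parameters. A minimal sketch using plain dicts as stand-ins for checkpoints (names are illustrative):

```python
def merge_models(state_dicts, weights=None):
    """Merge expert checkpoints by weighted parameter averaging,
    one common form of model merging."""
    n = len(state_dicts)
    weights = weights or [1.0 / n] * n
    return {
        name: sum(w * sd[name] for w, sd in zip(weights, state_dicts))
        for name in state_dicts[0]
    }

# Toy "checkpoints": parameter name -> scalar value.
expert_a = {"layer.w": 1.0, "layer.b": 0.0}
expert_b = {"layer.w": 3.0, "layer.b": 2.0}
merged = merge_models([expert_a, expert_b])  # uniform average of the two
```

In practice the experts are fine-tuned from a shared base model (so their parameters live in the same basin), and the weights can be tuned per-domain rather than uniform.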