Forget trajectory-level rollouts: MuSEAgent learns faster and reasons better by distilling past interactions into reusable, state-aware decision experiences.
Achieve significantly more accurate text and formula rendering with a training-free agentic workflow that injects glyph templates into latent spaces and attention maps of text-to-image models.
Current multimodal models are largely limited to bi-modal interactions, but OmniGAIA and OmniAtlas offer a path toward truly omni-modal AI assistants capable of reasoning and tool use across video, audio, and images.
LLMs can learn to generate high-quality symbolic world models by interacting with a multi-agent system that provides adaptive, behavior-aware feedback, closing the gap between static validation and interactive execution.