Ditch the garment masks: a simple human mask is all you need to nail video virtual try-on in the wild.
Building agents that reliably automate complex, multi-step workflows over local files and tools just got a whole lot easier.
Forget scaling laws: a carefully crafted data curriculum lets a 32B parameter model beat GPT-4.5 and Claude-4.5 on web navigation.
LLMs can now automatically evolve and optimize GPU kernels to beat hand-tuned and proprietary models like Gemini and Claude.
A new 32B code LLM trained specifically for industrial tasks crushes existing models on specialized domains like chip design and GPU kernel optimization, while remaining competitive on general coding benchmarks.
Code LLMs can reach SOTA performance on agentic tasks by explicitly modeling how software logic evolves across different training stages.
A 32B model trained entirely on synthetic data from InfTool outperforms models 10x larger on tool use, rivaling even Claude-Opus.