Industrial code generation gets a reasoning boost: InCoder-32B-Thinking leverages error-driven feedback and a code world model to achieve top-tier performance on complex hardware-aware tasks.
LLMs can generate more accurate motion trajectories by clustering them into geometrically consistent families, even without retraining.
Code LLMs can achieve SOTA performance in agentic tasks by explicitly modeling the dynamic evolution of software logic across different training stages.
A new 32B code LLM trained specifically for industrial tasks substantially outperforms existing models on specialized domains like chip design and GPU kernel optimization, while remaining competitive on general coding benchmarks.
A 32B model trained entirely on synthetic data from InfTool outperforms models ten times its size on tool use, rivaling even Claude-Opus.