Code LLMs can achieve SOTA performance on agentic tasks by explicitly modeling the dynamic evolution of software logic across different training stages.
A 32B model trained entirely on synthetic data from InfTool outperforms models 10x larger on tool use, rivaling even Claude-Opus.