Nemotron 3 Super shows that combining Mamba, Attention, and Mixture-of-Experts layers can match the accuracy of existing 120B models while delivering significantly higher inference throughput.
LLM-generated code fixes often break what wasn't broken, but a new training scheme that rewards minimal edits can boost repair precision by 31%.