Multimodal models can now handle audio natively and with improved efficiency, achieving state-of-the-art results on complex tasks like document understanding and agentic computer use.
Nemotron 3 Super shows that combining Mamba, Attention, and Mixture-of-Experts can match the accuracy of existing 120B models while delivering significantly higher inference throughput.
By decoupling MLLM instruction tuning from DiT alignment, DuoGen achieves state-of-the-art interleaved multimodal generation without costly unimodal pretraining.