A unified framework and comprehensive evaluation reveal the surprisingly nuanced performance of diffusion-based data augmentation, showing where it shines and where it falls short in low-data image classification.
LLMs struggle with new code frameworks because their training data rarely captures codebase relationships; this work shows how to synthesize reasoning-aware data from code graphs to dramatically improve performance.
Robots can now adapt to dynamic environments with minimal human involvement by learning from a world model and force-torque feedback, achieving state-of-the-art manipulation performance.
Forget full attention: MiniCPM-SALA, a hybrid sparse-linear attention model, achieves 3.5x faster inference and supports 1M-token context on a single GPU, all while maintaining comparable performance.