LLMs can now optimize CUDA kernels across diverse scientific computing and LLM workloads, rivaling hand-tuned libraries like cuBLAS.
Forget iterative optimization: now you can edit 3D models in a single feedforward pass with globally consistent deformations and high-fidelity textures.
Achieve zero package hallucinations from LLMs in dependency recommendation by monitoring the decoding process and intervening with an authoritative package list.
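The intervention described above can be sketched as constrained decoding: whenever the model is emitting a package name, only continuations that stay a prefix of some package in the authoritative index are permitted, and any fully decoded name is validated against that index. This is an illustrative sketch, not the paper's actual implementation; the `AUTHORITATIVE` set and both function names are hypothetical stand-ins.

```python
# Hypothetical authoritative index; a real system would load the
# registry's full package list (e.g. from PyPI metadata).
AUTHORITATIVE = {"numpy", "requests", "pandas"}

def allowed_next_chars(prefix, index=AUTHORITATIVE):
    """Characters that keep a partially decoded package name a valid
    prefix of some package in the authoritative index. During decoding,
    tokens introducing any other character would be masked out."""
    return {name[len(prefix)] for name in index
            if name.startswith(prefix) and len(name) > len(prefix)}

def filter_hallucinated(suggestions, index=AUTHORITATIVE):
    """Post-hoc check: drop any fully decoded name absent from the index."""
    return [pkg for pkg in suggestions if pkg in index]

# "requestz" is a hallucinated near-miss and gets filtered out.
print(filter_hallucinated(["numpy", "requestz", "pandas"]))
print(sorted(allowed_next_chars("re")))
```

Because every emitted character is forced onto a path that ends at a real package, hallucinated names cannot be produced at all, which is what makes the zero-hallucination guarantee possible.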
High-quality data is all it takes: Bee-8B, trained on the new Honey-Data-15M dataset, leapfrogs existing fully open MLLMs to rival semi-open models.