Finally, a neural interatomic potential that accurately models long-range electrostatic interactions without sacrificing SO(3) equivariance or energy-force consistency.
Virtual cell perturbation prediction gets a 12x speedup in pretraining and a 12% boost in biological fidelity with SCALE, a new foundation model that prioritizes scalable infrastructure and biologically faithful evaluation.
Embodied agents can now collaboratively reason about space and manipulate objects in the real world, thanks to a new reinforcement learning approach that fuses their egocentric viewpoints into a world-centric understanding.
Achieve the accuracy of complex 4D data assimilation with the speed of end-to-end inference by operating in a learned latent space.
Structured composition unlocks significantly better agent performance compared to flat skill invocation, even with the same skill set.
LLMs can slash the search space for physical laws by 100,000x, yielding simpler and more accurate formulas for materials properties.
Forget separate pipelines for EEG, MEG, and fMRI data: this LLM fuses them all into a single semantic space, unlocking more accurate brain decoding.
GPT-5's scientific reasoning skills plummet by nearly 50% when tackling multi-step workflows, revealing a critical gap in current LLM agents' ability to orchestrate complex tool use.
Imagine AI scientists that not only reason but also autonomously conduct experiments in the real world: that is the promise of Intelligent Science Laboratories.
Forget fine-tuning: this evolutionary approach lets you adapt LLMs to new tasks with just 200 samples and no gradients, outperforming standard methods by up to 54.8%.