Training trillion-parameter Mixture-of-Experts models just got a whole lot faster: Megatron Core now achieves >1 PFLOP/GPU on NVIDIA's latest hardware.
Predict how well your LLM will transfer to a new domain *before* fine-tuning, by using sparse autoencoders to spot tell-tale signs of domain shift in the model's representations.
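A minimal sketch of the idea, assuming a frozen SAE encoder and a simple firing-rate comparison as the shift signal; the weights, statistic, and score below are illustrative assumptions, not the paper's actual estimator:

```python
# Hedged sketch: scoring domain shift with a (frozen) sparse autoencoder.
import numpy as np

rng = np.random.default_rng(0)
d_model, d_sae = 64, 256

# Stand-ins for hidden states pulled from one LLM layer on two corpora.
source_acts = rng.normal(size=(1000, d_model))           # in-domain text
target_acts = rng.normal(loc=0.3, size=(1000, d_model))  # new-domain text

# A pre-trained ReLU SAE encoder would supply these weights.
W_enc = rng.normal(size=(d_model, d_sae)) / np.sqrt(d_model)
b_enc = np.zeros(d_sae)

def sae_features(acts: np.ndarray) -> np.ndarray:
    """Encode activations into sparse features (ReLU SAE encoder)."""
    return np.maximum(acts @ W_enc + b_enc, 0.0)

def firing_rates(feats: np.ndarray) -> np.ndarray:
    """Fraction of inputs on which each sparse feature is active."""
    return (feats > 0).mean(axis=0)

p = firing_rates(sae_features(source_acts))
q = firing_rates(sae_features(target_acts))

# Heuristic shift score: how differently features fire across domains.
# Larger score -> more representational shift -> worse expected transfer.
shift_score = float(np.abs(p - q).mean())
print(f"domain-shift score: {shift_score:.4f}")
```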
Unifying generation with understanding in multimodal models often *hurts* performance on understanding tasks (except spatial reasoning, visual illusions, and multi-round reasoning), challenging the assumption that generation universally improves understanding.
Uncover why your spam filter fails: X-MAP reveals topic-level semantic patterns that expose the weaknesses of your detection model.
LLMs can now leverage a hierarchical graph structure for memory retrieval, enabling global reasoning and boosting performance on long-term memory benchmarks beyond what's achievable with similarity search alone.
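For intuition, here is a hedged sketch of coarse-to-fine retrieval over a two-level memory graph, contrasted with flat similarity search; the graph layout, toy embedder, and dot-product scoring are illustrative assumptions, not the benchmarked system:

```python
# Hedged sketch: hierarchical (coarse-to-fine) memory retrieval.
import hashlib
import numpy as np

DIM = 32

def embed(text: str) -> np.ndarray:
    """Deterministic toy embedder (a real system would use a sentence encoder)."""
    seed = int.from_bytes(hashlib.md5(text.encode()).digest()[:4], "little")
    v = np.random.default_rng(seed).normal(size=DIM)
    return v / np.linalg.norm(v)

# Leaf memories grouped under summary nodes (e.g. one node per topic/episode).
memory_graph = {
    "trip to Kyoto": ["booked a ryokan in Gion", "visited Fushimi Inari at dawn"],
    "work project": ["migrated the API to gRPC", "fixed the flaky retry logic"],
}
summary_vecs = {s: embed(s) for s in memory_graph}
leaf_vecs = {s: [(m, embed(m)) for m in ms] for s, ms in memory_graph.items()}

def retrieve(query: str) -> str:
    """First pick the best summary node (global step), then the best leaf
    under it (local step). Flat search scores every leaf directly and can
    miss memories whose wording differs from the query but whose topic fits."""
    q = embed(query)
    best_summary = max(summary_vecs, key=lambda s: float(summary_vecs[s] @ q))
    best_leaf = max(leaf_vecs[best_summary], key=lambda mv: float(mv[1] @ q))[0]
    return f"[{best_summary}] {best_leaf}"

# With random toy embeddings the answer is arbitrary; with a real encoder the
# query below should surface the gRPC memory via the "work project" node.
print(retrieve("what did we change in the backend service?"))
```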
GPT-5's scientific reasoning performance plummets by nearly 50% when tackling multi-step workflows, revealing a critical gap in current LLM agents' ability to orchestrate complex tool use.