Peking University
Generating coordinated bimanual grasps on diverse objects is now possible thanks to a dataset of nearly 10 million grasps and a model that adapts to object geometry and size.
LLMs can boost code clone detection accuracy by arbitrating only the 0.2% of cases that a multimodal fusion model flags as uncertain, yielding a 0.3% absolute Macro-F1 gain.
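The escalation pattern described above — a fast fusion model handles almost everything, and an expensive LLM judge is consulted only inside a narrow confidence band — can be sketched as follows. This is a minimal illustration, not the paper's method: `fusion_prob`, `llm_judge`, and the band thresholds are hypothetical stand-ins.

```python
def detect_clone(pair, fusion_prob, llm_judge, low=0.45, high=0.55):
    """Decide whether `pair` is a code clone.

    fusion_prob(pair) -> float in [0, 1], the fast fusion model's clone
                         probability (stand-in for the real model).
    llm_judge(pair)   -> bool, the expensive LLM verdict, invoked only
                         when the fusion model's confidence falls inside
                         the uncertain band (a tiny fraction of cases).
    """
    p = fusion_prob(pair)
    if low <= p <= high:      # fusion model is unsure: escalate to the LLM
        return llm_judge(pair)
    return p > 0.5            # otherwise trust the fusion model's verdict


# Toy usage with stub models:
confident = detect_clone(("a", "b"), lambda _: 0.95, lambda _: False)  # True, LLM never called
escalated = detect_clone(("a", "b"), lambda _: 0.50, lambda _: True)   # True, LLM decides
```

The design point is that overall cost stays close to the fusion model alone, because the LLM only sees the sliver of inputs where its extra reasoning can actually change the answer.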
Unlock a 6x boost in hit rates for novel anti-cancer agents by infusing chemical structures with biological insights, even without biological data at inference time.
Training robots in a photorealistic Gaussian Splatting simulator transfers surprisingly well to the real world, boosting scene understanding and navigation performance.
Forget hand-crafted heuristics: this new dynamics-aware policy learns to exploit contact forces in cluttered environments, outperforming traditional methods by 25% in simulation with strong sim-to-real transfer.
A single spatial token, learned via occupancy prediction on a massive dataset, is surprisingly effective at injecting crucial spatial awareness into vision-language navigation, leading to state-of-the-art performance.
Forget expensive data collection: Seed2Scale leverages a small-model/large-model synergy to self-generate high-quality embodied AI training data, starting from just four seed demonstrations.