Forget static coordination: robots that communicate and dynamically re-plan achieve a 69% improvement in collaborative navigation success.
Fine-tuning visual foundation models with LoRA-based pairwise training dramatically improves the robustness of AI-generated image (AIGI) detection against real-world distortions.
Single-pixel imaging gets a deep learning boost: SISTA-Net leverages learned sparsity and hybrid CNN-VSSM architectures to achieve state-of-the-art reconstruction quality, even in noisy underwater environments.
Ruyi2.5 achieves comparable performance to Qwen3-VL on general multimodal benchmarks while significantly outperforming it in privacy-constrained surveillance, demonstrating the effectiveness of its edge-cloud architecture.
Achieve real-time (409 FPS) underwater image enhancement with a tiny 3,880-parameter model that significantly improves color accuracy, enabling deployment on resource-constrained underwater platforms.
Don't fully retrain your draft model after fine-tuning your LLM: EDA restores speculative decoding performance with significantly less compute by adapting only a small, private component and regenerating training data.
Skip the costly robot teleoperation data: ZeroWBC learns surprisingly natural humanoid control policies directly from human egocentric videos.
LLMs can learn better from human feedback by exploring more creatively, thanks to a simple coin-flip counting method that encourages them to try new things.
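The "coin-flip counting" in that last summary plausibly refers to probabilistic (Morris-style) approximate counting, where a counter advances only if a run of coin flips succeeds, so visit counts can be tracked cheaply and turned into an exploration bonus. The sketch below is an assumption about the general technique, not the paper's actual method; the class name and the `1/sqrt(n+1)` bonus form are illustrative conventions from count-based exploration, not taken from the source.

```python
import math
import random


class MorrisCounter:
    """Approximate counter: stores only ~log2(n) bits by
    incrementing with probability 2**-c (a 'coin flip' per level)."""

    def __init__(self, rng=None):
        self.c = 0
        self.rng = rng or random.Random()

    def increment(self):
        # Flip c fair coins; advance the level only if all come up heads.
        # At c = 0 the probability is 1, so the first increment always lands.
        if self.rng.random() < 2.0 ** -self.c:
            self.c += 1

    def estimate(self):
        # Standard unbiased estimate of the true count for a Morris counter.
        return 2 ** self.c - 1


def novelty_bonus(counter):
    # Common count-based exploration bonus: rarely-seen states
    # (low estimated count) get a larger reward shaping term.
    return 1.0 / math.sqrt(counter.estimate() + 1)
```

In an RL-from-feedback loop, each state (or response cluster) would hold one such counter, and `novelty_bonus` would be added to the learned reward to push the policy toward under-explored outputs.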