FedLLMs, thought to be safer due to data localization, are shockingly vulnerable: a new attack achieves near 100% membership inference accuracy, even with differential privacy.
LLMs signal their internal certainty during answer decoding through predictable attention patterns on their own reasoning traces.
Achieve robust SLAM in dynamic environments without semantic labels or depth sensors by disentangling scene dynamics with a generalizable motion model.
VLA models can ace a task yet still trigger unsafe outcomes, exposing a critical gap between action execution and semantic understanding.
Navigate to objects forever: OVAL enables robots to continuously explore and remember new environments, unlocking truly lifelong object goal navigation.
Achieve real-time, globally consistent, and photorealistic SLAM in large-scale environments by directly performing loop closure on optimized Gaussian maps.
Even GPT-5.4 can't handle investment banking tasks, failing nearly half the criteria and producing zero client-ready outputs on a new benchmark designed with 500+ bankers.
Incomplete trajectory data got you down? This plug-and-play framework progressively aligns features from incomplete observations with complete ones, boosting prediction accuracy in autonomous driving scenarios.
VLMs that ace digital document parsing benchmarks still stumble badly when faced with real-world scanned, warped, or photographed documents, revealing a significant "reality gap."
You can cut MLLM hallucinations in remote sensing tasks without any training by cleverly exploiting the model's own attention mechanisms to focus on relevant image regions.
LLM-powered pentesting agents fail not because of model limitations, but because they can't estimate task difficulty, leading to wasted effort and premature context exhaustion.
Forget sub-task prediction: the secret to better robot policies is reasoning directly in the action space with a sequence of coarse action intents.