Token-level Mixture-of-Experts, directly ported from LLMs, can actually *hurt* autonomous driving performance in VLA models; SAMoE-VLA fixes this with scene-adaptive expert selection, achieving SOTA results with fewer parameters.
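A hypothetical sketch of what scene-adaptive expert selection could look like (the teaser does not specify SAMoE-VLA's actual architecture; all names and shapes here are illustrative): instead of routing every token independently, pool the scene into one embedding and pick experts once per scene.

```python
import numpy as np

rng = np.random.default_rng(0)

def scene_adaptive_moe(tokens, experts, router_w, top_k=2):
    """Route ALL tokens through experts chosen from a pooled scene embedding.

    tokens:   (n_tokens, d) token features for one driving scene
    experts:  list of (d, d) expert weight matrices
    router_w: (d, n_experts) router weights
    """
    scene = tokens.mean(axis=0)            # pool the scene into one vector
    logits = scene @ router_w              # one routing decision per scene
    top = np.argsort(logits)[-top_k:]      # indices of the selected experts
    gates = np.exp(logits[top] - logits[top].max())
    gates /= gates.sum()                   # softmax over the selected experts
    out = sum(g * (tokens @ experts[i]) for g, i in zip(gates, top))
    return out, top

d, n_experts = 16, 4
tokens = rng.normal(size=(10, d))
experts = [rng.normal(size=(d, d)) for _ in range(n_experts)]
router_w = rng.normal(size=(d, n_experts))
out, chosen = scene_adaptive_moe(tokens, experts, router_w)
```

The key contrast with token-level MoE is that `chosen` is computed once from the pooled scene, so all tokens in a scene share the same experts rather than fragmenting across them.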
Interpolating latent representations before decoding yields a reconstruction FID (iFID) that finally aligns with the generation FID of latent diffusion models, achieving ~0.85 correlation where standard rFID fails.
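A minimal numpy sketch of the idea, under my own assumptions (the toy linear decoder, the interpolation range, and all function names are illustrative, not the paper's protocol): interpolate pairs of latents, decode them, and score the result with the standard Fréchet distance.

```python
import numpy as np

def frechet_distance(feats_a, feats_b):
    """Standard Frechet (FID) formula between two feature distributions,
    using the identity Tr(sqrtm(A @ B)) = sum of sqrt eigenvalues of A @ B."""
    mu_a, mu_b = feats_a.mean(0), feats_b.mean(0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    eig = np.linalg.eigvals(cov_a @ cov_b)
    tr_sqrt = np.sqrt(np.clip(eig.real, 0.0, None)).sum()
    return float(((mu_a - mu_b) ** 2).sum()
                 + np.trace(cov_a) + np.trace(cov_b) - 2.0 * tr_sqrt)

def interpolated_fid(latents, decode, real_feats, rng, t_low=0.25, t_high=0.75):
    """iFID-style score: decode interpolations of latent pairs, then FID."""
    z2 = latents[rng.permutation(len(latents))]
    t = rng.uniform(t_low, t_high, size=(len(latents), 1))
    z_mix = (1.0 - t) * latents + t * z2   # interpolate BEFORE decoding
    return frechet_distance(decode(z_mix), real_feats)

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))                # stand-in decoder + feature net
decode = lambda z: z @ W
latents = rng.normal(size=(500, 8))
real_feats = decode(rng.normal(size=(500, 8)))
score = interpolated_fid(latents, decode, real_feats, rng)
```

Because interpolated latents fall off the data manifold, a decoder that merely memorizes reconstructions is penalized, which is one plausible reason this score tracks generation FID better than plain rFID.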
By predicting latent features instead of pixels, PROSPECT achieves state-of-the-art VLN performance and long-horizon robustness without adding inference overhead.
Achieve >90% accuracy in non-invasive intracranial tumor diagnosis from MRI using a novel "Virtual Biopsy" framework, potentially reducing the need for risky and biased traditional biopsies.
Unlock the wealth of patient insights buried in secure messages: PVminer structures the patient voice with unprecedented accuracy, outperforming existing clinical NLP models.
By using a unified proxy to combat both external and internal data heterogeneity at once, ProxyFL achieves significant performance and convergence improvements in federated semi-supervised learning.
By aligning latent representations with multiple visual foundation models, FRAPPE offers a more scalable and data-efficient way to imbue generalist robotic policies with robust world-awareness.
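One way such multi-model alignment could be set up (purely illustrative; FRAPPE's actual objective, teacher models, and projection heads are not described in this teaser): project the policy's latent features into each foundation model's feature space and average a cosine-distance loss across teachers.

```python
import numpy as np

def multi_teacher_alignment_loss(policy_feats, teacher_feats, projections):
    """Average (1 - cosine similarity) between projected policy features and
    each visual foundation model's features.

    policy_feats:  (batch, d) latent features from the policy encoder
    teacher_feats: {name: (batch, d_teacher)} frozen teacher features
    projections:   {name: (d, d_teacher)} learned per-teacher projection
    """
    total = 0.0
    for name, feats in teacher_feats.items():
        proj = policy_feats @ projections[name]     # map into teacher space
        num = (proj * feats).sum(axis=1)
        den = np.linalg.norm(proj, axis=1) * np.linalg.norm(feats, axis=1)
        total += (1.0 - num / den).mean()           # cosine distance
    return total / len(teacher_feats)

rng = np.random.default_rng(0)
policy_feats = rng.normal(size=(32, 64))
teacher_feats = {"teacher_a": rng.normal(size=(32, 384)),
                 "teacher_b": rng.normal(size=(32, 512))}
projections = {"teacher_a": rng.normal(size=(64, 384)),
               "teacher_b": rng.normal(size=(64, 512))}
loss = multi_teacher_alignment_loss(policy_feats, teacher_feats, projections)
```

Averaging over several frozen teachers is what makes the recipe scalable: adding another foundation model only adds one projection head and one term to the loss.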
By decoupling generation and refinement experts within a masked diffusion VLA model, DriveFine achieves both flexible decoding and self-correction for autonomous driving.