Spotting coordinated fake reviewers just got easier: a new graph learning method boosts detection accuracy by adaptively weighting network diversity and similarity.
By explicitly modeling visibility, VSDiffusion generates more geometrically plausible and realistic shadows, outperforming prior methods on a challenging image composition task.
Forget training separate models for every remote sensing modality pair: Any2Any learns a single latent space for unified translation, even generalizing to unseen modality combinations.
Forget satellite-specific hacks: FoundPS achieves state-of-the-art pansharpening performance with a single model robust to diverse sensors and scenes.
You can cut MLLM hallucinations in remote sensing tasks without any training by cleverly exploiting the model's own attention mechanisms to focus on relevant image regions.
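The general idea behind training-free, attention-guided focusing can be illustrated in a few lines. This is a minimal sketch of the generic technique (keep only the most-attended image patches and renormalize), not the paper's exact method; the input `attn_scores` is a hypothetical per-patch attention vector that a real MLLM would expose from one of its layers.

```python
import numpy as np

def attention_focus(attn_scores, keep_ratio=0.25):
    """Keep only the most-attended image patches and renormalize.

    attn_scores: 1-D array of attention mass over image patches
                 (hypothetical input for illustration).
    Returns a sparse attention distribution over salient regions.
    """
    k = max(1, int(len(attn_scores) * keep_ratio))
    top = np.argsort(attn_scores)[-k:]   # indices of the top-k patches
    mask = np.zeros_like(attn_scores)
    mask[top] = attn_scores[top]         # zero out weakly attended patches
    return mask / mask.sum()             # renormalize to a distribution

scores = np.array([0.05, 0.40, 0.10, 0.30, 0.15])
focused = attention_focus(scores, keep_ratio=0.4)
# only the two strongest patches (indices 1 and 3) keep nonzero mass
```

Downstream, the sparsified map can be used to re-weight visual tokens so the decoder conditions on relevant regions only.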
Achieve superior LLM pruning performance by first nudging models toward sparsity-friendly weight configurations *before* removing any weights.
By pruning and quantizing the KV cache, XStreamVGGT achieves a remarkable 4.42x memory reduction and 5.48x speedup in streaming 3D reconstruction without sacrificing performance.
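The storage side of KV-cache quantization is easy to sketch. This is an illustrative symmetric int8 round-trip on a toy cache block, assuming a simple per-tensor scale; the paper combines pruning with quantization inside the attention loop, which is not shown here.

```python
import numpy as np

def quantize_kv(kv, n_bits=8):
    """Symmetric per-tensor quantization of a float32 KV-cache block."""
    qmax = 2 ** (n_bits - 1) - 1                      # 127 for int8
    scale = max(float(np.abs(kv).max()) / qmax, 1e-8)
    q = np.clip(np.round(kv / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def dequantize_kv(q, scale):
    """Recover an approximate float32 block for attention computation."""
    return q.astype(np.float32) * scale

kv = np.random.randn(4, 16).astype(np.float32)        # toy (heads, head_dim)
q, scale = quantize_kv(kv)
recon = dequantize_kv(q, scale)
# int8 storage uses 4x less memory than float32
```

Per-channel scales and cache-entry pruning (dropping low-importance tokens entirely) are the usual next steps toward the larger savings the paper reports.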
PIME leverages prototype-guided Monte Carlo Tree Search to extract compact, neuroscientifically validated brain subnetworks predictive of disorders, outperforming standard deep learning approaches in both accuracy and interpretability.
Individuals can now demand a tamper-proof, verifiable record of every action taken by AI agents operating on their own devices, thanks to a new sovereignty kernel.
Robots can now adapt to dynamic environments with minimal human involvement by learning from a world model and force-torque feedback, achieving state-of-the-art manipulation performance.
Forget global coordinates: EgoPush lets mobile robots rearrange multiple objects using only an egocentric camera and learned object relationships, even in cluttered environments.
By ditching node alignment, this random-walk method cracks the code for classifying highly variable brain networks, boosting accuracy in distinguishing Alzheimer's from Lewy Body Dementia.
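A common way to compare graphs without node alignment is via anonymous random walks, where node identities are replaced by first-visit order so walk patterns transfer across graphs. This sketch shows that generic idea on toy adjacency lists, not the paper's exact construction.

```python
import random
from collections import Counter

def anonymous_walks(adj, walk_len=4, n_walks=200, seed=0):
    """Distribution of anonymous random-walk patterns for one graph.

    Each node is relabeled by its first-visit index within a walk, so
    two graphs can be compared with no node correspondence at all.
    """
    rng = random.Random(seed)
    counts = Counter()
    nodes = list(adj)
    for _ in range(n_walks):
        v = rng.choice(nodes)
        walk, seen = [], {}
        for _ in range(walk_len):
            seen.setdefault(v, len(seen))   # anonymize: first-visit index
            walk.append(seen[v])
            v = rng.choice(adj[v])
        counts[tuple(walk)] += 1
    total = sum(counts.values())
    return {p: c / total for p, c in counts.items()}

# a triangle and a 3-node path yield different pattern distributions:
# e.g. the cycle pattern (0, 1, 2, 0) is impossible on the path
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
path = {0: [1], 1: [0, 2], 2: [1]}
f_tri, f_path = anonymous_walks(triangle), anonymous_walks(path)
```

These pattern histograms become fixed-length feature vectors that any standard classifier can consume, sidestepping node correspondence entirely.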
LLM code copilots are put to the test with SecCodeBench-V2, a new benchmark revealing security vulnerabilities in their generated code across 22 CWE categories and five programming languages.
MLLMs struggle to effectively zoom into relevant details in ultra-high-resolution remote sensing imagery, but a new staged training framework can teach them when and where to focus for substantial accuracy gains.