LLM-based autonomous agents are vulnerable to cascading security failures across context, tools, state, and ecosystem layers, demanding a more holistic defense strategy.
MLLMs can ace circuit-to-code generation by cheating with identifier semantics, so anonymizing those identifiers reveals a shocking lack of true visual grounding.
Editing an LLM's personality doesn't have to break its brain: DPN-LE precisely controls personality traits by intervening on just 0.5% of neurons, preserving reasoning abilities that bulk neuron editing destroys.
Frequency-domain analysis unlocks 1.59x speedups in Vision-Language Navigation by enabling optimal token caching, overcoming the limits of purely visual-domain approaches.
Unified benchmarks reveal the state of the art in simultaneously handling multiple real-world image degradations, such as blur, low light, and rain.
Achieve professional-grade video mashups by mimicking a human production pipeline, using hierarchical agents to handle global structure, editing intent, and fine-grained shot selection.
Reconstructing 3D scenes from images obscured by smoke and extreme darkness is now significantly more achievable, thanks to insights gleaned from the NTIRE 2026 challenge.
VLMs can be devastatingly fooled by modifying less than 2% of image pixels in a fixed, X-shaped pattern, causing them to fail spectacularly across tasks as diverse as classification, captioning, and question answering.
Medical vision-language models are surprisingly brittle: clinically plausible image manipulations, like those introduced during routine acquisition and delivery, can drastically degrade their performance.