Projector fine-tuning, commonly used for aligning MLLMs, unexpectedly introduces backdoor vulnerabilities with activation mechanisms distinct from those in text-only LLMs.
Adversarially trained ViTs can retain robust generalization even while overfitting, mirroring a phenomenon previously observed only in CNNs.
Panoramic vision-language models can achieve a level of holistic scene understanding and robustness under adverse conditions that traditional pinhole-based VLMs cannot match.
LLM agents can now defend against indirect prompt injection attacks without sacrificing task performance, thanks to a new method that surgically manipulates attention based on latent-space analysis.
Agentic LLMs are far more vulnerable to indirect prompt injection than previously thought: AdapTools more than doubles attack success rates while significantly degrading system utility, even against strong defenses.