LLM agents can defend against indirect prompt injection attacks without sacrificing task performance, using a new method that surgically manipulates attention based on latent-space analysis.
Agentic LLMs are far more vulnerable to indirect prompt injection than previously thought: AdapTools more than doubles attack success rates while significantly degrading system utility, even against strong defenses.