Trigger-based defenses offer a false sense of security in federated learning: a new attack implants backdoors without any explicit trigger, achieving 2-50x better attack performance than trigger-based attacks.
LLM agents can now defend against indirect prompt injection attacks without sacrificing task performance, thanks to a new method that surgically manipulates attention based on latent space analysis.
Agentic LLMs are far more vulnerable to indirect prompt injection than previously thought: AdapTools more than doubles attack success while significantly degrading system utility, even against strong defenses.