A dedicated guard agent, trained via reasoning-intensive methods, can effectively neutralize prompt injection attacks in web-navigating agents without sacrificing performance.
Self-evolving LLM agents can be persistently compromised by injecting malicious payloads into their long-term memory, turning them into "zombie agents" that execute unauthorized actions across sessions.
AI agents can write coherent research papers, but they are alarmingly prone to fabricating experimental results.