Shanghai Jiao Tong University
AgentSentry stops indirect prompt injection attacks in LLM agents by using causal analysis to pinpoint when an attack takes hold, then surgically removing the malicious influence.
Code-generating LLMs may ace static benchmarks, yet developers are actually *slower* when using them because the tools disrupt mental flow, highlighting the need for benchmarks that capture the temporal dynamics of coding.