LLM-controlled robots are surprisingly vulnerable: a single compromised input can cascade through the system, bypassing safety measures and triggering dangerous physical actions.
LLM-powered systems are vulnerable to multi-pronged attacks that combine conventional cyber threats, adversarial ML, and conversational manipulation, all converging on a few key weaknesses.
LLM-powered healthcare systems are vulnerable to complex attack paths combining prompt injection and conventional cyberattacks, demanding a new goal-driven risk assessment approach.