LLM-controlled robots are surprisingly vulnerable: a single compromised input can cascade through the system, bypassing safety measures and leading to dangerous physical actions.
LLM-powered systems are exposed to multi-pronged attacks that combine conventional cyber threats, adversarial ML, and conversational manipulation, all converging on a few key weaknesses.
LLM-powered healthcare systems are vulnerable to complex attack paths combining prompt injection and conventional cyberattacks, demanding a new goal-driven risk assessment approach.
Multimodal LLMs can be hijacked by adversarial instructions hidden inside seemingly innocuous images, with such attacks manipulating model outputs at a 64% success rate.