LLM-powered healthcare systems are vulnerable to complex attack paths combining prompt injection and conventional cyberattacks, demanding a new goal-driven risk assessment approach.
Multimodal LLMs can be hijacked by adversarial instructions hidden inside seemingly innocuous images, an attack that manipulates model outputs with a 64% success rate.