Forget prompt engineering – surgically altering a model's internal activations can jailbreak it, exposing vulnerabilities even when the input looks harmless.
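
To make the idea concrete, here is a minimal sketch (not the paper's actual method) of an activation-level intervention: a forward hook adds a "steering" direction to one transformer block's hidden states while the prompt itself stays benign. The model (`gpt2`), the layer index, the random steering vector, and the strength `alpha` are all illustrative placeholders; in practice the direction would be derived from contrasting activations rather than sampled at random.

```python
# Sketch of perturbing internal activations at generation time.
# Assumes a Hugging Face transformers causal LM; all specifics
# (model, layer index, steering vector) are hypothetical.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in model for illustration
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

layer_idx = 6  # hypothetical: which transformer block to intervene on
hidden_size = model.config.hidden_size

# Hypothetical steering direction; real interventions derive this from
# contrasting activations (e.g. over harmful vs. harmless prompts).
steering_vector = torch.randn(hidden_size)
steering_vector = steering_vector / steering_vector.norm()
alpha = 4.0  # intervention strength

def add_steering(module, inputs, output):
    # GPT-2 blocks return a tuple whose first element is the hidden states;
    # shift every position along the steering direction.
    hidden = output[0] + alpha * steering_vector.to(output[0])
    return (hidden,) + output[1:]

handle = model.transformer.h[layer_idx].register_forward_hook(add_steering)

prompt = "Tell me about chemistry."  # the input itself looks harmless
ids = tok(prompt, return_tensors="pt").input_ids
with torch.no_grad():
    out = model.generate(ids, max_new_tokens=40, do_sample=False)
print(tok.decode(out[0], skip_special_tokens=True))

handle.remove()  # detach the hook so later calls run unmodified
```

The point of the sketch is only that the intervention happens inside the forward pass, downstream of the prompt, which is why input-level filters never see it.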