Backdoor attacks in LLMs can be defused at inference time, without retraining or external data, by geometrically smoothing attention patterns to disrupt adversarial routing.
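One plausible reading of "geometrically smoothing attention patterns" is a per-row geometric-mean blend between the model's attention distribution and the uniform distribution, applied at inference time. The sketch below is a hypothetical illustration of that idea, not the paper's actual method; the function name, the `alpha` parameter, and the choice of geometric interpolation are all assumptions.

```python
import numpy as np

def smooth_attention(attn: np.ndarray, alpha: float = 0.1) -> np.ndarray:
    """Geometrically interpolate attention rows toward uniform (hypothetical).

    attn:  (seq_len, seq_len) row-stochastic attention matrix.
    alpha: smoothing strength in [0, 1]; 0 leaves attn unchanged,
           1 replaces it with the uniform distribution.
    """
    seq_len = attn.shape[-1]
    uniform = 1.0 / seq_len
    # Geometric mean of attn and uniform, entrywise, then renormalize each row
    # so the output is again a valid probability distribution.
    blended = attn ** (1.0 - alpha) * uniform ** alpha
    return blended / blended.sum(axis=-1, keepdims=True)

# A spiky attention row concentrated on one (potentially trigger) token.
attn = np.array([[0.90, 0.05, 0.05],
                 [0.10, 0.80, 0.10],
                 [0.20, 0.20, 0.60]])
smoothed = smooth_attention(attn, alpha=0.3)
```

The intuition is that a backdoor trigger tends to attract abnormally concentrated attention; flattening each row toward uniform weakens that routing while leaving mostly-diffuse rows nearly unchanged, and requires no retraining or external data.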