Backdoor attacks in LLMs can be defused at inference time, without retraining or external data, by geometrically smoothing attention patterns to disrupt adversarial routing.
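A minimal sketch of the idea, under assumed details (standard softmax attention; the blending operator and the name `smooth_attention` are hypothetical, not the paper's exact method): at inference, blend each attention distribution toward uniform, diluting the sharp trigger-to-target routing a backdoor relies on.

```python
import torch

def smooth_attention(attn: torch.Tensor, alpha: float = 0.2) -> torch.Tensor:
    """Blend post-softmax attention weights toward a uniform distribution.

    attn:  (batch, heads, q_len, k_len) attention weights after softmax.
    alpha: smoothing strength; 0 keeps the original pattern, 1 is fully uniform.
    Illustrative operator only -- not the paper's exact defense.
    """
    k_len = attn.size(-1)
    uniform = torch.full_like(attn, 1.0 / k_len)  # maximally smooth pattern
    smoothed = (1.0 - alpha) * attn + alpha * uniform
    # The convex blend already sums to 1 along the key axis; renormalize
    # defensively against numerical drift.
    return smoothed / smoothed.sum(dim=-1, keepdim=True)
```

Since the intervention is a pure function of the attention weights, it needs no retraining or external data; it can be applied by hooking each attention layer at inference time.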
Code LLMs don't just memorize training data: some generalize far better than others, and even "leaky" datasets like CVEFixes show a surprisingly low memorization advantage.
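One common way to quantify a memorization advantage, sketched here under assumed definitions (a HuggingFace-style causal LM; the paper may use a different metric), is the per-token loss gap between training members and held-out examples; a gap near zero suggests little rote memorization.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def mean_token_loss(model, token_ids: torch.Tensor) -> float:
    """Average next-token cross-entropy of a causal LM on one sequence."""
    logits = model(token_ids.unsqueeze(0)).logits  # (1, seq_len, vocab)
    return F.cross_entropy(logits[0, :-1], token_ids[1:]).item()

def memorization_advantage(model, member_seqs, nonmember_seqs) -> float:
    """Loss gap between unseen and training sequences.

    Hypothetical metric for illustration, not the paper's exact definition:
    the larger the gap, the more the model favors data it has seen.
    """
    member = sum(mean_token_loss(model, s) for s in member_seqs) / len(member_seqs)
    nonmember = sum(mean_token_loss(model, s) for s in nonmember_seqs) / len(nonmember_seqs)
    return nonmember - member
```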