Washington University in St. Louis
Adversarial attacks on LLMs can be dramatically sped up and made more effective by exploiting the surprisingly low-rank structure of adversarial perturbations.
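A minimal sketch of what the low-rank claim means in practice, assuming a generic embedding-space perturbation: the matrix shapes, the synthetic delta, and the 99% energy threshold below are illustrative assumptions, not the paper's actual attack or measurements.

```python
# Illustrative sketch (not the paper's method): measure the effective rank
# of an adversarial perturbation over a sequence of token embeddings, then
# rebuild it from its top singular directions.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical perturbation delta: seq_len x embed_dim, synthesized here as
# a genuinely low-rank matrix plus small noise to stand in for a real attack.
seq_len, embed_dim, true_rank = 64, 512, 4
delta = rng.standard_normal((seq_len, true_rank)) @ rng.standard_normal((true_rank, embed_dim))
delta += 0.01 * rng.standard_normal((seq_len, embed_dim))

# Effective rank: number of singular values capturing 99% of the energy.
u, s, vt = np.linalg.svd(delta, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
eff_rank = int(np.searchsorted(energy, 0.99) + 1)
print(f"effective rank: {eff_rank} of {min(seq_len, embed_dim)}")

# Reconstruct from the top eff_rank singular directions; a small relative
# error means the perturbation really lives in a low-rank subspace.
r = eff_rank
delta_lowrank = (u[:, :r] * s[:r]) @ vt[:r]
rel_err = np.linalg.norm(delta - delta_lowrank) / np.linalg.norm(delta)
print(f"relative reconstruction error at rank {r}: {rel_err:.4f}")
```

The speedup intuition follows from the parameter count: optimizing only r singular directions shrinks the search space from seq_len * embed_dim entries to roughly r * (seq_len + embed_dim), which is far smaller whenever r is low.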
Bridge the trust gap in cloud-based LLM services with AFTUNE, a practical framework that lets you audit proprietary fine-tuning and inference without prohibitive overhead.
A 4B-parameter model, fortified with TraceGuard, can detect reasoning backdoors as effectively as models 100x larger, even against unseen and adaptive attacks.