University of California, Los Angeles
Soft-gating with an "advisor" model can steer LLMs to be safer and more useful, reducing over-refusal without sacrificing detection accuracy.
Chain-of-thought compression, while shrinking reasoning traces, can quietly erode a model's safety, factuality, and multilingual performance.