University of Southern California
Watermarking LLMs by embedding the signal into the reasoning process itself proves surprisingly robust against fine-tuning and other post-training modifications.
LLMs are transforming conversational AI research, but leveraging them effectively for user simulation demands a new taxonomy and a clear view of the open challenges, as this survey shows.