The University of North Carolina at Chapel Hill
Using preference data from stronger models to align LLMs via Direct Preference Optimization (DPO) can backfire, dramatically worsening safety by making the aligned models more susceptible to jailbreaking.
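For context, the DPO objective this finding refers to pushes the policy toward the "chosen" responses in the preference data; when those responses come from a stronger external model, the policy is pulled toward that model's distribution. Below is a minimal PyTorch sketch of the standard DPO loss (Rafailov et al., 2023), with hypothetical tensor names; it is an illustration of the objective, not the paper's code.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Standard DPO loss. Each argument is a tensor of summed per-token
    log-probabilities for the chosen/rejected completion under the
    trainable policy or the frozen reference model."""
    chosen_ratio = policy_chosen_logps - ref_chosen_logps
    rejected_ratio = policy_rejected_logps - ref_rejected_logps
    # Preference margin scaled by beta; log-sigmoid turns it into the
    # Bradley-Terry log-probability that chosen beats rejected.
    logits = beta * (chosen_ratio - rejected_ratio)
    return -F.logsigmoid(logits).mean()

# Toy usage with made-up log-probs for a batch of 2 preference pairs.
policy_chosen = torch.tensor([-12.0, -9.5])
policy_rejected = torch.tensor([-14.0, -8.0])
ref_chosen = torch.tensor([-12.5, -10.0])
ref_rejected = torch.tensor([-13.5, -8.5])
print(dpo_loss(policy_chosen, policy_rejected, ref_chosen, ref_rejected))
```

Because the gradient rewards whatever the "chosen" side contains, preference pairs sourced from a stronger model can shift the policy in ways the original safety tuning never covered, which is the failure mode the summary describes.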