The paper introduces a multi-concept model immunization technique to prevent misuse of open-sourced models across multiple harmful applications. It addresses the limitations of single-concept immunization by learning a single "difficult initialization" that hinders fine-tuning across a set of concepts. This is achieved through a differentiable merging layer that combines weights from models adapted to each individual concept, yielding a model that resists fine-tuning on multiple harmful tasks simultaneously.
Beyond single-concept defenses: this work introduces a multi-concept model immunization technique that protects open-source models from misuse across a range of harmful applications.
Model immunization is an emerging direction that aims to mitigate the risk of misuse associated with open-sourced models and advances in adaptation methods. The idea is to make a released model's weights difficult to fine-tune on certain harmful applications, hence the name "immunized". Recent work on model immunization focuses on the single-concept setting, but in real-world situations models need to be immunized against multiple concepts. To address this gap, we propose an immunization algorithm that simultaneously learns a single "difficult initialization" for adaptation methods over a set of concepts. We achieve this by incorporating a differentiable merging layer that combines a set of model weights, each adapted to a different concept. In our experiments, we demonstrate the effectiveness of multi-concept immunization by generalizing prior work's experimental setups of re-learning and personalization adaptation to multiple concepts.
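The abstract does not spell out the form of the differentiable merging layer. The sketch below is a minimal PyTorch illustration of one plausible instantiation: a softmax-weighted, differentiable combination of per-concept adapted weight tensors, so that gradients can flow back through the merge to the shared initialization. The class name `DifferentiableMerge`, the `logits` parameterization, and the softmax choice are assumptions for illustration, not the paper's implementation.

```python
import torch
import torch.nn as nn


class DifferentiableMerge(nn.Module):
    """Hypothetical sketch: merge K per-concept adapted weight tensors into
    one tensor via learnable, softmax-normalized coefficients. The merge is
    fully differentiable, so gradients reach both the coefficients and the
    per-concept weights (and, through them, a shared initialization)."""

    def __init__(self, num_concepts: int):
        super().__init__()
        # One learnable logit per concept; softmax keeps the merge
        # coefficients positive and summing to one.
        self.logits = nn.Parameter(torch.zeros(num_concepts))

    def forward(self, concept_weights: list[torch.Tensor]) -> torch.Tensor:
        # concept_weights: K tensors of identical shape, one per concept,
        # each obtained by adapting the shared initialization to that concept.
        alphas = torch.softmax(self.logits, dim=0)        # (K,)
        stacked = torch.stack(concept_weights, dim=0)     # (K, ...)
        # Weighted sum over the concept axis -> a single merged weight tensor.
        return torch.einsum("k,k...->...", alphas, stacked)


# Illustrative usage: merge three concept-adapted 64x64 weight matrices.
merge = DifferentiableMerge(num_concepts=3)
adapted = [torch.randn(64, 64) for _ in range(3)]
merged = merge(adapted)  # shape (64, 64), differentiable w.r.t. logits and inputs
```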