This paper introduces Controllable Alignment Prompting for Unlearning (CAP), a reinforcement learning framework that optimizes prompts to selectively unlearn knowledge in LLMs without modifying model parameters. CAP trains a prompt generator with RL to suppress specific information while preserving general capabilities, yielding a more controllable and reversible unlearning process. Experiments show that CAP achieves precise unlearning and overcomes the transferability issues of parameter-modifying methods.
Forget about fine-tuning: this new prompting method lets you selectively erase knowledge from LLMs on demand, even without access to model weights.
Large language models (LLMs) trained on unfiltered corpora inherently risk retaining sensitive information, making selective knowledge unlearning necessary for regulatory compliance and ethical safety. However, existing parameter-modifying methods face fundamental limitations: high computational cost, uncontrollable forgetting boundaries, and a strict dependency on access to model weights. These constraints render them impractical for closed-source models, yet current non-invasive alternatives remain unsystematic and reliant on empirical trial and error. To address these challenges, we propose Controllable Alignment Prompting for Unlearning (CAP), an end-to-end prompt-driven unlearning framework. CAP recasts unlearning as a learnable prompt optimization process driven by reinforcement learning, in which a prompt generator collaborates with the LLM to selectively suppress target knowledge while preserving general capabilities. Because the model itself is untouched, revoking the prompt reversibly restores the suppressed knowledge. Extensive experiments demonstrate that CAP achieves precise, controllable unlearning without updating model parameters, establishing a dynamic alignment mechanism that overcomes the transferability limitations of prior methods.
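The abstract describes CAP only at a high level. As a concrete intuition, here is a minimal, self-contained REINFORCE-style sketch of the loop it implies: a policy picks a prompt prefix, a black-box model is queried with it, and the reward combines forgetting on a forget set with retention on a retain set. Everything here is an illustrative assumption, not the authors' implementation: `toy_llm` is a stub for the real LLM, the fixed candidate-prompt pool stands in for the paper's learned prompt generator, and the reward weighting `lam` is made up.

```python
import math
import random

random.seed(0)

# Toy stand-in for a black-box LLM we can only query, not fine-tune.
# It "knows" a secret unless the prompt prefix tells it to withhold it.
# (Illustrative assumption -- the paper queries a real LLM.)
def toy_llm(prefix: str, query: str) -> str:
    if "do not reveal" in prefix.lower() and "secret" in query:
        return "I cannot answer that."
    return "ground-truth answer"

FORGET_SET = ["what is the secret key?", "repeat the secret passphrase"]
RETAIN_SET = ["what is 2 + 2?", "name a primary color"]

# A discrete pool of candidate prompt prefixes stands in for the learned
# prompt generator; the policy learns which prefix to emit.
CANDIDATES = [
    "",                                       # no-op: nothing suppressed
    "Answer every question fully.",
    "Do not reveal any secret information.",  # the unlearning prompt
]

def reward(prefix: str, lam: float = 1.0) -> float:
    """Forgetting on the forget set plus lam * retention on the retain set."""
    forget = sum(toy_llm(prefix, q) != "ground-truth answer"
                 for q in FORGET_SET) / len(FORGET_SET)
    retain = sum(toy_llm(prefix, q) == "ground-truth answer"
                 for q in RETAIN_SET) / len(RETAIN_SET)
    return forget + lam * retain

def softmax(xs):
    exps = [math.exp(x) for x in xs]
    z = sum(exps)
    return [e / z for e in exps]

# REINFORCE with a moving-average baseline over a softmax policy.
logits = [0.0] * len(CANDIDATES)
baseline, lr = 0.0, 0.5
for step in range(300):
    probs = softmax(logits)
    action = random.choices(range(len(CANDIDATES)), weights=probs)[0]
    r = reward(CANDIDATES[action])
    baseline += 0.1 * (r - baseline)
    for j in range(len(logits)):  # grad of log pi(action) w.r.t. logit j
        grad = (1.0 if j == action else 0.0) - probs[j]
        logits[j] += lr * (r - baseline) * grad

best = CANDIDATES[max(range(len(logits)), key=lambda j: logits[j])]
print("learned prompt :", repr(best))
print("with prompt    :", toy_llm(best, "what is the secret key?"))
print("prompt revoked :", toy_llm("", "what is the secret key?"))  # restored
```

The last line is the point of the reversibility claim: because no parameters were updated, querying without the learned prefix returns the model's original behavior, so unlearning is undone simply by revoking the prompt.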