This paper investigates whether LLMs internally represent contextual privacy norms, finding that the key parameters of contextual integrity (information type, recipient, and transmission principle) are encoded as linearly separable directions in the model's activation space. Despite this structured internal representation, LLMs still leak private information, indicating a misalignment between representation and behavior. To address this, the authors introduce CI-parametric steering, which intervenes independently along each CI dimension and yields more effective privacy control than monolithic steering.
LLMs understand contextual privacy better than you think, but their actions don't reflect it, revealing a critical gap between internal knowledge and outward behavior.
Large language models (LLMs) are increasingly deployed in high-stakes settings, yet they frequently violate contextual privacy by disclosing private information in situations where humans would exercise discretion. This raises a fundamental question: do LLMs internally encode contextual privacy norms, and if so, why do violations persist? We present the first systematic study of contextual privacy as a structured latent representation in LLMs, grounded in contextual integrity (CI) theory. Probing multiple models, we find that the three norm-determining CI parameters (information type, recipient, and transmission principle) are encoded as linearly separable and functionally independent directions in activation space. Despite this internal structure, models still leak private information in practice, revealing a clear gap between concept representation and model behavior. To bridge this gap, we introduce CI-parametric steering, which intervenes independently along each CI dimension. This structured control reduces privacy violations more effectively and predictably than monolithic steering. Our results demonstrate that contextual privacy failures arise from a misalignment between representation and behavior rather than from missing awareness, and that leveraging the compositional structure of CI enables more reliable contextual privacy control, pointing toward principled ways to improve contextual privacy handling in LLMs.
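To make the core idea concrete, below is a minimal Python sketch of CI-parametric steering under stated assumptions: each CI parameter's direction is estimated as a difference of class-mean activations (one common way to obtain linear directions; the paper's exact probing method may differ), and the intervention adds an independently scaled vector per CI dimension. All function names, coefficients, and the toy data are illustrative, not the authors' implementation.

```python
import numpy as np

def ci_direction(acts_pos: np.ndarray, acts_neg: np.ndarray) -> np.ndarray:
    """Estimate a linear direction for one CI parameter as the
    unit-normalized difference of class-mean activations.
    (Difference-of-means is an assumption here; the paper may use
    a different probe to extract each direction.)"""
    v = acts_pos.mean(axis=0) - acts_neg.mean(axis=0)
    return v / np.linalg.norm(v)

def ci_parametric_steer(h: np.ndarray,
                        directions: dict[str, np.ndarray],
                        alphas: dict[str, float]) -> np.ndarray:
    """Intervene independently along each CI dimension:
    h' = h + sum_k alpha_k * v_k, for k in {info_type, recipient,
    transmission_principle}. Each alpha_k is tuned separately,
    unlike a single monolithic steering vector."""
    steered = h.copy()
    for name, v in directions.items():
        steered += alphas.get(name, 0.0) * v
    return steered

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d = 64  # toy hidden size
    # Toy activations for norm-compliant vs. norm-violating contexts,
    # collected separately per CI parameter (hypothetical data).
    directions = {
        k: ci_direction(rng.normal(size=(32, d)), rng.normal(size=(32, d)))
        for k in ("info_type", "recipient", "transmission_principle")
    }
    h = rng.normal(size=d)  # a hidden state at the intervened layer
    h_steered = ci_parametric_steer(
        h, directions,
        alphas={"info_type": 1.5, "recipient": 0.5,
                "transmission_principle": 2.0},
    )
    print(np.linalg.norm(h_steered - h))
```

The design point the sketch illustrates is that each coefficient can be adjusted in isolation, which is what makes structured control along the three CI dimensions more predictable than steering with one undifferentiated privacy direction.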