This paper investigates the internal representations of security concepts within CodeLLMs, finding that models often possess awareness of vulnerabilities even while generating insecure code. Based on this finding, the authors introduce Secure Concept Steering for CodeLLMs (SCS-Code), a method that manipulates internal representations during token generation to promote secure and functional code. SCS-Code outperforms existing methods on secure coding benchmarks by leveraging a more fine-grained understanding of security subconcepts.
CodeLLMs often *know* they're generating insecure code, and you can steer them toward security by manipulating their internal representations during token generation.
Large Language Models (LLMs) show remarkable capabilities in understanding natural language and generating complex code. However, as practitioners adopt CodeLLMs for increasingly critical development tasks, research reveals that these models frequently generate functionally correct yet insecure code, posing significant security risks. While multiple approaches have been proposed to improve security in AI-based code generation, combined benchmarks show these methods remain insufficient for practical use, achieving only limited improvements in both functional correctness and security. This stems from a fundamental gap in understanding the internal mechanisms of code generation and the root causes of security vulnerabilities, forcing researchers to rely on heuristics and empirical observations. In this work, we investigate the internal representation of security concepts in CodeLLMs, revealing that models are often aware of vulnerabilities as they generate insecure code. Through systematic evaluation, we demonstrate that CodeLLMs can distinguish between security subconcepts, enabling a more fine-grained analysis than prior black-box approaches. Leveraging these insights, we propose Secure Concept Steering for CodeLLMs (SCS-Code). During token generation, SCS-Code steers LLMs' internal representations toward secure and functional code output, enabling a lightweight and modular mechanism that can be integrated into existing code models. Our approach achieves superior performance compared to state-of-the-art methods across multiple secure coding benchmarks.
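To make "steering internal representations during token generation" concrete, the following is a minimal sketch of the general activation-steering family this method belongs to: derive a concept direction contrastively (difference of mean hidden states on secure vs. insecure examples) and add a scaled copy of it to the hidden state at each generation step. The function names, the contrastive recipe, and the steering strength `alpha` are illustrative assumptions, not SCS-Code's actual procedure, which the paper defines in full.

```python
import numpy as np

# Hypothetical sketch of concept steering; the exact mechanism used by
# SCS-Code (layer choice, direction estimation, scaling) may differ.

def concept_direction(secure_acts: np.ndarray, insecure_acts: np.ndarray) -> np.ndarray:
    """Contrastive concept direction: the normalized difference of mean
    hidden states collected on secure vs. insecure code examples."""
    d = secure_acts.mean(axis=0) - insecure_acts.mean(axis=0)
    return d / np.linalg.norm(d)

def steer(hidden: np.ndarray, direction: np.ndarray, alpha: float = 4.0) -> np.ndarray:
    """Add the scaled concept direction to one generation step's hidden
    state, nudging the next-token distribution toward the concept."""
    return hidden + alpha * direction

# Toy demonstration with synthetic activations (dimension 16).
rng = np.random.default_rng(0)
secure = rng.normal(0.5, 1.0, size=(32, 16))    # activations on secure code
insecure = rng.normal(-0.5, 1.0, size=(32, 16))  # activations on insecure code
d = concept_direction(secure, insecure)

h = rng.normal(size=16)          # hidden state mid-generation
h_steered = steer(h, d, alpha=4.0)
# Since direction is unit-norm, the projection onto it grows by alpha.
print(round(float((h_steered - h) @ d), 3))  # → 4.0
```

In a real model this edit would be applied via a forward hook on a chosen transformer layer, leaving the weights untouched, which is what makes steering-style approaches lightweight and modular.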