The paper introduces SAGE, a framework to address the "Signal Submersion" problem in LLM-based vulnerability detection, where vulnerability-related features are overwhelmed by functional semantics. SAGE uses task-conditional Sparse Autoencoders (SAEs) to isolate and amplify faint vulnerability signals within LLMs. Evaluations on multiple datasets show SAGE achieves state-of-the-art performance, increasing the internal Signal-to-Noise Ratio (SNR) by 12.7x and enabling a 7B model to outperform 34B baselines.
LLMs struggle to detect software vulnerabilities because faint vulnerability signals get drowned out by dominant functional code, but SAGE amplifies these signals to achieve state-of-the-art detection with smaller models.
Software vulnerabilities are a primary threat to modern infrastructure. While static analysis and Graph Neural Networks have long served as the foundation for vulnerability detection, the emergence of Large Language Models (LLMs) has introduced a transformative paradigm driven by superior semantic reasoning and cross-environment generalization. However, in the context of LLM-based vulnerability detection, we identify a fundamental bottleneck in these models termed Signal Submersion: a state where features related to vulnerability are activated internally but numerically overwhelmed by dominant functional semantics. To address this, we propose SAGE (Signal-Amplified Guided Embeddings), a framework that shifts from passive signal submersion to active signal recovery. SAGE integrates task-conditional Sparse Autoencoders (SAEs) to isolate and amplify these faint vulnerability signals. Extensive evaluations on BigVul, PrimeVul, and PreciseBugs demonstrate that SAGE achieves state-of-the-art performance. Notably, SAGE mitigates Signal Submersion by increasing the internal Signal-to-Noise Ratio (SNR) by 12.7x via sparse manifold projection. This mechanistic intervention enables a 7B model to achieve up to 318% Matthews Correlation Coefficient (MCC) gains on unseen distributions and a 319% gain on classic datasets. By maintaining robust performance across 13 programming languages and outperforming 34B baselines, SAGE establishes a more efficient and scalable path to software security than simple parameter scaling.
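To make the amplification idea concrete, the following is a minimal sketch of how a sparse autoencoder can re-project a hidden state onto an overcomplete feature basis and scale up selected features before decoding back. This is an illustration only, not the paper's implementation: the dimensions, the randomly initialized weights (standing in for a trained task-conditional SAE), and the `sae_amplify` function and its `signal_idx`/`gain` parameters are all assumptions made for the example.

```python
# Illustrative sketch (not SAGE's actual code): a tiny sparse autoencoder
# that encodes an LLM hidden state into a sparse feature space, amplifies
# a few chosen features, and decodes back to the residual-stream dimension.
import numpy as np

rng = np.random.default_rng(0)

D_MODEL, D_SAE = 64, 256  # hidden size and overcomplete SAE width (hypothetical)

# Random weights stand in for a trained task-conditional SAE.
W_enc = rng.normal(0.0, 0.1, (D_MODEL, D_SAE))
b_enc = np.zeros(D_SAE)
W_dec = rng.normal(0.0, 0.1, (D_SAE, D_MODEL))

def sae_amplify(h, signal_idx, gain=4.0):
    """Encode h into sparse features, scale the selected features, decode back."""
    z = np.maximum(h @ W_enc + b_enc, 0.0)  # ReLU yields a sparse activation vector
    z[signal_idx] *= gain                   # amplify the faint task-relevant features
    return z @ W_dec                        # project back to the model dimension

h = rng.normal(size=D_MODEL)                # stand-in for one token's hidden state
h_amp = sae_amplify(h, signal_idx=[3, 17, 42])
print(h_amp.shape)
```

The key design point the sketch mirrors is that amplification happens in the sparse feature space, where a vulnerability-related direction occupies its own coordinate and can be scaled without also scaling the dominant functional semantics.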