This paper introduces Causal Concept Graphs (CCG), a method for discovering and representing causal relationships between concepts in the latent space of language models using task-conditioned sparse autoencoders. CCG combines sparse autoencoders with DAGMA-style differentiable structure learning to recover a directed acyclic graph of causal dependencies among concepts. Experiments on ARC-Challenge, StrategyQA, and LogiQA with GPT-2 Medium show that CCG-guided interventions achieve significantly higher Causal Fidelity Scores than ROME-style tracing, SAE-only ranking, and random baselines, indicating that the learned graphs identify causally influential concepts.
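The DAGMA-style structure learning mentioned above enforces acyclicity through a differentiable log-determinant penalty rather than a hard constraint. A minimal sketch of that penalty, following the published DAGMA formulation $h(W) = -\log\det(sI - W \circ W) + d\log s$ (the paper's exact training setup is not reproduced here):

```python
import numpy as np

def dagma_acyclicity(W, s=1.0):
    """DAGMA log-det acyclicity penalty: h(W) = -log det(sI - W*W) + d*log(s).
    h(W) == 0 exactly when the weighted adjacency W encodes a DAG; positive
    values indicate cycles. Sketch after Bello et al.'s DAGMA formulation,
    not the paper's specific implementation."""
    d = W.shape[0]
    # Elementwise square makes the penalty insensitive to edge-weight sign.
    M = s * np.eye(d) - W * W
    sign, logdet = np.linalg.slogdet(M)
    return -logdet + d * np.log(s)

# Acyclic (upper-triangular) adjacency: penalty is exactly zero.
W_dag = np.array([[0.0, 0.5, 0.5],
                  [0.0, 0.0, 0.5],
                  [0.0, 0.0, 0.0]])

# Adding a back edge 2 -> 0 creates cycles: penalty becomes positive.
W_cyc = W_dag.copy()
W_cyc[2, 0] = 0.5
```

Because the penalty is smooth in `W`, it can be added to a reconstruction loss and minimized with ordinary gradient descent, which is what makes "differentiable structure learning" possible here.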
Causal Concept Graphs uncover the hidden causal chains inside an LLM, outperforming existing intervention-targeting methods by explicitly modeling dependencies between concepts.
Sparse autoencoders can localize where concepts live in language models, but not how they interact during multi-step reasoning. We propose Causal Concept Graphs (CCG): a directed acyclic graph over sparse, interpretable latent features, where edges capture learned causal dependencies between concepts. We combine task-conditioned sparse autoencoders for concept discovery with DAGMA-style differentiable structure learning for graph recovery and introduce the Causal Fidelity Score (CFS) to evaluate whether graph-guided interventions induce larger downstream effects than random ones. On ARC-Challenge, StrategyQA, and LogiQA with GPT-2 Medium, across five seeds ($n{=}15$ paired runs), CCG achieves $\mathrm{CFS}=5.654\pm0.625$, outperforming ROME-style tracing ($3.382\pm0.233$), SAE-only ranking ($2.479\pm0.196$), and a random baseline ($1.032\pm0.034$), with $p<0.0001$ after Bonferroni correction. Learned graphs are sparse (5--6\% edge density), domain-specific, and stable across seeds.
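The abstract does not reproduce the CFS formula, but since the random baseline scores near 1, one plausible reading (an assumption, not the paper's definition) is a ratio of mean downstream effect under graph-guided interventions to the mean effect under matched random interventions:

```python
import numpy as np

def causal_fidelity_score(guided_effects, random_effects):
    """Hypothetical CFS: mean downstream effect of graph-guided interventions
    divided by the mean effect of random interventions over the same number
    of features. This ratio is an illustrative assumption consistent with a
    random baseline scoring ~1; the paper's exact definition may differ."""
    return float(np.mean(guided_effects) / np.mean(random_effects))

# Toy numbers: effect sizes (e.g. output-distribution shift) per intervention.
guided = np.array([0.42, 0.55, 0.48])
random_ = np.array([0.09, 0.11, 0.10])
cfs = causal_fidelity_score(guided, random_)
```

Under this reading, a score well above 1 means the graph is selecting features whose ablation moves the model's output far more than chance would.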