The paper investigates the validity of the Linear Representation Hypothesis in activation steering for LLMs, finding significant geometric distortions in activation spaces that challenge the assumption of global linearity. To address this, the authors introduce "Curveball steering," a nonlinear steering method based on polynomial kernel PCA that operates in a feature space so as to better respect the learned activation geometry. Experiments demonstrate that Curveball steering outperforms linear PCA-based steering, especially in scenarios with high geometric distortion.
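The summary above describes interventions carried out in a polynomial kernel feature space. As a toy illustration of that idea (not the paper's actual method, which uses kernel PCA), the sketch below uses an explicit degree-2 polynomial feature map, so that a linear difference-of-means direction in feature space corresponds to a nonlinear, quadratic decision surface in the original activation space. All names and data are hypothetical:

```python
import math

def poly_features(x, c=1.0):
    """Explicit degree-2 polynomial feature map phi(x).

    Corresponds to the kernel k(x, y) = (x . y + c)^2, so a linear
    direction in this feature space is quadratic in the input space.
    """
    d = len(x)
    feats = [c]                                         # constant term
    feats += [math.sqrt(2 * c) * xi for xi in x]        # linear terms
    feats += [x[i] * x[j] * (math.sqrt(2) if i != j else 1.0)
              for i in range(d) for j in range(i, d)]   # quadratic terms
    return feats

def mean(vectors):
    n = len(vectors)
    return [sum(v[k] for v in vectors) / n for k in range(len(vectors[0]))]

def steering_direction(pos, neg):
    """Difference-of-means steering direction, computed in feature space."""
    mp = mean([poly_features(x) for x in pos])
    mn = mean([poly_features(x) for x in neg])
    return [a - b for a, b in zip(mp, mn)]

def score(x, v):
    """Projection of phi(x) onto the feature-space direction; steering
    would nudge an activation to raise or lower this score."""
    return sum(a * b for a, b in zip(poly_features(x), v))

# Toy "activations": examples with and without some behavioural concept.
pos = [[1.0, 0.2], [0.8, 0.1], [1.2, 0.3]]
neg = [[-1.0, 0.2], [-0.9, 0.1], [-1.1, 0.3]]
v = steering_direction(pos, neg)
print(score([1.0, 0.2], v) > score([-1.0, 0.2], v))  # positive side scores higher
```

The point of the sketch is only that a single linear direction in the lifted feature space already encodes curvature in the original space; the paper's kernel PCA construction additionally identifies the principal nonlinear axes rather than using a raw mean difference.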
LLM activation spaces aren't linear, and exploiting their true geometry with "Curveball steering" unlocks more effective control than standard linear interventions.
Activation steering is a widely used approach for controlling large language model (LLM) behavior by intervening on internal representations. Existing methods largely rely on the Linear Representation Hypothesis, assuming behavioral attributes can be manipulated using global linear directions. In practice, however, such linear interventions often behave inconsistently. We question this assumption by analyzing the intrinsic geometry of LLM activation spaces. Measuring geometric distortion via the ratio of geodesic to Euclidean distances, we observe substantial and concept-dependent distortions, indicating that activation spaces are not well-approximated by a globally linear geometry. Motivated by this, we propose "Curveball steering", a nonlinear steering method based on polynomial kernel PCA that performs interventions in a feature space, better respecting the learned activation geometry. Curveball steering consistently outperforms linear PCA-based steering, particularly in regimes exhibiting strong geometric distortion, suggesting that geometry-aware, nonlinear steering provides a principled alternative to global, linear interventions.
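The abstract's distortion measure is the ratio of geodesic to Euclidean distance. A minimal, self-contained sketch of how such a ratio might be estimated from sampled points, using a k-nearest-neighbour graph as the standard proxy for on-manifold (geodesic) distance; the function names and toy data are hypothetical, and the paper's actual estimator may differ:

```python
import heapq
import math

def knn_graph(points, k=3):
    """Symmetric k-NN graph with Euclidean edge weights, a common
    proxy for the data manifold's geodesic structure."""
    n = len(points)
    adj = [dict() for _ in range(n)]
    for i in range(n):
        nbrs = sorted(range(n), key=lambda j: math.dist(points[i], points[j]))[1:k + 1]
        for j in nbrs:
            w = math.dist(points[i], points[j])
            adj[i][j] = w
            adj[j][i] = w
    return adj

def geodesic(adj, src, dst):
    """Approximate geodesic distance: shortest path through the
    neighbourhood graph (Dijkstra's algorithm)."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            return d
        if d > dist.get(u, math.inf):
            continue
        for v, w in adj[u].items():
            nd = d + w
            if nd < dist.get(v, math.inf):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return math.inf

# Toy "activations" sampled along a semicircular arc: the straight-line
# (Euclidean) distance between the endpoints underestimates the distance
# travelled along the manifold, so the ratio exceeds 1.
points = [(math.cos(i * math.pi / 10), math.sin(i * math.pi / 10))
          for i in range(11)]
adj = knn_graph(points, k=2)
ratio = geodesic(adj, 0, 10) / math.dist(points[0], points[10])
print(f"distortion ratio: {ratio:.2f}")  # ratio > 1 signals curvature
```

A ratio near 1 would indicate locally flat, linearly well-approximated structure; by the abstract's account, LLM activations instead show substantial, concept-dependent ratios above 1, which is what motivates steering in a nonlinear feature space.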