On the Non-Identifiability of Steering Vectors in Large Language Models
Sohan Venkatesh, Ashish Mahendran Kurapath
Accepted at the ICLR CAO Workshop, 2026
Activation steering methods are widely used to control large language model (LLM) behavior and are often interpreted as revealing meaningful internal representations. This interpretation assumes that steering directions are identifiable, i.e., uniquely recoverable from input-output behavior. We show that, even under white-box single-layer access, steering vectors are fundamentally non-identifiable: large equivalence classes of interventions are behaviorally indistinguishable. Empirically, orthogonal perturbations achieve near-equivalent steering efficacy with negligible effect sizes across multiple models and traits. Critically, this non-identifiability is a robust geometric property that persists across diverse prompt distributions. These findings reveal fundamental limits on interpretability and highlight the need for structural constraints beyond behavioral testing to enable reliable alignment interventions.
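The core construction behind the orthogonal-perturbation experiments can be illustrated with a minimal sketch. This is not the authors' code: the hidden-state dimension, the random steering vector, and the perturbation scale are all illustrative assumptions. It shows how a direction orthogonal to a steering vector is built by Gram-Schmidt projection, so that the perturbed vector changes the intervention geometrically without (per the paper's findings) changing behavior.

```python
# Hypothetical sketch (not the paper's implementation): building a
# perturbation orthogonal to a steering vector via Gram-Schmidt.
import numpy as np

rng = np.random.default_rng(0)
d = 4096  # assumed hidden-state dimension (model-dependent)

# Stand-in for a learned steering vector, normalized to unit length.
v = rng.standard_normal(d)
v /= np.linalg.norm(v)

# Project a random direction off v, then renormalize: u is orthogonal to v.
r = rng.standard_normal(d)
u = r - (r @ v) * v
u /= np.linalg.norm(u)

print(abs(u @ v))  # ~0 by construction

# A steered activation h + a*v versus h + a*(v + eps*u) differs only in the
# orthogonal component u; the paper reports such edits are behaviorally
# near-equivalent, which is what makes v non-identifiable from outputs alone.
```

Because any unit vector orthogonal to `v` works, this construction already yields a (d-1)-dimensional family of candidate perturbations, consistent with the paper's claim of large equivalence classes.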
@article{venkatesh2026non,
  title={On the Non-Identifiability of Steering Vectors in Large Language Models},
  author={Venkatesh, Sohan and Mahendran Kurapath, Ashish},
  journal={arXiv e-prints},
  pages={arXiv--2602},
  year={2026}
}