Steering off Course: Reliability Challenges in Steering Language Models
The number of AI publications nearly tripled between 2010 and 2022 (https://hai.stanford.edu/ai-index). This unprecedented growth has produced many great advances, but the speed of development comes at a cost: as researchers scramble to push benchmarks and discover new capabilities, fundamental scientific questions get glossed over. This pattern has contributed to a growing blind spot around the robustness of interpretability techniques for large language models.

One such example is “steering”, which has gained traction as an interpretable and lightweight alternative to model training. We systematically examine three prominent steering methods—DoLa, function vectors, and task vectors. Whereas the original studies evaluated only a handful of models, we test up to 36 models belonging to 14 families, with sizes ranging from 1.5B to 70B parameters. Our experiments reveal substantial variability in the effectiveness of these steering approaches: many models show no improvement, and some degrade when steering is applied. Our analysis reveals fundamental flaws in the assumptions underlying these methods, challenging their reliability as scalable steering solutions.
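For readers unfamiliar with steering: the common recipe behind function vectors and task vectors is to add a precomputed vector to a model's hidden activations at inference time (DoLa works differently, contrasting output logits across layers rather than injecting a vector). The sketch below illustrates that additive recipe only, not any paper's exact method: the GPT-2 checkpoint, layer index, strength, and random placeholder vector are all illustrative assumptions; in practice the vector would be derived from model activations on contrastive prompts.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative checkpoint; any Hugging Face decoder-only LM works similarly.
MODEL_NAME = "gpt2"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

LAYER = 6    # which decoder block to intervene on (a free choice)
ALPHA = 4.0  # steering strength (a free choice)

# Placeholder vector of the right shape, for illustration only. Real steering
# vectors come from activations (e.g., mean difference over contrastive prompts).
steering_vector = torch.randn(model.config.hidden_size)
steering_vector = steering_vector / steering_vector.norm()

def add_steering_vector(module, inputs, output):
    # GPT-2 decoder blocks return a tuple whose first element is the
    # hidden states of shape (batch, seq_len, hidden_size).
    hidden = output[0] + ALPHA * steering_vector.to(output[0].dtype)
    return (hidden,) + output[1:]

# Install the intervention on one block; generation now runs through it.
handle = model.transformer.h[LAYER].register_forward_hook(add_steering_vector)

inputs = tokenizer("The capital of France is", return_tensors="pt")
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=10, do_sample=False)
print(tokenizer.decode(out[0], skip_special_tokens=True))

handle.remove()  # restore the unmodified model
```

The appeal is clear from the sketch: a single vector addition at one layer, with no gradient updates, is far cheaper than fine-tuning. Our results question whether this simplicity translates into reliable behavior across model families.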