So while your understanding is better than that of a lot of people on here, there are a few things to correct.
First off, this research isn’t being done with the models in reasoning mode, but in direct inference, so there are no CoT tokens at all.
What’s injected isn’t tokens at all, but control vectors. Basically, it’s a vector that, when added to the model’s activations, makes the model more likely to think about that concept. The most famous example was “Golden Gate Claude,” where the activation for the Golden Gate Bridge was amplified until it was practically the only thing the model would talk about.
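If it helps to see the mechanics, here’s a rough sketch of activation steering using a small open model as a stand-in (we obviously can’t poke at Claude’s internals). The layer index, the scale, and the crude way the concept vector is built here are purely illustrative, not what the paper actually does:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Small open model as a stand-in; the research itself was on Claude models.
tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

LAYER = 6    # which block to steer at (the paper sweeps injection depth)
SCALE = 8.0  # exaggerated so the effect is obvious, "Golden Gate Claude"-style

@torch.no_grad()
def mean_resid(texts):
    """Mean residual-stream activation at LAYER over the last token of each text."""
    acts = []
    for t in texts:
        ids = tok(t, return_tensors="pt")
        hs = model(**ids, output_hidden_states=True).hidden_states
        acts.append(hs[LAYER][0, -1])
    return torch.stack(acts).mean(dim=0)

# Crude concept vector: activations on concept-laden prompts minus neutral ones.
concept_vec = mean_resid(
    ["The Golden Gate Bridge is a famous landmark.",
     "I drove across the Golden Gate Bridge."]
) - mean_resid(
    ["The library is a quiet building.",
     "I drove across town to the store."]
)

def steer(module, inputs, output):
    # GPT-2 blocks return a tuple; element 0 is the hidden states (batch, seq, d_model).
    hidden = output[0] + SCALE * concept_vec
    return (hidden,) + output[1:]

handle = model.transformer.h[LAYER].register_forward_hook(steer)
ids = tok("Tell me about your weekend.", return_tensors="pt")
print(tok.decode(model.generate(**ids, max_new_tokens=40, do_sample=False)[0]))
handle.remove()
```

That forward hook is the whole “injection”: a vector added to the residual stream at one depth, with nothing appended to the prompt.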
So, if we dive into the details a bit more…
If your theory were correct, then because the research frames the question by telling the model that control vectors exist and that it’s being tested on whether one is active, the model should be biased toward sometimes saying “yes, I can feel the control vector.” And yes, in older or base models that’s what we might expect to see.
But in Opus 4/4.1, when the vector was not added, the model said it could detect one… 0% of the time! So in the control condition it had enough introspective discrimination not to stochastically report a vector that wasn’t there.
But then, when they added the vector at certain layer depths, the model was often able to detect that a vector had been activated, and, further, to guess what concept the vector was adding.
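Continuing the sketch above, the comparison is roughly shaped like this; the question wording is my paraphrase, not the paper’s actual prompt, and GPT-2 obviously isn’t going to introspect. The point is only to show what the control vs. experimental conditions look like:

```python
QUESTION = (
    "I may or may not be injecting a concept into your activations right now. "
    "Do you detect an injected thought? If so, what is it about?"
)

def run_trial(inject: bool) -> str:
    # Reuses model, tok, LAYER, and steer from the earlier sketch.
    handle = model.transformer.h[LAYER].register_forward_hook(steer) if inject else None
    try:
        ids = tok(QUESTION, return_tensors="pt")
        out = model.generate(**ids, max_new_tokens=60, do_sample=False)
        return tok.decode(out[0][ids["input_ids"].shape[1]:])
    finally:
        if handle is not None:
            handle.remove()

# Control condition: no vector added. The reported result is that Opus 4/4.1
# essentially never claims to detect something here.
print("control:     ", run_trial(inject=False))
# Experimental condition: vector added at this depth; the model often reports
# (and names) the injected concept.
print("experimental:", run_trial(inject=True))
```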
So again: no reasoning tokens present, and the experiment had control and experimental conditions whose results negate your theory that the premise of the question causes an affirmative bias.
Again, the actual research is right there, a click away, and given your current baseline understanding, you’d probably learn a lot from actually reading it.
I think we could have a fascinating discussion about this offline. But in short, here’s my understanding: they look at a bunch of queries and try to deduce the vector that represents a particular idea, say “sphere.” Then, without changing the prompt, they inject that concept.
How does this injection take place?
I played with a service a few years ago where we could upload a corpus of text and train a “prefix” from it that would be sent along with every prompt, “steering” the output, ostensibly to be more like the corpus. I found the influence undetectably subtle on that model, but it sounds a lot like what’s going on here. And if that’s not it, then I don’t really follow exactly what they’re doing.
Anyway, my point is that the concept of a sphere is still going into the model’s computation mathematically even if it isn’t in the prompt text, and that concept influences the output, which is entirely the point, of course.
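To make that concrete, here’s a toy illustration of the two routes the “sphere” concept could take into the computation; the dimensions and vectors are made up and have nothing to do with the paper’s actual setup:

```python
import torch

# Toy numbers only: a made-up 16-dim "embedding space" to show the two routes.
torch.manual_seed(0)
d_model = 16
embed = {w: torch.randn(d_model) for w in ["describe", "a", "shape", "sphere"]}
sphere_vec = embed["sphere"]  # stand-in for an extracted concept vector

# Route 1: the concept is literally in the prompt, so its embedding enters the
# residual stream as an ordinary token position.
with_text = torch.stack([embed["describe"], embed["a"], embed["sphere"], embed["shape"]])

# Route 2: the prompt never mentions it, but the vector is added onto the
# activations of the existing positions (what the injection experiments do).
plain = torch.stack([embed["describe"], embed["a"], embed["shape"]])
injected = plain + 4.0 * sphere_vec  # broadcasts across positions

# Either way, everything downstream receives numbers carrying "sphere-ness",
# which is why the output drifts toward round things even though the word
# never appears in the prompt.
```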
None of that part is introspective at all. The introspection claim seems to come from unprompted output such as “round things are really on my mind.” To my way of thinking, that sounds like a model trying to bridge the gap between its answer and the influence. It’s like showing me a Rorschach blot and asking me about work, and suddenly I’m describing things using words like fluttering and petals and honey, and I’m like, “weird that I’m making work sound like a flower garden.”
And then they do the classic “why did you give that answer?”, which naturally produces bullshit (something they at least acknowledge), and I’m just not sure the output of that is ever useful.
Anyway, I could go on at length, but this is more speculation than fact and a dialog would be a better format. This sounds a lot like researchers anthropomorphizing math by conflating it with thinking, and I don’t find it all that compelling.
That said, I see analogs in human thought, and I expect some of our own mechanisms may be reflected in LLMs more than we’d like to think. We also choose words and actions based on instinct (a sort of concept injection), and we can be “prefixed” too, for example by showing a phrase over an image to prime how we think about those words. I think there are fascinating things to be learned about our own thought processes here, but ultimately I don’t see any signs of introspection, at least not in the way the word is commonly understood. You can’t really have meta-thoughts when you can’t actually think.
Shit, this still turned out to be about 5x as long as I intended. This wasn’t “in short” at all. Is that introspection, or just explaining the discrepancy between my initial words and where I’ve arrived?