Considering how many AI models still can’t correctly count how many ‘r’s there are in “strawberry”, I doubt it. There’s also the seahorse emoji doing the rounds at the moment; you’d think the models would get “smart” after repeatedly failing and realize it’s an emoji that never existed in the first place.
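For contrast, here’s a sketch of why this is surprising: counting letters is a one-liner in code, but language models generally process text as subword tokens (roughly, chunks like “straw” + “berry”) rather than individual characters, which is the commonly cited reason they fumble this kind of question.

```python
# Counting characters is trivial when you can see them directly.
# LLMs typically operate on subword tokens, not characters, so a
# "how many r's" question never reaches the model letter by letter.
word = "strawberry"
print(word.count("r"))  # → 3
```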
That’s the P in ChatGPT: Pre-trained. It has “learned” from the dataset it was trained on, but prompting it does not teach it anything new. Your past prompts are kept as “memory” and influence the output of future prompts, but the model does not actually learn from them.
ChatGPT-5 can count the number of ‘r’s, but that’s probably because it has been specifically trained to do so.
I would argue that the models do learn, but only across generations: slowly, and only for specific failures.
They definitely don’t learn intelligently.