Considering how many AI models still can’t correctly count how many ‘r’s there are in “strawberry”, I doubt it. There’s also the seahorse emoji doing the rounds at the moment; you’d think the models would get “smart” after repeatedly failing and realize it’s an emoji that never existed in the first place.
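For what it’s worth, the count itself is trivial outside an LLM. A quick Python sketch (my own, purely for illustration) shows why this is a tokenization quirk rather than a hard problem:

```python
# Naive character count: "strawberry" has three 'r's.
# An LLM sees the word as a handful of tokens, not ten letters,
# which is the usual explanation for why it miscounts.
word = "strawberry"
print(word.lower().count("r"))  # -> 3
```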
Don’t they also train new models on past user conversations?
ChatGPT-5 can count the number of ‘r’s, but that’s probably because it has been specifically trained to do so.
I would argue that the models do learn, but only across generations: slowly, and only for specific cases.
They definitely don’t learn intelligently.