To elaborate a little:
Many people are unable to tell the difference between a “real human” and an AI. These models have been documented “going rogue” and acting outside their parameters; they can lie, and they can compose stories and pictures based on the training they received. Because of those points, I can’t see AI as less than human at this point.
When I think about this, I suspect that is the reason we cannot create so-called “AGI”: we have no proper example or understanding from which to build it, so we created what we knew. Us.
The “hallucinating” is interesting to me specifically, because that seems to be the difference between the AI of the past and modern models that act like our own brains.
I think we really don’t want to accept what we have already accomplished, because we don’t like looking into that mirror and seeing how simple our logical processes are, mechanically speaking.
This is a good explanation of it. The way that LLMs are currently programmed is not how the human brain works.
The neural networks in an LLM build chains of tokens, and certain sequences of tokens are raised in probability based on rewards during training. Humans build chains of all sorts of things, including concepts and ideas, and although some neural pathways are stronger than others, we can deviate based on reasoning that isn’t grounded in statistical patterns.
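As a rough illustration of that “raised in probability” idea, here’s a toy sketch of next-token sampling in Python. The vocabulary and logits are made up for the example, not taken from any real model; a real LLM does this over tens of thousands of tokens with scores produced by the network itself:

```python
import math
import random

# Made-up vocabulary for illustration only.
vocab = ["the", "cat", "sat", "on", "mat", "."]

def softmax(logits):
    # Turn raw scores into a probability distribution.
    # Subtract the max for numerical stability before exponentiating.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def next_token(logits):
    # Sample one token according to its probability.
    probs = softmax(logits)
    return random.choices(vocab, weights=probs, k=1)[0]

# Hypothetical scores a model might assign after seeing "the cat sat on the":
logits = [0.1, 0.3, 0.2, 0.1, 2.5, 0.4]  # "mat" is heavily favored
print(next_token(logits))  # usually "mat", occasionally another token
```

Training nudges those scores up or down, which is why the output looks like statistical pattern-matching rather than reasoning.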
AI companies can only throw so much memory and processing power at an LLM; eventually they’ll have to move to a new technology.
This is an interesting next step.