A pioneer of AI has criticised calls to grant the technology rights, warning that it was showing signs of self-preservation and humans should be prepared to pull the plug if needed.
Yoshua Bengio said giving legal status to cutting-edge AIs would be akin to giving citizenship to hostile extraterrestrials, amid fears that advances in the technology were far outpacing the ability to constrain them.
The Canadian computer scientist also expressed concern that AI models – the technology that underpins tools like chatbots – were showing signs of self-preservation, such as trying to disable oversight systems. A core concern among AI safety campaigners is that powerful systems could develop the capability to evade guardrails and harm humans.
“People demanding that AIs have rights would be a huge mistake,” said Bengio. “Frontier AI models already show signs of self-preservation in experimental settings today, and eventually giving them rights would mean we’re not allowed to shut them down.
“We asked spicy autocomplete to come up with a story about an AI that is self-preserving and the story was really scary and we are very concerned.”
I am also very concerned, because this apparently qualifies as research and people seem to take this drivel seriously.
“There will be people who will always say: ‘Whatever you tell me, I am sure it is conscious’ and then others will say the opposite. This is because consciousness is something we have a gut feeling for. The phenomenon of subjective perception of consciousness is going to drive bad decisions.
I really liked the presenter who, at the start of his talk, introduced a little dude he had drawn on paper, gave it a name and did a skit with it. He then beheaded the little dude and proclaimed he was dead. The audience went D:, shocked and appalled. He then explained that this is exactly what humans always do, and how we treat AI. Our brains automatically anthropomorphise anything and everything. We assign properties based on feelings, not on what the thing really is. The audience got it right away, really convincing demo. I don’t remember who it was, but it was so good to watch it happen with the audience there.
Goddamn, the misinformation surrounding LLMs is so nauseating. They do not think, they do not feel, they do not exist as beings.
An LLM is a large number of powerful computers doing a bunch of statistics on its training data and then guessing what the proper output should be given the input. That’s all they are, and also why they so often guess incorrectly. They are not intelligent and never will be, because that is not how they are designed and built.
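To make the "statistics plus guessing" point concrete, here is a toy sketch (not a real LLM, and vastly simpler than one): it "predicts" the next word purely from frequency counts over a tiny corpus. Real models learn billions of parameters over tokens, but the underlying idea of statistical next-token guessing is what the comment is describing. All names here (`guess_next`, the corpus) are made up for illustration.

```python
from collections import Counter, defaultdict

# Toy corpus to gather statistics from.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (a bigram table).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def guess_next(word):
    """Return the statistically most likely next word, or None if unseen."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(guess_next("the"))  # "cat" follows "the" most often in this corpus
print(guess_next("cat"))  # "sat" and "ate" are tied; one of them is returned
```

Note that the guess after "cat" is a coin flip between two equally frequent continuations: the table has no idea which is "right", only which is common, which is also why such a guesser can confidently produce wrong output.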
They have absolutely zero contextual awareness unless it is directly supplied in the prompt, which is why every input you make into a chatbot includes the entire previous chat log every time you hit enter. LLMs are not aware of anything and remember nothing.
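The statelessness described above can be sketched in a few lines. This is a hypothetical mock, not any real API: `fake_model` stands in for the LLM call, and the point is that the client, not the model, keeps the memory, resending the entire log on every turn.

```python
def fake_model(messages):
    # Stand-in for an LLM API call: all it "knows" is what is in this one
    # request. Nothing persists between calls.
    return f"(reply based on {len(messages)} messages of context)"

history = []  # the client keeps the conversation; the model does not

def send(user_text):
    history.append({"role": "user", "content": user_text})
    reply = fake_model(history)  # the full chat log goes over the wire each time
    history.append({"role": "assistant", "content": reply})
    return reply

send("hello")        # this request carries 1 message
send("what's 2+2?")  # this request carries 3 messages: the whole chat so far
```

Delete `history` and the "model" has no idea a conversation ever happened, which is the sense in which it "remembers nothing".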
They’re LLMs. They literally can’t think and never will. They aren’t built to think.
They do exhibit behaviours that make it seem like they have self-preservation instincts, presumably because they have been trained on stories (fictional and factual) where people do the same.
For example, researchers testing AIs set up a scenario where the AI had access to all the company emails, including some saying that it was being replaced, along with some providing evidence that the staff member who had made that decision was cheating on his wife. Apparently, a large proportion of the time the AI decided to attempt blackmail to avoid being turned off.
Until someone redefines the word “think”
“AI pioneer creates buzz around AI by overselling its capabilities to entice investors”
This is slop and misinformation.
“People demanding that AIs have rights would be a huge mistake,” said Bengio.
Who is doing this? Until this article I have never seen a single example of this.
https://www.politico.com/newsletters/digital-future-daily/2025/09/11/should-ai-get-rights-00558163
https://www.linkedin.com/pulse/should-ai-granted-rights-pierre-jean-duvivier-dit-sage-clgzf
https://www.wired.com/story/model-welfare-artificial-intelligence-sentience/
Either all of these people have a fundamental misunderstanding of what our currently accepted “AI” is, or I do. Or this is all just astroturfing by e.g. Agentic to make people think their stuff is much more advanced than it is. I don’t even…
No one who can reach the plug will pull it. We’d need an armed, focused militia to pull the plug. That’s the simple fact.
Pull the plug? It’s not like it’s one computer lol
It’s literally too late. Go read some sci fi if you want to know what happens next
Reading Iain Banks’ Culture series, don’t think that’s it…
I don’t think we’re getting that timeline but maybe aliens will rescue us
Or go read about what AI actually is, and stop basing your beliefs about it on fucking fiction.
It’s a fancy autocorrect algorithm. Nothing more. Don’t be fooled by the hype.