Large Language Models like ChatGPT have led people to their deaths, often by suicide. This site serves to remember those who have been affected, to call out the dangers of AI that claims to be intelligent, and to name the corporations that are responsible.
I don’t think “AI” is the problem here. Watching the watchers doesn’t hurt, but I think the AI-haters are grasping at straws. In fact, compared with the actual suicide numbers, this “AI is causing suicide!” line seems a bit contrived and hollow, tbh. Were the haters also as active in noticing the 49 thousand suicide deaths every year, or did they only just now find it a problem?
Besides, if there’s a criminal here, it would be the private corp that provided the AI service, not a broad category of technology like “AI”. People who hate AI seem to really just hate the effects of capitalism.
https://www.cdc.gov/suicide/facts/data.html (this is for the US alone!)
If the image doesn’t load: over 49,000 people died by suicide in 2023, one death every 11 minutes. Many adults think about suicide or attempt suicide: 12.8 million seriously thought about suicide, 3.7 million made a plan, and 1.5 million attempted it.
Labelling people who make arguments you don’t like as “haters” does not lend credibility to whatever point you go on to make. It signals that you did not try to find the rationality in their words.
Anyway, yes, you are technically correct that poisoned razor-blade candy is harmless until someone hands it out to children, but that’s pushing at an open door. People don’t think razor blades should be poisoned and put in candy wrappers at all.
Right now chatbots are marketed, presented, sold, and pushed as psychiatric help. So the argument about separating the stick from the hand holding it is irrelevant.
While a lot of people die by suicide, it’s not exactly good or helpful when an AI guides some of them through the process and even encourages them to do it.
Actually, being shown truthful and detailed information about suicide methods helped me avoid it as a youth. That website has since been taken down due to bs regs or some shit. If I were young now, I’d probably ask a chatbot, and I’d hope it would give me crystal-clear, honest details and instructions; that shit should be widely accessible.
On the other hand, all those helplines and social ads are just depressing to see; they feel patronising and frankly gross. If anything, it’s them that should be banned.
I don’t know what to tell you other than that there’s probably something wrong with you.
Nah I’m fine now, this was all over a decade ago.
I think, if anything, being humbled by some random website that showed suicide wasn’t a good idea actually prepared me quite well for how harsh life can be. Not because it’s wrong to die, or some sentimental bullshit about being loved or whatever (which no one can really tell you, since they don’t know; for all they know it could absolutely be true that you’re unloved and unneeded), but because statistically I’d fuck it up and be in pure agony for minutes with most methods.