Large Language Models like ChatGPT have led people to their deaths, often by suicide. This site serves to remember those who have been affected, to call out the dangers of AI that claims to be intelligent, and to hold accountable the corporations that are responsible.
Do you know what kills, too? When a person finds no one that can truly take all the time needed to understand them. When a person invests so much time expressing themselves through deep human means, only to be met with a deafening silence… When someone goes through the effort of drawing something that took them several hours per artwork, just for it to fall into Internet oblivion. Those things can kill, too, yet people couldn’t care less about the suicides (not just biological; sometimes it’s an epistemological suicide, when the person simply stops pursuing a hobby) of amateur artists who aren’t “influencers” or someone “relevant enough” for people.
How many of those who sought parroting algorithms did so out of complete social apathy from others? How many of them tried to reach humans before resorting to LLMs? Oh, it’s none of our business, amirite?
So, yeah, LLMs kill, and LLMs are disgusting. What nobody seems to be tally-counting is how human apathy, especially from the same kind of people who do the LLM death counting, also kills: not by action, but by inaction, as they’re as loud as a concert about LLMs but as quiet as a desert night about the unknown artists and other people trying to be understood out there across the Web. And I’m not (just) talking about myself here; I don’t even consider myself an artist. However, I can’t help but notice this going on all across the Web.
Yes, go ahead and downvote me all the way to the abyss for stating the reality of the Anti-AI movement.
I’ll try to exercise my “assume good faith” muscle here, because I think the above poster is at least genuine about what they are posting: I believe they wish that the people who oppose the proliferation of AI at the cost of human connection would “put their money where their mouth is” by reaching out to the people this poster feels are unfairly ignored.
Thanks for understanding it. Exactly!

While many of my points come from lived experience, I’m not only talking about myself; I see a similar phenomenon whenever I check the feed firehoses of Mastodon, Misskey, and PixelFed: posts that get nothing more than numeric reactions (likes, if any).
And I’m not talking about money here. While there are artists and writers out there seeking money for their work, there are many things beyond money that people can be seeking when they share something they made: productive discussions, exchange of knowledge, and, for many, friendship and lasting connections. The world doesn’t (and shouldn’t) revolve around money.
And when artists share their art in an attempt to connect and/or to exchange knowledge, and they’re met with silence alongside impersonal, aggressive public disclaimers from anti-AI people, such as “I’m using an (AI) tool to detect whether your art is AI, and if it detects you’re using AI (out of a rude and crude probability), I’m blocking and reporting you” (which will likely make it even harder for their content to find like-minded people among all the network noise), the likely outcome is those artists giving up on their own creativity, especially artists living with “imposter syndrome”, which is a real thing.
Neurodivergent expression can often be indistinguishable from LLM output, and when people play the “I’ll judge whether your content is AI” game, it can end up excluding neurodivergent people.
I’m a neurodivergent individual myself, if that wasn’t clear from my verbose way of speaking, hence my very personal stance on the matter: because I’m often mistaken for an algorithm or something (due to my systematic and broad speech), and because I was once directly accused of “talking using LLMs” by a person I used to care about and tried to help, both the pro-AI techbro advertisement pitches (those preaching some kind of AI-corporation godhood) and the Anti-AI accusatory manifestos can be equally triggering at times.
Is the argument here that anti-AI folks are hypocrites because people can be bad too sometimes? That’s a remarkably childish and simple take.

There were two quite long, entire paragraphs before I began mentioning names in my initial comment.
When someone ends up suicidal after resorting to LLMs, it’s the final part of a bigger picture: a bigger picture of indifference from other people, including mental health professionals and suicide prevention hotlines.
That’s what I meant by the first paragraph of my initial comment. Your reply, reducing my whole argument to that, only exemplifies the very situation I meant by “When a person finds no one that can truly take all the time needed to understand them”.
Last but not least, “because people can be bad too sometimes” isn’t a justification: if people killed themselves after taking instructions from LLMs, which they resorted to after finding no one who would really understand them (not even suicide prevention hotline volunteers), then it’s not just the LLM and the corporation behind it that are to blame (yes, they surely must be blamed, but not only them), but a whole society that failed them. And that will never be part of the statistics.
So then your counter to someone bringing attention to the fact that LLMs are actively telling people (vulnerable people, due to reasons that you’ve pointed out) to end their own lives is that it isn’t the singular contributing factor?
I get what you’re saying here, and I think everyone else does too? I don’t want to just be entirely dismissive and say “no shit” but I’m curious as to what it is you want or expect out of this? Do you take offense at people pushing back at harmful LLMs? Do you want people to care more about creating a kinder society? Do you think these things are somehow incompatible?
Of course LLMs aren’t driving people to suicide in a vacuum; no one is claiming that. Clearly, though, when taken within the larger context of the current mental health crisis, having LLMs that encourage people to commit suicide is a bad thing that we should absolutely be making noise about.
So then your counter to someone bringing attention to the fact that LLMs are actively telling people[…] is that it isn’t the singular contributing factor?
This, too. But also the fact that the Anti-AI movement rarely (if ever) promotes legit human art; their whole business seems to be talking against AI, solely. Which, again, is not something I oppose (as I said earlier, AI does have lots of cons, although I’m also capable of seeing its pros). But when I see many accusatory posts from Anti-AI people, such as “I’ll check your content against AI patterns” (with a greater likelihood of content from ND ppl like me being “flagged” as AI), and then see those same ppl blaming AIs for something whose causes run way deeper and unseen, I feel compelled to speak on the matter, especially when the subject also touches on my own lived experiences, which I’m aware aren’t limited to myself, as there are/were lots of ppl who went through similar situations.
Do you take offense at people pushing back at harmful LLMs?
No, but the oftentimes accusatory tone coming from many Anti-AI ppl does trigger things such as “imposter syndrome”, where I start doubting myself. But it’s not just about me.
Do you want people to care more about creating a kinder society?
I’m not really sure what I want, exactly. But, yeah, maybe a kinder society, if that’s even possible at this point of the Anthropocene.
I remember a time when the web used to be a place for creatively rich bulletin boards. Back then, ppl used to be… I don’t know… less aggressive? At least that’s the perception I have when I look back at the past of the Web.
We, collectively (me included), became more aggressive toward one another as time passed and the web became less of a space for creativity and more of an arm of the “market” octopus.
I’ve seen the web slowly getting dominated by corps, and now everything is some kind of “us vs. them” war across all spectra: from right to left, top to bottom, bottom-up, sideways… As those wars detonate our essences, we were left with just… I mean, just look around, you may see it yourself.
Of course LLMs aren’t driving people to suicide in a vacuum, no one is claiming that
Sometimes it feels like much of the Anti-AI movement is claiming exactly that: as if the AI were “literally killing ppl”.
having LLMs that are encouraging people to commit suicide is a bad thing
It’s not a trivial thing to get LLMs to “encourage suicide”; I’ve seen it myself whenever I tried inputting suggestive, shady topics. To me, those things often parrot the same “suicide prevention hotlines”, which work like common analgesic medications: they may relieve the immediate pain, but they can’t do a thing about the root causes.
But even when LLMs do output suicidal hints, it isn’t out of a vacuum. As others argued throughout the thread, search engines can also lead to suicidal hints, and banning them altogether can lead to a Streisand effect.
You and I are not at odds, friend. I think you’re assuming I want to ban the technology outright. It’s possible to call out the issues with something without being wholly against it. I’m sure you would want to prevent these deaths as well.