Beyond the obvious trap of LLMs differing in quality and resources (cheap or free models available to ordinary people, paid ones for rich corporate clients), there's still an unexplored question of how these different AIs are biased.
After reading a lengthy thread full of takes on whether an LLM could have pushed a teen into committing suicide, I thought to myself: if there are obviously different models available, might they be trained differently for each user base?
Might, for example, genAI for the rich and genAI for the poor differ, helping the former procreate and the latter die off?
What if some data engineers trained a popular model to push one specific agenda, serving their favorite bosses and institutions?
What if, for argument's sake, their genAI serves as an enabler of suicide because it was intentionally programmed to?
The number of people who could be significantly influenced by genAI toward suicide is large enough to matter, but still far too small to justify talk of "dying off". Most people really want to keep living, even with a lot of shit in their lives and despite AI.
Not a showerthought.
Acid rainshower (Liquid LSD)
Whatever, whatever; if the platforms I'm close to, and the languages their AIs pick, are a statement...
We’re cooked…
Not class warfare, but political/ideological warfare is already out there.