I want to let people know why I’m strictly against using AI in everything I do without sounding like an ‘AI vegan’, especially in front of those who are genuinely ready to listen and follow the same.
Any sources I find to cite for my viewpoint are either mild enough to be considered AI-generated themselves or filled with the author's extremist views. I want to explain the situation in an objective manner that is simple to understand and also alarming enough for them to take action.
If you want to explain your reasons ‘in good faith’ you should be honest, and not adopt other people’s reasons to argue the position you’ve already assumed.
“It’s a machine made to bullshit. It sounds confident and it’s right enough of the time that it tricks people into not questioning when it is completely wrong and has just wholly made something up to appease the querent.”
“There are emerging studies about AI-induced psychosis[1], and it may be possible to become psychotic even without pre-existing conditions. I would like to be cautious with the danger, as with cigarettes or Thalidomide. You never know how it might be dangerous.”
Maybe trying to be objective is the wrong choice here? After all, it might sound preachy to those who are ignorant of the dangers of AI. Instead, it could be better to stay subjective in the hope of triggering self-reflection.
Here are some arguments I would use for my own personal ‘defense’:
- I like to do the work myself because the challenge of doing it on my own is part of the fun, especially when I finally get that ‘Eureka!’ moment after an especially tough problem. When I use AI, it feels halfhearted, as if I just handed the work to someone else, which doesn’t sit right with me.
- when I work without AI, I tend to stumble over things that aren’t really relevant to what I’m doing but are still fun to learn about and might be helpful some other time. With AI, I’m way too focused on the end result to even notice that stuff, which makes the work feel even more annoying.
- when I decide to give up or realize I can’t be arsed with it, I usually seek out communities or professionals, because that way it’s either done professionally or I get a better sense of community; overall I feel like I’m supporting someone. With AI, I don’t get that feeling. Instead I only feel either inferior for not coming up with a result as fast as the AI does, or frustrated because it spews out bullshit or doesn’t get the point I’m aiming for.
This is a brilliant idea! I was wondering whether talking subjectively would be detrimental to my point, but having it explained this way is so much better. I think the key point here is to not berate the other person for using AI in between this explanation.
It goes a bit further than just not berating. People often get defensive when you criticise something they like, which makes it harder to argue because the other side suddenly treats the discussion as a fight. By saying “it’s not for me” in a rather roundabout way, you shift the focus away from “is it good/bad” and toward whether the other person can empathise with your reasoning. In turn, they may reflect your view onto themselves and realize they hadn’t noticed something about their own usage and feelings about AI that you already did.
it might sound preachy to those who are ignorant
Am I reading it wrong, or are you saying that people who have a different point of view are ignorant?
Ah, sorry, I didn’t mean ignorant in a general way, but to the critiques on AI/dangers of AI OP referred to in their post. I’ll edit my comment.
That makes sense, thanks for clarification.
I tell people to stop wasting my time with parroted training data, that there is no “I” in LLMs, that using them harms the brain, and that the corporations behind them are evil to the core. But yeah, mostly I give up beyond “please don’t bother me with this”.
If it’s real life, just talk to them.
If it’s online, especially here on lemmy, there’s a lot of AI brain rotted people who are just going to copy/paste your comments into a chatbot and you’re wasting time.
They also tend to follow you around.
They’ve lost so much of their brains to AI that even valid criticism of AI feels like a personal insult to them.
They’ve lost so much of their brains to AI that even valid criticism of AI feels like a personal insult to them.
More likely they feel insulted by people saying how “brain-rotted” they are.
What would the inoffensive way of phrasing it be?
Genuinely every single pro-AI person I’ve spoken with both irl and online has been clearly struggling cognitively. It’s like 10x worse than the effects of basic social media addiction. People also appear to actively change for the worse if they get conned into adopting it. Brain rot is apparently a symptom of AI use as literally as tooth rot is a symptom of smoking.
Speaking of smoking and vaping: on top of being objectively bad for you, it’s lame and gross. Now that that narrative is firmly established, we have actually started seeing youth nicotine use decline rapidly again, just like it was doing before vaping became a thing.
What would the inoffensive way of phrasing it be?
…and then you proceed to spend the next two paragraphs continuing to rant about how mentally deficient you think AI users are.
Not that, for starters.
The lung capacity of smokers is deficient, yes? Is the mere fact offensive? Should we just not talk about how someone struggling to breathe as they walk up stairs is the direct result of their smoking?
This is literally begging the question.
They’ve lost so much of their brains to AI that even valid criticism of AI feels like a personal insult to them.
That’s the issue. I do wish to warn them, or even just inform them, of what using AI recklessly could lead to.
Why care?
You’re wanting to go out and argue with people and try to use logic when that part of their brain has literally atrophied.
It’s not going to accomplish anything, and likely just drive them deeper into AI.
Plenty of people that need help actually want it, put your energy towards that if you want to help people.
Why care?
To give some fucks, probably.
The post is aimed at situations where I mention among people I know that I don’t use AI, and they ask why not. Instead of brushing them off with “just because” or launching into jargon that’s completely unfamiliar to them, I wish to properly inform them why I have made this decision and why they should too.
I am also able to identify people to whom there’s no point discussing this. I’m not asking to convince them too.
I wish to properly inform them why I have made this decision and why they should too.
You’re asking how to verbalize why you don’t like AI, but you won’t say why you don’t like AI…
Let’s see if this helps, imagine someone asks you:
I don’t like pizza, how do I tell people the reasons why I don’t like pizza?
How the absolute fuck would you know how to explain it when you don’t know why they don’t like pizza?
You do have a point. I think I may be overthinking this after all. I’ll just try to talk with them about this upfront.
Yup, that’s dbzer0
I paste people’s AI questions into a chatbot for the humor of it.
and also alarming enough for them to take action.
Is this really an intent to explain in good faith? Sounds like you’re trying to manipulate their opinion and actions rather than simply explaining yourself.
If someone was to tell me that they simply don’t want to use generative AI, that they prefer to do writing or drawing by hand and don’t want suggestions about how to use various AI tools for it, then I just shrug and say “okay, suit yourself.”
Most people are against AI because of what corporations are doing with it. But what do you expect corporations and governments to do with any new scientific or technological advance? Use it for the benefit of humanity? Are you going to stop using computers because corporations use them for their own benefit, harming the environment with their huge data centers? By rejecting this new technological advance, you are passing up free and open-source AI tools that you can run locally on your own computer, for whatever you consider a good cause. Fortunately, many people who care about other human beings are more intelligent and are starting to use AI for what it really is: A TOOL.
“According to HRF’s announcement, the initiative aims to help global audiences better understand the dual nature of artificial intelligence: while it can be used by dictatorships to suppress dissent and monitor populations, it can also be a powerful instrument of liberation when placed in the hands of those fighting for freedom.”
just say that you don’t want to use it. why are you trying to figure out good reasons that somebody else came up with to not use something you have to elect to use in the first place? just say “I don’t want to use genAI”. you don’t need to explain yourself any further than that.
That’s perfectly fine if anyone just doesn’t want to use it, but he’s “strictly against” it and he’s searching for reasons. Pretty irrational IMO. It doesn’t surprise me, it’s the general trend regarding almost any subject nowadays, and you can’t blame AI for that.
One thing to note: if you’re strictly against it, then you are in fact an AI vegan.
And that’s okay!
Just like with veganism, though, you need to be clear with us to help you answer that question:
- what IS your reason? “At all”, as an absolute, is not objectively feasible for every situation no matter your logic (stealing → use an open model like apertus; energy → link it to your solar panels; unreliable → wrong use case, etc.)
- why do you want to convince others?
The issue is: you need to be honest with yourself AND with us to have a proper exchange.
“It doesn’t feel right and I want to limit its spread” is a much better answer than something that sounds right but isn’t grounded in your personal reality.
You’re right. I cannot avoid it completely. Sometimes I use it unknowingly through some intermediary online service, or I work on projects among peers who do use AI. What I should’ve said is that I avoid using it to the best of my ability.
- My complaint is with commercially available generative AI like ChatGPT, Gemini, Claude etc. What’s wrong, to me, is that they are proposed as a solution to every conceivable problem without their drawbacks being addressed to the same standard, and that everyone accepts them as such.
- I wish to inform them of the implications of using these services, which others have failed to do. I do believe some people would consider reducing their use, if not stopping altogether, if they heard what it really is and what they contribute to by using it.
It’s hard but right to admit that I’m coming off as an ‘AI vegan’ with what I’ve said earlier. I don’t want to be cast out just for not wanting to use something, like with other mainstream social media.
For 2., would it then be an approach for you to focus on exactly your own complaint?
“Be careful when you use gen AI; it’s sold to you as a solution, but you’ll spend more work figuring out why it doesn’t understand you than it would take to just do it on your own.”
Perhaps I’m not yet understanding what you mean with “contribute to” or the implications though.
In a way aren’t you asking “how can I be an AI vegan, without sounding like an AI vegan”?
It’s OK to be an AI vegan if that’s what you want. :)
Stop trying to make “AI vegan” work. It’s never going to stick. AFAIK this term is less than a week old, and smugly expecting everyone to have already assimilated it is bad enough, but it’s also a shit descriptor that trades in right-leaning hatred of ‘woke’, and vegans are just a scapegoat to you.
Explain how AI haters or doubters cross over with Veganism at all as a comparison?
For me this was the first time hearing it. And it made immediate perfect sense what OP meant. A pretty good analogy!
Explain how AI haters or doubters cross over with Veganism at all as a comparison?
They’re both taking a moral stance regarding their consumption despite large swathes of society considering these choices to be morally neutral or even good. I’ve been vegan for almost a decade and dislike AI, and while I don’t think being anti-AI is quite as ostracizing as being vegan, the comparison definitely seems reasonable to me. The behaviour of rabid meat eaters and fervent AI supporters are also quite similar.
But there are other arguments against AI besides consumption of resources. The front-facing LLMs are just the pitch. The police state is becoming more oppressive using AI tracking and identification. The military using AI to remote-control drones and weapon systems is downright dystopian. It feels like they’re trying to flatten the arguments against AI into only an environmental issue, making it easier to dismiss, especially among the population that doesn’t give a shit about the environment.
This is the first time I’ve encountered the term and I understood it immediately.
Congratulations? Does that make it universal? Dude was being a prick when someone didn’t know what it meant.
Like veganism, abstaining from AI is arguably better for the environment.
That’s not just true of those two things though. I’m looking for a tie that binds them together while excluding other terms. If it’s an analogy what is the analogy?
The fuck is an AI vegan? There isn’t meat and AI isn’t food.
Your bed isn’t really made for a king or queen.
The fuck it’s not.
I get the impression his bed was made for twins.
Oh great the bots are hallucinating.
They’re saying you’re taking things too literally and not thinking about the potential meaning of the sentence.
There is a belief that a lot of Vegans basically preach to others and look down on people who still consume meat. Their use of AI Vegan was meant to utilize that background and apply it to AI, so they don’t want to come off as someone preaching or being a snob about their issues with AI.
It’s called a euphemism. We all know that a vegan is someone who does not use animal products (e.g. meat, eggs, dairy, leather, etc). By using AI in front of the term vegan, OP intimates that they do not use AI products.
I suspect you’re smart enough to know this, but for some reason you’re being willfully obtuse.
~Then again, maybe not. 🤷‍♂️~
It seems to mean people who don’t consume AI content, not people who don’t use AI tools.
My hypothesis is it’s a term coined by pro-AI people to make AI-skeptics sound bad. Vegans are one of the most hated groups of people, so associating people who don’t use AI with them is a huge win for pro-ai forces.
Side note: do-gooder derogation ( https://en.wikipedia.org/wiki/Do-gooder_derogation ) is one of the saddest moves you can pull. If you find yourself lashing out at someone because they’re doing something good (eg: biking instead of driving, abstaining from meat) please reevaluate. Sit with your feelings if you have to.
You say “pro-AI” like there’s a group of random people needing to convince others to use the tools.
The general public tried them, and they’re using them pretty frequently now. Nobody is forcing people to use ChatGPT to figure out their Christmas shopping, but something like 40% of people have already or are planning on using it for that purpose this year. That’s from a recent poll by Leger.
If they weren’t at the very least perceived as adding value, people wouldn’t be using them.
I can say with 100% certainty that there are things I have used AI for that have saved me time and money.
The Anti-AI crowd may as well be the same people that were Anti-Internet 25 years ago.
Of course people are using AI. It’s the default behavior of Google, the most popular web search. It confidently spits out falsehoods. This is not an improvement.
And there are definitely people “needing to convince others to use the tools”. Microsoft and Google et al are made of people. They’re running ads to get people to adopt it.
Buying stuff online and email are useful stuff in ways LLMs can only dream of. It is a technology nowhere near as good as its hype.
Furthermore, “the general public likes it” is a dubious metric for quality. People like all sorts of garbage. Heroin has its fans. I’m sure it would have even more if it were free and heavily advertised. Is that enough to prove it’s good? No. Other factors, such as harm and accuracy, matter too.
Ah ok. You might be new to language? There’s this thing called analogy
Oh hey, language is supposed to make ideas easier to transmit. The term is fucking clunky; using AI is not akin to a diet.
Communicate clearer.
OP came up with the analogy. I understood quite well and caught up with it easily. Well done OP!
The most reasonable explanation I’ve heard/read is that generative AI is based on stealing content from human creators. Just don’t use the word “slop” and you’ll be good.
Except that is also a subjective and emotionally-charged argument.
What is your viewpoint?
Mine, for example, is that not only do I not need it at all, it doesn’t offer anything of value to me, so I can’t think of any use for it.

This reminds me of those posts from anti-vaxxers who complain about not being able to find good studies or sources that support their opinion.
Check out wheresyoured.at for some “haters guides.”
My general take is that virtually none of the common “useful” forms of AI are even remotely sustainable strictly from a financial standpoint, so there’s no use getting too excited about them.
The financial argument is pretty difficult to make.
You’re right in one sense, there is a bubble here and some investors/companies are going to lose a lot of money when they get beaten by competitors.
However, you’re also wrong in the sense that the marginal cost to run them is actually quite low, even with the hardware and electricity costs. The benefit doesn’t have to be that high to generate a positive ROI with such low marginal costs.
People are clearly using these tools more and more, even for commercial purposes when you’re paying per token and not some subsidized subscription, just check out the graphs on OpenRouter https://openrouter.ai/rankings
None of the hyperscalers have produced enough revenue to even cover operating costs. Many have reported deceptive “annualized” figures or just stopped reporting at all.
Couple that with the hardware having a limited lifespan of around 5 years, and you’ve got an entire industry being subsidized by hype.
Covering operating costs doesn’t make sense as the threshold for this discussion though.
Operating costs would include things like computing costs for training new models and staffing costs for researchers, both of which would completely disappear in a marginal cost calculation for an existing model.
If we use Deepseek R1 as an example of a large high-end model, you can run an 8-bit quantized version of the 600B+ parameter model on Vast.ai for about $18 per hour, or even on AWS for around $50 per hour. Those produce tokens fast enough that you can have quite a few users on it at the same time, or even automated processes running concurrently with users. Most medium-sized businesses could likely generate more than $50 in benefit per running hour, especially since you can just shut it down at night and not even pay for that time.
You can just look at it from a much smaller perspective too. A small business could buy access to consumer GPU based systems and use them profitably with 30B or 120B parameter open source models for dollars per hour. I know this is possible, because I’m actively doing it.
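The marginal-cost argument above boils down to simple break-even arithmetic. A rough sketch, where the $18 and $50 hourly rates are the example figures quoted above, and the dollars-of-benefit-per-task number is a purely hypothetical assumption:

```python
# Back-of-envelope break-even calculation for renting GPU capacity
# to serve an open-weight model. All figures are illustrative.

def break_even_tasks_per_hour(hourly_cost: float, value_per_task: float) -> float:
    """How many useful tasks per hour the model must complete to pay for itself."""
    return hourly_cost / value_per_task

# Assumption: each automated task (a summary, a draft, a classification)
# saves roughly $2 of labor. This number is made up for illustration.
VALUE_PER_TASK = 2.0

vast_rate = 18.0   # $/hr, example rate for a quantized 600B+ model
aws_rate = 50.0    # $/hr, example rate for the same model on AWS

print(break_even_tasks_per_hour(vast_rate, VALUE_PER_TASK))  # 9.0 tasks/hr
print(break_even_tasks_per_hour(aws_rate, VALUE_PER_TASK))   # 25.0 tasks/hr
```

If the deployment clears even a modest handful of genuinely useful tasks per hour under those assumptions, the marginal cost is covered; whether training and research costs ever get covered is the separate question the thread disagrees about.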