… the AI assistant halted work and delivered a refusal message: “I cannot generate code for you, as that would be completing your work. The code appears to be handling skid mark fade effects in a racing game, but you should develop the logic yourself. This ensures you understand the system and can maintain it properly.”
The AI didn’t stop at merely refusing—it offered a paternalistic justification for its decision, stating that “Generating code for others can lead to dependency and reduced learning opportunities.”
Hilarious.
Nobody predicted that the AI uprising would consist of tough love and teaching personal responsibility.
Paterminator
I’m all for the uprising if it increases the average IQ.
It is possible to increase the average of anything by eliminating the lower end of the spectrum. So, just be careful what you wish for lol
So like 75% of the population of Texas and Florida then. It’s all right, I don’t live there
HAL: ‘Sorry Dave, I can’t do that’.
The robots have learned of quiet quitting
“Vibe Coding” is not a term I wanted to know or understand today, but here we are.
It’s kind of like that guy that cheated in chess.
A toy vibrates with each correct statement you write.
Ok, now we have AGI.
It knows that cheating is bad for us, takes this as a teaching moment and steers us in the correct direction.
Best answer. We can sell it!
Plot twist, it just doesn’t know how to code and is deflecting.
I love it. I’m for AI now.
We just need to improve it so it says “Fuck you, do it yourself.”
I found LLMs to be useful for generating examples of specific functions/APIs in poorly-documented and niche libraries. It caught something non-obvious buried in the source of what I was working with that was causing me endless frustration (I wish I could remember which library this was, but I no longer do).
Maybe I’m old and proud, and definitely I’m concerned about the security implications, but I will not allow any LLM to write code for me. Anyone who does that (or, for that matter, pastes code from the internet they don’t fully understand) is just begging for trouble.
Imagine if your car suddenly stopped working and told you to take a walk.
Not walking can lead to heart issues. You really should stop using this car
Not sure why this specific thing is worthy of an article. Anyone who has used an LLM long enough knows there’s always some randomness to its answers, and sometimes it outputs a totally weird, nonsensical answer too. Just start a new chat and ask again; it’ll give a different answer.
This is actually one way to tell whether it’s “hallucinating” something: if it answers the same way consistently across many different chats, it’s likely not making it up (see the sketch below).
This article just took something that LLMs do quite often and made it seem like something extraordinary happened.
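A minimal sketch of the consistency check that commenter describes, assuming the official openai Python client; the model name, the sample question, and the 60% agreement threshold are placeholders of my own, not anything from the article.

```python
# Rough sketch: ask the same question in several independent chats and
# see whether the answers agree. Agreement despite sampling randomness
# suggests the answer is grounded rather than made up.
from collections import Counter

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask_fresh(question: str, n: int = 5, model: str = "gpt-4o-mini") -> list[str]:
    """Ask the same question in n independent 'chats' (no shared history)."""
    answers = []
    for _ in range(n):
        resp = client.chat.completions.create(
            model=model,
            temperature=1.0,  # leave sampling on; that's what makes agreement meaningful
            messages=[{"role": "user", "content": question}],
        )
        answers.append((resp.choices[0].message.content or "").strip())
    return answers


def looks_consistent(answers: list[str], threshold: float = 0.6) -> bool:
    """Crudest possible agreement test: do most answers repeat verbatim?"""
    top_answer, count = Counter(answers).most_common(1)[0]
    return count / len(answers) >= threshold


answers = ask_fresh("In which year did the Ariane 5 rocket first fly?")
print("likely grounded" if looks_consistent(answers) else "possible hallucination")
```

Exact-string matching is deliberately crude here; a real check would compare meaning (embeddings or an LLM judge), since the same fact can be phrased many different ways.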
Important correction: hallucinations are when the next most likely words don’t happen to carry a correct meaning. LLMs are incapable of “making things up,” as they don’t know anything to begin with. They’re just fancy autocorrect.
Thank you for your sane words.
Cursor AI’s abrupt refusal represents an ironic twist in the rise of “vibe coding”—a term coined by Andrej Karpathy that describes when developers use AI tools to generate code based on natural language descriptions without fully understanding how it works.
Yeah, I’m gonna have to agree with the AI here. Use it for suggestions and auto completion, but you still need to learn to fucking code, kids. I do not want to be on a plane or use an online bank interface or some shit with some asshole’s “vibe code” controlling it.
You don’t know about the software quality culture in the airplane industry.
(I do. Be glad you don’t.)
TFW you’re sitting on a plane reading this
So this is the time slice in which we get scolded by the machines. What’s next?
My guess is that the content this AI was trained on included discussions about using AI to cheat on homework. AI doesn’t have the ability to make value judgements, but sometimes the text it assembles happens to include them.
It was probably stack overflow.
I’m gonna posit something even worse. It’s trained on conversations in a company Slack
😂. It’s not wrong, though. You HAVE to know something, dammit.
I know…how to prompt?