Tbf, the AI tag should be about AI-generated assets. There’s no problem keeping code quality while using AI, and that’s what the whole dev industry does now.
Hahahahahahahaha
This opinion is contradicted by basically everyone who has attempted to use models to generate useful code that must interface with existing codebases. There are always quality issues, it must always be reviewed for functional errors, it rarely interoperates with existing code correctly, and it might just delete your production database no matter how careful you are.
I feel like I get where he’s coming from, but I can also see the revulsion.
I picture someone asking their AI to write a rules engine for a gamemode and getting masses of duplicative, horrific code. But in my own work, my company has encouraged an assistive tool, and once it has an idea of what I’m trying to do, it offers autocomplete suggestions that are pretty spot on.
Still, I very much agree that it’s hard to sort out the difference, and in untrained hands it can definitely lead to unmaintainable code slop. Everything needs to get reviewed by knowledgeable human eyes before it runs.
So don’t accept code that is shit. Have a decent PR process. Accountability is still on the human.
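A "decent PR process" can be made concrete at the repo level. A minimal sketch, assuming a GitHub-hosted project (the org and team names here are made up): a CODEOWNERS file that forces an approving review from a named human owner before any PR, AI-assisted or not, can merge, paired with branch protection that requires those reviews.

```
# .github/CODEOWNERS -- with "require review from Code Owners" enabled in
# branch protection, every matching path needs an approving review from
# the listed team before merge. Team names below are hypothetical.

# Default: the core team reviews everything.
*               @example-org/core-reviewers

# Gameplay code gets a second, specialized set of eyes.
/src/gameplay/  @example-org/gameplay-leads
```

The point is that the gate is structural: generated code can't reach main without a human signing off.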
The people lazy enough to have AI generate their code aren’t going to do that. You’re acting like games didn’t already have bugs before we invented a mostly-wrong shortcut that looks just good enough to fake being useful.
Keeping code quality is not the same thing as code generation.
No, the issue with “AI” is thinking that it’s able to make anything production ready, be it art, code, or dialog.
I do believe LLMs have lots of great applications in a game pipeline: things like placeholders and copilot-style small snippets work great. But if you think anything an LLM produces is production ready and doesn’t need a professional to look at it and redo it (because that’s usually easier than fixing the mistakes), you’re simply out of touch with reality.
Are you even reading what I say? You’re supposed to have a professional approve the generated stuff.
But it’s still AI-generated; it doesn’t become less AI-generated because a human who knows shit about the subject approved it.
This is what you said: “Tbf AI tag should be about AI-generated assets. Cause there is no problem in keeping code quality while using AI, and that’s what the whole dev industry do now.”
At no point did you mention someone approving it.
Also, you should read what I said: most large stuff generated by AI needs to be completely redone. You can generate a small function, or maybe a small piece of an image, if you have a professional validating that small chunk; but if you think you can generate an entire program or image with LLMs, you’re delusional.
The killer app is language processing, and if a localization contractor isn’t using an LLM to quickly check for style errors and inconsistencies, they’re just making things hard for themselves for no good reason.
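For the inconsistency half of that check, you don’t even need an LLM; a deterministic pass catches glossary drift, and the LLM sits on top for the fuzzier style judgments. A hedged sketch (the function, glossary, and strings below are all made up for illustration): flag any source-glossary term whose translation doesn’t use the agreed target-language rendering.

```python
# Hypothetical consistency check for localized strings: flag translations
# where a glossary term was not rendered with its agreed target term.

def find_inconsistencies(glossary, pairs):
    """glossary: {source_term: expected_target_term}
    pairs: list of (source_string, translated_string) tuples.
    Returns {source_term: set of translated strings missing the expected term}.
    """
    problems = {}
    for src_term, expected in glossary.items():
        for src, tgt in pairs:
            # Only inspect strings that actually contain the source term.
            if src_term.lower() in src.lower() and expected.lower() not in tgt.lower():
                problems.setdefault(src_term, set()).add(tgt)
    return problems

# Made-up example: "Health Potion" should always become "Heiltrank" in German.
glossary = {"Health Potion": "Heiltrank"}
pairs = [
    ("Drink the Health Potion", "Trink den Heiltrank"),   # consistent
    ("Buy a Health Potion", "Kauf einen Lebenstrank"),    # drifted rendering
]
print(find_inconsistencies(glossary, pairs))
# -> {'Health Potion': {'Kauf einen Lebenstrank'}}
```

A real pipeline would feed the flagged strings, not the whole corpus, to the LLM for review, which keeps the expensive style check cheap and targeted.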