No, the issue with “AI” is thinking that it’s able to make anything production ready, be it art, code or dialog.
I do believe that LLMs have lots of great applications in a game pipeline; things like placeholders and copilot for small snippets work great. But if you think that anything an LLM produces is production ready and that you don’t need a professional to look at it and redo it (because that’s usually easier than fixing the mistakes), you’re simply out of touch with reality.
Tbf, the AI tag should be about AI-generated assets, because there is no problem keeping code quality up while using AI, and that’s what the whole dev industry does now.
At no point did you mention someone approving it.
Also, you should read what I said: most large stuff generated by AI needs to be completely redone. You can generate a small function or maybe a small piece of an image, if you have a professional validating that small chunk, but if you think you can generate an entire program or image with LLMs, you’re delusional.
Dude, are you a software dev? Have you heard about, like, tickets? You are supposed to split a bigger task into smaller tickets at the project approval phase.
LLM agents are completely capable of taking well-documented tickets and generating some semblance of code that you then shape with a few follow-up prompts, criticising the code style and pointing out issues until they are all fixed.
I’m not being theoretical, this is how it’s done today. With MCPs into JIRA and Figma, UI tickets get about 90% done in a single prompt. Harder stuff is done with an “investigate and write a .md on how to solve it” pass and a “this is why that won’t work, do this instead” pass, to like 70% ready.
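To make that loop concrete, here’s a rough sketch of “ticket in, draft out, iterate on review notes”. It’s only an illustration, not anyone’s actual pipeline: the OpenAI Python SDK stands in for whatever agent you run, and the model name, ticket text and manual review step are all made up.

```python
# Rough sketch of the "ticket -> draft -> review notes -> fix" loop described above.
# Assumptions: the OpenAI Python SDK as a stand-in for whatever agent you actually use,
# an API key in the environment, a made-up ticket, and a human typing the review notes.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o"  # placeholder; any capable model works

ticket = """\
TICKET-123: Add a pause menu toggle.
Acceptance criteria:
- Pressing Esc pauses/unpauses the game.
- Time scale is set to 0 while paused.
- Menu layout matches the Figma frame linked in the ticket.
"""

def ask(messages):
    """Send the conversation so far and return the model's reply text."""
    resp = client.chat.completions.create(model=MODEL, messages=messages)
    return resp.choices[0].message.content

messages = [
    {"role": "system", "content": "You are a senior game programmer. Reply with code only."},
    {"role": "user", "content": f"Implement this ticket:\n{ticket}"},
]
draft = ask(messages)
messages.append({"role": "assistant", "content": draft})

# A human (or a linter/CI run) reviews each draft; the notes go back in as
# follow-up prompts until nothing is left to flag, capped at a few rounds.
for _ in range(3):
    notes = input("Review notes (empty = approved): ").strip()
    if not notes:
        break
    messages.append({"role": "user", "content": f"Reviewer comments, please address them:\n{notes}"})
    draft = ask(messages)
    messages.append({"role": "assistant", "content": draft})

print(draft)  # the result still goes through normal code review before it is merged
```

The script itself isn’t the point; the point is that the ticket carries the acceptance criteria and nothing is treated as done until a reviewer stops flagging issues.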
> No, the issue with “AI” is thinking that it’s able to make anything production ready, be it art, code or dialog.
>
> I do believe that LLMs have lots of great applications in a game pipeline; things like placeholders and copilot for small snippets work great. But if you think that anything an LLM produces is production ready and that you don’t need a professional to look at it and redo it (because that’s usually easier than fixing the mistakes), you’re simply out of touch with reality.
Are you even reading what I’m saying? You are supposed to have a professional approving the generated stuff.
But it’s still AI-generated; it doesn’t become less AI-generated because a human who knows their shit about the subject approved it.
This is what you said:
> At no point did you mention someone approving it.
>
> Also, you should read what I said: most large stuff generated by AI needs to be completely redone. You can generate a small function or maybe a small piece of an image, if you have a professional validating that small chunk, but if you think you can generate an entire program or image with LLMs, you’re delusional.
Mentioned here: https://vger.to/piefed.ca/comment/2422544
> Dude, are you a software dev? Have you heard about, like, tickets? You are supposed to split a bigger task into smaller tickets at the project approval phase.
>
> LLM agents are completely capable of taking well-documented tickets and generating some semblance of code that you then shape with a few follow-up prompts, criticising the code style and pointing out issues until they are all fixed.
>
> I’m not being theoretical, this is how it’s done today. With MCPs into JIRA and Figma, UI tickets get about 90% done in a single prompt. Harder stuff is done with an “investigate and write a .md on how to solve it” pass and a “this is why that won’t work, do this instead” pass, to like 70% ready.