• kescusay@lemmy.world · 24 hours ago

    Like I said, I do find it useful at times. But not only shouldn’t it replace coders, it fundamentally can’t. At least, not without a fundamental rearchitecting of how these models work.

    The reason it goes down a “really bad path” is that it’s basically glorified autocomplete. It doesn’t know anything.

    On top of that, spoken and written language are very imprecise, and there’s no way for an LLM to derive what you really wanted from context clues such as your tone of voice.

    Take the phrase “fruit flies like a banana.” Am I saying that a piece of fruit might fly in a manner akin to how another piece of fruit, a banana, flies if thrown? Or am I saying that the insect called the fruit fly might like to consume a banana?

    It’s a humorous line, but my point is serious: We unintentionally speak in ambiguous ways like that all the time. And while we’ve got brains that can interpret unspoken signals to parse intended meaning from a word or phrase, LLMs don’t.

    • FreedomAdvocate@lemmy.net.au · 13 hours ago

      > The reason it goes down a “really bad path” is that it’s basically glorified autocomplete. It doesn’t know anything.

      Not quite true - GitHub Copilot in VS, for example, can be given access to your entire repo/project, and it then “knows” how things tie together, so it gets more context for its suggestions and generated code.

      • kescusay@lemmy.world · edited · 5 hours ago

        That’s still not actually knowing anything. It’s just temporarily adding more context to the model’s context window.
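
        And “giving it your repo” mostly just means the tooling pastes file contents into each request. Very roughly, something like this sketch (assuming an OpenAI-style chat endpoint purely for illustration; the ask helper, model name, and payload shape are stand-ins, and Copilot’s internals certainly differ):

        ```typescript
        // Sketch: a model's "memory" of a repo is just text resent with every request.
        // Nothing persists between calls unless the client sends it again.

        type Message = { role: "system" | "user" | "assistant"; content: string };

        async function ask(history: Message[], repoFiles: Record<string, string>): Promise<string> {
          // Flatten the "repo context" into plain text, included on every single call.
          const context = Object.entries(repoFiles)
            .map(([path, src]) => `// ${path}\n${src}`)
            .join("\n\n");

          const res = await fetch("https://api.openai.com/v1/chat/completions", {
            method: "POST",
            headers: {
              "Content-Type": "application/json",
              Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
            },
            body: JSON.stringify({
              model: "gpt-4o", // illustrative model name
              messages: [
                // The repo "knowledge" rides along as ordinary prompt text...
                { role: "system", content: `Project files:\n${context}` },
                // ...followed by the conversation so far.
                ...history,
              ],
            }),
          });

          const data = await res.json();
          return data.choices[0].message.content as string;
        }
        ```

        Drop a file from repoFiles, or let the history get truncated to fit the context window, and the model has no trace it ever existed.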

        And it’s always very temporary. I have a Yarn project I’m working on right now, and I used Copilot in VS Code in agent mode to scaffold it as an experiment. One of the refinements I included in the prompt file that builds it was a set of reminders throughout for things it wouldn’t need reminding of if it actually “knew” the repo.

        • I had to constantly remind it that it’s a Yarn project; otherwise it would inevitably start trying to use npm as it progressed through the prompt.
        • For some reason, when it’s in agent mode and it makes a mistake, it wants to delete the files it has fucked up, which always requires human intervention, so I peppered the prompt with reminders not to do that, but to blank the file out and start over in it.
        • The frontend of the project uses TailwindCSS. It kept trying to downgrade the configuration to an earlier version instead of using the current one, so I wrote the entire configuration for it by hand and inserted it into the prompt file (a simplified sketch of the shape follows this list). If I let it try to build the configuration itself, it would inevitably fuck it up and then say something completely false, like, “The version of TailwindCSS we’re using is still in beta, let me try downgrading to the previous version.”
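
        (I won’t paste the real thing; this is a simplified illustration assuming the v3-style TypeScript config format, with made-up content globs, not my actual file:)

        ```typescript
        // Simplified sketch of a hand-written Tailwind config (v3-style TypeScript format).
        // The content globs and empty theme are illustrative placeholders.
        import type { Config } from "tailwindcss";

        const config: Config = {
          // Files Tailwind scans for class names when generating CSS
          content: ["./index.html", "./src/**/*.{ts,tsx}"],
          theme: {
            extend: {}, // project-specific design tokens would go here
          },
          plugins: [],
        };

        export default config;
        ```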

        I’m not saying it wasn’t helpful. It probably cut 20% off the time it would have taken me to scaffold out the app myself, which is significant. But it certainly couldn’t keep track of the context provided by the repo, even though it was creating that context itself.

        Working with Copilot is like working with a very talented and fast junior developer whose methamphetamine addiction has been getting the better of them lately, and who has early-onset dementia or a brain injury that destroyed their short-term memory.

        • FreedomAdvocate@lemmy.net.au · 1 hour ago

          Adding context is “knowing more” for a computer program.

          Maybe it’s different in VS Code vs regular VS, because I never get issues like the ones you’re describing in VS. I haven’t really used it in VS Code.