• MangoCats@feddit.it
    1 day ago

    AI tools are actually improving at a rate faster than most junior engineers I have worked with, and about 30% of junior engineers I have worked with never really “graduated” to a level that I would trust them to do anything independently, even after 5 years in the job. Those engineers “find their niche” doing something other than engineering with their engineering job titles, and that’s great, but don’t ever trust them to build you a bridge or whatever it is they seem to have been hired to do.

    Now, as for AI, it’s currently as good as or “better” than about 40% of the brand-new, fresh-from-the-BS-program software engineers I have worked with. A year ago that number probably would have been 20%. So far it’s improving relatively quickly. The question is: will it plateau, or will it improve exponentially?

    Many things in tech seem to have an exponential improvement phase, followed by a plateau. CPU clock speed is a good example of that. Storage density/cost is one that doesn’t seem to have hit a plateau yet. Software quality/power is much harder to gauge, but it definitely is still growing more powerful / capable even as it struggles with bloat and vulnerabilities.

    The question I have is: will AI continue to write “human compatible” software, or is it going to start writing code that only AI understands, but people rely on anyway? After all, the code that humans write is incomprehensible to 90%+ of the humans that use it.

    • AA5B@lemmy.world
      2 hours ago

      I’m seeing exactly the opposite. It used to be that junior engineers understood they had a lot to learn. With AI, however, they confidently attempt entirely wrong changes. They can’t tell when the AI goes down the wrong path, don’t know how to fix it, and it takes me longer to clean up after them.

      So far, AI overall creates more mess, faster.

      Don’t get me wrong, it can be a useful tool, but you have to think of it like autocomplete or internet search. Just like those tools, it provides results, but the human needs judgement to figure out how to apply the appropriate ones.

      My company wants metrics on how much time we’re saving with AI, but

      • I have to spend more time helping the junior guys out of the holes dug by AI, making it a net negative
      • it’s just another tool. There’s no defined task or set time. If you had to answer how much time autocomplete saved you, could you give any sort of meaningful answer?
    • Feyd@programming.dev
      1 day ago

      Now, as for AI, it’s currently as good as or “better” than about 40% of the brand-new, fresh-from-the-BS-program software engineers I have worked with. A year ago that number probably would have been 20%. So far it’s improving relatively quickly. The question is: will it plateau, or will it improve exponentially?

      LOL sure

      • MangoCats@feddit.it
        1 day ago

        LOL sure

        I’m not talking about the ones that get hired at your ’leet shop, I’m talking about the whole damn crop that’s just graduated.