• SocialMediaRefugee@lemmy.world · 3 hours ago

    I find that if I ask it about procedures with any vague steps, the AI will stumble and sometimes put me into loops: it tells me to do A, A fails, so do B, B fails, so it tells me to do A again…

    • x00z@lemmy.world · 40 minutes ago

      I tend to get decent results by saying I want neither A nor B when asking for C.

    • KENNY_LOGIN_LILLIAN@lemmy.world · 2 hours ago

      The study is garbage. No wonder it is a big hit with the tech-illiterate fediverse community. AI is far better than humans.

      SOURCE: I have used LLMs to help me write code for three years. I had a traumatic brain injury so I can’t work.

      • pyrrhrick@lemmy.world · 23 minutes ago

        Currently it’s useful as an assistant, not as a developer.

        They might produce something that works, but it’ll have tons of redundancies, lean heavily on external libraries where it’s not necessary, and introduce dependencies that just become tech debt.
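
        For example (a made-up illustration, not from our actual test runs), generated code will often reach for a library where the standard library already does the job:

        const items = ["a", "b", "a", "c"];
        // What generated code tends to do: pull in a dependency for a one-liner:
        //   const _ = require("lodash");
        //   const unique = _.uniq(items);
        // What plain JavaScript already covers, no dependency needed:
        const unique = [...new Set(items)];   // de-duplicates while preserving order
        console.log(unique);                  // ["a", "b", "c"]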

        We’ve tested LLMs extensively for real world coding tasks and they’re just not there yet.

        Extremely useful to assist with tasks, but not to write them for you.

      • nostrauxendar@lemmy.world · 1 hour ago

        If AI is far better than humans, can you do yourself a favour and go talk to your little robot friends and leave us humans alone?

        • KENNY_LOGIN_LILLIAN@lemmy.world · 34 minutes ago

          yes, i had a traumatic brain injury so i can’t work. thank you for quoting one of my lines, you fucking bot.

          you are on the list now.

          add raspberriesareyummy

  • kalkulat@lemmy.world · 4 hours ago

    I’d never ask a friggin machine to do coding for me, that’s MY blast.

    That said, I’ve had good luck asking GPT specific questions about obscure features of Javascript and of various browsers. It’ll often feed me a sample script using a feature it explains … a lot more helpful than many of the wordy websites like MDN … saving me shit-tons of time I’d otherwise spend bouncing around a half-dozen ‘help’ pages.
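
    For instance (an illustrative sketch of the kind of answer I mean, not a transcript of an actual GPT session), asking about IntersectionObserver gets you a working sample like:

    // Lazy-load images as they scroll into view, using IntersectionObserver
    const observer = new IntersectionObserver((entries) => {
      for (const entry of entries) {
        if (entry.isIntersecting) {
          const img = entry.target;
          img.src = img.dataset.src;   // swap in the real image URL
          observer.unobserve(img);     // stop watching once it’s loaded
        }
      }
    }, { rootMargin: "200px" });       // start loading shortly before it’s visible

    document.querySelectorAll("img[data-src]").forEach((img) => observer.observe(img));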

      • Xenny@lemmy.world · 2 hours ago

        AI is literally just copy-pasting. If you think about AI as a Ctrl+C/Ctrl+V machine, it makes sense. You wouldn’t trust a single fucking junior dev who didn’t actually know how to code because they just Ctrl+C/Ctrl+V from Stack Overflow for literally every single line of code. That’s all fucking AI is.

  • Ledivin@lemmy.world · 8 hours ago

    Anyone blindly having AI write their code is an absolute moron.

    Anyone with decent experience (5-10 years, maybe 10+?) can absolutely fucking skyrocket their output if they properly set up their environments and treat their agents as junior devs instead of competent programmers. You shouldn’t trust generated code any more than you trust someone fresh out of college, but they produce code in seconds instead of weeks.

    I have tripled my output while producing more secure code (based on my security audits), safer code (based on code coverage and security audits), and less error-prone code (based on production logs and our unchanged QA process).

    Now, the ethical and environmental concerns I can 100% get behind. And I have no idea what companies are going to do in 10 years when they have to replace people like me and haven’t been hiring or training replacements. But the productivity and quality debates are absolutely ridiculous, as long as a strong dev is behind the wheel and has been trained to use the tools.

    • skibidi@lemmy.world · 5 hours ago

      Consider the facts:

      People are very bad at judging their own productivity, and AI consistently makes devs feel like they are working faster, while in fact slowing them down.

      I’ve experienced it myself - it feels fucking great to prompt a skeleton and have something brand new up and running in under an hour. The good chemicals come flooding in because I’m doing something new and interesting.

      Then I need to take a scalpel to a hundred scattered lines to get CI to pass. Then I need to write tests that actually test functionality. Then I start extending things and realize the implementation is too rigid and I need to change the architecture.

      It is at this point that I admit to myself that going in intentionally with a plan and building it myself the slow way would have saved all that pain and probably gotten the final product shipped sooner, even if the prototype shipped later.

      • setsubyou@lemmy.world · 2 hours ago

        It depends on the task. As an extreme example, I can get AI to create a complete application in a language I don’t know. There’s no way that’s not more productive than first learning the language to the point where I could make apps in it myself. I just have to pick something simple enough for the AI.

        Of course the opposite extreme also exists. I’ve found that when I demand something impossible, AI will often just try to implement it anyway. It can easily get into an endless cycle where it keeps optimistically declaring that it identified the issue and fixed it with a small change, over and over again. This includes cases where there’s a bug in the underlying OS or similar. You can waste a huge amount of time going down an entirely wrong path if you don’t realize that an idea doesn’t work.

        In my real work, neither of these extremes really happens, so the actual impact is much smaller. A lot of my work is not coding in the first place. And I’ve been writing code since I was a little kid, almost 40 years now, so even the fast scaffolding I can do with AI is not that exciting. I can do that pretty quickly without AI too. When AI coding tools appeared, my bosses started asking if I was fast because I was using one. No, I’m fast because some people ask for a new demo every week. That causes the same problems later, too.

        But I also do think that we all still need to learn how to use AI properly. This applies to all tools, but I think it’s more difficult than with other tools. If I try to use a hammer on something other than a nail, it will not enthusiastically tell me it can do it with just one more small change. AI tools absolutely will, though, and it’s easy to just let them try because it takes only a few seconds to see what they come up with. But that’s a trap that leads to those productivity-wasting spirals. Especially if the result actually somehow still works at first, so we have to fix it half a year later instead of right away.

        At my work there are some other things that I feel limit the productivity potential of AI tools. First of all, we’re only allowed to use a very limited number of tools, some of them made in-house. Then we’re not really allowed to integrate them into our workflows beyond the part where we write code. E.g. I could trivially write an MCP server that interacts with our custom in-house CI system and actually increases my productivity, because I could save a small number of seconds, very often, if I could tell an AI to find builds for me for integration or QA work. But it’s not allowed. We’re all being pushed to use AI, but the company makes it really difficult at the same time.
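
        (To make the MCP idea concrete, here’s a minimal sketch using the public TypeScript MCP SDK; the server name, the find_build tool, its parameters, and the CI endpoint are all made up, since our system is in-house.)

        import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
        import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
        import { z } from "zod";

        const server = new McpServer({ name: "ci-tools", version: "1.0.0" });

        // Expose one tool: look up recent builds for a branch in the (hypothetical) CI API
        server.tool(
          "find_build",
          { branch: z.string(), status: z.enum(["passed", "failed"]) },
          async ({ branch, status }) => {
            const res = await fetch(
              `https://ci.internal.example/api/builds?branch=${branch}&status=${status}`
            );
            const builds = await res.json();
            return { content: [{ type: "text", text: JSON.stringify(builds.slice(0, 5)) }] };
          }
        );

        // Speak MCP over stdio so a coding agent can call find_build directly
        await server.connect(new StdioServerTransport());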

        So when I play around with AI in my spare time, I actually do feel like I’m getting a huge boost. Not just because I can use a Claude model instead of the ones I can use at work, but also basic things like being able to turn on AI in Xcode at all when working on software for Apple platforms. On my work MacBook I can’t turn on any Apple AI features at all, so even tab completion is worse. In other words, the realities of working on serious projects at a serious company with serious security policies can also kill any potential productivity boost from AI. They basically expect us to be productive with only the features the non-developer CEO likes, who also doesn’t have to follow any of our development processes…

  • RampantParanoia2365@lemmy.world · 11 hours ago

    I’m not a programmer, but I’ve dabbled with Blender for 3D modeling, and it uses node trees for a lot of different things, which are pretty much a programming GUI. I googled how to make a shader, and the AI gave me instructions. About half of them were complete nonsense, but I did make my shader.

  • MyMindIsLikeAnOcean@piefed.world · 15 hours ago

    No shit.

    I actually believed somebody when they told me it was great at writing code, and asked it to write the code for a very simple Lua mod. It made several errors and ended up wasting my time because I had to rewrite it.

    • morto@piefed.social · 8 hours ago

      In a postgraduate class, everyone was praising AI, calling it nicknames and even their friend (yes, friend). One day, the professor and a colleague were discussing some code when I approached, and they started their routine bullying of me for being dumb and not using AI. Then I looked at the colleague’s code and asked to test his core algorithm, which he had converted from Fortran code and “enhanced”. I ran it with some test data, compared it to the original code, and the result was different! They had blindly trusted AI code that deviated from their theoretical methodology, and they are publishing papers with those results!

      Even after I showed them the differing results, they weren’t convinced of anything and still bully me for not using AI. Seriously, this has become some sort of cult at this point. People are becoming irrational. If people at other universities are behaving the same way and publishing like this, I’m seriously concerned for the future of science and humanity itself. Maybe we should archive everything published up to 2022, to leave as a base for the survivors of our downfall.

      • Xenny@lemmy.world · 2 hours ago

        That’s not a bad idea. I’m already downloading lots of human knowledge and media that I want backed up, because I can’t trust humanity to keep it available anymore.

      • MyMindIsLikeAnOcean@piefed.world · 5 hours ago

        The way it was described to me by some academics is that it’s useful…but only as a “research assistant” to bounce ideas off of and bring in arcane or tertiary concepts you might not have considered (after you vet them thoroughly, of course).

        The danger, as described by the same academics, is that it can act as a “buddy” who confirms your biases. It can generate truly plausible bullshit to support deeply flawed hypotheses, for example. Their main concern is it “learning” to stroke the egos of the people using it, creating a feedback loop and its own bubbles of bullshit.

      • Serinus@lemmy.world · 6 hours ago

        It works well when you use it for small (or repetitive), explicit tasks that you can easily check.
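
        For instance (a hypothetical example of what I mean; the helper and the checks are mine, not from any real session): ask it for a one-off utility, then sanity-check it before trusting it:

        // Convert a title into a URL slug: lowercase, alphanumerics only, hyphen-separated
        function slugify(title) {
          return title
            .toLowerCase()
            .replace(/[^a-z0-9]+/g, "-")  // collapse runs of anything else into one hyphen
            .replace(/^-+|-+$/g, "");     // trim leading/trailing hyphens
        }

        // Cheap checks: if these pass, the helper is probably fine for my purposes
        console.assert(slugify("Hello, World!") === "hello-world");
        console.assert(slugify("  Already--slugged  ") === "already-slugged");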

        • ThirdConsul@lemmy.ml · 4 hours ago

          According to OpenAI’s internal test suite and system card, the hallucination rate is about 50%, and the newer the model, the worse it gets.

          That holds for other LLM models as well.

        • frongt@lemmy.zip · 11 hours ago

          For words, it’s pretty good. For code, it often invents a reasonable-sounding function or model name that doesn’t exist.
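
          (A hypothetical illustration of the kind of thing I mean, using Node: the invented call below is a classic plausible-but-fake name.)

          // What it might produce — reads fine, but fs.readFileAsync doesn’t exist in core Node:
          //   const data = await fs.readFileAsync("config.json", "utf8");

          // The call that actually exists (run as an ES module):
          import { readFile } from "node:fs/promises";
          const data = await readFile("config.json", "utf8");
          console.log(data.length);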

          • Xenny@lemmy.world · 2 hours ago

            It’s not even good for words. AI just writes the same stories over and over and over. It’s the same problem as with coding: it can’t come up with anything novel. Hell, it can’t even think. I’d argue the best and only real use for an LLM is as a rough-draft editor, correcting punctuation and grammar. We’ve gone way, way too far with the scope of what we claim it’s actually capable of.

            • Flic@mstdn.social · 2 hours ago

              @Xenny @frongt it’s definitely not good for words with any technical meaning, because it creates references to journal articles and legal precedents that sound plausible but don’t exist.
              Ultimately it’s a *very* expensive replacement for the lorem ipsum generator keyboard shortcut.

      • ptu@sopuli.xyz · 14 hours ago

        I use it for things that are simple and monotonous to write. This way I’m able to deliver results for tasks I couldn’t otherwise have been arsed to do. I’m a data analyst and mostly use MySQL and Power Query.

      • dogdeanafternoon@lemmy.ca · 13 hours ago

        What’s your preferred Hello World language? I’m gonna test this out. The more complex the code you need, the more they suck, but I’ll be amazed if it doesn’t work first try to simply print Hello World.

        • xthexder@l.sw0.com · 12 hours ago

          Malbolge is a fun one

          Edit: Funnily enough, ChatGPT fails to get this right, even with the answer right there on Wikipedia. When I ran ChatGPT’s output, the first few characters were correct, but then it errored with “invalid char at 37”.

          • dogdeanafternoon@lemmy.ca · 12 hours ago

            Cheeky, I love it.

            Got correct code on the first try. Failed to create a working Docker setup on the first try; the second try worked.

            tmp="$(mktemp)"; cat >"$tmp" <<'MBEOF'
            ('&%:9]!~}|z2Vxwv-,POqponl$Hjig%eB@@>}=<M:9wv6WsU2T|nm-,jcL(I&%$#"
            `CB]V?Tx<uVtT`Rpo3NlF.Jh++FdbCBA@?]!~|4XzyTT43Qsqq(Lnmkj"Fhg${z@>
            MBEOF
            docker run --rm -v "$tmp":/code/hello.mb:ro esolang/malbolge malbolge /code/hello.mb; rm "$tmp"
            

            Output: Hello World!

            • xthexder@l.sw0.com · 10 hours ago

              I’m actually slightly impressed it got both a working program and a different one than Wikipedia’s. The Wikipedia one prints “Hello, world.”

              I guess there must be another program floating around the web that prints “Hello World!”, since there’s no chance the LLM figured it out on its own (Malbolge pretty much requires specialized search algorithms to produce anything).

              • dogdeanafternoon@lemmy.ca · 11 hours ago

                I’d never even heard of that language, so it was fun to play with.

                Definitely agree that the LLM didn’t actually figure anything out, but at least it’s not completely useless.

  • PetteriPano@lemmy.world · 16 hours ago

    It’s like having a lightning-fast junior developer at your disposal. If you’re vague, he’ll go on shitty side-quests. If you overspecify he’ll get overwhelmed. You need to break down tasks into manageable chunks. You’ll need to ask follow-up questions about every corner case.

    A real junior developer will have improved a lot in a year. Your AI agent won’t have improved.

    • mcv@lemmy.zip · 11 hours ago

      This is the real thing. You can absolutely get good code out of AI, but it requires a lot of hand-holding. It helps me speed up some tasks, especially boring ones, but I don’t see it ever replacing me. It makes far too many errors, and requires me to point them out and to point in the direction of the solution.

      They are great at churning out massive amounts of code. They’re also great at completely missing the point. And the massive amount of code needs to be checked and reviewed. Personally I’d rather write the code and have the AI review it. That’s a much more pleasant way to work, and that way it actually enhances quality.

    • Grimy@lemmy.world · 15 hours ago

      They are improving, and probably faster than junior devs. The models we had 2 years ago would struggle with a simple blackjack app. I don’t think the ceiling has been hit.

      • PetteriPano@lemmy.world · 7 hours ago

        My junior developer will eventually be familiar with the entire codebase and can make decisions with that in mind, without me reminding them about details at every turn.

        LLMs would need massive context windows and/or custom training to compete with that. I’m sure we’ll get there eventually, but for now it seems far off. I think this bubble will have to burst and let hardware catch up with our ambitions. It’ll take a couple of decades.

      • lividweasel@lemmy.world · 14 hours ago

        Just a few trillion more dollars, bro. We’re almost there. Bro, if you give up a few showers, the AI datacenter will be able to work perfectly.

        Bro.

        • Grimy@lemmy.world · 13 hours ago

          The cost of the improvement doesn’t change the fact that it’s happening. I guess we could all play pretend instead if it makes you feel better about it. Don’t worry bro, the models are getting dumber!

          • Eranziel@lemmy.world · 8 hours ago

            And I ask you: if those same trillions of dollars were instead spent on materially improving the lives of average people, how much more progress would we make as a society? This is an absolutely absurd sum of money we’re talking about here.

            • Grimy@lemmy.world · 8 hours ago

              It’s beside the point. I’m simply saying that AI will improve in the next year. The cost of doing so, and all the other things that money could be spent on, don’t matter when it’s clearly going to be spent on AI. I’m not in charge of monetary policy anywhere; I have no say in the matter. I’m just pushing back on the fantasies. I’m hoping the open source scene survives so we don’t end up in some ugly dystopia where all AI is controlled by a handful of companies.

          • underisk@lemmy.ml · 12 hours ago

            Don’t worry bro, the models are getting dumber!

            That would be pretty impressive when they already lack any intelligence at all.

          • mcv@lemmy.zip · 11 hours ago

            They might. The amount of money they’re pumping into this is absolutely staggering. I don’t see how they’re going to make all of that money back, unless they manage to replace nearly all employees.

            Either way it’s going to be a disaster: mass unemployment or the largest companies in the world collapsing.

  • Katzelle3@lemmy.world · 17 hours ago

    Almost as if it was made to simulate human output but without the ability to scrutinize itself.

    • mushroommunk@lemmy.today · 17 hours ago

      To be fair most humans don’t scrutinize themselves either.

      (Fuck AI though. Planet burning trash)

          • Sophienomenal@lemmy.blahaj.zone · 6 hours ago

            I do this with texts/DMs, but I’d never do that with an email. I double or triple check everything, make sure my formatting is good, and that the email itself is complete. I’ll DM someone 4 or 5 times in 30 seconds though, it feels like a completely different medium ¯\_(ツ)_/¯

      • FauxLiving@lemmy.world · 14 hours ago

        (Fuck AI though. Planet burning trash)

        It’s humans burning the planet, not the spicy Linear Algebra.

        Blaming AI for burning the planet is like blaming crack for robbing your house.

        • KubeRoot@discuss.tchncs.de · 1 hour ago

          Blaming AI for burning the planet is like blaming guns for killing children in schools, it’s people we should be banning!

        • Rhoeri@lemmy.world · 10 hours ago

          How about I blame the humans that use and promote AI. The humans that defend it in arguments using stupid analogies to soften the damage it causes?

          Would that make more sense?

        • BassTurd@lemmy.world · 14 hours ago

          Blaming AI is, in general, criticising everything that comes with it, which includes how bad the data centers are for the environment. It’s like also recognizing that the crack the crackhead smoked before robbing your house is bad too.

  • PissingIntoTheWind@lemmy.world · 10 hours ago

    But you see, that’s the solution: now you pay foreigners to clean up the generated code by offshoring the engineering. At 1/100th the cost.