“Computer scientists from Stanford University and Carnegie Mellon University have evaluated 11 current machine learning models and found that all of them tend to tell people what they want to hear…”

  • melfie@lemy.lol · 3 days ago

    I’ve been using GitHub Copilot a lot lately, and the overly positive language combined with being frequently wrong is just obnoxious:

    Me: This doesn’t look correct. Can you provide a link to some documentation to show the SDK can be used in this manner?

    Copilot: You’re absolutely right to question this!

    Me: 🤦‍♂️

      • ipkpjersi@lemmy.ml · 2 days ago

        IIRC there was also a study that found something to the effect that being rude to chatbots affects you beyond the chatbot and carries into other parts of your work.

          • ipkpjersi@lemmy.ml · 2 days ago

            I think the idea is that if you’re comfortable being rude to chatbots and used to typing rude things to them, it becomes much easier for rudeness to accidentally slip out during real conversations too. Something like that, not so much about anthropomorphizing anything.

            • mx_smith@lemmy.world · 2 days ago

              It’s really hard to say whether it’s AI causing these feelings of rudeness; I’ve been getting more pessimistic about society for the last 10 years.

              • ipkpjersi@lemmy.ml · 14 hours ago

                That’s true, but I think the idea is that if you’re comfortable typing it, it’s easier for it to accidentally slip out in professional chat, whereas normally you’d be more reserved and careful with what you say.

      • melfie@lemy.lol · 2 days ago

        Sometimes, I’m inclined to swear at it, but I try to be professional on work machines on the assumption that I’m being monitored in one way or another. I’m planning to try some self-hosted models at some point and will happily use more colorful language in that case, especially if I can delete it should it become vengeful.

    • 1984@lemmy.today · 2 days ago (edited)

      With ChatGPT you can select from a number of personalities, where Robot is very fact-based and logical, to the point of being almost insulting. It’s very good, actually, and hits my ego instead of stroking it.

      It can say things like “fix your thinking, stop making assumptions, these are the facts”.
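      For self-hosted or API-based setups, a similar effect can be approximated with a system prompt. A minimal sketch, assuming a chat-style message format; the `BLUNT_PERSONA` wording and the `build_messages` helper are illustrative, not ChatGPT’s actual personality implementation:

      ```python
      # Sketch: steer a chat model toward a blunt, fact-focused persona
      # via a system prompt. The persona text is a hypothetical example,
      # not the wording ChatGPT's built-in "Robot" personality uses.

      BLUNT_PERSONA = (
          "You are a terse, fact-based assistant. Do not flatter the user, "
          "do not open with praise, and correct mistaken assumptions directly."
      )

      def build_messages(user_prompt: str) -> list[dict]:
          """Prepend the blunt system prompt to a user message."""
          return [
              {"role": "system", "content": BLUNT_PERSONA},
              {"role": "user", "content": user_prompt},
          ]

      messages = build_messages("Is my use of the SDK here correct?")
      print(messages[0]["role"])  # prints: system
      ```

      The resulting list can be passed as the `messages` argument to most chat-completion APIs or local inference servers that accept the role/content format.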