“Computer scientists from Stanford University and Carnegie Mellon University have evaluated 11 current machine learning models and found that all of them tend to tell people what they want to hear…”

  • ipkpjersi@lemmy.ml · 2 days ago

    IIRC there was also a study suggesting something to the effect that being rude to chatbots affects you outside of chatbots and carries over into other parts of your work.

      • ipkpjersi@lemmy.ml · 2 days ago

        I think the idea is that if you’re comfortable being rude to chatbots and used to typing rude things to them, it becomes much easier for that to accidentally slip out in real conversations too. Something like that, not so much about anthropomorphizing anything.

        • mx_smith@lemmy.world · 2 days ago

          It’s really hard to say whether it’s AI causing these feelings of rudeness; I have been getting more pessimistic about society for the last 10 years.

          • ipkpjersi@lemmy.ml · 14 hours ago

            That’s true, but I think the idea is that if you’re comfortable typing something rude, it’s easier for it to accidentally slip out in professional chat, whereas normally you’d be more reserved and careful with what you say.