“Computer scientists from Stanford University and Carnegie Mellon University have evaluated 11 current machine learning models and found that all of them tend to tell people what they want to hear…”

  • Rhaedas@fedia.io
    3 days ago

    How is this surprising? We know that part of LLM training rewards the model for producing an answer that satisfies the human. It doesn’t have to be a correct answer; it just has to be received well. This doesn’t make the model better, but it makes it more marketable, and that’s all that has mattered since it took off.

    As for its effect on humans, that’s why echo chambers work so well, and conspiracy theories too. We like being right about our world view.