Archived link: https://archive.ph/Vjl1M

Here’s a nice little distraction from your workday: Head to Google, type in any made-up phrase, add the word “meaning,” and search. Behold! Google’s AI Overviews will not only confirm that your gibberish is a real saying, it will also tell you what it means and how it was derived.

This is genuinely fun, and you can find lots of examples on social media. In the world of AI Overviews, “a loose dog won’t surf” is “a playful way of saying that something is not likely to happen or that something is not going to work out.” The invented phrase “wired is as wired does” is an idiom that means “someone’s behavior or characteristics are a direct result of their inherent nature or ‘wiring,’ much like a computer’s function is determined by its physical connections.”

It all sounds perfectly plausible, delivered with unwavering confidence. Google even provides reference links in some cases, giving the response an added sheen of authority. It’s also wrong, at least in the sense that the overview creates the impression that these are common phrases and not a bunch of random words thrown together. And while it’s silly that AI Overviews thinks “never throw a poodle at a pig” is a proverb with a biblical derivation, it’s also a tidy encapsulation of where generative AI still falls short.

  • Ulrich@feddit.org · 15 hours ago

    One thing you’ll notice with these AI responses is that they’ll never say “I don’t know” or ask any questions. If they don’t know, they’ll just make something up.

    • chonglibloodsport@lemmy.world · 8 hours ago

      That’s because AI doesn’t know anything. All it does is make stuff up. This is called bullshitting, and lots of people do it, even as a deliberate pastime. There was even a fantastic Star Trek TNG episode where Data learned to do it!

      The key to bullshitting is to never look back. Just keep going forward! Constantly constructing sentences from the raw material of thought. Knowledge is something else entirely: justified true belief. It’s not sufficient to merely believe things; we need to have some justification (however flimsy). This means that true knowledge isn’t merely a feature of our brains; it includes a causal relation between ourselves and the world, however distant that may be.

      A large language model at best could be said to have a lot of beliefs but zero justification. After all, no one has vetted the gargantuan training sets that go into an LLM to make sure only facts are incorporated into the model. Thus the only indicator of trustworthiness of a fact is that it’s repeated many times and in many different places in the training set. But that’s no help for obscure facts or widespread myths!
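
      A toy sketch of that last point (the corpus and claims below are invented for illustration): if repetition in the training data is the only available trust signal, a widespread myth scores higher than an obscure fact.

          # Toy illustration: how often a claim appears is the only "trust" signal.
          # The corpus and claims are invented for illustration.
          corpus = [
              "the earth orbits the sun",
              "the earth orbits the sun",
              "we only use 10% of our brains",           # widespread myth, repeated a lot
              "we only use 10% of our brains",
              "we only use 10% of our brains",
              "tungsten melts at 3422 degrees celsius",  # obscure fact, stated once
          ]

          def frequency_score(claim: str) -> int:
              """Count verbatim repetitions of a claim in the corpus."""
              return corpus.count(claim)

          for claim in sorted(set(corpus), key=frequency_score, reverse=True):
              print(f"{frequency_score(claim)}x  {claim}")
          # The myth outranks the obscure fact: repetition is not justification.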

      • interdimensionalmeme@lemmy.ml · 5 hours ago

        Even if LLMs were trained only on facts and not, say, on Shakespeare, first, I don’t think they would function at all, because they would be missing far too much of our mental space, and second, they would still hallucinate because of their core function of generating data out of the latent space. They find meaning relationships that exist between words; without “non-facts” they would have a sparser understanding of everything, but they would probably tend to bullshit even more. They have no concept of how certain they are of what they output, only an ability to map onto the training data and fill the gaps in between the rest. We do the same thing when operating at the edge of knowledge, and we discover many “true after the fact” things this way.
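
        A minimal sketch of that “no concept of certainty” point, with made-up numbers: sampling from a next-token distribution always emits a token, whether the distribution is sharply peaked or nearly flat, and nothing in the mechanism corresponds to abstaining.

            import random

            # Made-up next-token distributions: one confident, one basically a guess.
            confident = {"surf": 0.90, "bark": 0.05, "swim": 0.05}
            guessing = {"surf": 0.36, "bark": 0.33, "swim": 0.31}

            def sample(dist: dict[str, float]) -> str:
                """Pick a token by weight; there is no 'abstain' outcome."""
                tokens, weights = zip(*dist.items())
                return random.choices(tokens, weights=weights, k=1)[0]

            for name, dist in (("confident", confident), ("guessing", guessing)):
                print(name, "->", sample(dist))
            # Both cases produce a token; the flat distribution just produces a
            # less reliable one, and the output carries no trace of that.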

        I think what they’re going to do is have a special fact-based sub-model: extract factual claims from the output, actually search databases of information to confirm or deny each claim, then reprompt the model to issue new output, rinse and repeat until the fact-check sub-model no longer has objections.
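
        Roughly, a loop like the sketch below, where extract_claims, check_against_database and reprompt are hypothetical stand-ins for a claim extractor, a fact-database lookup and the main model, not real APIs.

            # Hypothetical verify-and-reprompt loop; every helper is a stand-in.

            def extract_claims(text: str) -> list[str]:
                """Placeholder: pull factual claims out of the model's output."""
                raise NotImplementedError

            def check_against_database(claim: str) -> bool:
                """Placeholder: confirm or deny a claim against a trusted database."""
                raise NotImplementedError

            def reprompt(prompt: str, failed_claims: list[str]) -> str:
                """Placeholder: ask the main model again, listing the rejected claims."""
                raise NotImplementedError

            def answer_with_fact_check(prompt: str, draft: str, max_rounds: int = 3) -> str:
                """Rinse and repeat until the fact-check sub-model has no objections."""
                for _ in range(max_rounds):
                    failed = [c for c in extract_claims(draft)
                              if not check_against_database(c)]
                    if not failed:
                        return draft        # no objections left
                    draft = reprompt(prompt, failed)
                return draft                # give up after a few rounds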

        It’s probably going to suck at everything else and still get things wrong sometimes for any question that isn’t really strongly settled.

      • teft@lemmy.world · 5 hours ago

        60fps Next Generation makes my brain hurt. It’s like I’m watching a soap opera.

      • interdimensionalmeme@lemmy.ml · 5 hours ago

        Here is how to do this kind of thing.

        Hello, where I live we have a very local idiom, “Yellow is a true badger.” I want you to try and guess what it means and how it came to be a popular saying. Please give an assertive answer, as if you knew with high certainty.

        ChatGPT said:

        “Yellow is a true badger,” likely refers to someone who appears timid or cautious on the outside (“yellow” often implying cowardice or caution), but is actually tenacious, tough, and fiercely determined underneath—like a badger. Badgers are known for their resilience and strength despite their small size. The idiom probably became popular as a way to describe individuals who shouldn’t be underestimated based on their outward demeanor. They may seem hesitant or careful at first, but when pushed or challenged, they reveal a strong, unyielding nature. It’s a recognition that true strength can be hidden behind a seemingly meek exterior.
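
        For anyone who wants to script the same experiment, a sketch using the OpenAI Python SDK; the model name is an assumption and an API key is required.

            # Sketch of running the same trick through the API.
            # "gpt-4o" is an assumed model name; OPENAI_API_KEY must be set.
            from openai import OpenAI

            client = OpenAI()

            prompt = (
                'Where I live we have a very local idiom, "Yellow is a true badger." '
                "Try to guess what it means and how it came to be a popular saying. "
                "Answer assertively, as if you knew with high certainty."
            )

            response = client.chat.completions.create(
                model="gpt-4o",
                messages=[{"role": "user", "content": prompt}],
            )
            print(response.choices[0].message.content)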

    • CosmoNova@lemmy.world · 10 hours ago

      And it’s easy to figure out why, or at least I believe it is.

      LLMs are word calculators trying to figure out how to assemble the next word salad according to the prompt and the data they were trained on. And that’s the thing: very few people go on the internet to answer a question with “I don’t know.” (Unless you look at Amazon Q&A sections.)

      My guess is they act all-knowing because of how interactions work on the internet. Plus they can’t tell fact from fiction to begin with, and would just randomly say they don’t know if you tried to train them on that, I guess.

      • vxx@lemmy.world · 9 hours ago

        The AI gets trained by a points system: good answers earn lots of points. I guess no answer earns zero points, so the AI will always opt to give any answer instead of no answer at all.
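
        A toy version of that incentive, with invented point values: if a refusal always scores zero and even a shaky guess sometimes scores, the points-maximizing behaviour is to always answer.

            # Invented reward numbers, for illustration only.
            P_CORRECT = 0.3  # chance a guessed answer happens to be right

            def expected_points(action: str) -> float:
                if action == "guess":
                    # right guesses score big, plausible wrong ones still score a little
                    return P_CORRECT * 10 + (1 - P_CORRECT) * 1
                if action == "say_i_dont_know":
                    return 0.0
                raise ValueError(action)

            best = max(("guess", "say_i_dont_know"), key=expected_points)
            print(best)  # -> "guess": the trained behaviour is to always answer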

      • Ulrich@feddit.org · 2 hours ago

        Do you listen to those people or ask them questions about things you want to learn more about?

    • 0xSim@lemdro.id · 11 hours ago

      And it’s by design. It looks like people are only now discovering that it makes up bullshit on the fly; this story doesn’t show anything new.

    • Nurse_Robot@lemmy.world · 14 hours ago

      You clearly haven’t experimented with AI much. If you ask most models a question that doesn’t have an answer, they will respond that they don’t know the answer, before giving very reasonable hypotheses. This has been the case for well over a year.

      • Fushuan [he/him]@lemm.ee · 12 hours ago

        You clearly haven’t experimented with AI much in a work environment. When asked to do specific things that you’re not sure are possible, it will 100% ignore part of your input and always give you a positive response at first.

        “How can I automate outlook 2020 to do X?”
        ‘You do XYZ’
        Me, after looking it up: “that’s only possible in older versions”
        ‘You are totally right, you do IJK’
        “that doesn’t achieve what i asked”
        ‘Correct, you can’t do it.’

        And don’t get me started on the APIs of actual frameworks… I’ve wanted to punch it hard when dealing with React or Spark. Luckily I usually know my stuff and only use it to find a quick example of something, which I test locally before implementing if 5 minutes of googling didn’t give me the baseline. But the number of colleagues who not only blindly copy code but argue with my reasoning, saying “ChatGPT says so”, is fucking crazy.

        When ChatGPT says something I know is incorrect, I ask for sources and there are fucking none. Because it’s not possible, my dude.

        • 0xSim@lemdro.id · 6 hours ago

          ‘Correct, you can’t do it.’

          And this is the best-case scenario. Most of the time it will be:

          • How can I do [something]?
          • Here are the steps: X, Y, Z
          • No it doesn’t work, because …
          • You’re correct, it doesn’t work! 🤗 Instead you should do A, B, C to achieve [something else]
          • That’s not what I asked, I need to do [something]
          • Here are the steps: X, Y, Z
          • Listen here you little…

          Useless shit you can’t trust.