Archived link: https://archive.ph/Vjl1M

Here’s a nice little distraction from your workday: Head to Google, type in any made-up phrase, add the word “meaning,” and search. Behold! Google’s AI Overviews will not only confirm that your gibberish is a real saying, it will also tell you what it means and how it was derived.

This is genuinely fun, and you can find lots of examples on social media. In the world of AI Overviews, “a loose dog won’t surf” is “a playful way of saying that something is not likely to happen or that something is not going to work out.” The invented phrase “wired is as wired does” is an idiom that means “someone’s behavior or characteristics are a direct result of their inherent nature or ‘wiring,’ much like a computer’s function is determined by its physical connections.”

It all sounds perfectly plausible, delivered with unwavering confidence. Google even provides reference links in some cases, giving the response an added sheen of authority. It’s also wrong, at least in the sense that the overview creates the impression that these are common phrases and not a bunch of random words thrown together. And while it’s silly that AI Overviews thinks “never throw a poodle at a pig” is a proverb with a biblical derivation, it’s also a tidy encapsulation of where generative AI still falls short.

  • Nurse_Robot@lemmy.world (+7/−25) · 14 hours ago

    You clearly haven’t experimented with AI much. If you ask most models a question that doesn’t have an answer, they will respond that they don’t know before offering reasonable hypotheses. This has been the case for well over a year.

    • Fushuan [he/him]@lemm.ee (+22/−1) · edited · 11 hours ago

      You clearly haven’t experimented with AI much in a work environment. When you ask it to do specific things that you’re not sure are possible, it will ignore part of your input and give you a confidently positive answer at first.

      “How can I automate Outlook 2020 to do X?”
      ‘You do XYZ.’
      Me, after looking it up: “That’s only possible in older versions.”
      ‘You are totally right, you do IJK.’
      “That doesn’t achieve what I asked.”
      ‘Correct, you can’t do it.’

      And don’t get me started on the APIs of actual frameworks… I’ve wanted to punch it hard when dealing with React or Spark. Luckily I usually know my stuff and only use it to find a quick example of something, which I test locally before implementing, when 5 minutes of googling didn’t give me the baseline. But the number of colleagues who not only blindly copy code but argue with my reasoning saying “ChatGPT says so” is fucking crazy.

      When ChatGPT says something I know is incorrect, I ask for sources and there are fucking none. Because it’s not possible, my dude.

      • 0xSim@lemdro.id (+14) · edited · 6 hours ago

        ‘Correct, you can’t do it.’

        And this is the best-case scenario. Most of the time it will be:

        • How can I do [something]?
        • Here are the steps: X, Y, Z
        • No it doesn’t work, because …
        • You’re correct, it doesn’t work! 🤗 Instead you should do A, B, C to achieve [something else]
        • That’s not what I asked, I need to do [something]
        • Here are the steps: X, Y, Z
        • Listen here you little…

        Useless shit you can’t trust.