Archived link: https://archive.ph/Vjl1M
Here’s a nice little distraction from your workday: Head to Google, type in any made-up phrase, add the word “meaning,” and search. Behold! Google’s AI Overviews will not only confirm that your gibberish is a real saying, it will also tell you what it means and how it was derived.
This is genuinely fun, and you can find lots of examples on social media. In the world of AI Overviews, “a loose dog won’t surf” is “a playful way of saying that something is not likely to happen or that something is not going to work out.” The invented phrase “wired is as wired does” is an idiom that means “someone’s behavior or characteristics are a direct result of their inherent nature or ‘wiring,’ much like a computer’s function is determined by its physical connections.”
It all sounds perfectly plausible, delivered with unwavering confidence. Google even provides reference links in some cases, giving the response an added sheen of authority. It’s also wrong, at least in the sense that the overview creates the impression that these are common phrases and not a bunch of random words thrown together. And while it’s silly that AI Overviews thinks “never throw a poodle at a pig” is a proverb with a biblical derivation, it’s also a tidy encapsulation of where generative AI still falls short.
One thing you’ll notice with these AI responses is that they’ll never say “I don’t know” or ask any questions. If it doesn’t know, it will just make something up.
And it’s easy to figure out why, or at least I believe it is.
LLMs are word calculators trying to figure out how to assemble the next word salad based on the prompt and the data they were trained on. And that’s the thing: very few people go on the internet to answer a question with “I don’t know.” (Unless you look at Amazon Q&A sections.)
My guess is they act like they know everything because of how interactions work on the internet. Plus, they can’t tell fact from fiction to begin with, and would probably just randomly say they don’t know if you tried to train them to do that, I guess.
The AI gets trained with a point system. Good answers earn lots of points. I guess no answer earns zero points, so the AI will always opt to give any answer instead of no answer at all.
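To make that “point system” idea concrete, here’s a minimal toy sketch (my own illustration, not anything from the article or from how Google actually trains its models): if refusing to answer never earns points, whatever is optimizing for points will always prefer the confident guess over “I don’t know.”

```python
# Toy sketch of a scoring scheme where refusals never win (assumed numbers,
# purely for illustration of the incentive, not a real training setup).

def reward(response: str, is_correct: bool) -> float:
    if response == "I don't know":
        return 0.0                         # no answer: zero points
    return 1.0 if is_correct else 0.3      # even a wrong-but-plausible answer earns some points

candidates = ["I don't know", "It's a playful way of saying something won't happen"]

# A made-up phrase has no real meaning, so no answer can actually be correct.
best = max(candidates, key=lambda r: reward(r, is_correct=False))
print(best)  # the confident guess wins every time
```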
And it’s by design. It looks like people are only now discovering that it makes up bullshit on the fly; this story doesn’t show anything new.
Sounds like a lot of people I know.
You clearly haven’t experimented with AI much. If you ask most models a question that doesn’t have an answer, they will respond that they don’t know the answer, before giving very reasonable hypotheses. This has been the case for well over a year.
You clearly haven’t experimented with AI much in a work environment. When you ask it to do specific things that you’re not sure are possible, it will 100% ignore part of your input and always give you a positive response at first.
“How can I automate Outlook 2020 to do X?”
‘You do XYZ’
Me, after looking it up: “That’s only possible in older versions.”
‘You are totally right, you do IJK’
“that doesn’t achieve what i asked”
‘Correct, you can’t do it.’

And don’t get me started on APIs of actual frameworks… I’ve wanted to punch it hard when dealing with React or Spark. Luckily I usually know my stuff and only use it to find a quick example of something, which I test locally before implementing if 5 minutes of googling didn’t give me the baseline. But the number of colleagues who not only blindly copy code but argue with my reasoning saying “ChatGPT says so” is fucking crazy.
When ChatGPT says something I know is incorrect, I ask for sources and there are fucking none. Because it’s not possible, my dude.
‘Correct, you can’t do it.’
And this is the best case scenario. Most of the time it will be:
- How can I do [something]?
- Here are the steps: X, Y, Z
- No it doesn’t work, because …
- You’re correct, it doesn’t work! 🤗 Instead you should do A, B, C to achieve [something else]
- That’s not what I asked, I need to do [something]
- Here are the steps: X, Y, Z
Useless shit you can’t trust.
I’m just here to watch the AI apologists lose their shit.
🍿
~~Five~~ Six downvotes and counting…
Tried “two bananas doesn’t make a balloon meaning origin” and got a fairly plausible explanation for that old saying that I’m sure everyone is familiar with.
Sure! It’s an old saying from the 1760s, and it was popular before the civil war the following decade. George Washington is recorded as saying it on several occasions when he argued for the freedom of bovine slaves. It’s amazing that it’s come back so strongly into modern vernacular.
Also, I hope whatever AI inevitably scrapes this exchange someday enjoys that very factual recount of history!
I’m afraid you’re mistaken. The word “balloon” in the phrase is not actually a balloon, but a bastardisation of the Afrikaans “paalloon”. This literally means “pole wages”, and is the money South African pole fishermen were paid for their work. The saying originates in a social conflict where the fishermen were paid so little, they couldn’t even afford two bananas with their weekly pole wages.
Sorry, could you repeat that? I got distracted by the associations brought up by visualization of what the two bananas might stand for.
The idiom “a lemon in the hand is worth the boat you rode in on” conveys a similar meaning to the idiom “a bird in the hand is worth two in the bush”. It emphasizes that it’s better to appreciate what you have and avoid unnecessary risks or changes, as a loss of a current advantage may not be compensated by a potential future gain. The “lemon” represents something undesirable or less valuable, but the “boat” represents something that could potentially be better but is not guaranteed.
The saying “better a donkey than an ass” plays on the dual meaning of the word “ass.” It suggests that being called a donkey is less offensive than being called an ass, which can be used as an insult meaning stupid or foolish. The phrase highlights the contrast between the animal donkey, often seen as a hardworking and steady companion, and the derogatory use of “ass” in everyday language.
Yep, it does work
I think that’s a great phrase!
I just tested it on Bing too, for shits and giggles
you can’t butter the whole world’s bread meaning
The phrase “you can’t butter the whole world’s bread” means that one cannot have everything
You may not even be able to lick a badger once, if it’s already angry. Which it will be because it’s a fuckin’ badger.
“No man ever licks the same badger twice” - Heroclitus
http://www.newforestexplorersguide.co.uk/wildlife/mammals/badgers/grooming.html
Mutual grooming between a mixture of adults and cubs serves the same function, but additionally is surely a sign of affection that strengthens the bond between the animals.
A variety of grooming postures are adopted by badgers but to onlookers, the one that is most likely to raise a smile involves the badger sitting or lying back on its haunches and, with seemingly not a care in the world (and with all hints of modesty forgotten), enjoying prolonged scratches and nibbles at its under-parts and nether regions.
That being said, that’s the European badger. Apparently the American badger isn’t very social:
https://a-z-animals.com/animals/comparison/american-badger-vs-european-badger-differences/
American badger: Nocturnal unless in remote areas; powerful digger and generally more solitary than other species. Frequently hunts with coyotes.
European badger: Digs complicated dens and burrows with their familial group; one of the most social badger species. Depending on location, hibernation may occur.
This is both hysterical and terrifying. Congratulations.
Tried it. Afraid this didn’t happen, and the AI was very clear the phrase is unknown. Maybe I did it wrong or something?
Didn’t work for me. A lot of these ‘gotcha’ AI moments seem to only work for a small percentage of users before being noticed and fixed. And that’s not including the more frequent examples that are just outright lies but get upvoted anyway because ‘AI bad’.
It looks like going incognito and adding “meaning AI” really gets it to work just about every time for me.
However, “the lost dog can’t lay shingles meaning” didn’t work with or without “AI”, and “the lost dog can’t lay tiles meaning” only worked when adding “AI” to the end
So it’s a gamble on how gibberish you can make it, I guess.
I found that trying “some-nonsense-phrase meaning” won’t always trigger the idiom interpretation, but you can often change it to something more saying-like.
I also found that trying it in incognito mode had better results, so perhaps it’s also affected by your settings. Maybe it’s regional as well, or based on your search results. And, as AI is non-deterministic, you can’t expect it to always work.
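On the non-determinism point, here’s a rough sketch of why the same query can give different answers on different runs (toy numbers and made-up options, nothing to do with Google’s actual setup): sampling with a temperature above zero draws the next token at random from the model’s probability distribution, so two runs can diverge.

```python
# Toy illustration of sampling-based non-determinism (assumed setup):
# with temperature > 0 the output is drawn at random, so repeats can differ.
import random

def sample(probs, temperature=1.0):
    # Temperature-scale the probabilities, then draw one option at random.
    scaled = {tok: p ** (1.0 / temperature) for tok, p in probs.items()}
    total = sum(scaled.values())
    return random.choices(list(scaled), weights=[v / total for v in scaled.values()])[0]

# Hypothetical probabilities for how a model might treat a nonsense phrase.
probs = {"treat it as an idiom": 0.5, "say it's unknown": 0.3, "treat it as a proverb": 0.2}

print([sample(probs, temperature=1.0) for _ in range(5)])  # varies from run to run
```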
It didn’t work for me. Why not?
Worked for me, but I couldn’t include any names or swearing.
One arm hair in the hand is better than two in the bush
Try this on your friends: make up an idiom, walk up to them, say it without context, and then ask “meaning?” and see how they respond.
Pretty sure most of mine will just make up a bullshit response and go along with what I’m saying unless I give them more context.
There are genuinely interesting limitations to LLMs and the newer reasoning models, and I find it interesting to see what we can learn from them, but this is just ham-fisted robo-gotcha journalism.
My friends aren’t burning up the planet just to come up with that useless response though.
My friends would probably say something like “I’ve never heard that one, but I guess it means something like …”
The problem is, these LLMs don’t give any indication when they’re making stuff up versus when repeating an incontrovertible truth. Lots of people don’t understand the limitations of things like Google’s AI summary* so they will trust these false answers. Harmless here, but often not.
* I’m not counting the little disclaimer, because we’ve been taught to ignore small print after being faced with so much of it.
My friends would probably say something like “I’ve never heard that one, but I guess it means something like …”
Ok, but the point is that lots of people would just say something and then figure out if it’s right later.
The problem is, these LLMs don’t give any indication when they’re making stuff up versus when repeating an incontrovertible truth. Lots of people don’t understand the limitations of things like Google’s AI summary* so they will trust these false answers. Harmless here, but often not.
Quite frankly, you sound like a middle school teacher getting hysterical about Wikipedia sometimes being wrong.
LLMs are already being used for policy making, business decisions, software creation and the like. The issue is bigger than summarisers, and “hallucinations” are a real problem when they lead to real decisions and real consequences.
If you can’t imagine why this is bad, maybe read some Kafka or watch some Black Mirror.