

It’s actually so bad lol. Idk what Microsoft has against - for arg flags but it’s fuckn annoying
Yo whatup
ಠ_ಠ gross
But that obviously wouldn’t stop them from having him disappeared. Heck, I don’t even think that’s off the table now
Yes, but a lot of us who do block ads block them largely because they are intolerable. I only started blocking ads at all because of how utterly miserable YouTube ads became.
While this is pretty hilarious, LLMs don’t actually “know” anything in the usual sense of the word. An LLM, or Large Language Model, is basically a system that maps “words” to other “words” so a computer can model language. I.e., all an LLM knows is that when it sees “I love”, what probably comes next is “my mom”, “my dad”, etc. Because of this behavior, and the fact that we can train them on the massive swath of people asking questions and getting answers on the internet, LLMs essentially by chance are mostly okay at “answering” a question. Really, though, they are just picking the next most likely word over and over based on their training, which usually ends up reasonably accurate.
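To make the “next most likely word” idea concrete, here’s a toy bigram model in Python. This is a drastic simplification of a real LLM (real models use neural networks over huge contexts, not raw word-pair counts), and the tiny corpus here is made up purely for illustration:

```python
from collections import Counter, defaultdict

# Tiny made-up corpus standing in for internet-scale training data.
corpus = "i love my mom . i love my dad . i love my dog . i walk my dog .".split()

# Count which word follows which — a bigram model, the crudest
# possible version of "predict the next word from what came before".
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_likely_next(word):
    # Pick the single most frequent continuation seen in training.
    return follows[word].most_common(1)[0][0]

print(most_likely_next("love"))  # "my"  — the only word ever seen after "love"
print(most_likely_next("my"))    # "dog" — appears most often after "my"
```

The model has no idea what “love” or “dog” mean; it only knows which strings tended to follow which in its training data. Scaled up enormously, that same mechanism is what makes LLM output look like answers.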
Cry about it. Crypto bros make the same excuses to this day. Prove your bullshit works before you start shoving it in my face. And yes, LLMs are really unhelpful. There’s extremely little value you can get out of them (outside of generating text that looks like a human wrote it, which is what they are designed to do) unless you are a proper moron.
Is that so? I don’t find it odd at all when the only thing LLMs are good at so far is costing people their jobs and lowering the quality of essentially everything they get shoved into.
Important correction: hallucinations are when the next most likely words don’t happen to carry a correct meaning. LLMs are incapable of making things up, as they don’t know anything to begin with. They are just fancy autocorrect.
Whoops, meant flags