• real_squids@sopuli.xyz
    17 hours ago

    How do you know they’re not running a local model? Ultimately, the problem with LLM accusations is that, short of a confession or some hardcore surveillance of the other person, you can’t prove it.

    edit: or fingerprinting/watermarking

    edit2: no, “you can tell by the way it is” isn’t proof (simply because that’s fixable in an instant). even if you’re the smartest person on the internet. and again, it could be a local model.

    • givesomefucks@lemmy.world
      18 hours ago

      Ultimately, the problem with LLM accusations is that, short of a confession or some hardcore surveillance of the other person, you can’t prove it.

      Human variation.

      Ironically, you would have to take the other person’s word on it; luckily, you just said you were comfortable doing so.

      Some people are statistical outliers, and to them lots of stuff is incredibly obvious; they’re constantly frustrated that others can’t see it. They might even sink sizeable free time into explaining random shit, just to practice not losing their temper when people can’t see the obvious.

      So you might not be able to tell that was AI from a glance, but humans are pattern recognition machines and we’re not all equally good at it.

      So believe an “LLM accusation” or not, but some people absolutely can pick out a chatbot response, especially after taking two seconds to glance at the typical comments on a user’s profile.

      A jump from 1–2 sentence comments to a stereotypical AI response…

      Well, again, not everyone is as good at picking out patterns quickly.

      To some, what took me literally under 10 seconds and two clicks counts as “hardcore surveillance”, because it would take them a long time to figure out.

      Don’t assume everyone else is exactly like you.