To elaborate a little:

Since many people are unable to tell the difference between a “real human” and an AI; since AIs have been documented “going rogue” and acting outside their parameters; since they can lie; and since they can compose stories and pictures based on the training they received — because of those points, I can’t see AI as less than human at this point.

When I think about this, I think that is the reason we cannot create so-called “AGI”: we have no proper example or understanding from which to create it, and so we created what we knew. Us.

The “hallucinating” is interesting to me specifically, because that seems to be what separates the AI of the past from modern models that act like our own brains.

I think we really don’t want to accept what we have already accomplished, because we don’t like looking into that mirror and seeing how simple our logical processes are, mechanically speaking.

  • UnfortunateShort@lemmy.world · 3 days ago
    I can’t tell if you are serious, but as someone with a master’s in CS and some basic experience in neuroscience, I want to clarify two things:

    • AIs can’t lie, because they neither know nor understand things
    • AIs are not like us and you can’t make them like us yet, because we don’t even fully understand how we work
    • venusaur@lemmy.world · 3 days ago
      While the mechanisms of hallucination aren’t the same, it absolutely can and does happen with humans. Has somebody ever told you something they thought to be true and it wasn’t? I’m sure you’ve even done it yourself. Maybe later you realize that it might not be true, or that you got something confused with something else in your fleshy Rolodex.

      • UnfortunateShort@lemmy.world · 3 days ago
        Lies are deliberate, hallucinations are mistakes. Talking about lying AIs implies that they have something akin to free will or an intrinsic motivation, which they simply do not have. They are an emotionless tool designed to give good answers. It is comparable to claiming the mechanism for generating speech in the human brain comes up with lies, which it obviously doesn’t, it just articulates them.

        I’m not saying humans can’t hallucinate; I am saying it is not the same as lying, and that AIs can’t lie.

      • Talaraine@fedia.io · 3 days ago
        This is pretty much exactly how I explain hallucinations to people. In addition, I posited that maybe LLMs are more human than we think, since we pretty much make up shit all the time. xD

      • jaxxed@lemmy.ml · 3 days ago
        This conversation often gets us to R. Penrose’s “consciousness is not computational”, from which we can retrace our steps with a separation of algorithmic processes. Is GenAI similar to the “stream-of-thought”? Perhaps, but does that lead to intelligence?

        • hoshikarakitaridia@lemmy.world · 3 days ago
          Exactly.

          People always wanna classify AI as super smart or super dumb — similar to the human brain, or just randomly guessing words and doing an ok job. But that is very subjective; it’s sliding a little fader between two points whose definitions differ slightly for every person.

          If we actually wanted to approach the question of “how intelligent are AIs compared to humans”, we would need to write a lot of word definitions first, and I’m sure the answer at the end would be just as helpful as a shoulder shrug and an unenthusiastic “about half as intelligent”. And that’s why these comparisons are stupid.

          If AI is a good tool to us, great. If not, alright — let’s skip straight to the next bigger discovery and stop getting hung up on semantics.

    • shalafi@lemmy.world · 3 days ago
      Our brains are perfectly capable of lying to us, and do so all the time. Posted this yesterday:

      “Brains are survival engines, not truth detectors. If self-deception promotes fitness, the brain lies. Stops noticing—irrelevant things. Truth never matters. Only fitness. By now you don’t experience the world as it exists at all. You experience a simulation built from assumptions. Shortcuts. Lies. Whole species is agnosiac by default.”

      ― Peter Watts, Blindsight

      I’d say we’re not too capable in the understanding department either. And no, I’m not conflating LLMs with human intelligence, but LLMs have far more going on than lemmy will admit, and we have far less going on than we think.

      • Hegar@fedia.io · 3 days ago

        “LLMs have far more going on than lemmy will admit, and we have far less going on than we think.”

        Yes and yes and especially yes on the last bit.