• ExLisper@lemmy.curiana.net · 19 hours ago

    You’re talking about consciousness, not AGI. We will never be able to tell whether an AI has “real” consciousness or not. The goal is really to create an AI that acts intelligently enough to convince people it may be conscious.

    Basically, we will “hit” AGI when enough people start treating it like AGI, not when we achieve some magical technological breakthrough and declare “this is AGI”.

    • Perspectivist@feddit.uk · 19 hours ago

      The same argument applies to consciousness, but I’m talking about general intelligence now.

      • ExLisper@lemmy.curiana.net · 18 hours ago (edited)

        I don’t think you can define AGI in a way that makes it substrate-dependent. It’s simply about behaving in a certain way, and a sufficiently complex set of ‘if -> then’ statements could pass as AGI. The limitations are computational power and the practicality of creating the rules. We already have supercomputers that could easily emulate AGI, but we don’t have a practical way of writing all the ‘if -> then’ rules, and I don’t see how creating the rules could be substrate-dependent.
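
        To make the ‘if -> then’ point concrete, here’s a toy sketch in Python (purely illustrative, with made-up rules) of the kind of rule table I mean. The claim is that AGI-like behaviour could in principle be just this, scaled up by many orders of magnitude; the bottleneck is authoring and evaluating the rules fast enough, not the substrate running them:

            # Toy rule-based responder: an ordered (condition -> action) table.
            # A real system would need astronomically many rules; the hardware
            # executing this loop is irrelevant to the observed behaviour.
            RULES = [
                (lambda s: "hello" in s.lower(), "Hi there!"),
                (lambda s: s.strip().endswith("?"), "Good question, let me think..."),
                (lambda s: True, "Tell me more."),  # catch-all fallback rule
            ]

            def respond(user_input: str) -> str:
                for condition, action in RULES:
                    if condition(user_input):   # the 'if' part
                        return action           # the '-> then' part
                return ""

            print(respond("Hello, are you conscious?"))  # matches the first rule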

        Edit: Actually, I don’t know whether current supercomputers could process input fast enough to pass as AGI, but that’s still a matter of computational power, not substrate. Nothing suggests we won’t be able to keep increasing computational power without some biological substrate.