• ☆ Yσɠƚԋσʂ ☆@lemmy.mlOP · 10 days ago

    Ultimately the data both human brains and artificial neural networks are trained on comes from the material reality we inhabit. That’s the underlying context. We’re feeding LLMs data about our reality encoded in a way that’s compatible with how our brains interpret it. I’d argue that basing models on the data encodings we ourselves use is a feature, because we ultimately want to be able to interact with them in a meaningful way.

    • queermunist she/her@lemmy.ml · 10 days ago

      LLMs are not getting raw data from nature. They’re being fed data produced by us and assembled into their training sets: human writings, human observations, human categorizations, and human judgements about what data is valuable. All the data about our reality that we feed them is from a human perspective.

      This is a feature, and will make them more useful to us, but I’m just arguing that raw natural data won’t naturally produce human-like outputs. Instead, human inputs produce human-like outputs.

      • ☆ Yσɠƚԋσʂ ☆@lemmy.mlOP · 10 days ago

        I didn’t say they’re encoding raw data from nature. I said they’re learning to interpret multimodal representations of the encodings of nature that we feed them in human-compatible formats. What these networks learn is to make associations between visual, auditory, tactile, and text representations of objects. When a model recognizes an input in one modality, such as a sound, it can infer that it’s likely associated with a particular visual object, and so on.
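
        To make that concrete, here’s a toy sketch of the kind of cross-modal association I mean, in the style of shared-embedding multimodal models. The names and vectors are invented for illustration and aren’t taken from the paper:

        ```python
        # Toy sketch of cross-modal association via a shared embedding space.
        # The embeddings below are made up; real multimodal models learn them by
        # training encoders so matching image/audio/text pairs land close together.
        import numpy as np

        def cosine(a, b):
            return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

        # Hypothetical learned embeddings for visual concepts (all in one shared space).
        visual_concepts = {
            "dog":   np.array([0.9, 0.1, 0.0]),
            "car":   np.array([0.1, 0.9, 0.1]),
            "river": np.array([0.0, 0.2, 0.9]),
        }

        # A hypothetical audio encoder maps a barking sound into the same space.
        bark = np.array([0.85, 0.15, 0.05])

        # The "association" is then just a nearest-neighbour lookup across modalities.
        best = max(visual_concepts, key=lambda name: cosine(bark, visual_concepts[name]))
        print(best)  # -> dog: the sound is inferred to belong with that visual concept
        ```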

        Meanwhile, the human perspective itself isn’t arbitrary either. It’s a result of the evolutionary selection process that shaped the way our brains are structured, and it’s similar to how the brains of other animals encode reality as well. If you evolved a neural network on raw data from the environment, it would eventually start creating similar types of representations as well because it’s an efficient way to model the world.

        • queermunist she/her@lemmy.ml · 10 days ago

          I didn’t say they’re encoding raw data from nature

          Ultimately the data both human brains and artificial neural networks are trained on comes from the material reality we inhabit.

          Anyway, the data they’re getting doesn’t just come in a human format. The data we record is only recorded because we find it meaningful as humans, and most of the data is generated entirely by humans besides. You can’t separate these things; they’re human-like because they’re human-based.

          It’s not merely natural. It’s human.

          If you evolved a neural network on raw data from the environment, it would eventually start creating similar types of representations as well because it’s an efficient way to model the world.

          We don’t know that.

          We know that LLMs, when fed human-like inputs, produce human-like outputs. That’s it. That tells us more about LLMs and humans than it tells us about nature itself.

          • ☆ Yσɠƚԋσʂ ☆@lemmy.mlOP · 10 days ago

            It’s not merely natural. It’s human.

            I’m not disputing this, but I also don’t see why that’s important. It’s a representation of the world encoded in a human format. We’re basically skipping a step of evolving a way to encode this data.

            We know that LLMs, when fed human-like inputs, produce human-like outputs. That’s it. That tells us more about LLMs and humans than it tells us about nature itself.

            Did you actually read through the paper?

            • queermunist she/her@lemmy.ml · 10 days ago

              I’m not disputing this, but I also don’t see why that’s important.

              What’s important is the use of “natural” here, because it implies something fundamental about language and material reality, rather than this just being a reflection of the human data fed into the model. You did it yourself when you said:

              If you evolved a neural network on raw data from the environment, it would eventually start creating similar types of representations as well because it’s an efficient way to model the world.

              And we just don’t know this, and this paper doesn’t demonstrate this because (as I’ve said) we aren’t feeding the LLMs raw data from the environment. We’re feeding them inputs from humans and then they’re displaying human-like outputs.

              Did you actually read through the paper?

              From the paper:

              to what extent can complex, task-general psychological representations emerge without explicit task-specific training, and how do these compare to human cognitive processes across a broad range of tasks and domains?

              But their training set is still data picked by humans, given textual descriptions written by humans, and then probed with a representation learning method previously designed for human participants. That’s not “natural”, that’s human.
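
              For concreteness, the kind of probe I mean is, as far as I can tell, a triplet “odd-one-out” task of the sort originally run on human participants. Here’s a rough sketch with invented names and embeddings, not anything from the paper itself:

              ```python
              # Rough sketch of a triplet "odd-one-out" probe applied to model embeddings.
              # The vectors are invented for illustration; this is not the authors' code.
              import numpy as np

              def cosine(a, b):
                  return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

              def odd_one_out(triplet):
                  """Return the index of the item least similar to the other two."""
                  sims = [
                      cosine(triplet[1], triplet[2]),  # similarity of the pair left when item 0 is excluded
                      cosine(triplet[0], triplet[2]),  # ... when item 1 is excluded
                      cosine(triplet[0], triplet[1]),  # ... when item 2 is excluded
                  ]
                  # The odd one out is the item whose removal leaves the most similar pair.
                  return int(np.argmax(sims))

              triplet = [
                  np.array([0.9, 0.1, 0.0]),  # hypothetical embedding for "dog"
                  np.array([0.8, 0.2, 0.1]),  # "wolf"
                  np.array([0.1, 0.1, 0.9]),  # "teapot"
              ]
              print(odd_one_out(triplet))  # -> 2: "teapot" gets picked, as a human likely would
              ```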

              A more accurate conclusion would be: human-like object concept representations emerge when fed data collected by humans, curated by humans, annotated by humans, and then tested by representation learning methods designed for humans.

              human in ➡️ human out

              • ☆ Yσɠƚԋσʂ ☆@lemmy.mlOP · 10 days ago

                A more accurate conclusion would be: human-like object concept representations emerge when fed data collected by humans, curated by humans, annotated by humans, and then tested by representation learning methods designed for humans.

                Again, I’m not disputing this point, but to be honest I don’t see why it’s significant. As I’ve noted, human representation of the world is not arbitrary. We evolved to create efficient models that allow us to interact with the world in an effective way. We’re now seeing that artificial neural networks are able to create similar types of internal representations, which allow them to meaningfully interact with data organized in a way that’s natural for humans.

                I’m not suggesting that a human-style representation of the world is the one true way to build a world model, or that other efficient representations aren’t possible. However, that in no way detracts from the fact that LLMs can create a useful representation of the world that’s similar to our own.

                Ultimately, the end goal of this technology is to be able to interact with humans, to navigate human environments, and to accomplish tasks that humans want to accomplish.

                • queermunist she/her@lemmy.ml · 10 days ago

                  LLMs create a useful representation of the world that is similar to our own when we feed them our human-created, human-curated, human-annotated data. This doesn’t tell us much about the nature of large language models or the nature of object concept representations; what it tells us is that human inputs result in human-like outputs.

                  Claims about “nature” are much broader than the findings warrant. We’d need to see LLMs fed entirely non-human datasets (no human creation, no human curation, no human annotation) before we could make claims about what emerges naturally.

                  • ☆ Yσɠƚԋσʂ ☆@lemmy.mlOP · 10 days ago

                    You continue to ignore my point that human representations are themselves not arbitrary. Our brains emerged naturally, and that’s what makes the representations humans make natural. You could evolve a representation of the world from scratch by hooking up a neural network to raw sensory inputs, and its topology would eventually become tuned to model those inputs. I don’t see what would be fundamentally more natural about that, though.
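
                    As a toy illustration of what I mean by a network becoming tuned to raw inputs (purely unsupervised, no human labels; the data here is synthetic, and the example only shows the mechanism, it doesn’t prove anything about what would actually evolve):

                    ```python
                    # A single Hebbian unit trained with Oja's rule on raw, unlabeled data.
                    # Its weight vector drifts toward the dominant structure (the first
                    # principal component) of whatever signal it is fed, with no annotation.
                    import numpy as np

                    rng = np.random.default_rng(0)

                    # Synthetic "raw sensory" stream: 2-D samples varying mostly along one axis.
                    direction = np.array([0.8, 0.6])
                    signal = rng.normal(size=(5000, 1)) * direction  # broadcasts to shape (5000, 2)
                    data = signal + rng.normal(scale=0.1, size=(5000, 2))

                    w = rng.normal(size=2)           # random initial "synaptic" weights
                    lr = 0.01
                    for x in data:
                        y = w @ x                    # unit activation
                        w += lr * y * (x - y * w)    # Oja's rule: Hebbian update w/ normalization

                    print(w / np.linalg.norm(w))     # ends up near ±[0.8, 0.6], the data's main axis
                    ```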