• Zwuzelmaus@feddit.org · 14 hours ago

    But that’s what they wanted anyway, isn’t it?

    Burning shitloads of money.

    Waiting until they can later, finally, rule the world.

    • zaphod@sopuli.xyz · 13 hours ago

      Yeah, the usual startup approach. Burn investor money to get into the market (or in this case create a market) by offering services below cost. Once they have enough users and their investors want their money back they’ll ramp up prices.

      • RedGreenBlue@lemmy.zip · 12 hours ago

        Problem is they’re competing with cheap web services like DeepSeek and free local models. Those alternatives are gonna become more popular when ChatGPT starts charging.

        They are spending like crazy in the hope of some innovation that will give them an advantage that others can’t copy for cheap. That is a very difficult thing to accomplish. I bet they will fail. That money ain’t coming back.

        • CheeseNoodle@lemmy.world · 10 hours ago (edited)

          I’ve been looking into local models lately, mostly out of vague paranoia that I should get one up and running before it becomes de facto illegal for normal people to own due to some kind of regulatory capture. Seems fairly doable at the moment, though not particularly user-friendly.
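
          To give a sense of “doable”: once something like ollama is installed and a model pulled, the minimal version is only a few lines. A sketch (the model name is just an example):

          ```python
          # Minimal local-chat loop via the ollama Python package.
          # Assumes: pip install ollama, the ollama server running,
          # and a model pulled first (e.g. `ollama pull llama3`).
          import ollama

          response = ollama.chat(
              model="llama3",  # example; substitute whatever model you pulled
              messages=[{"role": "user", "content": "Explain quantization briefly."}],
          )
          print(response["message"]["content"])
          ```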

          • sobchak@programming.dev · 9 hours ago

            They’re OK. It’s kinda amazing how much data can be lossily compressed into a 12GB model. They don’t really start to get comparable to the large frontier models until 70B+ parameters, and you would need serious hardware to run those.
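
            Rough arithmetic on the hardware point, counting weights only (ignoring KV cache and runtime overhead):

            ```python
            # Back-of-the-envelope weight memory for dense models.
            # bits_per_weight: 16 = fp16, 4 = a typical quantized build.
            def weight_gb(params_billion: float, bits_per_weight: int) -> float:
                return params_billion * 1e9 * bits_per_weight / 8 / 1e9

            for params in (7, 70):
                for bits in (16, 8, 4):
                    print(f"{params}B @ {bits}-bit ≈ {weight_gb(params, bits):.0f} GB")
            # Even at 4-bit, 70B is ~35 GB of weights alone, i.e. multiple
            # GPUs or a machine with a lot of fast RAM for offloading.
            ```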

            • CheeseNoodle@lemmy.world · 5 hours ago

              Oh, I mean with setup: I can download ollama and a basic model easily enough and get a generic chat bot. But if I want something that can scan through PDFs with direct citations (like the Nvidia LLM) or play a character, then suddenly it’s all git repositories and cringey YouTube tutorials.
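
              To illustrate the gap: even a bare-bones version of the PDF thing ends up as a small program rather than a checkbox. A sketch, assuming pypdf plus ollama’s embedding endpoint (the path and model names are just examples):

              ```python
              # Bare-bones "ask a PDF, cite the page" sketch.
              # Assumes: pip install pypdf ollama, an ollama server running,
              # and models pulled (`ollama pull nomic-embed-text`, `ollama pull llama3`).
              import math
              import ollama
              from pypdf import PdfReader

              def embed(text: str) -> list[float]:
                  return ollama.embeddings(model="nomic-embed-text", prompt=text)["embedding"]

              def cosine(a, b):
                  dot = sum(x * y for x, y in zip(a, b))
                  na = math.sqrt(sum(x * x for x in a))
                  nb = math.sqrt(sum(y * y for y in b))
                  return dot / (na * nb)

              # One chunk per page, so a "citation" is just the page number.
              reader = PdfReader("paper.pdf")  # example path
              index = [
                  (i + 1, text, embed(text))
                  for i, page in enumerate(reader.pages)
                  if (text := page.extract_text() or "").strip()
              ]

              question = "What does the paper conclude?"
              q = embed(question)
              page_no, excerpt, _ = max(index, key=lambda item: cosine(q, item[2]))

              reply = ollama.chat(
                  model="llama3",  # example chat model
                  messages=[{
                      "role": "user",
                      "content": f"Answer only from this excerpt (page {page_no}):\n"
                                 f"{excerpt}\n\nQuestion: {question}",
                  }],
              )
              print(f"[p. {page_no}] {reply['message']['content']}")
              ```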

              • sobchak@programming.dev · 3 hours ago

                That would be nice. I don’t know how to do that other than by programming it yourself with something like langgraph.
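
                Roughly, the skeleton langgraph gives you looks like this. A minimal sketch; the state fields and node bodies are placeholders, not a working character/RAG bot:

                ```python
                # Minimal langgraph skeleton: retrieve -> answer.
                # (pip install langgraph; node logic is stubbed out.)
                from typing import TypedDict
                from langgraph.graph import StateGraph, END

                class State(TypedDict):
                    question: str
                    context: str
                    answer: str

                def retrieve(state: State) -> dict:
                    # Placeholder: fetch passages relevant to state["question"]
                    return {"context": "…retrieved passage…"}

                def answer(state: State) -> dict:
                    # Placeholder: call a local model with question + context
                    return {"answer": f"Based on: {state['context']}"}

                graph = StateGraph(State)
                graph.add_node("retrieve", retrieve)
                graph.add_node("answer", answer)
                graph.set_entry_point("retrieve")
                graph.add_edge("retrieve", "answer")
                graph.add_edge("answer", END)

                app = graph.compile()
                print(app.invoke({"question": "Who is the character?"})["answer"])
                ```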