I prefer Waterfox; OpenAI can keep its Chat-chippy-tea browser.

  • MagicShel@lemmy.zip

    I’ll look into it. OAI’s 30B model is the most I can run on my MacBook, and it’s decent. I don’t think I can even run that on my desktop with a 3060 GPU. I have access to GLM 4.6 through a service, but that’s the ~350B-parameter model, and I’m pretty sure that’s not what you’re running at home.

    It’s pretty capable. I want to play around with setting up RAG pipelines for specific domain knowledge, but I’m just getting started; the retrieval step is sketched below.
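
    For anyone curious what the retrieval half of a RAG pipeline looks like, here’s a minimal sketch. It assumes scikit-learn is installed and uses plain TF-IDF similarity instead of a real embedding model; the documents and query are placeholders, not anything from an actual setup.

    ```python
    # Minimal RAG retrieval sketch: index a small corpus, pull the
    # top-k most similar passages, and prepend them to the prompt.
    # Assumes scikit-learn; a real pipeline would swap TF-IDF for an
    # embedding model and send the prompt to a local LLM.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # Hypothetical domain documents -- replace with your own corpus.
    docs = [
        "RAG pipelines retrieve relevant passages before generation.",
        "Quantized 30B models can run on a MacBook with enough RAM.",
        "A 3060 GPU has 12 GB of VRAM, which limits local model size.",
    ]

    vectorizer = TfidfVectorizer()
    doc_vectors = vectorizer.fit_transform(docs)

    def retrieve(query: str, k: int = 2) -> list[str]:
        """Return the k documents most similar to the query."""
        q_vec = vectorizer.transform([query])
        scores = cosine_similarity(q_vec, doc_vectors)[0]
        top = scores.argsort()[::-1][:k]
        return [docs[i] for i in top]

    # Retrieved passages become context for the generation step.
    question = "How do RAG pipelines work?"
    context = "\n".join(retrieve(question))
    prompt = f"Answer using this context:\n{context}\n\nQuestion: {question}"
    print(prompt)
    ```

    The same structure scales up: swap the TF-IDF vectors for embeddings, store them in a vector database, and pass the assembled prompt to whatever local model you’re running.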