I don’t really want companies or anyone else deciding what I’m allowed to see or learn. Are there any AI assistants out there that won’t say “sorry, I can’t talk to you about that” if I mention something modern companies don’t want us to see?

  • SpicyTaint@lemmy.world · 58 minutes ago

    Is there a general term for the setting that offloads the model into RAM? I’d love to be able to load larger models.
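    In llama.cpp-style runners this is usually called GPU layer offloading (llama.cpp itself exposes it as `--n-gpu-layers`/`-ngl`): some transformer layers go to VRAM and the rest stay in system RAM. A minimal sketch of the split arithmetic, with entirely made-up model sizes (the function name and numbers are illustrative, not any tool's real API):

    ```python
    # Hypothetical sketch: how many layers fit in VRAM, with the
    # remainder "offloaded" to system RAM. All sizes are made up.

    def split_layers(n_layers: int, layer_size_gb: float, vram_gb: float) -> tuple[int, int]:
        """Return (layers_on_gpu, layers_in_ram) for a given VRAM budget."""
        on_gpu = min(n_layers, int(vram_gb // layer_size_gb))
        return on_gpu, n_layers - on_gpu

    # e.g. a 32-layer model at 0.5 GB per layer on a 12 GB card:
    gpu_layers, ram_layers = split_layers(32, 0.5, 12.0)
    print(gpu_layers, ram_layers)  # 24 layers on GPU, 8 in RAM
    ```

    The trade-off is that layers left in RAM run on the CPU (or are shuttled over PCIe), so a larger model loads but tokens per second drop.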

    I thought CUDA was supposed to treat VRAM and regular RAM as one resource, but that doesn’t seem to be the case.