• eldavi@lemmy.ml · 5 hours ago

    waterfox?!!! librewolf! iceweasel!

    i wish there was some way to know when these projects pop up and whether they had any staying power.

    • DrDystopia@lemy.lol · 4 hours ago

      The author has maintained Waterfox for 15 years according to the article.

      If only it was possible to extract some idea of staying power from 15 years of activity…

        • eldavi@lemmy.ml · 4 hours ago

        staying power is only half the answer, and iceweasel somehow had bigger adoption than waterfox back in the day.

  • Quibblekrust@thelemmy.club · edited · 4 hours ago

    There’s so much repeated paranoia in this article. He makes the same weak points over and over.

    But how do you keep track of what a black box actually does when it’s turned on?

    And later,

    And yes, yes - disabling features is all well and good, but at the end of the day, if these AI features are black boxes, how are we to keep track of what they actually do?

    Why would you have to care? You turned them off. The browser is open source. You can see how it invokes the LLMs. If you turn off the features that invoke LLMs, it will not invoke LLMs. I don’t get it. Where’s the disconnect here? The browser is not a black box. The LLM it talks to in the cloud is a black box. If it doesn’t talk to any LLM… 🤷‍♂️

    The core browsing experience should be one that fully puts the user in control, not one where you’re constantly monitoring an inscrutable system that claims to be helping you.

    Jesus… The bias in this article is extreme and repeated often. “claims to be helping”… he even said earlier that LLMs have a measurable utility. Why are they suddenly merely “claiming” to be helpful?

    Why do you have to constantly monitor something you turned off? Really? Constantly?

    Even if you can disable individual AI features, the cognitive load of monitoring an opaque system that’s supposedly working on your behalf would be overwhelming.

    “Overwhelming cognitive load”. Riiight. I turned off telemetry in Firefox as soon as I installed it. I don’t constantly monitor that setting. There is zero cognitive load. I’ll do the same to the AI features if I don’t want them. Also, again with the “supposedly working for you” or “claiming to be helpful” language. Such bias.
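    For what it's worth, turning telemetry off once can even be made self-enforcing: Firefox reads a `user.js` file from the profile directory at every startup and re-applies the prefs in it (pref names below are the documented telemetry prefs; double-check them against your Firefox version). A minimal sketch:

    ```js
    // user.js — place in the Firefox profile directory.
    // Prefs listed here are re-applied on every startup,
    // overriding anything flipped back in the UI or by an update.
    user_pref("toolkit.telemetry.enabled", false);
    user_pref("datareporting.healthreport.uploadEnabled", false);
    ```

    So even in the worst case, keeping a setting off is a one-time edit, not ongoing monitoring.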

    They promise AI will be optional, but that promise acknowledges they’re building AI so deeply into Firefox that an opt-out mechanism becomes necessary in the first place.

    That’s such terrible logic, but so is my original counterargument as pointed out by Undertaker below.

    If something has an opt-out, it has to be “so deeply” built into it? Are the current new-tab features deeply built into Firefox? Like Pocket and such? They’re opt out. Are address bar completions “so deeply” integrated? They’re opt out, too. Is the crash reporter “so deeply” integrated into Firefox? That’s opt out!

    Hell, you could argue crash reporting is deeply integrated because maybe there are many try-catch blocks all over the code which use it, but if you’re the kind of person who turned it off, does it require an “overwhelming cognitive load” to keep it off? Nonsense.

    I still don’t think turning a setting off requires “overwhelming cognitive load”.

    This article is a bombastic mass of paranoia and bad logic.

    If Firefox releases AI things you can’t just turn off, that can be easily invoked by accident—gestures, keyboard shortcuts, or whatever—that might send page content to an LLM, then I will stop using it. Until then, I’m happy with Firefox. It will always be more up to date than the derivatives.

    • ReverendIrreverence@lemmy.ml · 1 hour ago

      I turned off telemetry in Firefox as soon as I installed it. I don’t constantly monitor that setting.

      I have regularly seen settings I previously turned off become re-enabled after an update. So maybe not “constantly”, but occasionally (i.e. after an update) I certainly do monitor those settings.
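      One way to make such settings update-proof (assuming a desktop Firefox install; the policy names below come from Mozilla's policy-templates, so verify them against the current list) is a system-wide `policies.json` in the installation's `distribution` directory, which Firefox enforces on every launch:

      ```json
      {
        "policies": {
          "DisableTelemetry": true,
          "DisableFirefoxStudies": true,
          "DisablePocket": true
        }
      }
      ```

      Unlike a profile setting, an update can't quietly flip these back on without ignoring the policy mechanism entirely.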

    • GenderNeutralBro@lemmy.sdf.org · 6 hours ago

      I don’t want AI in my browser even if I can turn it off for the same reason I don’t want my refrigerator door booby-trapped with an explosive even if I can turn it off.

      Bugs happen. Configuration changes happen. User error happens. Software is complex, and I shouldn’t need an intimate knowledge of every goddamn app I run to be sure it’s not siphoning all my data off to god-knows-where. I use hundreds of programs on a daily basis. It is completely untenable to carefully configure every single one, stay abreast of constant updates and changes, and spend 76 full working days reading every TOS I am subject to. And of course, all their policies and defaults are subject to change without notice, so nothing I learn today will necessarily apply tomorrow anyway.

      I want to be confident that my web browser is not — either by design, due to a misconfiguration, or due to a bug — sending my data to OpenAI. I do not want a booby-trapped browser, even if I can turn off the booby-traps. I do not want my fridge to explode, so I don’t buy fridges with built-in explosives. Seems pretty simple to me.

      I also want to be confident in the same for others. If I deploy a browser to 100 employees’ machines, or even just my mom’s, a little opt-out checkbox under Settings will not give me any peace of mind.

    • pory@lemmy.world · 11 hours ago

      Firefox is a black box because you can’t opt out of stuff before you click the ‘update browser!’ button. They’ve added default-on data harvesting, telemetry, ads, and now chatbots to Firefox that you have to track down and disable every time it happens. All self-updating software is a “black box” like this, but Mozilla has lost my trust that their updates will have more good than bullshit. So now I use Waterfox and don’t need to worry that there’s some new scheme to monetize me every time I get a browser update.

    • Undertaker@feddit.org · 14 hours ago

      Your post is really strained.
      You're missing several points about monitoring. An evolving browser has to be monitored for these anti-features, because new ones could be added at any time and then have to be disabled immediately.

      “Claims” to help is correct. While AI can have benefits, it is advertised both as always helpful and as something that has to be integrated into a browser. Those are two different claims, and the latter one is wrong. The opposite will be the case: people will lose their ability to think, analyze and decide for themselves.

      they’re building AI so deeply into Firefox that an opt-out mechanism becomes necessary in the first place

      That’s such terrible logic. If something has an opt-out, it has to be “so deeply” built into it?

      If you don’t get it, I can’t help you.
      Article says: A causes B.
      You argue: B causes A?
      Your examples that follow are pointless and unfitting as well.

      Maybe you should take a deep breath and consider a factual view instead of pseudo arguments.

      • Eager Eagle@lemmy.world · edited · 5 hours ago

        Well, the opt-out argument really doesn’t make any sense. The fact that there’s an opt-out tells me nothing about how deeply a feature is embedded; if anything, it tells me the exact opposite of what the article argues: the only reason we can disable it is that it’s not deeply integrated. If it were, there most likely wouldn’t be an opt-out.