• ulterno@programming.dev
    2 days ago

    If something uses a lot of if-else statements to do stuff like act as a “COM” player in a game, it is called an Expert System.
    That is essentially what in-game “AI” used to be. It was not an LLM.
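    Such an Expert System is just hand-written branching. A minimal sketch, where the game-state fields and thresholds are made up for illustration:

```python
# A rule-based "COM" player: hand-coded if/else rules, no learning.
# The state fields (hp, potion) and thresholds are hypothetical.

def com_move(own_hp, enemy_hp, has_potion):
    """Pick an action from hard-coded rules, like classic game 'AI'."""
    if own_hp < 20 and has_potion:
        return "use_potion"      # survive first
    elif enemy_hp < 15:
        return "attack"          # finish the opponent off
    elif own_hp < enemy_hp:
        return "defend"          # losing the trade, play safe
    else:
        return "attack"

print(com_move(10, 50, True))    # use_potion
print(com_move(80, 10, False))   # attack
```

    Every behaviour is traceable to an explicit rule, which is exactly what separates this from a trained model.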

    Stuff like clazy and clang-tidy are neither ML nor LLMs.
    They don’t rely on curve fitting or mindless grouping of data points.
    Parameters in them are decided based on the programming language specification, and tokenisation is done directly using the features of the language. How the tokens are used is also determined by hard logic rather than fuzzy logic, and that is why the resultant options you get in the completion list end up being valid syntax for said language.
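    A toy sketch of that idea: completions are looked up from a hand-written grammar table, so every suggestion is valid by construction. The mini-grammar below is invented for illustration, not any real language’s spec:

```python
# Spec-driven completion: a hard-coded table of which tokens may
# legally follow which. This toy "grammar" is hypothetical.

GRAMMAR = {
    "start": ["if", "while", "return"],
    "if": ["("],
    "(": ["identifier"],
    "identifier": [")", "==", "+"],
}

def suggest(prev_token):
    # Hard logic: a table lookup, no statistics, no training.
    return GRAMMAR.get(prev_token, [])

print(suggest("if"))          # ['(']
print(suggest("identifier"))  # [')', '==', '+']
```

    Nothing outside the table can ever be suggested, which is why such tools never “hallucinate” syntax.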


    Now if you are using Cursor for code completion, of course that is AI.
    It is not programmed using the features of the language, but iterated on until it produces output that happens to match the features of the language.

    It is like putting a billion monkeys in front of typewriters, selecting the one that makes something Shakespeare-ish, and killing off all the others. Then cloning the selected one, rinse and repeat.
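    That selection loop is essentially an evolutionary search. A minimal sketch, with an assumed target string, mutation rate and population size chosen purely for illustration:

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

TARGET = "METHINKS IT IS LIKE A WEASEL"
CHARS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(s):
    # Count positions that already match the target.
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate=0.05):
    # Each character has a small chance of being retyped at random.
    return "".join(random.choice(CHARS) if random.random() < rate else c
                   for c in s)

# One random "monkey"; clone it with mutations, keep only the best.
best = "".join(random.choice(CHARS) for _ in TARGET)
while best != TARGET:
    best = max([mutate(best) for _ in range(100)] + [best], key=fitness)
print(best)
```

    Note that nothing in the loop understands English; it only keeps whatever scores better, which is why the process needs so many wasted attempts.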

    And that is why it takes a stupendously disproportionate amount of energy, time and money to train something whose output could otherwise easily be done better with a simple bash script.

    • Legianus@programming.dev
      2 days ago

      To be honest, I feel like what you describe in the second part (the monkey analogy) is more of a genetic algorithm than a machine learning one, but I get your point.

      Quick side note: I wasn’t including a discussion about energy consumption at all, and on that front ML-based algorithms, whatever form they take, will mostly consume more energy (assuming the “classical” algorithms are not completely inefficient). I do admit I am not sure how much more (especially after training), but at least the LLMs, with their large vector/matrix-based approaches, eat a lot (I mean in the case of cross-checking tokens across different vectors and such). Non-LLM ML may be much more power-efficient.

      My main point, however, was that people only remember AI from ~2022 onwards and forget about things from before (e.g. non-LLM ML algorithms) that were actively used in code completion. Obviously, there are things like ruff and clang-tidy (as you rightfully mentioned) and more that can work without any machine learning. I didn’t check whether there is literally none in them, though I assume so.

      On the point of game “AI”, as in AI opponents, I wasn’t talking about that at all (though since DeepMind, they have tended to be a bit more ML-based too, and better at games, see StarCraft 2, instead of only cheating to get an advantage).

      • ulterno@programming.dev
        1 day ago

        Yeah, my main point with all those examples was to make the point that “AI” has always been a marketing term.

        Curve fitting and data-point clustering are both pretty efficient when used for the thing they were made for. But if you then start brute-forcing multiple nodes of the same thing just to get a semblance of something they were not made for, of course you will end up using a lot of energy.
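        For instance, a least-squares line fit is a closed-form computation, no training loop at all. A sketch with made-up sample data:

```python
# Curve fitting used for what it is made for: fitting a line to
# points by ordinary least squares, computed directly in one pass.

def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]        # exactly y = 2x + 1
print(fit_line(xs, ys))     # (2.0, 1.0)
```

        No iteration, no GPU, and the answer is exact for data the method is actually meant for.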


        We humans have it pretty hard. Our brain is pretty illogical. We then generate multiple layers of abstraction to make a world view, trying to match the world we live in. Over those multiple layers comes a semblance of logic.
        Then we make machines.

        We make machines to be inherently logical, and that makes them better at logical operations than us humans. Hence calculators.
        Now someone comes and says: let’s make an abstraction layer on top of the machine to represent illogical behaviour (kinda like our brains).
        (┛`Д´)┛彡┻━┻

        And then on top of that, they want that illogical abstract machine to itself create abstractions inside it, to first mimic human output and then go further and do logical stuff. All of that, just so one can mindlessly feed data into it to “train” it, instead of thinking themselves and feeding it proper logic.

        This is like saying they want to install an OS in browser WASM and then install a web browser inside that OS, to do the same thing they could have done with the original browser.

        In the monkeys analogy, you can add that the monkeys are a simulation on a computer.