Every industry is full of technical hills that people plant their flag on. What is yours?

  • jordanlund@lemmy.world
    16 hours ago

    AI is a fad, and when it collapses, it’s going to do more damage than any perceived good it’s done to date.

    • kboos1@lemmy.world
      14 hours ago

      The issue I take with AI is that it’s having an effect on ignorance similar to the one the Internet created, but worse. It’s information without understanding. Imagine a high-school dropout who is a self-proclaimed genius and a Google wizard: that is AI, at least at the moment.

      Since people imagine AI as the superintelligence from movies, they believe it’s some kind of supreme being. It’s really not. It’s good at a few things, and you should still take its answers with skepticism and proofread its output before copying and pasting the results into anything.

    • tal@lemmy.today
      10 hours ago

      I can believe that LLMs might wind up being a technical dead end (or not; I could also imagine them being a component of a larger system). My own guess is that language, while important to thinking, won’t be the base unit of how thought is processed the way it is in current LLMs.

      Ditto for diffusion models used to generate images today.

      I can also believe that there might be surges and declines in funding. We’ve seen that in the past.

      But I am very confident that AI is not, over the long term, going to go away. I will confidently state that we will see systems that use machine learning to perform an increasing range of human-like tasks over time.

      And I’ll say with lower, though still pretty high, confidence that the computation done by future AI will very probably be done on hardware oriented towards parallel processing. It might not look like the parallel hardware of today. Maybe we find that we can deal with a lot more sparseness, and with dedicated subsystems that individually require less storage. Yes, neural nets approximate something that happens in the human brain, and our current systems use neural nets. But the human brain runs at something like a 90 Hz clock and definitely has specialized subsystems, so it’s a substantially different system from something like Nvidia’s parallel compute hardware today (1,590,000,000 Hz and homogeneous hardware).

      I think that the only real scenario where we have something that puts the kibosh on AI is if we reach a consensus that superintelligent AI is an unsolvable existential threat (and even then, I think that we’re likely to still go as far as we can on limited forms of AI while trying to maintain enough of a buffer to not fall into the abyss).

      EDIT: That being said, it may very well be that future AI won’t be called AI, and that we’ll think of it differently, not as some special category built around a set of specific technologies. For example, OCR (optical character recognition) software and speech recognition software both typically make use of machine learning today; those are established, general-use product categories that get used every day, but we typically don’t call them “AI” in popular use in 2025. When I call my credit card company, say, and navigate a menu system that uses speech recognition, I don’t say that I’m “using AI”.

      It’s the same sort of way that we don’t call semi trucks or sports cars “horseless carriages” in 2025, though they derive from devices that were once called that. We don’t use the term “labor-saving device” any more; I think of a dishwasher or a vacuum cleaner as distinct devices rather than as members of one category. But back when they were being invented, the idea of household machines that could automate human work using electricity did fall into a sort of bin like that.

      • Tar_Alcaran@sh.itjust.works
        14 hours ago

        I’m a bit more pessimistic. I fear that LLM-pushers calling their bullshit-generators “AI” is going to drag other applications down with them. I’m pretty sure that when LLMs all collapse in a heap of unprofitable e-waste and take most of the stock market with them, the funding and capital for the rest of AI is going to die right along with LLMs.

        And there are lots of useful AI applications in every scientific field; data interpretation with AI is extremely useful, and I’m very afraid it’s going to suffer from OpenAI’s death.