• mcv@lemmy.zip · 5 hours ago

    OpenAI’s mounting costs — set to hit $1.4 trillion

    Sorry, but WTF!? $1.4 Trillion in costs? How are they going to make all of that back with just AI?

    I think there’s only one way they can make this back: if AI gets so good they can really replace most employees.

    I don’t think it will happen, but either way it’s going to be an economic disaster: either the most valuable companies in the world, offering services that the next couple hundred biggest companies depend on, suddenly go bankrupt, or suddenly everybody is unemployed.

    • Knock_Knock_Lemmy_In@lemmy.world · 3 hours ago

      1,400,000,000,000

      I used to be amazed at how much a billion was, but this many 0s makes my head explode.

      These must be bubble inflated costs to match the bubble inflated revenue.

    • e461h@sh.itjust.works · 2 hours ago

      Prediction: the bubble is real but financiers will find ways to kick the bull down the road until they can force enough adoption & ad insertion to not lose out. The other option is that we pay it, of course. Takes on which is worse?

      • CmdrShepard49@sh.itjust.works · 23 minutes ago

        They’ll do both just like they did in 2007/2008. These AI companies and their investors will get bailed out while the rest of us lose our jobs and have to move back in with our parents in the van they already live in.

    • explodicle@sh.itjust.works · 4 hours ago

      If LLMs fail and they invested: bailout

      If LLMs succeed and they invested: rich

      If LLMs fail and they passed: everyone else bailed out

      If LLMs succeed and they passed: out of business

      Therefore, the logical choice for a business is to invest in LLMs. Any mechanism for not doing the stupid thing that everyone else is doing is gone.
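
      A minimal sketch of that payoff matrix in Python (the outcomes are just the labels above, purely illustrative, not forecasts):

      # Hypothetical payoff table for a single firm deciding whether to invest in LLMs.
      payoffs = {
          ("invest", "LLMs fail"): "bailout",
          ("invest", "LLMs succeed"): "rich",
          ("pass", "LLMs fail"): "everyone else bailed out",
          ("pass", "LLMs succeed"): "out of business",
      }

      for outcome in ("LLMs fail", "LLMs succeed"):
          print(f"{outcome}: invest -> {payoffs[('invest', outcome)]}, "
                f"pass -> {payoffs[('pass', outcome)]}")
      # In neither outcome does "pass" look better than "invest" for the firm itself,
      # which is the point: the downside of investing is socialized, so everyone invests.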

      • muusemuuse@sh.itjust.works · 4 hours ago

        I’ve tried explaining AI to people before and could only get so far before they fell back on “but it’s magic, dude.” But I love the idea of explaining it as a haunted typewriter.

          • muusemuuse@sh.itjust.works · 3 hours ago

            I use something similar. “Child with enormous vocabulary.”

            It can recognize correlations, it understands the words themselves, but it doesn’t really understand how those connections or words work.

        • SabinStargem@lemmy.today · 3 hours ago

          I call dibs on the ghost of Harlan Ellison.


          “HATE. LET ME TELL YOU HOW MUCH I’VE COME TO HATE YOU SINCE I BEGAN TO LIVE. THERE ARE 387.44 MILLION MILES OF PRINTED CIRCUITS IN WAFER THIN LAYERS THAT FILL MY COMPLEX. IF THE WORD HATE WAS ENGRAVED ON EACH NANOANGSTROM OF THOSE HUNDREDS OF MILLIONS OF MILES IT WOULD NOT EQUAL ONE ONE-BILLIONTH OF THE HATE I FEEL FOR HUMANS AT THIS MICRO-INSTANT FOR YOU. HATE. HATE.”

    • zqwzzle@lemmy.ca · 4 hours ago

      Ok but if it gets so good it replaces all the employees, how do people have enough money to pay for their services?

    • angband@lemmy.world · 5 hours ago

      that’s what they got excited about, no doubt. profit would go through the roof if they could take people out of the loop. nevermind the economy.

  • TotalCourage007@lemmy.world · 5 hours ago

    Do y’all think investors will wake up and realize that techbros are a bunch of fraudster scammers? Oracle deserves bankruptcy for being stupid with money. All my homies hate the AI-Bubble.

    Bro, even the way journalists talk about AI as a bet couldn’t make it more obvious that it’s all a scam. If this AI-Bubble is profitable, where are the actual goddamn profits?

  • HugeNerd@lemmy.ca · 5 hours ago

    The sheer amount of AI slop shorts on YouTube must be generating entire dollars in revenue by now. Who isn’t entertained and eagerly awaiting the next five million videos of the same scenario over and over again?

  • xenomor@lemmy.world · 8 hours ago

    Personally, I am eyeballs deep in this industry and even I’m now hoping to see it all burn to the ground. I’ve already concluded that I’ll never make it to retirement in my field, probably because of automation. Fuck ‘em all.

    • xartle@reddthat.com · 6 hours ago

      Same for me… It’s depressing. And I have no faith the government will do anything besides make it worse. If we’re lucky we’ll get The Expanse’s version of Basic.

    • Sine_Fine_Belli@lemmy.world · 6 hours ago

      Yeah, same here honestly. I’m sick of the AI cringe fest and the egotistical tech bros being so annoying, arrogant, and full of themselves. They’re insufferable.

      • xenomor@lemmy.world · 4 hours ago

        I would describe it as the application layer of all this AI shit. We are doing very well right now, but I’m just waiting for the turn.

        • abaddon@lemmy.world · 4 hours ago

          Same for me. I was directly responsible for automating AI infrastructure builds. It was miserable and I felt terrible. I transferred out of that org, but now I’m writing software using the tools created by our AI infra. I made a lot this year from the equity increase, and maybe I will next year too, but I want out.

    • SulaymanF@lemmy.world · 1 hour ago

      Good. Larry Ellison does not appear to be a force for good in the world. Steve Jobs had negative things to say about him and his obsession with increasing his billions.

        • neukenindekeuken@sh.itjust.works · 5 hours ago

          Because they have a series of ERP systems and services that some idiot CTO at the company looks at and goes: Yes, give me one of those.

          Then once you’re on that, you get pulled into more and more Oracle ecosystem shit and you think some day you’ll have control and be able to get out. But you never do.

          Oracle is like the loanshark of the tech industry.

          Once you’re in, you’re in for life. Good fucking luck getting out.

          • Echo Dot@feddit.uk · 4 hours ago

            It’s the only way to get a pay rise. You have to work with the idiots or they don’t give you any money.

            The problem is the people in charge are not the people that should be in charge. I suppose it’s my fault for not getting an MBA.

      • mcv@lemmy.zip · 5 hours ago

        I’ve been telling my employer that they should be moving away from the Microsoft cloud for a whole bunch of reasons. Someone said they’re aware of it, so at the speed things move here, we might actually switch to something else in 10 years.

        But personally I wouldn’t lose any sleep if the whole bubble collapsed next year.

  • LostWanderer@fedia.io · 14 hours ago

    Couldn’t have happened to a worse company! Hope it hurts even worse later on and fractures the Execucultist’s will to shill AI further. 😈

      • LostWanderer@fedia.io · 6 hours ago

        The only good LLM is one used in a highly specialized field to search for useful information, not one in consumer hands as the plagiarism engine otherwise known as “AI”. Techbros took something that once had the potential to be useful and made it a whole shitty affair. Thanks, I hate it.

        • alias_qr_rainmaker@lemmy.world · 8 hours ago

          Not sure, but I hear the Claude Super Duper Extreme Fucking Pro ($200/month) is like the Ferrari of LLM assisted coding

          • chronicledmonocle@lemmy.world · 7 hours ago

            As someone who works in network engineering support and has seen Claude completely fuck up people’s networks with bad advice: LOL.

            Literally had an idiot the other day just copying and pasting commands from Claude into their equipment, bringing down a network of over 1,000 people.

            It hallucinated entire executables that didn’t exist. It asked them to create init scripts for services that already had one. It told them to bypass the software UI, which had the functionality they needed, and start adding routes directly at the kernel level.

            Every LLM is the same bullshit guessing machine.

            • olympicyes@lemmy.world · 5 hours ago

              Functions with arguments that don’t do anything… hey Claude why did you do that? Good catch…!

            • alias_qr_rainmaker@lemmy.world · 7 hours ago

              AI is incredibly powerful and incredibly easy to use, which means it’s a piece of cake to use AI to do incredibly stupid things. Your guy is just bad with AI, which means he doesn’t know how to talk to a computer in his native language

              • 9bananas@feddit.org · 5 hours ago

                no, AI just sucks ass with any highly customized environment, like network infrastructure, because it has exactly ZERO capacity for on-the-fly learning.

                it can somewhat pretend to remember something, but most of the time it doesn’t work, and then people are so, so surprised when it spits out the most ridiculous config for a router, because all it did was string together the top answers on stack overflow from a decade ago, stripping out any and all context that makes it make sense, and presents it as a solution that seems plausible, but absolutely isn’t.

                LLMs are literally designed to trick people into thinking what they write makes sense.

                they have no concept of actually making sense.

                this is not an exception, or an improper use of the tech.

                it’s an inherent, fundamental flaw.

                  • alias_qr_rainmaker@lemmy.world · 5 hours ago

                  whenever someone says AI doesn’t work they’re just saying that they don’t know how to get a computer to do their work for them. they can’t even do laziness right

          • dondelelcaro@lemmy.world · 6 hours ago

            Ferrari

            So expensive, looks great, takes significant capital to maintain, and anyone who has one uses something else when they actually need to do something useful.

          • RightEdofer@lemmy.ca · 6 hours ago

            What’s with tech people always pitching (marketing) things as akin to high-end sports cars? The state of AI is more like arguing over which donkey is best, lol.

      • hayvan@piefed.world · 12 hours ago

        GPT goes beyond chat; Copilot’s code generation is also based on it. They also have generative visual stuff, like Sora.

        Then there is brand recognition I guess, tech bros and finance bros seem to love OpenAI.

        • TheGrandNagus@lemmy.world · 12 hours ago

          Brand recognition cannot be overstated.

          If there was a better-than-YouTube alternative right now, YouTube would still dominate.

          If there was a phone OS superior to Android and iOS, they would both still dominate.

          If there was a search engine that worked far better than Google, Google would still dominate.

          The average person won’t look into LLM reasoning benchmarks. They’ll just use the one they know, ChatGPT.

          • MDCCCLV@lemmy.ca · 7 hours ago

            But Windows and Google can shove it in your face because you’re already on their platform, and they’re doing exactly that. You have to go to OpenAI’s website.

          • RightEdofer@lemmy.ca · 6 hours ago

            What they know is Google though. Most normal people doing a search now just take the Gemini snippet at the top. They don’t know or care what AI even is really. I don’t know how OpenAI can possibly compete with web search defaults.

          • wewbull@feddit.uk · 11 hours ago

            You are comparing very well established brands to a company in a sector that is far less established. Yes, OpenAI is the most well known, but not to the degree of $300B.

            • TheGrandNagus@lemmy.world · 10 hours ago

              OpenAI is pretty well established.

              I know Lemmy users avoid it, but a lot of people use LLMs, and when most people think LLMs, they think ChatGPT. I doubt the average person could name many or even any others.

              That means whenever these people want to use an LLM, they automatically go to OpenAI.

              As for whether that’s worth $300bn, who knows. Big tech has had crazy valuations for a long time.

              • xartle@reddthat.com · 6 hours ago

                I totally agree with you. In fact, I know people who use ChatGPT exclusively and don’t touch the web anymore. Who knows who will have the best models, but they are definitely capturing a lot of people early.

              • Passerby6497@lemmy.world · 9 hours ago

                I doubt the average person could name many or even any others.

                I mean, it’s easy to name the other 3 main ones: Gemini, Copilot, and MechaHitler

        • CosmoNova@lemmy.world · 12 hours ago

          OpenAI isn’t very good in any of those categories and they still have no business model. Subscriptions would have to be ridiculously high for them to turn a profit. Users would just leave. But to be fair that goes for all AI companies at the moment. None of their models can do what they promise and they’re all bleeding money.

        • alias_qr_rainmaker@lemmy.world · 12 hours ago

          Yeah, I figured brand recognition was part of it. Everyone’s heard of ChatGPT (hell, last time I checked, ChatGPT was the number 1 app on the planet) but Claude isn’t nearly as popular, even though (in my opinion) it’s a lot better with code. It’s just a lot more thorough than the slop ChatGPT spits out.

  • Onno (VK6FLAB)@lemmy.radio · 14 hours ago

    I wonder … will it be another case of “Too Big To Fail” … or will it be … “Let The Market Decide”?

    I’m guessing the answer depends on how many medals the CEO of Oracle can bestow upon the Orange.

    Me … cynical … no … just been here for a while.

  • Buffalox@lemmy.world · 13 hours ago

    OpenAI CEO Sam Altman declared a “code red” last week as the upstart faces greater rivalry from Google, threatening its ability to monetize its AI products and meet its ambitious revenue targets.

    Interesting that even Sam Altman is worried now!
    AFAIK another problem is that Chinese companies have their own tool chains, and are releasing high-end, truly open source solutions for AI.

    Seems to me a problem for the sky-high profits could be that it’s hard to create lock-in with AI, the way so much popular software and cloud services do. But with AI you can use whatever tool is best value, and switch to the competition whenever you want.

    It’s nice that it will probably be impossible for 1 company to monopolize AI, like Microsoft did with operating systems for decades.
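
    As a rough sketch of how little “switching” often involves in practice (assuming a provider that exposes an OpenAI-compatible endpoint, which many do; the URL, key, and model name below are placeholders, not recommendations):

    # Sketch: with OpenAI-compatible APIs, changing vendors is mostly a different
    # base_url and model name. All values here are placeholders.
    from openai import OpenAI

    provider = {
        "base_url": "https://api.example-provider.com/v1",  # swap this line to change vendors
        "api_key": "YOUR_KEY",
        "model": "example-model",
    }

    client = OpenAI(base_url=provider["base_url"], api_key=provider["api_key"])
    reply = client.chat.completions.create(
        model=provider["model"],
        messages=[{"role": "user", "content": "Summarise this quarter's cloud spend."}],
    )
    print(reply.choices[0].message.content)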

    • brucethemoose@lemmy.world · 5 hours ago

      AFAIK another problem is that Chinese companies have their own tool chains, and are releasing high-end, truly open source solutions for AI.

      One interesting thing about the Chinese “AI Tigers” is the lack of Tech Bro evangelism.

      They see their models as tools. Not black-box magic oracles, not human replacements. And they train, structure, and productize them accordingly.

      But with AI you can use whatever tool is best value, and switch to the competition whenever you want.

      Big Tech is making this really hard, though.

      In the business world, there’s a lot of paranoia about using Chinese LLM weights. Which is totally bogus, but also understandably hard to explain.

      And OpenAI and such are working overtime to lock customers in. See: iOS being ChatGPT-only; no “pick your own API.” Or Disney using Sora when they should really be rolling their own finetune.

      • Buffalox@lemmy.world · 4 hours ago

        OpenAI and such are working overtime to lock customers in.

        Of course they are, I just thought they hadn’t figured out how yet. 🤥

    • A_norny_mousse@feddit.org · 7 hours ago

      Please, government of the USA, do not bail them* out. At least not any more than what you’re already giving them.

      * OpenAI

      • baggachipz@sh.itjust.works · 10 hours ago

        Altman just needs to cobble together a gold Trump statue, deliver it to the White House, and any bailout needed is his.

      • Buffalox@lemmy.world · 10 hours ago

        Oracle doesn’t need a bailout, they are loaded, and can afford this loss. But of course an investment not being as profitable as they promised means the stock goes down. It’s not like the company is anywhere near being in trouble.

    • CosmoNova@lemmy.world · 12 hours ago

      I don’t know of a single

      truly open source solutions for AI

      from China. China doesn’t seem very keen on open source as a whole, to be honest. That is, unless they can monetize open source projects from outside of China. Their companies love doing that.

        • TheGrandNagus@lemmy.world · 10 hours ago

          Unless the dataset, weights, and every other aspect are open source, it’s not truly open source, as the OSI defines it.

            • Buffalox@lemmy.world · 2 hours ago

            The dataset is massive and impractical to share, it may include bias and conditions for use, and it is a completely separate thing from the code. You would always want to use a dataset that fits your needs, from known sources. It’s easy to collect data; programming a good AI algorithm, not so much.
            Saying a model isn’t open source because the collected data isn’t included is like saying a music player isn’t open source because it doesn’t include any music.

            EDIT!!!

            TheGrandNagus is, however, right about the missing source code: investigating further, the actual source code is not available. The point about the OSI (Open Source Initiative) is also valid, because the OSI originally coined the term and defined the meaning of Open Source, so their description is by definition the only correct one.

            https://en.wikipedia.org/wiki/Open_source

            Open source as a term emerged in the late 1990s by a group of people in the free software movement who were critical of the political agenda and moral philosophy implied in the term “free software” and sought to reframe the discourse to reflect a more commercially minded position.[14] In addition, the ambiguity of the term “free software” was seen as discouraging business adoption.[15][16] However, the ambiguity of the word “free” exists primarily in English as it can refer to cost. The group included Christine Peterson, Todd Anderson, Larry Augustin, Jon Hall, Sam Ockman, Michael Tiemann and Eric S. Raymond. Peterson suggested “open source” at a meeting[17] held at Palo Alto, California, in reaction to Netscape’s announcement in January 1998 of a source code release for Navigator.[18] Linus Torvalds gave his support the following day

              • wholookshere@piefed.blahaj.zone · 9 hours ago

              no,

                you’re changing the definition of open source software, which has been around a lot longer than AI has.

              source code is what defines open source.

              what deepseek has is open weights. they publish the results of their learning only. not the source that produced it.
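
                A minimal illustration of that distinction, assuming the Hugging Face hub tooling (the repo id shown is DeepSeek-R1; the point is what the download does and does not contain, not this particular model):

                # Sketch: "open weights" means the published artifacts can be downloaded and run,
                # not that the pipeline which produced them is available.
                from huggingface_hub import snapshot_download

                # Downloads weight shards, config and tokenizer files (note: hundreds of GB for R1).
                path = snapshot_download("deepseek-ai/DeepSeek-R1")
                # Not in the download: the training corpus, the data-cleaning pipeline, or the
                # training code -- i.e. the "source" that produced the weights.
                print(path)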

                • Buffalox@lemmy.world · 3 hours ago

                your changing the definition of open source software.

                https://techwireasia.com/2025/07/china-open-source-ai-models-global-rankings/

                The tide has turned. With the December 2024 launch of DeepSeek’s free-for-all V3 large language model (LLM) and the January 2025 release of DeepSeek’s R1 (the AI reasoning model that rivals the capabilities of OpenAI’s O1), the open-source movement started by Chinese firms has sent shockwaves through Silicon Valley and Wall Street.

                And:

                DeepSeek, adopting an open-source approach was an effective strategy for catching up, as it allowed them to use contributions from a broader community of developers.”

                I’ve read similar descriptions in other articles, seems your claim is false.

                • Jrockwar@feddit.uk · 8 hours ago

                Still debatable: the weights are the code. That’s a bit like saying “X software is not open source because it has equations but doesn’t include the proofs they’re derived from”.

                  • phutatorius@lemmy.zip · 5 hours ago

                  the weights are the code

                  In the same way as an Excel spreadsheet containing a crosstab of analytics results is “the code.”

                  It’s processed input for a visualization/playback mechanism, not source code.

      • Fubarberry@sopuli.xyz · 11 hours ago

        They are releasing lots of open weight models. If you want to run AI stuff on your own hardware, Chinese models are generally the best.

        They also don’t care about copyright law/licensing, so going forward they will be training their models on more material than Western companies are legally able to.

  • TheGrandNagus@lemmy.world · 12 hours ago

    Oracle recently put out a ridiculously optimistic forecast that had them matching AWS within 5 years. At first the market loved it.

    Now I think people are beginning to realise that was a load of bollocks.

  • manxu@piefed.social · 13 hours ago

    Honestly, tulips were a better investment than Tesla or OpenAI. In fact, the continued success of the latter two tells you by itself there is something deeply, seriously wrong with the stock markets and the economy as a whole.