Yeah, the usual startup approach. Burn investor money to get into the market (or in this case create a market) by offering services below cost. Once they have enough users and their investors want their money back, they’ll ramp up prices.
Problem is they are competing with cheap web services like DeepSeek and free local models. Those alternatives are gonna become more popular when ChatGPT starts charging.
They are spending like crazy in the hope of some innovation that will give them an advantage that others can’t copy for cheap. That is a very difficult thing to accomplish. I bet they will fail. That money ain’t coming back.
I’ve been looking into local models lately, mostly out of vague paranoia that I should get one up and running before it becomes de facto illegal for normal people to own due to some kind of regulatory capture. Seems fairly doable at the moment, though not particularly user-friendly.
They’re OK. It’s kinda amazing how much data can be lossily compressed into a 12 GB model. They don’t really start to get comparable to the large frontier models until 70B+ parameters, and you’d need serious hardware to run those.
Oh, I mean with setup. Like, I can download Ollama and a basic model fine enough and get a generic chatbot. But if I want something that can scan through PDFs with direct citation (like the Nvidia LLM) or play a character, then suddenly it’s all Git repositories and cringey YouTube tutorials.
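For what it’s worth, the citation part isn’t magic. A rough sketch of the retrieval half in plain Python (no actual PDF parsing or LLM calls here — it assumes you’ve already pulled text out per page, e.g. with a PDF library, and just ranks pages by naive keyword overlap so you can quote page numbers next to the model’s answer):

```python
def tokenize(text):
    # Crude lowercase word split; real systems use proper tokenizers or embeddings.
    return set(text.lower().split())

def top_pages(question, pages, k=2):
    """Rank pages by keyword overlap with the question.

    pages is a list of per-page text strings. Returns (page_number, score)
    pairs, 1-indexed, so the page numbers can be quoted as citations.
    """
    q = tokenize(question)
    scored = [(i + 1, len(q & tokenize(p))) for i, p in enumerate(pages)]
    scored.sort(key=lambda t: (-t[1], t[0]))  # best score first, then page order
    return [t for t in scored[:k] if t[1] > 0]

# Toy stand-in for extracted PDF pages:
pages = [
    "Ollama runs local models on your own hardware.",
    "Quantization shrinks model weights at some quality cost.",
    "Frontier models are served from large GPU clusters.",
]
hits = top_pages("how does quantization affect model quality", pages)
# hits -> [(2, 3)] : page 2 matched 3 question words, so you'd cite page 2.
```

The frameworks in those Git repos are mostly doing a fancier version of this (embeddings instead of keyword overlap, chunking instead of whole pages) and then feeding the retrieved chunks plus their page numbers into the model’s prompt.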
But that’s what they wanted anyway, isn’t it?
Burning shitloads of money.
Waiting until they can later, finally, rule the world.
That would be nice. I don’t know how to do that other than programming it yourself with something like LangGraph.
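The “program it yourself” route is less scary than it sounds. A hypothetical sketch of the idea in plain Python — a few functions passing a state dict down a pipeline, which is roughly what a framework like LangGraph orchestrates for you (none of this is the actual LangGraph API; the document text and query here are made up):

```python
def load_document(state):
    # Stand-in for PDF extraction; a real node would read files here.
    state["text"] = "page one text. page two text."
    return state

def retrieve(state):
    # Stand-in retrieval: keep the chunks that mention the query word.
    chunks = state["text"].split(". ")
    state["context"] = [c for c in chunks if state["query"] in c]
    return state

def answer(state):
    # Stand-in generation: a real node would call a local model (e.g. via Ollama).
    state["answer"] = f"Based on: {state['context']}"
    return state

def run_pipeline(query):
    # A linear "graph": each step reads and extends the shared state.
    state = {"query": query}
    for step in (load_document, retrieve, answer):
        state = step(state)
    return state

result = run_pipeline("two")
```

The frameworks mostly add branching, retries, and persistence on top of this shape; the core is just state flowing through steps.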