Have not played it yet, but for me, personally, E33 looks like one of those “better to watch the cutscenes on YouTube” games.
I bounced off Witcher 3 too. Watched friends play a lot of RDR2, not interested.
…BG3 was sublime though. I don’t even like D&D combat, or ‘Tolkien-esque’ fantasy, but holy hell. It’s gorgeous, it just oozes charisma, and was quite fun in coop.


I dunno why everyone is so skeptical.
If it does Android apps, it’s got everything ‘normal’ users could want. It’s got a massively anticompetitive megacorp behind it. It’s ‘lean’ and runs on cheap computers and is compatible with work stuff. And it doesn’t bork itself with spam like Windows does.
…How could it not catch on?
99% of the population doesn’t actively seek out modularity or privacy, and many don’t really know concepts like filesystems, URLs, or desktops anyway. They get whatever’s cheapest at Best Buy, and that’s about to be Android laptops, if Google so desires.


Uh, simple.
Clear your chat history, and see if it remembers anything.
LLMs are, by current definitions, static. They’re like clones you take out of cryostasis every time you hit enter; nothing you say has a lasting impact on them. Meanwhile, the ‘memory’ and thinking of a true AGI are not separable; it has a state that changes with time, and everything it experiences affects its output.
…There are a ton of other differences. Transformer models trained with glorified linear regression are about a million miles away from AGI, but this one thing is easy to test right now. It’d work as an LLM-vs-human test too.
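To make the ‘static’ point concrete, here’s a minimal sketch (the base URL, key, and model name are placeholders for whatever OpenAI-compatible endpoint you use). The model’s only “memory” is the message list the client chooses to resend each turn; the weights never change between calls:

```python
# Minimal statelessness demo. Assumes some OpenAI-compatible endpoint;
# the base URL, key, and model name below are placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")

# Turn 1: tell the model something.
client.chat.completions.create(
    model="placeholder",
    messages=[{"role": "user", "content": "My name is Alice."}],
)

# Turn 2: a fresh request with an empty history. Nothing from turn 1
# persists, because the model's only "state" is the prompt you send it.
reply = client.chat.completions.create(
    model="placeholder",
    messages=[{"role": "user", "content": "What's my name?"}],
)
print(reply.choices[0].message.content)  # it has no idea
```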
Open WebUI isn’t very ‘open,’ and it was kinda problematic last I saw. Same with ollama; you should absolutely avoid both.
…And actually, why is Open WebUI even needed here? For an embeddings model or something? All the browser should need is an OpenAI-compatible endpoint.
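For scale, the entire integration surface is roughly one POST request; this is a sketch, with the URL and key as placeholders, and any hosted provider or local server speaking the OpenAI-compatible protocol would accept the same shape:

```python
# One POST to an OpenAI-compatible endpoint is all the plumbing needed.
# URL and key are placeholders; swap in any provider or a local server.
import requests

resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    headers={"Authorization": "Bearer none"},
    json={
        "model": "placeholder",
        "messages": [{"role": "user", "content": "Hello"}],
    },
)
print(resp.json()["choices"][0]["message"]["content"])
```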
Yeah that’s really awesome.
…But it’s also something the anti-AI crowd, which is a large part of FF’s userbase, would hate once they realize it’s an ‘LLM’ doing the translation. The well has been poisoned by said CEOs.
I mean, there are literally hundreds of API providers. I’d probably pick Cerebras, but you can take your pick from any jurisdiction and any privacy policy.
I guess you could rent an on-demand cloud instance yourself too, that spins down when you aren’t using it.
Cool. AFAIK FSR4 uses instructions RDNA3 doesn’t even have, so I’d be interested to see if they can squeeze decent performance out of it.
I mean, you can run small models on mobile now, but they’re mostly good as a cog in an automation pipeline, not at (say) interpreting English instructions on how to alter a webpage.
…Honestly, open-weight model APIs for one-off calls like this are not a bad stopgap. They cost basically nothing, you can use any provider you want, it’s power efficient, and if you’re on the web, you have internet anyway.


Hear me out.
This could actually be cool:
- If I could, say, mash in “get rid of the junk on this page,” or “turn the page this color,” or “navigate this form for me.”
- If it could block SEO and AI slop in search results and pages, including images.
- If I could pick my own API (including local) and sampling parameters.
- If it didn’t preload any model into RAM.
…That’d be neat.
What I don’t want is a chatbot or summarizer or deep researcher, because there are 7000 bajillion of those, and there is literally no advantage to FF baking one in like every other service on the planet.
And… Honestly, PCs are not ready for local LLMs. Not even the most exotic experimental quantization of Qwen3 30B is ‘good enough’ to be reliable for the average person, and it still takes too much CPU/RAM. And whatever Mozilla ships would be way worse.
That could change with a good BitNet model, but no one with money has pursued one yet.


Elon is more aligned with Turkey’s politics though, right?
Tim Cook is gay. I don’t think there’s much pretense of him liking this one bit.


Devil’s advocate:
…It’s kinda strategic for Apple to “stay.”
Let’s say they refuse and get kicked out of China. Do you think the Chinese tech giants are going to put up a fight over dating apps?
Seems better for Apple to keep a finger in the pie and do what they can get away with, if only for the LGBTQ folks.
Another example I used to point to is Steam, which got away with sneaking a lot of culture into China under the government’s nose, precisely because they didn’t make loud trouble.


…Okay.
Then why are these apps being removed?


To be fair, they are too big.
They just have too many employees and too much overhead. The way they’re organized, they’re stuck making gigantic-budget, milquetoast, broad-appeal games just to attempt the sales they need to break even, with all the inefficiency that comes at that team size… unless they fire a ton of people and split up the rest.
My observation over the past decade is that “medium size” is the game dev sweet spot. Think Coffee Stain, Obsidian, and so on.


I hate to break it to you, but people on Lemmy call for the death of politicians and their families all the time.
It’s… icky.
It’s made me come really close to leaving the whole platform, and the .world admins’ sentiment against it is about the only thing that’s stopped me (even if they don’t enforce it very strictly).


There’s certainly no conflict of interest there…


Google gets a cut from the Google Ads click, which takes the user directly to the Play Store (or, if on desktop, the Chrome extension store).
If it’s some free shovelware app, they get a cut of the ads spammed onto the user’s screen. If it’s a sham subscription app, they get a cut of that. I see this a lot when test-clicking ads these days.
If it’s outright phishing, that’s a fair point; they don’t get a direct cut of the scam, other than the attention it drives toward their app store and the data they collect for the user’s profile. But the point I’m trying to make is that it’s incredibly hypocritical to paint third-party apps (and indeed any competing app store) as a danger when they do such a poor job policing their own store. They may have a point, but it doesn’t really tackle scamware unless they change their moderation habits.


I mean, I think the point is that any bot asking for advice is a total lie.
So is a human who’s being disingenuous.
The gratifying part of this kind of community is helping people with feedback. You don’t get that if you’re talking to a black hole.
The point I’m making is… if you wanted a simulated asklemmy, some specialized LLM agents could actually emulate it very well and hold convincingly emotional conversations. But what’s the point? It’s not about whether the text is AI-generated; it’s about being earnest.


+1
‘Fuckwit’ or ‘spambot’ is not a synonym for LLM slop. There are plenty of human or sweatshop posts.


https://old.reddit.com/r/opensource/comments/1kfhkal/open_webui_is_no_longer_open_source/
https://old.reddit.com/r/LocalLLaMA/comments/1mncrqp/ollama/
Basically, they’re both using their popularity to push proprietary bits, which their development is shifting toward. They’re enshittifying.
In addition, ollama is just a demanding leech on llama.cpp that contributes nothing back, while hiding the connection to the underlying library at every opportunity. They do scummy things like:

- Renaming models for SEO, like “Deepseek R1,” which is really the 7B distill.
- Shipping really bad default settings (like a 2K default context limit, and defaulting to imatrix-free quants), which give local LLM runners a bad impression of the whole ecosystem.
- Messing with chat templates and, on top of that, creating other bugs that don’t exist in base llama.cpp.
- Sometimes lagging behind on GGUF support.
- Other times, making their own sloppy implementations for ‘day 1’ support of trending models. These often work poorly; the support is just there for SEO. It also leads to some public GGUFs not working with the underlying llama.cpp library, or working inexplicably badly, polluting llama.cpp’s issue tracker.
I could go on and on with examples of their drama, but needless to say, most everyone in localllama hates them. Even the base llama.cpp maintainers, who are nice devs, hate them.
You should use llama.cpp’s llama-server as an API endpoint instead. Alternatively: the ik_llama.cpp fork, kobold.cpp, or croco.cpp. Or TabbyAPI as a GPU-focused quantized runtime. Or SGLang if you just batch small models. Or llama-cpp-python, or LM Studio; literally anything but ollama.
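To make the llama-server suggestion concrete, here’s a minimal sketch; the model path, port, and context size are examples, and note you set the context length yourself instead of inheriting a tiny default:

```python
# llama-server exposes an OpenAI-compatible API, so standard clients
# work unchanged. Example launch (shell), with an explicit context size:
#
#   llama-server -m model.gguf -c 8192 --port 8080
#
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")
reply = client.chat.completions.create(
    model="model.gguf",  # llama-server serves whichever model it loaded
    messages=[{"role": "user", "content": "ping"}],
)
print(reply.choices[0].message.content)
```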
As for the UI, that’s a muddier answer that totally depends on what you use LLMs for. I use mikupad for its ‘raw’ notebook mode and logit displays, but there are many options. llama.cpp has a pretty nice built-in one now.