

- For LLM hosting, ik_llama.cpp. You can run really gigantic models at acceptable speeds thanks to its hybrid CPU/GPU focus, at higher quality and speed than mainline llama.cpp, and it has several built-in UIs (a minimal client sketch follows the list).
- LanguageTool, for self-hosted grammar/spelling/style checking (see the second sketch below).
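
A minimal sketch of talking to a locally running ik_llama.cpp server from Python. It assumes you have already started its llama-server on localhost:8080 (the mainline llama.cpp default port) and that the fork keeps the OpenAI-compatible /v1/chat/completions route; the port, route, and prompt here are assumptions for illustration, not part of the original recommendation.

```python
# Sketch: query a locally hosted ik_llama.cpp server.
# Assumes llama-server is already running on localhost:8080 with a model
# loaded, and that the OpenAI-compatible /v1/chat/completions endpoint
# (inherited from mainline llama.cpp) is available.
import json
import urllib.request


def ask(prompt: str, base_url: str = "http://localhost:8080") -> str:
    payload = json.dumps({
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }).encode("utf-8")
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # The OpenAI-style response nests the reply under choices[0].message.
    return body["choices"][0]["message"]["content"]


if __name__ == "__main__":
    print(ask("Why does hybrid CPU/GPU inference help with very large models?"))
```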

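And a similar sketch for a self-hosted LanguageTool server, using its HTTP check API. It assumes the standalone server is listening on localhost:8081 (its usual default); the port and language code are assumptions you would adjust to your setup.

```python
# Sketch: check text against a self-hosted LanguageTool server.
# Assumes the standalone server is running on localhost:8081 and exposes
# the /v2/check endpoint (form-encoded POST with "text" and "language").
import json
import urllib.parse
import urllib.request


def check(text: str, base_url: str = "http://localhost:8081") -> list[dict]:
    data = urllib.parse.urlencode({
        "text": text,
        "language": "en-US",
    }).encode("utf-8")
    # urllib sends this as a form-encoded POST, which /v2/check expects.
    with urllib.request.urlopen(f"{base_url}/v2/check", data=data) as resp:
        return json.load(resp)["matches"]


if __name__ == "__main__":
    for match in check("This sentense has a speling mistake."):
        suggestions = [r["value"] for r in match["replacements"]]
        print(match["message"], "->", suggestions)
```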

It’ll be interesting to see how Discord enshittifies.
It’s the default destination for the niche-interest “cozy web,” and they could go down several paths.