  • One way to go about the network security aspect:

    Make a separate LAN (optionally a VLAN) for the internals of your hosted services, separate from the one you use for internet access and your main computer. At the start this LAN will probably have only two machines (three if you bring the NAS into the picture separately from Jellyfin):

    • The server running Jellyfin. Not connected to your main network or the internet.

    • A “bastion host” with at least two network interfaces: one facing outwards and one facing inwards. This is not a router (no IP forwarding) and should be separate from your main router; it is the bridge between the two networks. Here you can run an (optional) VPN gateway and an SSH server, and also an HTTP reverse proxy to expose Jellyfin to the outside world. If machines on the inside need to reach out (e.g. for package updates), you can run an HTTP forward proxy for that.
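    As a sketch of the reverse-proxy piece (hostname, certificate paths, and internal address are all hypothetical; Jellyfin's default HTTP port is 8096), an nginx server block on the bastion could look something like this:

    ```nginx
    # /etc/nginx/conf.d/jellyfin.conf -- example only
    server {
        listen 443 ssl;
        server_name jellyfin.example.com;          # hypothetical public name

        ssl_certificate     /etc/ssl/jellyfin.crt; # your cert/key paths
        ssl_certificate_key /etc/ssl/jellyfin.key;

        location / {
            proxy_pass http://10.0.10.2:8096;      # Jellyfin box on the internal LAN
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

            # Jellyfin uses WebSockets for some features
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
        }
    }
    ```

    Note that only the bastion listens on the outward interface; the Jellyfin box itself is never directly reachable from outside.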

    When it’s just two machines you can connect them directly with a LAN cable; when you have more, you add a cheap network switch.
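    For the direct-cable case, addressing can be as simple as a private subnet shared by the two interfaces. A sketch with hypothetical interface names and addresses (this sets things up one-off; normally you'd persist it in your distro's network configuration):

    ```shell
    # On the bastion's inward-facing NIC (interface names vary per machine):
    ip link set eth1 up
    ip addr add 10.0.10.1/24 dev eth1

    # On the Jellyfin server:
    ip link set eth0 up
    ip addr add 10.0.10.2/24 dev eth0

    # Deliberately no default route on the Jellyfin box:
    # it only knows the internal subnet.
    ```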

    If you don’t have enough hardware to split machines up like this, you can do similar things with VMs on one box, but that’s a lot of extra complexity for beginners, and you probably have enough new things to familiarize yourself with as it is. Separating physically instead of virtually is a lot simpler to understand and also more secure.

    I recommend firewalld for the system firewall.
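    With firewalld on the bastion, the split maps naturally onto zones. A sketch, not a complete hardening guide (interface names are hypothetical):

    ```shell
    # Outward-facing NIC: restrictive zone, only the exposed services.
    firewall-cmd --permanent --zone=public --change-interface=eth0
    firewall-cmd --permanent --zone=public --add-service=https
    firewall-cmd --permanent --zone=public --add-service=ssh

    # Inward-facing NIC: separate zone for the service LAN.
    firewall-cmd --permanent --zone=internal --change-interface=eth1

    firewall-cmd --reload
    ```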

  • The concept is attractive.

    Since back before “atomic” and “immutable” were fashionable buzzwords, I’ve had a few Alpine installations running something like this. Their installer supports it. https://wiki.alpinelinux.org/wiki/Immutable_root_with_atomic_upgrades

    I guess I’m also not alone in having been running OpenWrt with atomic upgrades for many years.

    Since then I’ve been running a ublue fork (Aurora) for a while now. Forking it and running the builds on my own infra instead of relying on their GitHub works after hacking up the workflow files, but it’s quite redundant and inefficient, with IMO one too many intermediate layers (kinoite -> akmods -> main -> aurora/silverblue/bazzite -> iso) downloading the same things multiple times despite spending considerable overhead on caching. It’s clear that building outside of their GitHub org is not really actively supported.

    Also tried openSUSE MicroOS (Aeon) for a while a year or two back. I want to like it, but I find zypper and transactional-update pretty uncomfortable and TBH sometimes still confusing to work with. Installing it on encrypted RAID was daunting, IIRC. Rough edges. Enough out-of-date docs on the official site to make the Debian wiki look like the ArchWiki in comparison.

    KDE Linux looks promising but it was still in a very early and undocumented stage last I looked. Great to see the progress.

    More recently I’ve been looking at Arkane Linux and have been using it for some months now. It’s an immutable distro with an Arch base. Much easier to customize and maintain than the ublue options, with a lot less time spent triggering and waiting for builds - while pulling less stuff from third-party servers in the process - and an easy way to fork packages by cloning and submoduling an AUR repo. A lot more straightforward to make work without relying on GitHub. If you’re looking at rolling your own builds and are comfortable with Arch, I highly recommend checking it out. My fav so far.

    https://arkanelinux.org/

    https://codeberg.org/arkanelinux/arkdep

    Given the self-contained nature of Debian - cloning the Debian sources is enough to do a complete offline build of everything - I think it’d be the most interesting base for a sustainable immutable distro unless you go to the opposite end with “distroless” (no comment). Looking forward to one.