I run Ubuntu Server's headless base install with a self-curated, minimal set of GUI packages on top (X11, awesome, PulseAudio, Thunar), but there's no reason you couldn't install KDE with Wayland instead. Building the system up yourself gets you really far in the anti-bloatware department, and the breadth of wiki/Google/GPT material centered on Debian/Ubuntu means you can figure out just about any issue. I do this on a ~$200 random old Dell off eBay + a 3050 6GB (slot power only).
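If anyone wants to replicate that, the gist is just a stock server install plus a handful of explicitly chosen packages. A rough sketch of the idea (package names are from memory and may vary slightly between releases, so treat it as a starting point rather than a recipe):

```sh
# Start from a stock Ubuntu Server install, then add only the pieces you need.
sudo apt update
sudo apt install --no-install-recommends xorg xinit awesome pulseaudio thunar
# --no-install-recommends is what keeps the bloat down: you get the package,
# not the pile of recommended desktop extras that normally rides along with it.
# No display manager either -- just launch the window manager from a TTY:
echo 'exec awesome' > ~/.xinitrc
startx
```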
For lighter gaming I'll use the Ubuntu PC directly, but for anything heavier I have a Windows 11 PC in the basement whose only job is to pipe Steam over Sunshine/Moonlight.
It is the best of both worlds.
The best way to learn is by doing!
I just built my own automation around their official documentation; it’s fantastic.
https://www.wireguard.com/#conceptual-overview
Vyatta and Vyatta-based gear (EdgeRouter, etc.) I would say is good enough for the average consumer. But if we're deep enough in the weeds to be arguing the pros and cons of raw WireGuard vs. Tailscale, I think we're well past accepting a budget consumer router as adequately meeting these and other needs.
Also, you don't need port forwarding and DDNS for internal routing. My phone and laptop both have automation in place for switching WireGuard profiles based on the network SSID: at home, all traffic is routed locally; outside my network, everything goes through DDNS/port forwarding.
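The switching itself is nothing fancy. On the laptop it's roughly a NetworkManager dispatcher hook like the sketch below; the SSID, the profile names (wg-home / wg-remote) and the use of iwgetid are all placeholders for however you detect the network:

```sh
#!/bin/sh
# /etc/NetworkManager/dispatcher.d/90-wireguard -- simplified sketch
# Picks a WireGuard profile based on the Wi-Fi SSID after every (re)connect.
HOME_SSID="MyHomeNetwork"        # placeholder
CURRENT_SSID="$(iwgetid -r)"     # empty when not associated to Wi-Fi

if [ "$2" = "up" ]; then
    if [ "$CURRENT_SSID" = "$HOME_SSID" ]; then
        # At home: profile whose endpoint is the server's LAN address
        wg-quick down wg-remote 2>/dev/null || true
        wg-quick up   wg-home   2>/dev/null || true
    else
        # Anywhere else: profile whose endpoint is the DDNS name + forwarded port
        wg-quick down wg-home   2>/dev/null || true
        wg-quick up   wg-remote 2>/dev/null || true
    fi
fi
```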
If you're really paranoid about it, you could always skip the port-forward route and set up a WireGuard-based mesh yourself, using an external VPS as a relay. That way you don't have to open anything directly, and internal traffic still routes when you don't have an internet connection at home. It's basically what Tailscale is, except you control the keys, you have better insight into who is using them, and you reverse the authentication paradigm from external to internal.
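Roughly, the VPS just becomes a dumb hub that every other machine dials out to. Something like this on the relay (all addresses, names and keys below are made-up placeholders):

```
# /etc/wireguard/wg0.conf on the VPS relay -- illustrative only
[Interface]
Address = 10.10.0.1/24
ListenPort = 51820
PrivateKey = <vps-private-key>

# Peer 1: the homelab (it dials out to the VPS, no inbound port at home)
[Peer]
PublicKey = <homelab-public-key>
# tunnel address plus the home LAN routed behind it
AllowedIPs = 10.10.0.2/32, 192.168.1.0/24

# Peer 2: laptop / phone
[Peer]
PublicKey = <laptop-public-key>
AllowedIPs = 10.10.0.3/32
```

Both the homelab and the laptop/phone point their Endpoint at the VPS and set PersistentKeepalive, so nothing needs an inbound port opened at home; the VPS also needs IP forwarding enabled (net.ipv4.ip_forward=1) so the peers can reach each other through it.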
Tailscale proper gives you an external dependency (and a lot of security risk), but the underlying technology (WireGuard) doesn't have the same limitation. You should just deploy WireGuard yourself; it's not as scary as it sounds.
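For scale: the whole "deployment" is generating a key pair per machine, writing one small config file, and exchanging public keys. Along these lines (the key-generation one-liner is straight from their quick-start; wg0 is just the conventional interface name):

```sh
# generate a key pair (once per machine)
wg genkey | tee privatekey | wg pubkey > publickey

# put the keys into /etc/wireguard/wg0.conf on each side, then:
sudo wg-quick up wg0
sudo systemctl enable wg-quick@wg0   # bring the tunnel up on boot
```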
Fail2ban and containers can be tricky, because under the hood, container rules will often insert themselves ahead of host rules in iptables. The Docker documentation has a good write-up on how to solve this for their implementation:
https://docs.docker.com/engine/network/packet-filtering-firewalls/
For your use case specifically: if you're using VMs only, you could run it inside any VM that exposes traffic, but for containers you'll have to run fail2ban on the host itself. I'm not sure how LXC handles this, but I assume it's probably similar to Docker.
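For the Docker case, the approach I've seen recommended most often is to make fail2ban land its bans in the DOCKER-USER chain (the chain that write-up describes) instead of INPUT, since forwarded container traffic never traverses INPUT. A sketch of what that looks like in jail.local on the host; the jail, ports and log path are just examples:

```
# /etc/fail2ban/jail.local on the Docker host -- illustrative only
[DEFAULT]
# Traffic to containers is FORWARDed, not delivered to INPUT, so bans have to
# go into Docker's DOCKER-USER chain to actually take effect.
chain = DOCKER-USER

[nginx-http-auth]
enabled = true
port    = http,https
logpath = /var/log/nginx/error.log
```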
The simplest solution would be to just put something physically between your hypervisor and the Internet (a Raspberry-Pi-based firewall, etc.).
Do you know why they're illegal? Is there some danger to them?
You should consider reversing the roles. There's no reason your homelab can't be the client and your VPS the server. Once the WireGuard virtual network exists, traffic doesn't really care which end was the client and which was the server. It saves you from opening a port to attackers on your home network.
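Concretely, the homelab end just looks like any roaming client dialing out; something like this (addresses, keys and the VPS hostname are placeholders):

```
# /etc/wireguard/wg0.conf on the homelab -- it dials out, nothing listens at home
[Interface]
Address = 10.10.0.2/24
PrivateKey = <homelab-private-key>
# note: no ListenPort needed on this side

[Peer]
PublicKey = <vps-public-key>
Endpoint = vps.example.com:51820
AllowedIPs = 10.10.0.0/24
# outbound keepalives hold the NAT mapping open, so the VPS can always
# reach back in without any port forward on the home router
PersistentKeepalive = 25
```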
Who decides what "truth" is? In concept I'm with you, but in practice that sounds like a nightmare. See: mainland China.
Governments should be the arbiters of law and recommendations, not the arbiters of truth.