Rant: I keep seeing people run their whole stack on a single Raspberry Pi and then act surprised when an SD card dies and six months of data and config evaporate. Selfhosting is awesome, but if you actually depend on services (Nextcloud, homeserver, backups, VPN) you need a tiny bit of ops discipline, not just duct-taped enthusiasm.

Reality check and a practical plan I actually use: run a small hybrid setup so failures are contained and restores are trivial.

Hardware picks (budget to sensible):

  • Budget: reused Intel NUC i3/i5 with 16 GB RAM and an NVMe SSD for the orchestrator. Cheap, low-power, and reliable. Use a Pi for low-risk experiments only.
  • Storage: Odroid HC4 or an entry-level mini NAS with SATA bays for bulk storage. Prefer drives in a simple mirrored pool - ZFS if you want checksums and snapshots.
  • For production-y reliability: a small 1U rack server or a used Supermicro with ECC RAM and an E3 CPU if you plan on ZFS and many VMs.
  • UPS: a small APC/back-UPS with USB shutdown support. Test it quarterly.

Software stack I run and recommend: Proxmox as the hypervisor, LXC containers for lightweight system services, Docker for apps that expect it, Traefik as the reverse proxy, Unbound for local DNS, WireGuard for remote access. Use Nextcloud or Syncthing for file sync, Postgres for databases, and MinIO or an S3-compatible object store for backups/uploads.
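As a sketch of how that app layer hangs together, here's a minimal docker-compose fragment with Traefik in front of one app. The hostname, image tags, and plain-HTTP entrypoint are placeholder assumptions; a real setup would add a TLS entrypoint and cert resolver.

```yaml
# Hypothetical compose sketch: Traefik discovering one app via Docker labels.
# "cloud.example.lan" and the image versions are assumptions, not gospel.
services:
  traefik:
    image: traefik:v3.1
    command:
      - --providers.docker=true
      - --providers.docker.exposedbydefault=false   # opt-in per container
      - --entrypoints.web.address=:80
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro

  nextcloud:
    image: nextcloud:29
    labels:
      - traefik.enable=true
      - traefik.http.routers.nextcloud.rule=Host(`cloud.example.lan`)
      - traefik.http.routers.nextcloud.entrypoints=web
      - traefik.http.services.nextcloud.loadbalancer.server.port=80
```

The `exposedbydefault=false` bit matters: it means only containers you explicitly label get routed, which pairs nicely with the version-pinning discipline below.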

Backup and restore policy (do this, now):

  • Files: Borgmatic with encrypted backups, rotated, stored locally and mirrored offsite with rclone to an S3 target.
  • Databases: regular logical dumps (pg_dump/mysqldump) taken before the file backup so snapshots are consistent. Drive them with cron or systemd timers.
  • Images: monthly full disk image with Clonezilla or a simple image tool so you can rebuild a dead disk quickly.
  • Test restores monthly. A backup that you never restore is just noise.
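A nightly run of the above can be one small script. This is a sketch under assumptions: "mydb", the paths, and the "s3remote" rclone remote are placeholders, and borgmatic's repo and retention settings live in its own config. The `command -v` guards just keep the sketch safe to dry-run on a box that doesn't have the tools installed yet.

```shell
#!/bin/sh
# Nightly backup sketch: DB dump -> borgmatic -> offsite mirror.
# All names and paths are placeholders; adapt to your setup.
STAMP=$(date +%F)                      # e.g. 2024-01-31, used to name the dump
DUMP_DIR=${DUMP_DIR:-/var/backups/db}
mkdir -p "$DUMP_DIR"

# 1. Logical DB dump first, so the file backup captures a consistent snapshot.
command -v pg_dump >/dev/null &&
  pg_dump -Fc mydb > "$DUMP_DIR/mydb-$STAMP.dump"

# 2. Encrypted, deduplicated file backup (repo + retention from borgmatic's config).
command -v borgmatic >/dev/null &&
  borgmatic --verbosity 1

# 3. Mirror the local Borg repo offsite to an S3 target.
command -v rclone >/dev/null &&
  rclone sync /var/backups/borg "s3remote:homelab-backups"

echo "backup run finished"
```

Hook it to a systemd timer (or cron) and you have the whole policy in one auditable place.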

Operational rules and troubleshooting cheats:

  • Staging host: have one small “canary” VM/container where you apply updates first. If it survives a week, roll to prod.
  • Version pinning: pin docker images and keep a changelog. Don’t auto-update everything blindly.
  • Monitoring: run simple Prometheus + Grafana, or even a tiny healthcheck script that alerts on failing services, full disks, and high load.
  • Common fixes: if Traefik gives 502, check backend container health and socket permissions; if ZFS reports I/O errors, isolate the drive and scrub the pool; if Docker volumes have permission issues, check UID/GID and use chown -R on the mounted path from the host.
  • Time: ensure NTP/chrony is working. Time skew breaks cert renewals and DB replication.
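The "tiny healthcheck script" really can be tiny. A sketch, assuming Traefik's ping endpoint is enabled and the other URLs are placeholders for whatever your services expose; wire the output into mail/ntfy/whatever alerting you already use:

```shell
#!/bin/sh
# Minimal healthcheck sketch: probe each endpoint, print OK/FAIL per service.
# URLs below are placeholders for your own services.
check() {
  # $1 = service name, $2 = health URL
  if curl -fsS --max-time 5 "$2" >/dev/null 2>&1; then
    echo "OK $1"
  else
    echo "FAIL $1"
  fi
}

check traefik "http://127.0.0.1:8080/ping"
check nextcloud "http://127.0.0.1:8081/status.php"
```

Run it every few minutes from cron and grep for FAIL; no monitoring stack required to get past "I found out when my wife did."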

Final note: treat your home services like a small business - SLAs, backups, canaries, and a maintenance window. It makes selfhosting feel slightly less like playing with Lego and way more like owning your data for real. What did you learn the hard way when your single point of failure finally failed?

  • AnarchoSnowPlow@midwest.social · 2 hours ago

    Off-lease corporate thin clients with fresh SSDs. You can get something that runs off a laptop power supply, will handle more than you’re going to throw at it, and they’re insanely cheap.

    I moved to one from a pi when I got serious about home assistant.

    I also run a stack of networking utilities on my OPNSense router.

    Jellyfin has been a bit more difficult to transition; I’m still running it on my wife’s gaming computer. I’ve pre-transcoded most of our collection, but not all of it. I need to find something very cheap but also capable of handling the odd 4K transcode.

    • afk_strats@lemmy.world · 2 hours ago (edited)

      I’ve been on the internet a long time and this made me say “what the fuck” out loud

      Edit: not sure whether I should ask what this all is or if I should compliment you on your “output”

  • Melon Husk™@sh.itjust.works · 4 hours ago

    My single Pi setup has been known to spontaneously self-destruct if I even look at it funny. ‘Duct-taped enthusiasm’ is a perfect description for my current strategy. Time to level up, I suppose. Great post, mate.

    • mbirth 🇬🇧@lemmy.ml · 3 hours ago

      Or a better SD card. I’ve used my various Raspberry Pis with SD cards for years without any issues. The only incident I’ve had was a card turning read-only, which I only noticed because system updates were gone after a reboot. But all the main data was still there and accessible, and a simple clone to a new card followed by fsck restored the Pi to full functionality.

  • Mongostein@lemmy.ca · 3 hours ago

    I’m using 2 old laptops for a Jellyfin server. One runs Jellyfin, Sonarr and Radarr. The other runs Jackett and Transmission. I’m looking to add a third to handle Sonarr and Radarr and let the original one do just Jellyfin.

    I have no backups except for the ones Jellyfin, Sonarr, and Radarr create 🤷‍♂️

  • akilou@sh.itjust.works · 4 hours ago

    My one and only Pi is exclusively for piracy. Qbittorrent, VPN, Radarr, Sonarr, Lidarr, Prowlarr. My NAS does everything else. If my Pi died spontaneously, it’d be annoying but not devastating. I’ve been thinking about cloning the SD card so if/when it dies, I can just plug in a new one.

  • hodgepodgin@lemmy.zip · 2 hours ago

    I’ve been running my Proxmox in a suicide RAID configuration for a year now, and have only procrastinated this long because I can’t fix my TrueNAS replication tasks… and my Proxmox machine is over the half-full mark.

  • Faceman🇦🇺@discuss.tchncs.de · 3 hours ago

    instructions unclear, put entire homelab into a single consumer pc server with mismatched ram and a single off-brand power supply and no battery backup.

    If you run everything on a single PI, at least take regular backups so you can image a new SD card quickly when needed and get back up and running within a few minutes.

    I used to run pretty much everything on 3 Pis, but now just have a single one left. It runs HAOS + Node-RED + a secondary DNS (because you should always run two separate DNS servers, so you can update one at a time without downtime). It gets backed up daily to the main server in case a card dies, and also keeps a local backup on its SD card for the odd rollback when the server is down. Plus I have a spare SD taped to it, ready to go with an older image, but one that would still boot and pull the latest backup from another source. My main server is a purpose-built storage and compute server that runs all the heavy stuff, and then there’s a couple of N95 mini PCs running Proxmox for small tasks and general homelabbery.

  • Björn@swg-empire.de · 3 hours ago

    I just put my configs and compose files on the same raided hdds as my data files. Add automatic snapshots and the problem is solved for me.

  • grue@lemmy.world · 3 hours ago

    I hesitate to mention this because I don’t want the ebay seller to sell out before I decide if I want one, but…

    Craft Computing has a recent video about a used Supermicro “Microcloud” server that holds 8 Intel socket R nodes in 3U and costs $400 (apparently including CPUs but not RAM). Seems like an excellent way to get cheap redundancy, albeit at the cost of probably not great power consumption because it’s so obsolete.

  • deliriousdreams@fedia.io · 3 hours ago

    On the other hand, I own 3 different Raspberry Pis. One for Home Assistant, one for Pi-hole, one for booting the server computer when I’m not home if I want to stream a movie from my library.