
My Homelab Is Where I Learn Things Before Production Teaches Them

Three years and 50 containers later, my homelab has taught me more about ops discipline than half my day jobs combined. Here's what running your own minimum viable production infrastructure actually looks like.

#homelab #infrastructure #self-hosting #docker

My homelab started as “I wanna run Plex for cheap.” Three years later it’s a 50-container fortress that teaches me something new every week — usually at 2am when something decides to break.

The stack: Proxmox for virtualization, Unraid for storage, and somewhere around 50 Docker containers doing everything from media automation (Sonarr, Radarr, Plex, Jellyfin, SABnzbd) to AI workflows (Ollama, Qdrant, n8n with MCP) to home automation (Home Assistant, Zigbee2MQTT, Mosquitto). Plus the supporting cast: Pi-hole for DNS, Nginx for reverse proxy, Watchtower for auto-updates, Netdata for monitoring, Vaultwarden for password management, Portainer for container management, and WireGuard (wg-easy) for remote access.
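
To make that concrete, here's roughly what the reverse-proxy pattern looks like for one of those services. The container names, paths, and nginx config location are illustrative placeholders, not my exact setup.

```bash
# Hypothetical sketch: one app behind the Nginx reverse proxy on a shared
# Docker network. Names, ports, and paths are illustrative.
docker network create homelab

# Jellyfin only talks on the internal network; nothing published on the host.
docker run -d --name jellyfin \
  --network homelab \
  -v /srv/media:/media \
  -v /srv/appdata/jellyfin:/config \
  --restart unless-stopped \
  jellyfin/jellyfin

# Nginx is the single public entry point and proxies to jellyfin:8096 internally.
docker run -d --name proxy \
  --network homelab \
  -p 80:80 -p 443:443 \
  -v /srv/appdata/nginx/nginx.conf:/etc/nginx/nginx.conf:ro \
  --restart unless-stopped \
  nginx
```

One shared network and one public front door is most of the pattern; everything else is just more containers plugged into the same setup.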

That’s a lot of moving parts. And that’s the point.

The real value isn’t the services. It’s the failure modes.

Running your own infra means things break at 2am. Drives die. Containers go haywire after an update. Network configs get lost. And you learn why backups matter, why monitoring matters, why documentation matters — not from a blog post, but because you got burned and had to fix it before anyone noticed.

A few weeks ago I updated a container that had a breaking change. Instead of panicking, I rolled back using the snapshot I had taken that morning, fixed the update path, and was back online in 10 minutes. That instinct — always know your revert path — came directly from years of homelab trial and error. Now I apply it to every work deployment too.
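
For the curious, the revert path looks roughly like this. I'm assuming a Proxmox VM running a docker compose stack; the VM ID and paths are made up for the example.

```bash
# Hypothetical revert path: snapshot the VM, try the update, roll back if it breaks.
# VM ID 105 and the stack path are placeholders.
qm snapshot 105 pre-update            # cheap insurance before touching anything

cd /srv/stacks/media
docker compose pull                   # grab the new images
docker compose up -d                  # apply the update
docker compose logs -f --tail 50      # watch for the breaking change

# If it's on fire: roll the whole VM back to the snapshot and regroup.
qm rollback 105 pre-update
```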

What homelab taught me that translates directly to work:

  • Incremental changes with rollback plans — I ask “what’s the revert path?” before touching anything prod-adjacent. Homelab taught me what happens when you skip this step.
  • Monitoring before you need it — Netdata tells me when CPU spikes, when containers restart, when something looks off. Same instinct I apply at work: get observability in before you need it, not after.
  • Automation removes toil — manual processes always fail at the worst time. Watchtower keeps containers updated, automated backups run on a schedule, cron handles the recurring chores (there's a rough sketch of this after the list). Spot the same patterns at work and fix them early.
  • Network discipline — VLANs, reverse proxies, local DNS overrides, Cloudflare DDNS. Understanding how traffic flows in and out of a system is ops knowledge that transfers everywhere.
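
Here's roughly what that automation layer looks like. The paths, schedule, and Watchtower flags are placeholders for whatever fits your setup, not a copy of mine.

```bash
# Hypothetical automation layer; paths and schedules are placeholders.

# Watchtower watches the Docker socket and auto-updates running containers.
docker run -d --name watchtower \
  --restart unless-stopped \
  -v /var/run/docker.sock:/var/run/docker.sock \
  containrrr/watchtower --cleanup

# Nightly backup: tarball the container app data, keep two weeks of copies.
cat > /usr/local/bin/backup-appdata.sh <<'EOF'
#!/usr/bin/env bash
set -euo pipefail
tar -czf "/mnt/backups/appdata-$(date +%F).tar.gz" /srv/appdata
find /mnt/backups -name 'appdata-*.tar.gz' -mtime +14 -delete
EOF
chmod +x /usr/local/bin/backup-appdata.sh

# Cron runs it at 03:30 every night so nobody has to remember.
(crontab -l 2>/dev/null; echo '30 3 * * * /usr/local/bin/backup-appdata.sh') | crontab -
```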

The other thing homelab gives you: a safe place to be wrong.

I rebuilt my network VLANs four times before I got it right. Tried three different container orchestration approaches before settling on what works. Experimented with local LLM setups (Ollama + Qdrant) and learned what actually runs well on consumer hardware versus what needs real GPU compute.

Greenfield engineering is easy. Building on top of existing systems while keeping them running is the real game. Homelab is where I practice that, at low stakes.

What I’d tell someone starting one: Don’t try to build the perfect stack day one. Start with one service that solves a real problem, break it, fix it, automate it. The learning is in the getting there. And yeah — take snapshots before you update anything. You’ll thank yourself at 2am.
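
If that first service were, say, Vaultwarden, the starting point could be as small as the sketch below. The port, data path, and pinned tag are illustrative; the habit of backing up the data and keeping a known-good version to fall back to is the actual point.

```bash
# Hypothetical first service: one container, its data on a real path,
# a pinned image tag, and a copy of the data taken before any update.
docker run -d --name vaultwarden \
  -p 8080:80 \
  -v /srv/appdata/vaultwarden:/data \
  --restart unless-stopped \
  vaultwarden/server:1.30.5

# Before bumping the tag: back up the data dir, then recreate with the new version.
cp -a /srv/appdata/vaultwarden /srv/appdata/vaultwarden.bak-$(date +%F)
docker rm -f vaultwarden    # then rerun the command above with the newer tag
```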


Want to talk homelab setups? I like geeking out about self-hosted infrastructure. Hit me up.
