Everyone tells you not to run a single-node Kubernetes cluster. They're right, mostly. Here's why I did it anyway, and what I learned.
The setup
HPE ProLiant DL380 Gen9, 2× E5-2680v4 (28 cores total), 128GB RAM. Running Talos Linux as the base OS, single-node k8s cluster, all my self-hosted services containerized and declared in Helm charts.
Why Kubernetes and not just Docker Compose?
Compose is fine. I used it for two years. The issue isn't Compose — it's that I wanted to learn Kubernetes properly, and there's no substitute for running it in anger.
Reading documentation tells you what things do. Running a production-ish workload tells you why they exist.
In six months I've hit: network policy edge cases, PVC migration pain, rolling update failures from missing resource limits, and a fun incident where a CronJob filled the node's disk. All valuable.
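The resource-limit and disk-fill lessons compress into a few lines of manifest. A hedged sketch (the name, image, schedule, and values here are illustrative, not from the actual incident): setting requests/limits avoids the scheduler surprises during rolling updates, and an `ephemeral-storage` limit means a runaway job gets evicted instead of filling the node's disk. The history limits keep completed Jobs from piling up.

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-cleanup          # hypothetical name
spec:
  schedule: "0 2 * * *"
  concurrencyPolicy: Forbid      # don't stack runs if one hangs
  successfulJobsHistoryLimit: 3  # cap finished-Job clutter
  failedJobsHistoryLimit: 1
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: cleanup
              image: alpine:3.20
              command: ["sh", "-c", "echo cleanup"]
              resources:
                requests:
                  cpu: 100m
                  memory: 64Mi
                limits:
                  memory: 128Mi
                  ephemeral-storage: 1Gi  # evict the pod rather than fill the disk
```

Without the `ephemeral-storage` limit, nothing stops a pod writing to its container filesystem until the kubelet's node-level eviction thresholds kick in, which on a single node is far too late.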
The single-node problem
True HA requires at least three control plane nodes, because etcd only stays writable while a majority of its members are up. I have one. So:
- No etcd quorum — if the node goes down, the cluster goes down
- No live migration of pods during maintenance
My mitigation: good backups (Velero to S3), and a documented recovery procedure I've actually tested. RTO is about 20 minutes. For a homelab, that's fine.
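Velero lets you declare that backup as a `Schedule` resource rather than a one-off CLI invocation, so it survives cluster rebuilds along with everything else. A sketch, with illustrative name, cron expression, and retention (my real config differs):

```yaml
apiVersion: velero.io/v1
kind: Schedule
metadata:
  name: nightly              # hypothetical name
  namespace: velero
spec:
  schedule: "0 3 * * *"      # daily, 03:00
  template:
    includedNamespaces:
      - "*"
    storageLocation: default # the S3 BackupStorageLocation
    ttl: 720h0m0s            # keep backups ~30 days
```

The recovery test matters more than the backup itself: a `velero restore` you've never run is a hypothesis, not a procedure.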
What I'd do differently
I'd set up the second node sooner. I added it in March (after the Proxmox migration) and the difference in stability is meaningful: maintenance no longer means total downtime. One caveat I got wrong at first: two nodes still don't give you HA etcd, since a two-member cluster loses quorum when either member fails. You need a third member for that, and it's worth the extra hardware.