
Homelab

Multi-node bare-metal Kubernetes cluster on Ubuntu 24.04 LTS, managed via GitOps with ArgoCD.

Quick Start

Choose your path based on where you want to run Kubernetes:

Option A: Local Development (Multipass)

Run a full multi-node cluster on your workstation using Multipass VMs. This mirrors the bare-metal setup without needing real hardware.

One command:

./scripts/local-cluster.sh up

This script:

  • Creates 3 VMs (1 control plane, 2 workers)
  • Runs Ansible provisioning
  • Initializes Kubernetes with kubeadm
  • Installs Cilium CNI
  • Runs a smoke test

Time: ~10 minutes

Prerequisites: Multipass, Ansible, kubectl (Install guide)
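The steps in the bullet list above can be sketched as a script skeleton. Everything here is illustrative: the node names, VM sizes, and playbook path are assumptions (not the script's actual values), and `run` only prints each command as a dry run instead of executing it.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the "up" path in scripts/local-cluster.sh.
# Node names, VM sizes, and paths are illustrative guesses.
set -euo pipefail

NODES=(cp-1 worker-1 worker-2)   # 1 control plane, 2 workers

run() { echo "+ $*"; }           # print only; the real script executes

up() {
  for node in "${NODES[@]}"; do
    run multipass launch --name "$node" --cpus 2 --memory 4G --disk 20G 24.04
  done
  run ansible-playbook -i ansible/inventory/hosts.yaml \
      ansible/playbooks/provision-cpu.yaml
  # kubeadm init and Cilium install happen inside the control-plane VM
  run multipass exec cp-1 -- sudo kubeadm init --pod-network-cidr=10.244.0.0/16
  run multipass exec cp-1 -- cilium install --set kubeProxyReplacement=true
  run kubectl get nodes -o wide  # smoke test
}

up
```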

When done, destroy with:

./scripts/local-cluster.sh down

Option B: Bare Metal Deployment

Deploy to real Ubuntu 24.04 hardware with SSH access.

Three steps:

# 1. Edit inventory with your IPs
nano ansible/inventory/hosts.yaml

# 2. Provision all nodes
ansible-playbook -i ansible/inventory/hosts.yaml ansible/playbooks/provision-cpu.yaml

# 3. Initialize control plane and install CNI (run on control plane node)
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
cilium install --set kubeProxyReplacement=true --set socketLB.hostNamespaceOnly=true
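Workers join the cluster with a bootstrap token minted on the control plane; `kubeadm token create --print-join-command` prints a ready-to-run join command. The sketch below is a dry run (`run` prints instead of executing), and the IP, token, and hash are placeholders.

```shell
# Dry-run sketch of the worker join flow; `run` prints instead of executing.
run() { echo "+ $*"; }

# 1. On the control plane: mint a token and print the matching join command
#    (bootstrap tokens default to a 24h TTL).
run kubeadm token create --print-join-command

# 2. On each worker: run the printed command with sudo. Placeholder values:
run sudo kubeadm join 192.168.1.10:6443 --token '<token>' \
    --discovery-token-ca-cert-hash 'sha256:<hash>'
```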

Then bootstrap GitOps:

kubectl apply -f bootstrap/root.yaml

ArgoCD will sync all infrastructure and apps from Git automatically.
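bootstrap/root.yaml is typically an Argo CD "app of apps": one Application pointing back at this repo, from which ArgoCD discovers everything under infrastructure/ and apps/. A plausible shape, with placeholder repo URL and path (not this repo's actual values):

```shell
# Plausible shape of bootstrap/root.yaml (app-of-apps pattern).
# repoURL and path are placeholders, not this repo's actual values.
cat > /tmp/root-example.yaml <<'EOF'
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: root
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/homelab   # placeholder
    targetRevision: main
    path: bootstrap                               # placeholder
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  syncPolicy:
    automated:
      prune: true      # delete resources removed from Git
      selfHeal: true   # revert manual drift
EOF
```

With `prune` and `selfHeal` enabled, the cluster continuously converges on whatever Git says, which is what makes manual kubectl changes short-lived.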

Detailed guide: Prerequisites → System Prep → Kubernetes

Worker join steps: Join workers


What Gets Deployed

Once ArgoCD syncs, you get:

Component            Purpose
Cilium               CNI with kube-proxy replacement
ArgoCD               GitOps continuous deployment
Longhorn             Distributed block storage
Envoy Gateway        Gateway API ingress controller
Gateway API CRDs     Required API types for Gateway API
Envoy Gateway CRDs   Required API types for Envoy Gateway
Tailscale Operator   VPN-based LoadBalancer
cert-manager         Automatic TLS certificates
ExternalDNS          Automatic DNS record management
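Once the root app syncs, a few standard checks confirm the components above came up healthy. This is a dry-run sketch (`run` prints only); run the commands for real against your cluster:

```shell
run() { echo "+ $*"; }   # print only; run these for real on your cluster

run kubectl -n argocd get applications   # every app Synced / Healthy
run kubectl get pods -A                  # all component pods Running
run cilium status --wait                 # Cilium agent and operator OK
run kubectl get gatewayclass             # Envoy Gateway class Accepted
run kubectl get certificates -A          # cert-manager certificates Ready
```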

Repository Structure

homelab/
├── ansible/            # Node provisioning (Ansible)
│   ├── inventory/      # Host definitions
│   ├── playbooks/      # Playbook entrypoints
│   └── roles/          # Reusable roles
├── bootstrap/          # ArgoCD bootstrap
├── infrastructure/     # Cluster components (ArgoCD manages)
├── apps/               # User workloads (ArgoCD manages)
├── scripts/            # Automation scripts
└── docs/               # This documentation

Reference Index

Automation Model

Everything flows through two systems:

  • Ansible provisions nodes (OS, container runtime, kubelet)
  • ArgoCD applies cluster state from Git (helm charts, manifests)

Manual kubectl apply is discouraged. Push to Git and let ArgoCD reconcile.
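In practice the loop is: edit a manifest, commit, push, and let ArgoCD reconcile. The sketch below exercises the Git half in a throwaway repo (the app name and paths are invented); the push and the ArgoCD check are left as comments since they need a real remote and cluster.

```shell
# GitOps loop sketch: the commit side runs for real in a throwaway repo;
# the push/verify side is left as comments. Paths and names are invented.
repo=$(mktemp -d)
git -C "$repo" init -q
mkdir -p "$repo/apps/whoami"
printf '# your workload manifest here\n' > "$repo/apps/whoami/deployment.yaml"
git -C "$repo" add .
git -C "$repo" -c user.email=you@example.com -c user.name=you \
    commit -qm "apps: add whoami"
echo "commits: $(git -C "$repo" rev-list --count HEAD)"

# git push                              # ArgoCD picks up the commit
# kubectl -n argocd get applications    # watch the app turn Synced/Healthy
```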

Read more: Automation Model

Day-2 Operations

After initial setup:

Deep Dives