
Local Kubernetes Cluster with Multipass VMs

This tutorial runs a multi-node Kubernetes cluster on local VMs using Multipass. It is intended as an optional rehearsal path before the bare-metal setup, and mirrors the same control plane and worker layout.

Use the automation script for a one-command setup:

./scripts/local-cluster.sh up

This handles everything: VM creation, Ansible provisioning, kubeadm initialization, Cilium installation, and smoke testing.

For a low-resource rehearsal, run a single-node control plane and disable Hubble UI/Relay:

WORKER_COUNT=0 VM_CPUS=2 VM_MEMORY=3G VM_DISK=12G CILIUM_HUBBLE_ENABLED=false ./scripts/local-cluster.sh up

When done:

./scripts/local-cluster.sh down

If you prefer to understand each step or need to customize the process, follow the manual instructions below.

Prerequisites:

  • Multipass installed on your workstation
  • Ansible installed on your workstation
  • kubectl installed on your workstation
  • A multi-node rehearsal needs around 12 GB of free RAM
  • A single-node rehearsal can run with less by reducing VM sizing
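Before installing anything, a quick preflight loop (a convenience sketch, not part of the repository's scripts) shows which of these tools are already on your PATH:

```shell
# Preflight: report which prerequisites are installed; names match the list above.
report=""
for tool in multipass ansible kubectl; do
  if command -v "$tool" >/dev/null 2>&1; then
    report="${report}${tool}: found\n"
  else
    report="${report}${tool}: MISSING\n"
  fi
done
printf '%b' "$report"
```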

macOS (Homebrew):

brew install --cask multipass
brew install ansible kubectl

Ubuntu 24.04 (Snap + APT):

sudo snap install multipass
sudo apt-get update && sudo apt-get install -y ansible
sudo snap install kubectl --classic

Generate an SSH key pair for Ansible to reach the VMs:

mkdir -p ansible/.keys
ssh-keygen -t ed25519 -f ansible/.keys/multipass -N ""

Launch the control plane and worker VMs:

multipass launch --name homelab-cp --cpus 2 --memory 4G --disk 20G 24.04
multipass launch --name homelab-w1 --cpus 2 --memory 4G --disk 20G 24.04
multipass launch --name homelab-w2 --cpus 2 --memory 4G --disk 20G 24.04

Authorize the public key on each VM:

for node in homelab-cp homelab-w1 homelab-w2; do
  multipass exec "$node" -- bash -c "mkdir -p /home/ubuntu/.ssh && cat >> /home/ubuntu/.ssh/authorized_keys" < ansible/.keys/multipass.pub
done
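Optionally, confirm key-based SSH works before running Ansible. This sketch reads each VM's address from multipass info and skips itself when multipass is not on the PATH:

```shell
# Optional SSH check; node names and key path match the steps above.
key=ansible/.keys/multipass
if command -v multipass >/dev/null 2>&1; then
  for node in homelab-cp homelab-w1 homelab-w2; do
    ip=$(multipass info "$node" | awk '/IPv4/ {print $2; exit}')
    ssh -i "$key" -o StrictHostKeyChecking=accept-new -o BatchMode=yes \
      "ubuntu@$ip" hostname
  done
else
  echo "multipass not found; skipping SSH check"
fi
```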

Get the VM IPs and create the inventory:

multipass list

Create ansible/inventory/local-cluster.yaml with the IPs from the output.
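A heredoc keeps the whole step in the terminal. The group names and connection variables below are illustrative assumptions; align them with what ansible/playbooks/provision-cpu.yaml actually expects:

```shell
# Sketch of an inventory; replace the placeholder IPs with `multipass list` output.
mkdir -p ansible/inventory
cat > ansible/inventory/local-cluster.yaml <<'EOF'
all:
  vars:
    ansible_user: ubuntu
    ansible_ssh_private_key_file: ansible/.keys/multipass
  children:
    control_plane:          # assumed group name
      hosts:
        homelab-cp:
          ansible_host: 192.168.64.10   # placeholder
    workers:                # assumed group name
      hosts:
        homelab-w1:
          ansible_host: 192.168.64.11   # placeholder
        homelab-w2:
          ansible_host: 192.168.64.12   # placeholder
EOF
```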

Run the provisioning playbook:

ANSIBLE_HOST_KEY_CHECKING=False ANSIBLE_ROLES_PATH=ansible/roles \
  ansible-playbook -i ansible/inventory/local-cluster.yaml \
  ansible/playbooks/provision-cpu.yaml \
  -e @ansible/group_vars/all.yaml

Initialize the control plane:

multipass exec homelab-cp -- sudo kubeadm init --pod-network-cidr=10.244.0.0/16

Create a join command on the control plane and run it on each worker:

JOIN_CMD=$(multipass exec homelab-cp -- sudo kubeadm token create --print-join-command)
multipass exec homelab-w1 -- sudo bash -c "$JOIN_CMD"
multipass exec homelab-w2 -- sudo bash -c "$JOIN_CMD"

Step 9: Copy kubeconfig to your workstation

multipass exec homelab-cp -- sudo cp /etc/kubernetes/admin.conf /home/ubuntu/admin.conf
multipass exec homelab-cp -- sudo chown ubuntu:ubuntu /home/ubuntu/admin.conf
multipass transfer homelab-cp:/home/ubuntu/admin.conf /tmp/homelab-admin.conf
export KUBECONFIG=/tmp/homelab-admin.conf
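Note that export only affects the current shell. A quick sanity check that the kubeconfig is wired up (guarded so it is safe to paste before the file exists):

```shell
# Confirm kubectl can reach the cluster through the copied kubeconfig.
export KUBECONFIG=/tmp/homelab-admin.conf
if command -v kubectl >/dev/null 2>&1 && [ -f "$KUBECONFIG" ]; then
  kubectl get nodes
else
  echo "kubeconfig or kubectl not available yet"
fi
```

To keep the cluster reachable in new shells, merge the file into ~/.kube/config or add the export line to your shell profile.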

Install the Cilium CLI inside the control plane VM:

multipass exec homelab-cp -- bash -c '
  CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/main/stable.txt)
  ARCH=$(uname -m)
  if [ "${ARCH}" = "aarch64" ]; then CILIUM_ARCH=arm64; else CILIUM_ARCH=amd64; fi
  curl -L --fail --remote-name-all https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-${CILIUM_ARCH}.tar.gz{,.sha256sum}
  sha256sum --check cilium-linux-${CILIUM_ARCH}.tar.gz.sha256sum
  sudo tar xzvfC cilium-linux-${CILIUM_ARCH}.tar.gz /usr/local/bin
  rm cilium-linux-${CILIUM_ARCH}.tar.gz*
'

Get the Cilium version and install:

CILIUM_VERSION=$(grep -E "cilium_version:" ansible/group_vars/all.yaml | awk -F'"' '{print $2}')
multipass exec homelab-cp -- sudo cilium install \
  --kubeconfig /etc/kubernetes/admin.conf \
  --version "$CILIUM_VERSION" \
  --set kubeProxyReplacement=true \
  --set socketLB.hostNamespaceOnly=true

The socketLB.hostNamespaceOnly=true setting is required for Tailscale Operator LoadBalancer services to work correctly. See Cilium CNI for details.

Wait for Cilium to be ready:

multipass exec homelab-cp -- sudo cilium status --kubeconfig /etc/kubernetes/admin.conf --wait

Verify the nodes and deploy a test workload:

kubectl get nodes -o wide
kubectl apply -f ansible/tests/local-cluster/test-nginx/deployment.yaml
kubectl apply -f ansible/tests/local-cluster/test-nginx/service.yaml
kubectl get pods -l app=test-nginx
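To exercise the service from the workstation, a port-forward works regardless of the Service type. This assumes the Service is named test-nginx and serves port 80; adjust to match the manifest:

```shell
# Forward a local port to the test service, fetch a page, then clean up.
kubectl port-forward svc/test-nginx 8080:80 >/dev/null 2>&1 &
pf_pid=$!
sleep 2
curl -s http://127.0.0.1:8080/ | head -n 3
kill "$pf_pid" 2>/dev/null || true
```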

To tear down the cluster:

multipass delete homelab-cp homelab-w1 homelab-w2
multipass purge

When you are ready for real hardware, the Ansible roles and GitOps layout are identical; only the inventory changes.
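For example, a bare-metal run could reuse everything and swap only the inventory file. The filename, hostnames, IPs, and group names below are placeholders, not values from this repository:

```shell
# Hypothetical bare-metal inventory; structure mirrors the local-cluster one.
mkdir -p ansible/inventory
cat > ansible/inventory/bare-metal.yaml <<'EOF'
all:
  vars:
    ansible_user: ubuntu
  children:
    control_plane:
      hosts:
        metal-cp:
          ansible_host: 10.0.0.10    # placeholder
    workers:
      hosts:
        metal-w1:
          ansible_host: 10.0.0.11    # placeholder
EOF
# Then point the same playbook at it:
# ansible-playbook -i ansible/inventory/bare-metal.yaml \
#   ansible/playbooks/provision-cpu.yaml -e @ansible/group_vars/all.yaml
```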