# Local Kubernetes Cluster with Multipass VMs

This tutorial runs a multi-node Kubernetes cluster on local VMs using Multipass. It is intended as an optional rehearsal path before the bare-metal setup, and mirrors the same control-plane and worker layout.
## Quick Start (Automated)

Use the automation script for a one-command setup:

```sh
./scripts/local-cluster.sh up
```

This handles everything: VM creation, Ansible provisioning, kubeadm initialization, Cilium installation, and smoke testing.
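The sizing knobs used below (`WORKER_COUNT`, `VM_CPUS`, `VM_MEMORY`, `VM_DISK`) are plain environment-variable overrides; a minimal sketch of the default-parameter pattern such a script typically uses (the default values here are assumptions, not necessarily what `local-cluster.sh` actually uses):

```sh
# Each setting falls back to a default unless the caller exported a value.
# Defaults shown here are illustrative, not read from local-cluster.sh.
WORKER_COUNT="${WORKER_COUNT:-2}"
VM_CPUS="${VM_CPUS:-2}"
VM_MEMORY="${VM_MEMORY:-4G}"
VM_DISK="${VM_DISK:-20G}"
echo "workers=${WORKER_COUNT} cpus=${VM_CPUS} memory=${VM_MEMORY} disk=${VM_DISK}"
```

Because the overrides are ordinary environment variables, they compose with any invocation, as in the low-resource example below.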
For a low-resource rehearsal, run a single-node control plane and disable Hubble UI/Relay:

```sh
WORKER_COUNT=0 VM_CPUS=2 VM_MEMORY=3G VM_DISK=12G CILIUM_HUBBLE_ENABLED=false ./scripts/local-cluster.sh up
```

When done:

```sh
./scripts/local-cluster.sh down
```

## Manual Setup (Step by Step)
If you prefer to understand each step or need to customize the process, follow the manual instructions below.
### Prerequisites

- Multipass installed on your workstation
- Ansible installed on your workstation
- kubectl installed on your workstation
- A multi-node rehearsal expects around 12 GB of free RAM
- A single-node rehearsal can run with less by reducing VM sizing
### Step 1: Install host tools

macOS (Homebrew):

```sh
brew install --cask multipass
brew install ansible kubectl
```

Ubuntu 24.04 (Snap + APT):

```sh
sudo snap install multipass
sudo apt-get update && sudo apt-get install -y ansible
sudo snap install kubectl --classic
```

### Step 2: Create a dedicated SSH key
```sh
mkdir -p ansible/.keys
ssh-keygen -t ed25519 -f ansible/.keys/multipass -N ""
```

### Step 3: Launch the VMs
```sh
multipass launch --name homelab-cp --cpus 2 --memory 4G --disk 20G 24.04
multipass launch --name homelab-w1 --cpus 2 --memory 4G --disk 20G 24.04
multipass launch --name homelab-w2 --cpus 2 --memory 4G --disk 20G 24.04
```

### Step 4: Add the SSH key to each VM
```sh
for node in homelab-cp homelab-w1 homelab-w2; do
  multipass exec "$node" -- bash -c "mkdir -p /home/ubuntu/.ssh && cat >> /home/ubuntu/.ssh/authorized_keys" < ansible/.keys/multipass.pub
done
```

### Step 5: Generate the Ansible inventory
Get the VM IPs and create the inventory:

```sh
multipass list
```

Create `ansible/inventory/local-cluster.yaml` with the IPs from the output.
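The exact group and variable names the inventory needs depend on what the playbooks expect, so treat the following as an assumed shape rather than the repo's real schema; the IPs are placeholders for the addresses `multipass list` reports:

```yaml
# Illustrative shape only — group names, host names, and IPs are assumptions.
all:
  vars:
    ansible_user: ubuntu
    ansible_ssh_private_key_file: ansible/.keys/multipass
  children:
    control_plane:
      hosts:
        homelab-cp:
          ansible_host: 192.168.64.10
    workers:
      hosts:
        homelab-w1:
          ansible_host: 192.168.64.11
        homelab-w2:
          ansible_host: 192.168.64.12
```

The `ansible_user` and key path match the `ubuntu` account and SSH key created in the earlier steps.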
### Step 6: Run Ansible provisioning

```sh
ANSIBLE_HOST_KEY_CHECKING=False ANSIBLE_ROLES_PATH=ansible/roles \
ansible-playbook -i ansible/inventory/local-cluster.yaml \
  ansible/playbooks/provision-cpu.yaml \
  -e @ansible/group_vars/all.yaml
```

### Step 7: Initialize the control plane
```sh
multipass exec homelab-cp -- sudo kubeadm init --pod-network-cidr=10.244.0.0/16
```

### Step 8: Join the workers
```sh
JOIN_CMD=$(multipass exec homelab-cp -- sudo kubeadm token create --print-join-command)
multipass exec homelab-w1 -- sudo bash -c "$JOIN_CMD"
multipass exec homelab-w2 -- sudo bash -c "$JOIN_CMD"
```

### Step 9: Copy kubeconfig to your workstation
```sh
multipass exec homelab-cp -- sudo cp /etc/kubernetes/admin.conf /home/ubuntu/admin.conf
multipass exec homelab-cp -- sudo chown ubuntu:ubuntu /home/ubuntu/admin.conf
multipass transfer homelab-cp:/home/ubuntu/admin.conf /tmp/homelab-admin.conf
export KUBECONFIG=/tmp/homelab-admin.conf
```

### Step 10: Install Cilium
Install the Cilium CLI inside the control plane VM:
```sh
multipass exec homelab-cp -- bash -c '
  CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/main/stable.txt)
  ARCH=$(uname -m)
  if [ "${ARCH}" = "aarch64" ]; then CILIUM_ARCH=arm64; else CILIUM_ARCH=amd64; fi
  curl -L --fail --remote-name-all https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-${CILIUM_ARCH}.tar.gz{,.sha256sum}
  sha256sum --check cilium-linux-${CILIUM_ARCH}.tar.gz.sha256sum
  sudo tar xzvfC cilium-linux-${CILIUM_ARCH}.tar.gz /usr/local/bin
  rm cilium-linux-${CILIUM_ARCH}.tar.gz*
'
```

Get the Cilium version and install:
```sh
CILIUM_VERSION=$(grep -E "cilium_version:" ansible/group_vars/all.yaml | awk -F'"' '{print $2}')
multipass exec homelab-cp -- sudo cilium install \
  --kubeconfig /etc/kubernetes/admin.conf \
  --version $CILIUM_VERSION \
  --set kubeProxyReplacement=true \
  --set socketLB.hostNamespaceOnly=true
```

The `socketLB.hostNamespaceOnly=true` setting is required for Tailscale Operator LoadBalancer services to work correctly. See Cilium CNI for details.
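The `grep`/`awk` extraction above assumes `cilium_version` appears in `all.yaml` as a double-quoted scalar; a self-contained check of that parsing against a hypothetical sample file:

```sh
# Hypothetical sample mirroring the assumed all.yaml format.
cat > /tmp/sample-all.yaml <<'EOF'
cilium_version: "1.16.5"
EOF

# Same pipeline as the install step: split on double quotes, take field 2.
CILIUM_VERSION=$(grep -E "cilium_version:" /tmp/sample-all.yaml | awk -F'"' '{print $2}')
echo "$CILIUM_VERSION"   # prints 1.16.5
```

If the version were stored unquoted, the `awk -F'"'` split would yield an empty string, so keep the quoting in `all.yaml` consistent.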
Wait for Cilium to be ready:

```sh
multipass exec homelab-cp -- sudo cilium status --kubeconfig /etc/kubernetes/admin.conf --wait
```

### Step 11: Verify the cluster
```sh
kubectl get nodes -o wide
kubectl apply -f ansible/tests/local-cluster/test-nginx/deployment.yaml
kubectl apply -f ansible/tests/local-cluster/test-nginx/service.yaml
kubectl get pods -l app=test-nginx
```

### Step 12: Tear down
```sh
multipass delete homelab-cp homelab-w1 homelab-w2
multipass purge
```

## Next Steps
When ready for real hardware:
- Prerequisites - Hardware and network requirements
- System Preparation - OS configuration
- Kubernetes - Cluster initialization
The Ansible roles and GitOps layout are identical; only the inventory changes.