Prerequisites for Bare Metal Kubernetes
Prerequisites
Use this guide before the bare metal tutorials. If you are following the local VM path, use Local Multipass Cluster instead.
Step 1: Install workstation tooling
macOS (Homebrew)
```sh
brew install ansible kubectl helm pre-commit
```

Ubuntu
```sh
sudo apt update
sudo add-apt-repository ppa:quentiumyt/nvtop
sudo apt install -y curl wget git pre-commit python3 python3-dev htop nvtop dmsetup npm nodejs
curl https://baltocdn.com/helm/signing.asc | gpg --dearmor | sudo tee /usr/share/keyrings/helm.gpg > /dev/null
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/helm.gpg] https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
sudo apt-get update
sudo apt-get install -y helm
```
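As a quick sanity check, print each tool's version. Note that the Ubuntu apt commands above do not install ansible or kubectl, so this assumes you have installed them separately (for example via pip and the Kubernetes apt repo):

```sh
# Confirm the tools are on PATH and report their versions
ansible --version
kubectl version --client
helm version
pre-commit --version
```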
Step 2: Prepare Ansible inventory

Update the node list and user in ansible/inventory/hosts.yaml, then confirm versions and paths in ansible/group_vars/all.yaml.
If you are using Tailscale, set ansible_host to the Tailscale IP or MagicDNS hostname.
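Once the inventory is filled in, you can verify that Ansible can reach every node with an ad-hoc ping. This uses Ansible's ping module, which checks SSH access and a working Python on the node rather than ICMP, so it assumes Steps 3 and 4 below are already done:

```sh
# Ad-hoc ping: checks SSH access and Python on every inventory host
ANSIBLE_CONFIG=ansible/ansible.cfg ansible all \
  -i ansible/inventory/hosts.yaml -m ping
```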
Step 3: Enable SSH on the nodes
Ensure the SSH server is installed and running on each Ubuntu node.
```sh
sudo apt update
sudo apt install -y openssh-server
sudo systemctl enable --now ssh
```

If you use UFW, allow SSH:
```sh
sudo ufw allow OpenSSH
```
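To confirm the daemon is up and listening on each node (an optional check; the service is named ssh on Ubuntu):

```sh
systemctl is-active ssh   # should print "active"
ss -tln | grep ':22'      # sshd listening on port 22
```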
Step 4: Configure key-based SSH from the workstation

Install your workstation SSH key on each node so Ansible can connect without passwords.
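If your workstation does not have a key yet, generate one first; the ed25519 type matches the key path used in the commands below:

```sh
# Creates a keypair at the default path ~/.ssh/id_ed25519
ssh-keygen -t ed25519
```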
```sh
ssh-copy-id -i ~/.ssh/id_ed25519.pub sudhanva@legion
```

If you reinstalled the node and see a host key warning, remove the old entry and try again:
```sh
ssh-keygen -R legion
ssh-copy-id -i ~/.ssh/id_ed25519.pub sudhanva@legion
```
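To confirm key-based login works before running Ansible, you can force key-only authentication for a single connection (using the same example user and host as above):

```sh
# Fails instead of falling back to a password prompt if the key was not installed
ssh -o PasswordAuthentication=no sudhanva@legion hostname
```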
Step 5: Run Ansible provisioning

Run this from the repository root so the relative paths resolve correctly.
```sh
ANSIBLE_CONFIG=ansible/ansible.cfg ansible-playbook \
  ansible/playbooks/provision-cpu.yaml \
  -e @ansible/group_vars/all.yaml
```

If you need GPU support, use ansible/playbooks/provision-intel-gpu.yaml or ansible/playbooks/provision-nvidia-gpu.yaml.
If the node requires sudo with a password, add -K and enter the password when prompted:
```sh
ANSIBLE_CONFIG=ansible/ansible.cfg ansible-playbook \
  ansible/playbooks/provision-cpu.yaml \
  -e @ansible/group_vars/all.yaml \
  -K
```
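If you want to preview what the playbook would change before applying it, Ansible's check mode with diff output is one option. This is a dry run; some tasks do not fully support check mode and may be skipped or report differently:

```sh
ANSIBLE_CONFIG=ansible/ansible.cfg ansible-playbook \
  ansible/playbooks/provision-cpu.yaml \
  -e @ansible/group_vars/all.yaml \
  --check --diff
```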
Troubleshooting provisioning

If APT fails with "Malformed line 1 in source list /etc/apt/sources.list.d/kubernetes.list (type)", remove the file and rerun the playbook:
```sh
sudo rm -f /etc/apt/sources.list.d/kubernetes.list
```

If you see a warning about multipathd missing, it is safe to continue. The Longhorn prereq role only disables the service if it is present.
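To see whether multipathd exists on a given node at all, query systemd directly:

```sh
# "Unit multipathd.service could not be found." means the warning is harmless
systemctl status multipathd.service --no-pager
```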
What the provisioning playbook does
The provisioning playbooks run these roles on each node:
- base: disables swap, loads kernel modules, writes sysctl and inotify settings, installs base packages
- containerd: installs containerd (upstream or apt), writes /etc/containerd/config.toml, enables the service
- kubernetes: adds the Kubernetes apt repo, installs kubeadm/kubelet/kubectl, pins versions, enables kubelet
- longhorn-prereqs: installs open-iscsi, nfs-common, cryptsetup, and creates the Longhorn data path
- tailscale: installs tailscaled and enables the service
The NVIDIA playbook also runs the nvidia-gpu role.
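After the playbook finishes, you can spot-check a node against the role list above. Run these on the node itself; the kernel module names are an assumption based on typical kubeadm prerequisites, so match them to what the base role actually loads:

```sh
swapon --show                           # no output means swap is disabled
systemctl is-active containerd          # should print "active"
systemctl is-enabled kubelet            # enabled; it settles fully after kubeadm init
kubeadm version -o short                # the pinned Kubernetes version
lsmod | grep -E 'overlay|br_netfilter'  # assumed kubeadm modules
```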
What you still do manually
After provisioning, continue with:
- Initialize the control plane with kubeadm init in Kubernetes.
- Install Cilium in Cilium CNI.
- Install ArgoCD and apply the bootstrap in ArgoCD and GitOps.
- Join workers with Join Worker Nodes.
- If you want node-level tailnet access, run sudo tailscale up as described in Add a Worker Node.