Local Multipass VMs vs Bare Metal Deployment

This document explains how the Multipass-based local cluster differs from a bare-metal deployment.

```mermaid
flowchart LR
  Ansible["Ansible provisioning"] --> Local["Multipass VMs"]
  Ansible --> Bare["Bare metal nodes"]
  Kubeadm["kubeadm init/join"] --> Local
  Kubeadm --> Bare
  GitOps["ArgoCD GitOps"] --> Local
  GitOps --> Bare
```

```mermaid
flowchart TB
  subgraph Local["Local (Multipass)"]
    VMs["Ubuntu VMs"]
    VirtualNIC["Virtual NICs"]
    VirtualDisk["VM disks"]
  end

  subgraph Bare["Bare metal"]
    Nodes["Ubuntu nodes"]
    PhysicalNIC["Physical NICs"]
    PhysicalDisk["Physical disks"]
  end

  Ansible["Ansible roles"] --> VMs
  Ansible --> Nodes
  VMs --> KubeadmLocal["kubeadm init/join"]
  Nodes --> KubeadmBare["kubeadm init/join"]
  KubeadmLocal --> CiliumLocal["Cilium"]
  KubeadmBare --> CiliumBare["Cilium"]
  CiliumLocal --> GitOpsLocal["ArgoCD + ApplicationSets"]
  CiliumBare --> GitOpsBare["ArgoCD + ApplicationSets"]
  VirtualDisk --> LonghornLocal["Longhorn"]
  PhysicalDisk --> LonghornBare["Longhorn"]
```

| Feature | Local (Multipass VM) | Bare Metal (Production) |
|---|---|---|
| Kernel | VM kernel, virtualized | Dedicated kernel on hardware |
| Init system | Native systemd (PID 1) | Native systemd (PID 1) |
| File system | Virtual disk image | Native filesystem |
| Networking | Virtual NICs, NAT or bridge | Physical NICs |
| Gateway API | Tailscale in VM | Tailscale on host |

Multipass VMs run their own kernels, so module loading and sysctl tuning happen inside the VM. Bare metal applies the same steps directly on hardware.
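
A minimal sketch of that prep, following the standard upstream kubeadm prerequisites (the exact files and values templated by the Ansible roles in this repository may differ); it runs unchanged inside a Multipass VM or on a physical node:

```sh
# Standard kubeadm kernel prerequisites; identical in a Multipass VM and on bare metal.
sudo modprobe overlay
sudo modprobe br_netfilter

# Persist the modules across reboots
cat <<'EOF' | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

# Sysctls required for bridged pod traffic and IP forwarding
cat <<'EOF' | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

sudo sysctl --system
```

Because the VM runs its own kernel, `lsmod` and `sysctl` inside it report the VM's state, not the host's.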

Multipass uses virtual networking, which is closer to real host networking than containers are, but it still differs from physical hardware in NIC behavior, routing, and latency characteristics.
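
To see the difference directly, the same inspection commands can be run in both environments (the VM name `k8s-cp1`, interface names, and peer IP below are placeholders):

```sh
# Inside a Multipass VM: a virtio NIC behind NAT or a bridged network
multipass exec k8s-cp1 -- ip -br addr
multipass exec k8s-cp1 -- ip route

# On a bare-metal node: physical interfaces, real routing, and switch latency
ip -br addr
ip route
ping -c 5 <other-node-ip>   # round-trip time across the physical network, not a hypervisor bridge
```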

Local clusters use VM disk images. Bare metal uses the host filesystem and persistent storage systems like Longhorn.
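
For illustration, the difference shows up at provisioning time; the VM name, sizes, and device path below are placeholders, and `/var/lib/longhorn` is Longhorn's default data path:

```sh
# Local: the "disk" is a VM image whose capacity is fixed when the VM is launched
multipass launch 24.04 --name k8s-worker1 --cpus 2 --memory 4G --disk 40G

# Bare metal: a real disk is formatted and mounted where Longhorn keeps replica data by default
sudo mkfs.ext4 /dev/sdb
sudo mkdir -p /var/lib/longhorn
sudo mount /dev/sdb /var/lib/longhorn
```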

The Multipass workflow exercises the same kubeadm flow, systemd services, container runtime configuration, kernel modules, and CNI behavior as a physical node, so it is a strong approximation for validating playbooks and cluster bootstrap logic. What it does not exercise is the hardware itself (see the example checks after this list):

  • Physical NIC throughput, offload behavior, and switch topology
  • Firmware, BIOS, and power management quirks
  • Disk controller performance and SMART behavior
  • Tailscale exit node performance on real uplinks
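
A few illustrative checks for the items above that only produce meaningful results on physical hardware (interface and device names are placeholders; `smartctl` requires the smartmontools package):

```sh
sudo ethtool eno1                              # negotiated link speed and duplex on the physical NIC
sudo ethtool -k eno1 | grep -E 'tso|gso|gro'   # hardware offload features
sudo smartctl -a /dev/sda                      # SMART health data from the disk
tailscale netcheck                             # measures the real uplink used by a Tailscale exit node
```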

The migration path keeps the same Ansible roles and GitOps layout and switches only the host inventory and hardware assumptions.

Follow the Prerequisites and System Preparation sections on the real machine.

Update ansible/inventory/hosts.yaml with the bare metal host IPs and user, then run the provisioning playbook again.
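
A sketch of that change; the group layout, host names, IPs, and the `site.yaml` playbook name are placeholders for whatever this repository actually uses:

```sh
# Illustrative inventory only; mirror the structure already present in ansible/inventory/hosts.yaml.
cat > ansible/inventory/hosts.yaml <<'EOF'
all:
  children:
    control_plane:
      hosts:
        cp1:
          ansible_host: 192.168.1.10
          ansible_user: ubuntu
    workers:
      hosts:
        worker1:
          ansible_host: 192.168.1.11
          ansible_user: ubuntu
EOF

# Re-run provisioning against the updated inventory (playbook name is a placeholder)
ansible-playbook -i ansible/inventory/hosts.yaml site.yaml
```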

Follow the Kubernetes, Cilium CNI, and ArgoCD and GitOps sections.
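
As a condensed sketch of what those sections cover (exact flags, versions, and manifests come from the linked documents and the GitOps repository, not from here):

```sh
# First control plane node (endpoint is a placeholder)
sudo kubeadm init --control-plane-endpoint "<control-plane-ip>:6443"

# Each additional node joins with the token printed by kubeadm init
sudo kubeadm join <control-plane-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>

# CNI and GitOps bootstrap
cilium install                                  # Cilium CLI against the new cluster
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
```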