
Expose Headlamp With Tailscale Gateway

This guide shows how to deploy the Headlamp UI and expose it at headlamp.sudhanva.me through the Tailscale gateway using the Kubernetes Gateway API.

Step 1: Add the Headlamp app manifests

Create a new app directory under apps/ with separate manifests for each resource.

Example layout:

  • apps/headlamp/app.yaml
  • apps/headlamp/namespace.yaml
  • apps/headlamp/serviceaccount.yaml
  • apps/headlamp/clusterrolebinding.yaml
  • apps/headlamp/deployment.yaml
  • apps/headlamp/service.yaml
  • apps/headlamp/httproute.yaml

app.yaml defines the app name, path, and namespace.

name: headlamp
path: apps/headlamp
namespace: headlamp
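
For reference, deployment.yaml and service.yaml pair up roughly like this minimal sketch; the image tag, the -in-cluster flag, and Headlamp's default port 4466 are assumptions to verify, while the Service's port 80 is what the HTTPRoute below targets:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: headlamp
  namespace: headlamp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: headlamp
  template:
    metadata:
      labels:
        app: headlamp
    spec:
      serviceAccountName: headlamp
      containers:
        - name: headlamp
          image: ghcr.io/headlamp-k8s/headlamp:latest   # assumption: upstream image
          args: ["-in-cluster"]                         # assumption: in-cluster auth flag
          ports:
            - containerPort: 4466                       # Headlamp's default listen port
---
apiVersion: v1
kind: Service
metadata:
  name: headlamp
  namespace: headlamp
spec:
  selector:
    app: headlamp
  ports:
    - name: http
      port: 80            # matched by the HTTPRoute backendRef below
      targetPort: 4466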

Expose the service with an HTTPRoute that points to the Tailscale Gateway.

apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: headlamp
  namespace: headlamp
  annotations:
    argocd.argoproj.io/sync-options: SkipDryRunOnMissingResource=true
spec:
  parentRefs:
    - name: tailscale-gateway
      namespace: tailscale
      sectionName: https
  hostnames:
    - headlamp.sudhanva.me
  rules:
    - backendRefs:
        - name: headlamp
          port: 80

Step 2: Commit and push

ArgoCD watches the repo and applies changes via ApplicationSets.

git add apps/headlamp
git commit -m "Add headlamp app"
git push
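
To confirm the sync landed, check the Application and the pods (assuming ArgoCD registered the app under the name from app.yaml):

kubectl -n argocd get application headlamp
kubectl -n headlamp get pods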

Step 3: Access Headlamp

Open https://headlamp.sudhanva.me in your browser. Use a service account token to authenticate.

kubectl -n headlamp create token headlamp

Step 4: Adjust permissions if needed

The default setup uses cluster-admin for the Headlamp service account. If you want read-only access, bind the service account to a more restrictive cluster role.
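
For example, a read-only setup can bind the service account to the built-in view ClusterRole, as in this sketch (note that view excludes Secrets and most cluster-scoped resources):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: headlamp-view
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
  - kind: ServiceAccount
    name: headlamp
    namespace: headlamp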

OIDC Login With Vault

This enables long-lived OIDC logins instead of short-lived service account tokens. It uses Vault as the identity provider and requires the Kubernetes API server to trust Vault as an OIDC issuer.

Step 1: Create Vault OIDC key, provider, and client

Log into Vault with an admin token:

kubectl -n vault exec -it vault-0 -- vault login

Create a signing key and provider:

kubectl -n vault exec -it vault-0 -- vault write identity/oidc/key/headlamp rotation_period=24h
kubectl -n vault exec -it vault-0 -- vault write identity/oidc/provider/headlamp \
  allowed_client_ids="*" \
  issuer="https://vault.sudhanva.me/v1/identity/oidc/provider/headlamp"

Create the client and capture its ID and secret:

kubectl -n vault exec -it vault-0 -- vault write identity/oidc/client/headlamp \
  redirect_uris="https://headlamp.sudhanva.me/oidc-callback"
kubectl -n vault exec -it vault-0 -- vault read -field=client_id identity/oidc/client/headlamp
kubectl -n vault exec -it vault-0 -- vault read -field=client_secret identity/oidc/client/headlamp
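
If you want those values in shell variables for the later steps, a sketch (exec without -t keeps carriage returns out of the captured output):

CLIENT_ID=$(kubectl -n vault exec vault-0 -- vault read -field=client_id identity/oidc/client/headlamp)
CLIENT_SECRET=$(kubectl -n vault exec vault-0 -- vault read -field=client_secret identity/oidc/client/headlamp)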

Step 2: Enable a Vault login method for users

Vault must allow humans to authenticate before it can issue OIDC codes. Enable a login method (for example userpass) and grant it access to the authorize endpoint.

kubectl -n vault exec -it vault-0 -- vault auth enable userpass
kubectl -n vault exec -it vault-0 -- /bin/sh -c 'cat > /tmp/headlamp-oidc.hcl <<EOF
path "identity/oidc/provider/headlamp/authorize" {
  capabilities = ["read"]
}
EOF'
kubectl -n vault exec -it vault-0 -- vault policy write headlamp-oidc /tmp/headlamp-oidc.hcl
kubectl -n vault exec -it vault-0 -- vault write auth/userpass/users/headlamp \
  password="REPLACE_ME" policies="default,headlamp-oidc"

Authorize the user for the Headlamp OIDC client by creating an entity, alias, and assignment, then attach that assignment to the client:

kubectl -n vault exec -it vault-0 -- vault auth list
kubectl -n vault exec -it vault-0 -- vault write -format=json identity/entity name="headlamp"
kubectl -n vault exec -it vault-0 -- vault write identity/entity-alias name="headlamp" \
  canonical_id="REPLACE_WITH_ENTITY_ID" mount_accessor="REPLACE_WITH_USERPASS_ACCESSOR"
kubectl -n vault exec -it vault-0 -- vault write identity/oidc/assignment/headlamp \
  entity_ids="REPLACE_WITH_ENTITY_ID"
kubectl -n vault exec -it vault-0 -- vault write identity/oidc/client/headlamp \
  redirect_uris="https://headlamp.sudhanva.me/oidc-callback" \
  assignments="headlamp"

Use the userpass credentials to log in when Vault prompts during the OIDC flow.

If you prefer to allow any authenticated Vault user, attach Vault's built-in allow_all assignment instead of a custom one, as shown in the sketch below. If you delete and re-create the client, Vault generates a new client ID and secret, so update the KV entry and the Kubernetes API server flags afterwards.
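
A sketch of that variant using the built-in allow_all assignment (deleting and re-creating the client is what regenerates its credentials):

kubectl -n vault exec -it vault-0 -- vault delete identity/oidc/client/headlamp
kubectl -n vault exec -it vault-0 -- vault write identity/oidc/client/headlamp \
  redirect_uris="https://headlamp.sudhanva.me/oidc-callback" \
  assignments="allow_all"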

Step 3: Store Headlamp OIDC settings in Vault

kubectl -n vault exec -it vault-0 -- vault kv put kv/headlamp/oidc \
  client_id="REPLACE_ME" \
  client_secret="REPLACE_ME" \
  issuer_url="https://vault.sudhanva.me/v1/identity/oidc/provider/headlamp" \
  callback_url="https://headlamp.sudhanva.me/oidc-callback" \
  scopes="openid"

Step 4: Configure Kubernetes API server OIDC

Update your kubeadm config to include OIDC settings. Use the client_id returned by Vault.

apiServer:
  extraArgs:
    oidc-issuer-url: https://vault.sudhanva.me/v1/identity/oidc/provider/headlamp
    oidc-client-id: REPLACE_WITH_VAULT_CLIENT_ID
    oidc-username-claim: sub
    oidc-groups-claim: groups
    oidc-username-prefix: "oidc:"
    oidc-groups-prefix: "oidc:"

Apply the change using your kubeadm workflow and restart the API server. This is a control plane change and should be done directly on the control plane node.

If you edit the static manifest directly, keep the oidc: prefixes quoted to avoid YAML parsing errors.
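
One way to apply the change is to regenerate the API server static pod manifest from the updated config on the control plane node; the kubelet restarts the API server automatically when the manifest changes. The config file path here is an assumption:

sudo kubeadm init phase control-plane apiserver --config /etc/kubernetes/kubeadm-config.yaml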

Step 4a: Ensure cluster DNS resolves Vault

The API server and Headlamp pods must be able to reach vault.sudhanva.me for OIDC token validation and exchange. Tailscale DNS returns Tailscale IPs (100.x.x.x) that pods cannot route to directly.

This repo uses split-horizon DNS to solve this. CoreDNS rewrites *.sudhanva.me queries to the internal gateway service, which pods can reach via the cluster network:

apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    sudhanva.me:53 {
        errors
        cache 30
        rewrite name regex (.*)\.sudhanva\.me gateway-internal.envoy-gateway.svc.cluster.local answer auto
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
            ttl 30
        }
    }

The gateway-internal Service in the envoy-gateway namespace selects the Envoy pods by label, so it tracks the gateway dynamically without hardcoded IPs:

apiVersion: v1
kind: Service
metadata:
  name: gateway-internal
  namespace: envoy-gateway
spec:
  type: ClusterIP
  selector:
    gateway.envoyproxy.io/owning-gateway-name: tailscale-gateway
    gateway.envoyproxy.io/owning-gateway-namespace: tailscale
  ports:
    - name: https
      port: 443
      targetPort: 10443
These manifests live in infrastructure/coredns/configmap.yaml and infrastructure/gateway/internal-service.yaml.
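
To verify the rewrite, resolve the hostname from a throwaway pod; it should return the gateway-internal ClusterIP rather than a 100.x.x.x Tailscale address:

kubectl run dnstest --rm -it --restart=Never --image=busybox:1.36 -- nslookup vault.sudhanva.me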

Step 5: Sync Headlamp and log in

Headlamp reads OIDC config from the headlamp-oidc Secret created by External Secrets. After ArgoCD syncs the app, use the Sign In button.
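
For reference, the ExternalSecret might look like this sketch; the secret store name and the external-secrets API version are assumptions to check against the repo:

apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: headlamp-oidc
  namespace: headlamp
spec:
  secretStoreRef:
    name: vault-backend          # assumption: your (Cluster)SecretStore name
    kind: ClusterSecretStore
  target:
    name: headlamp-oidc
  dataFrom:
    - extract:
        key: headlamp/oidc       # the kv/headlamp/oidc entry from Step 3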

Repo Wiring For OIDC

These files implement the OIDC wiring for Headlamp:

  • apps/headlamp/external-secret-oidc.yaml
  • apps/headlamp/deployment.yaml
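
A sketch of the deployment side, assuming Headlamp follows the HEADLAMP_CONFIG_* environment convention used by the metrics flag below; verify the exact variable names against the repo's deployment.yaml:

env:
  - name: HEADLAMP_CONFIG_OIDC_CLIENT_ID
    valueFrom:
      secretKeyRef:
        name: headlamp-oidc
        key: client_id
  - name: HEADLAMP_CONFIG_OIDC_CLIENT_SECRET
    valueFrom:
      secretKeyRef:
        name: headlamp-oidc
        key: client_secret
  - name: HEADLAMP_CONFIG_OIDC_IDP_ISSUER_URL
    valueFrom:
      secretKeyRef:
        name: headlamp-oidc
        key: issuer_url
  - name: HEADLAMP_CONFIG_OIDC_SCOPES
    valueFrom:
      secretKeyRef:
        name: headlamp-oidc
        key: scopes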

OIDC Admin Access

Headlamp users authenticate as OIDC identities. To grant full admin access, bind the OIDC subject to cluster-admin.

Use a separate manifest so ArgoCD can manage it:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: headlamp-oidc-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: oidc:REPLACE_WITH_ENTITY_ID

The repo includes apps/headlamp/clusterrolebinding-oidc.yaml. Replace the subject with your Vault entity ID if it changes.

Prometheus Metrics

Headlamp exposes /metrics when HEADLAMP_CONFIG_METRICS_ENABLED is set. The repo enables this flag and adds a ServiceMonitor so Prometheus picks it up automatically.
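
A sketch of such a ServiceMonitor, assuming the Service port is named http and your Prometheus instance selects monitors by a release label (adjust both to your stack):

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: headlamp
  namespace: headlamp
  labels:
    release: prometheus          # assumption: match your Prometheus selector
spec:
  selector:
    matchLabels:
      app: headlamp
  endpoints:
    - port: http
      path: /metrics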

kubectl -n headlamp get servicemonitors