# MetalLB (layer 2) — noble

**Prerequisite (Talos with `cni: none`):** install Cilium (or your chosen CNI) before MetalLB.

Until the CNI is up, nodes stay `NotReady` and carry taints such as `node.kubernetes.io/network-unavailable` (and `node.kubernetes.io/not-ready`). The scheduler then reports `0/N nodes are available: N node(s) had untolerated taint(s)` and MetalLB stays `Pending` — its chart does not tolerate those taints, by design. Install Cilium first (`talos/CLUSTER-BUILD.md` Phase B); once nodes are `Ready`, reinstall or restart MetalLB if needed.
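To confirm the cluster is past this state before installing MetalLB, a quick check (plain `kubectl`, nothing MetalLB-specific):

```shell
# Every node should be Ready before MetalLB is installed.
kubectl get nodes

# List each node's taint keys; network-unavailable / not-ready taints
# here mean the CNI install is not finished yet.
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints[*].key}{"\n"}{end}'
```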

**Order:** namespace (Pod Security labels) → Helm (CRDs + controller) → kustomize (pool + L2 advertisement).

If `kubectl apply -k` fails with `no matches for kind "IPAddressPool"` / `ensure CRDs are installed first`, the Helm chart has not been installed yet — the CRDs come from the chart.
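You can check directly whether the chart's CRDs exist yet:

```shell
# Both CRDs are created by the MetalLB Helm chart; a NotFound error
# means step 1 below has not run yet.
kubectl get crd ipaddresspools.metallb.io l2advertisements.metallb.io
```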

Pod Security warnings (`would violate PodSecurity "restricted"`): MetalLB's speaker/FRR pods use `hostNetwork`, `NET_ADMIN`, etc. That is expected unless `metallb-system` is labeled `privileged`. Apply `namespace.yaml` before Helm so the namespace is created with the right labels (and omit `--create-namespace` on the Helm command), or patch an existing namespace:

```shell
kubectl apply -f clusters/noble/apps/metallb/namespace.yaml
```
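For reference, a minimal sketch of what such a namespace manifest looks like — the actual file lives in the repo; the labels here are assumed from the Pod Security discussion above:

```shell
# Sketch only: the repo's namespace.yaml is the source of truth.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Namespace
metadata:
  name: metallb-system
  labels:
    pod-security.kubernetes.io/enforce: privileged
    pod-security.kubernetes.io/audit: privileged
    pod-security.kubernetes.io/warn: privileged
EOF
```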

If you already ran Helm with `--create-namespace`, either `kubectl apply -f namespace.yaml` (which merges the labels) or:

```shell
kubectl label namespace metallb-system \
  pod-security.kubernetes.io/enforce=privileged \
  pod-security.kubernetes.io/audit=privileged \
  pod-security.kubernetes.io/warn=privileged --overwrite
```

Then restart MetalLB pods if they were failing (`kubectl get pods -n metallb-system`; delete stuck pods or `kubectl rollout restart` each Deployment / DaemonSet in that namespace).
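With the release name `metallb` used below, the restart typically looks like this — the workload names are assumed from the chart's naming conventions, so confirm them with `kubectl get deploy,ds -n metallb-system` first:

```shell
# Workload names assume release name "metallb" (metallb-controller /
# metallb-speaker); adjust if your cluster shows different names.
kubectl -n metallb-system rollout restart deployment/metallb-controller
kubectl -n metallb-system rollout restart daemonset/metallb-speaker
kubectl -n metallb-system get pods
```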

  1. Install the MetalLB chart (CRDs + controller). If you applied `namespace.yaml` above, skip `--create-namespace`:

     ```shell
     helm repo add metallb https://metallb.github.io/metallb
     helm repo update
     helm upgrade --install metallb metallb/metallb \
       --namespace metallb-system \
       --wait --timeout 20m
     ```

  2. Apply this folder's pool and L2 advertisement:

     ```shell
     kubectl apply -k clusters/noble/apps/metallb
     ```

  3. Confirm a Service of `type: LoadBalancer` receives an address in 192.168.50.210–192.168.50.229 (e.g. `kubectl get svc -n traefik traefik` after installing Traefik in `clusters/noble/apps/traefik/`).
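To sanity-check an assigned address against that pool without a cluster handy, a small pure-shell helper (`ip_to_int` and `in_pool` are illustrative names, not MetalLB tooling):

```shell
# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip_to_int() {
  local IFS=.
  set -- $1
  echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}

# Succeed iff the address falls inside 192.168.50.210-192.168.50.229.
in_pool() {
  local ip lo hi
  ip=$(ip_to_int "$1")
  lo=$(ip_to_int 192.168.50.210)
  hi=$(ip_to_int 192.168.50.229)
  [ "$ip" -ge "$lo" ] && [ "$ip" -le "$hi" ]
}

in_pool 192.168.50.211 && echo "in pool"   # prints "in pool"
```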

Reserve one IP in that range for Argo CD (e.g. 192.168.50.210) via `spec.loadBalancerIP` or chart values when you expose the server. Traefik pins 192.168.50.211 in `clusters/noble/apps/traefik/values.yaml`.
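A hedged sketch of the chart-values route for Argo CD — the value keys here assume the argo-helm chart's layout, so verify them against the chart you actually deploy:

```shell
# Sketch only: server.service.* key names are assumed from the
# community argo-cd chart; check its values.yaml before running.
helm upgrade --install argocd argo/argo-cd \
  --namespace argocd --create-namespace \
  --set server.service.type=LoadBalancer \
  --set server.service.loadBalancerIP=192.168.50.210
```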

## Pending MetalLB pods

  1. `kubectl get nodes` — is every node `Ready`? If `NotReady` or `NetworkUnavailable`, finish the CNI install first.
  2. `kubectl describe pod -n metallb-system <pod-name>` — read Events at the bottom (`0/N nodes are available: …`).
  3. The L2 speaker uses the node's uplink; kube-vip in this repo expects `ens18` on control planes (`clusters/noble/apps/kube-vip/vip-daemonset.yaml`). If your NIC name differs, change `vip_interface` there.
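Since Talos nodes have no shell, NIC names have to be read through the API; assuming `talosctl` is already configured for the node, something like:

```shell
# List network link resources (interface names, state) on one node.
# <node-ip> is a placeholder for a control-plane address.
talosctl -n <node-ip> get links
```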