# MetalLB (layer 2) — noble
**Prerequisite (Talos + `cni: none`):** install **Cilium** (or your CNI) **before** MetalLB.

Until the CNI is up, nodes stay **`NotReady`** and carry taints such as **`node.kubernetes.io/network-unavailable`** (and **`not-ready`**). The scheduler then reports **`0/N nodes are available: N node(s) had untolerated taint(s)`** and MetalLB stays **`Pending`** — its chart does not tolerate those taints, by design. **Install Cilium first** (`talos/CLUSTER-BUILD.md` Phase B); when nodes are **`Ready`**, reinstall MetalLB or restart its rollout if needed.

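
To confirm the taints are what is blocking scheduling (a quick check; assumes a working kubeconfig for this cluster):

```bash
# Show readiness, then list any taints still applied to each node.
kubectl get nodes
kubectl get nodes -o custom-columns='NAME:.metadata.name,TAINTS:.spec.taints[*].key'
```

Once every node is `Ready` and the taint column shows `<none>`, MetalLB pods should schedule.
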
**Order:** namespace (Pod Security) → **Helm** (CRDs + controller) → **kustomize** (pool + L2).

If `kubectl apply -k` fails with **`no matches for kind "IPAddressPool"`** / **`ensure CRDs are installed first`**, the MetalLB Helm chart has not been installed yet; run step 1 below first.

**Pod Security warnings** (`would violate PodSecurity "restricted"`): MetalLB’s speaker/FRR use `hostNetwork`, `NET_ADMIN`, etc. That is expected unless `metallb-system` is labeled **privileged**. Apply `namespace.yaml` **before** Helm so the namespace is created with the right labels (omit `--create-namespace` on Helm), or patch an existing namespace:

```bash
kubectl apply -f clusters/noble/apps/metallb/namespace.yaml
```

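
For reference, `namespace.yaml` is expected to carry the three Pod Security labels. The snippet below is a hypothetical reconstruction (the repo's file is authoritative):

```bash
# Sketch of the expected namespace manifest; verify against namespace.yaml.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Namespace
metadata:
  name: metallb-system
  labels:
    pod-security.kubernetes.io/enforce: privileged
    pod-security.kubernetes.io/audit: privileged
    pod-security.kubernetes.io/warn: privileged
EOF
```
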
If you already ran Helm with `--create-namespace`, either run `kubectl apply -f namespace.yaml` (it merges the labels into the existing namespace) or label the namespace directly:

```bash
kubectl label namespace metallb-system \
  pod-security.kubernetes.io/enforce=privileged \
  pod-security.kubernetes.io/audit=privileged \
  pod-security.kubernetes.io/warn=privileged --overwrite
```

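
Either way, verify the labels landed before (re)installing:

```bash
# All three pod-security labels should appear on the namespace.
kubectl get namespace metallb-system --show-labels
```
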
Then restart the MetalLB pods if they were failing: check `kubectl get pods -n metallb-system`, delete stuck pods, or `kubectl rollout restart` each `Deployment` / `DaemonSet` in that namespace.

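
A concrete restart sequence (the workload names below are the chart's defaults for a release named `metallb`; adjust if yours differ):

```bash
kubectl -n metallb-system rollout restart deployment/metallb-controller
kubectl -n metallb-system rollout restart daemonset/metallb-speaker
kubectl -n metallb-system rollout status deployment/metallb-controller
```
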
1. Install the MetalLB chart (CRDs + controller). If you applied `namespace.yaml` above, **skip** `--create-namespace`:
   ```bash
   helm repo add metallb https://metallb.github.io/metallb
   helm repo update
   helm upgrade --install metallb metallb/metallb \
     --namespace metallb-system \
     --wait
   ```
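
   To verify the chart and its CRDs before moving on (this guards against the `IPAddressPool` error noted above):

   ```bash
   helm status metallb -n metallb-system
   kubectl get crds | grep metallb.io
   kubectl get pods -n metallb-system
   ```
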
2. Apply this folder’s pool and L2 advertisement:
   ```bash
   kubectl apply -k clusters/noble/apps/metallb
   ```
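
   The folder's manifests should be roughly equivalent to the sketch below (resource names here are illustrative, not the repo's; the address range matches step 3):

   ```bash
   # Hypothetical equivalent of the kustomize output; check the repo's files.
   kubectl apply -f - <<'EOF'
   apiVersion: metallb.io/v1beta1
   kind: IPAddressPool
   metadata:
     name: default-pool
     namespace: metallb-system
   spec:
     addresses:
       - 192.168.50.210-192.168.50.229
   ---
   apiVersion: metallb.io/v1beta1
   kind: L2Advertisement
   metadata:
     name: default-l2
     namespace: metallb-system
   spec:
     ipAddressPools:
       - default-pool
   EOF
   ```
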
3. Confirm that a test `Service` of `type: LoadBalancer` receives an address in `192.168.50.210`–`192.168.50.229`.

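
One way to run that test (a throwaway Deployment; clean it up afterwards):

```bash
kubectl create deployment lb-test --image=nginx
kubectl expose deployment lb-test --port=80 --type=LoadBalancer
kubectl get svc lb-test -w   # wait for an EXTERNAL-IP from the pool
# Cleanup:
kubectl delete service/lb-test deployment/lb-test
```
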
Reserve **one** IP in that range for Argo CD (e.g. `192.168.50.210`) via `spec.loadBalancerIP` or chart values when you expose the server.

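
For example, via chart values when installing Argo CD (value names assumed from the community `argo-helm` chart; verify against your chart version):

```bash
helm upgrade --install argocd argo/argo-cd \
  --namespace argocd --create-namespace \
  --set server.service.type=LoadBalancer \
  --set server.service.loadBalancerIP=192.168.50.210
```
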
### `Pending` MetalLB pods

1. `kubectl get nodes` — every node **`Ready`**? If **`NotReady`** or **`NetworkUnavailable`**, finish the **CNI** install first.
2. `kubectl describe pod -n metallb-system <pod-name>` — read **Events** at the bottom (`0/N nodes are available: …`).
3. The L2 speaker uses the node’s uplink; kube-vip in this repo expects **`ens18`** on control planes (`clusters/noble/apps/kube-vip/vip-daemonset.yaml`). If your NIC name differs, change `vip_interface` there.
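
To see the interface names a Talos node actually has (assumes `talosctl` is configured for this cluster; `<node-ip>` is a placeholder):

```bash
# Lists link resources (interface names, state) as Talos sees them.
talosctl -n <node-ip> get links
```
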