# Talos deployment (4 nodes)

This directory contains a `talhelper` cluster definition for a 4-node Talos
cluster:

- 3 hybrid control-plane/worker nodes: `noble-cp-1..3`
- 1 worker-only node: `noble-worker-1`
- `allowSchedulingOnControlPlanes: true`
- CNI: `none` (Cilium is installed via GitOps)
## 1) Update values for your environment

Edit `talconfig.yaml`:

- `endpoint` (the Kubernetes API VIP or load-balancer IP)
- each node's `ipAddress`
- each node's `installDisk` (for example `/dev/sda` or `/dev/nvme0n1`)
- `talosVersion` / `kubernetesVersion`, if desired
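
For orientation, a minimal `talconfig.yaml` has roughly this shape (field names per the talhelper docs). The versions and disk below are placeholders, not the values used by this cluster:

```yaml
clusterName: noble
endpoint: https://192.168.50.230:6443   # Kubernetes API VIP
talosVersion: v1.7.0                    # placeholder; pin your own
kubernetesVersion: v1.30.0              # placeholder; pin your own
allowSchedulingOnControlPlanes: true
cniConfig:
  name: none                            # Cilium is installed via GitOps
nodes:
  - hostname: noble-cp-1
    ipAddress: 192.168.50.20
    installDisk: /dev/sda               # placeholder; match your hardware
    controlPlane: true
```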

## 2) Generate cluster secrets and machine configs

From this directory:

```bash
talhelper gensecret > talsecret.sops.yaml
talhelper genconfig
```

Generated machine configs are written to `clusterconfig/`.

## 3) Apply Talos configs

Apply each node's config file to the matching node IP from `talconfig.yaml`:

```bash
talosctl apply-config --insecure -n 192.168.50.20 -f clusterconfig/noble-noble-cp-1.yaml
talosctl apply-config --insecure -n 192.168.50.30 -f clusterconfig/noble-noble-cp-2.yaml
talosctl apply-config --insecure -n 192.168.50.40 -f clusterconfig/noble-noble-cp-3.yaml
talosctl apply-config --insecure -n 192.168.50.10 -f clusterconfig/noble-noble-worker-1.yaml
```
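
Equivalently, the per-node commands can be driven from a small name-to-IP map (IPs as listed above). This sketch is a dry run that only prints each command; drop the `echo` to actually apply the configs. Note that talhelper prefixes each generated file with the cluster name, hence `noble-noble-…`:

```shell
#!/usr/bin/env bash
# Map each talconfig node name to its IP (values from talconfig.yaml).
# Requires bash 4+ for associative arrays.
declare -A nodes=(
  [noble-cp-1]=192.168.50.20
  [noble-cp-2]=192.168.50.30
  [noble-cp-3]=192.168.50.40
  [noble-worker-1]=192.168.50.10
)

# Dry run: print each talosctl command; remove "echo" to execute for real.
for name in "${!nodes[@]}"; do
  echo talosctl apply-config --insecure \
    -n "${nodes[$name]}" -f "clusterconfig/noble-${name}.yaml"
done
```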

## 4) Bootstrap the cluster

After all nodes are up, run the bootstrap exactly once, against any single control-plane node:

```bash
talosctl bootstrap -n 192.168.50.20 -e 192.168.50.230
talosctl kubeconfig -n 192.168.50.20 -e 192.168.50.230 .
```

## 5) Validate

```bash
talosctl -n 192.168.50.20 -e 192.168.50.230 health
kubectl get nodes -o wide
```

## 6) GitOps-pinned Cilium values

The Cilium settings that worked for this Talos cluster are now persisted in:

- `clusters/noble/apps/cilium/application.yaml`

That Argo CD `Application` pins chart `1.16.6` and includes the required Helm
values for this environment (API host/port, cgroup settings, IPAM CIDR, and
security capabilities), so future reconciles do not drift back to defaults.
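
For reference, Cilium on Talos typically needs Helm values along these lines. This is an illustrative sketch of the categories listed above, not a copy of the pinned `application.yaml`; the pod CIDR here is an assumption, so consult the actual manifest:

```yaml
k8sServiceHost: 192.168.50.230      # Kubernetes API VIP
k8sServicePort: 6443
cgroup:
  autoMount:
    enabled: false                  # Talos manages cgroup mounts itself
  hostRoot: /sys/fs/cgroup
ipam:
  operator:
    clusterPoolIPv4PodCIDRList:
      - 10.244.0.0/16               # assumed pod CIDR; use the real one
securityContext:
  capabilities:
    ciliumAgent:
      [CHOWN, KILL, NET_ADMIN, NET_RAW, IPC_LOCK, SYS_ADMIN,
       SYS_RESOURCE, DAC_OVERRIDE, FOWNER, SETGID, SETUID]
    cleanCiliumState: [NET_ADMIN, SYS_ADMIN, SYS_RESOURCE]
```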

## 7) Argo CD app-of-apps bootstrap

This repo includes an app-of-apps structure for cluster apps:

- Root app: `clusters/noble/root-application.yaml`
- Child apps index: `clusters/noble/apps/kustomization.yaml`
- Argo CD app: `clusters/noble/apps/argocd/application.yaml`
- Cilium app: `clusters/noble/apps/cilium/application.yaml`

Bootstrap once from your workstation:

```bash
kubectl apply -k clusters/noble/bootstrap/argocd
kubectl apply -f clusters/noble/root-application.yaml
```

After this, Argo CD continuously reconciles all applications under
`clusters/noble/apps/`.

## 8) kube-vip API VIP (`192.168.50.230`)

HAProxy has been removed in favor of `kube-vip` running on control-plane nodes.

Manifests are in:

- `clusters/noble/apps/kube-vip/application.yaml`
- `clusters/noble/apps/kube-vip/vip-rbac.yaml`
- `clusters/noble/apps/kube-vip/vip-daemonset.yaml`

The DaemonSet advertises `192.168.50.230` in ARP mode and fronts the Kubernetes
API on port `6443`.
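
The heart of that DaemonSet is a handful of kube-vip environment variables. An illustrative excerpt follows; the interface name is an assumption, so check `vip-daemonset.yaml` for the real values:

```yaml
env:
  - name: vip_arp          # ARP mode: the leader answers ARP for the VIP
    value: "true"
  - name: address          # the advertised VIP
    value: "192.168.50.230"
  - name: port             # fronted Kubernetes API port
    value: "6443"
  - name: cp_enable        # enable control-plane load balancing
    value: "true"
  - name: vip_interface    # assumed NIC name; match the node's interface
    value: eth0
```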

Apply manually (or let Argo CD sync from the root app):

```bash
kubectl apply -k clusters/noble/apps/kube-vip
```

Validate:

```bash
kubectl -n kube-system get pods -l app.kubernetes.io/name=kube-vip-ds -o wide
nc -vz 192.168.50.230 6443
```

## 9) Argo CD via DNS host (no port)

Argo CD is exposed through Cilium Ingress with the host:

- `argo.noble.lab.pcenicni.dev`

Ingress manifest:

- `clusters/noble/bootstrap/argocd/argocd-ingress.yaml`

After syncing the manifests, create a Pi-hole DNS A record:

- `argo.noble.lab.pcenicni.dev` -> `192.168.50.230`
|
|