Refactor noble cluster configuration: move the Ansible-managed platform manifests out of the deprecated apps structure into clusters/noble/bootstrap. Update paths in the affected YAML files and READMEs to match the new layout; clusters/noble/apps now holds only optional GitOps-managed Application manifests.
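The path updates below were made per-file rather than as a blanket rename (`clusters/noble/apps/` still exists for optional GitOps Applications). A hypothetical sketch of one such substitution, demonstrated on a scratch file so it is self-contained — the file name and contents are illustrative, not from the repo:

```shell
# Scratch demo: rewrite one bootstrap-managed path reference, as this commit
# does by hand in each affected file. Only the traefik reference moves here;
# a repo-wide sed would be wrong because some apps/ paths are kept on purpose.
demo=$(mktemp -d)
printf '%s\n' '# -f clusters/noble/apps/traefik/values.yaml' > "$demo/README.md"
sed -i 's|clusters/noble/apps/traefik/|clusters/noble/bootstrap/traefik/|g' "$demo/README.md"
grep -o 'clusters/noble/bootstrap/traefik/[a-z.]*' "$demo/README.md"
# → clusters/noble/bootstrap/traefik/values.yaml
```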
@@ -53,10 +53,10 @@ Use **Settings → Repositories** in the UI, or `argocd repo add` / a `Secret` o
 ## 4. App-of-apps (optional GitOps only)
 
 Bootstrap **platform** workloads (CNI, ingress, cert-manager, Kyverno, observability, Vault, etc.) are installed by
-**`ansible/playbooks/noble.yml`** — not by Argo. **`apps/kustomization.yaml`** is empty by default.
+**`ansible/playbooks/noble.yml`** from **`clusters/noble/bootstrap/`** — not by Argo. **`clusters/noble/apps/kustomization.yaml`** is empty by default.
 
 1. Edit **`root-application.yaml`**: set **`repoURL`** and **`targetRevision`** to this repository. The **`resources-finalizer.argocd.argoproj.io/background`** finalizer uses Argo’s path-qualified form so **`kubectl apply`** does not warn about finalizer names.
-2. When you want Argo to manage specific apps, add **`Application`** manifests under **`apps/`** (see **`apps/README.md`**).
+2. When you want Argo to manage specific apps, add **`Application`** manifests under **`clusters/noble/apps/`** (see **`clusters/noble/apps/README.md`**).
 3. Apply the root:
 
 ```bash
@@ -64,7 +64,7 @@ Bootstrap **platform** workloads (CNI, ingress, cert-manager, Kyverno, observabi
 ```
 
 If you migrated from GitOps-managed **`noble-platform`** / **`noble-kyverno`**, delete stale **`Application`** objects on
-the cluster (see **`apps/README.md`**) then re-apply the root.
+the cluster (see **`clusters/noble/apps/README.md`**) then re-apply the root.
 
 ## Versions
@@ -3,8 +3,8 @@
 # 1. Set spec.source.repoURL (and targetRevision — **HEAD** tracks the remote default branch) to this repo.
 # 2. kubectl apply -f clusters/noble/bootstrap/argocd/root-application.yaml
 #
-# **apps/kustomization.yaml** is intentionally empty: core platform is installed by **ansible/playbooks/noble.yml**,
-# not Argo. Add **Application** manifests under **apps/** only for optional GitOps-managed workloads.
+# **clusters/noble/apps** holds optional **Application** manifests. Core platform is installed by
+# **ansible/playbooks/noble.yml** from **clusters/noble/bootstrap/**.
 #
 apiVersion: argoproj.io/v1alpha1
 kind: Application
@@ -21,7 +21,7 @@ spec:
   source:
     repoURL: https://gitea.pcenicni.ca/gsdavidp/home-server.git
    targetRevision: HEAD
-    path: clusters/noble/bootstrap/argocd/apps
+    path: clusters/noble/apps
   destination:
     server: https://kubernetes.default.svc
     namespace: argocd
@@ -19,7 +19,7 @@ Without this Secret, **`ClusterIssuer`** will not complete certificate orders.
 1. Create the namespace:
 
 ```bash
-kubectl apply -f clusters/noble/apps/cert-manager/namespace.yaml
+kubectl apply -f clusters/noble/bootstrap/cert-manager/namespace.yaml
 ```
 
 2. Install the chart (CRDs included via `values.yaml`):
@@ -30,7 +30,7 @@ Without this Secret, **`ClusterIssuer`** will not complete certificate orders.
 helm upgrade --install cert-manager jetstack/cert-manager \
   --namespace cert-manager \
   --version v1.20.0 \
-  -f clusters/noble/apps/cert-manager/values.yaml \
+  -f clusters/noble/bootstrap/cert-manager/values.yaml \
   --wait
 ```
 
@@ -39,7 +39,7 @@ Without this Secret, **`ClusterIssuer`** will not complete certificate orders.
 4. Apply ClusterIssuers (staging then prod, or both):
 
 ```bash
-kubectl apply -k clusters/noble/apps/cert-manager
+kubectl apply -k clusters/noble/bootstrap/cert-manager
 ```
 
 5. Confirm:
@@ -2,13 +2,13 @@
 #
 # Chart: jetstack/cert-manager — pin version on the helm command (e.g. v1.20.0).
 #
-# kubectl apply -f clusters/noble/apps/cert-manager/namespace.yaml
+# kubectl apply -f clusters/noble/bootstrap/cert-manager/namespace.yaml
 # helm repo add jetstack https://charts.jetstack.io
 # helm repo update
 # helm upgrade --install cert-manager jetstack/cert-manager -n cert-manager \
-# --version v1.20.0 -f clusters/noble/apps/cert-manager/values.yaml --wait
+# --version v1.20.0 -f clusters/noble/bootstrap/cert-manager/values.yaml --wait
 #
-# kubectl apply -k clusters/noble/apps/cert-manager
+# kubectl apply -k clusters/noble/bootstrap/cert-manager
 
 crds:
   enabled: true
@@ -14,7 +14,7 @@ helm repo update
 helm upgrade --install cilium cilium/cilium \
   --namespace kube-system \
   --version 1.16.6 \
-  -f clusters/noble/apps/cilium/values.yaml \
+  -f clusters/noble/bootstrap/cilium/values.yaml \
   --wait
 ```
 
@@ -25,7 +25,7 @@ kubectl -n kube-system rollout status ds/cilium
 kubectl get nodes
 ```
 
-When nodes are **Ready**, continue with **MetalLB** (`clusters/noble/apps/metallb/README.md`) and other Phase B items. **kube-vip** for the Kubernetes API VIP is separate (L2 ARP); it can run after the API is reachable.
+When nodes are **Ready**, continue with **MetalLB** (`clusters/noble/bootstrap/metallb/README.md`) and other Phase B items. **kube-vip** for the Kubernetes API VIP is separate (L2 ARP); it can run after the API is reachable.
 
 ## 2. Optional: kube-proxy replacement (phase 2)
@@ -11,9 +11,9 @@ Syncs secrets from external systems into Kubernetes **Secret** objects via **Ext
 ```bash
 helm repo add external-secrets https://charts.external-secrets.io
 helm repo update
-kubectl apply -f clusters/noble/apps/external-secrets/namespace.yaml
+kubectl apply -f clusters/noble/bootstrap/external-secrets/namespace.yaml
 helm upgrade --install external-secrets external-secrets/external-secrets -n external-secrets \
   --version 2.2.0 -f clusters/noble/apps/external-secrets/values.yaml --wait
-  --version 2.2.0 -f clusters/noble/apps/external-secrets/values.yaml --wait
+  --version 2.2.0 -f clusters/noble/bootstrap/external-secrets/values.yaml --wait
 ```
 
 Verify:
@@ -10,7 +10,7 @@
 # Adjust server, mountPath, role, and path to match your Vault deployment. If Vault uses TLS
 # with a private CA, set provider.vault.caProvider or caBundle (see README).
 #
-# kubectl apply -f clusters/noble/apps/external-secrets/examples/vault-cluster-secret-store.yaml
+# kubectl apply -f clusters/noble/bootstrap/external-secrets/examples/vault-cluster-secret-store.yaml
 ---
 apiVersion: external-secrets.io/v1
 kind: ClusterSecretStore
@@ -2,9 +2,9 @@
 #
 # helm repo add external-secrets https://charts.external-secrets.io
 # helm repo update
-# kubectl apply -f clusters/noble/apps/external-secrets/namespace.yaml
+# kubectl apply -f clusters/noble/bootstrap/external-secrets/namespace.yaml
 # helm upgrade --install external-secrets external-secrets/external-secrets -n external-secrets \
-# --version 2.2.0 -f clusters/noble/apps/external-secrets/values.yaml --wait
+# --version 2.2.0 -f clusters/noble/bootstrap/external-secrets/values.yaml --wait
 #
 # CRDs are installed by the chart (installCRDs: true). Vault ClusterSecretStore: see README + examples/.
 commonLabels: {}
@@ -5,11 +5,11 @@
 #
 # Talos: only **tail** `/var/log/containers` (no host **systemd** input — journal layout differs from typical Linux).
 #
-# kubectl apply -f clusters/noble/apps/fluent-bit/namespace.yaml
+# kubectl apply -f clusters/noble/bootstrap/fluent-bit/namespace.yaml
 # helm repo add fluent https://fluent.github.io/helm-charts
 # helm repo update
 # helm upgrade --install fluent-bit fluent/fluent-bit -n logging \
-# --version 0.56.0 -f clusters/noble/apps/fluent-bit/values.yaml --wait --timeout 15m
+# --version 0.56.0 -f clusters/noble/bootstrap/fluent-bit/values.yaml --wait --timeout 15m
 
 config:
   inputs: |
@@ -2,9 +2,9 @@
 # The Grafana sidecar watches ConfigMaps labeled **grafana_datasource: "1"** and loads YAML keys as files.
 # Does not require editing the kube-prometheus-stack Helm release.
 #
-# kubectl apply -f clusters/noble/apps/grafana-loki-datasource/loki-datasource.yaml
+# kubectl apply -f clusters/noble/bootstrap/grafana-loki-datasource/loki-datasource.yaml
 #
-# Remove with: kubectl delete -f clusters/noble/apps/grafana-loki-datasource/loki-datasource.yaml
+# Remove with: kubectl delete -f clusters/noble/bootstrap/grafana-loki-datasource/loki-datasource.yaml
 apiVersion: v1
 kind: ConfigMap
 metadata:
@@ -10,9 +10,9 @@
 ```bash
 helm repo add headlamp https://kubernetes-sigs.github.io/headlamp/
 helm repo update
-kubectl apply -f clusters/noble/apps/headlamp/namespace.yaml
+kubectl apply -f clusters/noble/bootstrap/headlamp/namespace.yaml
 helm upgrade --install headlamp headlamp/headlamp -n headlamp \
-  --version 0.40.1 -f clusters/noble/apps/headlamp/values.yaml --wait --timeout 10m
+  --version 0.40.1 -f clusters/noble/bootstrap/headlamp/values.yaml --wait --timeout 10m
 ```
 
 Sign-in uses a **ServiceAccount token** (Headlamp docs: create a limited SA for day-to-day use). This repo binds the Headlamp workload SA to the built-in **`edit`** ClusterRole (**`clusterRoleBinding.clusterRoleName: edit`** in **`values.yaml`**) — not **`cluster-admin`**. For cluster-scoped admin work, use **`kubectl`** with your admin kubeconfig. Optional **OIDC** in **`config.oidc`** replaces token login for SSO.
@@ -2,9 +2,9 @@
 #
 # helm repo add headlamp https://kubernetes-sigs.github.io/headlamp/
 # helm repo update
-# kubectl apply -f clusters/noble/apps/headlamp/namespace.yaml
+# kubectl apply -f clusters/noble/bootstrap/headlamp/namespace.yaml
 # helm upgrade --install headlamp headlamp/headlamp -n headlamp \
-# --version 0.40.1 -f clusters/noble/apps/headlamp/values.yaml --wait --timeout 10m
+# --version 0.40.1 -f clusters/noble/bootstrap/headlamp/values.yaml --wait --timeout 10m
 #
 # DNS: headlamp.apps.noble.lab.pcenicni.dev → Traefik LB (see talos/CLUSTER-BUILD.md).
 # Default chart RBAC is broad — restrict for production (Phase G).
@@ -4,10 +4,10 @@
 #
 # Install (use one terminal; chain with && so `helm upgrade` always runs after `helm repo update`):
 #
-# kubectl apply -f clusters/noble/apps/kube-prometheus-stack/namespace.yaml
+# kubectl apply -f clusters/noble/bootstrap/kube-prometheus-stack/namespace.yaml
 # helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
 # helm repo update && helm upgrade --install kube-prometheus prometheus-community/kube-prometheus-stack -n monitoring \
-# --version 82.15.1 -f clusters/noble/apps/kube-prometheus-stack/values.yaml --wait --timeout 30m
+# --version 82.15.1 -f clusters/noble/bootstrap/kube-prometheus-stack/values.yaml --wait --timeout 30m
 #
 # Why it looks "stalled": with --wait, Helm prints almost nothing until the release finishes (can be many minutes).
 # Do not use --timeout 5m for first install — Longhorn PVCs + StatefulSets often need 15–30m. To watch progress,
@@ -87,7 +87,7 @@ grafana:
     size: 10Gi
 
   # HTTPS via Traefik + cert-manager (ClusterIssuer letsencrypt-prod; same pattern as other *.apps.noble.lab.pcenicni.dev hosts).
-  # DNS: grafana.apps.noble.lab.pcenicni.dev → Traefik LoadBalancer (192.168.50.211) — see clusters/noble/apps/traefik/values.yaml
+  # DNS: grafana.apps.noble.lab.pcenicni.dev → Traefik LoadBalancer (192.168.50.211) — see clusters/noble/bootstrap/traefik/values.yaml
   ingress:
     enabled: true
     ingressClassName: traefik
@@ -109,4 +109,4 @@ grafana:
       # Traefik sets X-Forwarded-*; required for correct redirects and cookies behind the ingress.
       use_proxy_headers: true
 
-  # Loki datasource: apply `clusters/noble/apps/grafana-loki-datasource/loki-datasource.yaml` (sidecar ConfigMap) instead of additionalDataSources here.
+  # Loki datasource: apply `clusters/noble/bootstrap/grafana-loki-datasource/loki-datasource.yaml` (sidecar ConfigMap) instead of additionalDataSources here.
@@ -1,5 +1,6 @@
-# Plain Kustomize only (namespaces + extra YAML). Helm installs are driven by **ansible/playbooks/noble.yml**
-# (role **noble_platform**) — avoids **kustomize --enable-helm** in-repo.
+# Ansible bootstrap: plain Kustomize (namespaces + extra YAML). Helm installs are driven by
+# **ansible/playbooks/noble.yml** (role **noble_platform**) — avoids **kustomize --enable-helm** in-repo.
+# Optional GitOps workloads live under **../apps/** (Argo **noble-root**).
 apiVersion: kustomize.config.k8s.io/v1beta1
 kind: Kustomization
@@ -10,11 +10,11 @@ Admission policies using [Kyverno](https://kyverno.io/). The main chart installs
 ```bash
 helm repo add kyverno https://kyverno.github.io/kyverno/
 helm repo update
-kubectl apply -f clusters/noble/apps/kyverno/namespace.yaml
+kubectl apply -f clusters/noble/bootstrap/kyverno/namespace.yaml
 helm upgrade --install kyverno kyverno/kyverno -n kyverno \
-  --version 3.7.1 -f clusters/noble/apps/kyverno/values.yaml --wait --timeout 15m
+  --version 3.7.1 -f clusters/noble/bootstrap/kyverno/values.yaml --wait --timeout 15m
 helm upgrade --install kyverno-policies kyverno/kyverno-policies -n kyverno \
-  --version 3.7.1 -f clusters/noble/apps/kyverno/policies-values.yaml --wait --timeout 10m
+  --version 3.7.1 -f clusters/noble/bootstrap/kyverno/policies-values.yaml --wait --timeout 10m
 ```
 
 Verify:
@@ -1,12 +1,12 @@
 # kyverno/kyverno-policies — Pod Security Standards as Kyverno ClusterPolicies
 #
 # helm upgrade --install kyverno-policies kyverno/kyverno-policies -n kyverno \
-# --version 3.7.1 -f clusters/noble/apps/kyverno/policies-values.yaml --wait --timeout 10m
+# --version 3.7.1 -f clusters/noble/bootstrap/kyverno/policies-values.yaml --wait --timeout 10m
 #
 # Default profile is baseline; validationFailureAction is Audit so existing privileged
 # workloads are not blocked. Kyverno still emits PolicyReports for matches — Headlamp
 # surfaces those as “policy violations”. Exclude namespaces that intentionally run
-# outside baseline (see namespace PSA labels under clusters/noble/apps/*/namespace.yaml)
+# outside baseline (see namespace PSA labels under clusters/noble/bootstrap/*/namespace.yaml)
 # plus core Kubernetes namespaces and every Ansible-managed app namespace on noble.
 #
 # After widening excludes, Kyverno does not always prune old PolicyReport rows; refresh:
@@ -25,7 +25,7 @@ validationFailureAction: Audit
 failurePolicy: Fail
 validationAllowExistingViolations: true
 
-# All platform namespaces on noble (ansible/playbooks/noble.yml + clusters/noble/apps).
+# All platform namespaces on noble (ansible/playbooks/noble.yml + clusters/noble/bootstrap).
 x-kyverno-exclude-infra: &kyverno_exclude_infra
   any:
     - resources:
@@ -2,9 +2,9 @@
 #
 # helm repo add kyverno https://kyverno.github.io/kyverno/
 # helm repo update
-# kubectl apply -f clusters/noble/apps/kyverno/namespace.yaml
+# kubectl apply -f clusters/noble/bootstrap/kyverno/namespace.yaml
 # helm upgrade --install kyverno kyverno/kyverno -n kyverno \
-# --version 3.7.1 -f clusters/noble/apps/kyverno/values.yaml --wait --timeout 15m
+# --version 3.7.1 -f clusters/noble/bootstrap/kyverno/values.yaml --wait --timeout 15m
 #
 # Baseline Pod Security policies (separate chart): see policies-values.yaml + README.md
 #
@@ -2,11 +2,11 @@
 #
 # Chart: grafana/loki — pin version on install (e.g. 6.55.0).
 #
-# kubectl apply -f clusters/noble/apps/loki/namespace.yaml
+# kubectl apply -f clusters/noble/bootstrap/loki/namespace.yaml
 # helm repo add grafana https://grafana.github.io/helm-charts
 # helm repo update
 # helm upgrade --install loki grafana/loki -n loki \
-# --version 6.55.0 -f clusters/noble/apps/loki/values.yaml --wait --timeout 30m
+# --version 6.55.0 -f clusters/noble/bootstrap/loki/values.yaml --wait --timeout 30m
 #
 # Query/push URL for Grafana + Fluent Bit: http://loki-gateway.loki.svc.cluster.local:80
@@ -1,11 +1,11 @@
 # Longhorn Helm values — use with Talos user volume + kubelet mounts (see talos/talconfig.yaml).
-# 1) PSA: `kubectl apply -k clusters/noble/apps/longhorn` (privileged namespace) before or after Helm.
+# 1) PSA: `kubectl apply -k clusters/noble/bootstrap/longhorn` (privileged namespace) before or after Helm.
 # 2) Talos: bind `/var/lib/longhorn` → `/var/mnt/longhorn` in kubelet extraMounts — chart hostPath is fixed to /var/lib/longhorn.
 # Example (run from home-server repo root so -f path resolves):
-# kubectl apply -k clusters/noble/apps/longhorn
+# kubectl apply -k clusters/noble/bootstrap/longhorn
 # helm repo add longhorn https://charts.longhorn.io && helm repo update
 # helm upgrade --install longhorn longhorn/longhorn -n longhorn-system --create-namespace \
-# -f clusters/noble/apps/longhorn/values.yaml
+# -f clusters/noble/bootstrap/longhorn/values.yaml
 # "helm upgrade --install" needs two arguments: RELEASE_NAME and CHART (e.g. longhorn longhorn/longhorn).
 #
 # If you already installed Longhorn without this file: fix Default Settings in the UI or edit each
@@ -11,7 +11,7 @@ If `kubectl apply -k` fails with **`no matches for kind "IPAddressPool"`** / **`
 **Pod Security warnings** (`would violate PodSecurity "restricted"`): MetalLB’s speaker/FRR use `hostNetwork`, `NET_ADMIN`, etc. That is expected unless `metallb-system` is labeled **privileged**. Apply `namespace.yaml` **before** Helm so the namespace is created with the right labels (omit `--create-namespace` on Helm), or patch an existing namespace:
 
 ```bash
-kubectl apply -f clusters/noble/apps/metallb/namespace.yaml
+kubectl apply -f clusters/noble/bootstrap/metallb/namespace.yaml
 ```
 
 If you already ran Helm with `--create-namespace`, either `kubectl apply -f namespace.yaml` (merges labels) or:
@@ -38,15 +38,15 @@ Then restart MetalLB pods if they were failing (`kubectl get pods -n metallb-sys
 2. Apply this folder’s pool and L2 advertisement:
 
 ```bash
-kubectl apply -k clusters/noble/apps/metallb
+kubectl apply -k clusters/noble/bootstrap/metallb
 ```
 
-3. Confirm a `Service` `type: LoadBalancer` receives an address in `192.168.50.210`–`192.168.50.229` (e.g. **`kubectl get svc -n traefik traefik`** after installing **Traefik** in `clusters/noble/apps/traefik/`).
+3. Confirm a `Service` `type: LoadBalancer` receives an address in `192.168.50.210`–`192.168.50.229` (e.g. **`kubectl get svc -n traefik traefik`** after installing **Traefik** in `clusters/noble/bootstrap/traefik/`).
 
-Reserve **one** IP in that range for Argo CD (e.g. `192.168.50.210`) via `spec.loadBalancerIP` or chart values when you expose the server. Traefik pins **`192.168.50.211`** in **`clusters/noble/apps/traefik/values.yaml`**.
+Reserve **one** IP in that range for Argo CD (e.g. `192.168.50.210`) via `spec.loadBalancerIP` or chart values when you expose the server. Traefik pins **`192.168.50.211`** in **`clusters/noble/bootstrap/traefik/values.yaml`**.
 
 ## `Pending` MetalLB pods
 
 1. `kubectl get nodes` — every node **`Ready`**? If **`NotReady`** or **`NetworkUnavailable`**, finish **CNI** install first.
 2. `kubectl describe pod -n metallb-system <pod-name>` — read **Events** at the bottom (`0/N nodes are available: …`).
-3. L2 speaker uses the node’s uplink; kube-vip in this repo expects **`ens18`** on control planes (`clusters/noble/apps/kube-vip/vip-daemonset.yaml`). If your NIC name differs, change `vip_interface` there.
+3. L2 speaker uses the node’s uplink; kube-vip in this repo expects **`ens18`** on control planes (`clusters/noble/bootstrap/kube-vip/vip-daemonset.yaml`). If your NIC name differs, change `vip_interface` there.
@@ -1,5 +1,5 @@
 # Apply before Helm if you do not use --create-namespace, or use this to fix PSA after the fact:
-# kubectl apply -f clusters/noble/apps/metallb/namespace.yaml
+# kubectl apply -f clusters/noble/bootstrap/metallb/namespace.yaml
 # MetalLB speaker needs hostNetwork + NET_ADMIN; incompatible with Pod Security "restricted".
 apiVersion: v1
 kind: Namespace
@@ -4,7 +4,7 @@
 #
 # helm repo add metrics-server https://kubernetes-sigs.github.io/metrics-server/
 # helm upgrade --install metrics-server metrics-server/metrics-server -n kube-system \
-# --version 3.13.0 -f clusters/noble/apps/metrics-server/values.yaml --wait
+# --version 3.13.0 -f clusters/noble/bootstrap/metrics-server/values.yaml --wait
 
 args:
   - --kubelet-insecure-tls
@@ -10,14 +10,14 @@ Keys must match `values.yaml` (`PANGOLIN_ENDPOINT`, `NEWT_ID`, `NEWT_SECRET`).
 
 ### Option A — Sealed Secret (safe for GitOps)
 
-With the [Sealed Secrets](https://github.com/bitnami-labs/sealed-secrets) controller installed (`clusters/noble/apps/sealed-secrets/`), generate a `SealedSecret` from your workstation (rotate credentials in Pangolin first if they were exposed):
+With the [Sealed Secrets](https://github.com/bitnami-labs/sealed-secrets) controller installed (`clusters/noble/bootstrap/sealed-secrets/`), generate a `SealedSecret` from your workstation (rotate credentials in Pangolin first if they were exposed):
 
 ```bash
-chmod +x clusters/noble/apps/sealed-secrets/examples/kubeseal-newt-pangolin-auth.sh
+chmod +x clusters/noble/bootstrap/sealed-secrets/examples/kubeseal-newt-pangolin-auth.sh
 export PANGOLIN_ENDPOINT='https://pangolin.pcenicni.dev'
 export NEWT_ID='YOUR_NEWT_ID'
 export NEWT_SECRET='YOUR_NEWT_SECRET'
-./clusters/noble/apps/sealed-secrets/examples/kubeseal-newt-pangolin-auth.sh > newt-pangolin-auth.sealedsecret.yaml
+./clusters/noble/bootstrap/sealed-secrets/examples/kubeseal-newt-pangolin-auth.sh > newt-pangolin-auth.sealedsecret.yaml
 kubectl apply -f newt-pangolin-auth.sealedsecret.yaml
 ```
@@ -26,7 +26,7 @@ Commit only the `.sealedsecret.yaml` file, not plain `Secret` YAML.
 ### Option B — Imperative Secret (not in git)
 
 ```bash
-kubectl apply -f clusters/noble/apps/newt/namespace.yaml
+kubectl apply -f clusters/noble/bootstrap/newt/namespace.yaml
 
 kubectl -n newt create secret generic newt-pangolin-auth \
   --from-literal=PANGOLIN_ENDPOINT='https://pangolin.pcenicni.dev' \
@@ -44,7 +44,7 @@ helm repo update
 helm upgrade --install newt fossorial/newt \
   --namespace newt \
   --version 1.2.0 \
-  -f clusters/noble/apps/newt/values.yaml \
+  -f clusters/noble/bootstrap/newt/values.yaml \
   --wait
 ```
 
@@ -2,7 +2,7 @@
 #
 # Credentials MUST come from a Secret — do not put endpoint/id/secret in git.
 #
-# kubectl apply -f clusters/noble/apps/newt/namespace.yaml
+# kubectl apply -f clusters/noble/bootstrap/newt/namespace.yaml
 # kubectl -n newt create secret generic newt-pangolin-auth \
 # --from-literal=PANGOLIN_ENDPOINT='https://pangolin.example.com' \
 # --from-literal=NEWT_ID='...' \
@@ -10,7 +10,7 @@
 #
 # helm repo add fossorial https://charts.fossorial.io
 # helm upgrade --install newt fossorial/newt -n newt \
-# --version 1.2.0 -f clusters/noble/apps/newt/values.yaml --wait
+# --version 1.2.0 -f clusters/noble/bootstrap/newt/values.yaml --wait
 #
 # See README.md for Pangolin Integration API (domains + HTTP resources + CNAME).
@@ -10,9 +10,9 @@ Encrypts `Secret` manifests so they can live in git; the controller decrypts **S
 ```bash
 helm repo add sealed-secrets https://bitnami-labs.github.io/sealed-secrets
 helm repo update
-kubectl apply -f clusters/noble/apps/sealed-secrets/namespace.yaml
+kubectl apply -f clusters/noble/bootstrap/sealed-secrets/namespace.yaml
 helm upgrade --install sealed-secrets sealed-secrets/sealed-secrets -n sealed-secrets \
-  --version 2.18.4 -f clusters/noble/apps/sealed-secrets/values.yaml --wait
+  --version 2.18.4 -f clusters/noble/bootstrap/sealed-secrets/values.yaml --wait
 ```
 
 ## Workstation: `kubeseal`
@@ -2,15 +2,15 @@
 #
 # helm repo add sealed-secrets https://bitnami-labs.github.io/sealed-secrets
 # helm repo update
-# kubectl apply -f clusters/noble/apps/sealed-secrets/namespace.yaml
+# kubectl apply -f clusters/noble/bootstrap/sealed-secrets/namespace.yaml
 # helm upgrade --install sealed-secrets sealed-secrets/sealed-secrets -n sealed-secrets \
-# --version 2.18.4 -f clusters/noble/apps/sealed-secrets/values.yaml --wait
+# --version 2.18.4 -f clusters/noble/bootstrap/sealed-secrets/values.yaml --wait
 #
 # Client: install kubeseal (same minor as controller — see README).
 # Defaults are sufficient for the lab; override here if you need key renewal, resources, etc.
 #
 # GitOps pattern: create Secrets only via SealedSecret (or External Secrets + Vault).
-# Example (Newt): clusters/noble/apps/sealed-secrets/examples/kubeseal-newt-pangolin-auth.sh
+# Example (Newt): clusters/noble/bootstrap/sealed-secrets/examples/kubeseal-newt-pangolin-auth.sh
 # Backup the controller's sealing key: kubectl -n sealed-secrets get secret sealed-secrets-key -o yaml
 #
 # Talos cluster secrets (bootstrap token, cluster secret, certs) belong in talhelper talsecret /
@@ -5,7 +5,7 @@
 1. Create the namespace (Pod Security **baseline** — Traefik needs more than **restricted**):
 
 ```bash
-kubectl apply -f clusters/noble/apps/traefik/namespace.yaml
+kubectl apply -f clusters/noble/bootstrap/traefik/namespace.yaml
 ```
 
 2. Install the chart (**do not** use `--create-namespace` if the namespace already exists):
@@ -16,11 +16,11 @@
 helm upgrade --install traefik traefik/traefik \
   --namespace traefik \
   --version 39.0.6 \
-  -f clusters/noble/apps/traefik/values.yaml \
+  -f clusters/noble/bootstrap/traefik/values.yaml \
   --wait
 ```
 
-3. Confirm the Service has a pool address. On the **LAN**, **`*.apps.noble.lab.pcenicni.dev`** can resolve to this IP (split horizon / local DNS). **Public** names go through **Pangolin + Newt** (CNAME + API), not ExternalDNS — see **`clusters/noble/apps/newt/README.md`**.
+3. Confirm the Service has a pool address. On the **LAN**, **`*.apps.noble.lab.pcenicni.dev`** can resolve to this IP (split horizon / local DNS). **Public** names go through **Pangolin + Newt** (CNAME + API), not ExternalDNS — see **`clusters/noble/bootstrap/newt/README.md`**.
 
 ```bash
 kubectl get svc -n traefik traefik
@@ -28,6 +28,6 @@
 
 Values pin **`192.168.50.211`** via **`metallb.io/loadBalancerIPs`**. **`192.168.50.210`** stays free for Argo CD.
 
-4. Create **Ingress** resources with **`ingressClassName: traefik`** (or rely on the default class). **TLS:** add **`cert-manager.io/cluster-issuer: letsencrypt-staging`** (or **`letsencrypt-prod`**) and **`tls`** hosts — see **`clusters/noble/apps/cert-manager/README.md`**.
+4. Create **Ingress** resources with **`ingressClassName: traefik`** (or rely on the default class). **TLS:** add **`cert-manager.io/cluster-issuer: letsencrypt-staging`** (or **`letsencrypt-prod`**) and **`tls`** hosts — see **`clusters/noble/bootstrap/cert-manager/README.md`**.
 
-5. **Public DNS:** use **Newt** + Pangolin (**CNAME** at your DNS host + **Integration API** for resources/targets) — **`clusters/noble/apps/newt/README.md`**.
+5. **Public DNS:** use **Newt** + Pangolin (**CNAME** at your DNS host + **Integration API** for resources/targets) — **`clusters/noble/bootstrap/newt/README.md`**.
@@ -3,10 +3,10 @@
 # Chart: traefik/traefik — pin version on the helm command (e.g. 39.0.6).
 # DNS: point *.apps.noble.lab.pcenicni.dev to the LoadBalancer IP below.
 #
-# kubectl apply -f clusters/noble/apps/traefik/namespace.yaml
+# kubectl apply -f clusters/noble/bootstrap/traefik/namespace.yaml
 # helm repo add traefik https://traefik.github.io/charts
 # helm upgrade --install traefik traefik/traefik -n traefik \
-# --version 39.0.6 -f clusters/noble/apps/traefik/values.yaml --wait
+# --version 39.0.6 -f clusters/noble/bootstrap/traefik/values.yaml --wait
 
 service:
   type: LoadBalancer
@@ -10,9 +10,9 @@ Standalone Vault with **file** storage on a **Longhorn** PVC (`server.dataStorag
 ```bash
 helm repo add hashicorp https://helm.releases.hashicorp.com
 helm repo update
-kubectl apply -f clusters/noble/apps/vault/namespace.yaml
+kubectl apply -f clusters/noble/bootstrap/vault/namespace.yaml
 helm upgrade --install vault hashicorp/vault -n vault \
-  --version 0.32.0 -f clusters/noble/apps/vault/values.yaml --wait --timeout 15m
+  --version 0.32.0 -f clusters/noble/bootstrap/vault/values.yaml --wait --timeout 15m
 ```
 
 Verify:
@@ -27,7 +27,7 @@ kubectl -n vault exec -i sts/vault -- vault status
 After **Cilium** is up, optionally restrict HTTP access to the Vault server pods (**TCP 8200**) to **`external-secrets`** and same-namespace clients:
 
 ```bash
-kubectl apply -f clusters/noble/apps/vault/cilium-network-policy.yaml
+kubectl apply -f clusters/noble/bootstrap/vault/cilium-network-policy.yaml
 ```
 
 If you add workloads in other namespaces that call Vault, extend **`ingress`** in that manifest.
@@ -53,7 +53,7 @@ Or create the Secret used by the optional CronJob and apply it:
 
 ```bash
 kubectl -n vault create secret generic vault-unseal-key --from-literal=key='YOUR_UNSEAL_KEY'
-kubectl apply -f clusters/noble/apps/vault/unseal-cronjob.yaml
+kubectl apply -f clusters/noble/bootstrap/vault/unseal-cronjob.yaml
 ```
 
 The CronJob runs every minute and unseals if Vault is sealed and the Secret is present.
@@ -64,7 +64,7 @@ Vault **OSS** auto-unseal uses cloud KMS (AWS, GCP, Azure, OCI), **Transit** (an
 
 ## Kubernetes auth (External Secrets / ClusterSecretStore)
 
-**One-shot:** from the repo root, `export KUBECONFIG=talos/kubeconfig` and `export VAULT_TOKEN=…`, then run **`./clusters/noble/apps/vault/configure-kubernetes-auth.sh`** (idempotent). Then **`kubectl apply -f clusters/noble/apps/external-secrets/examples/vault-cluster-secret-store.yaml`** on its own line (shell comments **`# …`** on the same line are parsed as extra `kubectl` args and break `apply`). **`kubectl get clustersecretstore vault`** should show **READY=True** after a few seconds.
+**One-shot:** from the repo root, `export KUBECONFIG=talos/kubeconfig` and `export VAULT_TOKEN=…`, then run **`./clusters/noble/bootstrap/vault/configure-kubernetes-auth.sh`** (idempotent). Then **`kubectl apply -f clusters/noble/bootstrap/external-secrets/examples/vault-cluster-secret-store.yaml`** on its own line (shell comments **`# …`** on the same line are parsed as extra `kubectl` args and break `apply`). **`kubectl get clustersecretstore vault`** should show **READY=True** after a few seconds.
 
 Run these **from your workstation** (needs `kubectl`; no local `vault` binary required). Use a **short-lived admin token** or the root token **only in your shell** — do not paste tokens into logs or chat.
@@ -139,7 +139,7 @@ EOF
 '
 ```

-**5. Apply** **`clusters/noble/apps/external-secrets/examples/vault-cluster-secret-store.yaml`** if you have not already, then verify:
+**5. Apply** **`clusters/noble/bootstrap/external-secrets/examples/vault-cluster-secret-store.yaml`** if you have not already, then verify:

 ```bash
 kubectl describe clustersecretstore vault
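A `ClusterSecretStore` of the kind this example file provides typically pairs Vault's Kubernetes auth mount with a KV mount. The sketch below is assumed, not the committed example: the server URL, KV path, role name, service account, and even the API version (`v1beta1` vs `v1`, depending on the External Secrets Operator release) all vary; consult `clusters/noble/bootstrap/external-secrets/examples/vault-cluster-secret-store.yaml` for the real values.

```yaml
# Hypothetical sketch; every concrete value here is an assumption.
apiVersion: external-secrets.io/v1beta1    # may be v1 on newer ESO releases
kind: ClusterSecretStore
metadata:
  name: vault
spec:
  provider:
    vault:
      server: http://vault.vault.svc:8200  # assumed in-cluster service address
      path: secret                         # assumed KV mount
      version: v2
      auth:
        kubernetes:
          mountPath: kubernetes
          role: external-secrets           # assumed role name created by configure-kubernetes-auth.sh
          serviceAccountRef:
            name: external-secrets
            namespace: external-secrets
```

Once applied, `kubectl get clustersecretstore vault` reporting **READY=True** confirms the auth round-trip to Vault works.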
@@ -1,5 +1,5 @@
 # CiliumNetworkPolicy — restrict who may reach Vault HTTP listener (8200).
-# Apply after Cilium is healthy: kubectl apply -f clusters/noble/apps/vault/cilium-network-policy.yaml
+# Apply after Cilium is healthy: kubectl apply -f clusters/noble/bootstrap/vault/cilium-network-policy.yaml
 #
 # Ingress-only policy: egress from Vault is unchanged (Kubernetes auth needs API + DNS).
 # Extend ingress rules if other namespaces must call Vault (e.g. app workloads).
@@ -5,9 +5,9 @@
 # Usage (from repo root):
 #   export KUBECONFIG=talos/kubeconfig   # or your path
 #   export VAULT_TOKEN='…'               # root or admin token — never commit
-#   ./clusters/noble/apps/vault/configure-kubernetes-auth.sh
+#   ./clusters/noble/bootstrap/vault/configure-kubernetes-auth.sh
 #
-# Then:   kubectl apply -f clusters/noble/apps/external-secrets/examples/vault-cluster-secret-store.yaml
+# Then:   kubectl apply -f clusters/noble/bootstrap/external-secrets/examples/vault-cluster-secret-store.yaml
 # Verify: kubectl describe clustersecretstore vault

 set -euo pipefail
@@ -73,5 +73,5 @@ EOF
 echo "Done. Issuer used: $ISSUER"
 echo ""
 echo "Next (each command on its own line — do not paste # comments after kubectl):"
-echo "  kubectl apply -f clusters/noble/apps/external-secrets/examples/vault-cluster-secret-store.yaml"
+echo "  kubectl apply -f clusters/noble/bootstrap/external-secrets/examples/vault-cluster-secret-store.yaml"
 echo "  kubectl get clustersecretstore vault"
@@ -2,7 +2,7 @@
 #
 # 1) vault operator init -key-shares=1 -key-threshold=1   (lab only — single key)
 # 2) kubectl -n vault create secret generic vault-unseal-key --from-literal=key='YOUR_UNSEAL_KEY'
-# 3) kubectl apply -f clusters/noble/apps/vault/unseal-cronjob.yaml
+# 3) kubectl apply -f clusters/noble/bootstrap/vault/unseal-cronjob.yaml
 #
 # OSS Vault has no Kubernetes/KMS seal; this CronJob runs vault operator unseal when the server is sealed.
 # Protect the Secret with RBAC; prefer cloud KMS auto-unseal for real environments.
@@ -2,9 +2,9 @@
 #
 # helm repo add hashicorp https://helm.releases.hashicorp.com
 # helm repo update
-# kubectl apply -f clusters/noble/apps/vault/namespace.yaml
+# kubectl apply -f clusters/noble/bootstrap/vault/namespace.yaml
 # helm upgrade --install vault hashicorp/vault -n vault \
-#   --version 0.32.0 -f clusters/noble/apps/vault/values.yaml --wait --timeout 15m
+#   --version 0.32.0 -f clusters/noble/bootstrap/vault/values.yaml --wait --timeout 15m
 #
 # Post-install: initialize, store unseal key in Secret, apply optional unseal CronJob — see README.md
 #