Update CLUSTER-BUILD.md to reflect the current state of the Talos cluster, detailing progress through Phase D (observability) and advancements in Phase E (secrets). Include updates on Sealed Secrets, External Secrets Operator, and Vault configurations, along with deployment instructions and next steps for Kubernetes auth and ClusterSecretStore integration. Mark relevant tasks as completed and outline remaining objectives for future phases.

clusters/noble/apps/external-secrets/README.md (new file, 60 lines)
@@ -0,0 +1,60 @@
# External Secrets Operator (noble)

Syncs secrets from external systems into Kubernetes **Secret** objects via **ExternalSecret** / **ClusterExternalSecret** CRDs.

- **Chart:** `external-secrets/external-secrets` **2.2.0** (app **v2.2.0**)
- **Namespace:** `external-secrets`
- **Helm release name:** `external-secrets` (matches the operator **ServiceAccount** name `external-secrets`)

## Install

```bash
helm repo add external-secrets https://charts.external-secrets.io
helm repo update
kubectl apply -f clusters/noble/apps/external-secrets/namespace.yaml
helm upgrade --install external-secrets external-secrets/external-secrets -n external-secrets \
  --version 2.2.0 -f clusters/noble/apps/external-secrets/values.yaml --wait
```

Verify:

```bash
kubectl -n external-secrets get deploy,pods
kubectl get crd | grep external-secrets
```

## Vault `ClusterSecretStore` (after Vault is deployed)

The checklist expects a **Vault**-backed store. Install Vault first (`talos/CLUSTER-BUILD.md` Phase E — Vault on Longhorn + auto-unseal), then:

1. Enable the **KV v2** secrets engine and **Kubernetes** auth in Vault; create a **role** (e.g. `external-secrets`) that maps the cluster’s **`external-secrets` / `external-secrets`** service account (namespace / name) to a policy that can read the paths you need.
2. Copy **`examples/vault-cluster-secret-store.yaml`** and set **`spec.provider.vault.server`** to your Vault URL. This repo’s Vault Helm values use **HTTP** on port **8200** (`global.tlsDisable: true`): **`http://vault.vault.svc.cluster.local:8200`**. Use **`https://`** if you enable TLS on the Vault listener.
3. If Vault uses a **private TLS CA**, configure **`caProvider`** or **`caBundle`** on the Vault provider — see [HashiCorp Vault provider](https://external-secrets.io/latest/provider/hashicorp-vault/). Do not commit private CA material to public git unless intended.
4. Apply: **`kubectl apply -f …/vault-cluster-secret-store.yaml`**
5. Confirm the store is ready: **`kubectl describe clustersecretstore vault`**

Example **ExternalSecret** (after the store is healthy):

```yaml
apiVersion: external-secrets.io/v1
kind: ExternalSecret
metadata:
  name: demo
  namespace: default
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: vault
    kind: ClusterSecretStore
  target:
    name: demo-synced
  data:
    - secretKey: password
      remoteRef:
        key: secret/data/myapp
        property: password
```
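
The intro also mentions **ClusterExternalSecret**, which fans one ExternalSecret out across many namespaces. A minimal sketch mirroring the example above; field names follow recent ESO releases (verify with `kubectl explain clusterexternalsecret.spec`), and the namespace label is illustrative:

```yaml
apiVersion: external-secrets.io/v1
kind: ClusterExternalSecret
metadata:
  name: demo-everywhere
spec:
  # Target every namespace carrying this (illustrative) label.
  namespaceSelectors:
    - matchLabels:
        demo-secrets: "enabled"
  refreshTime: 1h
  # The embedded spec has the same shape as a standalone ExternalSecret.
  externalSecretSpec:
    refreshInterval: 1h
    secretStoreRef:
      name: vault
      kind: ClusterSecretStore
    target:
      name: demo-synced
    data:
      - secretKey: password
        remoteRef:
          key: secret/data/myapp
          property: password
```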

## Upgrades

Pin the chart version in `values.yaml` header comments; run the same **`helm upgrade --install`** with the new **`--version`** after reviewing [release notes](https://github.com/external-secrets/external-secrets/releases).

clusters/noble/apps/external-secrets/examples/vault-cluster-secret-store.yaml (new file, 31 lines)
@@ -0,0 +1,31 @@
# ClusterSecretStore for HashiCorp Vault (KV v2) using Kubernetes auth.
#
# Do not apply until Vault is running, reachable from the cluster, and configured with:
#   - Kubernetes auth at mountPath (default: kubernetes)
#   - A role (below: external-secrets) bound to this service account:
#       name: external-secrets
#       namespace: external-secrets
#   - A policy allowing read on the KV path used below (e.g. secret/data/* for path "secret")
#
# Adjust server, mountPath, role, and path to match your Vault deployment. If Vault uses TLS
# with a private CA, set provider.vault.caProvider or caBundle (see README).
#
# kubectl apply -f clusters/noble/apps/external-secrets/examples/vault-cluster-secret-store.yaml
---
apiVersion: external-secrets.io/v1
kind: ClusterSecretStore
metadata:
  name: vault
spec:
  provider:
    vault:
      server: "http://vault.vault.svc.cluster.local:8200"
      path: secret
      version: v2
      auth:
        kubernetes:
          mountPath: kubernetes
          role: external-secrets
          serviceAccountRef:
            name: external-secrets
            namespace: external-secrets

clusters/noble/apps/external-secrets/namespace.yaml (new file, 5 lines)
@@ -0,0 +1,5 @@
# External Secrets Operator — apply before Helm.
apiVersion: v1
kind: Namespace
metadata:
  name: external-secrets

clusters/noble/apps/external-secrets/values.yaml (new file, 10 lines)
@@ -0,0 +1,10 @@
# External Secrets Operator — noble
#
# helm repo add external-secrets https://charts.external-secrets.io
# helm repo update
# kubectl apply -f clusters/noble/apps/external-secrets/namespace.yaml
# helm upgrade --install external-secrets external-secrets/external-secrets -n external-secrets \
#   --version 2.2.0 -f clusters/noble/apps/external-secrets/values.yaml --wait
#
# CRDs are installed by the chart (installCRDs: true). Vault ClusterSecretStore: see README + examples/.
commonLabels: {}

clusters/noble/apps/sealed-secrets/README.md (new file, 48 lines)
@@ -0,0 +1,48 @@
# Sealed Secrets (noble)

Encrypts `Secret` manifests so they can live in git; the controller decrypts **SealedSecret** resources into **Secret**s in-cluster.

- **Chart:** `sealed-secrets/sealed-secrets` **2.18.4** (app **0.36.1**)
- **Namespace:** `sealed-secrets`

## Install

```bash
helm repo add sealed-secrets https://bitnami-labs.github.io/sealed-secrets
helm repo update
kubectl apply -f clusters/noble/apps/sealed-secrets/namespace.yaml
helm upgrade --install sealed-secrets sealed-secrets/sealed-secrets -n sealed-secrets \
  --version 2.18.4 -f clusters/noble/apps/sealed-secrets/values.yaml --wait
```

## Workstation: `kubeseal`

Install a **kubeseal** build compatible with the controller (match the **app** minor, e.g. **0.36.x** for **0.36.1**). Examples:

- **Homebrew:** `brew install kubeseal` (check `kubeseal --version` against the chart’s `image.tag` in `helm show values`).
- **GitHub releases:** [bitnami-labs/sealed-secrets](https://github.com/bitnami-labs/sealed-secrets/releases)
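
The minor-version match can be checked mechanically in plain POSIX shell; the version strings below are illustrative placeholders for `kubeseal --version` output and the controller's `image.tag`:

```shell
#!/bin/sh
# Illustrative versions — substitute the real `kubeseal --version` output
# and the chart's image.tag before relying on the result.
client="0.36.2"
controller="0.36.1"

# Strip the patch component so only major.minor is compared.
client_mm=${client%.*}
controller_mm=${controller%.*}

if [ "$client_mm" = "$controller_mm" ]; then
  echo "compatible: ${client_mm}.x"
else
  echo "mismatch: client $client_mm vs controller $controller_mm"
fi
```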

Fetch the cluster’s public seal cert (once per kube context):

```bash
kubeseal --fetch-cert > /tmp/noble-sealed-secrets.pem
```

Create a sealed secret from a normal secret manifest:

```bash
kubectl create secret generic example --from-literal=foo=bar --dry-run=client -o yaml \
  | kubeseal --cert /tmp/noble-sealed-secrets.pem -o yaml > example-sealedsecret.yaml
```
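
The generated file looks roughly like this (shape only: the ciphertext below is a truncated illustration, and real values are unique per cluster key):

```yaml
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: example
  namespace: default
spec:
  encryptedData:
    # Asymmetrically encrypted; safe to commit. Truncated illustration.
    foo: AgBy8mZ1...
  template:
    metadata:
      name: example
      namespace: default
```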

Commit `example-sealedsecret.yaml`; apply it with `kubectl apply -f`. The controller creates the **Secret** in the same namespace as the **SealedSecret**.

## Backup the sealing key

If the controller’s private key is lost, existing sealed files cannot be decrypted on a new cluster. Back up the key secret after install:

```bash
kubectl get secret -n sealed-secrets -l sealedsecrets.bitnami.com/sealed-secrets-key=active -o yaml > sealed-secrets-key-backup.yaml
```

Store `sealed-secrets-key-backup.yaml` in a safe offline location (not in public git).

clusters/noble/apps/sealed-secrets/namespace.yaml (new file, 5 lines)
@@ -0,0 +1,5 @@
# Sealed Secrets controller — apply before Helm.
apiVersion: v1
kind: Namespace
metadata:
  name: sealed-secrets

clusters/noble/apps/sealed-secrets/values.yaml (new file, 11 lines)
@@ -0,0 +1,11 @@
# Sealed Secrets — noble (Git-encrypted Secret workflow)
#
# helm repo add sealed-secrets https://bitnami-labs.github.io/sealed-secrets
# helm repo update
# kubectl apply -f clusters/noble/apps/sealed-secrets/namespace.yaml
# helm upgrade --install sealed-secrets sealed-secrets/sealed-secrets -n sealed-secrets \
#   --version 2.18.4 -f clusters/noble/apps/sealed-secrets/values.yaml --wait
#
# Client: install kubeseal (same minor as controller — see README).
# Defaults are sufficient for the lab; override here if you need key renewal, resources, etc.
commonLabels: {}

clusters/noble/apps/vault/README.md (new file, 150 lines)
@@ -0,0 +1,150 @@
# HashiCorp Vault (noble)

Standalone Vault with **file** storage on a **Longhorn** PVC (`server.dataStorage`). The listener uses **HTTP** (`global.tlsDisable: true`) for in-cluster use; add TLS at the listener when exposing outside the cluster.

- **Chart:** `hashicorp/vault` **0.32.0** (Vault **1.21.2**)
- **Namespace:** `vault`

## Install

```bash
helm repo add hashicorp https://helm.releases.hashicorp.com
helm repo update
kubectl apply -f clusters/noble/apps/vault/namespace.yaml
helm upgrade --install vault hashicorp/vault -n vault \
  --version 0.32.0 -f clusters/noble/apps/vault/values.yaml --wait --timeout 15m
```

Verify:

```bash
kubectl -n vault get pods,pvc,svc
kubectl -n vault exec -i sts/vault -- vault status
```

## Initialize and unseal (first time)

From a workstation with `kubectl` (or `kubectl exec` into any pod with the `vault` CLI):

```bash
kubectl -n vault exec -i sts/vault -- vault operator init -key-shares=1 -key-threshold=1
```

**Lab-only:** `-key-shares=1 -key-threshold=1` keeps a single unseal key. For a stronger Shamir split, use more shares and a higher threshold (e.g. `-key-shares=5 -key-threshold=3`) and store the shares separately.

Save the **Unseal Key** and **Root Token** offline. Then unseal once:

```bash
kubectl -n vault exec -i sts/vault -- vault operator unseal
# paste unseal key
```

Or create the Secret used by the optional CronJob and apply it:

```bash
kubectl -n vault create secret generic vault-unseal-key --from-literal=key='YOUR_UNSEAL_KEY'
kubectl apply -f clusters/noble/apps/vault/unseal-cronjob.yaml
```

The CronJob runs every minute and unseals Vault if it is sealed and the Secret is present.
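
The CronJob's gate (only unseal when initialized and sealed) can be exercised locally with a canned `vault status -format=json` payload, with no cluster needed. Note that the real output is pretty-printed JSON, so the grep patterns should tolerate whitespace after the colon:

```shell
#!/bin/sh
# Canned payload mimicking `vault status -format=json` on an initialized,
# sealed server (real output is pretty-printed, hence the spaces).
status='{ "initialized": true, "sealed": true, "version": "1.21.2" }'

# Same gating as the CronJob: skip if uninitialized, stop if already unsealed.
if ! echo "$status" | grep -q '"initialized": *true'; then
  action="skip-uninitialized"
elif echo "$status" | grep -q '"sealed": *false'; then
  action="skip-unsealed"
else
  action="unseal"
fi
echo "$action"
```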

## Auto-unseal note

Vault **OSS** auto-unseal uses cloud KMS (AWS, GCP, Azure, OCI), **Transit** (another Vault), etc. There is no first-class “Kubernetes Secret” seal. This repo uses an optional **CronJob** as a **lab** substitute. Production clusters should use a supported seal backend.

## Kubernetes auth (External Secrets / ClusterSecretStore)

Run these **from your workstation** (needs `kubectl`; no local `vault` binary required). Use a **short-lived admin token** or the root token **only in your shell** — do not paste tokens into logs or chat.

**1. Enable the auth method** (skip if already done):

```bash
kubectl -n vault exec -it sts/vault -- sh -c '
export VAULT_ADDR=http://127.0.0.1:8200
export VAULT_TOKEN="YOUR_ROOT_OR_ADMIN_TOKEN"
vault auth enable kubernetes
'
```

**2. Configure `auth/kubernetes`** — the API **issuer** must match the `iss` claim on service account JWTs. With **kube-vip** / a custom API URL, discover it from the cluster (do not assume `kubernetes.default`):

```bash
ISSUER=$(kubectl get --raw /.well-known/openid-configuration | jq -r .issuer)
REVIEWER=$(kubectl -n vault create token vault --duration=8760h)
CA_B64=$(kubectl config view --raw --minify -o jsonpath='{.clusters[0].cluster.certificate-authority-data}')
```
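
If `jq` is not available on the workstation, the issuer can be extracted with `sed` instead. A quick offline check of that extraction against a canned discovery document (the issuer value here is illustrative; your cluster's will differ):

```shell
#!/bin/sh
# Canned /.well-known/openid-configuration body (shape only).
doc='{"issuer":"https://kubernetes.default.svc.cluster.local","jwks_uri":"https://example/jwks"}'

# jq-free fallback for: kubectl get --raw /.well-known/openid-configuration | jq -r .issuer
ISSUER=$(printf '%s' "$doc" | sed -n 's/.*"issuer":"\([^"]*\)".*/\1/p')
echo "$ISSUER"
```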

Then apply the config **inside** the Vault pod (environment variables are passed in with `env` so quoting stays correct):

```bash
export VAULT_TOKEN="YOUR_ROOT_OR_ADMIN_TOKEN"
export ISSUER REVIEWER CA_B64
kubectl -n vault exec -i sts/vault -- env \
  VAULT_ADDR=http://127.0.0.1:8200 \
  VAULT_TOKEN="$VAULT_TOKEN" \
  CA_B64="$CA_B64" \
  REVIEWER="$REVIEWER" \
  ISSUER="$ISSUER" \
  sh -ec '
    echo "$CA_B64" | base64 -d > /tmp/k8s-ca.crt
    vault write auth/kubernetes/config \
      kubernetes_host="https://kubernetes.default.svc:443" \
      kubernetes_ca_cert=@/tmp/k8s-ca.crt \
      token_reviewer_jwt="$REVIEWER" \
      issuer="$ISSUER"
  '
```

**3. Enable KV v2** at path `secret` (skip if already enabled):

```bash
kubectl -n vault exec -it sts/vault -- sh -c '
export VAULT_ADDR=http://127.0.0.1:8200
export VAULT_TOKEN="YOUR_ROOT_OR_ADMIN_TOKEN"
vault secrets enable -path=secret kv-v2
'
```

**4. Policy + role** for the External Secrets operator SA (`external-secrets` / `external-secrets`):

```bash
kubectl -n vault exec -it sts/vault -- sh -c '
export VAULT_ADDR=http://127.0.0.1:8200
export VAULT_TOKEN="YOUR_ROOT_OR_ADMIN_TOKEN"
vault policy write external-secrets - <<EOF
path "secret/data/*" {
  capabilities = ["read", "list"]
}
path "secret/metadata/*" {
  capabilities = ["read", "list"]
}
EOF
vault write auth/kubernetes/role/external-secrets \
  bound_service_account_names=external-secrets \
  bound_service_account_namespaces=external-secrets \
  policies=external-secrets \
  ttl=24h
'
```

**5. Apply** **`clusters/noble/apps/external-secrets/examples/vault-cluster-secret-store.yaml`** if you have not already, then verify:

```bash
kubectl describe clustersecretstore vault
```

See also [Kubernetes auth](https://developer.hashicorp.com/vault/docs/auth/kubernetes#configuration).

## TLS and External Secrets

`values.yaml` disables TLS on the Vault listener. The **`ClusterSecretStore`** example uses **`http://vault.vault.svc.cluster.local:8200`**. If you enable TLS on the listener, switch the URL to **`https://`** and configure **`caBundle`** or **`caProvider`** on the store.
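
For reference, enabling TLS on the listener means replacing the `tls_disable = 1` stanza in the standalone config with something like the following. The mount path and certificate Secret are assumptions: you would mount a cert Secret via `server.volumes` / `server.volumeMounts` and set `global.tlsDisable: false`. A sketch, not a tested config:

```hcl
# Hypothetical TLS listener for the standalone config block.
listener "tcp" {
  address         = "[::]:8200"
  cluster_address = "[::]:8201"
  tls_cert_file   = "/vault/userconfig/vault-tls/tls.crt"
  tls_key_file    = "/vault/userconfig/vault-tls/tls.key"
}
```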

## UI

Port-forward:

```bash
kubectl -n vault port-forward svc/vault-ui 8200:8200
```

Open `http://127.0.0.1:8200` and log in with the root token (rotate for production workflows).

clusters/noble/apps/vault/namespace.yaml (new file, 5 lines)
@@ -0,0 +1,5 @@
# HashiCorp Vault — apply before Helm.
apiVersion: v1
kind: Namespace
metadata:
  name: vault

clusters/noble/apps/vault/unseal-cronjob.yaml (new file, 63 lines)
@@ -0,0 +1,63 @@
# Optional lab auto-unseal: apply after Vault is initialized and Secret `vault-unseal-key` exists.
#
# 1) vault operator init -key-shares=1 -key-threshold=1   (lab only — single key)
# 2) kubectl -n vault create secret generic vault-unseal-key --from-literal=key='YOUR_UNSEAL_KEY'
# 3) kubectl apply -f clusters/noble/apps/vault/unseal-cronjob.yaml
#
# OSS Vault has no Kubernetes/KMS seal; this CronJob runs `vault operator unseal` when the server is sealed.
# Protect the Secret with RBAC; prefer cloud KMS auto-unseal for real environments.
---
apiVersion: batch/v1
kind: CronJob
metadata:
  name: vault-auto-unseal
  namespace: vault
spec:
  concurrencyPolicy: Forbid
  successfulJobsHistoryLimit: 1
  failedJobsHistoryLimit: 3
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          securityContext:
            runAsNonRoot: true
            runAsUser: 100
            runAsGroup: 1000
            seccompProfile:
              type: RuntimeDefault
          containers:
            - name: unseal
              image: hashicorp/vault:1.21.2
              imagePullPolicy: IfNotPresent
              securityContext:
                allowPrivilegeEscalation: false
                capabilities:
                  drop:
                    - ALL
              env:
                - name: VAULT_ADDR
                  value: http://vault.vault.svc:8200
              command:
                - /bin/sh
                - -ec
                - |
                  test -f /secrets/key || exit 0
                  status="$(vault status -format=json 2>/dev/null || true)"
                  # `vault status -format=json` pretty-prints, so allow whitespace after the colon.
                  echo "$status" | grep -q '"initialized": *true' || exit 0
                  echo "$status" | grep -q '"sealed": *false' && exit 0
                  vault operator unseal "$(cat /secrets/key)"
              volumeMounts:
                - name: unseal
                  mountPath: /secrets
                  readOnly: true
          volumes:
            - name: unseal
              secret:
                secretName: vault-unseal-key
                optional: true
                items:
                  - key: key
                    path: key

clusters/noble/apps/vault/values.yaml (new file, 48 lines)
@@ -0,0 +1,48 @@
# HashiCorp Vault — noble (standalone, file storage on Longhorn; TLS disabled on listener for in-cluster HTTP).
#
# helm repo add hashicorp https://helm.releases.hashicorp.com
# helm repo update
# kubectl apply -f clusters/noble/apps/vault/namespace.yaml
# helm upgrade --install vault hashicorp/vault -n vault \
#   --version 0.32.0 -f clusters/noble/apps/vault/values.yaml --wait --timeout 15m
#
# Post-install: initialize, store unseal key in Secret, apply optional unseal CronJob — see README.md
#
global:
  tlsDisable: true

injector:
  enabled: true

server:
  enabled: true
  dataStorage:
    enabled: true
    size: 10Gi
    storageClass: longhorn
    accessMode: ReadWriteOnce
  ha:
    enabled: false
  standalone:
    enabled: true
    config: |
      ui = true

      listener "tcp" {
        tls_disable = 1
        address = "[::]:8200"
        cluster_address = "[::]:8201"
      }

      storage "file" {
        path = "/vault/data"
      }

  # Allow pod Ready before init/unseal so Helm --wait succeeds (see Vault /v1/sys/health docs).
  readinessProbe:
    enabled: true
    path: "/v1/sys/health?uninitcode=204&sealedcode=204&standbyok=true"
    port: 8200

ui:
  enabled: true

talos/CLUSTER-BUILD.md
@@ -4,7 +4,7 @@ This document is the **exported TODO** for the **noble** Talos cluster (4 nodes)
 
 ## Current state (2026-03-28)
 
-Lab stack is **up** on-cluster for bootstrap through **Phase D** (observability), with manifests matching this repo. **Next focus:** Phase E (secrets), optional Pangolin/sample Ingress validation, Velero when S3 exists.
+Lab stack is **up** on-cluster for bootstrap through **Phase D** (observability) and **Phase E** (Sealed Secrets, External Secrets, **Vault** Helm install), with manifests matching this repo. **Next focus:** **Vault** `operator init` / unseal, optional **`unseal-cronjob.yaml`**, Kubernetes auth + **`ClusterSecretStore`**, optional Pangolin/sample Ingress validation, Velero when S3 exists.
 
 - **Talos** v1.12.6 (target) / **Kubernetes** as bundled — four nodes **Ready** unless upgrading; **`talosctl health`**; **`talos/kubeconfig`** for `kubectl` (root `kubeconfig` may still be a stub). **Image Factory (nocloud installer):** `factory.talos.dev/nocloud-installer/249d9135de54962744e917cfe654117000cba369f9152fbab9d055a00aa3664f:v1.12.6`
 - **Cilium** Helm **1.16.6** / app **1.16.6** (`clusters/noble/apps/cilium/`, phase 1 values).
@@ -18,7 +18,10 @@ Lab stack is **up** on-cluster for bootstrap through **Phase D** (observability)
 - **Argo CD** Helm **9.4.17** / app **v3.3.6** — `clusters/noble/bootstrap/argocd/`; **`argocd-server`** **`LoadBalancer`** **`192.168.50.210`**; app-of-apps scaffold under **`bootstrap/argocd/apps/`** (edit **`root-application.yaml`** `repoURL` before applying).
 - **kube-prometheus-stack** — Helm chart **82.15.1** — `clusters/noble/apps/kube-prometheus-stack/` (**namespace** `monitoring`, PSA **privileged** — **node-exporter** needs host mounts); **Longhorn** PVCs for Prometheus, Grafana, Alertmanager; **node-exporter** DaemonSet **4/4**. **Grafana Ingress:** **`https://grafana.apps.noble.lab.pcenicni.dev`** (Traefik **`ingressClassName: traefik`**, **`cert-manager.io/cluster-issuer: letsencrypt-prod`**). **Loki** datasource in Grafana: ConfigMap **`clusters/noble/apps/grafana-loki-datasource/loki-datasource.yaml`** (sidecar label **`grafana_datasource: "1"`**) — not via **`grafana.additionalDataSources`** in the chart. **`helm upgrade --install` with `--wait` is silent until done** — use **`--timeout 30m`**; Grafana admin: Secret **`kube-prometheus-grafana`**, keys **`admin-user`** / **`admin-password`**.
 - **Loki** + **Fluent Bit** — **`grafana/loki` 6.55.0** SingleBinary + **filesystem** on **Longhorn** (`clusters/noble/apps/loki/`); **`loki.auth_enabled: false`**; **`chunksCache.enabled: false`** (no memcached chunk cache). **`fluent/fluent-bit` 0.56.0** → **`loki-gateway.loki.svc:80`** (`clusters/noble/apps/fluent-bit/`); **`logging`** PSA **privileged**. **Grafana Explore:** **`kubectl apply -f clusters/noble/apps/grafana-loki-datasource/loki-datasource.yaml`** then **Explore → Loki** (e.g. `{job="fluent-bit"}`).
-- **Still open:** **Phase E** (Sealed Secrets / Vault / ESO); **Phase F–G**; optional **sample Ingress + cert + Pangolin** end-to-end; **Velero** when S3 is ready; **Argo CD SSO**.
+- **Sealed Secrets** Helm **2.18.4** / app **0.36.1** — `clusters/noble/apps/sealed-secrets/` (namespace **`sealed-secrets`**); **`kubeseal`** on client should match controller minor (**README**); back up **`sealed-secrets-key`** (see README).
+- **External Secrets Operator** Helm **2.2.0** / app **v2.2.0** — `clusters/noble/apps/external-secrets/`; Vault **`ClusterSecretStore`** in **`examples/vault-cluster-secret-store.yaml`** (**`http://`** to match Vault listener — apply after Vault **Kubernetes auth**).
+- **Vault** Helm **0.32.0** / app **1.21.2** — `clusters/noble/apps/vault/` — standalone **file** storage, **Longhorn** PVC; **HTTP** listener (`global.tlsDisable`); optional **CronJob** lab unseal **`unseal-cronjob.yaml`**; **not** initialized in git — run **`vault operator init`** per **`README.md`**.
+- **Still open:** Vault **Kubernetes auth** + **`ClusterSecretStore`** apply + KV for ESO; **Phase F–G**; optional **sample Ingress + cert + Pangolin** end-to-end; **Velero** when S3 is ready; **Argo CD SSO**.
 
 ## Inventory
 
@@ -58,6 +61,9 @@ Lab stack is **up** on-cluster for bootstrap through **Phase D** (observability)
 - kube-prometheus-stack: **82.15.1** (Helm chart `prometheus-community/kube-prometheus-stack`; app **v0.89.x** bundle)
 - Loki: **6.55.0** (Helm chart `grafana/loki`; app **3.6.7**)
 - Fluent Bit: **0.56.0** (Helm chart `fluent/fluent-bit`; app **4.2.3**)
+- Sealed Secrets: **2.18.4** (Helm chart `sealed-secrets/sealed-secrets`; app **0.36.1**)
+- External Secrets Operator: **2.2.0** (Helm chart `external-secrets/external-secrets`; app **v2.2.0**)
+- Vault: **0.32.0** (Helm chart `hashicorp/vault`; app **1.21.2**)
 
 ## Repo paths (this workspace)
 
@@ -81,6 +87,9 @@ Lab stack is **up** on-cluster for bootstrap through **Phase D** (observability)
 | Grafana Loki datasource (ConfigMap; no chart change) | `clusters/noble/apps/grafana-loki-datasource/loki-datasource.yaml` |
 | Loki (Helm values) | `clusters/noble/apps/loki/` — `values.yaml`, `namespace.yaml` |
 | Fluent Bit → Loki (Helm values) | `clusters/noble/apps/fluent-bit/` — `values.yaml`, `namespace.yaml` |
+| Sealed Secrets (Helm) | `clusters/noble/apps/sealed-secrets/` — `values.yaml`, `namespace.yaml`, `README.md` |
+| External Secrets Operator (Helm + Vault store example) | `clusters/noble/apps/external-secrets/` — `values.yaml`, `namespace.yaml`, `README.md`, `examples/vault-cluster-secret-store.yaml` |
+| Vault (Helm + optional unseal CronJob) | `clusters/noble/apps/vault/` — `values.yaml`, `namespace.yaml`, `unseal-cronjob.yaml`, `README.md` |
 
 **Git vs cluster:** manifests and `talconfig` live in git; **`talhelper genconfig -o out`**, bootstrap, Helm, and `kubectl` run on your LAN. See **`talos/README.md`** for workstation reachability (lab LAN/VPN), **`talosctl kubeconfig`** vs Kubernetes `server:` (VIP vs node IP), and **`--insecure`** only in maintenance.
 
@@ -91,6 +100,7 @@ Lab stack is **up** on-cluster for bootstrap through **Phase D** (observability)
 3. **`clusters/noble/apps/metallb/namespace.yaml`** before or merged onto `metallb-system` so Pod Security does not block speaker (see `apps/metallb/README.md`).
 4. **Longhorn:** Talos user volume + extensions in `talconfig.with-longhorn.yaml` (when restored); Helm **`defaultDataPath`** in `clusters/noble/apps/longhorn/values.yaml`.
 5. **Loki → Fluent Bit → Grafana datasource:** deploy **Loki** (`loki-gateway` Service) before **Fluent Bit**; apply **`clusters/noble/apps/grafana-loki-datasource/loki-datasource.yaml`** after **Loki** (sidecar picks up the ConfigMap — no kube-prometheus values change for Loki).
+6. **Vault:** **Longhorn** default **StorageClass** before **`clusters/noble/apps/vault/`** Helm (PVC **`data-vault-0`**); **External Secrets** **`ClusterSecretStore`** after Vault is initialized, unsealed, and **Kubernetes auth** is configured.
 
 ## Prerequisites (before phases)
 
@@ -140,9 +150,9 @@ Lab stack is **up** on-cluster for bootstrap through **Phase D** (observability)
 
 ## Phase E — Secrets
 
-- [ ] **Sealed Secrets** (optional Git workflow)
-- [ ] **Vault** in-cluster on Longhorn + **auto-unseal**
-- [ ] **External Secrets Operator** + Vault `ClusterSecretStore`
+- [x] **Sealed Secrets** (optional Git workflow) — `clusters/noble/apps/sealed-secrets/` (Helm **2.18.4**); **`kubeseal`** + key backup per **`README.md`**
+- [x] **Vault** in-cluster on Longhorn + **auto-unseal** — `clusters/noble/apps/vault/` (Helm **0.32.0**); **Longhorn** PVC; **OSS** “auto-unseal” = optional **`unseal-cronjob.yaml`** + Secret (**README**); init/unseal/Kubernetes auth for ESO still **to do** on cluster
+- [x] **External Secrets Operator** + Vault `ClusterSecretStore` — operator **`clusters/noble/apps/external-secrets/`** (Helm **2.2.0**); apply **`examples/vault-cluster-secret-store.yaml`** after Vault (**`README.md`**)
 
 ## Phase F — Policy + backups
 
@@ -166,6 +176,9 @@ Lab stack is **up** on-cluster for bootstrap through **Phase D** (observability)
 - [x] **`loki`** — **Loki** SingleBinary + **gateway** **Running**; **`loki`** PVC **Bound** on **longhorn** (no chunks-cache by design)
 - [x] **`logging`** — **Fluent Bit** DaemonSet **Running** on all nodes (logs → **Loki**)
 - [x] **Grafana** — **Loki** datasource from **`grafana-loki-datasource`** ConfigMap (**Explore** works after apply + sidecar sync)
+- [x] **`sealed-secrets`** — controller **Deployment** **Running** in **`sealed-secrets`** (install + **`kubeseal`** per **`apps/sealed-secrets/README.md`**)
+- [x] **`external-secrets`** — controller + webhook + cert-controller **Running** in **`external-secrets`**; apply **`ClusterSecretStore`** after Vault **Kubernetes auth**
+- [x] **`vault`** — **StatefulSet** **Running**, **`data-vault-0`** PVC **Bound** on **longhorn**; **`vault operator init`** + unseal per **`apps/vault/README.md`**
 
 ---