Update README.md and CLUSTER-BUILD.md to enhance documentation for Vault Kubernetes auth and ClusterSecretStore integration. Add one-shot configuration instructions for Kubernetes auth in README.md, and update CLUSTER-BUILD.md to reflect the current state of the Talos cluster, including new components like Headlamp and Renovate, along with their deployment details and next steps.

This commit is contained in:
Nikholas Pcenicni
2026-03-28 01:41:52 -04:00
parent a65b553252
commit d5f38bd766
11 changed files with 454 additions and 5 deletions

View File

@@ -0,0 +1,18 @@
# Headlamp (noble)
[Headlamp](https://headlamp.dev/) web UI for the cluster. Exposed on **`https://headlamp.apps.noble.lab.pcenicni.dev`** via **Traefik** + **cert-manager** (`letsencrypt-prod`), same pattern as Grafana.
- **Chart:** `headlamp/headlamp` **0.40.1**
- **Namespace:** `headlamp`
## Install
```bash
helm repo add headlamp https://kubernetes-sigs.github.io/headlamp/
helm repo update
kubectl apply -f clusters/noble/apps/headlamp/namespace.yaml
helm upgrade --install headlamp headlamp/headlamp -n headlamp \
--version 0.40.1 -f clusters/noble/apps/headlamp/values.yaml --wait --timeout 10m
```
Sign-in uses a **ServiceAccount token** (Headlamp docs: create a limited SA for day-to-day use). The chart's default **ClusterRole** is powerful; tighten RBAC and/or add **OIDC** in **`values.yaml`** under **`config.oidc`** when hardening (**Phase G**).
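As a sketch of the limited-SA approach, the built-in read-only `view` ClusterRole can back a day-to-day sign-in account (the names below are hypothetical, not from this repo; mint a token afterwards with `kubectl -n headlamp create token headlamp-viewer --duration=8h`):

```yaml
# Hypothetical read-only sign-in account for Headlamp (names are examples)
apiVersion: v1
kind: ServiceAccount
metadata:
  name: headlamp-viewer
  namespace: headlamp
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: headlamp-viewer-view
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view          # built-in read-only aggregate role
subjects:
  - kind: ServiceAccount
    name: headlamp-viewer
    namespace: headlamp
```

`view` cannot read Secrets or make changes, which is usually the right trade-off for a browsing UI; keep the full-권한 default only for break-glass use.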

View File

@@ -0,0 +1,10 @@
# Headlamp — apply before Helm.
# Chart pods do not satisfy PSA "restricted" (see install warnings); align with other UIs.
apiVersion: v1
kind: Namespace
metadata:
name: headlamp
labels:
pod-security.kubernetes.io/enforce: privileged
pod-security.kubernetes.io/audit: privileged
pod-security.kubernetes.io/warn: privileged

View File

@@ -0,0 +1,25 @@
# Headlamp — noble (Kubernetes web UI)
#
# helm repo add headlamp https://kubernetes-sigs.github.io/headlamp/
# helm repo update
# kubectl apply -f clusters/noble/apps/headlamp/namespace.yaml
# helm upgrade --install headlamp headlamp/headlamp -n headlamp \
# --version 0.40.1 -f clusters/noble/apps/headlamp/values.yaml --wait --timeout 10m
#
# DNS: headlamp.apps.noble.lab.pcenicni.dev → Traefik LB (see talos/CLUSTER-BUILD.md).
# Default chart RBAC is broad — restrict for production (Phase G).
ingress:
enabled: true
ingressClassName: traefik
annotations:
cert-manager.io/cluster-issuer: letsencrypt-prod
hosts:
- host: headlamp.apps.noble.lab.pcenicni.dev
paths:
- path: /
type: Prefix
tls:
- secretName: headlamp-apps-noble-tls
hosts:
- headlamp.apps.noble.lab.pcenicni.dev

View File

@@ -0,0 +1,31 @@
# Kyverno (noble)
Admission policies using [Kyverno](https://kyverno.io/). The main chart installs controllers and CRDs; **`kyverno-policies`** installs **Pod Security Standard** rules matching the **`baseline`** profile in **`Audit`** mode (violations are visible in policy reports; workloads are not denied).
- **Charts:** `kyverno/kyverno` **3.7.1** (app **v1.17.1**), `kyverno/kyverno-policies` **3.7.1**
- **Namespace:** `kyverno`
## Install
```bash
helm repo add kyverno https://kyverno.github.io/kyverno/
helm repo update
kubectl apply -f clusters/noble/apps/kyverno/namespace.yaml
helm upgrade --install kyverno kyverno/kyverno -n kyverno \
--version 3.7.1 -f clusters/noble/apps/kyverno/values.yaml --wait --timeout 15m
helm upgrade --install kyverno-policies kyverno/kyverno-policies -n kyverno \
--version 3.7.1 -f clusters/noble/apps/kyverno/policies-values.yaml --wait --timeout 10m
```
Verify:
```bash
kubectl -n kyverno get pods
kubectl get clusterpolicy | head
```
## Notes
- **`validationFailureAction: Audit`** in `policies-values.yaml` avoids breaking namespaces that need **privileged** behavior (Longhorn, monitoring node-exporter, etc.). Switch specific policies or namespaces to **`Enforce`** when you are ready.
- To use **`restricted`** instead of **`baseline`**, change **`podSecurityStandard`** in `policies-values.yaml` and reconcile expectations for host mounts and capabilities.
- Upgrade: bump **`--version`** on both charts together; read [Kyverno release notes](https://github.com/kyverno/kyverno/releases) for breaking changes.
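When switching policies to **`Enforce`**, a scoped carve-out usually works better than leaving the whole cluster in Audit. A hedged sketch using Kyverno's `PolicyException` CRD (policy and rule names here are illustrative, and some Kyverno versions require exceptions to be enabled via a feature flag):

```yaml
# Illustrative exception: let Longhorn pods bypass one baseline PSS policy
apiVersion: kyverno.io/v2
kind: PolicyException
metadata:
  name: longhorn-host-path     # example name
  namespace: kyverno
spec:
  exceptions:
    - policyName: disallow-host-path   # a baseline PSS ClusterPolicy
      ruleNames:
        - host-path
  match:
    any:
      - resources:
          kinds:
            - Pod
          namespaces:
            - longhorn-system
```

This keeps the policy enforced everywhere else while documenting exactly which workloads are exempt and why.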

View File

@@ -0,0 +1,5 @@
# Kyverno — apply before Helm.
apiVersion: v1
kind: Namespace
metadata:
name: kyverno

View File

@@ -0,0 +1,16 @@
# kyverno/kyverno-policies — Pod Security Standards as Kyverno ClusterPolicies
#
# helm upgrade --install kyverno-policies kyverno/kyverno-policies -n kyverno \
# --version 3.7.1 -f clusters/noble/apps/kyverno/policies-values.yaml --wait --timeout 10m
#
# Default profile is baseline; validationFailureAction is Audit so existing privileged
# workloads (monitoring, longhorn, etc.) are reported, not blocked. Tighten per policy or
# namespace when ready (see README).
#
policyKind: ClusterPolicy
policyType: ClusterPolicy
podSecurityStandard: baseline
podSecuritySeverity: medium
validationFailureAction: Audit
failurePolicy: Fail
validationAllowExistingViolations: true

View File

@@ -0,0 +1,10 @@
# Kyverno — noble (policy engine)
#
# helm repo add kyverno https://kyverno.github.io/kyverno/
# helm repo update
# kubectl apply -f clusters/noble/apps/kyverno/namespace.yaml
# helm upgrade --install kyverno kyverno/kyverno -n kyverno \
# --version 3.7.1 -f clusters/noble/apps/kyverno/values.yaml --wait --timeout 15m
#
# Baseline Pod Security policies (separate chart): see policies-values.yaml + README.md
#

View File

@@ -54,6 +54,8 @@ Vault **OSS** auto-unseal uses cloud KMS (AWS, GCP, Azure, OCI), **Transit** (an
## Kubernetes auth (External Secrets / ClusterSecretStore)
**One-shot:** from the repo root, `export KUBECONFIG=talos/kubeconfig` and `export VAULT_TOKEN=…`, then run **`./clusters/noble/apps/vault/configure-kubernetes-auth.sh`** (idempotent). Next, run **`kubectl apply -f clusters/noble/apps/external-secrets/examples/vault-cluster-secret-store.yaml`** on its own line; a shell comment (**`# …`**) pasted on the same line is parsed as extra `kubectl` arguments and breaks `apply`. After a few seconds, **`kubectl get clustersecretstore vault`** should show **READY=True**.
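For reference, the applied manifest looks roughly like this (a sketch: field names follow the External Secrets Vault provider docs, while the in-cluster server URL is an assumption about this repo; the mount path, role, and ServiceAccount match what `configure-kubernetes-auth.sh` creates):

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ClusterSecretStore
metadata:
  name: vault
spec:
  provider:
    vault:
      server: http://vault.vault.svc:8200   # assumed in-cluster address
      path: secret                          # KV v2 mount enabled by the script
      version: v2
      auth:
        kubernetes:
          mountPath: kubernetes
          role: external-secrets            # Vault role written by the script
          serviceAccountRef:
            name: external-secrets
            namespace: external-secrets
```

If `READY` stays `False`, `kubectl describe clustersecretstore vault` surfaces the Vault login error (usually a role, issuer, or CA mismatch).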
Run these **from your workstation** (needs `kubectl`; no local `vault` binary required). Use a **short-lived admin token** or the root token **only in your shell** — do not paste tokens into logs or chat.
**1. Enable the auth method** (skip if already done):

View File

@@ -0,0 +1,77 @@
#!/usr/bin/env bash
# Configure Vault Kubernetes auth + KV v2 + policy/role for External Secrets Operator.
# Requires: kubectl (cluster access) and jq (used to read the OIDC issuer); Vault reachable via sts/vault.
#
# Usage (from repo root):
# export KUBECONFIG=talos/kubeconfig # or your path
# export VAULT_TOKEN='…' # root or admin token — never commit
# ./clusters/noble/apps/vault/configure-kubernetes-auth.sh
#
# Then: kubectl apply -f clusters/noble/apps/external-secrets/examples/vault-cluster-secret-store.yaml
# Verify: kubectl describe clustersecretstore vault
set -euo pipefail
: "${VAULT_TOKEN:?Set VAULT_TOKEN to your Vault root or admin token}"
ISSUER=$(kubectl get --raw /.well-known/openid-configuration | jq -r .issuer)
REVIEWER=$(kubectl -n vault create token vault --duration=8760h)
CA_B64=$(kubectl config view --raw --minify -o jsonpath='{.clusters[0].cluster.certificate-authority-data}')
kubectl -n vault exec -i sts/vault -- env \
VAULT_ADDR=http://127.0.0.1:8200 \
VAULT_TOKEN="$VAULT_TOKEN" \
sh -ec '
set -e
vault auth list >/tmp/vauth.txt
grep -q "^kubernetes/" /tmp/vauth.txt || vault auth enable kubernetes
'
kubectl -n vault exec -i sts/vault -- env \
VAULT_ADDR=http://127.0.0.1:8200 \
VAULT_TOKEN="$VAULT_TOKEN" \
CA_B64="$CA_B64" \
REVIEWER="$REVIEWER" \
ISSUER="$ISSUER" \
sh -ec '
echo "$CA_B64" | base64 -d > /tmp/k8s-ca.crt
vault write auth/kubernetes/config \
kubernetes_host="https://kubernetes.default.svc:443" \
kubernetes_ca_cert=@/tmp/k8s-ca.crt \
token_reviewer_jwt="$REVIEWER" \
issuer="$ISSUER"
'
kubectl -n vault exec -i sts/vault -- env \
VAULT_ADDR=http://127.0.0.1:8200 \
VAULT_TOKEN="$VAULT_TOKEN" \
sh -ec '
set -e
vault secrets list >/tmp/vsec.txt
grep -q "^secret/" /tmp/vsec.txt || vault secrets enable -path=secret kv-v2
'
kubectl -n vault exec -i sts/vault -- env \
VAULT_ADDR=http://127.0.0.1:8200 \
VAULT_TOKEN="$VAULT_TOKEN" \
sh -ec '
vault policy write external-secrets - <<EOF
path "secret/data/*" {
capabilities = ["read", "list"]
}
path "secret/metadata/*" {
capabilities = ["read", "list"]
}
EOF
vault write auth/kubernetes/role/external-secrets \
bound_service_account_names=external-secrets \
bound_service_account_namespaces=external-secrets \
policies=external-secrets \
ttl=24h
'
echo "Done. Issuer used: $ISSUER"
echo ""
echo "Next (each command on its own line — do not paste # comments after kubectl):"
echo " kubectl apply -f clusters/noble/apps/external-secrets/examples/vault-cluster-secret-store.yaml"
echo " kubectl get clustersecretstore vault"