HashiCorp Vault (noble)
Standalone Vault with file storage on a Longhorn PVC (server.dataStorage). The listener uses HTTP (global.tlsDisable: true) for in-cluster use; add TLS at the listener when exposing outside the cluster.
- Chart: hashicorp/vault 0.32.0 (Vault 1.21.2)
- Namespace: vault
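A minimal values.yaml consistent with the settings above might look like the sketch below. The keys are standard hashicorp/vault chart values; the storageClass name and size are assumptions for this repo:

```yaml
# Sketch of clusters/noble/apps/vault/values.yaml (storageClass and size are assumptions)
global:
  tlsDisable: true          # HTTP listener; in-cluster use only
server:
  standalone:
    enabled: true
  dataStorage:
    enabled: true
    storageClass: longhorn  # assumption: Longhorn StorageClass name
    size: 10Gi              # assumption
```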
Install
helm repo add hashicorp https://helm.releases.hashicorp.com
helm repo update
kubectl apply -f clusters/noble/apps/vault/namespace.yaml
helm upgrade --install vault hashicorp/vault -n vault \
--version 0.32.0 -f clusters/noble/apps/vault/values.yaml --wait --timeout 15m
Verify:
kubectl -n vault get pods,pvc,svc
kubectl -n vault exec -i sts/vault -- vault status
Note: vault status exits with code 2 while Vault is sealed, so kubectl exec reports a non-zero exit before the first unseal; that is expected.
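For a scriptable check instead of reading the status table, vault status -format=json can be parsed with jq. check_sealed below is a hypothetical helper; the kubectl pipeline in the comment assumes the same names used above:

```shell
# check_sealed: read `vault status -format=json` on stdin and report the seal
# state (hypothetical helper; requires jq).
check_sealed() {
  if [ "$(jq -r .sealed)" = "true" ]; then
    echo sealed
  else
    echo unsealed
  fi
}

# In-cluster usage (assumes the names used in this README):
#   kubectl -n vault exec -i sts/vault -- vault status -format=json | check_sealed

# Demo with canned output:
echo '{"initialized":true,"sealed":true}' | check_sealed   # → sealed
```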
Cilium network policy (Phase G)
After Cilium is up, optionally restrict HTTP access to the Vault server pods (TCP 8200) to external-secrets and same-namespace clients:
kubectl apply -f clusters/noble/apps/vault/cilium-network-policy.yaml
If you add workloads in other namespaces that call Vault, extend ingress in that manifest.
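As a sketch, admitting a hypothetical app namespace could look like the rule below. The policy name, pod labels, and namespace are assumptions; the real manifest in this repo may be shaped differently:

```yaml
# Hypothetical ingress rule for the CiliumNetworkPolicy guarding Vault.
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: vault-allow-myapp            # assumption
  namespace: vault
spec:
  endpointSelector:
    matchLabels:
      app.kubernetes.io/name: vault  # assumption: chart's pod labels
  ingress:
    - fromEndpoints:
        - matchLabels:
            k8s:io.kubernetes.pod.namespace: myapp  # hypothetical client namespace
      toPorts:
        - ports:
            - port: "8200"
              protocol: TCP
```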
Initialize and unseal (first time)
From a workstation with kubectl (or via kubectl exec from any pod that has the vault CLI):
kubectl -n vault exec -i sts/vault -- vault operator init -key-shares=1 -key-threshold=1
Lab-only: -key-shares=1 -key-threshold=1 keeps a single unseal key. For a stronger Shamir split, use more shares and a higher threshold, and store the shares separately.
Save the Unseal Key and Root Token offline. Then unseal once:
kubectl -n vault exec -i sts/vault -- vault operator unseal
# paste unseal key
Or create the Secret used by the optional CronJob and apply it:
kubectl -n vault create secret generic vault-unseal-key --from-literal=key='YOUR_UNSEAL_KEY'
kubectl apply -f clusters/noble/apps/vault/unseal-cronjob.yaml
The CronJob runs every minute and unseals if Vault is sealed and the Secret is present.
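The CronJob in unseal-cronjob.yaml is roughly equivalent to the sketch below; the image tag and container details are assumptions based on this README's description, and the Secret name/key match the create secret command above:

```yaml
# Sketch of a lab-only unseal CronJob: every minute, unseal if sealed.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: vault-unseal
  namespace: vault
spec:
  schedule: "* * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: unseal
              image: hashicorp/vault:1.21.2   # assumption: matches the chart's Vault version
              env:
                - name: VAULT_ADDR
                  value: http://vault.vault.svc.cluster.local:8200
                - name: KEY
                  valueFrom:
                    secretKeyRef:
                      name: vault-unseal-key
                      key: key
              command: ["/bin/sh", "-ec"]
              args:
                - |
                  # Only attempt an unseal when Vault reports sealed=true.
                  vault status -format=json | grep -q '"sealed": *true' || exit 0
                  vault operator unseal "$KEY"
```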
Auto-unseal note
Vault OSS auto-unseal uses cloud KMS (AWS, GCP, Azure, OCI), Transit (another Vault), etc. There is no first-class “Kubernetes Secret” seal. This repo uses an optional CronJob as a lab substitute. Production clusters should use a supported seal backend.
Kubernetes auth (External Secrets / ClusterSecretStore)
One-shot: from the repo root, export KUBECONFIG=talos/kubeconfig and export VAULT_TOKEN=…, then run ./clusters/noble/apps/vault/configure-kubernetes-auth.sh (it is idempotent). Afterwards, run kubectl apply -f clusters/noble/apps/external-secrets/examples/vault-cluster-secret-store.yaml on a line of its own: a trailing # … comment on the same line can be passed to kubectl as extra arguments and break the apply. After a few seconds, kubectl get clustersecretstore vault should show READY=True.
Run these from your workstation (needs kubectl; no local vault binary required). Use a short-lived admin token or the root token only in your shell — do not paste tokens into logs or chat.
1. Enable the auth method (skip if already done):
kubectl -n vault exec -it sts/vault -- sh -c '
export VAULT_ADDR=http://127.0.0.1:8200
export VAULT_TOKEN="YOUR_ROOT_OR_ADMIN_TOKEN"
vault auth enable kubernetes
'
2. Configure auth/kubernetes — the API issuer must match the iss claim on service account JWTs. With kube-vip / a custom API URL, discover it from the cluster (do not assume kubernetes.default):
ISSUER=$(kubectl get --raw /.well-known/openid-configuration | jq -r .issuer)
REVIEWER=$(kubectl -n vault create token vault --duration=8760h)
CA_B64=$(kubectl config view --raw --minify -o jsonpath='{.clusters[0].cluster.certificate-authority-data}')
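To sanity-check the discovered issuer against a real token, decode a service account JWT's payload (the second dot-separated, base64url-encoded segment) and compare its iss claim. jwt_iss below is a hypothetical helper (requires jq):

```shell
# jwt_iss: print the `iss` claim of the JWT in $1 (hypothetical helper; needs jq).
jwt_iss() {
  seg=$(printf '%s' "$1" | cut -d. -f2 | tr '_-' '/+')   # base64url -> base64
  case $(( ${#seg} % 4 )) in                             # restore stripped '=' padding
    2) seg="${seg}==" ;;
    3) seg="${seg}=" ;;
  esac
  printf '%s' "$seg" | base64 -d | jq -r .iss
}

# Against the cluster (assumes the variables set above):
#   [ "$(jwt_iss "$REVIEWER")" = "$ISSUER" ] && echo "issuer matches"
```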
Then apply config inside the Vault pod (environment variables are passed in with env so quoting stays correct):
export VAULT_TOKEN="YOUR_ROOT_OR_ADMIN_TOKEN"
export ISSUER REVIEWER CA_B64
kubectl -n vault exec -i sts/vault -- env \
VAULT_ADDR=http://127.0.0.1:8200 \
VAULT_TOKEN="$VAULT_TOKEN" \
CA_B64="$CA_B64" \
REVIEWER="$REVIEWER" \
ISSUER="$ISSUER" \
sh -ec '
echo "$CA_B64" | base64 -d > /tmp/k8s-ca.crt
vault write auth/kubernetes/config \
kubernetes_host="https://kubernetes.default.svc:443" \
kubernetes_ca_cert=@/tmp/k8s-ca.crt \
token_reviewer_jwt="$REVIEWER" \
issuer="$ISSUER"
'
3. KV v2 at path secret (skip if already enabled):
kubectl -n vault exec -it sts/vault -- sh -c '
export VAULT_ADDR=http://127.0.0.1:8200
export VAULT_TOKEN="YOUR_ROOT_OR_ADMIN_TOKEN"
vault secrets enable -path=secret kv-v2
'
4. Policy + role for the External Secrets operator SA (external-secrets / external-secrets):
kubectl -n vault exec -it sts/vault -- sh -c '
export VAULT_ADDR=http://127.0.0.1:8200
export VAULT_TOKEN="YOUR_ROOT_OR_ADMIN_TOKEN"
vault policy write external-secrets - <<EOF
path "secret/data/*" {
capabilities = ["read", "list"]
}
path "secret/metadata/*" {
capabilities = ["read", "list"]
}
EOF
vault write auth/kubernetes/role/external-secrets \
bound_service_account_names=external-secrets \
bound_service_account_namespaces=external-secrets \
policies=external-secrets \
ttl=24h
'
5. Apply clusters/noble/apps/external-secrets/examples/vault-cluster-secret-store.yaml if you have not already, then verify:
kubectl describe clustersecretstore vault
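Once the store shows READY, a minimal ExternalSecret against it might look like the sketch below. The resource name, target Secret, namespace, and KV path demo/app are assumptions for illustration:

```yaml
# Hypothetical ExternalSecret reading secret/data/demo/app via the `vault` store.
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: demo-app
  namespace: default
spec:
  refreshInterval: 1h
  secretStoreRef:
    kind: ClusterSecretStore
    name: vault
  target:
    name: demo-app        # Kubernetes Secret to create
  data:
    - secretKey: password
      remoteRef:
        key: demo/app     # KV v2 path under `secret/` (the data/ segment is implied)
        property: password
```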
See also the Vault Kubernetes auth method documentation.
TLS and External Secrets
values.yaml disables TLS on the Vault listener. The ClusterSecretStore example uses http://vault.vault.svc.cluster.local:8200. If you enable TLS on the listener, switch the URL to https:// and configure caBundle or caProvider on the store.
UI
Port-forward:
kubectl -n vault port-forward svc/vault-ui 8200:8200
Open http://127.0.0.1:8200 and log in with the root token (for production workflows, use scoped tokens and rotate or revoke the root token).