Update Headlamp and Vault documentation; enhance RBAC configurations in Argo CD. Modify Headlamp README to clarify sessionTTL handling and ServiceAccount permissions. Add Cilium network policy instructions to Vault README. Update Argo CD values.yaml for default RBAC settings, ensuring local admin retains full access while new users start with read-only permissions. Reflect these changes in CLUSTER-BUILD.md.

Nikholas Pcenicni
2026-03-28 02:02:17 -04:00
parent 906c24b1d5
commit 445a1ac211
15 changed files with 188 additions and 13 deletions


@@ -2,7 +2,7 @@
[Headlamp](https://headlamp.dev/) web UI for the cluster. Exposed on **`https://headlamp.apps.noble.lab.pcenicni.dev`** via **Traefik** + **cert-manager** (`letsencrypt-prod`), same pattern as Grafana.
- **Chart:** `headlamp/headlamp` **0.40.1**
- **Chart:** `headlamp/headlamp` **0.40.1** (`config.sessionTTL: null` avoids chart/binary mismatch — [issue #4883](https://github.com/kubernetes-sigs/headlamp/issues/4883))
- **Namespace:** `headlamp`
## Install
@@ -15,4 +15,4 @@ helm upgrade --install headlamp headlamp/headlamp -n headlamp \
--version 0.40.1 -f clusters/noble/apps/headlamp/values.yaml --wait --timeout 10m
```
Sign-in uses a **ServiceAccount token** (Headlamp docs: create a limited SA for day-to-day use). The chart's default **ClusterRole** is powerful — tighten RBAC and/or add **OIDC** in **`values.yaml`** under **`config.oidc`** when hardening (**Phase G**).
Sign-in uses a **ServiceAccount token** (Headlamp docs: create a limited SA for day-to-day use). This repo binds the Headlamp workload SA to the built-in **`edit`** ClusterRole (**`clusterRoleBinding.clusterRoleName: edit`** in **`values.yaml`**) — not **`cluster-admin`**. For cluster-scoped admin work, use **`kubectl`** with your admin kubeconfig. Optional **OIDC** in **`config.oidc`** replaces token login for SSO.
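A sign-in token can be minted on demand via the TokenRequest API; the ServiceAccount name `headlamp` below is an assumption — confirm with `kubectl -n headlamp get sa`:

```bash
# Mint a short-lived sign-in token for the Headlamp ServiceAccount
# (SA name "headlamp" assumed; `kubectl create token` needs kubectl >= 1.24).
kubectl -n headlamp create token headlamp --duration 24h
```

Paste the printed token into the Headlamp login screen; it expires after the requested duration, so there is no long-lived secret to rotate.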


@@ -8,6 +8,18 @@
#
# DNS: headlamp.apps.noble.lab.pcenicni.dev → Traefik LB (see talos/CLUSTER-BUILD.md).
# Default chart RBAC is broad — restrict for production (Phase G).
# Bind Headlamp's ServiceAccount to the built-in **edit** ClusterRole (not **cluster-admin**).
# For break-glass cluster-admin, use kubectl with your admin kubeconfig — not Headlamp.
# If changing **clusterRoleName** on an existing install, Kubernetes forbids mutating **roleRef**:
#   kubectl delete clusterrolebinding headlamp-admin
#   helm upgrade … (same command as in the header comments)
clusterRoleBinding:
  clusterRoleName: edit
#
# Chart 0.40.1 passes -session-ttl but the v0.40.1 binary does not define it — omit the flag:
# https://github.com/kubernetes-sigs/headlamp/issues/4883
config:
  sessionTTL: null
ingress:
  enabled: true


@@ -22,6 +22,16 @@ kubectl -n vault get pods,pvc,svc
kubectl -n vault exec -i sts/vault -- vault status
```
## Cilium network policy (Phase G)
After **Cilium** is up, optionally restrict HTTP access to the Vault server pods (**TCP 8200**) to **`external-secrets`** and same-namespace clients:
```bash
kubectl apply -f clusters/noble/apps/vault/cilium-network-policy.yaml
```
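A quick check that the policy object exists and that the Vault endpoints are under ingress enforcement (the `cep` short name comes from Cilium's CiliumEndpoint CRD; output columns vary by Cilium version):

```bash
# Confirm the policy was accepted by the API server.
kubectl -n vault get ciliumnetworkpolicies vault-http-ingress
# Inspect Cilium endpoint state for the vault namespace.
kubectl -n vault get cep
```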
If you add workloads in other namespaces that call Vault, extend **`ingress`** in that manifest.
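As a sketch, granting a hypothetical `my-app` namespace access would mean appending one more rule to that manifest's `ingress` list (`my-app` is a placeholder, not part of this repo):

```yaml
# Hypothetical additional ingress rule — substitute the real client namespace.
- fromEndpoints:
    - matchLabels:
        "k8s:io.kubernetes.pod.namespace": my-app
  toPorts:
    - ports:
        - port: "8200"
          protocol: TCP
```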
## Initialize and unseal (first time)
From a workstation with `kubectl` (or `kubectl exec` into any pod with `vault` CLI):


@@ -0,0 +1,33 @@
# CiliumNetworkPolicy — restrict who may reach Vault HTTP listener (8200).
# Apply after Cilium is healthy: kubectl apply -f clusters/noble/apps/vault/cilium-network-policy.yaml
#
# Ingress-only policy: egress from Vault is unchanged (Kubernetes auth needs API + DNS).
# Extend ingress rules if other namespaces must call Vault (e.g. app workloads).
#
# Ref: https://docs.cilium.io/en/stable/security/policy/language/
---
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: vault-http-ingress
  namespace: vault
spec:
  endpointSelector:
    matchLabels:
      app.kubernetes.io/name: vault
      component: server
  ingress:
    - fromEndpoints:
        - matchLabels:
            "k8s:io.kubernetes.pod.namespace": external-secrets
      toPorts:
        - ports:
            - port: "8200"
              protocol: TCP
    - fromEndpoints:
        - matchLabels:
            "k8s:io.kubernetes.pod.namespace": vault
      toPorts:
        - ports:
            - port: "8200"
              protocol: TCP