Update kube-prometheus-stack values.yaml to clarify Loki datasource configuration and enhance observability documentation in CLUSTER-BUILD.md. Include deployment instructions for Loki and Fluent Bit, and mark tasks related to Grafana integration as completed.
@@ -14,8 +14,9 @@ This document is the **exported TODO** for the **noble** Talos cluster (4 nodes)
- **cert-manager** Helm **v1.20.0** / app **v1.20.0** — `clusters/noble/apps/cert-manager/`; **`ClusterIssuer`** **`letsencrypt-staging`** and **`letsencrypt-prod`** (HTTP-01, ingress class **`traefik`**); ACME email **`certificates@noble.lab.pcenicni.dev`** (edit in manifests if you want a different mailbox).
- **Newt** Helm **1.2.0** / app **1.10.1** — `clusters/noble/apps/newt/` (**fossorial/newt**); Pangolin site tunnel — **`newt-pangolin-auth`** Secret (**`PANGOLIN_ENDPOINT`**, **`NEWT_ID`**, **`NEWT_SECRET`**). **Public DNS** is **not** automated with ExternalDNS: create **CNAME** records at your DNS host per Pangolin’s domain instructions, and use the **Integration API** for HTTP resources/targets — see **`clusters/noble/apps/newt/README.md`**. LAN access to Traefik can still use **`*.apps.noble.lab.pcenicni.dev`** → **`192.168.50.211`** (split horizon / local resolver).
- **Argo CD** Helm **9.4.17** / app **v3.3.6** — `clusters/noble/bootstrap/argocd/`; **`argocd-server`** **`LoadBalancer`** **`192.168.50.210`**; app-of-apps scaffold under **`bootstrap/argocd/apps/`** (edit **`root-application.yaml`** `repoURL` before applying).
- **kube-prometheus-stack** — Helm chart **82.15.1** — `clusters/noble/apps/kube-prometheus-stack/` (**namespace** `monitoring`, PSA **privileged** — **node-exporter** needs host mounts); **Longhorn** PVCs for Prometheus, Grafana, Alertmanager. **Grafana Ingress:** **`https://grafana.apps.noble.lab.pcenicni.dev`** (Traefik **`ingressClassName: traefik`**, **`cert-manager.io/cluster-issuer: letsencrypt-prod`**). **`helm upgrade --install` with `--wait` is silent until done** — use **`--timeout 30m`** (not `5m`) and watch **`kubectl -n monitoring get pods -w`** in another terminal. Grafana admin password: Secret **`kube-prometheus-grafana`**, keys **`admin-user`** / **`admin-password`**.
- **Still open:** **Loki** + **Fluent Bit** + Grafana datasource (Phase D).
- **kube-prometheus-stack** — Helm chart **82.15.1** — `clusters/noble/apps/kube-prometheus-stack/` (**namespace** `monitoring`, PSA **privileged** — **node-exporter** needs host mounts); **Longhorn** PVCs for Prometheus, Grafana, Alertmanager. **Grafana Ingress:** **`https://grafana.apps.noble.lab.pcenicni.dev`** (Traefik **`ingressClassName: traefik`**, **`cert-manager.io/cluster-issuer: letsencrypt-prod`**). **Loki** in Grafana: ConfigMap **`clusters/noble/apps/grafana-loki-datasource/loki-datasource.yaml`** (sidecar label **`grafana_datasource`**) — apply after **Loki** is running; does not use **`grafana.additionalDataSources`** on the chart. **`helm upgrade --install` with `--wait` is silent until done** — use **`--timeout 30m`** (not `5m`) and watch **`kubectl -n monitoring get pods -w`** in another terminal. Grafana admin password: Secret **`kube-prometheus-grafana`**, keys **`admin-user`** / **`admin-password`**.
- **Loki** + **Fluent Bit** (manifests in repo) — **`grafana/loki` 6.55.0** SingleBinary + **filesystem** on **Longhorn** (`clusters/noble/apps/loki/`); **`loki.auth_enabled: false`** (single-tenant lab — avoids **`X-Scope-OrgID`** on Grafana/Fluent Bit); **`chunksCache.enabled: false`** (default memcached cache is heavy / often Pending on small nodes). **`fluent/fluent-bit` 0.56.0** tails **`/var/log/containers`** only → **`loki-gateway.loki.svc:80`** (`clusters/noble/apps/fluent-bit/`). **`logging`** namespace PSA **privileged** (hostPath).
- **Still open:** deploy **Loki** → **Fluent Bit** → **`helm upgrade kube-prometheus`** (Phase D checklist).
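For orientation, the sidecar-provisioned datasource described above could look roughly like this. Only the `grafana_datasource` label, the repo file path, and the `loki-gateway.loki.svc:80` URL come from this document; every other field (ConfigMap name, namespace, datasource name) is an illustrative assumption — defer to the actual file in the repo:

```shell
# Hypothetical shape of clusters/noble/apps/grafana-loki-datasource/loki-datasource.yaml.
# The kube-prometheus-stack Grafana sidecar watches ConfigMaps carrying the
# grafana_datasource label and loads the embedded provisioning YAML.
# Apply only once Loki is running.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: loki-datasource        # assumed name
  namespace: monitoring        # assumed: where the Grafana sidecar runs
  labels:
    grafana_datasource: "1"
data:
  loki-datasource.yaml: |
    apiVersion: 1
    datasources:
      - name: Loki
        type: loki
        access: proxy
        url: http://loki-gateway.loki.svc:80
EOF
```

Because the datasource arrives via the sidecar, no `helm upgrade` of the chart is needed to add it — which is the point of avoiding `grafana.additionalDataSources`.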
## Inventory
@@ -53,6 +54,8 @@ This document is the **exported TODO** for the **noble** Talos cluster (4 nodes)
- Newt (Fossorial): **1.2.0** (Helm chart; app **1.10.1**)
- Argo CD: **9.4.17** (Helm chart `argo/argo-cd`; app **v3.3.6**)
- kube-prometheus-stack: **82.15.1** (Helm chart `prometheus-community/kube-prometheus-stack`; app **v0.89.x** bundle)
- Loki: **6.55.0** (Helm chart `grafana/loki`; app **3.6.7**)
- Fluent Bit: **0.56.0** (Helm chart `fluent/fluent-bit`; app **4.2.3**)
## Repo paths (this workspace)
@@ -73,6 +76,9 @@ This document is the **exported TODO** for the **noble** Talos cluster (4 nodes)
| Newt / Pangolin tunnel (Helm) | `clusters/noble/apps/newt/` — `values.yaml`, `namespace.yaml`, `README.md` |
| Argo CD (bootstrap + app-of-apps) | `clusters/noble/bootstrap/argocd/` — `values.yaml`, `root-application.yaml`, `apps/`, `README.md` |
| kube-prometheus-stack (Helm values) | `clusters/noble/apps/kube-prometheus-stack/` — `values.yaml`, `namespace.yaml` |
| Grafana Loki datasource (ConfigMap; no chart change) | `clusters/noble/apps/grafana-loki-datasource/loki-datasource.yaml` |
| Loki (Helm values) | `clusters/noble/apps/loki/` — `values.yaml`, `namespace.yaml` |
| Fluent Bit → Loki (Helm values) | `clusters/noble/apps/fluent-bit/` — `values.yaml`, `namespace.yaml` |
**Git vs cluster:** manifests and `talconfig` live in git; **`talhelper genconfig -o out`**, bootstrap, Helm, and `kubectl` run on your LAN. See **`talos/README.md`** for workstation reachability (lab LAN/VPN), **`talosctl kubeconfig`** vs Kubernetes `server:` (VIP vs node IP), and **`--insecure`** only in maintenance.
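A minimal workstation sketch of that split — the `<node-ip>` placeholder is mine, and the VIP-vs-node-IP endpoint choice is exactly the caveat `talos/README.md` covers:

```shell
# Render Talos machine configs from talconfig into ./out
# (runs locally; the generated secrets stay out of git).
talhelper genconfig -o out

# Fetch a kubeconfig from a control-plane node. Whether the resulting
# Kubernetes server: points at the VIP or a node IP is the distinction
# the README calls out. <node-ip> is a placeholder, not a documented value.
talosctl kubeconfig --nodes <node-ip> --endpoints <node-ip>
```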
@@ -82,6 +88,7 @@ This document is the **exported TODO** for the **noble** Talos cluster (4 nodes)
2. **MetalLB Helm chart** (CRDs + controller) **before** `kubectl apply -k` on the pool manifests.
3. **`clusters/noble/apps/metallb/namespace.yaml`** applied before the chart, or its labels merged onto `metallb-system`, so Pod Security does not block the speaker (see `apps/metallb/README.md`).
4. **Longhorn:** Talos user volume + extensions in `talconfig.with-longhorn.yaml` (when restored); Helm **`defaultDataPath`** in `clusters/noble/apps/longhorn/values.yaml`.
5. **Loki → Fluent Bit → Grafana:** deploy **Loki** (`loki-gateway` Service) before **Fluent Bit**; run **`helm upgrade`** on **kube-prometheus-stack** after **Loki** so Grafana provisions the **Loki** datasource.
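Items 2–3 condensed into a hedged command sketch — the `metallb/metallb` chart alias and the pool-manifest kustomize path are assumptions; only the `namespace.yaml` path appears in this document:

```shell
# Namespace with its Pod Security labels first (or merge the labels onto the
# existing namespace), so the speaker DaemonSet is admitted.
kubectl apply -f clusters/noble/apps/metallb/namespace.yaml

# Helm chart next: installs the CRDs and controller that the pool
# manifests depend on. Chart repo alias is assumed.
helm upgrade --install metallb metallb/metallb -n metallb-system

# Pool manifests last, once the CRDs exist (path assumed; adjust to the repo).
kubectl apply -k clusters/noble/apps/metallb/
```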
## Prerequisites (before phases)
@@ -127,7 +134,7 @@ This document is the **exported TODO** for the **noble** Talos cluster (4 nodes)
## Phase D — Observability
- [x] **kube-prometheus-stack** — `kubectl apply -f clusters/noble/apps/kube-prometheus-stack/namespace.yaml` then **`helm upgrade --install`** as in `clusters/noble/apps/kube-prometheus-stack/values.yaml` (chart **82.15.1**); PVCs **`longhorn`**; **`--wait --timeout 30m`** recommended; verify **`kubectl -n monitoring get pods,pvc`**
- [ ] **Loki** + **Fluent Bit**; Grafana datasource
- [ ] **Loki** + **Fluent Bit** + **Grafana Loki datasource** — **order:** **`kubectl apply -f clusters/noble/apps/loki/namespace.yaml`** → **`helm upgrade --install loki`** `grafana/loki` **6.55.0** `-f clusters/noble/apps/loki/values.yaml` → **`kubectl apply -f clusters/noble/apps/fluent-bit/namespace.yaml`** → **`helm upgrade --install fluent-bit`** `fluent/fluent-bit` **0.56.0** `-f clusters/noble/apps/fluent-bit/values.yaml` → **`kubectl apply -f clusters/noble/apps/grafana-loki-datasource/loki-datasource.yaml`**. Verify **Explore → Loki** in Grafana; **`kubectl -n loki get pods,pvc`**, **`kubectl -n logging get pods`**
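The checklist above spelled out as a shell sequence. Commands, chart versions, and paths are from this document; the `-n loki` / `-n logging` namespace flags are assumptions inferred from the `loki-gateway.loki.svc` Service and the `logging` namespace named earlier — verify against the repo values files:

```shell
# Phase D order: Loki first (provides loki-gateway), then Fluent Bit,
# then the Grafana datasource ConfigMap that the sidecar picks up.
kubectl apply -f clusters/noble/apps/loki/namespace.yaml
helm upgrade --install loki grafana/loki --version 6.55.0 \
  -n loki -f clusters/noble/apps/loki/values.yaml

kubectl apply -f clusters/noble/apps/fluent-bit/namespace.yaml
helm upgrade --install fluent-bit fluent/fluent-bit --version 0.56.0 \
  -n logging -f clusters/noble/apps/fluent-bit/values.yaml

kubectl apply -f clusters/noble/apps/grafana-loki-datasource/loki-datasource.yaml

# Verify, then check Explore → Loki in Grafana.
kubectl -n loki get pods,pvc
kubectl -n logging get pods
```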
## Phase E — Secrets