Compare commits: `41841abc84...main` (29 commits)
SHA1: aeffc7d6dd, 0f88a33216, bfb72cb519, 51eb64dd9d, f259285f6e, c312ceeb56, c15bf4d708, 89be30884e, 16948c62f9, 3a6e5dff5b, 023ebfee5d, 27fb4113eb, 4026591f0b, 8a740019ad, 544f75b0ee, 33a10dc7e9, a4b9913b7e, 11c62009a4, 03ed4e70a2, 7855b10982, 079c11b20c, bf108a37e2, 97b56581ed, f154658d79, 90509bacc5, e4741ecd15, f6647056be, 76eb7df18c, 90fd8fb8a6
**`.env.sample`** (10 changed lines)

```diff
@@ -2,12 +2,18 @@
 # Ansible **noble_cert_manager** role sources `.env` after cert-manager Helm install and creates
 # **cert-manager/cloudflare-dns-api-token** when **CLOUDFLARE_DNS_API_TOKEN** is set.
 #
-# Cloudflare: Zone → DNS → Edit + Zone → Read for **pcenicni.dev** (see clusters/noble/apps/cert-manager/README.md).
+# Cloudflare: Zone → DNS → Edit + Zone → Read for **pcenicni.dev** (see clusters/noble/bootstrap/cert-manager/README.md).
 CLOUDFLARE_DNS_API_TOKEN=

 # --- Optional: other deploy-time values (documented for manual use or future automation) ---

-# Pangolin / Newt — with **noble_newt_install=true**, Ansible creates **newt/newt-pangolin-auth** when all are set (see clusters/noble/apps/newt/README.md).
+# Pangolin / Newt — with **noble_newt_install=true**, Ansible creates **newt/newt-pangolin-auth** when all are set (see clusters/noble/bootstrap/newt/README.md).
 PANGOLIN_ENDPOINT=
 NEWT_ID=
 NEWT_SECRET=

+# Velero — when **noble_velero_install=true**, set bucket + S3 API URL and credentials (see clusters/noble/bootstrap/velero/README.md).
+NOBLE_VELERO_S3_BUCKET=
+NOBLE_VELERO_S3_URL=
+NOBLE_VELERO_AWS_ACCESS_KEY_ID=
+NOBLE_VELERO_AWS_SECRET_ACCESS_KEY=
```
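The **cert-manager/cloudflare-dns-api-token** Secret that the role creates from `CLOUDFLARE_DNS_API_TOKEN` could also be applied by hand; a minimal sketch, assuming an `api-token` data key (the key name is an assumption, not taken from the repo):

```yaml
# Hypothetical equivalent of the Secret the noble_cert_manager role creates.
apiVersion: v1
kind: Secret
metadata:
  name: cloudflare-dns-api-token   # name from the .env.sample comments above
  namespace: cert-manager
type: Opaque
stringData:
  api-token: "<value of CLOUDFLARE_DNS_API_TOKEN>"  # assumed key name; check the role/ClusterIssuer
```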
**`.sops.yaml`** (new file, 7 lines)

```yaml
# Mozilla SOPS — encrypt/decrypt Kubernetes Secret manifests under clusters/noble/secrets/
# Generate a key: age-keygen -o age-key.txt (age-key.txt is gitignored)
# Add the printed public key below (one recipient per line is supported).
creation_rules:
  - path_regex: clusters/noble/secrets/.*\.yaml$
    age: >-
      age1juym5p3ez3dkt0dxlznydgfgqvaujfnyk9hpdsssf50hsxeh3p4sjpf3gn
```
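The `path_regex` above requires the `clusters/noble/secrets/` prefix and a `.yaml` suffix; a quick shell check (the file names here are hypothetical) shows what it does and does not match:

```shell
# Same regex as .sops.yaml's creation_rules, tested as a POSIX ERE via grep -E.
regex='clusters/noble/secrets/.*\.yaml$'

printf '%s\n' 'clusters/noble/secrets/cloudflare-token.yaml' | grep -Eq "$regex" \
  && echo 'secrets yaml: match'
printf '%s\n' 'clusters/noble/apps/deployment.yaml' | grep -Eq "$regex" \
  || echo 'outside secrets/: no match'
printf '%s\n' 'clusters/noble/secrets/age-key.txt' | grep -Eq "$regex" \
  || echo 'non-yaml: no match'
```

Only the first path would be encrypted by SOPS; the age key file itself never matches, which is consistent with it being gitignored rather than encrypted.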
```diff
@@ -180,6 +180,12 @@ Shared services used across multiple applications.

**Configuration:** Requires Pangolin endpoint URL, Newt ID, and Newt secret.

### versitygw/ (`komodo/s3/versitygw/`)

- **[Versity S3 Gateway](https://github.com/versity/versitygw)** — S3 API on port **10000** by default; optional **WebUI** on **8080** (not the same listener—enable `VERSITYGW_WEBUI_PORT` / `VGW_WEBUI_GATEWAYS` per `.env.sample`). Behind **Pangolin**, expose the API and WebUI separately (or you will see **404** browsing the API URL).

**Configuration:** Set either `ROOT_ACCESS_KEY` / `ROOT_SECRET_KEY` or `ROOT_ACCESS_KEY_ID` / `ROOT_SECRET_ACCESS_KEY`. Optional `VERSITYGW_PORT`. Compose uses `${VAR}` interpolation so credentials work with Komodo’s `docker compose --env-file <run_directory>/.env` (avoid `env_file:` in the service when `run_directory` is not the same folder as `compose.yaml`, or the written `.env` will not be found).

---

## 📊 Monitoring (`komodo/monitor/`)
```
```diff
@@ -1,6 +1,6 @@
 # Ansible — noble cluster

-Automates [`talos/CLUSTER-BUILD.md`](../talos/CLUSTER-BUILD.md): optional **Talos Phase A** (genconfig → apply → bootstrap → kubeconfig), then **Phase B+** (CNI → add-ons → ingress → Argo CD → Kyverno → observability, etc.). **Argo CD** does not reconcile core charts — optional GitOps starts from an empty [`clusters/noble/bootstrap/argocd/apps/kustomization.yaml`](../clusters/noble/bootstrap/argocd/apps/kustomization.yaml).
+Automates [`talos/CLUSTER-BUILD.md`](../talos/CLUSTER-BUILD.md): optional **Talos Phase A** (genconfig → apply → bootstrap → kubeconfig), then **Phase B+** (CNI → add-ons → ingress → Argo CD → Kyverno → observability, etc.). **Argo CD** does not reconcile core charts — optional GitOps starts from an empty [`clusters/noble/apps/kustomization.yaml`](../clusters/noble/apps/kustomization.yaml).

 ## Order of operations
```
||||
@@ -24,6 +24,7 @@ Copy **`.env.sample`** to **`.env`** at the repository root (`.env` is gitignore
|
||||
## Prerequisites
|
||||
|
||||
- `talosctl` (matches node Talos version), `talhelper`, `helm`, `kubectl`.
|
||||
- **SOPS secrets:** `sops` and `age` on the control host if you use **`clusters/noble/secrets/`** with **`age-key.txt`** (see **`clusters/noble/secrets/README.md`**).
|
||||
- **Phase A:** same LAN/VPN as nodes so **Talos :50000** and **Kubernetes :6443** are reachable (see [`talos/README.md`](../talos/README.md) §3).
|
||||
- **noble.yml:** bootstrapped cluster and **`talos/kubeconfig`** (or `KUBECONFIG`).
|
||||
|
||||
````diff
@@ -34,8 +35,16 @@ Copy **`.env.sample`** to **`.env`** at the repository root (`.env` is gitignore
| [`playbooks/deploy.yml`](playbooks/deploy.yml) | **Talos Phase A** then **`noble.yml`** (full automation). |
| [`playbooks/talos_phase_a.yml`](playbooks/talos_phase_a.yml) | `genconfig` → `apply-config` → `bootstrap` → `kubeconfig` only. |
| [`playbooks/noble.yml`](playbooks/noble.yml) | Helm + `kubectl` platform (after Phase A). |
-| [`playbooks/post_deploy.yml`](playbooks/post_deploy.yml) | Vault / ESO reminders (`noble_apply_vault_cluster_secret_store`). |
+| [`playbooks/post_deploy.yml`](playbooks/post_deploy.yml) | SOPS reminders and optional Argo root Application note. |
| [`playbooks/talos_bootstrap.yml`](playbooks/talos_bootstrap.yml) | **`talhelper genconfig` only** (legacy shortcut; prefer **`talos_phase_a.yml`**). |
+| [`playbooks/debian_harden.yml`](playbooks/debian_harden.yml) | Baseline hardening for Debian servers (SSH/sysctl/fail2ban/unattended-upgrades). |
+| [`playbooks/debian_maintenance.yml`](playbooks/debian_maintenance.yml) | Debian maintenance run (apt upgrades, autoremove/autoclean, reboot when required). |
+| [`playbooks/debian_rotate_ssh_keys.yml`](playbooks/debian_rotate_ssh_keys.yml) | Rotate managed users' `authorized_keys`. |
+| [`playbooks/debian_ops.yml`](playbooks/debian_ops.yml) | Convenience pipeline: harden then maintenance for Debian servers. |
+| [`playbooks/proxmox_prepare.yml`](playbooks/proxmox_prepare.yml) | Configure Proxmox community repos and disable no-subscription UI warning. |
+| [`playbooks/proxmox_upgrade.yml`](playbooks/proxmox_upgrade.yml) | Proxmox maintenance run (apt dist-upgrade, cleanup, reboot when required). |
+| [`playbooks/proxmox_cluster.yml`](playbooks/proxmox_cluster.yml) | Create a Proxmox cluster on the master and join additional hosts. |
+| [`playbooks/proxmox_ops.yml`](playbooks/proxmox_ops.yml) | Convenience pipeline: prepare, upgrade, then cluster Proxmox hosts. |

```bash
cd ansible
````
````diff
@@ -65,11 +74,13 @@ Override with `-e` when needed, e.g. **`-e noble_talos_skip_bootstrap=true`** if
```bash
ansible-playbook playbooks/noble.yml --tags cilium,metallb
ansible-playbook playbooks/noble.yml --skip-tags newt
+ansible-playbook playbooks/noble.yml --tags velero -e noble_velero_install=true -e noble_velero_s3_bucket=... -e noble_velero_s3_url=...
```

-### Variables — `group_vars/all.yml`
+### Variables — `group_vars/all.yml` and role defaults

-- **`noble_newt_install`**, **`noble_cert_manager_require_cloudflare_secret`**, **`noble_apply_vault_cluster_secret_store`**, **`noble_k8s_api_server_override`**, **`noble_k8s_api_server_auto_fallback`**, **`noble_k8s_api_server_fallback`**, **`noble_skip_k8s_health_check`**.
+- **`group_vars/all.yml`:** **`noble_newt_install`**, **`noble_velero_install`**, **`noble_cert_manager_require_cloudflare_secret`**, **`noble_argocd_apply_root_application`**, **`noble_argocd_apply_bootstrap_root_application`**, **`noble_k8s_api_server_override`**, **`noble_k8s_api_server_auto_fallback`**, **`noble_k8s_api_server_fallback`**, **`noble_skip_k8s_health_check`**
+- **`roles/noble_platform/defaults/main.yml`:** **`noble_apply_sops_secrets`**, **`noble_sops_age_key_file`** (SOPS secrets under **`clusters/noble/secrets/`**)

## Roles
````
||||
@@ -77,10 +88,67 @@ ansible-playbook playbooks/noble.yml --skip-tags newt
|
||||
|------|----------|
|
||||
| `talos_phase_a` | Talos genconfig, apply-config, bootstrap, kubeconfig |
|
||||
| `helm_repos` | `helm repo add` / `update` |
|
||||
| `noble_*` | Cilium, metrics-server, Longhorn, MetalLB (20m Helm wait), kube-vip, Traefik, cert-manager, Newt, Argo CD, Kyverno, platform stack |
|
||||
| `noble_*` | Cilium, CSI Volume Snapshot CRDs + controller, metrics-server, Longhorn, MetalLB (20m Helm wait), kube-vip, Traefik, cert-manager, Newt, Argo CD, Kyverno, platform stack, Velero (optional) |
|
||||
| `noble_landing_urls` | Writes **`ansible/output/noble-lab-ui-urls.md`** — URLs, service names, and (optional) Argo/Grafana passwords from Secrets |
|
||||
| `noble_post_deploy` | Post-install reminders |
|
||||
| `talos_bootstrap` | Genconfig-only (used by older playbook) |
|
||||
| `debian_baseline_hardening` | Baseline Debian hardening (SSH policy, sysctl profile, fail2ban, unattended upgrades) |
|
||||
| `debian_maintenance` | Routine Debian maintenance tasks (updates, cleanup, reboot-on-required) |
|
||||
| `debian_ssh_key_rotation` | Declarative `authorized_keys` rotation for server users |
|
||||
| `proxmox_baseline` | Proxmox repo prep (community repos) and no-subscription warning suppression |
|
||||
| `proxmox_maintenance` | Proxmox package maintenance (dist-upgrade, cleanup, reboot-on-required) |
|
||||
| `proxmox_cluster` | Proxmox cluster bootstrap/join automation using `pvecm` |
|
||||
|
||||
## Debian server ops quick start
|
||||
|
||||
These playbooks are separate from the Talos/noble flow and target hosts in `debian_servers`.
|
||||
|
||||
1. Copy `inventory/debian.example.yml` to `inventory/debian.yml` and update hosts/users.
|
||||
2. Update `group_vars/debian_servers.yml` with your allowed SSH users and real public keys.
|
||||
3. Run with the Debian inventory:
|
||||
|
||||
```bash
|
||||
cd ansible
|
||||
ansible-playbook -i inventory/debian.yml playbooks/debian_harden.yml
|
||||
ansible-playbook -i inventory/debian.yml playbooks/debian_rotate_ssh_keys.yml
|
||||
ansible-playbook -i inventory/debian.yml playbooks/debian_maintenance.yml
|
||||
```
|
||||
|
||||
Or run the combined maintenance pipeline:
|
||||
|
||||
```bash
|
||||
cd ansible
|
||||
ansible-playbook -i inventory/debian.yml playbooks/debian_ops.yml
|
||||
```
|
||||
|
||||
## Proxmox host + cluster quick start
|
||||
|
||||
These playbooks are separate from the Talos/noble flow and target hosts in `proxmox_hosts`.
|
||||
|
||||
1. Copy `inventory/proxmox.example.yml` to `inventory/proxmox.yml` and update hosts/users.
|
||||
2. Update `group_vars/proxmox_hosts.yml` with your cluster name (`proxmox_cluster_name`), chosen cluster master, and root public key file paths to install.
|
||||
3. First run (no SSH keys yet): use `--ask-pass` **or** set `ansible_password` (prefer Ansible Vault). Keep `ansible_ssh_common_args: "-o StrictHostKeyChecking=accept-new"` in inventory for first-contact hosts.
|
||||
4. Run prepare first to install your public keys on each host, then continue:
|
||||
|
||||
```bash
|
||||
cd ansible
|
||||
ansible-playbook -i inventory/proxmox.yml playbooks/proxmox_prepare.yml --ask-pass
|
||||
ansible-playbook -i inventory/proxmox.yml playbooks/proxmox_upgrade.yml
|
||||
ansible-playbook -i inventory/proxmox.yml playbooks/proxmox_cluster.yml
|
||||
```
|
||||
|
||||
After `proxmox_prepare.yml` finishes, SSH key auth should work for root (keys from `proxmox_root_authorized_key_files`), so `--ask-pass` is usually no longer needed.
|
||||
|
||||
If `pvecm add` still prompts for the master root password during join, set `proxmox_cluster_master_root_password` (prefer Vault) to run join non-interactively.
|
||||
|
||||
Changing `proxmox_cluster_name` only affects new cluster creation; it does not rename an already-created cluster.
|
||||
|
||||
Or run the full Proxmox pipeline:
|
||||
|
||||
```bash
|
||||
cd ansible
|
||||
ansible-playbook -i inventory/proxmox.yml playbooks/proxmox_ops.yml
|
||||
```
|
||||
|
||||
## Migrating from Argo-managed `noble-platform`
|
||||
|
||||
|
||||
```diff
@@ -13,11 +13,16 @@ noble_k8s_api_server_fallback: "https://192.168.50.20:6443"
 # Only if you must skip the kubectl /healthz preflight (not recommended).
 noble_skip_k8s_health_check: false

-# Pangolin / Newt — set true only after creating newt-pangolin-auth Secret (see clusters/noble/apps/newt/README.md)
+# Pangolin / Newt — set true only after newt-pangolin-auth Secret exists (SOPS: clusters/noble/secrets/ or imperative — see clusters/noble/bootstrap/newt/README.md)
 noble_newt_install: false

 # cert-manager needs Secret cloudflare-dns-api-token in cert-manager namespace before ClusterIssuers work
 noble_cert_manager_require_cloudflare_secret: true

-# post_deploy.yml — apply Vault ClusterSecretStore only after Vault is initialized and K8s auth is configured
-noble_apply_vault_cluster_secret_store: false
+# Velero — set **noble_velero_install: true** plus S3 bucket/URL (and credentials — see clusters/noble/bootstrap/velero/README.md)
+noble_velero_install: false

+# Argo CD — apply app-of-apps root Application (clusters/noble/bootstrap/argocd/root-application.yaml). Set false to skip.
+noble_argocd_apply_root_application: true
+# Bootstrap kustomize in Argo (**noble-bootstrap-root** → **clusters/noble/bootstrap**). Applied with manual sync; enable automation after **noble.yml** (see **clusters/noble/bootstrap/argocd/README.md** §5).
+noble_argocd_apply_bootstrap_root_application: true
```
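Putting the new toggles together, enabling Velero means flipping the flag and supplying the S3 settings; the variable names come from the hunk above and from the README's `--tags velero` example, while the values below are placeholders:

```yaml
# group_vars/all.yml overrides (placeholder bucket/URL; can also be passed with -e)
noble_velero_install: true
noble_velero_s3_bucket: noble-velero-backups       # placeholder
noble_velero_s3_url: https://s3.example.internal   # placeholder
```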
**`ansible/group_vars/debian_servers.yml`** (new file, 12 lines)

```yaml
---
# Hardened SSH settings
debian_baseline_ssh_allow_users:
  - admin

# Example key rotation entries. Replace with your real users and keys.
debian_ssh_rotation_users:
  - name: admin
    home: /home/admin
    state: present
    keys:
      - "ssh-ed25519 AAAAEXAMPLE_REPLACE_ME admin@workstation"
```
**`ansible/group_vars/proxmox_hosts.yml`** (new file, 37 lines)

```yaml
---
# Proxmox repositories
proxmox_repo_debian_codename: trixie
proxmox_repo_disable_enterprise: true
proxmox_repo_disable_ceph_enterprise: true
proxmox_repo_enable_pve_no_subscription: true
proxmox_repo_enable_ceph_no_subscription: true

# Suppress "No valid subscription" warning in UI
proxmox_no_subscription_notice_disable: true

# Public keys to install for root on each Proxmox host.
proxmox_root_authorized_key_files:
  - "{{ lookup('env', 'HOME') }}/.ssh/id_ed25519.pub"
  - "{{ lookup('env', 'HOME') }}/.ssh/ansible.pub"

# Package upgrade/reboot policy
proxmox_upgrade_apt_cache_valid_time: 3600
proxmox_upgrade_autoremove: true
proxmox_upgrade_autoclean: true
proxmox_upgrade_reboot_if_required: true
proxmox_upgrade_reboot_timeout: 1800

# Cluster settings
proxmox_cluster_enabled: true
proxmox_cluster_name: atomic-hub

# Bootstrap host name from inventory (first host by default if empty)
proxmox_cluster_master: ""

# Optional explicit IP/FQDN for joining; leave empty to use ansible_host of master
proxmox_cluster_master_ip: ""
proxmox_cluster_force: false

# Optional: use only for first cluster joins when inter-node SSH trust is not established.
# Prefer storing with Ansible Vault if you set this.
proxmox_cluster_master_root_password: ""  # set via Ansible Vault; never commit a real password
```
**`ansible/inventory/debian.example.yml`** (new file, 11 lines)

```yaml
---
all:
  children:
    debian_servers:
      hosts:
        debian-01:
          ansible_host: 192.168.50.101
          ansible_user: admin
        debian-02:
          ansible_host: 192.168.50.102
          ansible_user: admin
```
**`ansible/inventory/proxmox.example.yml`** (new file, 24 lines)

```yaml
---
all:
  children:
    proxmox_hosts:
      vars:
        ansible_ssh_common_args: "-o StrictHostKeyChecking=accept-new"
      hosts:
        helium:
          ansible_host: 192.168.1.100
          ansible_user: root
          # First run without SSH keys:
          # ansible_password: "{{ vault_proxmox_root_password }}"
        neon:
          ansible_host: 192.168.1.90
          ansible_user: root
          # ansible_password: "{{ vault_proxmox_root_password }}"
        argon:
          ansible_host: 192.168.1.80
          ansible_user: root
          # ansible_password: "{{ vault_proxmox_root_password }}"
        krypton:
          ansible_host: 192.168.1.70
          ansible_user: root
          # ansible_password: "{{ vault_proxmox_root_password }}"
```
**`ansible/inventory/proxmox.yml`** (new file, 24 lines)

```yaml
---
all:
  children:
    proxmox_hosts:
      vars:
        ansible_ssh_common_args: "-o StrictHostKeyChecking=accept-new"
      hosts:
        helium:
          ansible_host: 192.168.1.100
          ansible_user: root
          # First run without SSH keys:
          # ansible_password: "{{ vault_proxmox_root_password }}"
        neon:
          ansible_host: 192.168.1.90
          ansible_user: root
          # ansible_password: "{{ vault_proxmox_root_password }}"
        argon:
          ansible_host: 192.168.1.80
          ansible_user: root
          # ansible_password: "{{ vault_proxmox_root_password }}"
        krypton:
          ansible_host: 192.168.1.70
          ansible_user: root
          # ansible_password: "{{ vault_proxmox_root_password }}"
```
**`ansible/playbooks/debian_harden.yml`** (new file, 8 lines)

```yaml
---
- name: Debian server baseline hardening
  hosts: debian_servers
  become: true
  gather_facts: true
  roles:
    - role: debian_baseline_hardening
      tags: [hardening, baseline]
```
**`ansible/playbooks/debian_maintenance.yml`** (new file, 8 lines)

```yaml
---
- name: Debian maintenance (updates + reboot handling)
  hosts: debian_servers
  become: true
  gather_facts: true
  roles:
    - role: debian_maintenance
      tags: [maintenance, updates]
```
**`ansible/playbooks/debian_ops.yml`** (new file, 3 lines)

```yaml
---
- import_playbook: debian_harden.yml
- import_playbook: debian_maintenance.yml
```
**`ansible/playbooks/debian_rotate_ssh_keys.yml`** (new file, 8 lines)

```yaml
---
- name: Debian SSH key rotation
  hosts: debian_servers
  become: true
  gather_facts: false
  roles:
    - role: debian_ssh_key_rotation
      tags: [ssh, ssh_keys, rotation]
```
```diff
@@ -3,8 +3,8 @@
 # Do not run until `kubectl get --raw /healthz` returns ok (see talos/README.md §3, CLUSTER-BUILD Phase A).
 # Run from repo **ansible/** directory: ansible-playbook playbooks/noble.yml
 #
-# Tags: repos, cilium, metrics, longhorn, metallb, kube_vip, traefik, cert_manager, newt,
-#       argocd, kyverno, kyverno_policies, platform, all (default)
+# Tags: repos, cilium, csi_snapshot, metrics, longhorn, metallb, kube_vip, traefik, cert_manager, newt,
+#       argocd, kyverno, kyverno_policies, platform, velero, all (default)
 - name: Noble cluster — platform stack (Ansible-managed)
   hosts: localhost
   connection: local
```
```diff
@@ -113,6 +113,7 @@
       tags: [always]

     # talosctl kubeconfig often sets server to the VIP; off-LAN you can reach a control-plane IP but not 192.168.50.230.
+    # kubectl stderr is often "The connection to the server ... was refused" (no substring "connection refused").
     - name: Auto-fallback API server when VIP is unreachable (temp kubeconfig)
       tags: [always]
       when:
```
```diff
@@ -120,8 +121,7 @@
         - noble_k8s_api_server_override | default('') | length == 0
         - not (noble_skip_k8s_health_check | default(false) | bool)
         - (noble_k8s_health_first.rc | default(1)) != 0 or (noble_k8s_health_first.stdout | default('') | trim) != 'ok'
-        - ('network is unreachable' in (noble_k8s_health_first.stderr | default('') | lower)) or
-          ('no route to host' in (noble_k8s_health_first.stderr | default('') | lower))
+        - (((noble_k8s_health_first.stderr | default('')) ~ (noble_k8s_health_first.stdout | default(''))) | lower) is search('network is unreachable|no route to host|connection refused|was refused', multiline=False)
       block:
         - name: Ensure temp dir for kubeconfig auto-fallback
           ansible.builtin.file:
```
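The replacement condition concatenates stderr and stdout, lowercases the result, and matches one alternation. The kubectl error strings below are hypothetical examples of the messages the playbook comments describe, checked against the same pattern via `grep -E`:

```shell
# Same alternation the playbook feeds to Jinja2's `is search`.
pattern='network is unreachable|no route to host|connection refused|was refused'

err='The connection to the server 192.168.50.230:6443 was refused'
printf '%s' "$err" | tr '[:upper:]' '[:lower:]' | grep -Eq "$pattern" \
  && echo 'fallback triggers'

err='Unable to connect to the server: context deadline exceeded'
printf '%s' "$err" | tr '[:upper:]' '[:lower:]' | grep -Eq "$pattern" \
  || echo 'fallback not triggered'
```

Note the `was refused` branch: kubectl's "connection to the server ... was refused" wording does not contain the literal substring "connection refused", which is exactly why the old two-line condition missed it.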
```diff
@@ -202,6 +202,8 @@
       tags: [repos, helm]
     - role: noble_cilium
       tags: [cilium, cni]
+    - role: noble_csi_snapshot_controller
+      tags: [csi_snapshot, snapshot, storage]
     - role: noble_metrics_server
       tags: [metrics, metrics_server]
     - role: noble_longhorn
```
```diff
@@ -224,5 +226,7 @@
       tags: [kyverno_policies, policy]
     - role: noble_platform
       tags: [platform, observability, apps]
+    - role: noble_velero
+      tags: [velero, backups]
     - role: noble_landing_urls
       tags: [landing, platform, observability, apps]
```
```diff
@@ -1,12 +1,7 @@
 ---
-# Manual follow-ups after **noble.yml**: Vault init/unseal, Kubernetes auth for Vault, ESO ClusterSecretStore.
-# Run: ansible-playbook playbooks/post_deploy.yml
-- name: Noble cluster — post-install reminders
-  hosts: localhost
+# Manual follow-ups after **noble.yml**: SOPS key backup, optional Argo root Application.
+- hosts: localhost
   connection: local
   gather_facts: false
-  vars:
-    noble_repo_root: "{{ playbook_dir | dirname | dirname }}"
-    noble_kubeconfig: "{{ lookup('env', 'KUBECONFIG') | default(noble_repo_root + '/talos/kubeconfig', true) }}"
   roles:
-    - role: noble_post_deploy
+    - noble_post_deploy
```
**`ansible/playbooks/proxmox_cluster.yml`** (new file, 9 lines)

```yaml
---
- name: Proxmox cluster bootstrap/join
  hosts: proxmox_hosts
  become: true
  gather_facts: false
  serial: 1
  roles:
    - role: proxmox_cluster
      tags: [proxmox, cluster]
```
**`ansible/playbooks/proxmox_ops.yml`** (new file, 4 lines)

```yaml
---
- import_playbook: proxmox_prepare.yml
- import_playbook: proxmox_upgrade.yml
- import_playbook: proxmox_cluster.yml
```
**`ansible/playbooks/proxmox_prepare.yml`** (new file, 8 lines)

```yaml
---
- name: Proxmox host preparation (community repos + no-subscription notice)
  hosts: proxmox_hosts
  become: true
  gather_facts: true
  roles:
    - role: proxmox_baseline
      tags: [proxmox, prepare, repos, ui]
```
**`ansible/playbooks/proxmox_upgrade.yml`** (new file, 9 lines)

```yaml
---
- name: Proxmox host maintenance (upgrade to latest)
  hosts: proxmox_hosts
  become: true
  gather_facts: true
  serial: 1
  roles:
    - role: proxmox_maintenance
      tags: [proxmox, maintenance, updates]
```
**`ansible/roles/debian_baseline_hardening/defaults/main.yml`** (new file, 39 lines)

```yaml
---
# Update apt metadata only when stale (seconds)
debian_baseline_apt_cache_valid_time: 3600

# Core host hardening packages
debian_baseline_packages:
  - unattended-upgrades
  - apt-listchanges
  - fail2ban
  - needrestart
  - sudo
  - ca-certificates

# SSH hardening controls
debian_baseline_ssh_permit_root_login: "no"
debian_baseline_ssh_password_authentication: "no"
debian_baseline_ssh_pubkey_authentication: "yes"
debian_baseline_ssh_x11_forwarding: "no"
debian_baseline_ssh_max_auth_tries: 3
debian_baseline_ssh_client_alive_interval: 300
debian_baseline_ssh_client_alive_count_max: 2
debian_baseline_ssh_allow_users: []

# unattended-upgrades controls
debian_baseline_enable_unattended_upgrades: true
debian_baseline_unattended_auto_upgrade: "1"
debian_baseline_unattended_update_lists: "1"

# Kernel and network hardening sysctls
debian_baseline_sysctl_settings:
  net.ipv4.conf.all.accept_redirects: "0"
  net.ipv4.conf.default.accept_redirects: "0"
  net.ipv4.conf.all.send_redirects: "0"
  net.ipv4.conf.default.send_redirects: "0"
  net.ipv4.conf.all.log_martians: "1"
  net.ipv4.conf.default.log_martians: "1"
  net.ipv4.tcp_syncookies: "1"
  net.ipv6.conf.all.accept_redirects: "0"
  net.ipv6.conf.default.accept_redirects: "0"
```
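These defaults are meant to be overridden per group; a sketch with illustrative values that loosens the auth-try limit and extends the sysctl profile:

```yaml
# group_vars/debian_servers.yml (illustrative overrides)
debian_baseline_ssh_max_auth_tries: 5
debian_baseline_sysctl_settings:
  net.ipv4.tcp_syncookies: "1"
  kernel.kptr_restrict: "2"   # example addition; not in the role defaults
```

Note that redefining `debian_baseline_sysctl_settings` replaces the whole dict (Ansible does not deep-merge variables by default), so an override must repeat every sysctl it still wants applied.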
**`ansible/roles/debian_baseline_hardening/handlers/main.yml`** (new file, 12 lines)

```yaml
---
- name: Restart ssh
  ansible.builtin.service:
    name: ssh
    state: restarted

- name: Reload sysctl
  ansible.builtin.command:
    argv:
      - sysctl
      - --system
  changed_when: true
```
**`ansible/roles/debian_baseline_hardening/tasks/main.yml`** (new file, 52 lines)

```yaml
---
- name: Refresh apt cache
  ansible.builtin.apt:
    update_cache: true
    cache_valid_time: "{{ debian_baseline_apt_cache_valid_time }}"

- name: Install baseline hardening packages
  ansible.builtin.apt:
    name: "{{ debian_baseline_packages }}"
    state: present

- name: Configure unattended-upgrades auto settings
  ansible.builtin.copy:
    dest: /etc/apt/apt.conf.d/20auto-upgrades
    mode: "0644"
    content: |
      APT::Periodic::Update-Package-Lists "{{ debian_baseline_unattended_update_lists }}";
      APT::Periodic::Unattended-Upgrade "{{ debian_baseline_unattended_auto_upgrade }}";
  when: debian_baseline_enable_unattended_upgrades | bool

- name: Configure SSH hardening options
  ansible.builtin.copy:
    dest: /etc/ssh/sshd_config.d/99-hardening.conf
    mode: "0644"
    content: |
      PermitRootLogin {{ debian_baseline_ssh_permit_root_login }}
      PasswordAuthentication {{ debian_baseline_ssh_password_authentication }}
      PubkeyAuthentication {{ debian_baseline_ssh_pubkey_authentication }}
      X11Forwarding {{ debian_baseline_ssh_x11_forwarding }}
      MaxAuthTries {{ debian_baseline_ssh_max_auth_tries }}
      ClientAliveInterval {{ debian_baseline_ssh_client_alive_interval }}
      ClientAliveCountMax {{ debian_baseline_ssh_client_alive_count_max }}
      {% if debian_baseline_ssh_allow_users | length > 0 %}
      AllowUsers {{ debian_baseline_ssh_allow_users | join(' ') }}
      {% endif %}
  notify: Restart ssh

- name: Configure baseline sysctls
  ansible.builtin.copy:
    dest: /etc/sysctl.d/99-hardening.conf
    mode: "0644"
    content: |
      {% for key, value in debian_baseline_sysctl_settings.items() %}
      {{ key }} = {{ value }}
      {% endfor %}
  notify: Reload sysctl

- name: Ensure fail2ban service is enabled
  ansible.builtin.service:
    name: fail2ban
    enabled: true
    state: started
```
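With the role defaults (and `debian_baseline_ssh_allow_users: [admin]` as in the sample `group_vars/debian_servers.yml`), the rendered `/etc/ssh/sshd_config.d/99-hardening.conf` would read:

```
PermitRootLogin no
PasswordAuthentication no
PubkeyAuthentication yes
X11Forwarding no
MaxAuthTries 3
ClientAliveInterval 300
ClientAliveCountMax 2
AllowUsers admin
```

Because this lands in `sshd_config.d/`, it takes effect only where the main `sshd_config` includes that directory (the Debian default) and only after the `Restart ssh` handler runs.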
**`ansible/roles/debian_maintenance/defaults/main.yml`** (new file, 7 lines)

```yaml
---
debian_maintenance_apt_cache_valid_time: 3600
debian_maintenance_upgrade_type: dist
debian_maintenance_autoremove: true
debian_maintenance_autoclean: true
debian_maintenance_reboot_if_required: true
debian_maintenance_reboot_timeout: 1800
```
**`ansible/roles/debian_maintenance/tasks/main.yml`** (new file, 30 lines)

```yaml
---
- name: Refresh apt cache
  ansible.builtin.apt:
    update_cache: true
    cache_valid_time: "{{ debian_maintenance_apt_cache_valid_time }}"

- name: Upgrade Debian packages
  ansible.builtin.apt:
    upgrade: "{{ debian_maintenance_upgrade_type }}"

- name: Remove orphaned packages
  ansible.builtin.apt:
    autoremove: "{{ debian_maintenance_autoremove }}"

- name: Clean apt package cache
  ansible.builtin.apt:
    autoclean: "{{ debian_maintenance_autoclean }}"

- name: Check if reboot is required
  ansible.builtin.stat:
    path: /var/run/reboot-required
  register: debian_maintenance_reboot_required_file

- name: Reboot when required by package updates
  ansible.builtin.reboot:
    reboot_timeout: "{{ debian_maintenance_reboot_timeout }}"
    msg: "Reboot initiated by Ansible maintenance playbook"
  when:
    - debian_maintenance_reboot_if_required | bool
    - debian_maintenance_reboot_required_file.stat.exists | default(false)
```
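To patch a host without the automatic reboot (for example when you want to schedule the reboot yourself), the relevant defaults can be overridden; the flag names come from the role defaults above, and `safe` is one of the `ansible.builtin.apt` module's `upgrade` choices:

```yaml
# host_vars or -e overrides (illustrative)
debian_maintenance_reboot_if_required: false
debian_maintenance_upgrade_type: safe   # apt 'safe' upgrade instead of 'dist'
```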
**`ansible/roles/debian_ssh_key_rotation/defaults/main.yml`** (new file, 10 lines)

```yaml
---
# List of users to manage keys for.
# Example:
# debian_ssh_rotation_users:
#   - name: deploy
#     home: /home/deploy
#     state: present
#     keys:
#       - "ssh-ed25519 AAAA... deploy@laptop"
debian_ssh_rotation_users: []
```
50
ansible/roles/debian_ssh_key_rotation/tasks/main.yml
Normal file
@@ -0,0 +1,50 @@
---
- name: Validate SSH key rotation inputs
  ansible.builtin.assert:
    that:
      - item.name is defined
      - item.home is defined
      - (item.state | default('present')) in ['present', 'absent']
      - (item.state | default('present')) == 'absent' or (item['keys'] is defined and item['keys'] | length > 0)
    fail_msg: >-
      Each entry in debian_ssh_rotation_users must include name, home, and
      either state=absent or keys with at least one SSH public key.
  loop: "{{ debian_ssh_rotation_users }}"
  loop_control:
    label: "{{ item.name | default('unknown') }}"

- name: Ensure ~/.ssh exists for managed users
  ansible.builtin.file:
    path: "{{ item.home }}/.ssh"
    state: directory
    owner: "{{ item.name }}"
    group: "{{ item.name }}"
    mode: "0700"
  loop: "{{ debian_ssh_rotation_users }}"
  loop_control:
    label: "{{ item.name }}"
  when: (item.state | default('present')) == 'present'

- name: Rotate authorized_keys for managed users
  ansible.builtin.copy:
    dest: "{{ item.home }}/.ssh/authorized_keys"
    owner: "{{ item.name }}"
    group: "{{ item.name }}"
    mode: "0600"
    # item['keys'] (not item.keys): on a dict, Jinja resolves .keys to the
    # built-in dict method before the list, so bracket lookup is required.
    content: |
      {% for key in item['keys'] %}
      {{ key }}
      {% endfor %}
  loop: "{{ debian_ssh_rotation_users }}"
  loop_control:
    label: "{{ item.name }}"
  when: (item.state | default('present')) == 'present'

- name: Remove authorized_keys for users marked absent
  ansible.builtin.file:
    path: "{{ item.home }}/.ssh/authorized_keys"
    state: absent
  loop: "{{ debian_ssh_rotation_users }}"
  loop_control:
    label: "{{ item.name }}"
  when: (item.state | default('present')) == 'absent'
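A hedged sketch of how this role might be wired up from a playbook. The play name, host group, and key material below are illustrative, not taken from the repo:

```yaml
- name: Rotate SSH keys on Debian hosts   # hypothetical play
  hosts: debian
  become: true
  roles:
    - role: debian_ssh_key_rotation
      vars:
        debian_ssh_rotation_users:
          - name: deploy
            home: /home/deploy
            state: present
            keys:
              - "ssh-ed25519 AAAA... deploy@laptop"
          - name: olduser
            home: /home/olduser
            state: absent
```

Note that `state: absent` only removes the user's `authorized_keys` file; the account itself is left untouched.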
@@ -8,11 +8,9 @@ noble_helm_repos:
   - { name: fossorial, url: "https://charts.fossorial.io" }
   - { name: argo, url: "https://argoproj.github.io/argo-helm" }
   - { name: metrics-server, url: "https://kubernetes-sigs.github.io/metrics-server/" }
-  - { name: sealed-secrets, url: "https://bitnami-labs.github.io/sealed-secrets" }
-  - { name: external-secrets, url: "https://charts.external-secrets.io" }
-  - { name: hashicorp, url: "https://helm.releases.hashicorp.com" }
   - { name: prometheus-community, url: "https://prometheus-community.github.io/helm-charts" }
   - { name: grafana, url: "https://grafana.github.io/helm-charts" }
   - { name: fluent, url: "https://fluent.github.io/helm-charts" }
   - { name: headlamp, url: "https://kubernetes-sigs.github.io/headlamp/" }
   - { name: kyverno, url: "https://kyverno.github.io/kyverno/" }
+  - { name: vmware-tanzu, url: "https://vmware-tanzu.github.io/helm-charts" }

6
ansible/roles/noble_argocd/defaults/main.yml
Normal file
@@ -0,0 +1,6 @@
---
# When true, applies clusters/noble/bootstrap/argocd/root-application.yaml (app-of-apps).
# Edit spec.source.repoURL in that file if your Git remote differs.
noble_argocd_apply_root_application: false
# When true, applies clusters/noble/bootstrap/argocd/bootstrap-root-application.yaml (noble-bootstrap-root; manual sync until README §5).
noble_argocd_apply_bootstrap_root_application: true
@@ -15,6 +15,32 @@
       - -f
       - "{{ noble_repo_root }}/clusters/noble/bootstrap/argocd/values.yaml"
       - --wait
       - --timeout
       - 15m
   environment:
     KUBECONFIG: "{{ noble_kubeconfig }}"
   changed_when: true
 
+- name: Apply Argo CD root Application (app-of-apps)
+  ansible.builtin.command:
+    argv:
+      - kubectl
+      - apply
+      - -f
+      - "{{ noble_repo_root }}/clusters/noble/bootstrap/argocd/root-application.yaml"
+  environment:
+    KUBECONFIG: "{{ noble_kubeconfig }}"
+  when: noble_argocd_apply_root_application | default(false) | bool
+  changed_when: true
+
+- name: Apply Argo CD bootstrap app-of-apps Application
+  ansible.builtin.command:
+    argv:
+      - kubectl
+      - apply
+      - -f
+      - "{{ noble_repo_root }}/clusters/noble/bootstrap/argocd/bootstrap-root-application.yaml"
+  environment:
+    KUBECONFIG: "{{ noble_kubeconfig }}"
+  when: noble_argocd_apply_bootstrap_root_application | default(false) | bool
+  changed_when: true

@@ -5,7 +5,7 @@
       - kubectl
       - apply
       - -f
-      - "{{ noble_repo_root }}/clusters/noble/apps/cert-manager/namespace.yaml"
+      - "{{ noble_repo_root }}/clusters/noble/bootstrap/cert-manager/namespace.yaml"
   environment:
     KUBECONFIG: "{{ noble_kubeconfig }}"
   changed_when: true
@@ -23,7 +23,7 @@
       - --version
      - v1.20.0
       - -f
-      - "{{ noble_repo_root }}/clusters/noble/apps/cert-manager/values.yaml"
+      - "{{ noble_repo_root }}/clusters/noble/bootstrap/cert-manager/values.yaml"
       - --wait
   environment:
     KUBECONFIG: "{{ noble_kubeconfig }}"
@@ -51,7 +51,7 @@
   ansible.builtin.debug:
     msg: >-
       Secret cert-manager/cloudflare-dns-api-token not found.
-      Create it per clusters/noble/apps/cert-manager/README.md before ClusterIssuers can succeed.
+      Create it per clusters/noble/bootstrap/cert-manager/README.md before ClusterIssuers can succeed.
   when:
     - noble_cert_manager_require_cloudflare_secret | default(true) | bool
     - noble_cf_secret.rc != 0
@@ -62,7 +62,7 @@
       - kubectl
       - apply
       - -k
-      - "{{ noble_repo_root }}/clusters/noble/apps/cert-manager"
+      - "{{ noble_repo_root }}/clusters/noble/bootstrap/cert-manager"
   environment:
     KUBECONFIG: "{{ noble_kubeconfig }}"
   changed_when: true

@@ -12,7 +12,7 @@
       - --version
       - "1.16.6"
       - -f
-      - "{{ noble_repo_root }}/clusters/noble/apps/cilium/values.yaml"
+      - "{{ noble_repo_root }}/clusters/noble/bootstrap/cilium/values.yaml"
       - --wait
   environment:
     KUBECONFIG: "{{ noble_kubeconfig }}"

@@ -0,0 +1,2 @@
---
noble_csi_snapshot_kubectl_timeout: 120s
39
ansible/roles/noble_csi_snapshot_controller/tasks/main.yml
Normal file
@@ -0,0 +1,39 @@
---
# Volume Snapshot CRDs + snapshot-controller (Velero CSI / Longhorn snapshots).
- name: Apply Volume Snapshot CRDs (snapshot.storage.k8s.io)
  ansible.builtin.command:
    argv:
      - kubectl
      - apply
      - "--request-timeout={{ noble_csi_snapshot_kubectl_timeout | default('120s') }}"
      - -k
      - "{{ noble_repo_root }}/clusters/noble/bootstrap/csi-snapshot-controller/crd"
  environment:
    KUBECONFIG: "{{ noble_kubeconfig }}"
  changed_when: true

- name: Apply snapshot-controller in kube-system
  ansible.builtin.command:
    argv:
      - kubectl
      - apply
      - "--request-timeout={{ noble_csi_snapshot_kubectl_timeout | default('120s') }}"
      - -k
      - "{{ noble_repo_root }}/clusters/noble/bootstrap/csi-snapshot-controller/controller"
  environment:
    KUBECONFIG: "{{ noble_kubeconfig }}"
  changed_when: true

- name: Wait for snapshot-controller Deployment
  ansible.builtin.command:
    argv:
      - kubectl
      - -n
      - kube-system
      - rollout
      - status
      - deploy/snapshot-controller
      - --timeout=120s
  environment:
    KUBECONFIG: "{{ noble_kubeconfig }}"
  changed_when: false
@@ -5,7 +5,7 @@
       - kubectl
       - apply
       - -k
-      - "{{ noble_repo_root }}/clusters/noble/apps/kube-vip"
+      - "{{ noble_repo_root }}/clusters/noble/bootstrap/kube-vip"
   environment:
     KUBECONFIG: "{{ noble_kubeconfig }}"
   changed_when: true

@@ -5,7 +5,7 @@
       - kubectl
       - apply
       - -f
-      - "{{ noble_repo_root }}/clusters/noble/apps/kyverno/namespace.yaml"
+      - "{{ noble_repo_root }}/clusters/noble/bootstrap/kyverno/namespace.yaml"
   environment:
     KUBECONFIG: "{{ noble_kubeconfig }}"
   changed_when: true
@@ -23,7 +23,7 @@
       - --version
       - "3.7.1"
       - -f
-      - "{{ noble_repo_root }}/clusters/noble/apps/kyverno/values.yaml"
+      - "{{ noble_repo_root }}/clusters/noble/bootstrap/kyverno/values.yaml"
       - --wait
       - --timeout
       - 15m

@@ -12,7 +12,7 @@
       - --version
       - "3.7.1"
       - -f
-      - "{{ noble_repo_root }}/clusters/noble/apps/kyverno/policies-values.yaml"
+      - "{{ noble_repo_root }}/clusters/noble/bootstrap/kyverno/policies-values.yaml"
       - --wait
       - --timeout
       - 10m

@@ -39,8 +39,13 @@ noble_lab_ui_entries:
     namespace: longhorn-system
     service: longhorn-frontend
     url: https://longhorn.apps.noble.lab.pcenicni.dev
-  - name: Vault
-    description: Secrets engine UI (after init/unseal)
-    namespace: vault
-    service: vault
-    url: https://vault.apps.noble.lab.pcenicni.dev
+  - name: Velero
+    description: Cluster backups — no web UI (velero CLI / kubectl CRDs)
+    namespace: velero
+    service: velero
+    url: ""
+  - name: Homepage
+    description: App dashboard (links to lab UIs)
+    namespace: homepage
+    service: homepage
+    url: https://homepage.apps.noble.lab.pcenicni.dev

@@ -2,7 +2,7 @@
 
 > **Sensitive:** This file may include **passwords read from Kubernetes Secrets** when credential fetch ran. It is **gitignored** — do not commit or share.
 
-**DNS:** point **`*.apps.noble.lab.pcenicni.dev`** at the Traefik **LoadBalancer** (MetalLB **`192.168.50.211`** by default — see `clusters/noble/apps/traefik/values.yaml`).
+**DNS:** point **`*.apps.noble.lab.pcenicni.dev`** at the Traefik **LoadBalancer** (MetalLB **`192.168.50.211`** by default — see `clusters/noble/bootstrap/traefik/values.yaml`).
 
 **TLS:** **cert-manager** + **`letsencrypt-prod`** on each Ingress (public **DNS-01** for **`pcenicni.dev`**).

@@ -11,7 +11,7 @@ This file is **generated** by Ansible (`noble_landing_urls` role). Use it as a t
 | UI | What | Kubernetes service | Namespace | URL |
 |----|------|----------------------|-----------|-----|
 {% for e in noble_lab_ui_entries %}
-| {{ e.name }} | {{ e.description }} | `{{ e.service }}` | `{{ e.namespace }}` | [{{ e.url }}]({{ e.url }}) |
+| {{ e.name }} | {{ e.description }} | `{{ e.service }}` | `{{ e.namespace }}` | {% if e.url | default('') | length > 0 %}[{{ e.url }}]({{ e.url }}){% else %}—{% endif %} |
 {% endfor %}
 
 ## Initial access (logins)
@@ -24,7 +24,6 @@
 | **Prometheus** | — | No auth in default install (lab). |
 | **Alertmanager** | — | No auth in default install (lab). |
 | **Longhorn** | — | No default login unless you enable access control in the UI settings. |
-| **Vault** | Token | Root token is only from **`vault operator init`** (not stored in git). See `clusters/noble/apps/vault/README.md`. |
 
 ### Commands to retrieve passwords (if not filled above)

@@ -46,6 +45,7 @@ To generate this file **without** calling kubectl, run Ansible with **`-e noble_
 
 - **Argo CD** `argocd-initial-admin-secret` disappears after you change the admin password.
 - **Grafana** password is random unless you set `grafana.adminPassword` in chart values.
-- **Vault** UI needs **unsealed** Vault; tokens come from your chosen auth method.
 - **Prometheus / Alertmanager** UIs are unauthenticated by default — restrict when hardening (`talos/CLUSTER-BUILD.md` Phase G).
+- **SOPS:** cluster secrets in git under **`clusters/noble/secrets/`** are encrypted; decrypt with **`age-key.txt`** (not in git). See **`clusters/noble/secrets/README.md`**.
 - **Headlamp** token above expires after the configured duration; re-run Ansible or `kubectl create token` to refresh.
+- **Velero** has **no web UI** — use **`velero`** CLI or **`kubectl -n velero get backup,schedule,backupstoragelocation`**. Metrics: **`velero`** Service in **`velero`** (Prometheus scrape). See `clusters/noble/bootstrap/velero/README.md`.

@@ -5,7 +5,7 @@
       - kubectl
       - apply
       - -k
-      - "{{ noble_repo_root }}/clusters/noble/apps/longhorn"
+      - "{{ noble_repo_root }}/clusters/noble/bootstrap/longhorn"
   environment:
     KUBECONFIG: "{{ noble_kubeconfig }}"
   changed_when: true
@@ -22,7 +22,7 @@
       - longhorn-system
       - --create-namespace
       - -f
-      - "{{ noble_repo_root }}/clusters/noble/apps/longhorn/values.yaml"
+      - "{{ noble_repo_root }}/clusters/noble/bootstrap/longhorn/values.yaml"
       - --wait
   environment:
     KUBECONFIG: "{{ noble_kubeconfig }}"

@@ -5,7 +5,7 @@
       - kubectl
       - apply
       - -f
-      - "{{ noble_repo_root }}/clusters/noble/apps/metallb/namespace.yaml"
+      - "{{ noble_repo_root }}/clusters/noble/bootstrap/metallb/namespace.yaml"
   environment:
     KUBECONFIG: "{{ noble_kubeconfig }}"
   changed_when: true
@@ -33,7 +33,7 @@
       - kubectl
       - apply
       - -k
-      - "{{ noble_repo_root }}/clusters/noble/apps/metallb"
+      - "{{ noble_repo_root }}/clusters/noble/bootstrap/metallb"
   environment:
     KUBECONFIG: "{{ noble_kubeconfig }}"
   changed_when: true

@@ -12,7 +12,7 @@
       - --version
       - "3.13.0"
       - -f
-      - "{{ noble_repo_root }}/clusters/noble/apps/metrics-server/values.yaml"
+      - "{{ noble_repo_root }}/clusters/noble/bootstrap/metrics-server/values.yaml"
       - --wait
   environment:
     KUBECONFIG: "{{ noble_kubeconfig }}"

@@ -10,7 +10,7 @@
       - kubectl
       - apply
       - -f
-      - "{{ noble_repo_root }}/clusters/noble/apps/newt/namespace.yaml"
+      - "{{ noble_repo_root }}/clusters/noble/bootstrap/newt/namespace.yaml"
   environment:
     KUBECONFIG: "{{ noble_kubeconfig }}"
   when: noble_newt_install | bool
@@ -33,7 +33,7 @@
       - --version
       - "1.2.0"
       - -f
-      - "{{ noble_repo_root }}/clusters/noble/apps/newt/values.yaml"
+      - "{{ noble_repo_root }}/clusters/noble/bootstrap/newt/values.yaml"
       - --wait
   environment:
     KUBECONFIG: "{{ noble_kubeconfig }}"

@@ -4,5 +4,6 @@ noble_platform_kubectl_request_timeout: 120s
 noble_platform_kustomize_retries: 5
 noble_platform_kustomize_delay: 20
 
-# Vault: injector (vault-k8s) owns MutatingWebhookConfiguration.caBundle; Helm upgrade can SSA-conflict. Delete webhook so Helm can recreate it.
-noble_vault_delete_injector_webhook_before_helm: true
+# Decrypt **clusters/noble/secrets/*.yaml** with SOPS and kubectl apply (requires **sops**, **age**, and **age-key.txt**).
+noble_apply_sops_secrets: true
+noble_sops_age_key_file: "{{ noble_repo_root }}/age-key.txt"

@@ -1,13 +1,13 @@
 ---
-# Mirrors former **noble-platform** Argo Application: Helm releases + plain manifests under clusters/noble/apps.
-- name: Apply clusters/noble/apps kustomize (namespaces, Grafana Loki datasource, Vault extras)
+# Mirrors former **noble-platform** Argo Application: Helm releases + plain manifests under clusters/noble/bootstrap.
+- name: Apply clusters/noble/bootstrap kustomize (namespaces, Grafana Loki datasource)
   ansible.builtin.command:
     argv:
       - kubectl
       - apply
       - "--request-timeout={{ noble_platform_kubectl_request_timeout }}"
       - -k
-      - "{{ noble_repo_root }}/clusters/noble/apps"
+      - "{{ noble_repo_root }}/clusters/noble/bootstrap"
   environment:
     KUBECONFIG: "{{ noble_kubeconfig }}"
   register: noble_platform_kustomize
@@ -16,77 +16,26 @@
   until: noble_platform_kustomize.rc == 0
   changed_when: true
 
-- name: Install Sealed Secrets
-  ansible.builtin.command:
-    argv:
-      - helm
-      - upgrade
-      - --install
-      - sealed-secrets
-      - sealed-secrets/sealed-secrets
-      - --namespace
-      - sealed-secrets
-      - --version
-      - "2.18.4"
-      - -f
-      - "{{ noble_repo_root }}/clusters/noble/apps/sealed-secrets/values.yaml"
-      - --wait
-  environment:
-    KUBECONFIG: "{{ noble_kubeconfig }}"
-  changed_when: true
-
-- name: Install External Secrets Operator
-  ansible.builtin.command:
-    argv:
-      - helm
-      - upgrade
-      - --install
-      - external-secrets
-      - external-secrets/external-secrets
-      - --namespace
-      - external-secrets
-      - --version
-      - "2.2.0"
-      - -f
-      - "{{ noble_repo_root }}/clusters/noble/apps/external-secrets/values.yaml"
-      - --wait
-  environment:
-    KUBECONFIG: "{{ noble_kubeconfig }}"
-  changed_when: true
-
-# vault-k8s patches webhook CA after install; Helm 3/4 SSA then conflicts on upgrade. Removing the MWC lets Helm re-apply cleanly; injector repopulates caBundle.
-- name: Delete Vault agent injector MutatingWebhookConfiguration before Helm (avoids caBundle field conflict)
-  ansible.builtin.command:
-    argv:
-      - kubectl
-      - delete
-      - mutatingwebhookconfiguration
-      - vault-agent-injector-cfg
-      - --ignore-not-found
-  environment:
-    KUBECONFIG: "{{ noble_kubeconfig }}"
-  register: noble_vault_mwc_delete
-  when: noble_vault_delete_injector_webhook_before_helm | default(true) | bool
-  changed_when: "'deleted' in (noble_vault_mwc_delete.stdout | default(''))"
-
-- name: Install Vault
-  ansible.builtin.command:
-    argv:
-      - helm
-      - upgrade
-      - --install
-      - vault
-      - hashicorp/vault
-      - --namespace
-      - vault
-      - --version
-      - "0.32.0"
-      - -f
-      - "{{ noble_repo_root }}/clusters/noble/apps/vault/values.yaml"
-      - --wait
-  environment:
-    KUBECONFIG: "{{ noble_kubeconfig }}"
-    HELM_SERVER_SIDE_APPLY: "false"
-  changed_when: true
+- name: Stat SOPS age private key (age-key.txt)
+  ansible.builtin.stat:
+    path: "{{ noble_sops_age_key_file }}"
+  register: noble_sops_age_key_stat
+
+- name: Apply SOPS-encrypted cluster secrets (clusters/noble/secrets/*.yaml)
+  ansible.builtin.shell: |
+    set -euo pipefail
+    shopt -s nullglob
+    for f in "{{ noble_repo_root }}/clusters/noble/secrets"/*.yaml; do
+      sops -d "$f" | kubectl apply -f -
+    done
+  args:
+    executable: /bin/bash
+  environment:
+    KUBECONFIG: "{{ noble_kubeconfig }}"
+    SOPS_AGE_KEY_FILE: "{{ noble_sops_age_key_file }}"
+  when:
+    - noble_apply_sops_secrets | default(true) | bool
+    - noble_sops_age_key_stat.stat.exists
+  changed_when: true
 
 - name: Install kube-prometheus-stack
@@ -102,7 +51,7 @@
       - --version
       - "82.15.1"
       - -f
-      - "{{ noble_repo_root }}/clusters/noble/apps/kube-prometheus-stack/values.yaml"
+      - "{{ noble_repo_root }}/clusters/noble/bootstrap/kube-prometheus-stack/values.yaml"
       - --wait
       - --timeout
       - 30m
@@ -123,7 +72,7 @@
       - --version
       - "6.55.0"
       - -f
-      - "{{ noble_repo_root }}/clusters/noble/apps/loki/values.yaml"
+      - "{{ noble_repo_root }}/clusters/noble/bootstrap/loki/values.yaml"
       - --wait
   environment:
     KUBECONFIG: "{{ noble_kubeconfig }}"
@@ -142,7 +91,7 @@
       - --version
       - "0.56.0"
       - -f
-      - "{{ noble_repo_root }}/clusters/noble/apps/fluent-bit/values.yaml"
+      - "{{ noble_repo_root }}/clusters/noble/bootstrap/fluent-bit/values.yaml"
       - --wait
   environment:
     KUBECONFIG: "{{ noble_kubeconfig }}"
@@ -161,7 +110,7 @@
       - -n
       - headlamp
       - -f
-      - "{{ noble_repo_root }}/clusters/noble/apps/headlamp/values.yaml"
+      - "{{ noble_repo_root }}/clusters/noble/bootstrap/headlamp/values.yaml"
       - --wait
   environment:
     KUBECONFIG: "{{ noble_kubeconfig }}"

@@ -1,27 +1,15 @@
 ---
-- name: Vault — manual steps (not automated)
+- name: SOPS secrets (workstation)
   ansible.builtin.debug:
     msg: |
-      1. kubectl -n vault get pods (wait for Running)
-      2. kubectl -n vault exec -it vault-0 -- vault operator init (once; save keys)
-      3. Unseal per clusters/noble/apps/vault/README.md
-      4. ./clusters/noble/apps/vault/configure-kubernetes-auth.sh
-      5. kubectl apply -f clusters/noble/apps/external-secrets/examples/vault-cluster-secret-store.yaml
-
-- name: Optional — apply Vault ClusterSecretStore for External Secrets
-  ansible.builtin.command:
-    argv:
-      - kubectl
-      - apply
-      - -f
-      - "{{ noble_repo_root }}/clusters/noble/apps/external-secrets/examples/vault-cluster-secret-store.yaml"
-  environment:
-    KUBECONFIG: "{{ noble_kubeconfig }}"
-  when: noble_apply_vault_cluster_secret_store | default(false) | bool
-  changed_when: true
+      Encrypted Kubernetes Secrets live under clusters/noble/secrets/ (Mozilla SOPS + age).
+      Private key: age-key.txt at repo root (gitignored). See clusters/noble/secrets/README.md
+      and .sops.yaml. noble.yml decrypt-applies these when age-key.txt exists.
 
 - name: Argo CD optional root Application (empty app-of-apps)
   ansible.builtin.debug:
     msg: >-
-      Optional: kubectl apply -f clusters/noble/bootstrap/argocd/root-application.yaml
-      after editing repoURL. Core workloads are not synced by Argo — see bootstrap/argocd/apps/README.md
+      App-of-apps: noble.yml applies root-application.yaml when noble_argocd_apply_root_application is true;
+      bootstrap-root-application.yaml when noble_argocd_apply_bootstrap_root_application is true (group_vars/all.yml).
+      noble-bootstrap-root uses manual sync until you enable automation after the playbook —
+      clusters/noble/bootstrap/argocd/README.md §5. See clusters/noble/apps/README.md and that README.

@@ -5,7 +5,7 @@
       - kubectl
       - apply
       - -f
-      - "{{ noble_repo_root }}/clusters/noble/apps/traefik/namespace.yaml"
+      - "{{ noble_repo_root }}/clusters/noble/bootstrap/traefik/namespace.yaml"
   environment:
     KUBECONFIG: "{{ noble_kubeconfig }}"
   changed_when: true
@@ -23,7 +23,7 @@
       - --version
       - "39.0.6"
       - -f
-      - "{{ noble_repo_root }}/clusters/noble/apps/traefik/values.yaml"
+      - "{{ noble_repo_root }}/clusters/noble/bootstrap/traefik/values.yaml"
       - --wait
   environment:
     KUBECONFIG: "{{ noble_kubeconfig }}"

13
ansible/roles/noble_velero/defaults/main.yml
Normal file
@@ -0,0 +1,13 @@
---
# **noble_velero_install** is in **ansible/group_vars/all.yml**. Override S3 fields via extra-vars or group_vars.
noble_velero_chart_version: "12.0.0"

noble_velero_s3_bucket: ""
noble_velero_s3_url: ""
noble_velero_s3_region: "us-east-1"
noble_velero_s3_force_path_style: "true"
noble_velero_s3_prefix: ""

# Optional — if unset, Ansible expects Secret **velero/velero-cloud-credentials** (key **cloud**) to exist.
noble_velero_aws_access_key_id: ""
noble_velero_aws_secret_access_key: ""
68
ansible/roles/noble_velero/tasks/from_env.yml
Normal file
@@ -0,0 +1,68 @@
---
# See repository **.env.sample** — copy to **.env** (gitignored).
- name: Stat repository .env for Velero
  ansible.builtin.stat:
    path: "{{ noble_repo_root }}/.env"
  register: noble_deploy_env_file
  changed_when: false

- name: Load NOBLE_VELERO_S3_BUCKET from .env when unset
  ansible.builtin.shell: |
    set -a
    . "{{ noble_repo_root }}/.env"
    set +a
    echo "${NOBLE_VELERO_S3_BUCKET:-}"
  register: noble_velero_s3_bucket_from_env
  when:
    - noble_deploy_env_file.stat.exists | default(false)
    - noble_velero_s3_bucket | default('') | length == 0
  changed_when: false

- name: Apply NOBLE_VELERO_S3_BUCKET from .env
  ansible.builtin.set_fact:
    noble_velero_s3_bucket: "{{ noble_velero_s3_bucket_from_env.stdout | trim }}"
  when:
    - noble_velero_s3_bucket_from_env is defined
    - (noble_velero_s3_bucket_from_env.stdout | default('') | trim | length) > 0

- name: Load NOBLE_VELERO_S3_URL from .env when unset
  ansible.builtin.shell: |
    set -a
    . "{{ noble_repo_root }}/.env"
    set +a
    echo "${NOBLE_VELERO_S3_URL:-}"
  register: noble_velero_s3_url_from_env
  when:
    - noble_deploy_env_file.stat.exists | default(false)
    - noble_velero_s3_url | default('') | length == 0
  changed_when: false

- name: Apply NOBLE_VELERO_S3_URL from .env
  ansible.builtin.set_fact:
    noble_velero_s3_url: "{{ noble_velero_s3_url_from_env.stdout | trim }}"
  when:
    - noble_velero_s3_url_from_env is defined
    - (noble_velero_s3_url_from_env.stdout | default('') | trim | length) > 0

- name: Create velero-cloud-credentials from .env when keys present
  ansible.builtin.shell: |
    set -euo pipefail
    set -a
    . "{{ noble_repo_root }}/.env"
    set +a
    if [ -z "${NOBLE_VELERO_AWS_ACCESS_KEY_ID:-}" ] || [ -z "${NOBLE_VELERO_AWS_SECRET_ACCESS_KEY:-}" ]; then
      echo SKIP
      exit 0
    fi
    CLOUD="$(printf '[default]\naws_access_key_id=%s\naws_secret_access_key=%s\n' \
      "${NOBLE_VELERO_AWS_ACCESS_KEY_ID}" "${NOBLE_VELERO_AWS_SECRET_ACCESS_KEY}")"
    kubectl -n velero create secret generic velero-cloud-credentials \
      --from-literal=cloud="${CLOUD}" \
      --dry-run=client -o yaml | kubectl apply -f -
    echo APPLIED
  environment:
    KUBECONFIG: "{{ noble_kubeconfig }}"
  when: noble_deploy_env_file.stat.exists | default(false)
  no_log: true
  register: noble_velero_secret_from_env
  changed_when: "'APPLIED' in (noble_velero_secret_from_env.stdout | default(''))"
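The from_env tasks above lean on two shell idioms worth seeing in isolation: `set -a` ("allexport") makes every variable assigned while sourcing `.env` an exported environment variable, and `printf` renders the AWS-style credentials file Velero expects under the `cloud` key. A standalone sketch (paths and key values here are made up, not from the repo):

```shell
# Write a throwaway .env with dummy credentials (illustrative values only).
cat > /tmp/demo.env <<'EOF'
NOBLE_VELERO_AWS_ACCESS_KEY_ID=AKIAEXAMPLE
NOBLE_VELERO_AWS_SECRET_ACCESS_KEY=s3cr3t
EOF

# allexport on: every KEY=value assigned while sourcing becomes exported.
set -a
. /tmp/demo.env
set +a

# Render the INI-style credentials file exactly as the task's printf does.
printf '[default]\naws_access_key_id=%s\naws_secret_access_key=%s\n' \
  "${NOBLE_VELERO_AWS_ACCESS_KEY_ID}" "${NOBLE_VELERO_AWS_SECRET_ACCESS_KEY}" \
  > /tmp/cloud
cat /tmp/cloud
```

This prints a three-line `[default]` profile; in the real task that string is fed to `kubectl create secret ... --from-literal=cloud=...` instead of a file.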
85
ansible/roles/noble_velero/tasks/main.yml
Normal file
@@ -0,0 +1,85 @@
---
# Velero — S3 backup target + built-in CSI snapshots (Longhorn: label VolumeSnapshotClass per README).
- name: Apply velero namespace
  ansible.builtin.command:
    argv:
      - kubectl
      - apply
      - -f
      - "{{ noble_repo_root }}/clusters/noble/bootstrap/velero/namespace.yaml"
  environment:
    KUBECONFIG: "{{ noble_kubeconfig }}"
  when: noble_velero_install | default(false) | bool
  changed_when: true

- name: Include Velero settings from repository .env (S3 bucket, URL, credentials)
  ansible.builtin.include_tasks: from_env.yml
  when: noble_velero_install | default(false) | bool

- name: Require S3 bucket and endpoint for Velero
  ansible.builtin.assert:
    that:
      - noble_velero_s3_bucket | default('') | length > 0
      - noble_velero_s3_url | default('') | length > 0
    fail_msg: >-
      Set NOBLE_VELERO_S3_BUCKET and NOBLE_VELERO_S3_URL in .env, or noble_velero_s3_bucket / noble_velero_s3_url
      (e.g. -e ...), or group_vars when noble_velero_install is true.
  when: noble_velero_install | default(false) | bool

- name: Create velero-cloud-credentials from Ansible vars
  ansible.builtin.shell: |
    set -euo pipefail
    CLOUD="$(printf '[default]\naws_access_key_id=%s\naws_secret_access_key=%s\n' \
      "${AWS_ACCESS_KEY_ID}" "${AWS_SECRET_ACCESS_KEY}")"
    kubectl -n velero create secret generic velero-cloud-credentials \
      --from-literal=cloud="${CLOUD}" \
      --dry-run=client -o yaml | kubectl apply -f -
  environment:
    KUBECONFIG: "{{ noble_kubeconfig }}"
    AWS_ACCESS_KEY_ID: "{{ noble_velero_aws_access_key_id }}"
    AWS_SECRET_ACCESS_KEY: "{{ noble_velero_aws_secret_access_key }}"
  when:
    - noble_velero_install | default(false) | bool
    - noble_velero_aws_access_key_id | default('') | length > 0
    - noble_velero_aws_secret_access_key | default('') | length > 0
  no_log: true
  changed_when: true

- name: Check velero-cloud-credentials Secret
  ansible.builtin.command:
    argv:
      - kubectl
      - -n
      - velero
      - get
      - secret
      - velero-cloud-credentials
  environment:
    KUBECONFIG: "{{ noble_kubeconfig }}"
  register: noble_velero_secret_check
  failed_when: false
  changed_when: false
  when: noble_velero_install | default(false) | bool

- name: Require velero-cloud-credentials before Helm
  ansible.builtin.assert:
    that:
      - noble_velero_secret_check.rc == 0
    fail_msg: >-
      Velero needs Secret velero/velero-cloud-credentials (key cloud). Set NOBLE_VELERO_AWS_ACCESS_KEY_ID and
      NOBLE_VELERO_AWS_SECRET_ACCESS_KEY in .env, or noble_velero_aws_* extra-vars, or create the Secret manually
      (see clusters/noble/bootstrap/velero/README.md).
  when: noble_velero_install | default(false) | bool

- name: Optional object prefix argv for Helm
  ansible.builtin.set_fact:
    noble_velero_helm_prefix_argv: "{{ ['--set-string', 'configuration.backupStorageLocation[0].prefix=' ~ (noble_velero_s3_prefix | default(''))] if (noble_velero_s3_prefix | default('') | length > 0) else [] }}"
  when: noble_velero_install | default(false) | bool

- name: Install Velero
  ansible.builtin.command:
    argv: "{{ ['helm', 'upgrade', '--install', 'velero', 'vmware-tanzu/velero', '--namespace', 'velero', '--version', noble_velero_chart_version, '-f', noble_repo_root ~ '/clusters/noble/bootstrap/velero/values.yaml', '--set-string', 'configuration.backupStorageLocation[0].bucket=' ~ noble_velero_s3_bucket, '--set-string', 'configuration.backupStorageLocation[0].config.s3Url=' ~ noble_velero_s3_url, '--set-string', 'configuration.backupStorageLocation[0].config.region=' ~ noble_velero_s3_region, '--set-string', 'configuration.backupStorageLocation[0].config.s3ForcePathStyle=' ~ noble_velero_s3_force_path_style] + (noble_velero_helm_prefix_argv | default([])) + ['--wait'] }}"
  environment:
    KUBECONFIG: "{{ noble_kubeconfig }}"
  when: noble_velero_install | default(false) | bool
  changed_when: true
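The "Optional object prefix argv for Helm" pattern above (a set_fact with a Jinja ternary, concatenated into the final argv) is easy to miss. A plain-shell translation of the same idea, with a hypothetical function and values, showing that the optional `--set-string` pair is appended only when the prefix is non-empty:

```shell
# Hypothetical helper mirroring the role's conditional argv assembly:
# optional flags are added only when the prefix is set, so the Helm
# command line never carries an empty --set-string.
velero_helm_args() {
  prefix="$1"
  args="helm upgrade --install velero vmware-tanzu/velero --namespace velero"
  if [ -n "$prefix" ]; then
    args="$args --set-string configuration.backupStorageLocation[0].prefix=$prefix"
  fi
  printf '%s\n' "$args --wait"
}

velero_helm_args ""       # no prefix flags emitted
velero_helm_args "noble"  # prefix flags included
```

The same shape works for any optional Helm flag: build the base argv list, append conditionally, terminate with `--wait`.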
14
ansible/roles/proxmox_baseline/defaults/main.yml
Normal file
@@ -0,0 +1,14 @@
---
proxmox_repo_debian_codename: "{{ ansible_facts['distribution_release'] | default('bookworm') }}"
proxmox_repo_disable_enterprise: true
proxmox_repo_disable_ceph_enterprise: true
proxmox_repo_enable_pve_no_subscription: true
proxmox_repo_enable_ceph_no_subscription: false

proxmox_no_subscription_notice_disable: true
proxmox_widget_toolkit_file: /usr/share/javascript/proxmox-widget-toolkit/proxmoxlib.js

# Bootstrap root SSH keys from the control machine so subsequent runs can use key auth.
proxmox_root_authorized_key_files:
  - "{{ lookup('env', 'HOME') }}/.ssh/id_ed25519.pub"
  - "{{ lookup('env', 'HOME') }}/.ssh/ansible.pub"
5
ansible/roles/proxmox_baseline/handlers/main.yml
Normal file
@@ -0,0 +1,5 @@
---
- name: Restart pveproxy
  ansible.builtin.service:
    name: pveproxy
    state: restarted
100
ansible/roles/proxmox_baseline/tasks/main.yml
Normal file
@@ -0,0 +1,100 @@
---
- name: Check configured local public key files
  ansible.builtin.stat:
    path: "{{ item }}"
  register: proxmox_root_pubkey_stats
  loop: "{{ proxmox_root_authorized_key_files }}"
  delegate_to: localhost
  become: false

- name: Fail when a configured local public key file is missing
  ansible.builtin.fail:
    msg: "Configured key file does not exist on the control host: {{ item.item }}"
  when: not item.stat.exists
  loop: "{{ proxmox_root_pubkey_stats.results }}"
  delegate_to: localhost
  become: false

- name: Ensure root authorized_keys contains configured public keys
  ansible.posix.authorized_key:
    user: root
    state: present
    key: "{{ lookup('ansible.builtin.file', item) }}"
    manage_dir: true
  loop: "{{ proxmox_root_authorized_key_files }}"

- name: Remove enterprise repository lines from /etc/apt/sources.list
  ansible.builtin.lineinfile:
    path: /etc/apt/sources.list
    regexp: ".*enterprise\\.proxmox\\.com.*"
    state: absent
  when:
    - proxmox_repo_disable_enterprise | bool or proxmox_repo_disable_ceph_enterprise | bool
  failed_when: false

- name: Find apt source files that contain Proxmox enterprise repositories
  ansible.builtin.find:
    paths: /etc/apt/sources.list.d
    file_type: file
    patterns:
      - "*.list"
      - "*.sources"
    contains: "enterprise\\.proxmox\\.com"
    use_regex: true
  register: proxmox_enterprise_repo_files
  when:
    - proxmox_repo_disable_enterprise | bool or proxmox_repo_disable_ceph_enterprise | bool

- name: Remove enterprise repository lines from apt source files
  ansible.builtin.lineinfile:
    path: "{{ item.path }}"
    regexp: ".*enterprise\\.proxmox\\.com.*"
    state: absent
  loop: "{{ proxmox_enterprise_repo_files.files | default([]) }}"
  when:
    - proxmox_repo_disable_enterprise | bool or proxmox_repo_disable_ceph_enterprise | bool

- name: Find apt source files that already contain pve-no-subscription
  ansible.builtin.find:
    paths: /etc/apt/sources.list.d
    file_type: file
    patterns:
      - "*.list"
      - "*.sources"
    contains: "pve-no-subscription"
    use_regex: false
  register: proxmox_no_sub_repo_files
  when: proxmox_repo_enable_pve_no_subscription | bool

- name: Ensure Proxmox no-subscription repository is configured when absent
  ansible.builtin.copy:
    dest: /etc/apt/sources.list.d/pve-no-subscription.list
    content: "deb http://download.proxmox.com/debian/pve {{ proxmox_repo_debian_codename }} pve-no-subscription\n"
    mode: "0644"
  when:
    - proxmox_repo_enable_pve_no_subscription | bool
    - (proxmox_no_sub_repo_files.matched | default(0) | int) == 0

- name: Remove duplicate pve-no-subscription.list when another source already provides it
  ansible.builtin.file:
    path: /etc/apt/sources.list.d/pve-no-subscription.list
    state: absent
  when:
    - proxmox_repo_enable_pve_no_subscription | bool
    - (proxmox_no_sub_repo_files.files | default([]) | map(attribute='path') | list | select('ne', '/etc/apt/sources.list.d/pve-no-subscription.list') | list | length) > 0

- name: Ensure Ceph no-subscription repository is configured
  ansible.builtin.copy:
    dest: /etc/apt/sources.list.d/ceph-no-subscription.list
    content: "deb http://download.proxmox.com/debian/ceph-{{ proxmox_repo_debian_codename }} {{ proxmox_repo_debian_codename }} no-subscription\n"
    mode: "0644"
  when: proxmox_repo_enable_ceph_no_subscription | bool

- name: Disable no-subscription pop-up in Proxmox UI
  ansible.builtin.replace:
    path: "{{ proxmox_widget_toolkit_file }}"
    regexp: "if \\(data\\.status !== 'Active'\\)"
    replace: "if (false)"
    backup: true
  when: proxmox_no_subscription_notice_disable | bool
  notify: Restart pveproxy
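Assuming an inventory group named `proxmox_hosts` (the same group name the cluster role's default master lookup uses), the role could be wired into a play along these lines. This is a sketch, not a file from this change; the play and file names are invented:

```yaml
# site-proxmox.yml — hypothetical wrapper play; group name is an assumption.
- name: Baseline Proxmox nodes
  hosts: proxmox_hosts
  become: true
  roles:
    - proxmox_baseline
```

Running it twice should report no changes: every task above is either idempotent by module semantics (`lineinfile`, `copy`, `authorized_key`) or guarded by a prior `find`/`stat` check.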
7
ansible/roles/proxmox_cluster/defaults/main.yml
Normal file
@@ -0,0 +1,7 @@
---
proxmox_cluster_enabled: true
proxmox_cluster_name: pve-cluster
proxmox_cluster_master: ""
proxmox_cluster_master_ip: ""
proxmox_cluster_force: false
proxmox_cluster_master_root_password: ""
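A hypothetical `group_vars/proxmox_hosts.yml` overriding these defaults might look like the following; the hostname and address are invented:

```yaml
# group_vars/proxmox_hosts.yml — illustrative values only.
proxmox_cluster_name: pve-cluster
proxmox_cluster_master: pve1            # must be a host in the proxmox_hosts group
proxmox_cluster_master_ip: 192.168.1.10
# Leave empty to join via key-based SSH trust instead of the expect/password path.
proxmox_cluster_master_root_password: ""
```

When `proxmox_cluster_master` is left empty, the tasks below fall back to the first host in `groups['proxmox_hosts']`.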
63
ansible/roles/proxmox_cluster/tasks/main.yml
Normal file
@@ -0,0 +1,63 @@
---
- name: Skip cluster role when disabled
  ansible.builtin.meta: end_host
  when: not (proxmox_cluster_enabled | bool)

- name: Check whether corosync cluster config exists
  ansible.builtin.stat:
    path: /etc/pve/corosync.conf
  register: proxmox_cluster_corosync_conf

- name: Set effective Proxmox cluster master
  ansible.builtin.set_fact:
    proxmox_cluster_master_effective: "{{ proxmox_cluster_master | default(groups['proxmox_hosts'][0], true) }}"

- name: Set effective Proxmox cluster master IP
  ansible.builtin.set_fact:
    proxmox_cluster_master_ip_effective: >-
      {{
        proxmox_cluster_master_ip
        | default(hostvars[proxmox_cluster_master_effective].ansible_host
                  | default(proxmox_cluster_master_effective), true)
      }}

- name: Create cluster on designated master
  ansible.builtin.command:
    cmd: "pvecm create {{ proxmox_cluster_name }}"
  when:
    - inventory_hostname == proxmox_cluster_master_effective
    - not proxmox_cluster_corosync_conf.stat.exists

- name: Ensure python3-pexpect is installed for password-based cluster join
  ansible.builtin.apt:
    name: python3-pexpect
    state: present
    update_cache: true
  when:
    - inventory_hostname != proxmox_cluster_master_effective
    - not proxmox_cluster_corosync_conf.stat.exists
    - proxmox_cluster_master_root_password | length > 0

- name: Join node to existing cluster (password provided)
  ansible.builtin.expect:
    command: >-
      pvecm add {{ proxmox_cluster_master_ip_effective }}
      {% if proxmox_cluster_force | bool %}--force{% endif %}
    responses:
      "Please enter superuser \\(root\\) password for '.*':": "{{ proxmox_cluster_master_root_password }}"
      "password:": "{{ proxmox_cluster_master_root_password }}"
  no_log: true
  when:
    - inventory_hostname != proxmox_cluster_master_effective
    - not proxmox_cluster_corosync_conf.stat.exists
    - proxmox_cluster_master_root_password | length > 0

- name: Join node to existing cluster (SSH trust/no prompt)
  ansible.builtin.command:
    cmd: >-
      pvecm add {{ proxmox_cluster_master_ip_effective }}
      {% if proxmox_cluster_force | bool %}--force{% endif %}
  when:
    - inventory_hostname != proxmox_cluster_master_effective
    - not proxmox_cluster_corosync_conf.stat.exists
    - proxmox_cluster_master_root_password | length == 0
6
ansible/roles/proxmox_maintenance/defaults/main.yml
Normal file
@@ -0,0 +1,6 @@
---
proxmox_upgrade_apt_cache_valid_time: 3600
proxmox_upgrade_autoremove: true
proxmox_upgrade_autoclean: true
proxmox_upgrade_reboot_if_required: true
proxmox_upgrade_reboot_timeout: 1800
30
ansible/roles/proxmox_maintenance/tasks/main.yml
Normal file
@@ -0,0 +1,30 @@
---
- name: Refresh apt cache
  ansible.builtin.apt:
    update_cache: true
    cache_valid_time: "{{ proxmox_upgrade_apt_cache_valid_time }}"

- name: Upgrade Proxmox host packages
  ansible.builtin.apt:
    upgrade: dist

- name: Remove orphaned packages
  ansible.builtin.apt:
    autoremove: "{{ proxmox_upgrade_autoremove }}"

- name: Clean apt package cache
  ansible.builtin.apt:
    autoclean: "{{ proxmox_upgrade_autoclean }}"

- name: Check if reboot is required
  ansible.builtin.stat:
    path: /var/run/reboot-required
  register: proxmox_reboot_required_file

- name: Reboot when required by package upgrades
  ansible.builtin.reboot:
    reboot_timeout: "{{ proxmox_upgrade_reboot_timeout }}"
    msg: "Reboot initiated by Ansible Proxmox maintenance playbook"
  when:
    - proxmox_upgrade_reboot_if_required | bool
    - proxmox_reboot_required_file.stat.exists | default(false)
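Since this role can reboot hosts, a wrapper play would typically run it one node at a time so corosync quorum survives each reboot. A sketch, assuming the same `proxmox_hosts` inventory group (the play and file names are invented):

```yaml
# maintenance.yml — hypothetical wrapper play; group name is an assumption.
- name: Patch Proxmox hosts one at a time
  hosts: proxmox_hosts
  become: true
  serial: 1   # never reboot more than one cluster node at once
  roles:
    - proxmox_maintenance
```

`serial: 1` also means a failed upgrade on one node halts the run before touching the rest.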
BIN
branding/nikflix/logo.png
Normal file
Binary file not shown.
After Width: | Height: | Size: 277 KiB
7
clusters/noble/apps/README.md
Normal file
@@ -0,0 +1,7 @@
# Argo CD — optional applications (non-bootstrap)

**Base cluster configuration** (CNI, MetalLB, ingress, cert-manager, storage, observability stack, policy, SOPS secrets path, etc.) is installed by **`ansible/playbooks/noble.yml`** from **`clusters/noble/bootstrap/`** — not from here.

**`noble-root`** (`clusters/noble/bootstrap/argocd/root-application.yaml`) points at **`clusters/noble/apps`**. Add **`Application`** manifests (and optional **`AppProject`** definitions) under this directory only for workloads that are additive and do not replace the core platform.

Bootstrap kustomize (namespaces, static YAML, leaf **`Application`**s) lives in **`clusters/noble/bootstrap/`** and is tracked by **`noble-bootstrap-root`** — enable automated sync for that app only after **`noble.yml`** completes (**`clusters/noble/bootstrap/argocd/README.md`** §5). Put Helm **`Application`** migrations under **`clusters/noble/bootstrap/argocd/app-of-apps/`**.
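An additive workload added under this directory would be a plain Argo CD `Application` manifest. A minimal sketch, assuming a single-source Helm chart (the app name, chart repo, and namespace below are placeholders, not anything in this repo):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: example-app              # placeholder name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://charts.example.com   # placeholder Helm repo
    chart: example-app
    targetRevision: 1.0.0
  destination:
    server: https://kubernetes.default.svc
    namespace: example-app
  syncPolicy:
    syncOptions:
      - CreateNamespace=true
```

Charts whose values live in this git repo need the multi-source `$values` pattern instead, as the homepage `Application` in this change demonstrates.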
@@ -1,60 +0,0 @@
# External Secrets Operator (noble)

Syncs secrets from external systems into Kubernetes **Secret** objects via **ExternalSecret** / **ClusterExternalSecret** CRDs.

- **Chart:** `external-secrets/external-secrets` **2.2.0** (app **v2.2.0**)
- **Namespace:** `external-secrets`
- **Helm release name:** `external-secrets` (matches the operator **ServiceAccount** name `external-secrets`)

## Install

```bash
helm repo add external-secrets https://charts.external-secrets.io
helm repo update
kubectl apply -f clusters/noble/apps/external-secrets/namespace.yaml
helm upgrade --install external-secrets external-secrets/external-secrets -n external-secrets \
  --version 2.2.0 -f clusters/noble/apps/external-secrets/values.yaml --wait
```

Verify:

```bash
kubectl -n external-secrets get deploy,pods
kubectl get crd | grep external-secrets
```

## Vault `ClusterSecretStore` (after Vault is deployed)

The checklist expects a **Vault**-backed store. Install Vault first (`talos/CLUSTER-BUILD.md` Phase E — Vault on Longhorn + auto-unseal), then:

1. Enable the **KV v2** secrets engine and **Kubernetes** auth in Vault; create a **role** (e.g. `external-secrets`) that maps the cluster’s **`external-secrets` / `external-secrets`** service account to a policy that can read the paths you need.
2. Copy **`examples/vault-cluster-secret-store.yaml`** and set **`spec.provider.vault.server`** to your Vault URL. This repo’s Vault Helm values use **HTTP** on port **8200** (`global.tlsDisable: true`): **`http://vault.vault.svc.cluster.local:8200`**. Use **`https://`** if you enable TLS on the Vault listener.
3. If Vault uses a **private TLS CA**, configure **`caProvider`** or **`caBundle`** on the Vault provider — see [HashiCorp Vault provider](https://external-secrets.io/latest/provider/hashicorp-vault/). Do not commit private CA material to public git unless intended.
4. Apply: **`kubectl apply -f …/vault-cluster-secret-store.yaml`**
5. Confirm the store is ready: **`kubectl describe clustersecretstore vault`**

Example **ExternalSecret** (after the store is healthy):

```yaml
apiVersion: external-secrets.io/v1
kind: ExternalSecret
metadata:
  name: demo
  namespace: default
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: vault
    kind: ClusterSecretStore
  target:
    name: demo-synced
  data:
    - secretKey: password
      remoteRef:
        key: secret/data/myapp
        property: password
```

## Upgrades

Pin the chart version in `values.yaml` header comments; run the same **`helm upgrade --install`** with the new **`--version`** after reviewing the [release notes](https://github.com/external-secrets/external-secrets/releases).
@@ -1,31 +0,0 @@
# ClusterSecretStore for HashiCorp Vault (KV v2) using Kubernetes auth.
#
# Do not apply until Vault is running, reachable from the cluster, and configured with:
# - Kubernetes auth at mountPath (default: kubernetes)
# - A role (below: external-secrets) bound to this service account:
#     name: external-secrets
#     namespace: external-secrets
# - A policy allowing read on the KV path used below (e.g. secret/data/* for path "secret")
#
# Adjust server, mountPath, role, and path to match your Vault deployment. If Vault uses TLS
# with a private CA, set provider.vault.caProvider or caBundle (see README).
#
# kubectl apply -f clusters/noble/apps/external-secrets/examples/vault-cluster-secret-store.yaml
---
apiVersion: external-secrets.io/v1
kind: ClusterSecretStore
metadata:
  name: vault
spec:
  provider:
    vault:
      server: "http://vault.vault.svc.cluster.local:8200"
      path: secret
      version: v2
      auth:
        kubernetes:
          mountPath: kubernetes
          role: external-secrets
          serviceAccountRef:
            name: external-secrets
            namespace: external-secrets
@@ -1,5 +0,0 @@
# External Secrets Operator — apply before Helm.
apiVersion: v1
kind: Namespace
metadata:
  name: external-secrets
@@ -1,10 +0,0 @@
# External Secrets Operator — noble
#
# helm repo add external-secrets https://charts.external-secrets.io
# helm repo update
# kubectl apply -f clusters/noble/apps/external-secrets/namespace.yaml
# helm upgrade --install external-secrets external-secrets/external-secrets -n external-secrets \
#   --version 2.2.0 -f clusters/noble/apps/external-secrets/values.yaml --wait
#
# CRDs are installed by the chart (installCRDs: true). Vault ClusterSecretStore: see README + examples/.
commonLabels: {}
32
clusters/noble/apps/homepage/application.yaml
Normal file
@@ -0,0 +1,32 @@
# Argo CD — optional [Homepage](https://gethomepage.dev/) dashboard (Helm from [jameswynn.github.io/helm-charts](https://jameswynn.github.io/helm-charts/)).
# Values: **`./values.yaml`** (multi-source **`$values`** ref).
#
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: homepage
  namespace: argocd
  finalizers:
    - resources-finalizer.argocd.argoproj.io/background
spec:
  project: default
  sources:
    - repoURL: https://jameswynn.github.io/helm-charts
      chart: homepage
      targetRevision: 2.1.0
      helm:
        releaseName: homepage
        valueFiles:
          - $values/clusters/noble/apps/homepage/values.yaml
    - repoURL: https://gitea.pcenicni.ca/gsdavidp/home-server.git
      targetRevision: HEAD
      ref: values
  destination:
    server: https://kubernetes.default.svc
    namespace: homepage
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
122
clusters/noble/apps/homepage/values.yaml
Normal file
@@ -0,0 +1,122 @@
# Homepage — [gethomepage/homepage](https://github.com/gethomepage/homepage) via [jameswynn/homepage](https://github.com/jameswynn/helm-charts) Helm chart.
# Ingress: Traefik + cert-manager (same pattern as `clusters/noble/bootstrap/headlamp/values.yaml`).
# Service links match **`ansible/roles/noble_landing_urls/defaults/main.yml`** (`noble_lab_ui_entries`).
# **Velero** has no in-cluster web UI — tile links to upstream docs (no **siteMonitor**).
#
# **`siteMonitor`** runs **server-side** in the Homepage pod (see `gethomepage/homepage` `siteMonitor.js`).
# Public FQDNs like **`*.apps.noble.lab.pcenicni.dev`** often do **not** resolve inside the cluster
# (split-horizon / LAN DNS only) → `ENOTFOUND` / HTTP **500** in the monitor. Use **in-cluster Service**
# URLs for **`siteMonitor`** only; **`href`** stays the human-facing ingress URL.
#
# **Prometheus widget** also resolves from the pod — use the real **Service** name (Helm may truncate to
# 63 chars — this repo’s generated UI list uses **`kube-prometheus-kube-prome-prometheus`**).
# Verify: `kubectl -n monitoring get svc | grep -E 'prometheus|alertmanager|grafana'`.
#
image:
  repository: ghcr.io/gethomepage/homepage
  tag: v1.2.0

enableRbac: true

serviceAccount:
  create: true

ingress:
  main:
    enabled: true
    ingressClassName: traefik
    annotations:
      cert-manager.io/cluster-issuer: letsencrypt-prod
    hosts:
      - host: homepage.apps.noble.lab.pcenicni.dev
        paths:
          - path: /
            pathType: Prefix
    tls:
      - hosts:
          - homepage.apps.noble.lab.pcenicni.dev
        secretName: homepage-apps-noble-tls

env:
  - name: HOMEPAGE_ALLOWED_HOSTS
    value: homepage.apps.noble.lab.pcenicni.dev

config:
  bookmarks: []
  services:
    - Noble Lab:
        - Argo CD:
            icon: si-argocd
            href: https://argo.apps.noble.lab.pcenicni.dev
            siteMonitor: http://argocd-server.argocd.svc.cluster.local:80
            description: GitOps UI (sync, apps, repos)
        - Grafana:
            icon: si-grafana
            href: https://grafana.apps.noble.lab.pcenicni.dev
            siteMonitor: http://kube-prometheus-grafana.monitoring.svc.cluster.local:80
            description: Dashboards, Loki explore (logs)
        - Prometheus:
            icon: si-prometheus
            href: https://prometheus.apps.noble.lab.pcenicni.dev
            siteMonitor: http://kube-prometheus-kube-prome-prometheus.monitoring.svc.cluster.local:9090
            description: Prometheus UI (queries, targets) — lab; protect in production
            widget:
              type: prometheus
              url: http://kube-prometheus-kube-prome-prometheus.monitoring.svc.cluster.local:9090
              fields: ["targets_up", "targets_down", "targets_total"]
        - Alertmanager:
            icon: alertmanager.png
            href: https://alertmanager.apps.noble.lab.pcenicni.dev
            siteMonitor: http://kube-prometheus-kube-prome-alertmanager.monitoring.svc.cluster.local:9093
            description: Alertmanager UI (silences, status)
        - Headlamp:
            icon: mdi-kubernetes
            href: https://headlamp.apps.noble.lab.pcenicni.dev
            siteMonitor: http://headlamp.headlamp.svc.cluster.local:80
            description: Kubernetes UI (cluster resources)
        - Longhorn:
            icon: longhorn.png
            href: https://longhorn.apps.noble.lab.pcenicni.dev
            siteMonitor: http://longhorn-frontend.longhorn-system.svc.cluster.local:80
            description: Storage volumes, nodes, backups
        - Velero:
            icon: mdi-backup-restore
            href: https://velero.io/docs/
            description: Cluster backups — no in-cluster web UI; use velero CLI or kubectl (docs)
  widgets:
    - datetime:
        text_size: xl
        format:
          dateStyle: medium
          timeStyle: short
    - kubernetes:
        cluster:
          show: true
          cpu: true
          memory: true
          showLabel: true
          label: Cluster
        nodes:
          show: true
          cpu: true
          memory: true
          showLabel: true
    - search:
        provider: duckduckgo
        target: _blank
  kubernetes:
    mode: cluster
  settingsString: |
    title: Noble Lab
    description: Homelab services — in-cluster uptime checks, cluster resources, Prometheus targets
    theme: dark
    color: slate
    headerStyle: boxedWidgets
    statusStyle: dot
    iconStyle: theme
    fullWidth: true
    useEqualHeights: true
    layout:
      Noble Lab:
        style: row
        columns: 4
@@ -1,17 +1,7 @@
# Plain Kustomize only (namespaces + extra YAML). Helm installs are driven by **ansible/playbooks/noble.yml**
# (role **noble_platform**) — avoids **kustomize --enable-helm** in-repo.
# Argo CD **noble-root** syncs this directory. Add **Application** / **AppProject** manifests only for
# optional workloads that do not replace Ansible bootstrap (CNI, ingress, storage, core observability, etc.).
# Helm value files for those apps can live in subdirectories here (for example **./homepage/values.yaml**).
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - kube-prometheus-stack/namespace.yaml
  - loki/namespace.yaml
  - fluent-bit/namespace.yaml
  - sealed-secrets/namespace.yaml
  - external-secrets/namespace.yaml
  - vault/namespace.yaml
  - kyverno/namespace.yaml
  - headlamp/namespace.yaml
  - grafana-loki-datasource/loki-datasource.yaml
  - vault/unseal-cronjob.yaml
  - vault/cilium-network-policy.yaml
  - homepage/application.yaml
@@ -1,50 +0,0 @@
# Sealed Secrets (noble)

Encrypts `Secret` manifests so they can live in git; the controller decrypts **SealedSecret** resources into **Secret**s in-cluster.

- **Chart:** `sealed-secrets/sealed-secrets` **2.18.4** (app **0.36.1**)
- **Namespace:** `sealed-secrets`

## Install

```bash
helm repo add sealed-secrets https://bitnami-labs.github.io/sealed-secrets
helm repo update
kubectl apply -f clusters/noble/apps/sealed-secrets/namespace.yaml
helm upgrade --install sealed-secrets sealed-secrets/sealed-secrets -n sealed-secrets \
  --version 2.18.4 -f clusters/noble/apps/sealed-secrets/values.yaml --wait
```

## Workstation: `kubeseal`

Install a **kubeseal** build compatible with the controller (match the **app** minor version, e.g. **0.36.x** for **0.36.1**). Examples:

- **Homebrew:** `brew install kubeseal` (check `kubeseal --version` against the chart’s `image.tag` in `helm show values`).
- **GitHub releases:** [bitnami-labs/sealed-secrets](https://github.com/bitnami-labs/sealed-secrets/releases)

Fetch the cluster’s public seal cert (once per kube context):

```bash
kubeseal --fetch-cert > /tmp/noble-sealed-secrets.pem
```

Create a sealed secret from a normal secret manifest:

```bash
kubectl create secret generic example --from-literal=foo=bar --dry-run=client -o yaml \
  | kubeseal --cert /tmp/noble-sealed-secrets.pem -o yaml > example-sealedsecret.yaml
```

Commit `example-sealedsecret.yaml`; apply it with `kubectl apply -f`. The controller creates the **Secret** in the same namespace as the **SealedSecret**.

**Noble example:** `examples/kubeseal-newt-pangolin-auth.sh` (Newt / Pangolin tunnel credentials).

## Backup the sealing key

If the controller’s private key is lost, existing sealed files cannot be decrypted on a new cluster. Back up the key secret after install:

```bash
kubectl get secret -n sealed-secrets -l sealedsecrets.bitnami.com/sealed-secrets-key=active -o yaml > sealed-secrets-key-backup.yaml
```

Store `sealed-secrets-key-backup.yaml` in a safe offline location (not in public git).
@@ -1,19 +0,0 @@
#!/usr/bin/env bash
# Emit a SealedSecret for newt-pangolin-auth (namespace newt).
# Prerequisites: sealed-secrets controller running; kubeseal client (same minor as controller).
# Rotate Pangolin/Newt credentials in the UI first if they were exposed, then set env vars and run:
#
#   export PANGOLIN_ENDPOINT='https://pangolin.example.com'
#   export NEWT_ID='...'
#   export NEWT_SECRET='...'
#   ./kubeseal-newt-pangolin-auth.sh > newt-pangolin-auth.sealedsecret.yaml
#   kubectl apply -f newt-pangolin-auth.sealedsecret.yaml
#
set -euo pipefail
kubectl apply -f "$(dirname "$0")/../../newt/namespace.yaml" >/dev/null 2>&1 || true
kubectl -n newt create secret generic newt-pangolin-auth \
  --dry-run=client \
  --from-literal=PANGOLIN_ENDPOINT="${PANGOLIN_ENDPOINT:?}" \
  --from-literal=NEWT_ID="${NEWT_ID:?}" \
  --from-literal=NEWT_SECRET="${NEWT_SECRET:?}" \
  -o yaml | kubeseal -o yaml
@@ -1,5 +0,0 @@
# Sealed Secrets controller — apply before Helm.
apiVersion: v1
kind: Namespace
metadata:
  name: sealed-secrets
@@ -1,18 +0,0 @@
# Sealed Secrets — noble (Git-encrypted Secret workflow)
#
# helm repo add sealed-secrets https://bitnami-labs.github.io/sealed-secrets
# helm repo update
# kubectl apply -f clusters/noble/apps/sealed-secrets/namespace.yaml
# helm upgrade --install sealed-secrets sealed-secrets/sealed-secrets -n sealed-secrets \
#   --version 2.18.4 -f clusters/noble/apps/sealed-secrets/values.yaml --wait
#
# Client: install kubeseal (same minor as controller — see README).
# Defaults are sufficient for the lab; override here if you need key renewal, resources, etc.
#
# GitOps pattern: create Secrets only via SealedSecret (or External Secrets + Vault).
# Example (Newt): clusters/noble/apps/sealed-secrets/examples/kubeseal-newt-pangolin-auth.sh
# Backup the controller's sealing key: kubectl -n sealed-secrets get secret sealed-secrets-key -o yaml
#
# Talos cluster secrets (bootstrap token, cluster secret, certs) belong in talhelper talsecret /
# SOPS — not Sealed Secrets. See talos/README.md.
commonLabels: {}
@@ -1,162 +0,0 @@
|
||||
# HashiCorp Vault (noble)
|
||||
|
||||
Standalone Vault with **file** storage on a **Longhorn** PVC (`server.dataStorage`). The listener uses **HTTP** (`global.tlsDisable: true`) for in-cluster use; add TLS at the listener when exposing outside the cluster.
|
||||
|
||||
- **Chart:** `hashicorp/vault` **0.32.0** (Vault **1.21.2**)
|
||||
- **Namespace:** `vault`
|
||||
|
||||
## Install
|
||||
|
||||
```bash
|
||||
helm repo add hashicorp https://helm.releases.hashicorp.com
|
||||
helm repo update
|
||||
kubectl apply -f clusters/noble/apps/vault/namespace.yaml
|
||||
helm upgrade --install vault hashicorp/vault -n vault \
|
||||
--version 0.32.0 -f clusters/noble/apps/vault/values.yaml --wait --timeout 15m
|
||||
```
|
||||
|
||||
Verify:
|
||||
|
||||
```bash
|
||||
kubectl -n vault get pods,pvc,svc
|
||||
kubectl -n vault exec -i sts/vault -- vault status
|
||||
```
|
||||
|
||||
## Cilium network policy (Phase G)
|
||||
|
||||
After **Cilium** is up, optionally restrict HTTP access to the Vault server pods (**TCP 8200**) to **`external-secrets`** and same-namespace clients:
|
||||
|
||||
```bash
|
||||
kubectl apply -f clusters/noble/apps/vault/cilium-network-policy.yaml
|
||||
```
|
||||
|
||||
If you add workloads in other namespaces that call Vault, extend **`ingress`** in that manifest.
|
||||
|
||||
## Initialize and unseal (first time)
|
||||
|
||||
From a workstation with `kubectl` (or `kubectl exec` into any pod with `vault` CLI):
|
||||
|
||||
```bash
|
||||
kubectl -n vault exec -i sts/vault -- vault operator init -key-shares=1 -key-threshold=1
|
||||
```
|
||||
|
||||
**Lab-only:** `-key-shares=1 -key-threshold=1` keeps a single unseal key. For stronger Shamir splits, use more shares and store them safely.
|
||||
|
||||
Save the **Unseal Key** and **Root Token** offline. Then unseal once:
|
||||
|
||||
```bash
|
||||
kubectl -n vault exec -i sts/vault -- vault operator unseal
|
||||
# paste unseal key
|
||||
```
|
||||
|
||||
Or create the Secret used by the optional CronJob and apply it:
|
||||
|
||||
```bash
|
||||
kubectl -n vault create secret generic vault-unseal-key --from-literal=key='YOUR_UNSEAL_KEY'
|
||||
kubectl apply -f clusters/noble/apps/vault/unseal-cronjob.yaml
|
||||
```
|
||||
|
||||
The CronJob runs every minute and unseals if Vault is sealed and the Secret is present.
|
||||
|
||||
## Auto-unseal note
|
||||
|
||||
Vault **OSS** auto-unseal uses cloud KMS (AWS, GCP, Azure, OCI), **Transit** (another Vault), etc. There is no first-class “Kubernetes Secret” seal. This repo uses an optional **CronJob** as a **lab** substitute. Production clusters should use a supported seal backend.
|
||||
|
||||
## Kubernetes auth (External Secrets / ClusterSecretStore)
|
||||
|
||||
**One-shot:** from the repo root, `export KUBECONFIG=talos/kubeconfig` and `export VAULT_TOKEN=…`, then run **`./clusters/noble/apps/vault/configure-kubernetes-auth.sh`** (idempotent). Then **`kubectl apply -f clusters/noble/apps/external-secrets/examples/vault-cluster-secret-store.yaml`** on its own line (shell comments **`# …`** on the same line are parsed as extra `kubectl` args and break `apply`). **`kubectl get clustersecretstore vault`** should show **READY=True** after a few seconds.
|
||||
|
||||
Run these **from your workstation** (needs `kubectl`; no local `vault` binary required). Use a **short-lived admin token** or the root token **only in your shell** — do not paste tokens into logs or chat.
|
||||
|
||||
**1. Enable the auth method** (skip if already done):
|
||||
|
||||
```bash
|
||||
kubectl -n vault exec -it sts/vault -- sh -c '
|
||||
export VAULT_ADDR=http://127.0.0.1:8200
|
||||
export VAULT_TOKEN="YOUR_ROOT_OR_ADMIN_TOKEN"
|
||||
vault auth enable kubernetes
|
||||
'
|
||||
```
|
||||
|
||||
**2. Configure `auth/kubernetes`** — the API **issuer** must match the `iss` claim on service account JWTs. With **kube-vip** / a custom API URL, discover it from the cluster (do not assume `kubernetes.default`):
|
||||
|
||||
```bash
|
||||
ISSUER=$(kubectl get --raw /.well-known/openid-configuration | jq -r .issuer)
|
||||
REVIEWER=$(kubectl -n vault create token vault --duration=8760h)
|
||||
CA_B64=$(kubectl config view --raw --minify -o jsonpath='{.clusters[0].cluster.certificate-authority-data}')
|
||||
```
|
||||
|
||||
Then apply config **inside** the Vault pod (environment variables are passed in with `env` so quoting stays correct):
|
||||
|
||||
```bash
|
||||
export VAULT_TOKEN="YOUR_ROOT_OR_ADMIN_TOKEN"
|
||||
export ISSUER REVIEWER CA_B64
|
||||
kubectl -n vault exec -i sts/vault -- env \
|
||||
VAULT_ADDR=http://127.0.0.1:8200 \
|
||||
VAULT_TOKEN="$VAULT_TOKEN" \
|
||||
CA_B64="$CA_B64" \
|
||||
REVIEWER="$REVIEWER" \
|
||||
ISSUER="$ISSUER" \
|
||||
sh -ec '
|
||||
echo "$CA_B64" | base64 -d > /tmp/k8s-ca.crt
|
||||
vault write auth/kubernetes/config \
|
||||
kubernetes_host="https://kubernetes.default.svc:443" \
|
||||
kubernetes_ca_cert=@/tmp/k8s-ca.crt \
|
||||
token_reviewer_jwt="$REVIEWER" \
|
||||
issuer="$ISSUER"
|
||||
'
|
||||
```

**3. KV v2** at path `secret` (skip if already enabled):

```bash
kubectl -n vault exec -it sts/vault -- sh -c '
export VAULT_ADDR=http://127.0.0.1:8200
export VAULT_TOKEN="YOUR_ROOT_OR_ADMIN_TOKEN"
vault secrets enable -path=secret kv-v2
'
```

**4. Policy + role** for the External Secrets operator SA (`external-secrets` / `external-secrets`):

```bash
kubectl -n vault exec -it sts/vault -- sh -c '
export VAULT_ADDR=http://127.0.0.1:8200
export VAULT_TOKEN="YOUR_ROOT_OR_ADMIN_TOKEN"
vault policy write external-secrets - <<EOF
path "secret/data/*" {
  capabilities = ["read", "list"]
}
path "secret/metadata/*" {
  capabilities = ["read", "list"]
}
EOF
vault write auth/kubernetes/role/external-secrets \
  bound_service_account_names=external-secrets \
  bound_service_account_namespaces=external-secrets \
  policies=external-secrets \
  ttl=24h
'
```
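
The policy grants both `data/` and `metadata/` because a KV v2 mount exposes each logical key at two REST paths (values and version metadata). A small illustrative helper — `kv2_paths` and the sample key name are hypothetical, for this sketch only:

```shell
# Map a logical KV v2 key (mount/name) to the two API paths the policy must
# allow. kv2_paths is a made-up helper; Vault itself does this mapping.
kv2_paths() {
  mount=${1%%/*}
  key=${1#*/}
  printf '%s/data/%s\n%s/metadata/%s\n' "$mount" "$key" "$mount" "$key"
}
kv2_paths secret/cloudflare-dns-api-token
```

So a `remoteRef` of `secret/cloudflare-dns-api-token` ends up reading `secret/data/cloudflare-dns-api-token`, which the `secret/data/*` rule covers.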

**5. Apply** **`clusters/noble/apps/external-secrets/examples/vault-cluster-secret-store.yaml`** if you have not already, then verify:

```bash
kubectl describe clustersecretstore vault
```

See also [Kubernetes auth](https://developer.hashicorp.com/vault/docs/auth/kubernetes#configuration).

## TLS and External Secrets

`values.yaml` disables TLS on the Vault listener. The **`ClusterSecretStore`** example uses **`http://vault.vault.svc.cluster.local:8200`**. If you enable TLS on the listener, switch the URL to **`https://`** and configure **`caBundle`** or **`caProvider`** on the store.
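
`caBundle` takes the CA chain as a single-line base64 string. A sketch of producing one from a PEM file; the throwaway self-signed CA generated here only makes the snippet self-contained, and for a real listener you would encode the issuing CA instead:

```shell
# Generate a throwaway CA cert (illustration only), then base64-encode it the
# way a caBundle field expects: one line, no wrapping.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=vault-lab-ca" \
  -keyout /tmp/ca.key -out /tmp/ca.crt 2>/dev/null
CA_BUNDLE=$(base64 < /tmp/ca.crt | tr -d '\n')
# Round-trip to confirm the encoding is intact PEM.
printf '%s' "$CA_BUNDLE" | base64 -d | head -n 1
```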

## UI

Port-forward:

```bash
kubectl -n vault port-forward svc/vault-ui 8200:8200
```

Open `http://127.0.0.1:8200` and log in with the root token (rotate for production workflows).
@@ -1,40 +0,0 @@

# CiliumNetworkPolicy — restrict who may reach the Vault HTTP listener (8200).
# Apply after Cilium is healthy: kubectl apply -f clusters/noble/apps/vault/cilium-network-policy.yaml
#
# Ingress-only policy: egress from Vault is unchanged (Kubernetes auth needs API + DNS).
# Extend the ingress rules if other namespaces must call Vault (e.g. app workloads).
#
# Ref: https://docs.cilium.io/en/stable/security/policy/language/
---
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: vault-http-ingress
  namespace: vault
spec:
  endpointSelector:
    matchLabels:
      app.kubernetes.io/name: vault
      component: server
  ingress:
    - fromEndpoints:
        - matchLabels:
            "k8s:io.kubernetes.pod.namespace": external-secrets
      toPorts:
        - ports:
            - port: "8200"
              protocol: TCP
    - fromEndpoints:
        - matchLabels:
            "k8s:io.kubernetes.pod.namespace": traefik
      toPorts:
        - ports:
            - port: "8200"
              protocol: TCP
    - fromEndpoints:
        - matchLabels:
            "k8s:io.kubernetes.pod.namespace": vault
      toPorts:
        - ports:
            - port: "8200"
              protocol: TCP
@@ -1,77 +0,0 @@

#!/usr/bin/env bash
# Configure Vault Kubernetes auth + KV v2 + policy/role for External Secrets Operator.
# Requires: kubectl (cluster access), jq (OpenID issuer discovery); Vault reachable via sts/vault.
#
# Usage (from repo root):
#   export KUBECONFIG=talos/kubeconfig   # or your path
#   export VAULT_TOKEN='…'               # root or admin token — never commit
#   ./clusters/noble/apps/vault/configure-kubernetes-auth.sh
#
# Then:   kubectl apply -f clusters/noble/apps/external-secrets/examples/vault-cluster-secret-store.yaml
# Verify: kubectl describe clustersecretstore vault

set -euo pipefail

: "${VAULT_TOKEN:?Set VAULT_TOKEN to your Vault root or admin token}"

ISSUER=$(kubectl get --raw /.well-known/openid-configuration | jq -r .issuer)
REVIEWER=$(kubectl -n vault create token vault --duration=8760h)
CA_B64=$(kubectl config view --raw --minify -o jsonpath='{.clusters[0].cluster.certificate-authority-data}')

kubectl -n vault exec -i sts/vault -- env \
  VAULT_ADDR=http://127.0.0.1:8200 \
  VAULT_TOKEN="$VAULT_TOKEN" \
  sh -ec '
set -e
vault auth list >/tmp/vauth.txt
grep -q "^kubernetes/" /tmp/vauth.txt || vault auth enable kubernetes
'

kubectl -n vault exec -i sts/vault -- env \
  VAULT_ADDR=http://127.0.0.1:8200 \
  VAULT_TOKEN="$VAULT_TOKEN" \
  CA_B64="$CA_B64" \
  REVIEWER="$REVIEWER" \
  ISSUER="$ISSUER" \
  sh -ec '
echo "$CA_B64" | base64 -d > /tmp/k8s-ca.crt
vault write auth/kubernetes/config \
  kubernetes_host="https://kubernetes.default.svc:443" \
  kubernetes_ca_cert=@/tmp/k8s-ca.crt \
  token_reviewer_jwt="$REVIEWER" \
  issuer="$ISSUER"
'

kubectl -n vault exec -i sts/vault -- env \
  VAULT_ADDR=http://127.0.0.1:8200 \
  VAULT_TOKEN="$VAULT_TOKEN" \
  sh -ec '
set -e
vault secrets list >/tmp/vsec.txt
grep -q "^secret/" /tmp/vsec.txt || vault secrets enable -path=secret kv-v2
'

kubectl -n vault exec -i sts/vault -- env \
  VAULT_ADDR=http://127.0.0.1:8200 \
  VAULT_TOKEN="$VAULT_TOKEN" \
  sh -ec '
vault policy write external-secrets - <<EOF
path "secret/data/*" {
  capabilities = ["read", "list"]
}
path "secret/metadata/*" {
  capabilities = ["read", "list"]
}
EOF
vault write auth/kubernetes/role/external-secrets \
  bound_service_account_names=external-secrets \
  bound_service_account_namespaces=external-secrets \
  policies=external-secrets \
  ttl=24h
'

echo "Done. Issuer used: $ISSUER"
echo ""
echo "Next (each command on its own line — do not paste # comments after kubectl):"
echo "  kubectl apply -f clusters/noble/apps/external-secrets/examples/vault-cluster-secret-store.yaml"
echo "  kubectl get clustersecretstore vault"
@@ -1,5 +0,0 @@

# HashiCorp Vault — apply before Helm.
apiVersion: v1
kind: Namespace
metadata:
  name: vault
@@ -1,63 +0,0 @@

# Optional lab auto-unseal: applies after Vault is initialized and Secret `vault-unseal-key` exists.
#
# 1) vault operator init -key-shares=1 -key-threshold=1   (lab only — single key)
# 2) kubectl -n vault create secret generic vault-unseal-key --from-literal=key='YOUR_UNSEAL_KEY'
# 3) kubectl apply -f clusters/noble/apps/vault/unseal-cronjob.yaml
#
# OSS Vault has no Kubernetes/KMS seal; this CronJob runs vault operator unseal when the server is sealed.
# Protect the Secret with RBAC; prefer cloud KMS auto-unseal for real environments.
---
apiVersion: batch/v1
kind: CronJob
metadata:
  name: vault-auto-unseal
  namespace: vault
spec:
  concurrencyPolicy: Forbid
  successfulJobsHistoryLimit: 1
  failedJobsHistoryLimit: 3
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          securityContext:
            runAsNonRoot: true
            runAsUser: 100
            runAsGroup: 1000
            seccompProfile:
              type: RuntimeDefault
          containers:
            - name: unseal
              image: hashicorp/vault:1.21.2
              imagePullPolicy: IfNotPresent
              securityContext:
                allowPrivilegeEscalation: false
                capabilities:
                  drop:
                    - ALL
              env:
                - name: VAULT_ADDR
                  value: http://vault.vault.svc:8200
              command:
                - /bin/sh
                - -ec
                - |
                  test -f /secrets/key || exit 0
                  status="$(vault status -format=json 2>/dev/null || true)"
                  echo "$status" | grep -q '"initialized":true' || exit 0
                  echo "$status" | grep -q '"sealed":false' && exit 0
                  vault operator unseal "$(cat /secrets/key)"
              volumeMounts:
                - name: unseal
                  mountPath: /secrets
                  readOnly: true
          volumes:
            - name: unseal
              secret:
                secretName: vault-unseal-key
                optional: true
                items:
                  - key: key
                    path: key
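
The gate in the container command (unseal only when initialized and still sealed) can be checked locally against a canned `vault status -format=json` payload — the JSON below is hand-written for this sketch, not real Vault output:

```shell
# Decide whether an unseal attempt is warranted: initialized AND still sealed.
status='{"initialized":true,"sealed":true,"t":1,"n":1,"progress":0}'
needs_unseal=yes
if ! echo "$status" | grep -q '"initialized":true'; then
  needs_unseal=no   # not initialized yet — nothing to unseal
fi
if echo "$status" | grep -q '"sealed":false'; then
  needs_unseal=no   # already unsealed — skip
fi
echo "$needs_unseal"
```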
@@ -1,62 +0,0 @@

# HashiCorp Vault — noble (standalone, file storage on Longhorn; TLS disabled on listener for in-cluster HTTP).
#
# helm repo add hashicorp https://helm.releases.hashicorp.com
# helm repo update
# kubectl apply -f clusters/noble/apps/vault/namespace.yaml
# helm upgrade --install vault hashicorp/vault -n vault \
#   --version 0.32.0 -f clusters/noble/apps/vault/values.yaml --wait --timeout 15m
#
# Post-install: initialize, store unseal key in Secret, apply optional unseal CronJob — see README.md
#
global:
  tlsDisable: true

injector:
  enabled: true

server:
  enabled: true
  dataStorage:
    enabled: true
    size: 10Gi
    storageClass: longhorn
    accessMode: ReadWriteOnce
  ha:
    enabled: false
  standalone:
    enabled: true
    config: |
      ui = true

      listener "tcp" {
        tls_disable = 1
        address = "[::]:8200"
        cluster_address = "[::]:8201"
      }

      storage "file" {
        path = "/vault/data"
      }

  # Allow pod Ready before init/unseal so Helm --wait succeeds (see Vault /v1/sys/health docs).
  readinessProbe:
    enabled: true
    path: "/v1/sys/health?uninitcode=204&sealedcode=204&standbyok=true"
    port: 8200

  # LAN: TLS terminates at Traefik + cert-manager; listener stays HTTP (global.tlsDisable).
  ingress:
    enabled: true
    ingressClassName: traefik
    annotations:
      cert-manager.io/cluster-issuer: letsencrypt-prod
    hosts:
      - host: vault.apps.noble.lab.pcenicni.dev
        paths: []
    tls:
      - secretName: vault-apps-noble-tls
        hosts:
          - vault.apps.noble.lab.pcenicni.dev

ui:
  enabled: true
@@ -50,21 +50,56 @@ helm upgrade --install argocd argo/argo-cd -n argocd --create-namespace \

Use **Settings → Repositories** in the UI, or `argocd repo add` / a `Secret` of type `repository`.

## 4. App-of-apps (optional GitOps only)
## 4. App-of-apps (GitOps)

Bootstrap **platform** workloads (CNI, ingress, cert-manager, Kyverno, observability, Vault, etc.) are installed by
**`ansible/playbooks/noble.yml`** — not by Argo. **`apps/kustomization.yaml`** is empty by default.
**Ansible** (`ansible/playbooks/noble.yml`) performs the **initial** install: Helm releases and **`kubectl apply -k clusters/noble/bootstrap`**. **Argo** then tracks the same git paths for ongoing reconciliation.

1. Edit **`root-application.yaml`**: set **`repoURL`** and **`targetRevision`** to this repository. The **`resources-finalizer.argocd.argoproj.io/background`** finalizer uses Argo’s path-qualified form so **`kubectl apply`** does not warn about finalizer names.
2. When you want Argo to manage specific apps, add **`Application`** manifests under **`apps/`** (see **`apps/README.md`**).
3. Apply the root:
1. Edit **`root-application.yaml`** and **`bootstrap-root-application.yaml`**: set **`repoURL`** and **`targetRevision`**. The **`resources-finalizer.argocd.argoproj.io/background`** finalizer uses Argo’s path-qualified form so **`kubectl apply`** does not warn about finalizer names.
2. Optional add-on apps: add **`Application`** manifests under **`clusters/noble/apps/`** (see **`clusters/noble/apps/README.md`**).
3. **Bootstrap kustomize** (namespaces, datasource, leaf **`Application`**s under **`argocd/app-of-apps/`**, etc.): **`noble-bootstrap-root`** syncs **`clusters/noble/bootstrap`**. It is created with **manual** sync only so Argo does not apply changes while **`noble.yml`** is still running.

**`ansible/playbooks/noble.yml`** (role **`noble_argocd`**) applies both roots when **`noble_argocd_apply_root_application`** / **`noble_argocd_apply_bootstrap_root_application`** are true in **`ansible/group_vars/all.yml`**.

```bash
kubectl apply -f clusters/noble/bootstrap/argocd/root-application.yaml
kubectl apply -f clusters/noble/bootstrap/argocd/bootstrap-root-application.yaml
```

If you migrated from GitOps-managed **`noble-platform`** / **`noble-kyverno`**, delete stale **`Application`** objects on
the cluster (see **`apps/README.md`**) then re-apply the root.
If you migrated from older GitOps **`Application`** names, delete stale **`Application`** objects on the cluster (see **`clusters/noble/apps/README.md`**) then re-apply the roots.

## 5. After Ansible: enable automated sync for **noble-bootstrap-root**

Do this only after **`ansible-playbook playbooks/noble.yml`** has finished successfully (including **`noble_platform`** `kubectl apply -k` and any Helm stages you rely on). Until then, leave **manual** sync so Argo does not fight the playbook.

**Required steps**

1. Confirm the cluster matches git for kustomize output (optional): `kubectl kustomize clusters/noble/bootstrap | kubectl diff -f -` or inspect resources in the UI.
2. Register the git repo in Argo if you have not already (**§3**).
3. **Refresh** the app so Argo compares **`clusters/noble/bootstrap`** to the cluster: Argo UI → **noble-bootstrap-root** → **Refresh**, or:

```bash
argocd app get noble-bootstrap-root --refresh
```

4. **Enable automated sync** (prune + self-heal), preserving **`CreateNamespace`**, using any one of:

**kubectl**

```bash
kubectl patch application noble-bootstrap-root -n argocd --type merge -p '{"spec":{"syncPolicy":{"automated":{"prune":true,"selfHeal":true},"syncOptions":["CreateNamespace=true"]}}}'
```

**argocd** CLI (logged in)

```bash
argocd app set noble-bootstrap-root --sync-policy automated --auto-prune --self-heal
```

**UI:** open **noble-bootstrap-root** → **App Details** → enable **AUTO-SYNC** (and **Prune** / **Self Heal** if shown).
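
The merge patch used in the **kubectl** option can be sanity-checked locally before applying — a sketch assuming `jq` is available:

```shell
# Parse the patch and confirm the fields Argo reads; jq -e exits non-zero if
# the expression evaluates to false or null.
patch='{"spec":{"syncPolicy":{"automated":{"prune":true,"selfHeal":true},"syncOptions":["CreateNamespace=true"]}}}'
echo "$patch" | jq -e '
  .spec.syncPolicy.automated.prune == true
  and .spec.syncPolicy.automated.selfHeal == true
  and (.spec.syncPolicy.syncOptions | index("CreateNamespace=true") != null)
' >/dev/null && echo "patch OK"
```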

5. Trigger a sync if the app does not go green immediately: **Sync** in the UI, or `argocd app sync noble-bootstrap-root`.

After this, **git** is the source of truth for everything under **`clusters/noble/bootstrap/kustomization.yaml`** (including **`argocd/app-of-apps/`**). Helm-managed platform components remain whatever Ansible last installed until you model them as Argo **`Application`**s under **`app-of-apps/`** and stop installing them from Ansible.

## Versions

@@ -1,17 +0,0 @@

# Argo CD — app-of-apps children (optional GitOps only)

**Core platform is Ansible-managed** — see repository **`ansible/README.md`** and **`ansible/playbooks/noble.yml`**.

This directory’s **`kustomization.yaml`** has **`resources: []`** so **`noble-root`** (if applied) does not reconcile Helm charts or cluster add-ons. **Add `Application` manifests here only** for apps you want Argo to manage (for example, sample workloads or third-party charts not covered by the bootstrap playbook).

| Previous (removed) | Now |
|--------------------|-----|
| **`noble-kyverno`**, **`noble-kyverno-policies`**, **`noble-platform`** | Installed by Ansible roles **`noble_kyverno`**, **`noble_kyverno_policies`**, **`noble_platform`** |

If you previously synced **`noble-root`** with the old child manifests, delete stale Applications on the cluster:

```bash
kubectl delete application -n argocd noble-platform noble-kyverno noble-kyverno-policies --ignore-not-found
```

Then re-apply **`root-application.yaml`** so Argo matches this repo.
@@ -1,6 +0,0 @@

# Intentionally empty: core platform (CNI, ingress, storage, observability, policy, etc.) is
# installed by **ansible/playbooks/noble.yml** — not by Argo CD. Add optional Application
# manifests here only for workloads you want GitOps-managed.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources: []
@@ -3,8 +3,10 @@

# 1. Set spec.source.repoURL (and targetRevision — **HEAD** tracks the remote default branch) to this repo.
# 2. kubectl apply -f clusters/noble/bootstrap/argocd/root-application.yaml
#
# **apps/kustomization.yaml** is intentionally empty: core platform is installed by **ansible/playbooks/noble.yml**,
# not Argo. Add **Application** manifests under **apps/** only for optional GitOps-managed workloads.
# **clusters/noble/apps** holds optional **Application** manifests. Core platform Helm + kustomize is
# installed by **ansible/playbooks/noble.yml** from **clusters/noble/bootstrap/**. **bootstrap-root-application.yaml**
# registers **noble-bootstrap-root** for the same kustomize tree (**manual** sync until you enable
# automation after the playbook — see **README.md** §5).
#
apiVersion: argoproj.io/v1alpha1
kind: Application
@@ -21,7 +23,7 @@ spec:

  source:
    repoURL: https://gitea.pcenicni.ca/gsdavidp/home-server.git
    targetRevision: HEAD
    path: clusters/noble/bootstrap/argocd/apps
    path: clusters/noble/apps
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
@@ -19,7 +19,7 @@ Without this Secret, **`ClusterIssuer`** will not complete certificate orders.

1. Create the namespace:

```bash
kubectl apply -f clusters/noble/apps/cert-manager/namespace.yaml
kubectl apply -f clusters/noble/bootstrap/cert-manager/namespace.yaml
```

2. Install the chart (CRDs included via `values.yaml`):
@@ -30,7 +30,7 @@ Without this Secret, **`ClusterIssuer`** will not complete certificate orders.
helm upgrade --install cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --version v1.20.0 \
  -f clusters/noble/apps/cert-manager/values.yaml \
  -f clusters/noble/bootstrap/cert-manager/values.yaml \
  --wait
```

@@ -39,7 +39,7 @@ Without this Secret, **`ClusterIssuer`** will not complete certificate orders.
4. Apply ClusterIssuers (staging then prod, or both):

```bash
kubectl apply -k clusters/noble/apps/cert-manager
kubectl apply -k clusters/noble/bootstrap/cert-manager
```

5. Confirm:
@@ -2,13 +2,13 @@
#
# Chart: jetstack/cert-manager — pin version on the helm command (e.g. v1.20.0).
#
# kubectl apply -f clusters/noble/apps/cert-manager/namespace.yaml
# kubectl apply -f clusters/noble/bootstrap/cert-manager/namespace.yaml
# helm repo add jetstack https://charts.jetstack.io
# helm repo update
# helm upgrade --install cert-manager jetstack/cert-manager -n cert-manager \
#   --version v1.20.0 -f clusters/noble/apps/cert-manager/values.yaml --wait
#   --version v1.20.0 -f clusters/noble/bootstrap/cert-manager/values.yaml --wait
#
# kubectl apply -k clusters/noble/apps/cert-manager
# kubectl apply -k clusters/noble/bootstrap/cert-manager

crds:
  enabled: true
@@ -14,7 +14,7 @@ helm repo update
helm upgrade --install cilium cilium/cilium \
  --namespace kube-system \
  --version 1.16.6 \
  -f clusters/noble/apps/cilium/values.yaml \
  -f clusters/noble/bootstrap/cilium/values.yaml \
  --wait
```

@@ -25,7 +25,7 @@ kubectl -n kube-system rollout status ds/cilium
kubectl get nodes
```

When nodes are **Ready**, continue with **MetalLB** (`clusters/noble/apps/metallb/README.md`) and other Phase B items. **kube-vip** for the Kubernetes API VIP is separate (L2 ARP); it can run after the API is reachable.
When nodes are **Ready**, continue with **MetalLB** (`clusters/noble/bootstrap/metallb/README.md`) and other Phase B items. **kube-vip** for the Kubernetes API VIP is separate (L2 ARP); it can run after the API is reachable.

## 2. Optional: kube-proxy replacement (phase 2)

16
clusters/noble/bootstrap/csi-snapshot-controller/README.md
Normal file
@@ -0,0 +1,16 @@

# CSI Volume Snapshot (external-snapshotter)

Installs the **Volume Snapshot** CRDs and the **snapshot-controller** so CSI drivers (e.g. **Longhorn**) and **Velero** can use `VolumeSnapshot` / `VolumeSnapshotContent` / `VolumeSnapshotClass`.

- Upstream: [kubernetes-csi/external-snapshotter](https://github.com/kubernetes-csi/external-snapshotter) **v8.5.0**
- **Not** the per-driver **csi-snapshotter** sidecar — Longhorn ships that with its CSI components.

**Order:** apply **before** relying on volume snapshots (e.g. before or early with **Longhorn**; **Ansible** runs this after **Cilium**, before **metrics-server** / **Longhorn**).

```bash
kubectl apply -k clusters/noble/bootstrap/csi-snapshot-controller/crd
kubectl apply -k clusters/noble/bootstrap/csi-snapshot-controller/controller
kubectl -n kube-system rollout status deploy/snapshot-controller --timeout=120s
```

After this, create or label a **VolumeSnapshotClass** for Longhorn (`velero.io/csi-volumesnapshot-class: "true"`) per `clusters/noble/bootstrap/velero/README.md`.
@@ -0,0 +1,8 @@

# Snapshot controller — **kube-system** (upstream default).
# Image tag should match the external-snapshotter release family (see setup-snapshot-controller.yaml in that tag).
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: kube-system
resources:
  - https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/v8.5.0/deploy/kubernetes/snapshot-controller/rbac-snapshot-controller.yaml
  - https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/v8.5.0/deploy/kubernetes/snapshot-controller/setup-snapshot-controller.yaml
@@ -0,0 +1,9 @@

# kubernetes-csi/external-snapshotter — Volume Snapshot GA CRDs only (no VolumeGroupSnapshot).
# Pin **ref** when bumping; keep in sync with **controller** image below.
# https://github.com/kubernetes-csi/external-snapshotter/tree/v8.5.0/client/config/crd
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/v8.5.0/client/config/crd/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
  - https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/v8.5.0/client/config/crd/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
  - https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/v8.5.0/client/config/crd/snapshot.storage.k8s.io_volumesnapshots.yaml
@@ -5,11 +5,11 @@
#
# Talos: only **tail** `/var/log/containers` (no host **systemd** input — journal layout differs from typical Linux).
#
# kubectl apply -f clusters/noble/apps/fluent-bit/namespace.yaml
# kubectl apply -f clusters/noble/bootstrap/fluent-bit/namespace.yaml
# helm repo add fluent https://fluent.github.io/helm-charts
# helm repo update
# helm upgrade --install fluent-bit fluent/fluent-bit -n logging \
#   --version 0.56.0 -f clusters/noble/apps/fluent-bit/values.yaml --wait --timeout 15m
#   --version 0.56.0 -f clusters/noble/bootstrap/fluent-bit/values.yaml --wait --timeout 15m

config:
  inputs: |
@@ -2,9 +2,9 @@
# The Grafana sidecar watches ConfigMaps labeled **grafana_datasource: "1"** and loads YAML keys as files.
# Does not require editing the kube-prometheus-stack Helm release.
#
# kubectl apply -f clusters/noble/apps/grafana-loki-datasource/loki-datasource.yaml
# kubectl apply -f clusters/noble/bootstrap/grafana-loki-datasource/loki-datasource.yaml
#
# Remove with: kubectl delete -f clusters/noble/apps/grafana-loki-datasource/loki-datasource.yaml
# Remove with: kubectl delete -f clusters/noble/bootstrap/grafana-loki-datasource/loki-datasource.yaml
apiVersion: v1
kind: ConfigMap
metadata:
@@ -10,9 +10,9 @@
```bash
helm repo add headlamp https://kubernetes-sigs.github.io/headlamp/
helm repo update
kubectl apply -f clusters/noble/apps/headlamp/namespace.yaml
kubectl apply -f clusters/noble/bootstrap/headlamp/namespace.yaml
helm upgrade --install headlamp headlamp/headlamp -n headlamp \
  --version 0.40.1 -f clusters/noble/apps/headlamp/values.yaml --wait --timeout 10m
  --version 0.40.1 -f clusters/noble/bootstrap/headlamp/values.yaml --wait --timeout 10m
```

Sign-in uses a **ServiceAccount token** (Headlamp docs: create a limited SA for day-to-day use). This repo binds the Headlamp workload SA to the built-in **`edit`** ClusterRole (**`clusterRoleBinding.clusterRoleName: edit`** in **`values.yaml`**) — not **`cluster-admin`**. For cluster-scoped admin work, use **`kubectl`** with your admin kubeconfig. Optional **OIDC** in **`config.oidc`** replaces token login for SSO.
@@ -2,9 +2,9 @@
#
# helm repo add headlamp https://kubernetes-sigs.github.io/headlamp/
# helm repo update
# kubectl apply -f clusters/noble/apps/headlamp/namespace.yaml
# kubectl apply -f clusters/noble/bootstrap/headlamp/namespace.yaml
# helm upgrade --install headlamp headlamp/headlamp -n headlamp \
#   --version 0.40.1 -f clusters/noble/apps/headlamp/values.yaml --wait --timeout 10m
#   --version 0.40.1 -f clusters/noble/bootstrap/headlamp/values.yaml --wait --timeout 10m
#
# DNS: headlamp.apps.noble.lab.pcenicni.dev → Traefik LB (see talos/CLUSTER-BUILD.md).
# Default chart RBAC is broad — restrict for production (Phase G).
@@ -4,10 +4,10 @@
#
# Install (use one terminal; chain with && so `helm upgrade` always runs after `helm repo update`):
#
# kubectl apply -f clusters/noble/apps/kube-prometheus-stack/namespace.yaml
# kubectl apply -f clusters/noble/bootstrap/kube-prometheus-stack/namespace.yaml
# helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
# helm repo update && helm upgrade --install kube-prometheus prometheus-community/kube-prometheus-stack -n monitoring \
#   --version 82.15.1 -f clusters/noble/apps/kube-prometheus-stack/values.yaml --wait --timeout 30m
#   --version 82.15.1 -f clusters/noble/bootstrap/kube-prometheus-stack/values.yaml --wait --timeout 30m
#
# Why it looks "stalled": with --wait, Helm prints almost nothing until the release finishes (can take many minutes).
# Do not use --timeout 5m for the first install — Longhorn PVCs + StatefulSets often need 15–30m. To watch progress,
@@ -87,7 +87,7 @@ grafana:
      size: 10Gi

  # HTTPS via Traefik + cert-manager (ClusterIssuer letsencrypt-prod; same pattern as other *.apps.noble.lab.pcenicni.dev hosts).
  # DNS: grafana.apps.noble.lab.pcenicni.dev → Traefik LoadBalancer (192.168.50.211) — see clusters/noble/apps/traefik/values.yaml
  # DNS: grafana.apps.noble.lab.pcenicni.dev → Traefik LoadBalancer (192.168.50.211) — see clusters/noble/bootstrap/traefik/values.yaml
  ingress:
    enabled: true
    ingressClassName: traefik
@@ -109,4 +109,4 @@ grafana:
      # Traefik sets X-Forwarded-*; required for correct redirects and cookies behind the ingress.
      use_proxy_headers: true

  # Loki datasource: apply `clusters/noble/apps/grafana-loki-datasource/loki-datasource.yaml` (sidecar ConfigMap) instead of additionalDataSources here.
  # Loki datasource: apply `clusters/noble/bootstrap/grafana-loki-datasource/loki-datasource.yaml` (sidecar ConfigMap) instead of additionalDataSources here.
Some files were not shown because too many files have changed in this diff.