Compare commits: `2a64f40f93...main` (55 commits)

Commits:

aeffc7d6dd, 0f88a33216, bfb72cb519, 51eb64dd9d, f259285f6e, c312ceeb56, c15bf4d708, 89be30884e, 16948c62f9, 3a6e5dff5b, 023ebfee5d, 27fb4113eb, 4026591f0b, 8a740019ad, 544f75b0ee, 33a10dc7e9, a4b9913b7e, 11c62009a4, 03ed4e70a2, 7855b10982, 079c11b20c, bf108a37e2, 97b56581ed, f154658d79, 90509bacc5, e4741ecd15, f6647056be, 76eb7df18c, 90fd8fb8a6, 41841abc84, 7a62489ad6, 0e8eaa2f0d, a48ac16c14, 46cedc965f, 207cdca0cf, bf185b71a9, fc985932fe, ee7669c788, 90cd34c34f, 1a3c8378d4, 05717c7e6a, 0dd642f0c5, 0a6c9976da, c5319a5436, c148454e91, 445a1ac211, 906c24b1d5, d5f38bd766, a65b553252, a5e624f542, d2b52f3518, 2b4f568632, 7caba0d90c, fd4afef992, 092a6febe4
**.env.sample** (new file, 19 lines)

@@ -0,0 +1,19 @@

```
# Copy to **.env** in this repository root (`.env` is gitignored).
# Ansible **noble_cert_manager** role sources `.env` after cert-manager Helm install and creates
# **cert-manager/cloudflare-dns-api-token** when **CLOUDFLARE_DNS_API_TOKEN** is set.
#
# Cloudflare: Zone → DNS → Edit + Zone → Read for **pcenicni.dev** (see clusters/noble/bootstrap/cert-manager/README.md).
CLOUDFLARE_DNS_API_TOKEN=

# --- Optional: other deploy-time values (documented for manual use or future automation) ---

# Pangolin / Newt — with **noble_newt_install=true**, Ansible creates **newt/newt-pangolin-auth** when all are set (see clusters/noble/bootstrap/newt/README.md).
PANGOLIN_ENDPOINT=
NEWT_ID=
NEWT_SECRET=

# Velero — when **noble_velero_install=true**, set bucket + S3 API URL and credentials (see clusters/noble/bootstrap/velero/README.md).
NOBLE_VELERO_S3_BUCKET=
NOBLE_VELERO_S3_URL=
NOBLE_VELERO_AWS_ACCESS_KEY_ID=
NOBLE_VELERO_AWS_SECRET_ACCESS_KEY=
```
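The comments above describe how the cert-manager role consumes this file. A minimal sketch of that flow in POSIX sh, using a placeholder token value; the actual role runs inside Ansible, and the `kubectl` step needs a live cluster, so it is shown commented out:

```shell
# Demo: load .env with `set -a` so every assignment is exported,
# roughly what sourcing the file before creating the Secret looks like.
# Token value and paths here are placeholders, not real credentials.
cat > .env <<'EOF'
CLOUDFLARE_DNS_API_TOKEN=example-token
EOF

set -a        # auto-export every variable assigned while this is on
. ./.env
set +a

echo "$CLOUDFLARE_DNS_API_TOKEN" > token.out
cat token.out

# With a reachable cluster, the Secret would then be created along these lines:
# kubectl -n cert-manager create secret generic cloudflare-dns-api-token \
#   --from-literal=api-token="$CLOUDFLARE_DNS_API_TOKEN"
```

`set -a` matters: a plain `. ./.env` would set shell variables without exporting them, so child processes like `kubectl` would not see them.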
**.gitignore** (vendored, 7 changed lines)

@@ -1,6 +1,11 @@

```
ansible/inventory/hosts.ini
# Talos generated
talos/out/
talos/kubeconfig

# Local secrets
age-key.txt
.env

# Generated by ansible noble_landing_urls
ansible/output/noble-lab-ui-urls.md
```
**.sops.yaml** (new file, 7 lines)

@@ -0,0 +1,7 @@

```yaml
# Mozilla SOPS — encrypt/decrypt Kubernetes Secret manifests under clusters/noble/secrets/
# Generate a key: age-keygen -o age-key.txt (age-key.txt is gitignored)
# Add the printed public key below (one recipient per line is supported).
creation_rules:
  - path_regex: clusters/noble/secrets/.*\.yaml$
    age: >-
      age1juym5p3ez3dkt0dxlznydgfgqvaujfnyk9hpdsssf50hsxeh3p4sjpf3gn
```
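The comment notes that one recipient per line is supported. A hypothetical sketch of what a multi-recipient rule could look like — the second age key below is invented for illustration and is not from this repository:

```yaml
# Sketch only: SOPS accepts a comma-separated recipient list in the `age` field,
# so the folded block can list one public key per line.
creation_rules:
  - path_regex: clusters/noble/secrets/.*\.yaml$
    age: >-
      age1juym5p3ez3dkt0dxlznydgfgqvaujfnyk9hpdsssf50hsxeh3p4sjpf3gn,
      age1exampleexampleexampleexampleexampleexampleexampleexample
```

Any holder of a matching private key can then decrypt, which is useful for a second workstation or a CI decryption key.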
@@ -180,6 +180,12 @@ Shared services used across multiple applications.

**Configuration:** Requires Pangolin endpoint URL, Newt ID, and Newt secret.

### versitygw/ (`komodo/s3/versitygw/`)

- **[Versity S3 Gateway](https://github.com/versity/versitygw)** — S3 API on port **10000** by default; optional **WebUI** on **8080** (not the same listener—enable `VERSITYGW_WEBUI_PORT` / `VGW_WEBUI_GATEWAYS` per `.env.sample`). Behind **Pangolin**, expose the API and WebUI separately (or you will see **404** when browsing the API URL).

**Configuration:** Set either `ROOT_ACCESS_KEY` / `ROOT_SECRET_KEY` or `ROOT_ACCESS_KEY_ID` / `ROOT_SECRET_ACCESS_KEY`. Optional `VERSITYGW_PORT`. Compose uses `${VAR}` interpolation so credentials work with Komodo’s `docker compose --env-file <run_directory>/.env` (avoid `env_file:` in the service when `run_directory` is not the same folder as `compose.yaml`, or the written `.env` will not be found).

---

## 📊 Monitoring (`komodo/monitor/`)
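The configuration note above relies on compose-time `${VAR}` interpolation rather than `env_file:`. A minimal hypothetical fragment of that pattern; the service layout, image tag, and command are illustrative, not copied from this repo's `compose.yaml`:

```yaml
# Sketch: ${VAR} values are resolved by `docker compose --env-file <run_directory>/.env`
# when the config is parsed, so they work regardless of where the .env file lives.
services:
  versitygw:
    image: versity/versitygw:latest
    environment:
      ROOT_ACCESS_KEY: "${ROOT_ACCESS_KEY}"
      ROOT_SECRET_KEY: "${ROOT_SECRET_KEY}"
    ports:
      - "${VERSITYGW_PORT:-10000}:10000"   # default 10000 when unset
```

By contrast, `env_file:` paths are resolved inside the container definition relative to the compose file, which is why it breaks when `run_directory` differs from the `compose.yaml` folder.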
**ansible/.gitignore** (vendored, new file, 1 line)

@@ -0,0 +1 @@

```
.ansible-tmp/
```
@@ -1,119 +1,160 @@

# Proxmox VM Management Suite
# Ansible — noble cluster

A comprehensive Ansible automation suite for managing Proxmox Virtual Machines. This suite allows you to easily create Cloud-Init templates, provision new VMs, manage backups, and decommission resources across multiple Proxmox hosts.
Automates [`talos/CLUSTER-BUILD.md`](../talos/CLUSTER-BUILD.md): optional **Talos Phase A** (genconfig → apply → bootstrap → kubeconfig), then **Phase B+** (CNI → add-ons → ingress → Argo CD → Kyverno → observability, etc.). **Argo CD** does not reconcile core charts — optional GitOps starts from an empty [`clusters/noble/apps/kustomization.yaml`](../clusters/noble/apps/kustomization.yaml).

## Features
## Order of operations

- **Template Management**:
  - Automatically download Cloud Images (Ubuntu, Debian, etc.).
  - Pre-configured with Cloud-Init (SSH keys, IP Config).
  - Support for selecting images from a curated list or custom URLs.
- **VM Provisioning**:
  - Clone from templates (Full or Linked clones).
  - Auto-start option.
- **Lifecycle Management**:
  - Backup VMs (Snapshot mode).
  - Delete/Purge VMs.
- **Security**:
  - **Automatic SSH Key Injection**: Automatically adds a defined Admin SSH key to every template.
  - Support for injecting additional SSH keys per deployment.

1. **From `talos/`:** `talhelper gensecret` / `talsecret` as in [`talos/README.md`](../talos/README.md) §1 (if not already done).
2. **Talos Phase A (automated):** run [`playbooks/talos_phase_a.yml`](playbooks/talos_phase_a.yml) **or** the full pipeline [`playbooks/deploy.yml`](playbooks/deploy.yml). This runs **`talhelper genconfig -o out`**, **`talosctl apply-config`** on each node, **`talosctl bootstrap`**, and **`talosctl kubeconfig`** → **`talos/kubeconfig`**.
3. **Platform stack:** [`playbooks/noble.yml`](playbooks/noble.yml) (included at the end of **`deploy.yml`**).

## Setup
Your workstation must be able to reach **node IPs on the lab LAN** (Talos API **:50000** for `talosctl`, Kubernetes **:6443** for `kubectl` / Helm). If `kubectl` cannot reach the VIP (`192.168.50.230`), use `-e 'noble_k8s_api_server_override=https://<control-plane-ip>:6443'` on **`noble.yml`** (see `group_vars/all.yml`).

### 1. Requirements

Install the required Ansible collections:

```bash
ansible-galaxy install -r requirements.yml
```

### 2. Configuration

Edit `roles/proxmox_vm/defaults/main.yml` to set your global defaults, specifically the **Admin SSH Key**.

**Important Variable to Change:**

```yaml
# ansible/roles/proxmox_vm/defaults/main.yml
admin_ssh_key: "ssh-ed25519 AAAAC3... your-actual-public-key"
```

## Usage

The main entry point is the playbook `playbooks/manage_vm.yml`. You control the behavior using the `proxmox_action` variable.

### 1. Create a Cloud-Init Template

You can create a template by selecting a predefined alias (e.g., `ubuntu-22.04`) or providing a custom URL.

**Option A: Select from List (Default)**
Current aliases: `ubuntu-22.04`, `ubuntu-24.04`, `debian-12`.
**One-shot full deploy** (after nodes are booted and reachable):

```bash
# Create Ubuntu 22.04 Template (ID: 9000)
ansible-playbook playbooks/manage_vm.yml \
  -e "proxmox_action=create_template vmid=9000 template_name=ubuntu-22-template image_alias=ubuntu-22.04"
cd ansible
ansible-playbook playbooks/deploy.yml
```

**Option B: Custom URL**

```bash
ansible-playbook playbooks/manage_vm.yml \
  -e "proxmox_action=create_template \
      vmid=9001 \
      template_name=custom-linux \
      image_source_type=url \
      custom_image_url='https://example.com/image.qcow2'"
```

## Deploy secrets (`.env`)

### 2. Create a VM from Template
Copy **`.env.sample`** to **`.env`** at the repository root (`.env` is gitignored). At minimum set **`CLOUDFLARE_DNS_API_TOKEN`** for cert-manager DNS-01. The **cert-manager** role applies it automatically during **`noble.yml`**. See **`.env.sample`** for optional placeholders (e.g. Newt/Pangolin).

Clone a valid template to a new VM.
## Prerequisites

- `talosctl` (matches node Talos version), `talhelper`, `helm`, `kubectl`.
- **SOPS secrets:** `sops` and `age` on the control host if you use **`clusters/noble/secrets/`** with **`age-key.txt`** (see **`clusters/noble/secrets/README.md`**).
- **Phase A:** same LAN/VPN as nodes so **Talos :50000** and **Kubernetes :6443** are reachable (see [`talos/README.md`](../talos/README.md) §3).
- **noble.yml:** bootstrapped cluster and **`talos/kubeconfig`** (or `KUBECONFIG`).

## Playbooks

| Playbook | Purpose |
|----------|---------|
| [`playbooks/deploy.yml`](playbooks/deploy.yml) | **Talos Phase A** then **`noble.yml`** (full automation). |
| [`playbooks/talos_phase_a.yml`](playbooks/talos_phase_a.yml) | `genconfig` → `apply-config` → `bootstrap` → `kubeconfig` only. |
| [`playbooks/noble.yml`](playbooks/noble.yml) | Helm + `kubectl` platform (after Phase A). |
| [`playbooks/post_deploy.yml`](playbooks/post_deploy.yml) | SOPS reminders and optional Argo root Application note. |
| [`playbooks/talos_bootstrap.yml`](playbooks/talos_bootstrap.yml) | **`talhelper genconfig` only** (legacy shortcut; prefer **`talos_phase_a.yml`**). |
| [`playbooks/debian_harden.yml`](playbooks/debian_harden.yml) | Baseline hardening for Debian servers (SSH/sysctl/fail2ban/unattended-upgrades). |
| [`playbooks/debian_maintenance.yml`](playbooks/debian_maintenance.yml) | Debian maintenance run (apt upgrades, autoremove/autoclean, reboot when required). |
| [`playbooks/debian_rotate_ssh_keys.yml`](playbooks/debian_rotate_ssh_keys.yml) | Rotate managed users' `authorized_keys`. |
| [`playbooks/debian_ops.yml`](playbooks/debian_ops.yml) | Convenience pipeline: harden then maintenance for Debian servers. |
| [`playbooks/proxmox_prepare.yml`](playbooks/proxmox_prepare.yml) | Configure Proxmox community repos and disable no-subscription UI warning. |
| [`playbooks/proxmox_upgrade.yml`](playbooks/proxmox_upgrade.yml) | Proxmox maintenance run (apt dist-upgrade, cleanup, reboot when required). |
| [`playbooks/proxmox_cluster.yml`](playbooks/proxmox_cluster.yml) | Create a Proxmox cluster on the master and join additional hosts. |
| [`playbooks/proxmox_ops.yml`](playbooks/proxmox_ops.yml) | Convenience pipeline: prepare, upgrade, then cluster Proxmox hosts. |

```bash
ansible-playbook playbooks/manage_vm.yml \
  -e "proxmox_action=create_vm \
      vmid=9000 \
      new_vmid=105 \
      new_vm_name=web-server-01"
cd ansible
export KUBECONFIG=/absolute/path/to/home-server/talos/kubeconfig

# noble.yml only — if VIP is unreachable from this host:
# ansible-playbook playbooks/noble.yml -e 'noble_k8s_api_server_override=https://192.168.50.20:6443'

ansible-playbook playbooks/noble.yml
ansible-playbook playbooks/post_deploy.yml
```

### 3. Backup a VM
### Talos Phase A variables (role `talos_phase_a` defaults)

Create a snapshot backup of a specific VM.
Override with `-e` when needed, e.g. **`-e noble_talos_skip_bootstrap=true`** if etcd is already initialized.

| Variable | Default | Meaning |
|----------|---------|---------|
| `noble_talos_genconfig` | `true` | Run **`talhelper genconfig -o out`** first. |
| `noble_talos_apply_mode` | `auto` | **`auto`** — **`talosctl apply-config --dry-run`** on the first node picks maintenance (**`--insecure`**) vs joined (**`TALOSCONFIG`**). **`insecure`** / **`secure`** force talos/README §2 A or B. |
| `noble_talos_skip_bootstrap` | `false` | Skip **`talosctl bootstrap`**. If etcd is **already** initialized, bootstrap is treated as a no-op (same as **`talosctl`** “etcd data directory is not empty”). |
| `noble_talos_apid_wait_delay` / `noble_talos_apid_wait_timeout` | `20` / `900` | Seconds to wait for **apid :50000** on the bootstrap node after **apply-config** (nodes reboot). Increase if bootstrap hits **connection refused** to `:50000`. |
| `noble_talos_nodes` | neon/argon/krypton/helium | IP + **`out/*.yaml`** filename — align with **`talos/talconfig.yaml`**. |

### Tags (partial runs)

```bash
ansible-playbook playbooks/manage_vm.yml \
  -e "proxmox_action=backup_vm vmid=105"
ansible-playbook playbooks/noble.yml --tags cilium,metallb
ansible-playbook playbooks/noble.yml --skip-tags newt
ansible-playbook playbooks/noble.yml --tags velero -e noble_velero_install=true -e noble_velero_s3_bucket=... -e noble_velero_s3_url=...
```

### 4. Delete a VM
### Variables — `group_vars/all.yml` and role defaults

Stop and purge a VM.

- **`group_vars/all.yml`:** **`noble_newt_install`**, **`noble_velero_install`**, **`noble_cert_manager_require_cloudflare_secret`**, **`noble_argocd_apply_root_application`**, **`noble_argocd_apply_bootstrap_root_application`**, **`noble_k8s_api_server_override`**, **`noble_k8s_api_server_auto_fallback`**, **`noble_k8s_api_server_fallback`**, **`noble_skip_k8s_health_check`**
- **`roles/noble_platform/defaults/main.yml`:** **`noble_apply_sops_secrets`**, **`noble_sops_age_key_file`** (SOPS secrets under **`clusters/noble/secrets/`**)

## Roles

| Role | Contents |
|------|----------|
| `talos_phase_a` | Talos genconfig, apply-config, bootstrap, kubeconfig |
| `helm_repos` | `helm repo add` / `update` |
| `noble_*` | Cilium, CSI Volume Snapshot CRDs + controller, metrics-server, Longhorn, MetalLB (20m Helm wait), kube-vip, Traefik, cert-manager, Newt, Argo CD, Kyverno, platform stack, Velero (optional) |
| `noble_landing_urls` | Writes **`ansible/output/noble-lab-ui-urls.md`** — URLs, service names, and (optional) Argo/Grafana passwords from Secrets |
| `noble_post_deploy` | Post-install reminders |
| `talos_bootstrap` | Genconfig-only (used by older playbook) |
| `debian_baseline_hardening` | Baseline Debian hardening (SSH policy, sysctl profile, fail2ban, unattended upgrades) |
| `debian_maintenance` | Routine Debian maintenance tasks (updates, cleanup, reboot-on-required) |
| `debian_ssh_key_rotation` | Declarative `authorized_keys` rotation for server users |
| `proxmox_baseline` | Proxmox repo prep (community repos) and no-subscription warning suppression |
| `proxmox_maintenance` | Proxmox package maintenance (dist-upgrade, cleanup, reboot-on-required) |
| `proxmox_cluster` | Proxmox cluster bootstrap/join automation using `pvecm` |

## Debian server ops quick start

These playbooks are separate from the Talos/noble flow and target hosts in `debian_servers`.

1. Copy `inventory/debian.example.yml` to `inventory/debian.yml` and update hosts/users.
2. Update `group_vars/debian_servers.yml` with your allowed SSH users and real public keys.
3. Run with the Debian inventory:

```bash
ansible-playbook playbooks/manage_vm.yml \
  -e "proxmox_action=delete_vm vmid=105"
cd ansible
ansible-playbook -i inventory/debian.yml playbooks/debian_harden.yml
ansible-playbook -i inventory/debian.yml playbooks/debian_rotate_ssh_keys.yml
ansible-playbook -i inventory/debian.yml playbooks/debian_maintenance.yml
```

## Advanced Usage

### Handling Multiple Hosts
You can target a specific Proxmox node using the `target_host` variable.
Or run the combined maintenance pipeline:

```bash
ansible-playbook playbooks/manage_vm.yml -e "proxmox_action=create_vm ... target_host=mercury"
cd ansible
ansible-playbook -i inventory/debian.yml playbooks/debian_ops.yml
```

### Injecting Additional SSH Keys
You can add extra SSH keys for a specific run (or add them to the defaults file).
## Proxmox host + cluster quick start

These playbooks are separate from the Talos/noble flow and target hosts in `proxmox_hosts`.

1. Copy `inventory/proxmox.example.yml` to `inventory/proxmox.yml` and update hosts/users.
2. Update `group_vars/proxmox_hosts.yml` with your cluster name (`proxmox_cluster_name`), chosen cluster master, and root public key file paths to install.
3. First run (no SSH keys yet): use `--ask-pass` **or** set `ansible_password` (prefer Ansible Vault). Keep `ansible_ssh_common_args: "-o StrictHostKeyChecking=accept-new"` in inventory for first-contact hosts.
4. Run prepare first to install your public keys on each host, then continue:

```bash
ansible-playbook playbooks/manage_vm.yml \
  -e "proxmox_action=create_template ... additional_ssh_keys=['ssh-rsa AAAAB3... key1', 'ssh-ed25519 AAAA... key2']"
cd ansible
ansible-playbook -i inventory/proxmox.yml playbooks/proxmox_prepare.yml --ask-pass
ansible-playbook -i inventory/proxmox.yml playbooks/proxmox_upgrade.yml
ansible-playbook -i inventory/proxmox.yml playbooks/proxmox_cluster.yml
```

## Directory Structure
After `proxmox_prepare.yml` finishes, SSH key auth should work for root (keys from `proxmox_root_authorized_key_files`), so `--ask-pass` is usually no longer needed.

- `roles/proxmox_vm/`: Core logic role.
  - `defaults/main.yml`: Configuration variables (Images, Keys, Defaults).
  - `tasks/`: Action modules (`create_template.yml`, `create_vm.yml`, etc.).
- `inventory/`: Host definitions.

If `pvecm add` still prompts for the master root password during join, set `proxmox_cluster_master_root_password` (prefer Vault) to run join non-interactively.

Changing `proxmox_cluster_name` only affects new cluster creation; it does not rename an already-created cluster.

Or run the full Proxmox pipeline:

```bash
cd ansible
ansible-playbook -i inventory/proxmox.yml playbooks/proxmox_ops.yml
```

## Migrating from Argo-managed `noble-platform`

```bash
kubectl delete application -n argocd noble-platform noble-kyverno noble-kyverno-policies --ignore-not-found
kubectl apply -f clusters/noble/bootstrap/argocd/root-application.yaml
```

Then run `playbooks/noble.yml` so Helm state matches git values.
**ansible.cfg**

@@ -1,6 +1,10 @@

```ini
[defaults]
inventory = inventory/hosts.ini
host_key_checking = False
interpreter_python = auto_silent
inventory = inventory/localhost.yml
roles_path = roles
retry_files_enabled = False
stdout_callback = default
callback_result_format = yaml
local_tmp = .ansible-tmp

[privilege_escalation]
become = False
```
**ansible/group_vars/all.yml** (new file, 28 lines)

@@ -0,0 +1,28 @@

```yaml
---
# noble_repo_root / noble_kubeconfig are set in playbooks (use **playbook_dir** magic var).

# When kubeconfig points at the API VIP but this workstation cannot reach the lab LAN (VPN off, etc.),
# set a reachable control-plane URL — same as: kubectl config set-cluster noble --server=https://<cp-ip>:6443
# Example: ansible-playbook playbooks/noble.yml -e 'noble_k8s_api_server_override=https://192.168.50.20:6443'
noble_k8s_api_server_override: ""

# When /healthz fails with **network unreachable** to the VIP and **override** is empty, retry using this URL (neon).
noble_k8s_api_server_auto_fallback: true
noble_k8s_api_server_fallback: "https://192.168.50.20:6443"

# Only if you must skip the kubectl /healthz preflight (not recommended).
noble_skip_k8s_health_check: false

# Pangolin / Newt — set true only after newt-pangolin-auth Secret exists (SOPS: clusters/noble/secrets/ or imperative — see clusters/noble/bootstrap/newt/README.md)
noble_newt_install: false

# cert-manager needs Secret cloudflare-dns-api-token in cert-manager namespace before ClusterIssuers work
noble_cert_manager_require_cloudflare_secret: true

# Velero — set **noble_velero_install: true** plus S3 bucket/URL (and credentials — see clusters/noble/bootstrap/velero/README.md)
noble_velero_install: false

# Argo CD — apply app-of-apps root Application (clusters/noble/bootstrap/argocd/root-application.yaml). Set false to skip.
noble_argocd_apply_root_application: true
# Bootstrap kustomize in Argo (**noble-bootstrap-root** → **clusters/noble/bootstrap**). Applied with manual sync; enable automation after **noble.yml** (see **clusters/noble/bootstrap/argocd/README.md** §5).
noble_argocd_apply_bootstrap_root_application: true
```
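What `noble_k8s_api_server_override` does can be sketched without a cluster: rewrite the `server:` field of a kubeconfig copy, leaving the original untouched. The playbook does this with `kubectl config set-cluster` on a copied file; plain `sed` is used below only so the idea is self-contained and runnable, and the kubeconfig is a minimal stand-in:

```shell
# Minimal kubeconfig stand-in; only the server field matters for this demo.
cat > kubeconfig.orig <<'EOF'
apiVersion: v1
kind: Config
clusters:
  - name: noble
    cluster:
      server: https://192.168.50.230:6443
EOF

# Patch a copy to point at a reachable control-plane node
# (example IPs taken from group_vars/all.yml above).
sed 's|server: https://192.168.50.230:6443|server: https://192.168.50.20:6443|' \
  kubeconfig.orig > kubeconfig.patched

grep 'server:' kubeconfig.patched
```

Tools that then run with `KUBECONFIG=kubeconfig.patched` talk to the reachable node, while the original VIP-based kubeconfig stays intact.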
**ansible/group_vars/debian_servers.yml** (new file, 12 lines)

@@ -0,0 +1,12 @@

```yaml
---
# Hardened SSH settings
debian_baseline_ssh_allow_users:
  - admin

# Example key rotation entries. Replace with your real users and keys.
debian_ssh_rotation_users:
  - name: admin
    home: /home/admin
    state: present
    keys:
      - "ssh-ed25519 AAAAEXAMPLE_REPLACE_ME admin@workstation"
```
**ansible/group_vars/proxmox_hosts.yml** (new file, 37 lines)

@@ -0,0 +1,37 @@

```yaml
---
# Proxmox repositories
proxmox_repo_debian_codename: trixie
proxmox_repo_disable_enterprise: true
proxmox_repo_disable_ceph_enterprise: true
proxmox_repo_enable_pve_no_subscription: true
proxmox_repo_enable_ceph_no_subscription: true

# Suppress "No valid subscription" warning in UI
proxmox_no_subscription_notice_disable: true

# Public keys to install for root on each Proxmox host.
proxmox_root_authorized_key_files:
  - "{{ lookup('env', 'HOME') }}/.ssh/id_ed25519.pub"
  - "{{ lookup('env', 'HOME') }}/.ssh/ansible.pub"

# Package upgrade/reboot policy
proxmox_upgrade_apt_cache_valid_time: 3600
proxmox_upgrade_autoremove: true
proxmox_upgrade_autoclean: true
proxmox_upgrade_reboot_if_required: true
proxmox_upgrade_reboot_timeout: 1800

# Cluster settings
proxmox_cluster_enabled: true
proxmox_cluster_name: atomic-hub

# Bootstrap host name from inventory (first host by default if empty)
proxmox_cluster_master: ""

# Optional explicit IP/FQDN for joining; leave empty to use ansible_host of master
proxmox_cluster_master_ip: ""
proxmox_cluster_force: false

# Optional: use only for first cluster joins when inter-node SSH trust is not established.
# Prefer storing with Ansible Vault if you set this.
proxmox_cluster_master_root_password: "Hemroid8"
```
**ansible/inventory/debian.example.yml** (new file, 11 lines)

@@ -0,0 +1,11 @@

```yaml
---
all:
  children:
    debian_servers:
      hosts:
        debian-01:
          ansible_host: 192.168.50.101
          ansible_user: admin
        debian-02:
          ansible_host: 192.168.50.102
          ansible_user: admin
```
@@ -1,14 +0,0 @@

```ini
[proxmox]
# Replace pve1 with your proxmox node hostname or IP
mercury ansible_host=192.168.50.100 ansible_user=root

[proxmox:vars]
# If using password auth (ssh key recommended though):
# ansible_ssh_pass=yourpassword

# Connection variables for the proxmox modules (api)
proxmox_api_user=root@pam
proxmox_api_password=CHANGE_ME
proxmox_api_host=192.168.50.100
# proxmox_api_token_id=
# proxmox_api_token_secret=
```
**ansible/inventory/localhost.yml** (new file, 6 lines)

@@ -0,0 +1,6 @@

```yaml
---
all:
  hosts:
    localhost:
      ansible_connection: local
      ansible_python_interpreter: "{{ ansible_playbook_python }}"
```
**ansible/inventory/proxmox.example.yml** (new file, 24 lines)

@@ -0,0 +1,24 @@

```yaml
---
all:
  children:
    proxmox_hosts:
      vars:
        ansible_ssh_common_args: "-o StrictHostKeyChecking=accept-new"
      hosts:
        helium:
          ansible_host: 192.168.1.100
          ansible_user: root
          # First run without SSH keys:
          # ansible_password: "{{ vault_proxmox_root_password }}"
        neon:
          ansible_host: 192.168.1.90
          ansible_user: root
          # ansible_password: "{{ vault_proxmox_root_password }}"
        argon:
          ansible_host: 192.168.1.80
          ansible_user: root
          # ansible_password: "{{ vault_proxmox_root_password }}"
        krypton:
          ansible_host: 192.168.1.70
          ansible_user: root
          # ansible_password: "{{ vault_proxmox_root_password }}"
```
**ansible/inventory/proxmox.yml** (new file, 24 lines)

@@ -0,0 +1,24 @@

```yaml
---
all:
  children:
    proxmox_hosts:
      vars:
        ansible_ssh_common_args: "-o StrictHostKeyChecking=accept-new"
      hosts:
        helium:
          ansible_host: 192.168.1.100
          ansible_user: root
          # First run without SSH keys:
          # ansible_password: "{{ vault_proxmox_root_password }}"
        neon:
          ansible_host: 192.168.1.90
          ansible_user: root
          # ansible_password: "{{ vault_proxmox_root_password }}"
        argon:
          ansible_host: 192.168.1.80
          ansible_user: root
          # ansible_password: "{{ vault_proxmox_root_password }}"
        krypton:
          ansible_host: 192.168.1.70
          ansible_user: root
          # ansible_password: "{{ vault_proxmox_root_password }}"
```
@@ -1,16 +0,0 @@

```yaml
---
- name: Create Ubuntu Cloud-Init Template
  hosts: proxmox
  become: yes

  vars:
    template_id: 9000
    template_name: ubuntu-2204-cloud
    # Override defaults if needed
    image_alias: ubuntu-22.04
    storage_pool: local-lvm

  tasks:
    - name: Run Proxmox Template Manage Role
      include_role:
        name: proxmox_template_manage
```
**ansible/playbooks/debian_harden.yml** (new file, 8 lines)

@@ -0,0 +1,8 @@

```yaml
---
- name: Debian server baseline hardening
  hosts: debian_servers
  become: true
  gather_facts: true
  roles:
    - role: debian_baseline_hardening
      tags: [hardening, baseline]
```
**ansible/playbooks/debian_maintenance.yml** (new file, 8 lines)

@@ -0,0 +1,8 @@

```yaml
---
- name: Debian maintenance (updates + reboot handling)
  hosts: debian_servers
  become: true
  gather_facts: true
  roles:
    - role: debian_maintenance
      tags: [maintenance, updates]
```
**ansible/playbooks/debian_ops.yml** (new file, 3 lines)

@@ -0,0 +1,3 @@

```yaml
---
- import_playbook: debian_harden.yml
- import_playbook: debian_maintenance.yml
```
**ansible/playbooks/debian_rotate_ssh_keys.yml** (new file, 8 lines)

@@ -0,0 +1,8 @@

```yaml
---
- name: Debian SSH key rotation
  hosts: debian_servers
  become: true
  gather_facts: false
  roles:
    - role: debian_ssh_key_rotation
      tags: [ssh, ssh_keys, rotation]
```
**ansible/playbooks/deploy.yml** (new file, 5 lines)

@@ -0,0 +1,5 @@

```yaml
---
# Full bring-up: Talos Phase A then platform stack.
# Run from **ansible/**: ansible-playbook playbooks/deploy.yml
- import_playbook: talos_phase_a.yml
- import_playbook: noble.yml
```
@@ -1,33 +0,0 @@

```yaml
---
- name: Hello World Provisioning
  hosts: localhost  # Run API calls from control node
  gather_facts: no
  vars_files:
    - "../inventory/hosts.ini"  # Load connection details if needed manually, OR rely on inventory

  vars:
    # Target Proxmox Details (override from inventory/extra vars)
    proxmox_api_host: "192.168.50.100"
    proxmox_api_user: "root@pam"
    proxmox_api_password: "Hemroid8"  # Consider moving to Vault!
    proxmox_node: "mercury"

    # VM Spec
    vmid: 101
    vm_name: "hello-world-vm"
    template_name: "ubuntu-2204-cloud"
    ci_user: "ubuntu"
    # Replace with your actual public key or pass via -e "ssh_key=..."
    ssh_keys:
      - "{{ ssh_key | default('ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAI...') }}"

  tasks:
    - name: Run Proxmox Provision Role
      include_role:
        name: proxmox_provision
      vars:
        vmid: "{{ vmid }}"
        vm_name: "{{ vm_name }}"
        template_name: "{{ template_name }}"
        ci_user: "{{ ci_user }}"
        ssh_keys: "{{ ssh_keys }}"
```
@@ -1,6 +0,0 @@

```yaml
---
- name: Manage Proxmox VMs
  hosts: "{{ target_host | default('proxmox') }}"
  become: yes
  roles:
    - proxmox_vm
```
232
ansible/playbooks/noble.yml
Normal file
232
ansible/playbooks/noble.yml
Normal file
@@ -0,0 +1,232 @@
|
||||
---
# Full platform install — **after** Talos bootstrap (`talosctl bootstrap` + working kubeconfig).
# Do not run until `kubectl get --raw /healthz` returns ok (see talos/README.md §3, CLUSTER-BUILD Phase A).
# Run from the repo **ansible/** directory: ansible-playbook playbooks/noble.yml
#
# Tags: repos, cilium, csi_snapshot, metrics, longhorn, metallb, kube_vip, traefik, cert_manager, newt,
#       argocd, kyverno, kyverno_policies, platform, velero, all (default)
- name: Noble cluster — platform stack (Ansible-managed)
  hosts: localhost
  connection: local
  gather_facts: false
  vars:
    noble_repo_root: "{{ playbook_dir | dirname | dirname }}"
    noble_kubeconfig: "{{ lookup('env', 'KUBECONFIG') | default(noble_repo_root + '/talos/kubeconfig', true) }}"
  environment:
    KUBECONFIG: "{{ noble_kubeconfig }}"
  pre_tasks:
    # Helm/kubectl use $KUBECONFIG; a missing file yields "connection refused" to localhost:8080.
    - name: Stat kubeconfig path from KUBECONFIG or default
      ansible.builtin.stat:
        path: "{{ noble_kubeconfig }}"
      register: noble_kubeconfig_stat
      tags: [always]

    - name: Fall back to repo talos/kubeconfig when $KUBECONFIG is unset or not a file
      ansible.builtin.set_fact:
        noble_kubeconfig: "{{ noble_repo_root }}/talos/kubeconfig"
      when: not (noble_kubeconfig_stat.stat.exists | default(false))
      tags: [always]

    - name: Stat kubeconfig after fallback
      ansible.builtin.stat:
        path: "{{ noble_kubeconfig }}"
      register: noble_kubeconfig_stat2
      tags: [always]

    - name: Require a real kubeconfig file
      ansible.builtin.assert:
        that:
          - noble_kubeconfig_stat2.stat.exists | default(false)
          - noble_kubeconfig_stat2.stat.isreg | default(false)
        fail_msg: >-
          No kubeconfig file at {{ noble_kubeconfig }}.
          Fix: export KUBECONFIG=/actual/path/from/talosctl-kubeconfig (see talos/README.md),
          or copy the admin kubeconfig to {{ noble_repo_root }}/talos/kubeconfig.
          Do not use documentation placeholders as the path.
      tags: [always]
    - name: Ensure temp dir for kubeconfig API override
      ansible.builtin.file:
        path: "{{ noble_repo_root }}/ansible/.ansible-tmp"
        state: directory
        mode: "0700"
      when: noble_k8s_api_server_override | default('') | length > 0
      tags: [always]

    - name: Copy kubeconfig for API server override (original file unchanged)
      ansible.builtin.copy:
        src: "{{ noble_kubeconfig }}"
        dest: "{{ noble_repo_root }}/ansible/.ansible-tmp/kubeconfig.patched"
        mode: "0600"
      when: noble_k8s_api_server_override | default('') | length > 0
      tags: [always]

    - name: Resolve current cluster name (for set-cluster)
      ansible.builtin.command:
        argv:
          - kubectl
          - config
          - view
          - --minify
          - -o
          - jsonpath={.clusters[0].name}
      environment:
        KUBECONFIG: "{{ noble_repo_root }}/ansible/.ansible-tmp/kubeconfig.patched"
      register: noble_k8s_cluster_name
      changed_when: false
      when: noble_k8s_api_server_override | default('') | length > 0
      tags: [always]

    - name: Point patched kubeconfig at reachable apiserver
      ansible.builtin.command:
        argv:
          - kubectl
          - config
          - set-cluster
          - "{{ noble_k8s_cluster_name.stdout }}"
          - --server={{ noble_k8s_api_server_override }}
          - --kubeconfig={{ noble_repo_root }}/ansible/.ansible-tmp/kubeconfig.patched
      when: noble_k8s_api_server_override | default('') | length > 0
      changed_when: true
      tags: [always]

    - name: Use patched kubeconfig for this play
      ansible.builtin.set_fact:
        noble_kubeconfig: "{{ noble_repo_root }}/ansible/.ansible-tmp/kubeconfig.patched"
      when: noble_k8s_api_server_override | default('') | length > 0
      tags: [always]

    - name: Verify Kubernetes API is reachable from this host
      ansible.builtin.command:
        argv:
          - kubectl
          - get
          - --raw
          - /healthz
          - --request-timeout=15s
      environment:
        KUBECONFIG: "{{ noble_kubeconfig }}"
      register: noble_k8s_health_first
      failed_when: false
      changed_when: false
      tags: [always]
    # talosctl kubeconfig often sets server to the VIP; off-LAN you can reach a control-plane IP but not 192.168.50.230.
    # kubectl stderr is often "The connection to the server ... was refused" (no substring "connection refused").
    - name: Auto-fallback API server when VIP is unreachable (temp kubeconfig)
      tags: [always]
      when:
        - noble_k8s_api_server_auto_fallback | default(true) | bool
        - noble_k8s_api_server_override | default('') | length == 0
        - not (noble_skip_k8s_health_check | default(false) | bool)
        - (noble_k8s_health_first.rc | default(1)) != 0 or (noble_k8s_health_first.stdout | default('') | trim) != 'ok'
        - (((noble_k8s_health_first.stderr | default('')) ~ (noble_k8s_health_first.stdout | default(''))) | lower) is search('network is unreachable|no route to host|connection refused|was refused', multiline=False)
      block:
        - name: Ensure temp dir for kubeconfig auto-fallback
          ansible.builtin.file:
            path: "{{ noble_repo_root }}/ansible/.ansible-tmp"
            state: directory
            mode: "0700"

        - name: Copy kubeconfig for API auto-fallback
          ansible.builtin.copy:
            src: "{{ noble_kubeconfig }}"
            dest: "{{ noble_repo_root }}/ansible/.ansible-tmp/kubeconfig.auto-fallback"
            mode: "0600"

        - name: Resolve cluster name for kubectl set-cluster
          ansible.builtin.command:
            argv:
              - kubectl
              - config
              - view
              - --minify
              - -o
              - jsonpath={.clusters[0].name}
          environment:
            KUBECONFIG: "{{ noble_repo_root }}/ansible/.ansible-tmp/kubeconfig.auto-fallback"
          register: noble_k8s_cluster_fb
          changed_when: false

        - name: Point temp kubeconfig at fallback apiserver
          ansible.builtin.command:
            argv:
              - kubectl
              - config
              - set-cluster
              - "{{ noble_k8s_cluster_fb.stdout }}"
              - --server={{ noble_k8s_api_server_fallback | default('https://192.168.50.20:6443', true) }}
              - --kubeconfig={{ noble_repo_root }}/ansible/.ansible-tmp/kubeconfig.auto-fallback
          changed_when: true

        - name: Use kubeconfig with fallback API server for this play
          ansible.builtin.set_fact:
            noble_kubeconfig: "{{ noble_repo_root }}/ansible/.ansible-tmp/kubeconfig.auto-fallback"

        - name: Re-verify Kubernetes API after auto-fallback
          ansible.builtin.command:
            argv:
              - kubectl
              - get
              - --raw
              - /healthz
              - --request-timeout=15s
          environment:
            KUBECONFIG: "{{ noble_kubeconfig }}"
          register: noble_k8s_health_after_fallback
          failed_when: false
          changed_when: false

        - name: Mark that API was re-checked after kubeconfig fallback
          ansible.builtin.set_fact:
            noble_k8s_api_fallback_used: true

    - name: Normalize API health result for preflight (scalars; avoids dict merge / set_fact stringification)
      ansible.builtin.set_fact:
        noble_k8s_health_rc: "{{ noble_k8s_health_after_fallback.rc | default(1) if (noble_k8s_api_fallback_used | default(false) | bool) else (noble_k8s_health_first.rc | default(1)) }}"
        noble_k8s_health_stdout: "{{ noble_k8s_health_after_fallback.stdout | default('') if (noble_k8s_api_fallback_used | default(false) | bool) else (noble_k8s_health_first.stdout | default('')) }}"
        noble_k8s_health_stderr: "{{ noble_k8s_health_after_fallback.stderr | default('') if (noble_k8s_api_fallback_used | default(false) | bool) else (noble_k8s_health_first.stderr | default('')) }}"
      tags: [always]

    - name: Fail when API check did not return ok
      ansible.builtin.fail:
        msg: "{{ lookup('template', 'templates/api_health_hint.j2') }}"
      when:
        - not (noble_skip_k8s_health_check | default(false) | bool)
        - (noble_k8s_health_rc | int) != 0 or (noble_k8s_health_stdout | default('') | trim) != 'ok'
      tags: [always]
  roles:
    - role: helm_repos
      tags: [repos, helm]
    - role: noble_cilium
      tags: [cilium, cni]
    - role: noble_csi_snapshot_controller
      tags: [csi_snapshot, snapshot, storage]
    - role: noble_metrics_server
      tags: [metrics, metrics_server]
    - role: noble_longhorn
      tags: [longhorn, storage]
    - role: noble_metallb
      tags: [metallb, lb]
    - role: noble_kube_vip
      tags: [kube_vip, vip]
    - role: noble_traefik
      tags: [traefik, ingress]
    - role: noble_cert_manager
      tags: [cert_manager, certs]
    - role: noble_newt
      tags: [newt]
    - role: noble_argocd
      tags: [argocd, gitops]
    - role: noble_kyverno
      tags: [kyverno, policy]
    - role: noble_kyverno_policies
      tags: [kyverno_policies, policy]
    - role: noble_platform
      tags: [platform, observability, apps]
    - role: noble_velero
      tags: [velero, backups]
    - role: noble_landing_urls
      tags: [landing, platform, observability, apps]
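The play honors an optional `noble_k8s_api_server_override` and per-component tags; a minimal extra-vars sketch for an off-LAN run (filename and IP are illustrative, not from the repo):

```yaml
# vars/offlan.yml (hypothetical filename) — pass with:
#   ansible-playbook playbooks/noble.yml --tags traefik,cert_manager -e @vars/offlan.yml
# Point kubectl/helm at a directly reachable control-plane IP instead of the VIP.
noble_k8s_api_server_override: "https://192.168.50.20:6443"
# Leave the preflight /healthz check enabled (the default).
noble_skip_k8s_health_check: false
```

With the override set, the pre_tasks copy the kubeconfig to a temp file and rewrite only its server URL, so the original kubeconfig is never modified.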
ansible/playbooks/post_deploy.yml (new file, 7 lines)
@@ -0,0 +1,7 @@
---
# Manual follow-ups after **noble.yml**: SOPS key backup, optional Argo root Application.
- hosts: localhost
  connection: local
  gather_facts: false
  roles:
    - noble_post_deploy
ansible/playbooks/proxmox_cluster.yml (new file, 9 lines)
@@ -0,0 +1,9 @@
---
- name: Proxmox cluster bootstrap/join
  hosts: proxmox_hosts
  become: true
  gather_facts: false
  serial: 1
  roles:
    - role: proxmox_cluster
      tags: [proxmox, cluster]
ansible/playbooks/proxmox_ops.yml (new file, 4 lines)
@@ -0,0 +1,4 @@
---
- import_playbook: proxmox_prepare.yml
- import_playbook: proxmox_upgrade.yml
- import_playbook: proxmox_cluster.yml
ansible/playbooks/proxmox_prepare.yml (new file, 8 lines)
@@ -0,0 +1,8 @@
---
- name: Proxmox host preparation (community repos + no-subscription notice)
  hosts: proxmox_hosts
  become: true
  gather_facts: true
  roles:
    - role: proxmox_baseline
      tags: [proxmox, prepare, repos, ui]
ansible/playbooks/proxmox_upgrade.yml (new file, 9 lines)
@@ -0,0 +1,9 @@
---
- name: Proxmox host maintenance (upgrade to latest)
  hosts: proxmox_hosts
  become: true
  gather_facts: true
  serial: 1
  roles:
    - role: proxmox_maintenance
      tags: [proxmox, maintenance, updates]
@@ -1,26 +0,0 @@
---
- name: Register Target Host
  hosts: localhost
  connection: local
  gather_facts: no
  tasks:
    - name: Verify target_host is defined
      fail:
        msg: "The 'target_host' variable must be defined (e.g. 192.168.1.10)"
      when: target_host is not defined

    - name: Add target host to inventory
      add_host:
        name: target_node
        ansible_host: "{{ target_host }}"
        ansible_user: "{{ target_user | default('root') }}"
        ansible_ssh_pass: "{{ target_password | default(omit) }}"
        ansible_ssh_private_key_file: "{{ target_private_key_file | default(omit) }}"
        ansible_python_interpreter: /usr/bin/python3

- name: Bootstrap Node
  hosts: target_node
  become: yes
  gather_facts: yes
  roles:
    - common
@@ -1,29 +0,0 @@
---
- name: Register Target Host
  hosts: localhost
  connection: local
  gather_facts: no
  tasks:
    - name: Verify target_host is defined
      fail:
        msg: "The 'target_host' variable must be defined (e.g. 192.168.1.10)"
      when: target_host is not defined

    - name: Add target host to inventory
      add_host:
        name: target_node
        ansible_host: "{{ target_host }}"
        ansible_user: "{{ target_user | default('root') }}"
        ansible_ssh_pass: "{{ target_password | default(omit) }}"
        ansible_ssh_private_key_file: "{{ target_private_key_file | default(omit) }}"
        ansible_python_interpreter: /usr/bin/python3

- name: Configure Networking
  hosts: target_node
  become: yes
  gather_facts: yes
  tasks:
    - name: Run networking task from common role
      include_role:
        name: common
        tasks_from: networking.yml
@@ -1,29 +0,0 @@
---
- name: Register Target Host
  hosts: localhost
  connection: local
  gather_facts: no
  tasks:
    - name: Verify target_host is defined
      fail:
        msg: "The 'target_host' variable must be defined (e.g. 192.168.1.10)"
      when: target_host is not defined

    - name: Add target host to inventory
      add_host:
        name: target_node
        ansible_host: "{{ target_host }}"
        ansible_user: "{{ target_user | default('root') }}"
        ansible_ssh_pass: "{{ target_password | default(omit) }}"
        ansible_ssh_private_key_file: "{{ target_private_key_file | default(omit) }}"
        ansible_python_interpreter: /usr/bin/python3

- name: Configure Users
  hosts: target_node
  become: yes
  gather_facts: yes
  tasks:
    - name: Run users task from common role
      include_role:
        name: common
        tasks_from: users.yml
@@ -1,34 +0,0 @@
---
- name: Register Proxmox Host
  hosts: localhost
  connection: local
  gather_facts: no
  tasks:
    - name: Verify proxmox_host is defined
      fail:
        msg: "The 'proxmox_host' variable must be defined."
      when: proxmox_host is not defined

    - name: Verify proxmox_action is defined
      fail:
        msg: "The 'proxmox_action' variable must be defined (e.g. create_vm, create_template, delete_vm)."
      when: proxmox_action is not defined

    - name: Add Proxmox host to inventory
      add_host:
        name: proxmox_node
        ansible_host: "{{ proxmox_host }}"
        ansible_user: "{{ proxmox_user | default('root') }}"
        ansible_ssh_pass: "{{ proxmox_password | default(omit) }}"
        ansible_ssh_private_key_file: "{{ proxmox_private_key_file | default(omit) }}"
        ansible_python_interpreter: /usr/bin/python3

- name: Execute Proxmox Action
  hosts: proxmox_node
  become: yes
  gather_facts: yes
  vars:
    # Explicitly map the action variable if needed, though the role should pick it up from host vars or extra vars
    proxmox_action: "{{ proxmox_action }}"
  roles:
    - proxmox_vm
ansible/playbooks/talos_bootstrap.yml (new file, 11 lines)
@@ -0,0 +1,11 @@
---
# Genconfig only — for full Talos Phase A (apply, bootstrap, kubeconfig) use **playbooks/talos_phase_a.yml**
# or **playbooks/deploy.yml**. Run: ansible-playbook playbooks/talos_bootstrap.yml -e noble_talos_genconfig=true
- name: Talos — optional genconfig helper
  hosts: localhost
  connection: local
  gather_facts: false
  vars:
    noble_repo_root: "{{ playbook_dir | dirname | dirname }}"
  roles:
    - role: talos_bootstrap
ansible/playbooks/talos_phase_a.yml (new file, 15 lines)
@@ -0,0 +1,15 @@
---
# Talos Phase A — **talhelper genconfig** → **apply-config** (all nodes) → **bootstrap** → **kubeconfig**.
# Requires: **talosctl**, **talhelper**, reachable node IPs (same LAN as nodes for Talos API :50000).
# See **talos/README.md** §1–§3. Then run **playbooks/noble.yml** or **deploy.yml**.
- name: Talos — genconfig, apply, bootstrap, kubeconfig
  hosts: localhost
  connection: local
  gather_facts: false
  vars:
    noble_repo_root: "{{ playbook_dir | dirname | dirname }}"
    noble_talos_dir: "{{ noble_repo_root }}/talos"
    noble_talos_kubeconfig_out: "{{ noble_repo_root }}/talos/kubeconfig"
  roles:
    - role: talos_phase_a
      tags: [talos, phase_a]
ansible/playbooks/templates/api_health_hint.j2 (new file, 22 lines)
@@ -0,0 +1,22 @@
{# Error output for noble.yml API preflight when kubectl /healthz fails #}
Cannot use the Kubernetes API from this host (kubectl get --raw /healthz).
rc={{ noble_k8s_health_rc | default('n/a') }}
stderr: {{ noble_k8s_health_stderr | default('') | trim }}

{% set err = (noble_k8s_health_stderr | default('')) | lower %}
{% if 'connection refused' in err %}
Connection refused: the TCP path to that host works, but nothing is accepting HTTPS on port 6443 there.
• **Not bootstrapped yet?** Finish Talos first: `talosctl bootstrap` (once, on one control plane), then `talosctl kubeconfig`, then confirm `kubectl get nodes`. See talos/README.md §2–§3 and CLUSTER-BUILD.md Phase A. **Do not run this playbook before the Kubernetes API exists.**
• If bootstrap is done: try another control-plane IP (CLUSTER-BUILD inventory: neon 192.168.50.20, argon .30, krypton .40), or the VIP if kube-vip is up and you are on the LAN:
  -e 'noble_k8s_api_server_override=https://192.168.50.230:6443'
• Do not point the API URL at a worker-only node.
• Verify with `talosctl health` / `kubectl get nodes` from a working client.
{% elif 'network is unreachable' in err or 'no route to host' in err %}
Network unreachable / no route: this machine cannot route to the API IP. Join the lab LAN or VPN, or set a reachable API server URL (talos/README.md §3).
{% else %}
If the kubeconfig used the VIP from off-LAN, try a reachable control-plane IP, e.g.:
  -e 'noble_k8s_api_server_override=https://192.168.50.20:6443'
See talos/README.md §3.
{% endif %}

To skip this check (not recommended): -e noble_skip_k8s_health_check=true
@@ -1,2 +0,0 @@
collections:
  - name: community.general
@@ -1,30 +0,0 @@
---
# Common packages to install
common_packages:
  - curl
  - wget
  - git
  - vim
  - htop
  - net-tools
  - unzip
  - dnsutils
  - software-properties-common
  - ca-certificates
  - gnupg
  - openssh-server

# SSH Configuration
common_ssh_users:
  - name: "{{ ansible_user_id }}"
    keys: []
    # Add your keys in inventory or group_vars override

# Networking
common_configure_static_ip: false
common_interface_name: "eth0"
# common_ip_address: "192.168.1.100/24"
# common_gateway: "192.168.1.1"
common_dns_servers:
  - "1.1.1.1"
  - "8.8.8.8"
@@ -1,6 +0,0 @@
---
- name: Apply Netplan
  shell: netplan apply
  async: 45
  poll: 0
  ignore_errors: yes
@@ -1,10 +0,0 @@
---
- name: Install common packages
  import_tasks: packages.yml

- name: Configure users and SSH keys
  import_tasks: users.yml

- name: Configure networking
  import_tasks: networking.yml
  when: common_configure_static_ip | bool
@@ -1,23 +0,0 @@
---
- name: Verify required variables for static IP
  fail:
    msg: "common_ip_address and common_interface_name must be defined when common_configure_static_ip is true."
  when:
    - common_configure_static_ip | bool
    - (common_ip_address is not defined or common_ip_address | length == 0 or common_interface_name is not defined)

- name: Install netplan.io
  apt:
    name: netplan.io
    state: present
  when: ansible_os_family == "Debian"

- name: Configure Netplan
  template:
    src: netplan_config.yaml.j2
    dest: /etc/netplan/01-netcfg.yaml
    owner: root
    group: root
    mode: '0644'
  notify: Apply Netplan
  when: common_configure_static_ip | bool
@@ -1,12 +0,0 @@
---
- name: Update apt cache
  apt:
    update_cache: yes
    cache_valid_time: 3600
  when: ansible_os_family == "Debian"

- name: Install common packages
  apt:
    name: "{{ common_packages }}"
    state: present
  when: ansible_os_family == "Debian"
@@ -1,18 +0,0 @@
---
- name: Ensure users exist
  user:
    name: "{{ item.name }}"
    shell: /bin/bash
    groups: sudo
    append: yes
    state: present
  loop: "{{ common_ssh_users }}"
  when: item.create_user | default(false)

- name: Add SSH keys
  authorized_key:
    user: "{{ item.0.name }}"
    key: "{{ item.1 }}"
  loop: "{{ common_ssh_users | subelements('keys', skip_missing=True) }}"
  loop_control:
    label: "{{ item.0.name }}"
@@ -1,15 +0,0 @@
network:
  version: 2
  ethernets:
    {{ common_interface_name }}:
      dhcp4: no
      addresses:
        - {{ common_ip_address }}
{% if common_gateway %}
      gateway4: {{ common_gateway }}
{% endif %}
      nameservers:
        addresses:
{% for server in common_dns_servers %}
          - {{ server }}
{% endfor %}
ansible/roles/debian_baseline_hardening/defaults/main.yml (new file, 39 lines)
@@ -0,0 +1,39 @@
---
# Update apt metadata only when stale (seconds)
debian_baseline_apt_cache_valid_time: 3600

# Core host hardening packages
debian_baseline_packages:
  - unattended-upgrades
  - apt-listchanges
  - fail2ban
  - needrestart
  - sudo
  - ca-certificates

# SSH hardening controls
debian_baseline_ssh_permit_root_login: "no"
debian_baseline_ssh_password_authentication: "no"
debian_baseline_ssh_pubkey_authentication: "yes"
debian_baseline_ssh_x11_forwarding: "no"
debian_baseline_ssh_max_auth_tries: 3
debian_baseline_ssh_client_alive_interval: 300
debian_baseline_ssh_client_alive_count_max: 2
debian_baseline_ssh_allow_users: []

# unattended-upgrades controls
debian_baseline_enable_unattended_upgrades: true
debian_baseline_unattended_auto_upgrade: "1"
debian_baseline_unattended_update_lists: "1"

# Kernel and network hardening sysctls
debian_baseline_sysctl_settings:
  net.ipv4.conf.all.accept_redirects: "0"
  net.ipv4.conf.default.accept_redirects: "0"
  net.ipv4.conf.all.send_redirects: "0"
  net.ipv4.conf.default.send_redirects: "0"
  net.ipv4.conf.all.log_martians: "1"
  net.ipv4.conf.default.log_martians: "1"
  net.ipv4.tcp_syncookies: "1"
  net.ipv6.conf.all.accept_redirects: "0"
  net.ipv6.conf.default.accept_redirects: "0"
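These defaults can be tightened per host or group in the usual Ansible way; a sketch of a group_vars override (file name and user names are illustrative, not from the repo):

```yaml
# group_vars/hardened.yml (hypothetical) — overrides the role defaults above.
# Restrict SSH logins to named accounts and allow fewer auth attempts.
debian_baseline_ssh_allow_users:
  - deploy
  - ansible
debian_baseline_ssh_max_auth_tries: 2
```

Note that with Ansible's default variable precedence a dict such as `debian_baseline_sysctl_settings` is replaced wholesale, not merged, so an override of it should restate every key you still want.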
ansible/roles/debian_baseline_hardening/handlers/main.yml (new file, 12 lines)
@@ -0,0 +1,12 @@
---
- name: Restart ssh
  ansible.builtin.service:
    name: ssh
    state: restarted

- name: Reload sysctl
  ansible.builtin.command:
    argv:
      - sysctl
      - --system
  changed_when: true
ansible/roles/debian_baseline_hardening/tasks/main.yml (new file, 52 lines)
@@ -0,0 +1,52 @@
---
- name: Refresh apt cache
  ansible.builtin.apt:
    update_cache: true
    cache_valid_time: "{{ debian_baseline_apt_cache_valid_time }}"

- name: Install baseline hardening packages
  ansible.builtin.apt:
    name: "{{ debian_baseline_packages }}"
    state: present

- name: Configure unattended-upgrades auto settings
  ansible.builtin.copy:
    dest: /etc/apt/apt.conf.d/20auto-upgrades
    mode: "0644"
    content: |
      APT::Periodic::Update-Package-Lists "{{ debian_baseline_unattended_update_lists }}";
      APT::Periodic::Unattended-Upgrade "{{ debian_baseline_unattended_auto_upgrade }}";
  when: debian_baseline_enable_unattended_upgrades | bool

- name: Configure SSH hardening options
  ansible.builtin.copy:
    dest: /etc/ssh/sshd_config.d/99-hardening.conf
    mode: "0644"
    content: |
      PermitRootLogin {{ debian_baseline_ssh_permit_root_login }}
      PasswordAuthentication {{ debian_baseline_ssh_password_authentication }}
      PubkeyAuthentication {{ debian_baseline_ssh_pubkey_authentication }}
      X11Forwarding {{ debian_baseline_ssh_x11_forwarding }}
      MaxAuthTries {{ debian_baseline_ssh_max_auth_tries }}
      ClientAliveInterval {{ debian_baseline_ssh_client_alive_interval }}
      ClientAliveCountMax {{ debian_baseline_ssh_client_alive_count_max }}
      {% if debian_baseline_ssh_allow_users | length > 0 %}
      AllowUsers {{ debian_baseline_ssh_allow_users | join(' ') }}
      {% endif %}
  notify: Restart ssh

- name: Configure baseline sysctls
  ansible.builtin.copy:
    dest: /etc/sysctl.d/99-hardening.conf
    mode: "0644"
    content: |
      {% for key, value in debian_baseline_sysctl_settings.items() %}
      {{ key }} = {{ value }}
      {% endfor %}
  notify: Reload sysctl

- name: Ensure fail2ban service is enabled
  ansible.builtin.service:
    name: fail2ban
    enabled: true
    state: started
ansible/roles/debian_maintenance/defaults/main.yml (new file, 7 lines)
@@ -0,0 +1,7 @@
---
debian_maintenance_apt_cache_valid_time: 3600
debian_maintenance_upgrade_type: dist
debian_maintenance_autoremove: true
debian_maintenance_autoclean: true
debian_maintenance_reboot_if_required: true
debian_maintenance_reboot_timeout: 1800
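For machines that must never restart on their own, the reboot behavior can be overridden per host; an illustrative host_vars sketch (host name hypothetical):

```yaml
# host_vars/db01.yml (hypothetical host) — upgrade conservatively, never auto-reboot.
debian_maintenance_upgrade_type: safe   # the apt module also accepts 'dist' (this role's default) and 'full'
debian_maintenance_reboot_if_required: false
```

With `reboot_if_required: false` the tasks still detect `/var/run/reboot-required`, but the reboot task is skipped and the restart is left to the operator.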
ansible/roles/debian_maintenance/tasks/main.yml (new file, 30 lines)
@@ -0,0 +1,30 @@
---
- name: Refresh apt cache
  ansible.builtin.apt:
    update_cache: true
    cache_valid_time: "{{ debian_maintenance_apt_cache_valid_time }}"

- name: Upgrade Debian packages
  ansible.builtin.apt:
    upgrade: "{{ debian_maintenance_upgrade_type }}"

- name: Remove orphaned packages
  ansible.builtin.apt:
    autoremove: "{{ debian_maintenance_autoremove }}"

- name: Clean apt package cache
  ansible.builtin.apt:
    autoclean: "{{ debian_maintenance_autoclean }}"

- name: Check if reboot is required
  ansible.builtin.stat:
    path: /var/run/reboot-required
  register: debian_maintenance_reboot_required_file

- name: Reboot when required by package updates
  ansible.builtin.reboot:
    reboot_timeout: "{{ debian_maintenance_reboot_timeout }}"
    msg: "Reboot initiated by Ansible maintenance playbook"
  when:
    - debian_maintenance_reboot_if_required | bool
    - debian_maintenance_reboot_required_file.stat.exists | default(false)
ansible/roles/debian_ssh_key_rotation/defaults/main.yml (new file, 10 lines)
@@ -0,0 +1,10 @@
---
# List of users to manage keys for.
# Example:
# debian_ssh_rotation_users:
#   - name: deploy
#     home: /home/deploy
#     state: present
#     keys:
#       - "ssh-ed25519 AAAA... deploy@laptop"
debian_ssh_rotation_users: []
ansible/roles/debian_ssh_key_rotation/tasks/main.yml (new file, 50 lines)
@@ -0,0 +1,50 @@
---
- name: Validate SSH key rotation inputs
  ansible.builtin.assert:
    that:
      - item.name is defined
      - item.home is defined
      - (item.state | default('present')) in ['present', 'absent']
      # item['keys'] (not item.keys): dotted access on a dict resolves to the Python .keys() method in Jinja2.
      - (item.state | default('present')) == 'absent' or ('keys' in item and item['keys'] | length > 0)
    fail_msg: >-
      Each entry in debian_ssh_rotation_users must include name, home, and either:
      state=absent, or keys with at least one SSH public key.
  loop: "{{ debian_ssh_rotation_users }}"
  loop_control:
    label: "{{ item.name | default('unknown') }}"

- name: Ensure ~/.ssh exists for managed users
  ansible.builtin.file:
    path: "{{ item.home }}/.ssh"
    state: directory
    owner: "{{ item.name }}"
    group: "{{ item.name }}"
    mode: "0700"
  loop: "{{ debian_ssh_rotation_users }}"
  loop_control:
    label: "{{ item.name }}"
  when: (item.state | default('present')) == 'present'

- name: Rotate authorized_keys for managed users
  ansible.builtin.copy:
    dest: "{{ item.home }}/.ssh/authorized_keys"
    owner: "{{ item.name }}"
    group: "{{ item.name }}"
    mode: "0600"
    content: |
      {% for key in item['keys'] %}
      {{ key }}
      {% endfor %}
  loop: "{{ debian_ssh_rotation_users }}"
  loop_control:
    label: "{{ item.name }}"
  when: (item.state | default('present')) == 'present'

- name: Remove authorized_keys for users marked absent
  ansible.builtin.file:
    path: "{{ item.home }}/.ssh/authorized_keys"
    state: absent
  loop: "{{ debian_ssh_rotation_users }}"
  loop_control:
    label: "{{ item.name }}"
  when: (item.state | default('present')) == 'absent'
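Because the role writes `authorized_keys` wholesale, rotation is a matter of editing one list: add the new key, remove the old one, rerun. A sketch of the input the tasks consume (names and key comments illustrative, key material elided as in the role's own example):

```yaml
debian_ssh_rotation_users:
  - name: deploy
    home: /home/deploy
    state: present
    keys:
      - "ssh-ed25519 AAAA... deploy@laptop"    # only keys listed here survive a run
  - name: olduser
    home: /home/olduser
    state: absent        # authorized_keys file is removed; the account itself is untouched
```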
ansible/roles/helm_repos/defaults/main.yml (new file, 16 lines)
@@ -0,0 +1,16 @@
---
noble_helm_repos:
  - { name: cilium, url: "https://helm.cilium.io/" }
  - { name: metallb, url: "https://metallb.github.io/metallb" }
  - { name: longhorn, url: "https://charts.longhorn.io" }
  - { name: traefik, url: "https://traefik.github.io/charts" }
  - { name: jetstack, url: "https://charts.jetstack.io" }
  - { name: fossorial, url: "https://charts.fossorial.io" }
  - { name: argo, url: "https://argoproj.github.io/argo-helm" }
  - { name: metrics-server, url: "https://kubernetes-sigs.github.io/metrics-server/" }
  - { name: prometheus-community, url: "https://prometheus-community.github.io/helm-charts" }
  - { name: grafana, url: "https://grafana.github.io/helm-charts" }
  - { name: fluent, url: "https://fluent.github.io/helm-charts" }
  - { name: headlamp, url: "https://kubernetes-sigs.github.io/headlamp/" }
  - { name: kyverno, url: "https://kyverno.github.io/kyverno/" }
  - { name: vmware-tanzu, url: "https://vmware-tanzu.github.io/helm-charts" }
ansible/roles/helm_repos/tasks/main.yml (new file, 16 lines)
@@ -0,0 +1,16 @@
---
- name: Add Helm repositories
  ansible.builtin.command:
    cmd: "helm repo add {{ item.name }} {{ item.url }}"
  loop: "{{ noble_helm_repos }}"
  loop_control:
    label: "{{ item.name }}"
  register: helm_repo_add
  changed_when: helm_repo_add.rc == 0
  failed_when: >-
    helm_repo_add.rc != 0 and
    ('already exists' not in (helm_repo_add.stderr | default('')))

- name: helm repo update
  ansible.builtin.command: helm repo update
  changed_when: true
ansible/roles/noble_argocd/defaults/main.yml (new file, 6 lines)
@@ -0,0 +1,6 @@
---
# When true, applies clusters/noble/bootstrap/argocd/root-application.yaml (app-of-apps).
# Edit spec.source.repoURL in that file if your Git remote differs.
noble_argocd_apply_root_application: false
# When true, applies clusters/noble/bootstrap/argocd/bootstrap-root-application.yaml (noble-bootstrap-root; manual sync until README §5).
noble_argocd_apply_bootstrap_root_application: true
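Both toggles can be flipped for a single run without editing the repo; an illustrative extra-vars sketch (file name hypothetical):

```yaml
# vars/argocd-root.yml (hypothetical) — pass with:
#   ansible-playbook playbooks/noble.yml --tags argocd -e @vars/argocd-root.yml
# Switch from the bootstrap app-of-apps to the full root Application.
noble_argocd_apply_root_application: true
noble_argocd_apply_bootstrap_root_application: false
```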
ansible/roles/noble_argocd/tasks/main.yml (new file, 46 lines)
@@ -0,0 +1,46 @@
---
- name: Install Argo CD
  ansible.builtin.command:
    argv:
      - helm
      - upgrade
      - --install
      - argocd
      - argo/argo-cd
      - --namespace
      - argocd
      - --create-namespace
      - --version
      - "9.4.17"
      - -f
      - "{{ noble_repo_root }}/clusters/noble/bootstrap/argocd/values.yaml"
      - --wait
      - --timeout
      - 15m
  environment:
    KUBECONFIG: "{{ noble_kubeconfig }}"
  changed_when: true

- name: Apply Argo CD root Application (app-of-apps)
  ansible.builtin.command:
    argv:
      - kubectl
      - apply
      - -f
      - "{{ noble_repo_root }}/clusters/noble/bootstrap/argocd/root-application.yaml"
  environment:
    KUBECONFIG: "{{ noble_kubeconfig }}"
  when: noble_argocd_apply_root_application | default(false) | bool
  changed_when: true

- name: Apply Argo CD bootstrap app-of-apps Application
  ansible.builtin.command:
    argv:
      - kubectl
      - apply
      - -f
      - "{{ noble_repo_root }}/clusters/noble/bootstrap/argocd/bootstrap-root-application.yaml"
  environment:
    KUBECONFIG: "{{ noble_kubeconfig }}"
  when: noble_argocd_apply_bootstrap_root_application | default(false) | bool
  changed_when: true
3 ansible/roles/noble_cert_manager/defaults/main.yml Normal file
@@ -0,0 +1,3 @@
---
# Warn when **cloudflare-dns-api-token** is missing after apply (also set in **group_vars/all.yml** when loaded).
noble_cert_manager_require_cloudflare_secret: true
28 ansible/roles/noble_cert_manager/tasks/from_env.yml Normal file
@@ -0,0 +1,28 @@
---
# See repository **.env.sample** — copy to **.env** (gitignored).
- name: Stat repository .env for deploy secrets
  ansible.builtin.stat:
    path: "{{ noble_repo_root }}/.env"
  register: noble_deploy_env_file
  changed_when: false

- name: Create cert-manager Cloudflare DNS secret from .env
  ansible.builtin.shell: |
    set -euo pipefail
    set -a
    . "{{ noble_repo_root }}/.env"
    set +a
    if [ -z "${CLOUDFLARE_DNS_API_TOKEN:-}" ]; then
      echo NO_TOKEN
      exit 0
    fi
    kubectl -n cert-manager create secret generic cloudflare-dns-api-token \
      --from-literal=api-token="${CLOUDFLARE_DNS_API_TOKEN}" \
      --dry-run=client -o yaml | kubectl apply -f -
    echo APPLIED
  args:
    # bash: `set -o pipefail` is not available in POSIX sh (dash).
    executable: /bin/bash
  environment:
    KUBECONFIG: "{{ noble_kubeconfig }}"
  when: noble_deploy_env_file.stat.exists | default(false)
  no_log: true
  register: noble_cf_secret_from_env
  changed_when: "'APPLIED' in (noble_cf_secret_from_env.stdout | default(''))"
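The `set -a` … `set +a` window in the task above turns on bash allexport while `.env` is sourced, so every assignment in the file is exported and visible to child processes (here, `kubectl`). A self-contained sketch of that mechanism using a throwaway env file; the variable value is illustrative:

```shell
#!/usr/bin/env bash
set -eu

envfile=$(mktemp)
printf 'CLOUDFLARE_DNS_API_TOKEN=cf-example-token\n' > "$envfile"

# allexport on: assignments made while sourcing become exported variables.
set -a
. "$envfile"
set +a

# Child processes now see the variable, just as kubectl does in the role.
bash -c 'echo "child sees: ${CLOUDFLARE_DNS_API_TOKEN:-unset}"'

rm -f "$envfile"
```

Without the allexport window, sourcing would set the variable only in the current shell and the child `kubectl` process would not inherit it.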
68 ansible/roles/noble_cert_manager/tasks/main.yml Normal file
@@ -0,0 +1,68 @@
---
- name: Create cert-manager namespace
  ansible.builtin.command:
    argv:
      - kubectl
      - apply
      - -f
      - "{{ noble_repo_root }}/clusters/noble/bootstrap/cert-manager/namespace.yaml"
  environment:
    KUBECONFIG: "{{ noble_kubeconfig }}"
  changed_when: true

- name: Install cert-manager
  ansible.builtin.command:
    argv:
      - helm
      - upgrade
      - --install
      - cert-manager
      - jetstack/cert-manager
      - --namespace
      - cert-manager
      - --version
      - v1.20.0
      - -f
      - "{{ noble_repo_root }}/clusters/noble/bootstrap/cert-manager/values.yaml"
      - --wait
  environment:
    KUBECONFIG: "{{ noble_kubeconfig }}"
  changed_when: true

- name: Apply secrets from repository .env (optional)
  ansible.builtin.include_tasks: from_env.yml

- name: Check Cloudflare DNS API token Secret (required for ClusterIssuers)
  ansible.builtin.command:
    argv:
      - kubectl
      - -n
      - cert-manager
      - get
      - secret
      - cloudflare-dns-api-token
  environment:
    KUBECONFIG: "{{ noble_kubeconfig }}"
  register: noble_cf_secret
  failed_when: false
  changed_when: false

- name: Warn when Cloudflare Secret is missing
  ansible.builtin.debug:
    msg: >-
      Secret cert-manager/cloudflare-dns-api-token not found.
      Create it per clusters/noble/bootstrap/cert-manager/README.md before ClusterIssuers can succeed.
  when:
    - noble_cert_manager_require_cloudflare_secret | default(true) | bool
    - noble_cf_secret.rc != 0

- name: Apply ClusterIssuers (staging + prod)
  ansible.builtin.command:
    argv:
      - kubectl
      - apply
      - -k
      - "{{ noble_repo_root }}/clusters/noble/bootstrap/cert-manager"
  environment:
    KUBECONFIG: "{{ noble_kubeconfig }}"
  changed_when: true
25 ansible/roles/noble_cilium/tasks/main.yml Normal file
@@ -0,0 +1,25 @@
---
- name: Install Cilium (required CNI for Talos cni:none)
  ansible.builtin.command:
    argv:
      - helm
      - upgrade
      - --install
      - cilium
      - cilium/cilium
      - --namespace
      - kube-system
      - --version
      - "1.16.6"
      - -f
      - "{{ noble_repo_root }}/clusters/noble/bootstrap/cilium/values.yaml"
      - --wait
  environment:
    KUBECONFIG: "{{ noble_kubeconfig }}"
  changed_when: true

- name: Wait for Cilium DaemonSet
  ansible.builtin.command: kubectl -n kube-system rollout status ds/cilium --timeout=300s
  environment:
    KUBECONFIG: "{{ noble_kubeconfig }}"
  changed_when: false
@@ -0,0 +1,2 @@
---
noble_csi_snapshot_kubectl_timeout: 120s
39 ansible/roles/noble_csi_snapshot_controller/tasks/main.yml Normal file
@@ -0,0 +1,39 @@
---
# Volume Snapshot CRDs + snapshot-controller (Velero CSI / Longhorn snapshots).
- name: Apply Volume Snapshot CRDs (snapshot.storage.k8s.io)
  ansible.builtin.command:
    argv:
      - kubectl
      - apply
      - "--request-timeout={{ noble_csi_snapshot_kubectl_timeout | default('120s') }}"
      - -k
      - "{{ noble_repo_root }}/clusters/noble/bootstrap/csi-snapshot-controller/crd"
  environment:
    KUBECONFIG: "{{ noble_kubeconfig }}"
  changed_when: true

- name: Apply snapshot-controller in kube-system
  ansible.builtin.command:
    argv:
      - kubectl
      - apply
      - "--request-timeout={{ noble_csi_snapshot_kubectl_timeout | default('120s') }}"
      - -k
      - "{{ noble_repo_root }}/clusters/noble/bootstrap/csi-snapshot-controller/controller"
  environment:
    KUBECONFIG: "{{ noble_kubeconfig }}"
  changed_when: true

- name: Wait for snapshot-controller Deployment
  ansible.builtin.command:
    argv:
      - kubectl
      - -n
      - kube-system
      - rollout
      - status
      - deploy/snapshot-controller
      - --timeout=120s
  environment:
    KUBECONFIG: "{{ noble_kubeconfig }}"
  changed_when: false
11 ansible/roles/noble_kube_vip/tasks/main.yml Normal file
@@ -0,0 +1,11 @@
---
- name: Apply kube-vip (Kubernetes API VIP)
  ansible.builtin.command:
    argv:
      - kubectl
      - apply
      - -k
      - "{{ noble_repo_root }}/clusters/noble/bootstrap/kube-vip"
  environment:
    KUBECONFIG: "{{ noble_kubeconfig }}"
  changed_when: true
32 ansible/roles/noble_kyverno/tasks/main.yml Normal file
@@ -0,0 +1,32 @@
---
- name: Create Kyverno namespace
  ansible.builtin.command:
    argv:
      - kubectl
      - apply
      - -f
      - "{{ noble_repo_root }}/clusters/noble/bootstrap/kyverno/namespace.yaml"
  environment:
    KUBECONFIG: "{{ noble_kubeconfig }}"
  changed_when: true

- name: Install Kyverno operator
  ansible.builtin.command:
    argv:
      - helm
      - upgrade
      - --install
      - kyverno
      - kyverno/kyverno
      - -n
      - kyverno
      - --version
      - "3.7.1"
      - -f
      - "{{ noble_repo_root }}/clusters/noble/bootstrap/kyverno/values.yaml"
      - --wait
      - --timeout
      - 15m
  environment:
    KUBECONFIG: "{{ noble_kubeconfig }}"
  changed_when: true
21 ansible/roles/noble_kyverno_policies/tasks/main.yml Normal file
@@ -0,0 +1,21 @@
---
- name: Install Kyverno policy chart (PSS baseline, Audit)
  ansible.builtin.command:
    argv:
      - helm
      - upgrade
      - --install
      - kyverno-policies
      - kyverno/kyverno-policies
      - -n
      - kyverno
      - --version
      - "3.7.1"
      - -f
      - "{{ noble_repo_root }}/clusters/noble/bootstrap/kyverno/policies-values.yaml"
      - --wait
      - --timeout
      - 10m
  environment:
    KUBECONFIG: "{{ noble_kubeconfig }}"
  changed_when: true
51 ansible/roles/noble_landing_urls/defaults/main.yml Normal file
@@ -0,0 +1,51 @@
---
# Regenerated when **noble_landing_urls** runs (after platform stack). Paths match Traefik + cert-manager Ingresses.
noble_landing_urls_dest: "{{ noble_repo_root }}/ansible/output/noble-lab-ui-urls.md"

# When true, run kubectl to fill Argo CD / Grafana secrets and a bounded Headlamp SA token in the markdown (requires working kubeconfig).
noble_landing_urls_fetch_credentials: true

# Headlamp: bounded token for UI sign-in (`kubectl create token`); cluster may cap max duration.
noble_landing_urls_headlamp_token_duration: 48h

noble_lab_ui_entries:
  - name: Argo CD
    description: GitOps UI (sync, apps, repos)
    namespace: argocd
    service: argocd-server
    url: https://argo.apps.noble.lab.pcenicni.dev
  - name: Grafana
    description: Dashboards, Loki explore (logs)
    namespace: monitoring
    service: kube-prometheus-grafana
    url: https://grafana.apps.noble.lab.pcenicni.dev
  - name: Prometheus
    description: Prometheus UI (queries, targets) — lab; protect in production
    namespace: monitoring
    service: kube-prometheus-kube-prome-prometheus
    url: https://prometheus.apps.noble.lab.pcenicni.dev
  - name: Alertmanager
    description: Alertmanager UI (silences, status)
    namespace: monitoring
    service: kube-prometheus-kube-prome-alertmanager
    url: https://alertmanager.apps.noble.lab.pcenicni.dev
  - name: Headlamp
    description: Kubernetes UI (cluster resources)
    namespace: headlamp
    service: headlamp
    url: https://headlamp.apps.noble.lab.pcenicni.dev
  - name: Longhorn
    description: Storage volumes, nodes, backups
    namespace: longhorn-system
    service: longhorn-frontend
    url: https://longhorn.apps.noble.lab.pcenicni.dev
  - name: Velero
    description: Cluster backups — no web UI (velero CLI / kubectl CRDs)
    namespace: velero
    service: velero
    url: ""
  - name: Homepage
    description: App dashboard (links to lab UIs)
    namespace: homepage
    service: homepage
    url: https://homepage.apps.noble.lab.pcenicni.dev
72 ansible/roles/noble_landing_urls/tasks/fetch_credentials.yml Normal file
@@ -0,0 +1,72 @@
---
# Populates template variables from Secrets + Headlamp token (no_log on kubectl to avoid leaking into Ansible stdout).
- name: Fetch Argo CD initial admin password (base64)
  ansible.builtin.command:
    argv:
      - kubectl
      - -n
      - argocd
      - get
      - secret
      - argocd-initial-admin-secret
      - -o
      - jsonpath={.data.password}
  environment:
    KUBECONFIG: "{{ noble_kubeconfig }}"
  register: noble_fetch_argocd_pw_b64
  failed_when: false
  changed_when: false
  no_log: true

- name: Fetch Grafana admin user (base64)
  ansible.builtin.command:
    argv:
      - kubectl
      - -n
      - monitoring
      - get
      - secret
      - kube-prometheus-grafana
      - -o
      - jsonpath={.data.admin-user}
  environment:
    KUBECONFIG: "{{ noble_kubeconfig }}"
  register: noble_fetch_grafana_user_b64
  failed_when: false
  changed_when: false
  no_log: true

- name: Fetch Grafana admin password (base64)
  ansible.builtin.command:
    argv:
      - kubectl
      - -n
      - monitoring
      - get
      - secret
      - kube-prometheus-grafana
      - -o
      - jsonpath={.data.admin-password}
  environment:
    KUBECONFIG: "{{ noble_kubeconfig }}"
  register: noble_fetch_grafana_pw_b64
  failed_when: false
  changed_when: false
  no_log: true

- name: Create Headlamp ServiceAccount token (for UI sign-in)
  ansible.builtin.command:
    argv:
      - kubectl
      - -n
      - headlamp
      - create
      - token
      - headlamp
      - "--duration={{ noble_landing_urls_headlamp_token_duration | default('48h') }}"
  environment:
    KUBECONFIG: "{{ noble_kubeconfig }}"
  register: noble_fetch_headlamp_token
  failed_when: false
  changed_when: false
  no_log: true
20 ansible/roles/noble_landing_urls/tasks/main.yml Normal file
@@ -0,0 +1,20 @@
---
- name: Ensure output directory for generated landing page
  ansible.builtin.file:
    path: "{{ noble_repo_root }}/ansible/output"
    state: directory
    mode: "0755"

- name: Fetch initial credentials from cluster Secrets (optional)
  ansible.builtin.include_tasks: fetch_credentials.yml
  when: noble_landing_urls_fetch_credentials | default(true) | bool

- name: Write noble lab UI URLs (markdown landing page)
  ansible.builtin.template:
    src: noble-lab-ui-urls.md.j2
    dest: "{{ noble_landing_urls_dest }}"
    mode: "0644"

- name: Show landing page path
  ansible.builtin.debug:
    msg: "Noble lab UI list written to {{ noble_landing_urls_dest }}"
@@ -0,0 +1,51 @@
# Noble lab — web UIs (LAN)

> **Sensitive:** This file may include **passwords read from Kubernetes Secrets** when credential fetch ran. It is **gitignored** — do not commit or share.

**DNS:** point **`*.apps.noble.lab.pcenicni.dev`** at the Traefik **LoadBalancer** (MetalLB **`192.168.50.211`** by default — see `clusters/noble/bootstrap/traefik/values.yaml`).

**TLS:** **cert-manager** + **`letsencrypt-prod`** on each Ingress (public **DNS-01** for **`pcenicni.dev`**).

This file is **generated** by Ansible (`noble_landing_urls` role). Use it as a temporary landing page to find services after deploy.

| UI | What | Kubernetes service | Namespace | URL |
|----|------|--------------------|-----------|-----|
{% for e in noble_lab_ui_entries %}
| {{ e.name }} | {{ e.description }} | `{{ e.service }}` | `{{ e.namespace }}` | {% if e.url | default('') | length > 0 %}[{{ e.url }}]({{ e.url }}){% else %}—{% endif %} |
{% endfor %}

## Initial access (logins)

| App | Username / identity | Password / secret |
|-----|---------------------|-------------------|
| **Argo CD** | `admin` | {% if (noble_fetch_argocd_pw_b64 is defined) and (noble_fetch_argocd_pw_b64.rc | default(1) == 0) and (noble_fetch_argocd_pw_b64.stdout | default('') | length > 0) %}`{{ noble_fetch_argocd_pw_b64.stdout | b64decode }}`{% else %}*(not fetched — use commands below)*{% endif %} |
| **Grafana** | {% if (noble_fetch_grafana_user_b64 is defined) and (noble_fetch_grafana_user_b64.rc | default(1) == 0) and (noble_fetch_grafana_user_b64.stdout | default('') | length > 0) %}`{{ noble_fetch_grafana_user_b64.stdout | b64decode }}`{% else %}*(from Secret — use commands below)*{% endif %} | {% if (noble_fetch_grafana_pw_b64 is defined) and (noble_fetch_grafana_pw_b64.rc | default(1) == 0) and (noble_fetch_grafana_pw_b64.stdout | default('') | length > 0) %}`{{ noble_fetch_grafana_pw_b64.stdout | b64decode }}`{% else %}*(not fetched — use commands below)*{% endif %} |
| **Headlamp** | ServiceAccount **`headlamp`** | {% if (noble_fetch_headlamp_token is defined) and (noble_fetch_headlamp_token.rc | default(1) == 0) and (noble_fetch_headlamp_token.stdout | default('') | trim | length > 0) %}Token ({{ noble_landing_urls_headlamp_token_duration | default('48h') }}): `{{ noble_fetch_headlamp_token.stdout | trim }}`{% else %}*(not generated — use command below)*{% endif %} |
| **Prometheus** | — | No auth in default install (lab). |
| **Alertmanager** | — | No auth in default install (lab). |
| **Longhorn** | — | No default login unless you enable access control in the UI settings. |

### Commands to retrieve passwords (if not filled above)

```bash
# Argo CD initial admin (Secret removed after you change password)
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath='{.data.password}' | base64 -d
echo

# Grafana admin user / password
kubectl -n monitoring get secret kube-prometheus-grafana -o jsonpath='{.data.admin-user}' | base64 -d
echo
kubectl -n monitoring get secret kube-prometheus-grafana -o jsonpath='{.data.admin-password}' | base64 -d
echo
```

To generate this file **without** calling kubectl, run Ansible with **`-e noble_landing_urls_fetch_credentials=false`**.

## Notes

- **Argo CD** `argocd-initial-admin-secret` disappears after you change the admin password.
- **Grafana** password is random unless you set `grafana.adminPassword` in chart values.
- **Prometheus / Alertmanager** UIs are unauthenticated by default — restrict when hardening (`talos/CLUSTER-BUILD.md` Phase G).
- **SOPS:** cluster secrets in git under **`clusters/noble/secrets/`** are encrypted; decrypt with **`age-key.txt`** (not in git). See **`clusters/noble/secrets/README.md`**.
- **Headlamp** token above expires after the configured duration; re-run Ansible or `kubectl create token` to refresh.
- **Velero** has **no web UI** — use **`velero`** CLI or **`kubectl -n velero get backup,schedule,backupstoragelocation`**. Metrics: **`velero`** Service in **`velero`** (Prometheus scrape). See `clusters/noble/bootstrap/velero/README.md`.
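The retrieval commands in the template pipe jsonpath output through `base64 -d` because Kubernetes stores Secret `data` fields base64-encoded; the `b64decode` Jinja filter in the table rows does the same decoding in-template. A self-contained round-trip sketch of that decoding step (the password value is illustrative, no cluster required):

```shell
#!/usr/bin/env bash
set -eu

# Kubernetes Secret data is base64-encoded; jsonpath returns it still encoded.
encoded=$(printf 'example-admin-pass' | base64)
echo "encoded: $encoded"

# What the trailing `| base64 -d` in the kubectl commands does:
decoded=$(printf '%s' "$encoded" | base64 -d)
echo "decoded: $decoded"
```

Note the use of `printf` rather than `echo` before encoding, so no trailing newline ends up inside the encoded value.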
29 ansible/roles/noble_longhorn/tasks/main.yml Normal file
@@ -0,0 +1,29 @@
---
- name: Apply Longhorn namespace (PSA) from kustomization
  ansible.builtin.command:
    argv:
      - kubectl
      - apply
      - -k
      - "{{ noble_repo_root }}/clusters/noble/bootstrap/longhorn"
  environment:
    KUBECONFIG: "{{ noble_kubeconfig }}"
  changed_when: true

- name: Install Longhorn chart
  ansible.builtin.command:
    argv:
      - helm
      - upgrade
      - --install
      - longhorn
      - longhorn/longhorn
      - -n
      - longhorn-system
      - --create-namespace
      - -f
      - "{{ noble_repo_root }}/clusters/noble/bootstrap/longhorn/values.yaml"
      - --wait
  environment:
    KUBECONFIG: "{{ noble_kubeconfig }}"
  changed_when: true
3 ansible/roles/noble_metallb/defaults/main.yml Normal file
@@ -0,0 +1,3 @@
---
# Helm **--wait** default is often too short when images pull slowly or nodes are busy.
noble_helm_metallb_wait_timeout: 20m
39 ansible/roles/noble_metallb/tasks/main.yml Normal file
@@ -0,0 +1,39 @@
---
- name: Apply MetalLB namespace (Pod Security labels)
  ansible.builtin.command:
    argv:
      - kubectl
      - apply
      - -f
      - "{{ noble_repo_root }}/clusters/noble/bootstrap/metallb/namespace.yaml"
  environment:
    KUBECONFIG: "{{ noble_kubeconfig }}"
  changed_when: true

- name: Install MetalLB chart
  ansible.builtin.command:
    argv:
      - helm
      - upgrade
      - --install
      - metallb
      - metallb/metallb
      - --namespace
      - metallb-system
      - --wait
      - --timeout
      - "{{ noble_helm_metallb_wait_timeout }}"
  environment:
    KUBECONFIG: "{{ noble_kubeconfig }}"
  changed_when: true

- name: Apply IPAddressPool and L2Advertisement
  ansible.builtin.command:
    argv:
      - kubectl
      - apply
      - -k
      - "{{ noble_repo_root }}/clusters/noble/bootstrap/metallb"
  environment:
    KUBECONFIG: "{{ noble_kubeconfig }}"
  changed_when: true
19 ansible/roles/noble_metrics_server/tasks/main.yml Normal file
@@ -0,0 +1,19 @@
---
- name: Install metrics-server
  ansible.builtin.command:
    argv:
      - helm
      - upgrade
      - --install
      - metrics-server
      - metrics-server/metrics-server
      - -n
      - kube-system
      - --version
      - "3.13.0"
      - -f
      - "{{ noble_repo_root }}/clusters/noble/bootstrap/metrics-server/values.yaml"
      - --wait
  environment:
    KUBECONFIG: "{{ noble_kubeconfig }}"
  changed_when: true
3 ansible/roles/noble_newt/defaults/main.yml Normal file
@@ -0,0 +1,3 @@
---
# Set true after creating the newt-pangolin-auth Secret (see role / cluster docs).
noble_newt_install: true
30 ansible/roles/noble_newt/tasks/from_env.yml Normal file
@@ -0,0 +1,30 @@
---
# See repository **.env.sample** — copy to **.env** (gitignored).
- name: Stat repository .env for deploy secrets
  ansible.builtin.stat:
    path: "{{ noble_repo_root }}/.env"
  register: noble_deploy_env_file
  changed_when: false

- name: Create newt-pangolin-auth Secret from .env
  ansible.builtin.shell: |
    set -euo pipefail
    set -a
    . "{{ noble_repo_root }}/.env"
    set +a
    if [ -z "${PANGOLIN_ENDPOINT:-}" ] || [ -z "${NEWT_ID:-}" ] || [ -z "${NEWT_SECRET:-}" ]; then
      echo NO_VARS
      exit 0
    fi
    kubectl -n newt create secret generic newt-pangolin-auth \
      --from-literal=PANGOLIN_ENDPOINT="${PANGOLIN_ENDPOINT}" \
      --from-literal=NEWT_ID="${NEWT_ID}" \
      --from-literal=NEWT_SECRET="${NEWT_SECRET}" \
      --dry-run=client -o yaml | kubectl apply -f -
    echo APPLIED
  args:
    # bash: `set -o pipefail` is not available in POSIX sh (dash).
    executable: /bin/bash
  environment:
    KUBECONFIG: "{{ noble_kubeconfig }}"
  when: noble_deploy_env_file.stat.exists | default(false)
  no_log: true
  register: noble_newt_secret_from_env
  changed_when: "'APPLIED' in (noble_newt_secret_from_env.stdout | default(''))"
41 ansible/roles/noble_newt/tasks/main.yml Normal file
@@ -0,0 +1,41 @@
---
- name: Skip Newt when not enabled
  ansible.builtin.debug:
    msg: "noble_newt_install is false — set PANGOLIN_ENDPOINT, NEWT_ID, NEWT_SECRET in repo .env (or create the Secret manually) and set noble_newt_install=true to deploy Newt."
  when: not (noble_newt_install | bool)

- name: Create Newt namespace
  ansible.builtin.command:
    argv:
      - kubectl
      - apply
      - -f
      - "{{ noble_repo_root }}/clusters/noble/bootstrap/newt/namespace.yaml"
  environment:
    KUBECONFIG: "{{ noble_kubeconfig }}"
  when: noble_newt_install | bool
  changed_when: true

- name: Apply Newt Pangolin auth Secret from repository .env (optional)
  ansible.builtin.include_tasks: from_env.yml
  when: noble_newt_install | bool

- name: Install Newt chart
  ansible.builtin.command:
    argv:
      - helm
      - upgrade
      - --install
      - newt
      - fossorial/newt
      - --namespace
      - newt
      - --version
      - "1.2.0"
      - -f
      - "{{ noble_repo_root }}/clusters/noble/bootstrap/newt/values.yaml"
      - --wait
  environment:
    KUBECONFIG: "{{ noble_kubeconfig }}"
  when: noble_newt_install | bool
  changed_when: true
9 ansible/roles/noble_platform/defaults/main.yml Normal file
@@ -0,0 +1,9 @@
---
# kubectl apply -k can hit transient etcd timeouts under load; retries + longer API deadline help.
noble_platform_kubectl_request_timeout: 120s
noble_platform_kustomize_retries: 5
noble_platform_kustomize_delay: 20

# Decrypt **clusters/noble/secrets/*.yaml** with SOPS and kubectl apply (requires **sops**, **age**, and **age-key.txt**).
noble_apply_sops_secrets: true
noble_sops_age_key_file: "{{ noble_repo_root }}/age-key.txt"
117 ansible/roles/noble_platform/tasks/main.yml Normal file
@@ -0,0 +1,117 @@
---
# Mirrors former **noble-platform** Argo Application: Helm releases + plain manifests under clusters/noble/bootstrap.
- name: Apply clusters/noble/bootstrap kustomize (namespaces, Grafana Loki datasource)
  ansible.builtin.command:
    argv:
      - kubectl
      - apply
      - "--request-timeout={{ noble_platform_kubectl_request_timeout }}"
      - -k
      - "{{ noble_repo_root }}/clusters/noble/bootstrap"
  environment:
    KUBECONFIG: "{{ noble_kubeconfig }}"
  register: noble_platform_kustomize
  retries: "{{ noble_platform_kustomize_retries | int }}"
  delay: "{{ noble_platform_kustomize_delay | int }}"
  until: noble_platform_kustomize.rc == 0
  changed_when: true

- name: Stat SOPS age private key (age-key.txt)
  ansible.builtin.stat:
    path: "{{ noble_sops_age_key_file }}"
  register: noble_sops_age_key_stat

- name: Apply SOPS-encrypted cluster secrets (clusters/noble/secrets/*.yaml)
  ansible.builtin.shell: |
    set -euo pipefail
    shopt -s nullglob
    for f in "{{ noble_repo_root }}/clusters/noble/secrets"/*.yaml; do
      sops -d "$f" | kubectl apply -f -
    done
  args:
    executable: /bin/bash
  environment:
    KUBECONFIG: "{{ noble_kubeconfig }}"
    SOPS_AGE_KEY_FILE: "{{ noble_sops_age_key_file }}"
  when:
    - noble_apply_sops_secrets | default(true) | bool
    - noble_sops_age_key_stat.stat.exists
  changed_when: true

- name: Install kube-prometheus-stack
  ansible.builtin.command:
    argv:
      - helm
      - upgrade
      - --install
      - kube-prometheus
      - prometheus-community/kube-prometheus-stack
      - -n
      - monitoring
      - --version
      - "82.15.1"
      - -f
      - "{{ noble_repo_root }}/clusters/noble/bootstrap/kube-prometheus-stack/values.yaml"
      - --wait
      - --timeout
      - 30m
  environment:
    KUBECONFIG: "{{ noble_kubeconfig }}"
  changed_when: true

- name: Install Loki
  ansible.builtin.command:
    argv:
      - helm
      - upgrade
      - --install
      - loki
      - grafana/loki
      - -n
      - loki
      - --version
      - "6.55.0"
      - -f
      - "{{ noble_repo_root }}/clusters/noble/bootstrap/loki/values.yaml"
      - --wait
  environment:
    KUBECONFIG: "{{ noble_kubeconfig }}"
  changed_when: true

- name: Install Fluent Bit
  ansible.builtin.command:
    argv:
      - helm
      - upgrade
      - --install
      - fluent-bit
      - fluent/fluent-bit
      - -n
      - logging
      - --version
      - "0.56.0"
      - -f
      - "{{ noble_repo_root }}/clusters/noble/bootstrap/fluent-bit/values.yaml"
      - --wait
  environment:
    KUBECONFIG: "{{ noble_kubeconfig }}"
  changed_when: true

- name: Install Headlamp
  ansible.builtin.command:
    argv:
      - helm
      - upgrade
      - --install
      - headlamp
      - headlamp/headlamp
      - --version
      - "0.40.1"
      - -n
      - headlamp
      - -f
      - "{{ noble_repo_root }}/clusters/noble/bootstrap/headlamp/values.yaml"
      - --wait
  environment:
    KUBECONFIG: "{{ noble_kubeconfig }}"
  changed_when: true
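The SOPS apply task above relies on bash `shopt -s nullglob`: when `clusters/noble/secrets/` contains no YAML files, the glob expands to nothing and the loop body never runs, instead of passing the literal `*.yaml` pattern to `sops` and failing. A self-contained sketch of that behavior, with plain `cat` standing in for the `sops -d … | kubectl apply -f -` pipeline:

```shell
#!/usr/bin/env bash
set -euo pipefail
shopt -s nullglob

dir=$(mktemp -d)

# Empty directory: with nullglob the glob yields zero words, so the loop is skipped.
empty_count=0
for f in "$dir"/*.yaml; do empty_count=$((empty_count + 1)); done
echo "empty dir files: $empty_count"

# Populated directory: the glob expands to each matching path in turn.
printf 'a: 1\n' > "$dir/one.yaml"
printf 'b: 2\n' > "$dir/two.yaml"
populated_count=0
for f in "$dir"/*.yaml; do
  cat "$f" > /dev/null   # stand-in for: sops -d "$f" | kubectl apply -f -
  populated_count=$((populated_count + 1))
done
echo "populated dir files: $populated_count"

rm -rf "$dir"
```

Without nullglob, the empty-directory case would iterate once over the unexpanded string `"$dir"/*.yaml` and the decrypt command would fail on a nonexistent file.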
15 ansible/roles/noble_post_deploy/tasks/main.yml Normal file
@@ -0,0 +1,15 @@
---
- name: SOPS secrets (workstation)
  ansible.builtin.debug:
    msg: |
      Encrypted Kubernetes Secrets live under clusters/noble/secrets/ (Mozilla SOPS + age).
      Private key: age-key.txt at repo root (gitignored). See clusters/noble/secrets/README.md
      and .sops.yaml. noble.yml decrypt-applies these when age-key.txt exists.

- name: Argo CD optional root Application (empty app-of-apps)
  ansible.builtin.debug:
    msg: >-
      App-of-apps: noble.yml applies root-application.yaml when noble_argocd_apply_root_application is true;
      bootstrap-root-application.yaml when noble_argocd_apply_bootstrap_root_application is true (group_vars/all.yml).
      noble-bootstrap-root uses manual sync until you enable automation after the playbook —
      clusters/noble/bootstrap/argocd/README.md §5. See clusters/noble/apps/README.md and that README.
30 ansible/roles/noble_traefik/tasks/main.yml Normal file
@@ -0,0 +1,30 @@
---
- name: Create Traefik namespace
  ansible.builtin.command:
    argv:
      - kubectl
      - apply
      - -f
      - "{{ noble_repo_root }}/clusters/noble/bootstrap/traefik/namespace.yaml"
  environment:
    KUBECONFIG: "{{ noble_kubeconfig }}"
  changed_when: true

- name: Install Traefik
  ansible.builtin.command:
    argv:
      - helm
      - upgrade
      - --install
      - traefik
      - traefik/traefik
      - --namespace
      - traefik
      - --version
      - "39.0.6"
      - -f
      - "{{ noble_repo_root }}/clusters/noble/bootstrap/traefik/values.yaml"
      - --wait
  environment:
    KUBECONFIG: "{{ noble_kubeconfig }}"
  changed_when: true
13 ansible/roles/noble_velero/defaults/main.yml Normal file
@@ -0,0 +1,13 @@
---
# **noble_velero_install** is in **ansible/group_vars/all.yml**. Override S3 fields via extra-vars or group_vars.
noble_velero_chart_version: "12.0.0"

noble_velero_s3_bucket: ""
noble_velero_s3_url: ""
noble_velero_s3_region: "us-east-1"
noble_velero_s3_force_path_style: "true"
noble_velero_s3_prefix: ""

# Optional — if unset, Ansible expects Secret **velero/velero-cloud-credentials** (key **cloud**) to exist.
noble_velero_aws_access_key_id: ""
noble_velero_aws_secret_access_key: ""
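The Velero defaults above are empty strings on purpose: the from_env tasks in the next file only override a value when it is still empty, so explicit extra-vars or group_vars always win over `.env`. A small bash sketch of that "first non-empty wins" precedence; the `pick` helper and the sample values are illustrative, not part of the role:

```shell
#!/usr/bin/env bash
set -eu

# Override only when the current value is empty, mirroring the role's when: clauses.
pick() {  # pick CURRENT CANDIDATE -> echoes CURRENT unless it is empty
  local current=$1 candidate=$2
  if [ -n "$current" ]; then
    printf '%s' "$current"
  else
    printf '%s' "$candidate"
  fi
}

bucket=$(pick "" "bucket-from-dotenv")                           # unset: .env value used
url=$(pick "https://set-via-extra-vars" "https://from-dotenv")   # already set: kept
echo "bucket=$bucket"
echo "url=$url"
```

This is why the Ansible tasks gate the `.env` lookup on `noble_velero_s3_bucket | default('') | length == 0` rather than unconditionally sourcing the file.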
68 ansible/roles/noble_velero/tasks/from_env.yml Normal file
@@ -0,0 +1,68 @@
---
# See repository **.env.sample** — copy to **.env** (gitignored).
- name: Stat repository .env for Velero
  ansible.builtin.stat:
    path: "{{ noble_repo_root }}/.env"
  register: noble_deploy_env_file
  changed_when: false

- name: Load NOBLE_VELERO_S3_BUCKET from .env when unset
  ansible.builtin.shell: |
    set -a
    . "{{ noble_repo_root }}/.env"
    set +a
    echo "${NOBLE_VELERO_S3_BUCKET:-}"
  register: noble_velero_s3_bucket_from_env
  when:
    - noble_deploy_env_file.stat.exists | default(false)
    - noble_velero_s3_bucket | default('') | length == 0
  changed_when: false

- name: Apply NOBLE_VELERO_S3_BUCKET from .env
  ansible.builtin.set_fact:
    noble_velero_s3_bucket: "{{ noble_velero_s3_bucket_from_env.stdout | trim }}"
  when:
    - noble_velero_s3_bucket_from_env is defined
    - (noble_velero_s3_bucket_from_env.stdout | default('') | trim | length) > 0

- name: Load NOBLE_VELERO_S3_URL from .env when unset
  ansible.builtin.shell: |
    set -a
    . "{{ noble_repo_root }}/.env"
    set +a
    echo "${NOBLE_VELERO_S3_URL:-}"
  register: noble_velero_s3_url_from_env
  when:
    - noble_deploy_env_file.stat.exists | default(false)
    - noble_velero_s3_url | default('') | length == 0
  changed_when: false

- name: Apply NOBLE_VELERO_S3_URL from .env
  ansible.builtin.set_fact:
    noble_velero_s3_url: "{{ noble_velero_s3_url_from_env.stdout | trim }}"
  when:
    - noble_velero_s3_url_from_env is defined
    - (noble_velero_s3_url_from_env.stdout | default('') | trim | length) > 0

- name: Create velero-cloud-credentials from .env when keys present
  ansible.builtin.shell: |
    set -euo pipefail
    set -a
    . "{{ noble_repo_root }}/.env"
    set +a
    if [ -z "${NOBLE_VELERO_AWS_ACCESS_KEY_ID:-}" ] || [ -z "${NOBLE_VELERO_AWS_SECRET_ACCESS_KEY:-}" ]; then
      echo SKIP
      exit 0
    fi
    CLOUD="$(printf '[default]\naws_access_key_id=%s\naws_secret_access_key=%s\n' \
      "${NOBLE_VELERO_AWS_ACCESS_KEY_ID}" "${NOBLE_VELERO_AWS_SECRET_ACCESS_KEY}")"
    kubectl -n velero create secret generic velero-cloud-credentials \
      --from-literal=cloud="${CLOUD}" \
      --dry-run=client -o yaml | kubectl apply -f -
    echo APPLIED
  environment:
    KUBECONFIG: "{{ noble_kubeconfig }}"
  when: noble_deploy_env_file.stat.exists | default(false)
  no_log: true
  register: noble_velero_secret_from_env
  changed_when: "'APPLIED' in (noble_velero_secret_from_env.stdout | default(''))"
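The `set -a` / `set +a` bracket in these shell tasks is what turns plain `KEY=value` lines in `.env` into exported variables visible to the `echo` and `kubectl` calls. A minimal standalone sketch of the pattern (a temp file stands in for the real `.env`):

```shell
# Write a throwaway .env-style file, source it with auto-export enabled,
# then read the variable back — the same pattern the tasks above use.
envfile="$(mktemp)"
printf 'NOBLE_VELERO_S3_BUCKET=velero-backups\n' > "$envfile"
set -a
. "$envfile"
set +a
echo "${NOBLE_VELERO_S3_BUCKET:-}"   # -> velero-backups
rm -f "$envfile"
```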
85 ansible/roles/noble_velero/tasks/main.yml Normal file
@@ -0,0 +1,85 @@
---
# Velero — S3 backup target + built-in CSI snapshots (Longhorn: label VolumeSnapshotClass per README).
- name: Apply velero namespace
  ansible.builtin.command:
    argv:
      - kubectl
      - apply
      - -f
      - "{{ noble_repo_root }}/clusters/noble/bootstrap/velero/namespace.yaml"
  environment:
    KUBECONFIG: "{{ noble_kubeconfig }}"
  when: noble_velero_install | default(false) | bool
  changed_when: true

- name: Include Velero settings from repository .env (S3 bucket, URL, credentials)
  ansible.builtin.include_tasks: from_env.yml
  when: noble_velero_install | default(false) | bool

- name: Require S3 bucket and endpoint for Velero
  ansible.builtin.assert:
    that:
      - noble_velero_s3_bucket | default('') | length > 0
      - noble_velero_s3_url | default('') | length > 0
    fail_msg: >-
      Set NOBLE_VELERO_S3_BUCKET and NOBLE_VELERO_S3_URL in .env, or noble_velero_s3_bucket / noble_velero_s3_url
      (e.g. -e ...), or group_vars when noble_velero_install is true.
  when: noble_velero_install | default(false) | bool

- name: Create velero-cloud-credentials from Ansible vars
  ansible.builtin.shell: |
    set -euo pipefail
    CLOUD="$(printf '[default]\naws_access_key_id=%s\naws_secret_access_key=%s\n' \
      "${AWS_ACCESS_KEY_ID}" "${AWS_SECRET_ACCESS_KEY}")"
    kubectl -n velero create secret generic velero-cloud-credentials \
      --from-literal=cloud="${CLOUD}" \
      --dry-run=client -o yaml | kubectl apply -f -
  environment:
    KUBECONFIG: "{{ noble_kubeconfig }}"
    AWS_ACCESS_KEY_ID: "{{ noble_velero_aws_access_key_id }}"
    AWS_SECRET_ACCESS_KEY: "{{ noble_velero_aws_secret_access_key }}"
  when:
    - noble_velero_install | default(false) | bool
    - noble_velero_aws_access_key_id | default('') | length > 0
    - noble_velero_aws_secret_access_key | default('') | length > 0
  no_log: true
  changed_when: true

- name: Check velero-cloud-credentials Secret
  ansible.builtin.command:
    argv:
      - kubectl
      - -n
      - velero
      - get
      - secret
      - velero-cloud-credentials
  environment:
    KUBECONFIG: "{{ noble_kubeconfig }}"
  register: noble_velero_secret_check
  failed_when: false
  changed_when: false
  when: noble_velero_install | default(false) | bool

- name: Require velero-cloud-credentials before Helm
  ansible.builtin.assert:
    that:
      - noble_velero_secret_check.rc == 0
    fail_msg: >-
      Velero needs Secret velero/velero-cloud-credentials (key cloud). Set NOBLE_VELERO_AWS_ACCESS_KEY_ID and
      NOBLE_VELERO_AWS_SECRET_ACCESS_KEY in .env, or noble_velero_aws_* extra-vars, or create the Secret manually
      (see clusters/noble/bootstrap/velero/README.md).
  when: noble_velero_install | default(false) | bool

- name: Optional object prefix argv for Helm
  ansible.builtin.set_fact:
    noble_velero_helm_prefix_argv: >-
      {{ ['--set-string', 'configuration.backupStorageLocation[0].prefix=' ~ (noble_velero_s3_prefix | default(''))]
         if (noble_velero_s3_prefix | default('') | length > 0) else [] }}
  when: noble_velero_install | default(false) | bool

- name: Install Velero
  ansible.builtin.command:
    argv: >-
      {{ ['helm', 'upgrade', '--install', 'velero', 'vmware-tanzu/velero',
          '--namespace', 'velero', '--version', noble_velero_chart_version,
          '-f', noble_repo_root ~ '/clusters/noble/bootstrap/velero/values.yaml',
          '--set-string', 'configuration.backupStorageLocation[0].bucket=' ~ noble_velero_s3_bucket,
          '--set-string', 'configuration.backupStorageLocation[0].config.s3Url=' ~ noble_velero_s3_url,
          '--set-string', 'configuration.backupStorageLocation[0].config.region=' ~ noble_velero_s3_region,
          '--set-string', 'configuration.backupStorageLocation[0].config.s3ForcePathStyle=' ~ noble_velero_s3_force_path_style]
         + (noble_velero_helm_prefix_argv | default([])) + ['--wait'] }}
  environment:
    KUBECONFIG: "{{ noble_kubeconfig }}"
  when: noble_velero_install | default(false) | bool
  changed_when: true
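Both secret-creation paths (`.env`-driven and extra-vars-driven) build the same payload: an AWS-style INI profile stored under the Secret's `cloud` key. The format can be checked in isolation with dummy keys and no kubectl at all:

```shell
# Assemble the credentials payload exactly as the printf in the tasks does;
# the key values here are obviously fake.
CLOUD="$(printf '[default]\naws_access_key_id=%s\naws_secret_access_key=%s\n' \
  "AKIAEXAMPLE" "not-a-real-secret")"
printf '%s\n' "$CLOUD"
```

The first line must be `[default]` because Velero's AWS plugin reads the profile by that name unless configured otherwise.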
14 ansible/roles/proxmox_baseline/defaults/main.yml Normal file
@@ -0,0 +1,14 @@
---
proxmox_repo_debian_codename: "{{ ansible_facts['distribution_release'] | default('bookworm') }}"
proxmox_repo_disable_enterprise: true
proxmox_repo_disable_ceph_enterprise: true
proxmox_repo_enable_pve_no_subscription: true
proxmox_repo_enable_ceph_no_subscription: false

proxmox_no_subscription_notice_disable: true
proxmox_widget_toolkit_file: /usr/share/javascript/proxmox-widget-toolkit/proxmoxlib.js

# Bootstrap root SSH keys from the control machine so subsequent runs can use key auth.
proxmox_root_authorized_key_files:
  - "{{ lookup('env', 'HOME') }}/.ssh/id_ed25519.pub"
  - "{{ lookup('env', 'HOME') }}/.ssh/ansible.pub"
5 ansible/roles/proxmox_baseline/handlers/main.yml Normal file
@@ -0,0 +1,5 @@
---
- name: Restart pveproxy
  ansible.builtin.service:
    name: pveproxy
    state: restarted
100 ansible/roles/proxmox_baseline/tasks/main.yml Normal file
@@ -0,0 +1,100 @@
---
- name: Check configured local public key files
  ansible.builtin.stat:
    path: "{{ item }}"
  register: proxmox_root_pubkey_stats
  loop: "{{ proxmox_root_authorized_key_files }}"
  delegate_to: localhost
  become: false

- name: Fail when a configured local public key file is missing
  ansible.builtin.fail:
    msg: "Configured key file does not exist on the control host: {{ item.item }}"
  when: not item.stat.exists
  loop: "{{ proxmox_root_pubkey_stats.results }}"
  delegate_to: localhost
  become: false

- name: Ensure root authorized_keys contains configured public keys
  ansible.posix.authorized_key:
    user: root
    state: present
    key: "{{ lookup('ansible.builtin.file', item) }}"
    manage_dir: true
  loop: "{{ proxmox_root_authorized_key_files }}"

- name: Remove enterprise repository lines from /etc/apt/sources.list
  ansible.builtin.lineinfile:
    path: /etc/apt/sources.list
    regexp: ".*enterprise\\.proxmox\\.com.*"
    state: absent
  when:
    - proxmox_repo_disable_enterprise | bool or proxmox_repo_disable_ceph_enterprise | bool
  failed_when: false

- name: Find apt source files that contain Proxmox enterprise repositories
  ansible.builtin.find:
    paths: /etc/apt/sources.list.d
    file_type: file
    patterns:
      - "*.list"
      - "*.sources"
    contains: "enterprise\\.proxmox\\.com"
    use_regex: true
  register: proxmox_enterprise_repo_files
  when:
    - proxmox_repo_disable_enterprise | bool or proxmox_repo_disable_ceph_enterprise | bool

- name: Remove enterprise repository lines from apt source files
  ansible.builtin.lineinfile:
    path: "{{ item.path }}"
    regexp: ".*enterprise\\.proxmox\\.com.*"
    state: absent
  loop: "{{ proxmox_enterprise_repo_files.files | default([]) }}"
  when:
    - proxmox_repo_disable_enterprise | bool or proxmox_repo_disable_ceph_enterprise | bool

- name: Find apt source files that already contain pve-no-subscription
  ansible.builtin.find:
    paths: /etc/apt/sources.list.d
    file_type: file
    patterns:
      - "*.list"
      - "*.sources"
    contains: "pve-no-subscription"
    use_regex: false
  register: proxmox_no_sub_repo_files
  when: proxmox_repo_enable_pve_no_subscription | bool

- name: Ensure Proxmox no-subscription repository is configured when absent
  ansible.builtin.copy:
    dest: /etc/apt/sources.list.d/pve-no-subscription.list
    content: "deb http://download.proxmox.com/debian/pve {{ proxmox_repo_debian_codename }} pve-no-subscription\n"
    mode: "0644"
  when:
    - proxmox_repo_enable_pve_no_subscription | bool
    - (proxmox_no_sub_repo_files.matched | default(0) | int) == 0

- name: Remove duplicate pve-no-subscription.list when another source already provides it
  ansible.builtin.file:
    path: /etc/apt/sources.list.d/pve-no-subscription.list
    state: absent
  when:
    - proxmox_repo_enable_pve_no_subscription | bool
    - (proxmox_no_sub_repo_files.files | default([]) | map(attribute='path') | list | select('ne', '/etc/apt/sources.list.d/pve-no-subscription.list') | list | length) > 0

- name: Ensure Ceph no-subscription repository is configured
  ansible.builtin.copy:
    dest: /etc/apt/sources.list.d/ceph-no-subscription.list
    content: "deb http://download.proxmox.com/debian/ceph-{{ proxmox_repo_debian_codename }} {{ proxmox_repo_debian_codename }} no-subscription\n"
    mode: "0644"
  when: proxmox_repo_enable_ceph_no_subscription | bool

- name: Disable no-subscription pop-up in Proxmox UI
  ansible.builtin.replace:
    path: "{{ proxmox_widget_toolkit_file }}"
    regexp: "if \\(data\\.status !== 'Active'\\)"
    replace: "if (false)"
    backup: true
  when: proxmox_no_subscription_notice_disable | bool
  notify: Restart pveproxy
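The repository cleanup above boils down to deleting any line matching `enterprise\.proxmox\.com` from apt source files while leaving no-subscription entries alone. A self-contained sketch of that filter on a temp file (the file contents are made up):

```shell
# Create a sample sources file, then strip enterprise lines the way the
# lineinfile tasks do (grep -v instead of Ansible's regexp/state: absent).
src="$(mktemp)"
cat > "$src" <<'EOF'
deb https://enterprise.proxmox.com/debian/pve bookworm pve-enterprise
deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription
EOF
grep -v 'enterprise\.proxmox\.com' "$src" > "${src}.clean"
cat "${src}.clean"
rm -f "$src"
```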
7 ansible/roles/proxmox_cluster/defaults/main.yml Normal file
@@ -0,0 +1,7 @@
---
proxmox_cluster_enabled: true
proxmox_cluster_name: pve-cluster
proxmox_cluster_master: ""
proxmox_cluster_master_ip: ""
proxmox_cluster_force: false
proxmox_cluster_master_root_password: ""
63 ansible/roles/proxmox_cluster/tasks/main.yml Normal file
@@ -0,0 +1,63 @@
---
- name: Skip cluster role when disabled
  ansible.builtin.meta: end_host
  when: not (proxmox_cluster_enabled | bool)

- name: Check whether corosync cluster config exists
  ansible.builtin.stat:
    path: /etc/pve/corosync.conf
  register: proxmox_cluster_corosync_conf

- name: Set effective Proxmox cluster master
  ansible.builtin.set_fact:
    proxmox_cluster_master_effective: "{{ proxmox_cluster_master | default(groups['proxmox_hosts'][0], true) }}"

- name: Set effective Proxmox cluster master IP
  ansible.builtin.set_fact:
    proxmox_cluster_master_ip_effective: >-
      {{
        proxmox_cluster_master_ip
        | default(hostvars[proxmox_cluster_master_effective].ansible_host
                  | default(proxmox_cluster_master_effective), true)
      }}

- name: Create cluster on designated master
  ansible.builtin.command:
    cmd: "pvecm create {{ proxmox_cluster_name }}"
  when:
    - inventory_hostname == proxmox_cluster_master_effective
    - not proxmox_cluster_corosync_conf.stat.exists

- name: Ensure python3-pexpect is installed for password-based cluster join
  ansible.builtin.apt:
    name: python3-pexpect
    state: present
    update_cache: true
  when:
    - inventory_hostname != proxmox_cluster_master_effective
    - not proxmox_cluster_corosync_conf.stat.exists
    - proxmox_cluster_master_root_password | length > 0

- name: Join node to existing cluster (password provided)
  ansible.builtin.expect:
    command: >-
      pvecm add {{ proxmox_cluster_master_ip_effective }}
      {% if proxmox_cluster_force | bool %}--force{% endif %}
    responses:
      "Please enter superuser \\(root\\) password for '.*':": "{{ proxmox_cluster_master_root_password }}"
      "password:": "{{ proxmox_cluster_master_root_password }}"
  no_log: true
  when:
    - inventory_hostname != proxmox_cluster_master_effective
    - not proxmox_cluster_corosync_conf.stat.exists
    - proxmox_cluster_master_root_password | length > 0

- name: Join node to existing cluster (SSH trust/no prompt)
  ansible.builtin.command:
    cmd: >-
      pvecm add {{ proxmox_cluster_master_ip_effective }}
      {% if proxmox_cluster_force | bool %}--force{% endif %}
  when:
    - inventory_hostname != proxmox_cluster_master_effective
    - not proxmox_cluster_corosync_conf.stat.exists
    - proxmox_cluster_master_root_password | length == 0
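The two join tasks are mutually exclusive on whether `proxmox_cluster_master_root_password` is set: a non-empty password selects the `expect`-driven join, an empty one falls back to plain `pvecm add` over SSH trust. A toy helper (not part of the role) showing the same branch:

```shell
# Pick the join strategy the way the `when:` conditions above do:
# expect-driven pvecm when a root password is supplied, plain pvecm otherwise.
join_mode() {
  if [ -n "${1:-}" ]; then
    echo "password"     # would drive pvecm add via ansible.builtin.expect
  else
    echo "ssh-trust"    # would run pvecm add directly, relying on SSH trust
  fi
}
join_mode "hunter2"   # -> password
join_mode ""          # -> ssh-trust
```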
6 ansible/roles/proxmox_maintenance/defaults/main.yml Normal file
@@ -0,0 +1,6 @@
---
proxmox_upgrade_apt_cache_valid_time: 3600
proxmox_upgrade_autoremove: true
proxmox_upgrade_autoclean: true
proxmox_upgrade_reboot_if_required: true
proxmox_upgrade_reboot_timeout: 1800
30 ansible/roles/proxmox_maintenance/tasks/main.yml Normal file
@@ -0,0 +1,30 @@
---
- name: Refresh apt cache
  ansible.builtin.apt:
    update_cache: true
    cache_valid_time: "{{ proxmox_upgrade_apt_cache_valid_time }}"

- name: Upgrade Proxmox host packages
  ansible.builtin.apt:
    upgrade: dist

- name: Remove orphaned packages
  ansible.builtin.apt:
    autoremove: "{{ proxmox_upgrade_autoremove }}"

- name: Clean apt package cache
  ansible.builtin.apt:
    autoclean: "{{ proxmox_upgrade_autoclean }}"

- name: Check if reboot is required
  ansible.builtin.stat:
    path: /var/run/reboot-required
  register: proxmox_reboot_required_file

- name: Reboot when required by package upgrades
  ansible.builtin.reboot:
    reboot_timeout: "{{ proxmox_upgrade_reboot_timeout }}"
    msg: "Reboot initiated by Ansible Proxmox maintenance playbook"
  when:
    - proxmox_upgrade_reboot_if_required | bool
    - proxmox_reboot_required_file.stat.exists | default(false)
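The reboot gate above only fires when Debian's `/var/run/reboot-required` marker exists. The check is just a file-existence test, sketched here against a temp path so nothing actually reboots:

```shell
# needs_reboot mirrors the stat + when: condition above; a temp path stands
# in for /var/run/reboot-required.
marker="$(mktemp -u)"          # generate a path without creating the file
needs_reboot() { if [ -e "$1" ]; then echo yes; else echo no; fi; }
needs_reboot "$marker"         # -> no
touch "$marker"
needs_reboot "$marker"         # -> yes
rm -f "$marker"
```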
@@ -1,23 +0,0 @@ (deleted file)
---
# Defaults for proxmox_provision role

# Connection Details (fallbacks, but ideally inherited from inventory group_vars)
proxmox_api_host: "{{ ansible_host | default(inventory_hostname) }}"
proxmox_node: "{{ inventory_hostname }}"

# VM Details
vmid: 0 # 0 lets Proxmox choose next available, or specify fixed ID
vm_name: "new-vm"
template_name: "ubuntu-2204-cloud"
vm_memory: 2048
vm_cores: 2
vm_storage: "local-lvm"
vm_net_bridge: "vmbr0"

# Cloud Init / User Data
ci_user: "ubuntu"
# ssh_keys should be a list of public keys
ssh_keys: []

# State
vm_state: started
@@ -1,39 +0,0 @@ (deleted file)
---
- name: Provision VM from Template
  community.general.proxmox_kvm:
    api_host: "{{ proxmox_api_host }}"
    api_user: "{{ proxmox_api_user }}"
    api_password: "{{ proxmox_api_password }}"
    # Use remote host verification if you have valid certs, else ignore
    validate_certs: false

    node: "{{ proxmox_node }}"
    vmid: "{{ vmid if vmid | int > 0 else omit }}"
    name: "{{ vm_name }}"

    clone: "{{ template_name }}"
    full: true # Full clone

    cores: "{{ vm_cores }}"
    memory: "{{ vm_memory }}"

    storage: "{{ vm_storage }}"
    net:
      net0: "virtio,bridge={{ vm_net_bridge }}"

    # Cloud Init
    ciuser: "{{ ci_user }}"
    sshkeys: "{{ ssh_keys | join('\n') }}"
    ipconfig:
      ipconfig0: "ip=dhcp"

    state: "{{ vm_state }}"
  register: vm_provision_result

- name: Debug Provision Result
  debug:
    var: vm_provision_result

# Note: Waiting for SSH requires knowing the IP.
# If qemu-guest-agent is installed in the template, we can fetch it.
# Otherwise, we might need a fixed IP or DNS check.
@@ -1,41 +0,0 @@ (deleted file)
---
# Defaults for proxmox_template_manage role

# Target Proxmox Node (where commands run)
proxmox_node: "{{ inventory_hostname }}"

# Storage Pool for Disks
storage_pool: local-lvm

# Template ID and Name
template_id: 9000
template_name: ubuntu-2204-cloud-template

# Hardware Specs
template_memory: 2048
template_cores: 2

# Image Source
# Options: 'list' (use image_alias) or 'url' (use custom_image_url)
image_source_type: list

image_list:
  ubuntu-22.04:
    url: "https://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-amd64.img"
    filename: "ubuntu-22.04-server-cloudimg-amd64.img"
  ubuntu-24.04:
    url: "https://cloud-images.ubuntu.com/noble/current/noble-server-cloudimg-amd64.img"
    filename: "ubuntu-24.04-server-cloudimg-amd64.img"
  debian-12:
    url: "https://cloud.debian.org/images/cloud/bookworm/latest/debian-12-generic-amd64.qcow2"
    filename: "debian-12-generic-amd64.qcow2"

image_alias: ubuntu-22.04

custom_image_url: ""
custom_image_name: "custom-image.img"

# Cloud Init / SSH
# Optional: Embed a default admin key into the template
embed_admin_ssh_key: false
admin_ssh_key: ""
@@ -1,90 +0,0 @@ (deleted file)
---
- name: Resolve Image Variables (List)
  set_fact:
    _image_url: "{{ image_list[image_alias].url }}"
    _image_name: "{{ image_list[image_alias].filename }}"
  when: image_source_type == 'list'

- name: Resolve Image Variables (URL)
  set_fact:
    _image_url: "{{ custom_image_url }}"
    _image_name: "{{ custom_image_name }}"
  when: image_source_type == 'url'

- name: Check if template already exists
  command: "qm status {{ template_id }}"
  register: vm_status
  failed_when: false
  changed_when: false

- name: Fail if template ID exists
  fail:
    msg: "VM ID {{ template_id }} already exists. Please delete it or choose a different ID."
  when: vm_status.rc == 0

- name: Download Cloud Image
  get_url:
    url: "{{ _image_url }}"
    dest: "/tmp/{{ _image_name }}"
    mode: '0644'

- name: Install libguestfs-tools (for virt-customize if needed)
  apt:
    name: libguestfs-tools
    state: present
  ignore_errors: yes

- name: Create VM with hardware config
  command: >
    qm create {{ template_id }}
    --name "{{ template_name }}"
    --memory {{ template_memory }}
    --core {{ template_cores }}
    --net0 virtio,bridge=vmbr0
    --scsihw virtio-scsi-pci
    --ostype l26
    --serial0 socket --vga serial0

- name: Import Disk
  command: "qm importdisk {{ template_id }} /tmp/{{ _image_name }} {{ storage_pool }}"

- name: Attach Disk to SCSI
  command: "qm set {{ template_id }} --scsi0 {{ storage_pool }}:vm-{{ template_id }}-disk-0"

- name: Add Cloud-Init Drive
  command: "qm set {{ template_id }} --ide2 {{ storage_pool }}:cloudinit"

- name: Set Boot Order
  command: "qm set {{ template_id }} --boot c --bootdisk scsi0"

- name: Configure Cloud-Init (Optional Admin Key)
  block:
    - name: Prepare SSH Keys File
      copy:
        content: "{{ admin_ssh_key }}"
        dest: "/tmp/ssh_key_{{ template_id }}.pub"
        mode: '0600'

    - name: Set SSH Keys on Template
      command: "qm set {{ template_id }} --sshkeys /tmp/ssh_key_{{ template_id }}.pub"

    - name: Cleanup Key File
      file:
        path: "/tmp/ssh_key_{{ template_id }}.pub"
        state: absent
  when: embed_admin_ssh_key | bool and admin_ssh_key | length > 0

- name: Set Cloud-Init IP Config (DHCP)
  command: "qm set {{ template_id }} --ipconfig0 ip=dhcp"

- name: Resize Disk (to Minimum 10G)
  command: "qm resize {{ template_id }} scsi0 10G"
  ignore_errors: yes

- name: Convert to Template
  command: "qm template {{ template_id }}"

- name: Remove Downloaded Image
  file:
    path: "/tmp/{{ _image_name }}"
    state: absent
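The removed role resolved `image_alias` against `image_list` to pick a cloud-image URL and filename. A POSIX-shell analogue of that lookup, with URLs copied from the defaults above (the helper name is illustrative):

```shell
# Map an alias to its cloud-image URL, mirroring the set_fact lookup in the
# deleted tasks; unknown aliases yield an empty string.
resolve_image_url() {
  case "$1" in
    ubuntu-22.04) echo "https://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-amd64.img" ;;
    ubuntu-24.04) echo "https://cloud-images.ubuntu.com/noble/current/noble-server-cloudimg-amd64.img" ;;
    debian-12)    echo "https://cloud.debian.org/images/cloud/bookworm/latest/debian-12-generic-amd64.qcow2" ;;
    *)            echo "" ;;
  esac
}
resolve_image_url ubuntu-22.04
```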
@@ -1,58 +0,0 @@ (deleted file)
---
# Defaults for proxmox_vm role

# Action to perform: create_template, create_vm, delete_vm, backup_vm
proxmox_action: create_vm

# Common settings
storage_pool: Lithium
vmid: 9000
target_node: "{{ inventory_hostname }}"

# --- Template Creation Settings ---

# Image Source Selection
# Options: 'list' (use image_alias) or 'url' (use custom_image_url)
image_source_type: list

# Predefined Image List
# You can select these by setting image_alias
image_list:
  ubuntu-22.04:
    url: "https://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-amd64.img"
    filename: "ubuntu-22.04-server-cloudimg-amd64.img"
  ubuntu-24.04:
    url: "https://cloud-images.ubuntu.com/noble/current/noble-server-cloudimg-amd64.img"
    filename: "ubuntu-24.04-server-cloudimg-amd64.img"
  debian-12:
    url: "https://cloud.debian.org/images/cloud/bookworm/latest/debian-12-generic-amd64.qcow2"
    filename: "debian-12-generic-amd64.qcow2"

# Selection (Default)
image_alias: ubuntu-22.04

# Custom URL (Used if image_source_type is 'url')
custom_image_url: ""
custom_image_name: "custom-image.img"

# Template Config
template_name: ubuntu-cloud-template
memory: 2048
cores: 2

# --- SSH Key Configuration ---
# The Admin Key is always added
admin_ssh_key: "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAI..." # REPLACE THIS with your actual public key

# Additional keys (list of strings)
additional_ssh_keys: []

# --- Create VM Settings (Cloning) ---
new_vm_name: new-vm
clone_full: true # Full clone (independent) vs Linked clone
start_after_create: true

# --- Backup Settings ---
backup_mode: snapshot # snapshot, suspend, stop
backup_compress: zstd
backup_storage: Lithium
@@ -1,7 +0,0 @@ (deleted file)
---
- name: Create VM Backup
  command: >
    vzdump {{ vmid }}
    --mode {{ backup_mode }}
    --compress {{ backup_compress }}
    --storage {{ backup_storage }}
@@ -1,91 +0,0 @@ (deleted file)
---
- name: Resolve Image Variables (List)
  set_fact:
    _image_url: "{{ image_list[image_alias].url }}"
    _image_name: "{{ image_list[image_alias].filename }}"
  when: image_source_type == 'list'

- name: Resolve Image Variables (URL)
  set_fact:
    _image_url: "{{ custom_image_url }}"
    _image_name: "{{ custom_image_name }}"
  when: image_source_type == 'url'

- name: Check if template already exists
  command: "qm status {{ vmid }}"
  register: vm_status
  failed_when: false
  changed_when: false

- name: Fail if template ID exists
  fail:
    msg: "VM ID {{ vmid }} already exists. Please choose a different ID or delete the existing VM."
  when: vm_status.rc == 0

- name: Download Cloud Image
  get_url:
    url: "{{ _image_url }}"
    dest: "/tmp/{{ _image_name }}"
    mode: '0644'

- name: Install libguestfs-tools
  apt:
    name: libguestfs-tools
    state: present
  ignore_errors: yes

- name: Create VM with hardware config
  command: >
    qm create {{ vmid }}
    --name "{{ template_name }}"
    --memory {{ memory }}
    --core {{ cores }}
    --net0 virtio,bridge=vmbr0
    --scsihw virtio-scsi-pci
    --ostype l26
    --serial0 socket --vga serial0

- name: Import Disk
  command: "qm importdisk {{ vmid }} /tmp/{{ _image_name }} {{ storage_pool }}"

- name: Attach Disk to SCSI
  command: "qm set {{ vmid }} --scsi0 {{ storage_pool }}:vm-{{ vmid }}-disk-0"

- name: Add Cloud-Init Drive
  command: "qm set {{ vmid }} --ide2 {{ storage_pool }}:cloudinit"

- name: Set Boot Order
  command: "qm set {{ vmid }} --boot c --bootdisk scsi0"

- name: Prepare SSH Keys File
  copy:
    content: |
      {{ admin_ssh_key }}
      {% for key in additional_ssh_keys %}
      {{ key }}
      {% endfor %}
    dest: "/tmp/ssh_keys_{{ vmid }}.pub"
    mode: '0600'

- name: Configure Cloud-Init (SSH Keys, User, IP)
  command: >
    qm set {{ vmid }}
    --sshkeys /tmp/ssh_keys_{{ vmid }}.pub
    --ipconfig0 ip=dhcp

- name: Resize Disk (Default 10G)
  command: "qm resize {{ vmid }} scsi0 10G"
  ignore_errors: yes

- name: Convert to Template
  command: "qm template {{ vmid }}"

- name: Remove Downloaded Image
  file:
    path: "/tmp/{{ _image_name }}"
    state: absent

- name: Remove Temporary SSH Keys File
  file:
    path: "/tmp/ssh_keys_{{ vmid }}.pub"
    state: absent
@@ -1,11 +0,0 @@ (deleted file)
---
- name: Clone VM from Template
  command: >
    qm clone {{ vmid }} {{ new_vmid }}
    --name "{{ new_vm_name }}"
    --full {{ 1 if clone_full | bool else 0 }}
  register: clone_result

- name: Start VM (Optional)
  command: "qm start {{ new_vmid }}"
  when: start_after_create | default(false) | bool
@@ -1,7 +0,0 @@ (deleted file)
---
- name: Stop VM (Force Stop)
  command: "qm stop {{ vmid }}"
  ignore_errors: yes

- name: Destroy VM
  command: "qm destroy {{ vmid }} --purge"
@@ -1,3 +0,0 @@ (deleted file)
---
- name: Dispatch task based on action
  include_tasks: "{{ proxmox_action }}.yml"
3 ansible/roles/talos_bootstrap/defaults/main.yml Normal file
@@ -0,0 +1,3 @@
---
# Set **true** to run `talhelper genconfig -o out` under **talos/** (requires talhelper + talconfig).
noble_talos_genconfig: false
Some files were not shown because too many files have changed in this diff.