Enhance Ansible playbooks and documentation for Debian and Proxmox management. Add new playbooks for Debian hardening, maintenance, SSH key rotation, and Proxmox cluster setup. Update README.md with quick start instructions for Debian and Proxmox operations. Modify group_vars to include Argo CD application settings, improving deployment flexibility and clarity.
@@ -37,6 +37,14 @@ Copy **`.env.sample`** to **`.env`** at the repository root (`.env` is gitignore
 | [`playbooks/noble.yml`](playbooks/noble.yml) | Helm + `kubectl` platform (after Phase A). |
 | [`playbooks/post_deploy.yml`](playbooks/post_deploy.yml) | SOPS reminders and optional Argo root Application note. |
 | [`playbooks/talos_bootstrap.yml`](playbooks/talos_bootstrap.yml) | **`talhelper genconfig` only** (legacy shortcut; prefer **`talos_phase_a.yml`**). |
+| [`playbooks/debian_harden.yml`](playbooks/debian_harden.yml) | Baseline hardening for Debian servers (SSH/sysctl/fail2ban/unattended-upgrades). |
+| [`playbooks/debian_maintenance.yml`](playbooks/debian_maintenance.yml) | Debian maintenance run (apt upgrades, autoremove/autoclean, reboot when required). |
+| [`playbooks/debian_rotate_ssh_keys.yml`](playbooks/debian_rotate_ssh_keys.yml) | Rotate managed users' `authorized_keys`. |
+| [`playbooks/debian_ops.yml`](playbooks/debian_ops.yml) | Convenience pipeline: harden then maintenance for Debian servers. |
+| [`playbooks/proxmox_prepare.yml`](playbooks/proxmox_prepare.yml) | Configure Proxmox community repos and disable no-subscription UI warning. |
+| [`playbooks/proxmox_upgrade.yml`](playbooks/proxmox_upgrade.yml) | Proxmox maintenance run (apt dist-upgrade, cleanup, reboot when required). |
+| [`playbooks/proxmox_cluster.yml`](playbooks/proxmox_cluster.yml) | Create a Proxmox cluster on the master and join additional hosts. |
+| [`playbooks/proxmox_ops.yml`](playbooks/proxmox_ops.yml) | Convenience pipeline: prepare, upgrade, then cluster Proxmox hosts. |
 
 ```bash
 cd ansible
@@ -71,7 +79,7 @@ ansible-playbook playbooks/noble.yml --tags velero -e noble_velero_install=true
 
 ### Variables — `group_vars/all.yml` and role defaults
 
-- **`group_vars/all.yml`:** **`noble_newt_install`**, **`noble_velero_install`**, **`noble_cert_manager_require_cloudflare_secret`**, **`noble_k8s_api_server_override`**, **`noble_k8s_api_server_auto_fallback`**, **`noble_k8s_api_server_fallback`**, **`noble_skip_k8s_health_check`**
+- **`group_vars/all.yml`:** **`noble_newt_install`**, **`noble_velero_install`**, **`noble_cert_manager_require_cloudflare_secret`**, **`noble_argocd_apply_root_application`**, **`noble_k8s_api_server_override`**, **`noble_k8s_api_server_auto_fallback`**, **`noble_k8s_api_server_fallback`**, **`noble_skip_k8s_health_check`**
 - **`roles/noble_platform/defaults/main.yml`:** **`noble_apply_sops_secrets`**, **`noble_sops_age_key_file`** (SOPS secrets under **`clusters/noble/secrets/`**)
 
 ## Roles
@@ -84,6 +92,63 @@ ansible-playbook playbooks/noble.yml --tags velero -e noble_velero_install=true
 | `noble_landing_urls` | Writes **`ansible/output/noble-lab-ui-urls.md`** — URLs, service names, and (optional) Argo/Grafana passwords from Secrets |
 | `noble_post_deploy` | Post-install reminders |
 | `talos_bootstrap` | Genconfig-only (used by older playbook) |
+| `debian_baseline_hardening` | Baseline Debian hardening (SSH policy, sysctl profile, fail2ban, unattended upgrades) |
+| `debian_maintenance` | Routine Debian maintenance tasks (updates, cleanup, reboot-on-required) |
+| `debian_ssh_key_rotation` | Declarative `authorized_keys` rotation for server users |
+| `proxmox_baseline` | Proxmox repo prep (community repos) and no-subscription warning suppression |
+| `proxmox_maintenance` | Proxmox package maintenance (dist-upgrade, cleanup, reboot-on-required) |
+| `proxmox_cluster` | Proxmox cluster bootstrap/join automation using `pvecm` |
+
+## Debian server ops quick start
+
+These playbooks are separate from the Talos/noble flow and target hosts in `debian_servers`.
+
+1. Copy `inventory/debian.example.yml` to `inventory/debian.yml` and update hosts/users.
+2. Update `group_vars/debian_servers.yml` with your allowed SSH users and real public keys.
+3. Run with the Debian inventory:
+
+```bash
+cd ansible
+ansible-playbook -i inventory/debian.yml playbooks/debian_harden.yml
+ansible-playbook -i inventory/debian.yml playbooks/debian_rotate_ssh_keys.yml
+ansible-playbook -i inventory/debian.yml playbooks/debian_maintenance.yml
+```
+
+Or run the combined maintenance pipeline:
+
+```bash
+cd ansible
+ansible-playbook -i inventory/debian.yml playbooks/debian_ops.yml
+```
+
+## Proxmox host + cluster quick start
+
+These playbooks are separate from the Talos/noble flow and target hosts in `proxmox_hosts`.
+
+1. Copy `inventory/proxmox.example.yml` to `inventory/proxmox.yml` and update hosts/users.
+2. Update `group_vars/proxmox_hosts.yml` with your cluster name (`proxmox_cluster_name`), chosen cluster master, and root public key file paths to install.
+3. First run (no SSH keys yet): use `--ask-pass` **or** set `ansible_password` (prefer Ansible Vault). Keep `ansible_ssh_common_args: "-o StrictHostKeyChecking=accept-new"` in inventory for first-contact hosts.
+4. Run prepare first to install your public keys on each host, then continue:
+
+```bash
+cd ansible
+ansible-playbook -i inventory/proxmox.yml playbooks/proxmox_prepare.yml --ask-pass
+ansible-playbook -i inventory/proxmox.yml playbooks/proxmox_upgrade.yml
+ansible-playbook -i inventory/proxmox.yml playbooks/proxmox_cluster.yml
+```
+
+After `proxmox_prepare.yml` finishes, SSH key auth should work for root (keys from `proxmox_root_authorized_key_files`), so `--ask-pass` is usually no longer needed.
+
+If `pvecm add` still prompts for the master root password during join, set `proxmox_cluster_master_root_password` (prefer Vault) to run join non-interactively.
+
+Changing `proxmox_cluster_name` only affects new cluster creation; it does not rename an already-created cluster.
+
+Or run the full Proxmox pipeline:
+
+```bash
+cd ansible
+ansible-playbook -i inventory/proxmox.yml playbooks/proxmox_ops.yml
+```
 
 ## Migrating from Argo-managed `noble-platform`
@@ -21,3 +21,6 @@ noble_cert_manager_require_cloudflare_secret: true
 # Velero — set **noble_velero_install: true** plus S3 bucket/URL (and credentials — see clusters/noble/bootstrap/velero/README.md)
 noble_velero_install: false
 
+# Argo CD — apply app-of-apps root Application (clusters/noble/bootstrap/argocd/root-application.yaml). Set false to skip.
+noble_argocd_apply_root_application: true
ansible/group_vars/debian_servers.yml (new file, 12 lines)
@@ -0,0 +1,12 @@
+---
+# Hardened SSH settings
+debian_baseline_ssh_allow_users:
+  - admin
+
+# Example key rotation entries. Replace with your real users and keys.
+debian_ssh_rotation_users:
+  - name: admin
+    home: /home/admin
+    state: present
+    keys:
+      - "ssh-ed25519 AAAAEXAMPLE_REPLACE_ME admin@workstation"
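The placeholder key above must be replaced with a real public key before the rotation playbook is useful. A minimal sketch of minting one (the scratch path and `admin@workstation` comment are illustrative, not part of the repo):

```shell
# Generate a fresh ed25519 keypair in a scratch directory (illustrative paths).
dir=$(mktemp -d)
ssh-keygen -t ed25519 -N '' -C admin@workstation -f "$dir/id_ed25519" -q
# The single line in the .pub file is what goes into debian_ssh_rotation_users[].keys.
awk '{print $1}' "$dir/id_ed25519.pub"
```

The key type printed should be `ssh-ed25519`; paste the whole `.pub` line, not just the type, into the vars file.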
ansible/group_vars/proxmox_hosts.yml (new file, 37 lines)
@@ -0,0 +1,37 @@
+---
+# Proxmox repositories
+proxmox_repo_debian_codename: trixie
+proxmox_repo_disable_enterprise: true
+proxmox_repo_disable_ceph_enterprise: true
+proxmox_repo_enable_pve_no_subscription: true
+proxmox_repo_enable_ceph_no_subscription: true
+
+# Suppress "No valid subscription" warning in UI
+proxmox_no_subscription_notice_disable: true
+
+# Public keys to install for root on each Proxmox host.
+proxmox_root_authorized_key_files:
+  - "{{ lookup('env', 'HOME') }}/.ssh/id_ed25519.pub"
+  - "{{ lookup('env', 'HOME') }}/.ssh/ansible.pub"
+
+# Package upgrade/reboot policy
+proxmox_upgrade_apt_cache_valid_time: 3600
+proxmox_upgrade_autoremove: true
+proxmox_upgrade_autoclean: true
+proxmox_upgrade_reboot_if_required: true
+proxmox_upgrade_reboot_timeout: 1800
+
+# Cluster settings
+proxmox_cluster_enabled: true
+proxmox_cluster_name: atomic-hub
+
+# Bootstrap host name from inventory (first host by default if empty)
+proxmox_cluster_master: ""
+
+# Optional explicit IP/FQDN for joining; leave empty to use ansible_host of master
+proxmox_cluster_master_ip: ""
+proxmox_cluster_force: false
+
+# Optional: use only for first cluster joins when inter-node SSH trust is not established.
+# Prefer storing with Ansible Vault if you set this.
+proxmox_cluster_master_root_password: ""
ansible/inventory/debian.example.yml (new file, 11 lines)
@@ -0,0 +1,11 @@
+---
+all:
+  children:
+    debian_servers:
+      hosts:
+        debian-01:
+          ansible_host: 192.168.50.101
+          ansible_user: admin
+        debian-02:
+          ansible_host: 192.168.50.102
+          ansible_user: admin
ansible/inventory/proxmox.example.yml (new file, 24 lines)
@@ -0,0 +1,24 @@
+---
+all:
+  children:
+    proxmox_hosts:
+      vars:
+        ansible_ssh_common_args: "-o StrictHostKeyChecking=accept-new"
+      hosts:
+        helium:
+          ansible_host: 192.168.1.100
+          ansible_user: root
+          # First run without SSH keys:
+          # ansible_password: "{{ vault_proxmox_root_password }}"
+        neon:
+          ansible_host: 192.168.1.90
+          ansible_user: root
+          # ansible_password: "{{ vault_proxmox_root_password }}"
+        argon:
+          ansible_host: 192.168.1.80
+          ansible_user: root
+          # ansible_password: "{{ vault_proxmox_root_password }}"
+        krypton:
+          ansible_host: 192.168.1.70
+          ansible_user: root
+          # ansible_password: "{{ vault_proxmox_root_password }}"
ansible/inventory/proxmox.yml (new file, 24 lines)
@@ -0,0 +1,24 @@
+---
+all:
+  children:
+    proxmox_hosts:
+      vars:
+        ansible_ssh_common_args: "-o StrictHostKeyChecking=accept-new"
+      hosts:
+        helium:
+          ansible_host: 192.168.1.100
+          ansible_user: root
+          # First run without SSH keys:
+          # ansible_password: "{{ vault_proxmox_root_password }}"
+        neon:
+          ansible_host: 192.168.1.90
+          ansible_user: root
+          # ansible_password: "{{ vault_proxmox_root_password }}"
+        argon:
+          ansible_host: 192.168.1.80
+          ansible_user: root
+          # ansible_password: "{{ vault_proxmox_root_password }}"
+        krypton:
+          ansible_host: 192.168.1.70
+          ansible_user: root
+          # ansible_password: "{{ vault_proxmox_root_password }}"
ansible/playbooks/debian_harden.yml (new file, 8 lines)
@@ -0,0 +1,8 @@
+---
+- name: Debian server baseline hardening
+  hosts: debian_servers
+  become: true
+  gather_facts: true
+  roles:
+    - role: debian_baseline_hardening
+      tags: [hardening, baseline]
ansible/playbooks/debian_maintenance.yml (new file, 8 lines)
@@ -0,0 +1,8 @@
+---
+- name: Debian maintenance (updates + reboot handling)
+  hosts: debian_servers
+  become: true
+  gather_facts: true
+  roles:
+    - role: debian_maintenance
+      tags: [maintenance, updates]
ansible/playbooks/debian_ops.yml (new file, 3 lines)
@@ -0,0 +1,3 @@
+---
+- import_playbook: debian_harden.yml
+- import_playbook: debian_maintenance.yml
ansible/playbooks/debian_rotate_ssh_keys.yml (new file, 8 lines)
@@ -0,0 +1,8 @@
+---
+- name: Debian SSH key rotation
+  hosts: debian_servers
+  become: true
+  gather_facts: false
+  roles:
+    - role: debian_ssh_key_rotation
+      tags: [ssh, ssh_keys, rotation]
@@ -113,6 +113,7 @@
       tags: [always]
 
     # talosctl kubeconfig often sets server to the VIP; off-LAN you can reach a control-plane IP but not 192.168.50.230.
+    # kubectl stderr is often "The connection to the server ... was refused" (no substring "connection refused").
     - name: Auto-fallback API server when VIP is unreachable (temp kubeconfig)
       tags: [always]
       when:
@@ -120,8 +121,7 @@
         - noble_k8s_api_server_override | default('') | length == 0
         - not (noble_skip_k8s_health_check | default(false) | bool)
         - (noble_k8s_health_first.rc | default(1)) != 0 or (noble_k8s_health_first.stdout | default('') | trim) != 'ok'
-        - ('network is unreachable' in (noble_k8s_health_first.stderr | default('') | lower)) or
-          ('no route to host' in (noble_k8s_health_first.stderr | default('') | lower))
+        - (((noble_k8s_health_first.stderr | default('')) ~ (noble_k8s_health_first.stdout | default(''))) | lower) is search('network is unreachable|no route to host|connection refused|was refused', multiline=False)
       block:
         - name: Ensure temp dir for kubeconfig auto-fallback
           ansible.builtin.file:
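The widened condition above now also matches kubectl's "was refused" phrasing. The same alternation can be sanity-checked outside Ansible; the sample message below is illustrative, not captured output:

```shell
# Typical kubectl stderr when the VIP is down (sample text).
msg='The connection to the server 192.168.50.230:6443 was refused - did you specify the right host or port?'
# Lowercase, then apply the same regex alternation the task's search() uses.
printf '%s' "$msg" | tr '[:upper:]' '[:lower:]' \
  | grep -Eq 'network is unreachable|no route to host|connection refused|was refused' \
  && echo match
```

Note the message contains "was refused" but not the substring "connection refused" — which is exactly why the extra alternative was added.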
ansible/playbooks/proxmox_cluster.yml (new file, 9 lines)
@@ -0,0 +1,9 @@
+---
+- name: Proxmox cluster bootstrap/join
+  hosts: proxmox_hosts
+  become: true
+  gather_facts: false
+  serial: 1
+  roles:
+    - role: proxmox_cluster
+      tags: [proxmox, cluster]
ansible/playbooks/proxmox_ops.yml (new file, 4 lines)
@@ -0,0 +1,4 @@
+---
+- import_playbook: proxmox_prepare.yml
+- import_playbook: proxmox_upgrade.yml
+- import_playbook: proxmox_cluster.yml
ansible/playbooks/proxmox_prepare.yml (new file, 8 lines)
@@ -0,0 +1,8 @@
+---
+- name: Proxmox host preparation (community repos + no-subscription notice)
+  hosts: proxmox_hosts
+  become: true
+  gather_facts: true
+  roles:
+    - role: proxmox_baseline
+      tags: [proxmox, prepare, repos, ui]
ansible/playbooks/proxmox_upgrade.yml (new file, 9 lines)
@@ -0,0 +1,9 @@
+---
+- name: Proxmox host maintenance (upgrade to latest)
+  hosts: proxmox_hosts
+  become: true
+  gather_facts: true
+  serial: 1
+  roles:
+    - role: proxmox_maintenance
+      tags: [proxmox, maintenance, updates]
ansible/roles/debian_baseline_hardening/defaults/main.yml (new file, 39 lines)
@@ -0,0 +1,39 @@
+---
+# Update apt metadata only when stale (seconds)
+debian_baseline_apt_cache_valid_time: 3600
+
+# Core host hardening packages
+debian_baseline_packages:
+  - unattended-upgrades
+  - apt-listchanges
+  - fail2ban
+  - needrestart
+  - sudo
+  - ca-certificates
+
+# SSH hardening controls
+debian_baseline_ssh_permit_root_login: "no"
+debian_baseline_ssh_password_authentication: "no"
+debian_baseline_ssh_pubkey_authentication: "yes"
+debian_baseline_ssh_x11_forwarding: "no"
+debian_baseline_ssh_max_auth_tries: 3
+debian_baseline_ssh_client_alive_interval: 300
+debian_baseline_ssh_client_alive_count_max: 2
+debian_baseline_ssh_allow_users: []
+
+# unattended-upgrades controls
+debian_baseline_enable_unattended_upgrades: true
+debian_baseline_unattended_auto_upgrade: "1"
+debian_baseline_unattended_update_lists: "1"
+
+# Kernel and network hardening sysctls
+debian_baseline_sysctl_settings:
+  net.ipv4.conf.all.accept_redirects: "0"
+  net.ipv4.conf.default.accept_redirects: "0"
+  net.ipv4.conf.all.send_redirects: "0"
+  net.ipv4.conf.default.send_redirects: "0"
+  net.ipv4.conf.all.log_martians: "1"
+  net.ipv4.conf.default.log_martians: "1"
+  net.ipv4.tcp_syncookies: "1"
+  net.ipv6.conf.all.accept_redirects: "0"
+  net.ipv6.conf.default.accept_redirects: "0"
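Each key in `debian_baseline_sysctl_settings` can be read back on a target host through `/proc/sys` (dots map to path separators) to confirm the applied value — a small sketch using `tcp_syncookies` as the example:

```shell
# net.ipv4.tcp_syncookies -> /proc/sys/net/ipv4/tcp_syncookies
key=net.ipv4.tcp_syncookies
path=/proc/sys/$(printf '%s' "$key" | tr '.' '/')
# On a Linux host this prints the readable path; elsewhere it reports it missing.
[ -r "$path" ] && echo "readable: $path" || echo "missing: $path"
```

`sysctl -n "$key"` gives the same value without the path translation.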
ansible/roles/debian_baseline_hardening/handlers/main.yml (new file, 12 lines)
@@ -0,0 +1,12 @@
+---
+- name: Restart ssh
+  ansible.builtin.service:
+    name: ssh
+    state: restarted
+
+- name: Reload sysctl
+  ansible.builtin.command:
+    argv:
+      - sysctl
+      - --system
+  changed_when: true
ansible/roles/debian_baseline_hardening/tasks/main.yml (new file, 52 lines)
@@ -0,0 +1,52 @@
+---
+- name: Refresh apt cache
+  ansible.builtin.apt:
+    update_cache: true
+    cache_valid_time: "{{ debian_baseline_apt_cache_valid_time }}"
+
+- name: Install baseline hardening packages
+  ansible.builtin.apt:
+    name: "{{ debian_baseline_packages }}"
+    state: present
+
+- name: Configure unattended-upgrades auto settings
+  ansible.builtin.copy:
+    dest: /etc/apt/apt.conf.d/20auto-upgrades
+    mode: "0644"
+    content: |
+      APT::Periodic::Update-Package-Lists "{{ debian_baseline_unattended_update_lists }}";
+      APT::Periodic::Unattended-Upgrade "{{ debian_baseline_unattended_auto_upgrade }}";
+  when: debian_baseline_enable_unattended_upgrades | bool
+
+- name: Configure SSH hardening options
+  ansible.builtin.copy:
+    dest: /etc/ssh/sshd_config.d/99-hardening.conf
+    mode: "0644"
+    content: |
+      PermitRootLogin {{ debian_baseline_ssh_permit_root_login }}
+      PasswordAuthentication {{ debian_baseline_ssh_password_authentication }}
+      PubkeyAuthentication {{ debian_baseline_ssh_pubkey_authentication }}
+      X11Forwarding {{ debian_baseline_ssh_x11_forwarding }}
+      MaxAuthTries {{ debian_baseline_ssh_max_auth_tries }}
+      ClientAliveInterval {{ debian_baseline_ssh_client_alive_interval }}
+      ClientAliveCountMax {{ debian_baseline_ssh_client_alive_count_max }}
+      {% if debian_baseline_ssh_allow_users | length > 0 %}
+      AllowUsers {{ debian_baseline_ssh_allow_users | join(' ') }}
+      {% endif %}
+  notify: Restart ssh
+
+- name: Configure baseline sysctls
+  ansible.builtin.copy:
+    dest: /etc/sysctl.d/99-hardening.conf
+    mode: "0644"
+    content: |
+      {% for key, value in debian_baseline_sysctl_settings.items() %}
+      {{ key }} = {{ value }}
+      {% endfor %}
+  notify: Reload sysctl
+
+- name: Ensure fail2ban service is enabled
+  ansible.builtin.service:
+    name: fail2ban
+    enabled: true
+    state: started
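With the role's default values, the SSH task above renders a drop-in like the one sketched below (scratch path is illustrative; on a real host, run `sshd -t` before trusting a new drop-in, since a bad directive can lock you out):

```shell
# Render the drop-in the role would produce with its default values (illustrative).
dir=$(mktemp -d)
cat > "$dir/99-hardening.conf" <<'EOF'
PermitRootLogin no
PasswordAuthentication no
PubkeyAuthentication yes
X11Forwarding no
MaxAuthTries 3
ClientAliveInterval 300
ClientAliveCountMax 2
EOF
# Seven directives; AllowUsers only appears when the allow-list is non-empty.
grep -c '' "$dir/99-hardening.conf"
```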
ansible/roles/debian_maintenance/defaults/main.yml (new file, 7 lines)
@@ -0,0 +1,7 @@
+---
+debian_maintenance_apt_cache_valid_time: 3600
+debian_maintenance_upgrade_type: dist
+debian_maintenance_autoremove: true
+debian_maintenance_autoclean: true
+debian_maintenance_reboot_if_required: true
+debian_maintenance_reboot_timeout: 1800
ansible/roles/debian_maintenance/tasks/main.yml (new file, 30 lines)
@@ -0,0 +1,30 @@
+---
+- name: Refresh apt cache
+  ansible.builtin.apt:
+    update_cache: true
+    cache_valid_time: "{{ debian_maintenance_apt_cache_valid_time }}"
+
+- name: Upgrade Debian packages
+  ansible.builtin.apt:
+    upgrade: "{{ debian_maintenance_upgrade_type }}"
+
+- name: Remove orphaned packages
+  ansible.builtin.apt:
+    autoremove: "{{ debian_maintenance_autoremove }}"
+
+- name: Clean apt package cache
+  ansible.builtin.apt:
+    autoclean: "{{ debian_maintenance_autoclean }}"
+
+- name: Check if reboot is required
+  ansible.builtin.stat:
+    path: /var/run/reboot-required
+  register: debian_maintenance_reboot_required_file
+
+- name: Reboot when required by package updates
+  ansible.builtin.reboot:
+    reboot_timeout: "{{ debian_maintenance_reboot_timeout }}"
+    msg: "Reboot initiated by Ansible maintenance playbook"
+  when:
+    - debian_maintenance_reboot_if_required | bool
+    - debian_maintenance_reboot_required_file.stat.exists | default(false)
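The reboot gate is nothing more than a check for the flag file apt drops after kernel/libc updates. A stand-alone sketch of that behaviour, with a scratch path standing in for `/var/run/reboot-required`:

```shell
# $flag stands in for /var/run/reboot-required (assumption: scratch path, not the real one).
flag=$(mktemp -d)/reboot-required
# No flag yet: the reboot task would be skipped.
[ -e "$flag" ] && echo reboot || echo skip
# apt creates the flag when a reboot is needed; simulate that.
touch "$flag"
[ -e "$flag" ] && echo reboot || echo skip
```

First check prints `skip`, second prints `reboot` — mirroring the `stat` + `when:` pair above.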
ansible/roles/debian_ssh_key_rotation/defaults/main.yml (new file, 10 lines)
@@ -0,0 +1,10 @@
+---
+# List of users to manage keys for.
+# Example:
+# debian_ssh_rotation_users:
+#   - name: deploy
+#     home: /home/deploy
+#     state: present
+#     keys:
+#       - "ssh-ed25519 AAAA... deploy@laptop"
+debian_ssh_rotation_users: []
ansible/roles/debian_ssh_key_rotation/tasks/main.yml (new file, 50 lines)
@@ -0,0 +1,50 @@
+---
+- name: Validate SSH key rotation inputs
+  ansible.builtin.assert:
+    that:
+      - item.name is defined
+      - item.home is defined
+      - (item.state | default('present')) in ['present', 'absent']
+      - (item.state | default('present')) == 'absent' or (item['keys'] is defined and item['keys'] | length > 0)
+    fail_msg: >-
+      Each entry in debian_ssh_rotation_users must include name, home, and either:
+      state=absent, or keys with at least one SSH public key.
+  loop: "{{ debian_ssh_rotation_users }}"
+  loop_control:
+    label: "{{ item.name | default('unknown') }}"
+
+- name: Ensure ~/.ssh exists for managed users
+  ansible.builtin.file:
+    path: "{{ item.home }}/.ssh"
+    state: directory
+    owner: "{{ item.name }}"
+    group: "{{ item.name }}"
+    mode: "0700"
+  loop: "{{ debian_ssh_rotation_users }}"
+  loop_control:
+    label: "{{ item.name }}"
+  when: (item.state | default('present')) == 'present'
+
+- name: Rotate authorized_keys for managed users
+  ansible.builtin.copy:
+    dest: "{{ item.home }}/.ssh/authorized_keys"
+    owner: "{{ item.name }}"
+    group: "{{ item.name }}"
+    mode: "0600"
+    content: |
+      {% for key in item['keys'] %}
+      {{ key }}
+      {% endfor %}
+  loop: "{{ debian_ssh_rotation_users }}"
+  loop_control:
+    label: "{{ item.name }}"
+  when: (item.state | default('present')) == 'present'
+
+- name: Remove authorized_keys for users marked absent
+  ansible.builtin.file:
+    path: "{{ item.home }}/.ssh/authorized_keys"
+    state: absent
+  loop: "{{ debian_ssh_rotation_users }}"
+  loop_control:
+    label: "{{ item.name }}"
+  when: (item.state | default('present')) == 'absent'
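The rotation task simply overwrites `authorized_keys` with the listed keys, one per line, mode `0600` — full replacement rather than append, which is what makes it a rotation. A stand-alone rendering of the same result (key strings are samples, not real keys):

```shell
# Render an authorized_keys file the way the role's template loop does (sample keys).
dir=$(mktemp -d)
printf '%s\n' \
  'ssh-ed25519 AAAASAMPLE1 admin@workstation' \
  'ssh-ed25519 AAAASAMPLE2 admin@laptop' > "$dir/authorized_keys"
chmod 600 "$dir/authorized_keys"
# One line per key entry.
grep -c '' "$dir/authorized_keys"
```

Any key not in the list disappears on the next run — keep the list authoritative.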
ansible/roles/noble_argocd/defaults/main.yml (new file, 4 lines)
@@ -0,0 +1,4 @@
+---
+# When true, applies clusters/noble/bootstrap/argocd/root-application.yaml (app-of-apps).
+# Edit spec.source.repoURL in that file if your Git remote differs.
+noble_argocd_apply_root_application: false
@@ -15,6 +15,20 @@
       - -f
       - "{{ noble_repo_root }}/clusters/noble/bootstrap/argocd/values.yaml"
       - --wait
+      - --timeout
+      - 15m
   environment:
     KUBECONFIG: "{{ noble_kubeconfig }}"
   changed_when: true
+
+- name: Apply Argo CD root Application (app-of-apps)
+  ansible.builtin.command:
+    argv:
+      - kubectl
+      - apply
+      - -f
+      - "{{ noble_repo_root }}/clusters/noble/bootstrap/argocd/root-application.yaml"
+  environment:
+    KUBECONFIG: "{{ noble_kubeconfig }}"
+  when: noble_argocd_apply_root_application | default(false) | bool
+  changed_when: true
@@ -9,5 +9,6 @@
 - name: Argo CD optional root Application (empty app-of-apps)
   ansible.builtin.debug:
     msg: >-
-      Optional: kubectl apply -f clusters/noble/bootstrap/argocd/root-application.yaml
-      after editing repoURL. Core workloads are not synced by Argo — see clusters/noble/apps/README.md
+      App-of-apps: noble.yml applies root-application.yaml when noble_argocd_apply_root_application is true
+      (group_vars/all.yml). Otherwise: kubectl apply -f clusters/noble/bootstrap/argocd/root-application.yaml
+      after editing spec.source.repoURL. Core platform is Ansible — see clusters/noble/apps/README.md
ansible/roles/proxmox_baseline/defaults/main.yml (new file, 14 lines)
@@ -0,0 +1,14 @@
+---
+proxmox_repo_debian_codename: "{{ ansible_facts['distribution_release'] | default('bookworm') }}"
+proxmox_repo_disable_enterprise: true
+proxmox_repo_disable_ceph_enterprise: true
+proxmox_repo_enable_pve_no_subscription: true
+proxmox_repo_enable_ceph_no_subscription: false
+
+proxmox_no_subscription_notice_disable: true
+proxmox_widget_toolkit_file: /usr/share/javascript/proxmox-widget-toolkit/proxmoxlib.js
+
+# Bootstrap root SSH keys from the control machine so subsequent runs can use key auth.
+proxmox_root_authorized_key_files:
+  - "{{ lookup('env', 'HOME') }}/.ssh/id_ed25519.pub"
+  - "{{ lookup('env', 'HOME') }}/.ssh/ansible.pub"
ansible/roles/proxmox_baseline/handlers/main.yml (new file, 5 lines)
@@ -0,0 +1,5 @@
+---
+- name: Restart pveproxy
+  ansible.builtin.service:
+    name: pveproxy
+    state: restarted
ansible/roles/proxmox_baseline/tasks/main.yml (new file)
@@ -0,0 +1,100 @@
---
- name: Check configured local public key files
  ansible.builtin.stat:
    path: "{{ item }}"
  register: proxmox_root_pubkey_stats
  loop: "{{ proxmox_root_authorized_key_files }}"
  delegate_to: localhost
  become: false

- name: Fail when a configured local public key file is missing
  ansible.builtin.fail:
    msg: "Configured key file does not exist on the control host: {{ item.item }}"
  when: not item.stat.exists
  loop: "{{ proxmox_root_pubkey_stats.results }}"
  delegate_to: localhost
  become: false

- name: Ensure root authorized_keys contains configured public keys
  ansible.posix.authorized_key:
    user: root
    state: present
    key: "{{ lookup('ansible.builtin.file', item) }}"
    manage_dir: true
  loop: "{{ proxmox_root_authorized_key_files }}"

- name: Remove enterprise repository lines from /etc/apt/sources.list
  ansible.builtin.lineinfile:
    path: /etc/apt/sources.list
    regexp: ".*enterprise\\.proxmox\\.com.*"
    state: absent
  when:
    - proxmox_repo_disable_enterprise | bool or proxmox_repo_disable_ceph_enterprise | bool
  failed_when: false

- name: Find apt source files that contain Proxmox enterprise repositories
  ansible.builtin.find:
    paths: /etc/apt/sources.list.d
    file_type: file
    patterns:
      - "*.list"
      - "*.sources"
    contains: "enterprise\\.proxmox\\.com"
    use_regex: true
  register: proxmox_enterprise_repo_files
  when:
    - proxmox_repo_disable_enterprise | bool or proxmox_repo_disable_ceph_enterprise | bool

- name: Remove enterprise repository lines from apt source files
  ansible.builtin.lineinfile:
    path: "{{ item.path }}"
    regexp: ".*enterprise\\.proxmox\\.com.*"
    state: absent
  loop: "{{ proxmox_enterprise_repo_files.files | default([]) }}"
  when:
    - proxmox_repo_disable_enterprise | bool or proxmox_repo_disable_ceph_enterprise | bool

- name: Find apt source files that already contain pve-no-subscription
  ansible.builtin.find:
    paths: /etc/apt/sources.list.d
    file_type: file
    patterns:
      - "*.list"
      - "*.sources"
    contains: "pve-no-subscription"
    use_regex: false
  register: proxmox_no_sub_repo_files
  when: proxmox_repo_enable_pve_no_subscription | bool

- name: Ensure Proxmox no-subscription repository is configured when absent
  ansible.builtin.copy:
    dest: /etc/apt/sources.list.d/pve-no-subscription.list
    content: "deb http://download.proxmox.com/debian/pve {{ proxmox_repo_debian_codename }} pve-no-subscription\n"
    mode: "0644"
  when:
    - proxmox_repo_enable_pve_no_subscription | bool
    - (proxmox_no_sub_repo_files.matched | default(0) | int) == 0

- name: Remove duplicate pve-no-subscription.list when another source already provides it
  ansible.builtin.file:
    path: /etc/apt/sources.list.d/pve-no-subscription.list
    state: absent
  when:
    - proxmox_repo_enable_pve_no_subscription | bool
    - (proxmox_no_sub_repo_files.files | default([]) | map(attribute='path') | list | select('ne', '/etc/apt/sources.list.d/pve-no-subscription.list') | list | length) > 0

- name: Ensure Ceph no-subscription repository is configured
  ansible.builtin.copy:
    dest: /etc/apt/sources.list.d/ceph-no-subscription.list
    content: "deb http://download.proxmox.com/debian/ceph-{{ proxmox_repo_debian_codename }} {{ proxmox_repo_debian_codename }} no-subscription\n"
    mode: "0644"
  when: proxmox_repo_enable_ceph_no_subscription | bool

- name: Disable no-subscription pop-up in Proxmox UI
  ansible.builtin.replace:
    path: "{{ proxmox_widget_toolkit_file }}"
    regexp: "if \\(data\\.status !== 'Active'\\)"
    replace: "if (false)"
    backup: true
  when: proxmox_no_subscription_notice_disable | bool
  notify: Restart pveproxy
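The duplicate-removal condition in the task above is a dense Jinja filter chain. A minimal Python sketch of the same decision, with hypothetical file paths for illustration:

```python
# Sketch of the Jinja condition that decides whether the role-managed
# pve-no-subscription.list duplicates a repo already provided elsewhere:
#   files | map(attribute='path') | select('ne', MANAGED) | length > 0
MANAGED = "/etc/apt/sources.list.d/pve-no-subscription.list"

def managed_file_is_duplicate(matched_paths):
    """matched_paths: paths of apt sources that contain pve-no-subscription."""
    # Any match that is NOT the role-managed file means another source
    # already provides the repo, so the managed file should be removed.
    others = [p for p in matched_paths if p != MANAGED]
    return len(others) > 0

# Another source provides the repo -> remove the managed file.
print(managed_file_is_duplicate(
    ["/etc/apt/sources.list.d/pve-install-repo.list", MANAGED]))  # True
# Only the managed file provides it -> keep it.
print(managed_file_is_duplicate([MANAGED]))  # False
```

This mirrors why the role first searches `sources.list.d` before writing its own file: the write and the cleanup are mutually exclusive on the same search result.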
ansible/roles/proxmox_cluster/defaults/main.yml (new file)
@@ -0,0 +1,7 @@
---
proxmox_cluster_enabled: true
proxmox_cluster_name: pve-cluster
proxmox_cluster_master: ""
proxmox_cluster_master_ip: ""
proxmox_cluster_force: false
proxmox_cluster_master_root_password: ""
ansible/roles/proxmox_cluster/tasks/main.yml (new file)
@@ -0,0 +1,63 @@
---
- name: Skip cluster role when disabled
  ansible.builtin.meta: end_host
  when: not (proxmox_cluster_enabled | bool)

- name: Check whether corosync cluster config exists
  ansible.builtin.stat:
    path: /etc/pve/corosync.conf
  register: proxmox_cluster_corosync_conf

- name: Set effective Proxmox cluster master
  ansible.builtin.set_fact:
    proxmox_cluster_master_effective: "{{ proxmox_cluster_master | default(groups['proxmox_hosts'][0], true) }}"

- name: Set effective Proxmox cluster master IP
  ansible.builtin.set_fact:
    proxmox_cluster_master_ip_effective: >-
      {{
        proxmox_cluster_master_ip
        | default(hostvars[proxmox_cluster_master_effective].ansible_host
        | default(proxmox_cluster_master_effective), true)
      }}

- name: Create cluster on designated master
  ansible.builtin.command:
    cmd: "pvecm create {{ proxmox_cluster_name }}"
  when:
    - inventory_hostname == proxmox_cluster_master_effective
    - not proxmox_cluster_corosync_conf.stat.exists

- name: Ensure python3-pexpect is installed for password-based cluster join
  ansible.builtin.apt:
    name: python3-pexpect
    state: present
    update_cache: true
  when:
    - inventory_hostname != proxmox_cluster_master_effective
    - not proxmox_cluster_corosync_conf.stat.exists
    - proxmox_cluster_master_root_password | length > 0

- name: Join node to existing cluster (password provided)
  ansible.builtin.expect:
    command: >-
      pvecm add {{ proxmox_cluster_master_ip_effective }}
      {% if proxmox_cluster_force | bool %}--force{% endif %}
    responses:
      "Please enter superuser \\(root\\) password for '.*':": "{{ proxmox_cluster_master_root_password }}"
      "password:": "{{ proxmox_cluster_master_root_password }}"
  no_log: true
  when:
    - inventory_hostname != proxmox_cluster_master_effective
    - not proxmox_cluster_corosync_conf.stat.exists
    - proxmox_cluster_master_root_password | length > 0

- name: Join node to existing cluster (SSH trust/no prompt)
  ansible.builtin.command:
    cmd: >-
      pvecm add {{ proxmox_cluster_master_ip_effective }}
      {% if proxmox_cluster_force | bool %}--force{% endif %}
  when:
    - inventory_hostname != proxmox_cluster_master_effective
    - not proxmox_cluster_corosync_conf.stat.exists
    - proxmox_cluster_master_root_password | length == 0
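The two `set_fact` tasks above lean on Jinja's `default(..., true)`, which treats an empty string as "unset". A Python sketch of the same fallback chain, using a hypothetical inventory for illustration:

```python
# Sketch of the role's master/master-IP resolution. Jinja's
# default(value, true) falls through on empty strings, not just undefined.
def first_nonempty(*candidates):
    for c in candidates:
        if c:  # empty string / None falls through, like default(..., true)
            return c
    return None

def resolve_master(cluster_master, proxmox_group, hostvars, cluster_master_ip):
    # Master: explicit var, else first host in the proxmox_hosts group.
    master = first_nonempty(cluster_master, proxmox_group[0])
    # IP: explicit var, else the master's ansible_host, else its inventory name.
    ip = first_nonempty(cluster_master_ip,
                        hostvars.get(master, {}).get("ansible_host"),
                        master)
    return master, ip

# Hypothetical inventory for illustration.
hv = {"pve1": {"ansible_host": "10.0.0.11"}, "pve2": {}}
print(resolve_master("", ["pve1", "pve2"], hv, ""))      # ('pve1', '10.0.0.11')
print(resolve_master("pve2", ["pve1", "pve2"], hv, ""))  # ('pve2', 'pve2')
```

The final fallback to the inventory name matters when `ansible_host` is unset: `pvecm add` then relies on that name resolving via DNS or `/etc/hosts` on the joining node.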
ansible/roles/proxmox_maintenance/defaults/main.yml (new file)
@@ -0,0 +1,6 @@
---
proxmox_upgrade_apt_cache_valid_time: 3600
proxmox_upgrade_autoremove: true
proxmox_upgrade_autoclean: true
proxmox_upgrade_reboot_if_required: true
proxmox_upgrade_reboot_timeout: 1800
ansible/roles/proxmox_maintenance/tasks/main.yml (new file)
@@ -0,0 +1,30 @@
---
- name: Refresh apt cache
  ansible.builtin.apt:
    update_cache: true
    cache_valid_time: "{{ proxmox_upgrade_apt_cache_valid_time }}"

- name: Upgrade Proxmox host packages
  ansible.builtin.apt:
    upgrade: dist

- name: Remove orphaned packages
  ansible.builtin.apt:
    autoremove: "{{ proxmox_upgrade_autoremove }}"

- name: Clean apt package cache
  ansible.builtin.apt:
    autoclean: "{{ proxmox_upgrade_autoclean }}"

- name: Check if reboot is required
  ansible.builtin.stat:
    path: /var/run/reboot-required
  register: proxmox_reboot_required_file

- name: Reboot when required by package upgrades
  ansible.builtin.reboot:
    reboot_timeout: "{{ proxmox_upgrade_reboot_timeout }}"
    msg: "Reboot initiated by Ansible Proxmox maintenance playbook"
  when:
    - proxmox_upgrade_reboot_if_required | bool
    - proxmox_reboot_required_file.stat.exists | default(false)
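The reboot task only fires when both the operator flag and apt's marker file agree. A small Python sketch of that guard, using a temporary file as a stand-in for `/var/run/reboot-required`:

```python
import os
import tempfile

def should_reboot(reboot_if_required, marker="/var/run/reboot-required"):
    # Mirror of the role's condition: reboot only when the operator
    # allows it AND package upgrades left the reboot-required marker.
    return bool(reboot_if_required) and os.path.exists(marker)

# Demo with a temporary stand-in for the marker file.
with tempfile.NamedTemporaryFile() as marker:
    print(should_reboot(True, marker.name))   # True: flag set, marker exists
    print(should_reboot(False, marker.name))  # False: operator disabled reboots
print(should_reboot(True, "/nonexistent/reboot-required"))  # False: no marker
```

The `| default(false)` in the task covers check-mode runs where the `stat` result may be missing, which is the same reason the sketch never assumes the marker exists.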