# Velero (cluster backups)

Ansible-managed core stack; not reconciled by Argo CD (`clusters/noble/apps` is optional GitOps only).
## What you get

- `vmware-tanzu/velero` Helm chart (12.0.0 → Velero 1.18.0) in the `velero` namespace
- AWS plugin init container for S3-compatible object storage (`velero/velero-plugin-for-aws:v1.14.0`)
- CSI snapshots via Velero's built-in CSI support (`EnableCSI`) and the `velero.io/csi` VolumeSnapshotLocation (no separate CSI plugin image for Velero ≥ 1.14)
- Prometheus scraping: a ServiceMonitor labeled for kube-prometheus (`release: kube-prometheus`)
## Prerequisites

- Longhorn (or another CSI driver) with a VolumeSnapshotClass for that driver.
- For Velero to pick a default snapshot class, one VolumeSnapshotClass per driver should carry:

  ```yaml
  metadata:
    labels:
      velero.io/csi-volumesnapshot-class: "true"
  ```

  Example for Longhorn: after install, confirm the driver name (often `driver.longhorn.io`) and either label Longhorn's VolumeSnapshotClass or create one and label it (see the Velero CSI docs).
- An S3-compatible endpoint (MinIO, VersityGW, AWS, etc.) and a bucket.
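The labeling step above can be done with plain `kubectl`; the class name `longhorn-snapshot-vsc` below is a hypothetical example, so substitute whatever your cluster actually reports:

```shell
# List snapshot classes and their drivers to find the one for your CSI driver
kubectl get volumesnapshotclass

# Mark it as Velero's default for that driver
# ("longhorn-snapshot-vsc" is a placeholder name; use yours)
kubectl label volumesnapshotclass longhorn-snapshot-vsc \
  velero.io/csi-volumesnapshot-class=true
```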
## Credentials Secret

Velero expects a Secret `velero/velero-cloud-credentials` with key `cloud`, in INI form for the AWS plugin:

```ini
[default]
aws_access_key_id=<key>
aws_secret_access_key=<secret>
```
Create it manually:

```bash
kubectl -n velero create secret generic velero-cloud-credentials \
  --from-literal=cloud="$(printf '[default]\naws_access_key_id=%s\naws_secret_access_key=%s\n' "$KEY" "$SECRET")"
```
Or let Ansible create it from `.env` (`NOBLE_VELERO_AWS_ACCESS_KEY_ID`, `NOBLE_VELERO_AWS_SECRET_ACCESS_KEY`) or from the extra-vars `noble_velero_aws_access_key_id` / `noble_velero_aws_secret_access_key`.
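For the `.env` path, a minimal fragment might look like this (all values are placeholders, not real defaults):

```shell
# .env at the repository root -- placeholder values only
NOBLE_VELERO_S3_BUCKET=<bucket>
NOBLE_VELERO_S3_URL=https://minio.lan:9000
NOBLE_VELERO_AWS_ACCESS_KEY_ID=<access-key>
NOBLE_VELERO_AWS_SECRET_ACCESS_KEY=<secret-key>
```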
## Apply (Ansible)

1. Copy `.env.sample` → `.env` at the repository root and set at least:
   - `NOBLE_VELERO_S3_BUCKET`: object bucket name
   - `NOBLE_VELERO_S3_URL`: S3 API base URL (e.g. `https://minio.lan:9000` or your VersityGW/MinIO endpoint)
   - `NOBLE_VELERO_AWS_ACCESS_KEY_ID` / `NOBLE_VELERO_AWS_SECRET_ACCESS_KEY`: credentials the AWS plugin uses (S3-compatible access-key style)
2. Enable the role: set `noble_velero_install: true` in `ansible/group_vars/all.yml`, or pass `-e noble_velero_install=true` on the command line.
3. Run from `ansible/` (adjust `KUBECONFIG` to your cluster admin kubeconfig):
```bash
cd ansible
export KUBECONFIG=/absolute/path/to/home-server/talos/kubeconfig
# Velero only (after helm repos; skips other roles unless their tags match; use the full playbook if unsure)
ansible-playbook playbooks/noble.yml --tags repos,velero -e noble_velero_install=true
```
If `NOBLE_VELERO_S3_BUCKET` / `NOBLE_VELERO_S3_URL` are not in `.env`, pass them explicitly:
```bash
ansible-playbook playbooks/noble.yml --tags repos,velero -e noble_velero_install=true \
  -e noble_velero_s3_bucket=my-bucket \
  -e noble_velero_s3_url=https://s3.example.com:9000
```
Full platform run (includes Velero when `noble_velero_install` is true in `group_vars`):

```bash
ansible-playbook playbooks/noble.yml
```
## Install (Ansible) details

- Set `noble_velero_install: true` in `ansible/group_vars/all.yml` (or pass `-e noble_velero_install=true`).
- Set `noble_velero_s3_bucket` and `noble_velero_s3_url` via `.env` (`NOBLE_VELERO_S3_*`), `group_vars`, or `-e`. Extra-vars override `.env`. Optional: `noble_velero_s3_region`, `noble_velero_s3_prefix`, `noble_velero_s3_force_path_style` (defaults match `values.yaml`).
- Run `ansible/playbooks/noble.yml` (Velero runs after `noble_platform`).
Example without `.env` (all on the CLI):

```bash
cd ansible
ansible-playbook playbooks/noble.yml --tags velero \
  -e noble_velero_install=true \
  -e noble_velero_s3_bucket=noble-velero \
  -e noble_velero_s3_url=https://minio.lan:9000 \
  -e noble_velero_aws_access_key_id="$KEY" \
  -e noble_velero_aws_secret_access_key="$SECRET"
```
The `clusters/noble/bootstrap/kustomization.yaml` applies `velero/namespace.yaml` along with the rest of the bootstrap namespaces, so the `velero` namespace exists before Helm runs.
## Install (Helm only)

From the repo root:

```bash
kubectl apply -f clusters/noble/bootstrap/velero/namespace.yaml
# Create velero-cloud-credentials (see above), then:
helm repo add vmware-tanzu https://vmware-tanzu.github.io/helm-charts && helm repo update
helm upgrade --install velero vmware-tanzu/velero -n velero --version 12.0.0 \
  -f clusters/noble/bootstrap/velero/values.yaml \
  --set-string "configuration.backupStorageLocation[0].bucket=YOUR_BUCKET" \
  --set-string "configuration.backupStorageLocation[0].config.s3Url=https://YOUR-S3-ENDPOINT" \
  --wait
```

(The bracketed `--set-string` arguments are quoted so shells such as zsh do not treat `[0]` as a glob.)
Edit the `values.yaml` defaults (bucket placeholder, `s3Url`) or override them with `--set-string` as above.
## Quick checks

```bash
kubectl -n velero get pods,backupstoragelocation,volumesnapshotlocation
velero backup create test --wait
```

(The `velero` CLI is installed separately, from the Velero releases page.)
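Beyond the smoke test, a backup is only proven by inspecting and restoring it. These are standard `velero` CLI subcommands; the backup name `test` matches the command above:

```shell
# Show backup contents, volume snapshots, errors, and warnings
velero backup describe test --details

# Confirm the storage location reports Available (the BSL phase)
velero backup-storage-location get

# Restore the test backup into the same cluster
velero restore create --from-backup test --wait
```

A restore into the same cluster skips resources that already exist, so for a real disaster-recovery rehearsal, restore into a scratch namespace or cluster instead.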