Update Ansible configuration to integrate SOPS for managing secrets. Enhance README.md with SOPS usage instructions and prerequisites. Remove External Secrets Operator references and related configurations from the bootstrap process, streamlining the deployment. Adjust playbooks and roles to apply SOPS-encrypted secrets automatically, improving security and clarity in secret management.
docs/homelab-network.md
# Homelab network inventory
Single place for **VLANs**, **static addressing**, and **hosts** alongside the **noble** Talos cluster. **Proxmox** is the **hypervisor** for the VMs below; **all of those VMs are intended to run on `192.168.1.0/24`** (the same broadcast domain as Pi-hole and typical home clients). **Noble** (Talos) stays on **`192.168.50.0/24`** per [`architecture.md`](architecture.md) and [`talos/CLUSTER-BUILD.md`](../talos/CLUSTER-BUILD.md) until that design changes.
## VLANs (logical)
| Network | Role |
|---------|------|
| **`192.168.1.0/24`** | **Homelab / Proxmox LAN** — **Proxmox host(s)**, **all Proxmox VMs**, **Pi-hole**, **Mac mini**, and other servers that share this VLAN. |
| **`192.168.50.0/24`** | **Noble Talos** cluster — physical nodes, **kube-vip**, **MetalLB**, Traefik; **not** the Proxmox VM subnet. |
| **`192.168.60.0/24`** | **DMZ / WAN-facing** — **NPM**, **WebDAV**, **other services** that need WAN access. |
| **`192.168.40.0/24`** | **Home Assistant** and IoT devices — isolated; record subnet and HA IP in DHCP/router. |

**Routing / DNS:** Clients and VMs on **`192.168.1.0/24`** reach **noble** services on **`192.168.50.0/24`** via **L3** (router/firewall). **NFS** from OMV (`192.168.1.105`) to **noble** pods uses the **OMV data IP** as the NFS server address from the cluster’s perspective.
Firewall rules between VLANs are **out of scope** here; document them where you keep runbooks.
---
## `192.168.50.0/24` — reservations (noble only)
Do not assign **unrelated** static services on **this** VLAN without checking overlap with MetalLB and kube-vip.
| Use | Addresses |
|-----|-----------|
| Talos nodes | `.10`–`.40` (see [`talos/talconfig.yaml`](../talos/talconfig.yaml)) |
| MetalLB L2 pool | `.210`–`.229` |
| Traefik (ingress) | `.211` (typical) |
| Argo CD | `.210` (typical) |
| Kubernetes API (kube-vip) | **`.230`** — **must not** be a VM |
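
Under MetalLB v0.13+ CRDs, the reservations above can be pinned down so nothing drifts into the kube-vip VIP. A sketch only — the resource names and the `metallb-system` namespace are assumptions, not committed config:

```yaml
# Illustrative MetalLB pool matching the table above (names are assumptions).
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: noble-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.50.210-192.168.50.229   # keeps .230 (kube-vip API VIP) out of the pool
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: noble-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - noble-pool
```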
---
## Proxmox VMs (`192.168.1.0/24`)
All run on **Proxmox**; the addresses below use **`192.168.1.0/24`** (same host octets as the earlier `.50.x` / `.60.x` plan, moved into the homelab VLAN). Adjust if your router uses a different numbering scheme.
Most are **Docker hosts** with multiple apps; treat the **IP** as the **host**, not individual containers.
| VM ID | Name | IP | Notes |
|-------|------|-----|--------|
| 666 | nginxproxymanager | `192.168.1.20` | NPM (edge / WAN-facing role — firewall as you design). |
| 777 | nginxproxymanager-Lan | `192.168.1.60` | NPM on the **internal** homelab LAN. |
| 100 | Openmediavault | `192.168.1.105` | **NFS** exports for *arr / media paths. |
| 110 | Monitor | `192.168.1.110` | Uptime Kuma, Peekaping, Tracearr → cluster candidates. |
| 120 | arr | `192.168.1.120` | *arr stack; media via **NFS** from OMV — see [migration](#arr-stack-nfs-and-kubernetes). |
| 130 | Automate | `192.168.1.130` | Low use — **candidate to remove** or consolidate. |
| 140 | general-purpose | `192.168.1.140` | IT tools, Mealie, Open WebUI, SparkyFitness, … |
| 150 | Media-server | `192.168.1.150` | Jellyfin (test, **NFS** media), ebook server. |
| 160 | s3 | `192.168.1.170` | Object storage; **merge** into the **central S3** on noble per [`shared-data-services.md`](shared-data-services.md) when ready. |
| 190 | Auth | `192.168.1.190` | **Authentik** → **noble (K8s)** for HA. |
| 300 | gitea | `192.168.1.203` | On **`.1`**; no overlap with noble **MetalLB `.210`–`.229`**, which lives on **`.50`**. |
| 310 | gitea-nsfw | `192.168.1.204` | |
| 500 | AMP | `192.168.1.47` | |
### Workload detail (what runs where)
**Auth (190)** — **Authentik** is the main service; moving it to **Kubernetes (noble)** gives you **HA**, rolling upgrades, and backups via your cluster patterns (PVCs, Velero, etc.). Plan **OIDC redirect URLs** and **outposts** (if used) when the **ingress hostname** and paths to **`.50`** services change.
**Monitor (110)** — **Uptime Kuma**, **Peekaping**, and **Tracearr** are a good fit for the cluster: small state (SQLite or small DBs), **Ingress** via Traefik, and **Longhorn** or a small DB PVC. Migrate **one app at a time** and keep the old VM until DNS and alerts are verified.
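
As an illustration of the pattern, a minimal Uptime Kuma move could look like the sketch below. The names, the `longhorn` StorageClass, and sizes are assumptions; Peekaping and Tracearr would follow the same shape:

```yaml
# Hypothetical sketch: Uptime Kuma as a single-replica Deployment with a small
# Longhorn-backed PVC for its SQLite data.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: uptime-kuma-data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: longhorn
  resources:
    requests:
      storage: 2Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: uptime-kuma
spec:
  replicas: 1                 # SQLite: do not scale beyond one replica
  strategy:
    type: Recreate            # avoid two pods fighting over the RWO volume
  selector:
    matchLabels: { app: uptime-kuma }
  template:
    metadata:
      labels: { app: uptime-kuma }
    spec:
      containers:
        - name: uptime-kuma
          image: louislam/uptime-kuma:1
          ports:
            - containerPort: 3001
          volumeMounts:
            - name: data
              mountPath: /app/data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: uptime-kuma-data
```

Expose it through Traefik once DNS and alerting are verified, then retire the VM copy.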
**arr (120)** — **Lidarr, Sonarr, Radarr**, and related *arr* apps; libraries and download paths point at **NFS** from **Openmediavault (100)** at **`192.168.1.105`**. The hard part is **keeping paths, permissions (UID/GID), and download client** wiring while pods move.
**Automate (130)** — Tools are **barely used**; **decommission**, merge into **general-purpose (140)**, or replace with a **CronJob** / one-shot on the cluster only if something still needs scheduling.
**general-purpose (140)** — “Daily driver” stack: **IT tools**, **Mealie**, **Open WebUI**, **SparkyFitness**, and similar. **Candidates for gradual moves** to noble; group by **data sensitivity** and **persistence** (Postgres vs SQLite) when you pick order.
**Media-server (150)** — **Jellyfin** (testing) with libraries on **NFS**; **ebook** server. Treat **Jellyfin** like *arr* for storage: same NFS export and **transcoding** needs (CPU on worker nodes or GPU if you add it). Ebook stack depends on what you run (e.g. Kavita, Audiobookshelf) — note **metadata paths** before moving.
### Arr stack, NFS, and Kubernetes
You do **not** have to move NFS into the cluster: **Openmediavault** on **`192.168.1.105`** can stay the **NFS server** while the *arr* apps run as **Deployments** with **ReadWriteMany** volumes. Noble nodes on **`192.168.50.0/24`** mount NFS using **that IP** (ensure **firewall** allows **NFS** from node IPs to OMV).
1. **Keep OMV as the single source of exports** — the cluster sees the same **export path** (e.g. `/export/media`) as the current VM does.
2. **Mount NFS in Kubernetes** — use a **CSI NFS driver** (e.g. **nfs-subdir-external-provisioner** or **csi-driver-nfs**) so each app gets a **PVC** backed by a **subdirectory** of the export, **or** one shared RWX PVC for a common tree if your layout needs it.
3. **Match POSIX ownership** — set **supplemental groups** or **fsGroup** / **runAsUser** on the pods so Sonarr/Radarr see the same **UID/GID** as today's Docker setup; fix **squash** settings on OMV if you use `root_squash`.
4. **Config and DB** — back up each app's **config volume** (or SQLite files), redeploy with the same **environment**, and point **download clients** and **NFS media roots** at the **same logical paths** inside the container.
5. **Low-risk path** — run **one** *arr* app on the cluster while the rest stay on **VM 120** until imports and downloads behave; then cut DNS/NPM streams over.
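
The steps above can be sketched as a static RWX PersistentVolume/Claim against the OMV export, plus LinuxServer-style `PUID`/`PGID` so ownership survives the move. A sketch, not committed config — the export path, UID/GID values, and the `sonarr` names are assumptions to adapt:

```yaml
# Static RWX PV/PVC against the OMV NFS export (path and size are assumptions).
apiVersion: v1
kind: PersistentVolume
metadata:
  name: media-nfs
spec:
  capacity:
    storage: 1Ti                 # informational for a static NFS PV
  accessModes: ["ReadWriteMany"]
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.168.1.105        # OMV data IP, reachable from the .50 nodes
    path: /export/media
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: media-nfs
spec:
  accessModes: ["ReadWriteMany"]
  storageClassName: ""           # bind the static PV, not a dynamic class
  volumeName: media-nfs
  resources:
    requests:
      storage: 1Ti
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sonarr
spec:
  replicas: 1
  selector:
    matchLabels: { app: sonarr }
  template:
    metadata:
      labels: { app: sonarr }
    spec:
      containers:
        - name: sonarr
          image: lscr.io/linuxserver/sonarr:latest
          env:
            - name: PUID
              value: "1000"      # assumption — match your current Docker PUID
            - name: PGID
              value: "1000"      # assumption — match your current Docker PGID
          volumeMounts:
            - name: media
              mountPath: /data/media   # keep the same logical path as the VM
      volumes:
        - name: media
          persistentVolumeClaim:
            claimName: media-nfs
```

For images that honor pod security contexts instead of `PUID`/`PGID`, `fsGroup` on the pod spec is the equivalent knob from step 3.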
If you prefer **no** NFS from pods, the alternative is **large ReadWriteOnce** disks on Longhorn and **sync** from OMV — usually **more** moving parts than **RWX NFS** for this workload class.
---
## Other hosts
| Host | IP | VLAN / network | Notes |
|------|-----|----------------|--------|
| **Pi-hole** | `192.168.1.127` | `192.168.1.0/24` | DNS; same VLAN as the Proxmox VMs. |
| **Home Assistant** | *TBD* | **IoT VLAN** | Add a reservation once the address is fixed. |
| **Mac mini** | `192.168.1.155` | `192.168.1.0/24` | Align with **Storage B** in [`Racks.md`](Racks.md) if it is the same machine. |
---
## Related docs
- **Shared Postgres + S3 (centralized):** [`shared-data-services.md`](shared-data-services.md)
- **VM → noble migration plan:** [`migration-vm-to-noble.md`](migration-vm-to-noble.md)
- **Noble cluster topology and ingress:** [`architecture.md`](architecture.md)
- **Physical racks (Primary / Storage B / Rack C):** [`Racks.md`](Racks.md)
- **Cluster checklist:** [`../talos/CLUSTER-BUILD.md`](../talos/CLUSTER-BUILD.md)