Compare commits: b7726c160d...main (88 commits)
1  .gitignore  (vendored, new file)
@@ -0,0 +1 @@
ansible/inventory/hosts.ini
250  README.md  (modified)
@@ -1,3 +1,249 @@
-# home-server
+# Home Server Services

-Collection of all resources deployed on my home server.
+Collection of all resources deployed on my home server. This repository contains Docker Compose configurations for various services organized by category.

## Quick Start

1. Navigate to the service directory you want to deploy
2. Copy `env.example` to `.env` and configure the variables
3. Run `docker compose up -d` to start the services (see the sketch below)
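A minimal sketch of that flow, using the `it-tools` service from this repo as the example; the checkout location `~/home-server` is an assumption:

```bash
# Deploy one service from this repo (checkout path is illustrative)
cd ~/home-server/komodo/general-purpose/it-tools

# Create a local .env from the documented template, then edit it
cp env.example .env
"${EDITOR:-vi}" .env

# Start the stack in the background and confirm it is up
docker compose up -d
docker compose ps
```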
## Directory Structure

```
komodo/
├── arr/               # ARR (Automated Media Management) Services
├── automate/          # Automation & Workflow Tools
├── common/            # Common/Shared Services
├── media-server/      # Media Server Applications
├── general-purpose/   # General Purpose Applications
└── monitor/           # Monitoring & Tracking Services
```

---

## 📺 ARR Services (`komodo/arr/`)

The ARR stack is a collection of automated media management tools for organizing and downloading movies, TV shows, music, books, and more.

### arrs/ (`komodo/arr/arrs/`)

Main collection of ARR services for different media types:

- **Radarr** (Port: 7878) - Movie collection manager. Automatically monitors and downloads movies.
- **Sonarr** (Port: 8989) - TV series collection manager. Monitors TV shows and manages episodes.
- **Lidarr** (Port: 8686) - Music collection manager. Organizes and downloads music.
- **Bookshelf** (Port: 8787) - Book collection manager. Alternative to Readarr for managing ebooks.
- **Bazarr** (Port: 6767) - Subtitle manager. Automatically downloads subtitles for movies and TV shows.
- **Jellyseerr** (Port: 5055) - Request management for media. Handles user requests for movies and TV shows.
- **Prowlarr** (Port: 9696) - Indexer manager. Manages torrent/usenet indexers for all ARR services.
- **Profilarr** (Port: 6868) - Profile manager for ARR services. Helps manage quality profiles across services.

**Configuration:** Requires `CONFIG_PATH`, `DATA_PATH`, `PUID`, `PGID`, and `TZ` environment variables.

### dispatcharr/ (`komodo/arr/dispatcharr/`)

- **Dispatcharr** (Port: 1866) - Task dispatcher for ARR services. Coordinates and manages tasks across multiple ARR instances.

**Configuration:** No environment variables required. Uses named volumes for data persistence.

### dizquetv/ (`komodo/arr/dizquetv/`)

- **DizqueTV** (Port: 8000) - Creates virtual TV channels from your media library. Streams your media as traditional TV channels.
- **ErsatzTV** (Port: 8409) - Alternative virtual TV channel generator. Creates IPTV channels from local media.

**Configuration:** No environment variables required. Uses named volumes for data persistence.

### download-clients/ (`komodo/arr/download-clients/`)

Download clients for fetching media:

- **Transmission-OpenVPN** (Port: 9092) - BitTorrent client with VPN support via Private Internet Access (PIA). Provides secure torrenting.
- **SABnzbd** (Port: 6798) - Usenet downloader. Downloads files from Usenet newsgroups.
- **PIA qBittorrent** (Port: 8888) - qBittorrent client with PIA VPN integration. Alternative torrent client with built-in VPN.

**Configuration:** Requires PIA VPN credentials, network configuration, and path variables.

---

## 🎬 Media Servers (`komodo/media-server/`)

Media server applications for organizing and streaming your media library.

### audio-bookshelf/ (`komodo/media-server/audio-bookshelf/`)

- **AudioBookShelf** (Port: 13378) - Self-hosted audiobook and ebook server. Organizes and streams audiobooks, ebooks, and podcasts with a beautiful web interface.

**Configuration:** Media paths are configured directly in compose.yaml. No environment variables required.

### booklore/ (`komodo/media-server/booklore/`)

- **Booklore** - Modern ebook library manager with a focus on organization and metadata management. Includes a MariaDB database for data storage.

**Configuration:** Requires database credentials, user/group IDs, and application port configuration.

### deprecated/calibre/ (`komodo/media-server/deprecated/calibre/`)

- **Calibre** - Ebook library management system. Organizes, converts, and manages your ebook collection.

**Configuration:** No environment variables currently configured.

**Note:** This service is in the deprecated directory.

---

## ⚙️ Automation (`komodo/automate/`)

Automation and workflow tools for streamlining tasks and processes.

### n8n/ (`komodo/automate/n8n/`)

- **n8n** (Port: 5678) - Workflow automation tool. Create automated workflows with a visual interface. Similar to Zapier but self-hosted.

**Configuration:** Requires timezone configuration. Additional runners can be enabled.

### node-red/ (`komodo/automate/node-red/`)

- **Node-RED** (Port: 1880) - Flow-based programming tool. Visual programming for connecting hardware devices, APIs, and online services.

**Configuration:** No environment variables required. Uses named volumes for data persistence.

### semaphore/ (`komodo/automate/semaphore/`)

- **Semaphore** (Port: 3000) - Ansible automation platform. Provides a web UI for managing Ansible playbooks and deployments. Includes a MySQL database.

**Configuration:** Requires database credentials, email configuration for notifications, and user/group IDs.

---

## 🛠️ General Purpose (`komodo/general-purpose/`)

Various general-purpose applications for different use cases.

### actual-budget/ (`komodo/general-purpose/actual-budget/`)

- **Actual Budget** (Port: 5006) - Personal finance and budgeting application. Local-first budgeting tool with sync capabilities.

**Configuration:** No environment variables required. Uses named volumes for data persistence.

### bookstack/ (`komodo/general-purpose/bookstack/`)

- **BookStack** (Port: 6875) - Documentation and wiki platform. Knowledge management system with a book-like structure. Includes a MariaDB database.

**Configuration:** Requires database credentials, an application key, SAML2/OAuth configuration (optional), SMTP settings, and the application URL.

### flink/ (`komodo/general-purpose/flink/`)

- **Flink** (Port: 8080) - Streaming analytics platform. Real-time data processing and analytics engine.

**Configuration:** No environment variables required. Uses default configuration.

### grocy/ (`komodo/general-purpose/grocy/`)

- **Grocy** (Port: 9283) - Household management system. Tracks groceries, chores, recipes, and more for household organization.

**Configuration:** Requires user/group IDs and timezone configuration.

### hortusfox/ (`komodo/general-purpose/hortusfox/`)

- **HortusFox** (Port: 8282) - Garden management application. Tracks plants, watering schedules, and garden activities. Includes a MariaDB database.

**Configuration:** Requires database credentials and admin account configuration.

### it-tools/ (`komodo/general-purpose/it-tools/`)

- **IT Tools** (Port: 1234) - Collection of useful IT tools. Web-based utilities for developers and IT professionals.

**Configuration:** No environment variables required. Simple web application.

### mealie/ (`komodo/general-purpose/mealie/`)

- **Mealie** (Port: 9925) - Recipe management and meal planning platform. Organize recipes, plan meals, and manage grocery lists. Includes a PostgreSQL database.

**Configuration:** Requires a database password, base URL, OIDC/OAuth configuration (optional), and an OpenAI API key (optional, for AI features).

### open-webui/ (`komodo/general-purpose/open-webui/`)

- **Open WebUI** (Port: 11674) - Web UI for LLM (Large Language Model) applications. Provides a chat interface for OpenAI, Anthropic, and other AI models.

**Configuration:** Requires API keys (OpenAI/Anthropic), OAuth configuration (optional), and a redirect URI.

---

## 🔧 Common Services (`komodo/common/`)

Shared services used across multiple applications.

### newt/ (`komodo/common/newt/`)

- **Newt** - Service integration tool. Connects with a Pangolin endpoint for service management and integration.

**Configuration:** Requires the Pangolin endpoint URL, a Newt ID, and a Newt secret.

---

## 📊 Monitoring (`komodo/monitor/`)

Services for monitoring and tracking various aspects of your home server.

### tracearr/ (`komodo/monitor/tracearr/`)

- **Tracearr** (Port: 3000) - Monitoring and tracking tool for ARR services. Tracks requests, downloads, and activity across your ARR stack. Includes PostgreSQL and Redis.

**Configuration:** Requires port, timezone, and log level configuration. Secrets are auto-generated by default.

---

## Environment Variables

Each service directory contains an `env.example` file that documents all required and optional environment variables. To use a service:

1. Copy the `env.example` file to `.env` in the same directory
2. Fill in the required values
3. Adjust optional settings as needed
4. Run `docker compose up -d`

### Common Variables

Many services use these common variables (a filled-in example follows this list):

- **PUID** - User ID (find with `id -u`)
- **PGID** - Group ID (find with `id -g`)
- **TZ** - Timezone (e.g., `America/New_York`, `Europe/London`, `UTC`)
- **CONFIG_PATH** - Base path for configuration files
- **DATA_PATH** - Base path for data/media files
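A filled-in `.env` built from these variables might look like the following sketch; the values are illustrative for a typical setup, not defaults shipped with this repo:

```bash
# .env — illustrative values only
PUID=1000                  # output of `id -u`
PGID=100                   # output of `id -g`
TZ=Canada/Eastern          # any valid tz database name
CONFIG_PATH=/srv/appdata   # where per-service config directories live
DATA_PATH=/srv/data        # where media/data directories live
```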
---

## Notes

- All services use Docker Compose for orchestration
- Most services include health checks and restart policies
- Services are configured to use named volumes for data persistence
- Port mappings can be adjusted in the compose files if needed
- Some services integrate with authentication providers (like Authentik) for SSO
- VPN-enabled download clients require VPN provider credentials

---

## Service Dependencies

Some services depend on others:

- **ARR Services** typically connect to download clients (Transmission, qBittorrent, SABnzbd)
- **Jellyseerr** connects to media servers (Jellyfin/Plex) and ARR services
- **Bazarr** connects to Radarr and Sonarr for subtitle management
- **Prowlarr** connects to all ARR services to provide indexers
- **Tracearr** monitors ARR services
- Services with databases (BookStack, Mealie, Booklore, HortusFox, Semaphore) include their database containers

---

## Contributing

When adding new services:

1. Create a new directory under the appropriate category
2. Add a `compose.yaml` file
3. Create an `env.example` file documenting all environment variables
4. Update this README with service description and configuration requirements
119  ansible/README.md  (new file)
@@ -0,0 +1,119 @@
# Proxmox VM Management Suite

A comprehensive Ansible automation suite for managing Proxmox Virtual Machines. This suite allows you to easily create Cloud-Init templates, provision new VMs, manage backups, and decommission resources across multiple Proxmox hosts.

## Features

- **Template Management**:
  - Automatically download Cloud Images (Ubuntu, Debian, etc.).
  - Pre-configured with Cloud-Init (SSH keys, IP config).
  - Support for selecting images from a curated list or custom URLs.
- **VM Provisioning**:
  - Clone from templates (full or linked clones).
  - Auto-start option.
- **Lifecycle Management**:
  - Backup VMs (snapshot mode).
  - Delete/purge VMs.
- **Security**:
  - **Automatic SSH Key Injection**: Automatically adds a defined admin SSH key to every template.
  - Support for injecting additional SSH keys per deployment.

## Setup

### 1. Requirements

Install the required Ansible collections:

```bash
ansible-galaxy install -r requirements.yml
```

### 2. Configuration

Edit `roles/proxmox_vm/defaults/main.yml` to set your global defaults, specifically the **Admin SSH Key**.

**Important Variable to Change:**

```yaml
# ansible/roles/proxmox_vm/defaults/main.yml
admin_ssh_key: "ssh-ed25519 AAAAC3... your-actual-public-key"
```

## Usage

The main entry point is the playbook `playbooks/manage_vm.yml`. You control the behavior using the `proxmox_action` variable.

### 1. Create a Cloud-Init Template

You can create a template by selecting a predefined alias (e.g., `ubuntu-22.04`) or providing a custom URL.

**Option A: Select from List (Default)**

Current aliases: `ubuntu-22.04`, `ubuntu-24.04`, `debian-12`.

```bash
# Create Ubuntu 22.04 Template (ID: 9000)
ansible-playbook playbooks/manage_vm.yml \
  -e "proxmox_action=create_template vmid=9000 template_name=ubuntu-22-template image_alias=ubuntu-22.04"
```

**Option B: Custom URL**

```bash
ansible-playbook playbooks/manage_vm.yml \
  -e "proxmox_action=create_template \
      vmid=9001 \
      template_name=custom-linux \
      image_source_type=url \
      custom_image_url='https://example.com/image.qcow2'"
```

### 2. Create a VM from Template

Clone a valid template to a new VM.

```bash
ansible-playbook playbooks/manage_vm.yml \
  -e "proxmox_action=create_vm \
      vmid=9000 \
      new_vmid=105 \
      new_vm_name=web-server-01"
```

### 3. Backup a VM

Create a snapshot backup of a specific VM.

```bash
ansible-playbook playbooks/manage_vm.yml \
  -e "proxmox_action=backup_vm vmid=105"
```

### 4. Delete a VM

Stop and purge a VM.

```bash
ansible-playbook playbooks/manage_vm.yml \
  -e "proxmox_action=delete_vm vmid=105"
```

## Advanced Usage

### Handling Multiple Hosts

You can target a specific Proxmox node using the `target_host` variable.

```bash
ansible-playbook playbooks/manage_vm.yml -e "proxmox_action=create_vm ... target_host=mercury"
```

### Injecting Additional SSH Keys

You can add extra SSH keys for a specific run (or add them to the defaults file). Note that `-e key=value` extra vars are parsed as strings, so pass the list in JSON form:
```bash
ansible-playbook playbooks/manage_vm.yml \
  -e "proxmox_action=create_template ..." \
  -e '{"additional_ssh_keys": ["ssh-rsa AAAAB3... key1", "ssh-ed25519 AAAA... key2"]}'
```

## Directory Structure

- `roles/proxmox_vm/`: Core logic role.
  - `defaults/main.yml`: Configuration variables (images, keys, defaults).
  - `tasks/`: Action modules (`create_template.yml`, `create_vm.yml`, etc.).
- `inventory/`: Host definitions.
6  ansible/ansible.cfg  (new file)
@@ -0,0 +1,6 @@
[defaults]
inventory = inventory/hosts.ini
host_key_checking = False
retry_files_enabled = False
interpreter_python = auto_silent
roles_path = roles
14  ansible/inventory/hosts.ini.sample  (new file)
@@ -0,0 +1,14 @@
[proxmox]
# Replace 'mercury' with your Proxmox node hostname or IP
mercury ansible_host=192.168.50.100 ansible_user=root

[proxmox:vars]
# If using password auth (an SSH key is recommended instead):
# ansible_ssh_pass=yourpassword

# Connection variables for the proxmox modules (API)
proxmox_api_user=root@pam
proxmox_api_password=CHANGE_ME
proxmox_api_host=192.168.50.100
# proxmox_api_token_id=
# proxmox_api_token_secret=
72  ansible/playbooks/create_ubuntu_template.yml  (new file)
@@ -0,0 +1,72 @@
---
- name: Create Ubuntu Cloud-Init Template
  hosts: proxmox
  become: yes
  vars:
    template_id: 9000
    template_name: ubuntu-2204-cloud
    # URL for Ubuntu 22.04 Cloud Image (Jammy)
    image_url: "https://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-amd64.img"
    image_name: "ubuntu-22.04-server-cloudimg-amd64.img"
    storage_pool: "local-lvm"
    memory: 2048
    cores: 2

  tasks:
    - name: Check if template already exists
      command: "qm status {{ template_id }}"
      register: vm_status
      failed_when: false
      changed_when: false

    - name: Fail if template ID exists
      fail:
        msg: "VM ID {{ template_id }} already exists. Please choose a different ID or delete the existing VM."
      when: vm_status.rc == 0

    - name: Download Ubuntu Cloud Image
      get_url:
        url: "{{ image_url }}"
        dest: "/tmp/{{ image_name }}"
        mode: '0644'

    - name: Install libguestfs-tools (required for virt-customize if needed, optional)
      apt:
        name: libguestfs-tools
        state: present
      ignore_errors: yes

    - name: Create VM with hardware config
      command: >
        qm create {{ template_id }}
        --name "{{ template_name }}"
        --memory {{ memory }}
        --cores {{ cores }}
        --net0 virtio,bridge=vmbr0
        --scsihw virtio-scsi-pci
        --ostype l26
        --serial0 socket --vga serial0

    - name: Import Disk
      command: "qm importdisk {{ template_id }} /tmp/{{ image_name }} {{ storage_pool }}"

    - name: Attach Disk to SCSI
      command: "qm set {{ template_id }} --scsi0 {{ storage_pool }}:vm-{{ template_id }}-disk-0"

    - name: Add Cloud-Init Drive
      command: "qm set {{ template_id }} --ide2 {{ storage_pool }}:cloudinit"

    - name: Set Boot Order
      command: "qm set {{ template_id }} --boot c --bootdisk scsi0"

    - name: Resize Disk (Optional, e.g. 10G)
      command: "qm resize {{ template_id }} scsi0 10G"
      ignore_errors: yes

    - name: Convert to Template
      command: "qm template {{ template_id }}"

    - name: Remove Downloaded Image
      file:
        path: "/tmp/{{ image_name }}"
        state: absent
6  ansible/playbooks/manage_vm.yml  (new file)
@@ -0,0 +1,6 @@
---
- name: Manage Proxmox VMs
  hosts: "{{ target_host | default('proxmox') }}"
  become: yes
  roles:
    - proxmox_vm
26  ansible/playbooks/semaphore/bootstrap.yml  (new file)
@@ -0,0 +1,26 @@
---
- name: Register Target Host
  hosts: localhost
  connection: local
  gather_facts: no
  tasks:
    - name: Verify target_host is defined
      fail:
        msg: "The 'target_host' variable must be defined (e.g. 192.168.1.10)"
      when: target_host is not defined

    - name: Add target host to inventory
      add_host:
        name: target_node
        ansible_host: "{{ target_host }}"
        ansible_user: "{{ target_user | default('root') }}"
        ansible_ssh_pass: "{{ target_password | default(omit) }}"
        ansible_ssh_private_key_file: "{{ target_private_key_file | default(omit) }}"
        ansible_python_interpreter: /usr/bin/python3

- name: Bootstrap Node
  hosts: target_node
  become: yes
  gather_facts: yes
  roles:
    - common
29  ansible/playbooks/semaphore/configure_networking.yml  (new file)
@@ -0,0 +1,29 @@
---
- name: Register Target Host
  hosts: localhost
  connection: local
  gather_facts: no
  tasks:
    - name: Verify target_host is defined
      fail:
        msg: "The 'target_host' variable must be defined (e.g. 192.168.1.10)"
      when: target_host is not defined

    - name: Add target host to inventory
      add_host:
        name: target_node
        ansible_host: "{{ target_host }}"
        ansible_user: "{{ target_user | default('root') }}"
        ansible_ssh_pass: "{{ target_password | default(omit) }}"
        ansible_ssh_private_key_file: "{{ target_private_key_file | default(omit) }}"
        ansible_python_interpreter: /usr/bin/python3

- name: Configure Networking
  hosts: target_node
  become: yes
  gather_facts: yes
  tasks:
    - name: Run networking task from common role
      include_role:
        name: common
        tasks_from: networking.yml
29  ansible/playbooks/semaphore/configure_users.yml  (new file)
@@ -0,0 +1,29 @@
---
- name: Register Target Host
  hosts: localhost
  connection: local
  gather_facts: no
  tasks:
    - name: Verify target_host is defined
      fail:
        msg: "The 'target_host' variable must be defined (e.g. 192.168.1.10)"
      when: target_host is not defined

    - name: Add target host to inventory
      add_host:
        name: target_node
        ansible_host: "{{ target_host }}"
        ansible_user: "{{ target_user | default('root') }}"
        ansible_ssh_pass: "{{ target_password | default(omit) }}"
        ansible_ssh_private_key_file: "{{ target_private_key_file | default(omit) }}"
        ansible_python_interpreter: /usr/bin/python3

- name: Configure Users
  hosts: target_node
  become: yes
  gather_facts: yes
  tasks:
    - name: Run users task from common role
      include_role:
        name: common
        tasks_from: users.yml
34  ansible/playbooks/semaphore/manage_proxmox.yml  (new file)
@@ -0,0 +1,34 @@
---
- name: Register Proxmox Host
  hosts: localhost
  connection: local
  gather_facts: no
  tasks:
    - name: Verify proxmox_host is defined
      fail:
        msg: "The 'proxmox_host' variable must be defined."
      when: proxmox_host is not defined

    - name: Verify proxmox_action is defined
      fail:
        msg: "The 'proxmox_action' variable must be defined (e.g. create_vm, create_template, delete_vm)."
      when: proxmox_action is not defined

    - name: Add Proxmox host to inventory
      add_host:
        name: proxmox_node
        ansible_host: "{{ proxmox_host }}"
        ansible_user: "{{ proxmox_user | default('root') }}"
        ansible_ssh_pass: "{{ proxmox_password | default(omit) }}"
        ansible_ssh_private_key_file: "{{ proxmox_private_key_file | default(omit) }}"
        ansible_python_interpreter: /usr/bin/python3

- name: Execute Proxmox Action
  hosts: proxmox_node
  become: yes
  gather_facts: yes
  # proxmox_action arrives via extra vars (-e) and the role reads it directly;
  # re-mapping it here as `proxmox_action: "{{ proxmox_action }}"` risks a
  # recursive template loop, so no vars block is needed.
  roles:
    - proxmox_vm
2  ansible/requirements.yml  (new file)
@@ -0,0 +1,2 @@
collections:
  - name: community.general
30  ansible/roles/common/defaults/main.yml  (new file)
@@ -0,0 +1,30 @@
---
# Common packages to install
common_packages:
  - curl
  - wget
  - git
  - vim
  - htop
  - net-tools
  - unzip
  - dnsutils
  - software-properties-common
  - ca-certificates
  - gnupg
  - openssh-server

# SSH Configuration
common_ssh_users:
  - name: "{{ ansible_user_id }}"
    keys: []
    # Add your keys in inventory or group_vars override

# Networking
common_configure_static_ip: false
common_interface_name: "eth0"
# common_ip_address: "192.168.1.100/24"
# common_gateway: "192.168.1.1"
common_dns_servers:
  - "1.1.1.1"
  - "8.8.8.8"
6  ansible/roles/common/handlers/main.yml  (new file)
@@ -0,0 +1,6 @@
---
- name: Apply Netplan
  # Fire-and-forget (async with poll: 0) so the play is not killed when
  # applying a new static IP drops the current SSH connection.
  shell: netplan apply
  async: 45
  poll: 0
  ignore_errors: yes
10  ansible/roles/common/tasks/main.yml  (new file)
@@ -0,0 +1,10 @@
---
- name: Install common packages
  import_tasks: packages.yml

- name: Configure users and SSH keys
  import_tasks: users.yml

- name: Configure networking
  import_tasks: networking.yml
  when: common_configure_static_ip | bool
23  ansible/roles/common/tasks/networking.yml  (new file)
@@ -0,0 +1,23 @@
---
- name: Verify required variables for static IP
  fail:
    msg: "common_ip_address and common_interface_name must be defined when common_configure_static_ip is true."
  when:
    - common_configure_static_ip | bool
    - (common_ip_address is not defined or common_ip_address | length == 0 or common_interface_name is not defined)

- name: Install netplan.io
  apt:
    name: netplan.io
    state: present
  when: ansible_os_family == "Debian"

- name: Configure Netplan
  template:
    src: netplan_config.yaml.j2
    dest: /etc/netplan/01-netcfg.yaml
    owner: root
    group: root
    mode: '0644'
  notify: Apply Netplan
  when: common_configure_static_ip | bool
12  ansible/roles/common/tasks/packages.yml  (new file)
@@ -0,0 +1,12 @@
---
- name: Update apt cache
  apt:
    update_cache: yes
    cache_valid_time: 3600
  when: ansible_os_family == "Debian"

- name: Install common packages
  apt:
    name: "{{ common_packages }}"
    state: present
  when: ansible_os_family == "Debian"
18  ansible/roles/common/tasks/users.yml  (new file)
@@ -0,0 +1,18 @@
---
- name: Ensure users exist
  user:
    name: "{{ item.name }}"
    shell: /bin/bash
    groups: sudo
    append: yes
    state: present
  loop: "{{ common_ssh_users }}"
  when: item.create_user | default(false)

- name: Add SSH keys
  authorized_key:
    user: "{{ item.0.name }}"
    key: "{{ item.1 }}"
  loop: "{{ common_ssh_users | subelements('keys', skip_missing=True) }}"
  loop_control:
    label: "{{ item.0.name }}"
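For reference, a group_vars override that feeds these two loops might look like the following sketch; the username and key are placeholders:

```yaml
# group_vars/all.yml — illustrative override (name and key are placeholders)
common_ssh_users:
  - name: deploy
    create_user: true          # consumed by the `user:` task's when-clause above
    keys:
      - "ssh-ed25519 AAAAC3... deploy@workstation"
```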
15  ansible/roles/common/templates/netplan_config.yaml.j2  (new file)
@@ -0,0 +1,15 @@
network:
  version: 2
  ethernets:
    {{ common_interface_name }}:
      dhcp4: no
      addresses:
        - {{ common_ip_address }}
{% if common_gateway is defined and common_gateway %}
      gateway4: {{ common_gateway }}
{% endif %}
      nameservers:
        addresses:
{% for server in common_dns_servers %}
          - {{ server }}
{% endfor %}
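For example, with `common_interface_name=eth0` and the sample values from the commented-out defaults (`common_ip_address=192.168.1.100/24`, `common_gateway=192.168.1.1`, default DNS servers), the template renders roughly to:

```yaml
# /etc/netplan/01-netcfg.yaml — example rendering with the sample values above
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: no
      addresses:
        - 192.168.1.100/24
      gateway4: 192.168.1.1
      nameservers:
        addresses:
          - 1.1.1.1
          - 8.8.8.8
```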
58  ansible/roles/proxmox_vm/defaults/main.yml  (new file)
@@ -0,0 +1,58 @@
---
# Defaults for proxmox_vm role

# Action to perform: create_template, create_vm, delete_vm, backup_vm
proxmox_action: create_vm

# Common settings
storage_pool: Lithium
vmid: 9000
target_node: "{{ inventory_hostname }}"

# --- Template Creation Settings ---

# Image Source Selection
# Options: 'list' (use image_alias) or 'url' (use custom_image_url)
image_source_type: list

# Predefined Image List
# You can select these by setting image_alias
image_list:
  ubuntu-22.04:
    url: "https://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-amd64.img"
    filename: "ubuntu-22.04-server-cloudimg-amd64.img"
  ubuntu-24.04:
    url: "https://cloud-images.ubuntu.com/noble/current/noble-server-cloudimg-amd64.img"
    filename: "ubuntu-24.04-server-cloudimg-amd64.img"
  debian-12:
    url: "https://cloud.debian.org/images/cloud/bookworm/latest/debian-12-generic-amd64.qcow2"
    filename: "debian-12-generic-amd64.qcow2"

# Selection (Default)
image_alias: ubuntu-22.04

# Custom URL (Used if image_source_type is 'url')
custom_image_url: ""
custom_image_name: "custom-image.img"

# Template Config
template_name: ubuntu-cloud-template
memory: 2048
cores: 2

# --- SSH Key Configuration ---
# The Admin Key is always added
admin_ssh_key: "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAI..."  # REPLACE THIS with your actual public key

# Additional keys (list of strings)
additional_ssh_keys: []

# --- Create VM Settings (Cloning) ---
new_vm_name: new-vm
clone_full: true  # Full clone (independent) vs linked clone
start_after_create: true

# --- Backup Settings ---
backup_mode: snapshot  # snapshot, suspend, stop
backup_compress: zstd
backup_storage: Lithium
7  ansible/roles/proxmox_vm/tasks/backup_vm.yml  (new file)
@@ -0,0 +1,7 @@
---
- name: Create VM Backup
  command: >
    vzdump {{ vmid }}
    --mode {{ backup_mode }}
    --compress {{ backup_compress }}
    --storage {{ backup_storage }}
91  ansible/roles/proxmox_vm/tasks/create_template.yml  (new file)
@@ -0,0 +1,91 @@
---
- name: Resolve Image Variables (List)
  set_fact:
    _image_url: "{{ image_list[image_alias].url }}"
    _image_name: "{{ image_list[image_alias].filename }}"
  when: image_source_type == 'list'

- name: Resolve Image Variables (URL)
  set_fact:
    _image_url: "{{ custom_image_url }}"
    _image_name: "{{ custom_image_name }}"
  when: image_source_type == 'url'

- name: Check if template already exists
  command: "qm status {{ vmid }}"
  register: vm_status
  failed_when: false
  changed_when: false

- name: Fail if template ID exists
  fail:
    msg: "VM ID {{ vmid }} already exists. Please choose a different ID or delete the existing VM."
  when: vm_status.rc == 0

- name: Download Cloud Image
  get_url:
    url: "{{ _image_url }}"
    dest: "/tmp/{{ _image_name }}"
    mode: '0644'

- name: Install libguestfs-tools
  apt:
    name: libguestfs-tools
    state: present
  ignore_errors: yes

- name: Create VM with hardware config
  command: >
    qm create {{ vmid }}
    --name "{{ template_name }}"
    --memory {{ memory }}
    --cores {{ cores }}
    --net0 virtio,bridge=vmbr0
    --scsihw virtio-scsi-pci
    --ostype l26
    --serial0 socket --vga serial0

- name: Import Disk
  command: "qm importdisk {{ vmid }} /tmp/{{ _image_name }} {{ storage_pool }}"

- name: Attach Disk to SCSI
  command: "qm set {{ vmid }} --scsi0 {{ storage_pool }}:vm-{{ vmid }}-disk-0"

- name: Add Cloud-Init Drive
  command: "qm set {{ vmid }} --ide2 {{ storage_pool }}:cloudinit"

- name: Set Boot Order
  command: "qm set {{ vmid }} --boot c --bootdisk scsi0"

- name: Prepare SSH Keys File
  copy:
    content: |
      {{ admin_ssh_key }}
      {% for key in additional_ssh_keys %}
      {{ key }}
      {% endfor %}
    dest: "/tmp/ssh_keys_{{ vmid }}.pub"
    mode: '0600'

- name: Configure Cloud-Init (SSH Keys, User, IP)
  command: >
    qm set {{ vmid }}
    --sshkeys /tmp/ssh_keys_{{ vmid }}.pub
    --ipconfig0 ip=dhcp

- name: Resize Disk (Default 10G)
  command: "qm resize {{ vmid }} scsi0 10G"
  ignore_errors: yes

- name: Convert to Template
  command: "qm template {{ vmid }}"

- name: Remove Downloaded Image
  file:
    path: "/tmp/{{ _image_name }}"
    state: absent

- name: Remove Temporary SSH Keys File
  file:
    path: "/tmp/ssh_keys_{{ vmid }}.pub"
    state: absent
11  ansible/roles/proxmox_vm/tasks/create_vm.yml  (new file)
@@ -0,0 +1,11 @@
---
- name: Clone VM from Template
  command: >
    qm clone {{ vmid }} {{ new_vmid }}
    --name "{{ new_vm_name }}"
    --full {{ 1 if clone_full | bool else 0 }}
  register: clone_result

- name: Start VM (Optional)
  command: "qm start {{ new_vmid }}"
  when: start_after_create | default(false) | bool
7  ansible/roles/proxmox_vm/tasks/delete_vm.yml  (new file)
@@ -0,0 +1,7 @@
---
- name: Stop VM (Force Stop)
  command: "qm stop {{ vmid }}"
  ignore_errors: yes

- name: Destroy VM
  command: "qm destroy {{ vmid }} --purge"
3  ansible/roles/proxmox_vm/tasks/main.yml  (new file)
@@ -0,0 +1,3 @@
---
- name: Dispatch task based on action
  include_tasks: "{{ proxmox_action }}.yml"
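This one-line dispatch is what lets the README drive everything through `proxmox_action`: the value must match a file name in this `tasks/` directory. For example:

```bash
# proxmox_action selects which task file the role includes:
#   -e proxmox_action=backup_vm -> tasks/backup_vm.yml
#   -e proxmox_action=delete_vm -> tasks/delete_vm.yml
ansible-playbook playbooks/manage_vm.yml -e "proxmox_action=backup_vm vmid=105"
```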
161  coder/proxmox-vm/Readme.md  (new file)
@@ -0,0 +1,161 @@
---
display_name: Proxmox VM
description: Provision VMs on Proxmox VE as Coder workspaces
icon: ../../../../.icons/proxmox.svg
verified: false
tags: [proxmox, vm, cloud-init, qemu]
---

# Proxmox VM Template for Coder

Provision Linux VMs on Proxmox as [Coder workspaces](https://coder.com/docs/workspaces). The template clones a cloud-init base image, injects user-data via Snippets, and runs the Coder agent under the workspace owner's Linux user.

## Prerequisites

- Proxmox VE 8/9
- Proxmox API token with access to nodes and storages
- SSH access from the Coder provisioner to Proxmox VE
- Storage with "Snippets" content enabled
- Ubuntu cloud-init image/template on Proxmox
  - Latest images: https://cloud-images.ubuntu.com/

## Prepare a Proxmox Cloud-Init Template (once)

Run on the Proxmox node. This uses a RELEASE variable so you always pull a current image.

```bash
# Choose a release (e.g., jammy or noble)
RELEASE=jammy
IMG_URL="https://cloud-images.ubuntu.com/${RELEASE}/current/${RELEASE}-server-cloudimg-amd64.img"
IMG_PATH="/var/lib/vz/template/iso/${RELEASE}-server-cloudimg-amd64.img"

# Download cloud image
wget "$IMG_URL" -O "$IMG_PATH"

# Create base VM (example ID 999), enable QGA, correct boot order
NAME="ubuntu-${RELEASE}-cloudinit"
qm create 999 --name "$NAME" --memory 4096 --cores 2 \
  --net0 virtio,bridge=vmbr0 --agent enabled=1
qm set 999 --scsihw virtio-scsi-pci
qm importdisk 999 "$IMG_PATH" local-lvm
qm set 999 --scsi0 local-lvm:vm-999-disk-0
qm set 999 --ide2 local-lvm:cloudinit
qm set 999 --serial0 socket --vga serial0
qm set 999 --boot 'order=scsi0;ide2;net0'

# Enable Snippets on storage 'local' (one-time)
pvesm set local --content snippets,vztmpl,backup,iso

# Convert to template
qm template 999
```

Verify:

```bash
qm config 999 | grep -E 'template:|agent:|boot:|ide2:|scsi0:'
```

### Enable Snippets via GUI

- Datacenter → Storage → select `local` → Edit → Content → check "Snippets" → OK
- Ensure `/var/lib/vz/snippets/` exists on the node for snippet files
- Template page → Cloud-Init → Snippet Storage: `local` → File: your yml → Apply

## Configure this template

Edit `terraform.tfvars` with your environment:

```hcl
# Proxmox API
proxmox_api_url          = "https://<PVE_HOST>:8006/api2/json"
proxmox_api_token_id     = "<USER@REALM>!<TOKEN>"
proxmox_api_token_secret = "<SECRET>"

# SSH to the node (for snippet upload)
proxmox_host     = "<PVE_HOST>"
proxmox_password = "<NODE_ROOT_OR_SUDO_PASSWORD>"
proxmox_ssh_user = "root"

# Infra defaults
proxmox_node        = "pve"
disk_storage        = "local-lvm"
snippet_storage     = "local"
bridge              = "vmbr0"
vlan                = 0
clone_template_vmid = 999
```

### Variables (terraform.tfvars)

- These values are standard Terraform variables that the template reads at apply time.
- Place secrets (e.g., `proxmox_api_token_secret`, `proxmox_password`) in `terraform.tfvars` or inject them as environment variables using `TF_VAR_*` (e.g., `TF_VAR_proxmox_api_token_secret`); see the sketch below.
- You can also override with `-var`/`-var-file` if you run Terraform directly. With Coder, the repo's `terraform.tfvars` is bundled when pushing the template.
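A minimal sketch of the environment-variable route, run from this template directory; Terraform maps `TF_VAR_<name>` onto `variable "<name>"`:

```bash
# Keep secrets out of terraform.tfvars by exporting them first
export TF_VAR_proxmox_api_token_secret='<SECRET>'
export TF_VAR_proxmox_password='<NODE_ROOT_OR_SUDO_PASSWORD>'

terraform plan   # both secrets are now read from the environment
```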
Variables expected:

- `proxmox_api_url`, `proxmox_api_token_id`, `proxmox_api_token_secret` (sensitive)
- `proxmox_host`, `proxmox_password` (sensitive), `proxmox_ssh_user`
- `proxmox_node`, `disk_storage`, `snippet_storage`, `bridge`, `vlan`, `clone_template_vmid`
- Coder parameters: `cpu_cores`, `memory_mb`, `disk_size_gb`

## Proxmox API Token (GUI/CLI)

Docs: https://pve.proxmox.com/wiki/User_Management#pveum_tokens

GUI:

1. (Optional) Create automation user: Datacenter → Permissions → Users → Add (e.g., `terraform@pve`)
2. Permissions: Datacenter → Permissions → Add → User Permission
   - Path: `/` (or narrower, covering your nodes/storages)
   - Role: `PVEVMAdmin` + `PVEStorageAdmin` (or `PVEAdmin` for simplicity)
3. Token: Datacenter → Permissions → API Tokens → Add → copy Token ID and Secret
4. Test:

```bash
curl -k -H "Authorization: PVEAPIToken=<USER@REALM>!<TOKEN>=<SECRET>" \
  https://<PVE_HOST>:8006/api2/json/version
```

CLI:

```bash
pveum user add terraform@pve --comment 'Terraform automation user'
pveum aclmod / -user terraform@pve -role PVEAdmin
pveum user token add terraform@pve terraform --privsep 0
```

## Use

```bash
# From this directory
coder templates push --yes proxmox-cloudinit --directory . | cat
```

Create a workspace from the template in the Coder UI. First boot usually takes 60–120s while cloud-init runs.

## How it works

- Uploads rendered cloud-init user-data to `<storage>:snippets/<vm>.yml` via the provider's `proxmox_virtual_environment_file`
- VM config: `virtio-scsi-pci`, boot order `scsi0, ide2, net0`, QGA enabled
- The Linux user equals the Coder workspace owner (sanitized). To avoid collisions, reserved names (`admin`, `root`, etc.) get a suffix (e.g., `admin1`). The user is created with `primary_group: adm`, `groups: [sudo]`, `no_user_group: true`
- A systemd service runs as that user: `coder-agent.service`

## Troubleshooting quick hits

- iPXE boot loop: ensure the template has a bootable root disk and boot order `scsi0,ide2,net0`
- QGA not responding: install/enable QGA in the template; allow 60–120s on first boot
- Snippet upload errors: the storage must include `Snippets`; the token needs Datastore permissions; the `<storage>:snippets/<file>` path format is handled by the provider
- Permissions errors: ensure the token's role covers the target node(s) and storages
- Verify snippet/QGA: `qm config <vmid> | egrep 'cicustom|ide2|ciuser'`

## References

- Ubuntu Cloud Images (latest): https://cloud-images.ubuntu.com/
- Proxmox qm(1) manual: https://pve.proxmox.com/pve-docs/qm.1.html
- Proxmox Cloud-Init Support: https://pve.proxmox.com/wiki/Cloud-Init_Support
- Terraform Proxmox provider (bpg): `bpg/proxmox` on the Terraform Registry
- Coder – Best practices & templates:
  - https://coder.com/docs/tutorials/best-practices/speed-up-templates
  - https://coder.com/docs/tutorials/template-from-scratch
53  coder/proxmox-vm/cloud-init/user-data.tftpl  (new file)
@@ -0,0 +1,53 @@
#cloud-config
hostname: ${hostname}

users:
  - name: ${linux_user}
    groups: [sudo]
    shell: /bin/bash
    sudo: ["ALL=(ALL) NOPASSWD:ALL"]

package_update: false
package_upgrade: false
packages:
  - curl
  - ca-certificates
  - git
  - jq

write_files:
  - path: /opt/coder/init.sh
    permissions: "0755"
    owner: root:root
    encoding: b64
    content: |
      ${coder_init_script_b64}

  - path: /etc/systemd/system/coder-agent.service
    permissions: "0644"
    owner: root:root
    content: |
      [Unit]
      Description=Coder Agent
      Wants=network-online.target
      After=network-online.target

      [Service]
      Type=simple
      User=${linux_user}
      WorkingDirectory=/home/${linux_user}
      Environment=HOME=/home/${linux_user}
      Environment=CODER_AGENT_TOKEN=${coder_token}
      ExecStart=/opt/coder/init.sh
      OOMScoreAdjust=-1000
      Restart=always
      RestartSec=5

      [Install]
      WantedBy=multi-user.target

runcmd:
  - systemctl daemon-reload
  - systemctl enable --now coder-agent.service

final_message: "Cloud-init complete on ${hostname}"
283  coder/proxmox-vm/main.tf  (new file)
@@ -0,0 +1,283 @@
terraform {
  required_providers {
    coder = {
      source = "coder/coder"
    }
    proxmox = {
      source = "bpg/proxmox"
    }
  }
}

provider "coder" {}

provider "proxmox" {
  endpoint  = var.proxmox_api_url
  api_token = "${var.proxmox_api_token_id}=${var.proxmox_api_token_secret}"
  insecure  = true

  # SSH is needed for file uploads to Proxmox
  ssh {
    username = var.proxmox_ssh_user
    password = var.proxmox_password

    node {
      name    = var.proxmox_node
      address = var.proxmox_host
    }
  }
}

variable "proxmox_api_url" {
  type = string
}

variable "proxmox_api_token_id" {
  type      = string
  sensitive = true
}

variable "proxmox_api_token_secret" {
  type      = string
  sensitive = true
}

variable "proxmox_host" {
  description = "Proxmox node IP or DNS for SSH"
  type        = string
}

variable "proxmox_password" {
  description = "Proxmox password (used for SSH)"
  type        = string
  sensitive   = true
}

variable "proxmox_ssh_user" {
  description = "SSH username on Proxmox node"
  type        = string
  default     = "root"
}

variable "proxmox_node" {
  description = "Target Proxmox node"
  type        = string
  default     = "pve"
}

variable "disk_storage" {
  description = "Disk storage (e.g., local-lvm)"
  type        = string
  default     = "local-lvm"
}

variable "snippet_storage" {
  description = "Storage with Snippets content"
  type        = string
  default     = "local"
}

variable "bridge" {
  description = "Bridge (e.g., vmbr0)"
  type        = string
  default     = "vmbr0"
}

variable "vlan" {
  description = "VLAN tag (0 = none)"
  type        = number
  default     = 0
}

variable "clone_template_vmid" {
  description = "VMID of the cloud-init base template to clone"
  type        = number
}

data "coder_workspace" "me" {}
data "coder_workspace_owner" "me" {}

data "coder_parameter" "cpu_cores" {
  name         = "cpu_cores"
  display_name = "CPU Cores"
  type         = "number"
  default      = 2
  mutable      = true
}

data "coder_parameter" "memory_mb" {
  name         = "memory_mb"
  display_name = "Memory (MB)"
  type         = "number"
  default      = 4096
  mutable      = true
}

data "coder_parameter" "disk_size_gb" {
  name         = "disk_size_gb"
  display_name = "Disk Size (GB)"
  type         = "number"
  default      = 20
  mutable      = true
  validation {
    min       = 10
    max       = 100
    monotonic = "increasing"
  }
}

resource "coder_agent" "dev" {
  arch = "amd64"
  os   = "linux"

  env = {
    GIT_AUTHOR_NAME  = data.coder_workspace_owner.me.name
    GIT_AUTHOR_EMAIL = data.coder_workspace_owner.me.email
  }

  startup_script_behavior = "non-blocking"
  startup_script          = <<-EOT
    set -e
    # Add any startup scripts here
  EOT

  metadata {
    display_name = "CPU Usage"
    key          = "cpu_usage"
    script       = "coder stat cpu"
    interval     = 10
    timeout      = 1
    order        = 1
  }

  metadata {
    display_name = "RAM Usage"
    key          = "ram_usage"
    script       = "coder stat mem"
    interval     = 10
    timeout      = 1
    order        = 2
  }

  metadata {
    display_name = "Disk Usage"
    key          = "disk_usage"
    script       = "coder stat disk"
    interval     = 600
    timeout      = 30
    order        = 3
  }
}

locals {
  hostname         = lower(data.coder_workspace.me.name)
  vm_name          = "coder-${lower(data.coder_workspace_owner.me.name)}-${local.hostname}"
  snippet_filename = "${local.vm_name}.yml"
  base_user        = replace(replace(replace(lower(data.coder_workspace_owner.me.name), " ", "-"), "/", "-"), "@", "-") # avoid special characters in the username
  linux_user       = contains(["root", "admin", "daemon", "bin", "sys"], local.base_user) ? "${local.base_user}1" : local.base_user # avoid conflict with system users

  rendered_user_data = templatefile("${path.module}/cloud-init/user-data.tftpl", {
    coder_token           = coder_agent.dev.token
    coder_init_script_b64 = base64encode(coder_agent.dev.init_script)
    hostname              = local.vm_name
    linux_user            = local.linux_user
  })
}

resource "proxmox_virtual_environment_file" "cloud_init_user_data" {
  content_type = "snippets"
  datastore_id = var.snippet_storage
  node_name    = var.proxmox_node

  source_raw {
    data      = local.rendered_user_data
    file_name = local.snippet_filename
  }
}

resource "proxmox_virtual_environment_vm" "workspace" {
  name      = local.vm_name
  node_name = var.proxmox_node

  clone {
    node_name = var.proxmox_node
    vm_id     = var.clone_template_vmid
    full      = false
    retries   = 5
  }

  agent {
    enabled = true
  }

  on_boot = true
  started = true

  startup {
    order = 1
  }

  scsi_hardware = "virtio-scsi-pci"
  boot_order    = ["scsi0", "ide2"]

  memory {
    dedicated = data.coder_parameter.memory_mb.value
  }

  cpu {
    cores   = data.coder_parameter.cpu_cores.value
    sockets = 1
    type    = "host"
  }

  network_device {
    bridge  = var.bridge
    model   = "virtio"
    vlan_id = var.vlan == 0 ? null : var.vlan
  }

  vga {
    type = "serial0"
  }

  serial_device {
    device = "socket"
  }

  disk {
    interface    = "scsi0"
    datastore_id = var.disk_storage
    size         = data.coder_parameter.disk_size_gb.value
  }

  initialization {
    type         = "nocloud"
    datastore_id = var.disk_storage

    user_data_file_id = proxmox_virtual_environment_file.cloud_init_user_data.id

    ip_config {
      ipv4 {
        address = "dhcp"
      }
    }
  }

  tags = ["coder", "workspace", local.vm_name]

  depends_on = [proxmox_virtual_environment_file.cloud_init_user_data]
}

module "code-server" {
  count    = data.coder_workspace.me.start_count
  source   = "registry.coder.com/coder/code-server/coder"
  version  = "1.3.1"
  agent_id = coder_agent.dev.id
}

module "cursor" {
  count    = data.coder_workspace.me.start_count
  source   = "registry.coder.com/coder/cursor/coder"
  version  = "1.3.0"
  agent_id = coder_agent.dev.id
}
17  coder/proxmox-vm/terraform.tfvars  (new file)
@@ -0,0 +1,17 @@
# Proxmox API
proxmox_api_url          = "https://<PVE_HOST>:8006/api2/json"
proxmox_api_token_id     = "<USER@REALM>!<TOKEN>"
proxmox_api_token_secret = "<SECRET>"

# SSH to the node (for snippet upload)
proxmox_host     = "<PVE_HOST>"
proxmox_password = "<NODE_ROOT_OR_SUDO_PASSWORD>"
proxmox_ssh_user = "root"

# Infra defaults
proxmox_node        = "pve"
disk_storage        = "local-lvm"
snippet_storage     = "local"
bridge              = "vmbr0"
vlan                = 0
clone_template_vmid = 999
13  komodo/arr/arrs/.env.sample  (new file)
@@ -0,0 +1,13 @@
# Common paths - adjust these to match your server setup
CONFIG_PATH=/path/to/config
DATA_PATH=/path/to/data

# User and Group IDs - run `id` command to find your IDs
PUID=1000
PGID=100

# Timezone - adjust to your timezone (e.g., America/New_York, Europe/London)
# Used by: radarr, sonarr, lidarr, bookshelf, bazarr, jellyseerr, prowlarr, profilarr
TZ=Canada/Eastern
@@ -15,6 +15,21 @@ services:
|
||||
ports:
|
||||
- 7878:7878
|
||||
restart: unless-stopped
|
||||
radarr-anime:
|
||||
image: lscr.io/linuxserver/radarr
|
||||
container_name: radarr-anime
|
||||
environment:
|
||||
- PUID=1000
|
||||
- PGID=100
|
||||
- TZ=Canada/Eastern
|
||||
volumes:
|
||||
- ${CONFIG_PATH}/radarr-anime:/config
|
||||
- ${DATA_PATH}/media/movies-anime:/movies #optional
|
||||
- ${DATA_PATH}/usenet:/usenet #optional
|
||||
- ${DATA_PATH}/torrents:/torrents #optional
|
||||
ports:
|
||||
- 7879:7878
|
||||
restart: unless-stopped
|
||||
sonarr:
|
||||
image: lscr.io/linuxserver/sonarr
|
||||
container_name: sonarr
|
||||
@@ -31,6 +46,21 @@ services:
|
||||
ports:
|
||||
- 8989:8989
|
||||
restart: unless-stopped
|
||||
sonarr-anime:
|
||||
image: lscr.io/linuxserver/sonarr
|
||||
container_name: sonarr-anime
|
||||
environment:
|
||||
- PUID=${PUID}
|
||||
- PGID=${PGID}
|
||||
- TZ=Canada/Eastern
|
||||
volumes:
|
||||
- ${CONFIG_PATH}/sonarr-anime:/config
|
||||
- ${DATA_PATH}/media/anime:/anime #optional
|
||||
- ${DATA_PATH}/usenet:/usenet #optional
|
||||
- ${DATA_PATH}/torrents:/torrents #optional
|
||||
ports:
|
||||
- 8987:8989
|
||||
restart: unless-stopped
|
||||
|
||||
lidarr:
|
||||
image: lscr.io/linuxserver/lidarr
|
||||
@@ -47,21 +77,7 @@ services:
|
||||
ports:
|
||||
- 8686:8686
|
||||
restart: unless-stopped
|
||||
# readarr:
|
||||
# image: linuxserver/readarr:0.4.19-nightly
|
||||
# container_name: readarr
|
||||
# environment:
|
||||
# - PUID=${PUID}
|
||||
# - PGID=${PGID}
|
||||
# - TZ=Canada/Eastern
|
||||
# volumes:
|
||||
# - ${CONFIG_PATH}/readarr:/config
|
||||
# - ${DATA_PATH}/media/Calibre_Library:/books #optional
|
||||
# - ${DATA_PATH}/usenet:/usenet #optional
|
||||
# - ${DATA_PATH}/torrents:/torrents #optional
|
||||
# ports:
|
||||
# - 8787:8787
|
||||
# restart: unless-stopped
|
||||
|
||||
bookshelf:
|
||||
image: ghcr.io/pennydreadful/bookshelf:hardcover
|
||||
container_name: bookshelf
|
||||
@@ -71,12 +87,13 @@ services:
|
||||
- TZ=Canada/Eastern
|
||||
volumes:
|
||||
- ${CONFIG_PATH}/bookshelf:/config
|
||||
- ${DATA_PATH}/media/Calibre_Library:/books #optional
|
||||
- ${DATA_PATH}/media/books:/books #optional
|
||||
- ${DATA_PATH}/usenet:/usenet #optional
|
||||
- ${DATA_PATH}/torrents:/torrents #optional
|
||||
ports:
|
||||
- 8787:8787
|
||||
restart: unless-stopped
|
||||
|
||||
bazarr:
|
||||
image: lscr.io/linuxserver/bazarr
|
||||
container_name: bazarr
|
||||
@@ -92,9 +109,11 @@ services:
    ports:
      - 6767:6767
    restart: unless-stopped
  jellyseerr:
    image: fallenbagel/jellyseerr:latest
    container_name: jellyseerr

  seerr:
    image: ghcr.io/seerr-team/seerr:latest
    init: true
    container_name: seerr
    environment:
      - LOG_LEVEL=debug
      - TZ=Canada/Eastern
@@ -103,6 +122,7 @@ services:
    volumes:
      - ${CONFIG_PATH}/jellyseerr:/app/config
    restart: unless-stopped

  prowlarr:
    image: ghcr.io/linuxserver/prowlarr:develop
    container_name: prowlarr
@@ -116,19 +136,6 @@ services:
      - 9696:9696
    restart: unless-stopped

  ytptube:
    user: "${PUID:-1000}:${PGID:-1000}" # change this to your user id and group id, for example: "1000:1000"
    image: ghcr.io/arabcoders/ytptube:latest
    container_name: ytptube
    restart: unless-stopped
    ports:
      - "8581:8081"
    volumes:
      - ${CONFIG_PATH}/ytptube:/config
      - ${DATA_PATH}/media/youtube:/downloads
    tmpfs:
      - /tmp

  profilarr:
    image: santiagosayshey/profilarr:latest # Use :beta for early access to new features
    container_name: profilarr
@@ -138,4 +145,28 @@ services:
      - /path/to/your/data:/config # Replace with your actual path
    environment:
      - TZ=UTC # Set your timezone
    restart: unless-stopped

  flaresolverr:
    image: ghcr.io/flaresolverr/flaresolverr:latest
    container_name: flaresolverr
    restart: unless-stopped
    ports:
      - 8191:8191
    environment:
      - LOG_LEVEL=info
      - LOG_HTML=false
      - CAPTCHA_SOLVER=none
      - TZ=Canada/Eastern

    # Set DNS server to prevent EU blocking
    dns:
      - 1.1.1.1
      - 1.0.0.1

    healthcheck:
      test: ["CMD", "curl", "-f", "http://127.0.0.1:8191/health"]
      interval: 30s
      timeout: 10s
      start_period: 30s
      retries: 3
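The healthcheck above polls FlareSolverr's own health endpoint; the same check can be run from the host through the published port (the response shape may vary by version):

```
$ curl -f http://localhost:8191/health
{"msg": "OK", "version": "...", "userAgent": "..."}
```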
5
komodo/arr/dispatcharr/.env.sample
Normal file
@@ -0,0 +1,5 @@
# Dispatcharr - No environment variables required
# This service uses a named volume for data persistence
# Access the web UI at http://localhost:1866
5
komodo/arr/dizquetv/.env.sample
Normal file
@@ -0,0 +1,5 @@
# DizqueTV - No environment variables required
# This service uses named volumes for data persistence
# Access the web UI at http://localhost:8000
19
komodo/arr/download-clients/.env.sample
Normal file
@@ -0,0 +1,19 @@
# Common paths
DATA_PATH=/path/to/data

# User and Group IDs - run `id` command to find your IDs
PUID=1000
PGID=100

# Private Internet Access (PIA) VPN Configuration
# Required for transmission-openvpn and pia-qbittorrent
PIA_OPENVPN_CONFIG=CA Toronto
PIA_REGION=ca-toronto
PIA_USERNAME=your_pia_username
PIA_PASSWORD=your_pia_password

# Network configuration for transmission-openvpn
# Adjust to match your local network subnet
LOCAL_NETWORK=192.168.50.0/24
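Once the stack is up it's worth confirming that Transmission's traffic actually egresses through the PIA tunnel rather than the host's WAN address; a minimal sketch, assuming curl is present in the container image:

```
# The two IPs should differ: the first is the VPN exit, the second your WAN IP
$ docker exec transmission curl -s https://ifconfig.me
$ curl -s https://ifconfig.me
```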
@@ -1,27 +1,26 @@
version: '3.3'
services:
  transmission-openvpn:
    container_name: transmission
    cap_add:
      - NET_ADMIN
    volumes:
      - ${DATA_PATH}/torrents:/data
    environment:
      - PUID=${PUID}
      - PGID=${PGID}
      - OPENVPN_PROVIDER=PIA
      - OPENVPN_CONFIG=${PIA_OPENVPN_CONFIG}
      - OPENVPN_USERNAME=${PIA_USERNAME}
      - OPENVPN_PASSWORD=${PIA_PASSWORD}
      - LOCAL_NETWORK=192.168.50.0/24
    logging:
      driver: json-file
      options:
        max-size: 10m
    ports:
      - '9092:9091'
    image: haugene/transmission-openvpn
    restart: unless-stopped
  # transmission-openvpn:
  #   container_name: transmission
  #   cap_add:
  #     - NET_ADMIN
  #   volumes:
  #     - ${DATA_PATH}/torrents:/data
  #   environment:
  #     - PUID=${PUID}
  #     - PGID=${PGID}
  #     - OPENVPN_PROVIDER=PIA
  #     - OPENVPN_CONFIG=${PIA_OPENVPN_CONFIG}
  #     - OPENVPN_USERNAME=${PIA_USERNAME}
  #     - OPENVPN_PASSWORD=${PIA_PASSWORD}
  #     - LOCAL_NETWORK=192.168.50.0/24
  #   logging:
  #     driver: json-file
  #     options:
  #       max-size: 10m
  #   ports:
  #     - '9092:9091'
  #   image: haugene/transmission-openvpn
  #   restart: unless-stopped

  sabnzbd:
    image: lscr.io/linuxserver/sabnzbd:latest
@@ -57,5 +56,22 @@ services:
    ports:
      - "8888:8888"
    restart: unless-stopped
  shelfmark:
    image: ghcr.io/calibrain/shelfmark:latest
    container_name: shelfmark
    environment:
      PUID: ${PUID}
      PGID: ${PGID}
    ports:
      - 8084:8084
    restart: unless-stopped
    volumes:
      - ${DATA_PATH}/media/bookdrop:/books # Default destination for book downloads
      - shelfmark:/config # App configuration
      # Required for torrent / usenet - path must match your download client's volume exactly
      - ${DATA_PATH}/torrents:/downloads/torrents
      - ${DATA_PATH}/usenet:/downloads/usenet
      # - /path/to/downloads:/path/to/downloads
volumes:
  qbittorrent-pia:
  shelfmark:
17
komodo/auth/Authentik/.env.sample
Normal file
@@ -0,0 +1,17 @@
PUID=1000
PGID=100
AUTHENTIK_SECRET_KEY=supersecretkey
PG_PASS=supersecretpassword
AUTHENTIK_EMAIL__HOST=smtp.gmail.com
AUTHENTIK_EMAIL__PORT=587
AUTHENTIK_EMAIL__USERNAME=admin@example.com
AUTHENTIK_EMAIL__PASSWORD=password123
AUTHENTIK_EMAIL__USE_TLS=true
AUTHENTIK_EMAIL__USE_SSL=false
AUTHENTIK_EMAIL__TIMEOUT=10
AUTHENTIK_EMAIL__FROM=auth@example.com
COMPOSE_PORT_HTTP=10000
COMPOSE_PORT_HTTPS=10443
AUTHENTIK_ERROR_REPORTING__ENABLED=true
AUTHENTIK_TAG=2025.10
CONFIG_PATH=/srv/dev-disk-by-uuid-7acaa21a-aa26-4605-bb36-8f4c9c1a7695/configs/authentik
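AUTHENTIK_SECRET_KEY and PG_PASS are placeholders and must be replaced with real random values before first start; one way to generate and write them in place (any strong generator works):

```
$ sed -i "s|^AUTHENTIK_SECRET_KEY=.*|AUTHENTIK_SECRET_KEY=$(openssl rand -base64 48 | tr -d '\n')|" .env
$ sed -i "s|^PG_PASS=.*|PG_PASS=$(openssl rand -base64 36 | tr -d '\n')|" .env
```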
120
komodo/auth/Authentik/compose.yaml
Normal file
@@ -0,0 +1,120 @@
---
services:
  postgresql:
    image: docker.io/library/postgres:16-alpine
    restart: unless-stopped
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -d $${POSTGRES_DB} -U $${POSTGRES_USER}"]
      start_period: 20s
      interval: 30s
      retries: 5
      timeout: 5s
    volumes:
      - database:/var/lib/postgresql/data
    environment:
      POSTGRES_PASSWORD: ${PG_PASS:?database password required}
      POSTGRES_USER: ${PG_USER:-authentik}
      POSTGRES_DB: ${PG_DB:-authentik}
  redis:
    image: docker.io/library/redis:alpine
    command: --save 60 1 --loglevel warning
    restart: unless-stopped
    healthcheck:
      test: ["CMD-SHELL", "redis-cli ping | grep PONG"]
      start_period: 20s
      interval: 30s
      retries: 5
      timeout: 3s
    volumes:
      - redis:/data
  server:
    image: ${AUTHENTIK_IMAGE:-ghcr.io/goauthentik/server}:${AUTHENTIK_TAG:-2024.12.3}
    restart: unless-stopped
    command: server
    environment:
      AUTHENTIK_REDIS__HOST: redis
      AUTHENTIK_POSTGRESQL__HOST: postgresql
      AUTHENTIK_POSTGRESQL__USER: ${PG_USER:-authentik}
      AUTHENTIK_POSTGRESQL__NAME: ${PG_DB:-authentik}
      AUTHENTIK_POSTGRESQL__PASSWORD: ${PG_PASS}
      AUTHENTIK_SECRET_KEY: ${AUTHENTIK_SECRET_KEY}
      AUTHENTIK_EMAIL__HOST: ${AUTHENTIK_EMAIL__HOST}
      AUTHENTIK_EMAIL__PORT: ${AUTHENTIK_EMAIL__PORT}
      AUTHENTIK_EMAIL__USERNAME: ${AUTHENTIK_EMAIL__USERNAME}
      AUTHENTIK_EMAIL__PASSWORD: ${AUTHENTIK_EMAIL__PASSWORD}
      AUTHENTIK_EMAIL__USE_TLS: ${AUTHENTIK_EMAIL__USE_TLS}
      AUTHENTIK_EMAIL__USE_SSL: ${AUTHENTIK_EMAIL__USE_SSL}
      AUTHENTIK_EMAIL__TIMEOUT: ${AUTHENTIK_EMAIL__TIMEOUT}
      AUTHENTIK_EMAIL__FROM: ${AUTHENTIK_EMAIL__FROM}
    volumes:
      - media:/data/media
      - custom-templates:/templates
    ports:
      - "${COMPOSE_PORT_HTTP:-9000}:9000"
      - "${COMPOSE_PORT_HTTPS:-9443}:9443"

    depends_on:
      postgresql:
        condition: service_healthy
      redis:
        condition: service_healthy
  worker:
    image: ${AUTHENTIK_IMAGE:-ghcr.io/goauthentik/server}:${AUTHENTIK_TAG:-2024.12.3}
    restart: unless-stopped
    command: worker
    environment:
      AUTHENTIK_REDIS__HOST: redis
      AUTHENTIK_POSTGRESQL__HOST: postgresql
      AUTHENTIK_POSTGRESQL__USER: ${PG_USER:-authentik}
      AUTHENTIK_POSTGRESQL__NAME: ${PG_DB:-authentik}
      AUTHENTIK_POSTGRESQL__PASSWORD: ${PG_PASS}
      AUTHENTIK_SECRET_KEY: ${AUTHENTIK_SECRET_KEY}
      AUTHENTIK_EMAIL__HOST: ${AUTHENTIK_EMAIL__HOST}
      AUTHENTIK_EMAIL__PORT: ${AUTHENTIK_EMAIL__PORT}
      AUTHENTIK_EMAIL__USERNAME: ${AUTHENTIK_EMAIL__USERNAME}
      AUTHENTIK_EMAIL__PASSWORD: ${AUTHENTIK_EMAIL__PASSWORD}
      AUTHENTIK_EMAIL__USE_TLS: ${AUTHENTIK_EMAIL__USE_TLS}
      AUTHENTIK_EMAIL__USE_SSL: ${AUTHENTIK_EMAIL__USE_SSL}
      AUTHENTIK_EMAIL__TIMEOUT: ${AUTHENTIK_EMAIL__TIMEOUT}
      AUTHENTIK_EMAIL__FROM: ${AUTHENTIK_EMAIL__FROM}
    # `user: root` and the docker socket volume are optional.
    # See more for the docker socket integration here:
    # https://goauthentik.io/docs/outposts/integrations/docker
    # Removing `user: root` also prevents the worker from fixing the permissions
    # on the mounted folders, so when removing this make sure the folders have the correct UID/GID
    # (1000:1000 by default)
    user: root
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - media:/data/media
      - certs:/certs
      - custom-templates:/templates
    depends_on:
      postgresql:
        condition: service_healthy
      redis:
        condition: service_healthy
  authentik_ldap:
    image: ghcr.io/goauthentik/ldap:${AUTHENTIK_TAG:-2024.12.3}
    restart: unless-stopped
    ports:
      - 3389:3389
      - 6636:6636
    environment:
      AUTHENTIK_HOST: https://auth.pcenicni.ca
      AUTHENTIK_INSECURE: "false"
      AUTHENTIK_TOKEN: <LDAP_OUTPOST_TOKEN> # Never commit the real outpost token; rotate it if it has been exposed
    depends_on:
      postgresql:
        condition: service_healthy
volumes:
  database:
    driver: local
  redis:
    driver: local
  media:
    driver: local
  certs:
    driver: local
  custom-templates:
    driver: local
11
komodo/automate/coder/.env.sample
Normal file
@@ -0,0 +1,11 @@
POSTGRES_USER=coder
POSTGRES_PASSWORD=password
POSTGRES_DB=coder
CODER_ACCESS_URL=coder.example.com

CODER_OIDC_ISSUER_URL="https://auth.example.com/application/o/coder"
CODER_OIDC_EMAIL_DOMAIN="example.com"
CODER_OIDC_CLIENT_ID="your_client_id"
CODER_OIDC_CLIENT_SECRET="your_client_secret"
CODER_OIDC_IGNORE_EMAIL_VERIFIED=true
CODER_OIDC_SIGN_IN_TEXT="Sign in with Authentik"
65
komodo/automate/coder/compose.yaml
Normal file
@@ -0,0 +1,65 @@
services:
  coder:
    # This MUST be stable for our documentation and
    # other automations.
    image: ${CODER_REPO:-ghcr.io/coder/coder}:${CODER_VERSION:-latest}
    ports:
      - "7080:7080"
    environment:
      CODER_PG_CONNECTION_URL: "postgresql://${POSTGRES_USER:-username}:${POSTGRES_PASSWORD:-password}@database/${POSTGRES_DB:-coder}?sslmode=disable"
      CODER_HTTP_ADDRESS: "0.0.0.0:7080"
      # You'll need to set CODER_ACCESS_URL to an IP or domain
      # that workspaces can reach. This cannot be localhost
      # or 127.0.0.1 for non-Docker templates!
      CODER_ACCESS_URL: "${CODER_ACCESS_URL}"

      # OpenID connect config
      CODER_OIDC_ISSUER_URL: "${CODER_OIDC_ISSUER_URL}"
      #CODER_OIDC_EMAIL_DOMAIN: "${CODER_OIDC_EMAIL_DOMAIN}"
      CODER_OIDC_CLIENT_ID: "${CODER_OIDC_CLIENT_ID}"
      CODER_OIDC_CLIENT_SECRET: "${CODER_OIDC_CLIENT_SECRET}"
      CODER_OIDC_IGNORE_EMAIL_VERIFIED: "true"
      CODER_OIDC_SIGN_IN_TEXT: "Sign in with Authentik"
      CODER_OIDC_ICON_URL: https://authentik.company/static/dist/assets/icons/icon.png
      CODER_OIDC_SCOPES: openid,profile,email,offline_access

    # If the coder user does not have write permissions on
    # the docker socket, set the group ID below to one that
    # has write permissions on the docker socket.
    group_add:
      - "988" # docker group on host
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      # Run "docker volume rm coder_coder_home" to reset the dev tunnel url (https://abc.xyz.try.coder.app).
      # This volume is not required in a production environment - you may safely remove it.
      # Coder can recreate all the files it needs on restart.
      - coder_home:/home/coder
    depends_on:
      database:
        condition: service_healthy
  database:
    # Minimum supported version is 13.
    # More versions here: https://hub.docker.com/_/postgres
    image: "postgres:17"
    # Uncomment the next two lines to allow connections to the database from outside the server.
    #ports:
    #  - "5432:5432"
    environment:
      POSTGRES_USER: ${POSTGRES_USER:-username} # The PostgreSQL user (useful to connect to the database)
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-password} # The PostgreSQL password (useful to connect to the database)
      POSTGRES_DB: ${POSTGRES_DB:-coder} # The PostgreSQL default database (automatically created at first launch)
    volumes:
      - coder_data:/var/lib/postgresql/data # Use "docker volume rm coder_coder_data" to reset Coder
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "pg_isready -U ${POSTGRES_USER:-username} -d ${POSTGRES_DB:-coder}",
        ]
      interval: 5s
      timeout: 5s
      retries: 5
volumes:
  coder_data:
  coder_home:
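The `group_add` GID above must match the docker group on the host running Coder, which is not always 988; look it up before deploying:

```
$ getent group docker | cut -d: -f3
988
```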
48
komodo/automate/fleet/.env.sample
Normal file
@@ -0,0 +1,48 @@
# MySQL Configuration
MYSQL_ROOT_PASSWORD=change_this_root_password
MYSQL_DATABASE=fleet
MYSQL_USER=fleet
MYSQL_PASSWORD=change_this_fleet_password

# Fleet Server Configuration
# Generate a random key with: openssl rand -base64 32
FLEET_SERVER_PRIVATE_KEY=change_this_private_key

# Fleet HTTP Listener Configuration
FLEET_SERVER_ADDRESS=0.0.0.0
FLEET_SERVER_PORT=1337

# TLS Configuration
# Set to 'true' if Fleet handles TLS directly (requires certificates in ./certs/)
# Set to 'false' if using a reverse proxy or load balancer for TLS termination
FLEET_SERVER_TLS=false

# TLS Certificate paths (only needed if FLEET_SERVER_TLS=true)
FLEET_SERVER_CERT=/fleet/fleet.crt
FLEET_SERVER_KEY=/fleet/fleet.key

# Fleet License (optional - leave empty for free tier)
FLEET_LICENSE_KEY=

# Fleet Session & Logging
FLEET_SESSION_DURATION=24h
FLEET_LOGGING_JSON=true

# Fleet Osquery Configuration
FLEET_OSQUERY_STATUS_LOG_PLUGIN=filesystem
FLEET_FILESYSTEM_STATUS_LOG_FILE=/logs/osqueryd.status.log
FLEET_FILESYSTEM_RESULT_LOG_FILE=/logs/osqueryd.results.log
FLEET_OSQUERY_LABEL_UPDATE_INTERVAL=1h

# Fleet Vulnerabilities
FLEET_VULNERABILITIES_CURRENT_INSTANCE_CHECKS=yes
FLEET_VULNERABILITIES_DATABASES_PATH=/vulndb
FLEET_VULNERABILITIES_PERIODICITY=1h

# S3 Configuration (optional - leave empty if not using S3)
FLEET_S3_SOFTWARE_INSTALLERS_BUCKET=
FLEET_S3_SOFTWARE_INSTALLERS_ACCESS_KEY_ID=
FLEET_S3_SOFTWARE_INSTALLERS_SECRET_ACCESS_KEY=
FLEET_S3_SOFTWARE_INSTALLERS_FORCE_S3_PATH_STYLE=
FLEET_S3_SOFTWARE_INSTALLERS_ENDPOINT_URL=
FLEET_S3_SOFTWARE_INSTALLERS_REGION=
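As the comment in the sample notes, FLEET_SERVER_PRIVATE_KEY should be a fresh random key; as a sketch, this writes one straight into `.env` (run from the fleet directory):

```
$ sed -i "s|^FLEET_SERVER_PRIVATE_KEY=.*|FLEET_SERVER_PRIVATE_KEY=$(openssl rand -base64 32)|" .env
```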
119
komodo/automate/fleet/compose.yaml
Normal file
@@ -0,0 +1,119 @@
volumes:
  mysql:
  redis:
  data:
  logs:
  vulndb:

services:
  mysql:
    image: mysql:8
    platform: linux/x86_64
    environment:
      - MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}
      - MYSQL_DATABASE=${MYSQL_DATABASE}
      - MYSQL_USER=${MYSQL_USER}
      - MYSQL_PASSWORD=${MYSQL_PASSWORD}
    volumes:
      - mysql:/var/lib/mysql
    cap_add:
      - SYS_NICE
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "mysqladmin ping -h 127.0.0.1 -u$$MYSQL_USER -p$$MYSQL_PASSWORD --silent || exit 1",
        ]
      interval: 10s
      timeout: 5s
      retries: 12
    ports:
      - "3306:3306"
    restart: unless-stopped

  redis:
    image: redis:6
    command: ["redis-server", "--appendonly", "yes"]
    volumes:
      - redis:/data
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 10s
      timeout: 5s
      retries: 12
    ports:
      - "6379:6379"
    restart: unless-stopped

  fleet-init:
    image: alpine:latest
    volumes:
      - logs:/logs
      - data:/data
      - vulndb:/vulndb
    command: sh -c "chown -R 100:101 /logs /data /vulndb"

  fleet:
    image: fleetdm/fleet
    platform: linux/x86_64
    depends_on:
      mysql:
        condition: service_healthy
      redis:
        condition: service_healthy
      fleet-init:
        condition: service_completed_successfully
    command: sh -c "/usr/bin/fleet prepare db --no-prompt && /usr/bin/fleet serve"
    environment:
      # In-cluster service addresses (no hostnames/ports on the host)
      - FLEET_REDIS_ADDRESS=redis:6379
      - FLEET_MYSQL_ADDRESS=mysql:3306
      - FLEET_MYSQL_DATABASE=${MYSQL_DATABASE}
      - FLEET_MYSQL_USERNAME=${MYSQL_USER}
      - FLEET_MYSQL_PASSWORD=${MYSQL_PASSWORD}
      # Fleet HTTP listener
      - FLEET_SERVER_ADDRESS=${FLEET_SERVER_ADDRESS}:${FLEET_SERVER_PORT}
      - FLEET_SERVER_TLS=${FLEET_SERVER_TLS}
      # TLS Certificate paths (only needed if FLEET_SERVER_TLS=true)
      - FLEET_SERVER_CERT=${FLEET_SERVER_CERT}
      - FLEET_SERVER_KEY=${FLEET_SERVER_KEY}
      # Secrets
      - FLEET_SERVER_PRIVATE_KEY=${FLEET_SERVER_PRIVATE_KEY} # Run 'openssl rand -base64 32' to generate
      - FLEET_LICENSE_KEY=${FLEET_LICENSE_KEY}
      # System tuning & other options
      - FLEET_SESSION_DURATION=${FLEET_SESSION_DURATION}
      - FLEET_LOGGING_JSON=${FLEET_LOGGING_JSON}
      - FLEET_OSQUERY_STATUS_LOG_PLUGIN=${FLEET_OSQUERY_STATUS_LOG_PLUGIN}
      - FLEET_FILESYSTEM_STATUS_LOG_FILE=${FLEET_FILESYSTEM_STATUS_LOG_FILE}
      - FLEET_FILESYSTEM_RESULT_LOG_FILE=${FLEET_FILESYSTEM_RESULT_LOG_FILE}
      - FLEET_OSQUERY_LABEL_UPDATE_INTERVAL=${FLEET_OSQUERY_LABEL_UPDATE_INTERVAL}
      - FLEET_VULNERABILITIES_CURRENT_INSTANCE_CHECKS=${FLEET_VULNERABILITIES_CURRENT_INSTANCE_CHECKS}
      - FLEET_VULNERABILITIES_DATABASES_PATH=${FLEET_VULNERABILITIES_DATABASES_PATH}
      - FLEET_VULNERABILITIES_PERIODICITY=${FLEET_VULNERABILITIES_PERIODICITY}
      # Optional S3 info
      - FLEET_S3_SOFTWARE_INSTALLERS_BUCKET=${FLEET_S3_SOFTWARE_INSTALLERS_BUCKET}
      - FLEET_S3_SOFTWARE_INSTALLERS_ACCESS_KEY_ID=${FLEET_S3_SOFTWARE_INSTALLERS_ACCESS_KEY_ID}
      - FLEET_S3_SOFTWARE_INSTALLERS_SECRET_ACCESS_KEY=${FLEET_S3_SOFTWARE_INSTALLERS_SECRET_ACCESS_KEY}
      - FLEET_S3_SOFTWARE_INSTALLERS_FORCE_S3_PATH_STYLE=${FLEET_S3_SOFTWARE_INSTALLERS_FORCE_S3_PATH_STYLE}
      # Override FLEET_S3_SOFTWARE_INSTALLERS_ENDPOINT_URL when using a different S3 compatible
      # object storage backend (such as RustFS) or running S3 locally with localstack.
      # Leave this blank to use the default S3 service endpoint.
      - FLEET_S3_SOFTWARE_INSTALLERS_ENDPOINT_URL=${FLEET_S3_SOFTWARE_INSTALLERS_ENDPOINT_URL}
      # RustFS users should set FLEET_S3_SOFTWARE_INSTALLERS_REGION to any nonempty value (eg. localhost)
      # to short-circuit region discovery
      - FLEET_S3_SOFTWARE_INSTALLERS_REGION=${FLEET_S3_SOFTWARE_INSTALLERS_REGION}
    ports:
      - "${FLEET_SERVER_PORT}:${FLEET_SERVER_PORT}" # UI/API
    volumes:
      - data:/fleet
      - logs:/logs
      - vulndb:${FLEET_VULNERABILITIES_DATABASES_PATH}
      # - ./certs/fleet.crt:/fleet/fleet.crt:ro
      # - ./certs/fleet.key:/fleet/fleet.key:ro
    healthcheck:
      test:
        ["CMD", "wget", "-qO-", "http://127.0.0.1:${FLEET_SERVER_PORT}/healthz"]
      interval: 10s
      timeout: 5s
      retries: 12
    restart: unless-stopped
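The container healthcheck polls /healthz; the same endpoint is reachable from the host on the published port (1337 in the sample .env), which makes for a quick smoke test after `docker compose up -d`:

```
$ curl -fsS http://localhost:1337/healthz && echo healthy
healthy
```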
9
komodo/automate/n8n/.env.sample
Normal file
@@ -0,0 +1,9 @@
# n8n - Timezone configuration
TZ=Etc/UTC
GENERIC_TIMEZONE=America/New_York

# Optional: Additional n8n configuration
# N8N_RUNNERS_ENABLED=true
# N8N_ENFORCE_SETTINGS_FILE_PERMISSIONS=true
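TZ and GENERIC_TIMEZONE both expect IANA tz database names; on most Linux hosts valid values can be listed to copy from:

```
$ timedatectl list-timezones | grep -i new_york
America/New_York
```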
5
komodo/automate/node-red/.env.sample
Normal file
@@ -0,0 +1,5 @@
# Node-RED - No environment variables required
# This service uses a named volume for data persistence
# Access the web UI at http://localhost:1880
23
komodo/automate/semaphore/.env.sample
Normal file
@@ -0,0 +1,23 @@
# User and Group IDs
PUID=1000
PGID=100

# Database Configuration
SEMAPHORE_DB_HOST=semaphore_db
SEMAPHORE_DB_NAME=semaphore
SEMAPHORE_DB_USER=semaphore_user
SEMAPHORE_DB_PASS=your_secure_db_password

# Email Configuration (for notifications)
SEMAPHORE_EMAIL_SENDER=semaphore@yourdomain.com
SEMAPHORE_EMAIL_HOST=smtp.yourdomain.com
SEMAPHORE_EMAIL_PORT=587
SEMAPHORE_EMAIL_USERNAME=smtp_username
SEMAPHORE_EMAIL_PASSWORD=smtp_password
SEMAPHORE_EMAIL_SECURE=false

AUTHENTIK_URL=https://authentik.example.com/application/o/<slug>/
AUTHENTIK_CLIENT_ID=your_client_id
AUTHENTIK_CLIENT_SECRET=your_client_secret
AUTHENTIK_REDIRECT_URI=https://semaphore.example.com/api/auth/oidc/authentik/redirect/
9
komodo/automate/termix/.env.sample
Normal file
@@ -0,0 +1,9 @@
OIDC_CLIENT_ID=your_oidc_client_id
OIDC_CLIENT_SECRET=your_oidc_client_secret
OIDC_ISSUER_URL=https://your-oidc-provider.com/application/o/termix/ # The base URL of your OIDC provider for this application. This is used to discover the authorization, token, and userinfo endpoints. It should end with a slash.
OIDC_AUTHORIZATION_URL=https://your-oidc-provider.com/application/o/authorize
OIDC_TOKEN_URL=https://your-oidc-provider.com/application/o/token
OIDC_USERINFO_URL=https://your-oidc-provider.com/application/o/userinfo
OIDC_IDENTIFIER_PATH=sub # The path in the OIDC userinfo response that contains the unique user identifier (default is 'sub')
OIDC_NAME_PATH=name # The path in the OIDC userinfo response that contains the user's display name (default is 'name')
OIDC_SCOPES=openid profile email
25
komodo/automate/termix/compose.yaml
Normal file
@@ -0,0 +1,25 @@
---
services:
  termix:
    container_name: termix
    image: ghcr.io/lukegus/termix:latest
    restart: unless-stopped
    ports:
      - 8180:8080
    volumes:
      - termix-data:/app/data
    environment:
      PORT: 8080
      OIDC_CLIENT_ID: ${OIDC_CLIENT_ID}
      OIDC_CLIENT_SECRET: ${OIDC_CLIENT_SECRET}
      OIDC_ISSUER_URL: ${OIDC_ISSUER_URL}
      OIDC_AUTHORIZATION_URL: ${OIDC_AUTHORIZATION_URL}
      OIDC_TOKEN_URL: ${OIDC_TOKEN_URL}
      OIDC_USERINFO_URL: ${OIDC_USERINFO_URL}
      OIDC_IDENTIFIER_PATH: ${OIDC_IDENTIFIER_PATH}
      OIDC_NAME_PATH: ${OIDC_NAME_PATH}
      OIDC_SCOPES: ${OIDC_SCOPES}

volumes:
  termix-data:
    driver: local
13
komodo/automate/watchstate/compose.yaml
Normal file
@@ -0,0 +1,13 @@
services:
  watchstate:
    image: ghcr.io/arabcoders/watchstate:latest
    # To change the user/group id associated with the tool change the following line.
    user: "${UID:-1000}:${GID:-1000}"
    container_name: watchstate
    restart: unless-stopped
    ports:
      - "8080:8080" # The port which the watchstate will listen on.
    volumes:
      - watchstate:/config:rw # named volume mounted to the container's /config directory.
volumes:
  watchstate:
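Note that Compose only sees UID/GID if they are exported or present in .env (bash does not export UID by default), so the `user:` mapping above silently falls back to 1000:1000 otherwise; pinning them explicitly is a one-liner:

```
$ printf 'UID=%s\nGID=%s\n' "$(id -u)" "$(id -g)" >> .env
```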
6
komodo/common/newt/.env.sample
Normal file
@@ -0,0 +1,6 @@
# Newt Configuration
PANGOLIN_ENDPOINT=your_pangolin_endpoint_url
NEWT_ID=your_newt_id
NEWT_SECRET=your_newt_secret
9
komodo/common/newt/compose.yaml
Normal file
@@ -0,0 +1,9 @@
services:
  newt:
    image: fosrl/newt
    container_name: newt
    restart: unless-stopped
    environment:
      - PANGOLIN_ENDPOINT=${PANGOLIN_ENDPOINT}
      - NEWT_ID=${NEWT_ID}
      - NEWT_SECRET=${NEWT_SECRET}
5
komodo/general-purpose/actual-budget/.env.sample
Normal file
@@ -0,0 +1,5 @@
# Actual Budget - No environment variables required
# This service uses a named volume for data persistence
# Access the web UI at http://localhost:5006
58
komodo/general-purpose/adventure-log/.env.sample
Normal file
@@ -0,0 +1,58 @@
# 🌐 Frontend
PUBLIC_SERVER_URL=http://server:8000 # PLEASE DON'T CHANGE :) - Should be the service name of the backend with port 8000, even if you change the port in the backend service. Only change if you are using a custom more complex setup.
ORIGIN=http://localhost:8015
BODY_SIZE_LIMIT=Infinity
FRONTEND_PORT=8015

# 🐘 PostgreSQL Database
PGHOST=db
POSTGRES_DB=database
POSTGRES_USER=adventure
POSTGRES_PASSWORD=changeme123

# 🔒 Django Backend
SECRET_KEY=changeme123
DJANGO_ADMIN_USERNAME=admin
DJANGO_ADMIN_PASSWORD=admin
DJANGO_ADMIN_EMAIL=admin@example.com
PUBLIC_URL=http://localhost:8016 # Match the outward port, used for the creation of image urls
CSRF_TRUSTED_ORIGINS=http://localhost:8016,http://localhost:8015
DEBUG=False
FRONTEND_URL=http://localhost:8015 # Used for email generation. This should be the url of the frontend
BACKEND_PORT=8016

# Optional: use Google Maps integration
# https://adventurelog.app/docs/configuration/google_maps_integration.html
# GOOGLE_MAPS_API_KEY=your_google_maps_api_key

# Optional: disable registration
# https://adventurelog.app/docs/configuration/disable_registration.html
DISABLE_REGISTRATION=False
# DISABLE_REGISTRATION_MESSAGE=Registration is disabled for this instance of AdventureLog.

# SOCIALACCOUNT_ALLOW_SIGNUP=False # When false, social providers cannot be used to create new user accounts when registration is disabled.

# FORCE_SOCIALACCOUNT_LOGIN=False # When true, only social login is allowed (no password login) and the login page will show only social providers or redirect directly to the first provider if only one is configured.

# ACCOUNT_EMAIL_VERIFICATION='none' # 'none', 'optional', 'mandatory' # You can change this as needed for your environment

# Optional: Use email
# https://adventurelog.app/docs/configuration/email.html
# EMAIL_BACKEND=email
# EMAIL_HOST=smtp.gmail.com
# EMAIL_USE_TLS=True
# EMAIL_PORT=587
# EMAIL_USE_SSL=False
# EMAIL_HOST_USER=user
# EMAIL_HOST_PASSWORD=password
# DEFAULT_FROM_EMAIL=user@example.com

# Optional: Use Strava integration
# https://adventurelog.app/docs/configuration/strava_integration.html
# STRAVA_CLIENT_ID=your_strava_client_id
# STRAVA_CLIENT_SECRET=your_strava_client_secret

# Optional: Use Umami for analytics
# https://adventurelog.app/docs/configuration/analytics.html
# PUBLIC_UMAMI_SRC=https://cloud.umami.is/script.js # If you are using the hosted version of Umami
# PUBLIC_UMAMI_WEBSITE_ID=
36
komodo/general-purpose/adventure-log/compose.yaml
Normal file
@@ -0,0 +1,36 @@
services:
  web:
    #build: ./frontend/
    image: ghcr.io/seanmorley15/adventurelog-frontend:latest
    container_name: adventurelog-frontend
    restart: unless-stopped
    env_file: .env
    ports:
      - "${FRONTEND_PORT:-8015}:3000"
    depends_on:
      - server

  db:
    image: postgis/postgis:16-3.5
    container_name: adventurelog-db
    restart: unless-stopped
    env_file: .env
    volumes:
      - postgres_data:/var/lib/postgresql/data/

  server:
    #build: ./backend/
    image: ghcr.io/seanmorley15/adventurelog-backend:latest
    container_name: adventurelog-backend
    restart: unless-stopped
    env_file: .env
    ports:
      - "${BACKEND_PORT:-8016}:80"
    depends_on:
      - db
    volumes:
      - adventurelog_media:/code/media/

volumes:
  postgres_data:
  adventurelog_media:
@@ -1,104 +0,0 @@
---
services:

  # The container for BookStack itself
  bookstack:
    # You should update the version here to match the latest
    # release of BookStack: https://github.com/BookStackApp/BookStack/releases
    # You'll change this when wanting to update the version of BookStack used.
    image: lscr.io/linuxserver/bookstack:latest
    container_name: bookstack
    environment:
      - PUID=${PUID}
      - PGID=${PGID}
      - TZ=Etc/UTC
      # APP_URL must be set as the base URL you'd expect to access BookStack
      # on via the browser. The default shown here is what you might use if accessing
      # direct from the browser on the docker host, hence the use of the port as configured below.
      - APP_URL=${APP_URL}
      # APP_KEY must be a unique key. Generate your own by running
      # docker run -it --rm --entrypoint /bin/bash lscr.io/linuxserver/bookstack:latest appkey
      # You should keep the "base64:" part for the option value.
      - APP_KEY=${API_KEY}

      # The below database details are purposefully aligned with those
      # configured for the "mariadb" service below:
      - DB_HOST=mariadb
      - DB_PORT=3306
      - DB_DATABASE=${DB_DATABASE}
      - DB_USERNAME=${DB_USERNAME}
      - DB_PASSWORD=${DB_PASSWORD}

      # SAML
      # configured for authentik
      - AUTH_METHOD=${AUTH_METHOD}
      - AUTH_AUTO_INITIATE=${AUTH_AUTO_INITIATE}
      - SAML2_NAME=${SAML2_NAME}
      - SAML2_EMAIL_ATTRIBUTE=${SAML2_EMAIL_ATTRIBUTE}
      - SAML2_EXTERNAL_ID_ATTRIBUTE=${SAML2_EXTERNAL_ID_ATTRIBUTE}
      - SAML2_USER_TO_GROUPS=${SAML2_USER_TO_GROUPS}
      - SAML2_GROUP_ATTRIBUTE=${SAML2_GROUP_ATTRIBUTE}
      - SAML2_DISPLAY_NAME_ATTRIBUTES=${SAML2_DISPLAY_NAME_ATTRIBUTES}
      - SAML2_IDP_ENTITYID=${SAML2_IDP_ENTITYID}
      - SAML2_AUTOLOAD_METADATA=${SAML2_AUTOLOAD_METADATA}
      - SAML2_USER_TO_GROUPS=true
      - SAML2_GROUP_ATTRIBUTE=groups
      - SAML2_REMOVE_FROM_GROUPS=false

      # SMTP
      - MAIL_DRIVER=${MAIL_DRIVER}
      - MAIL_HOST=${MAIL_HOST}
      - MAIL_PORT=${MAIL_PORT}
      - MAIL_ENCRYPTION=${MAIL_ENCRYPTION}
      - MAIL_USERNAME=${MAIL_USERNAME}
      - MAIL_PASSWORD=${MAIL_PASSWORD}
      - MAIL_FROM=${MAIL_FROM}
      - MAIL_FROM_NAME=${MAIL_FROM_NAME}
    volumes:
      # You generally only ever need to map this one volume.
      # This maps it to a "bookstack_app_data" folder in the same
      # directory as this compose config file.
      - bookstack_app:/config
    ports:
      # This exposes port 6875 for general web access.
      # Commonly you'd have a reverse proxy in front of this,
      # redirecting incoming requests to this port.
      - 6875:80
    restart: unless-stopped

  # The container for the database which BookStack will use to store
  # most of its core data/content.
  mariadb:
    # You should update the version here to match the latest
    # main version of the linuxserver mariadb container version:
    # https://github.com/linuxserver/docker-mariadb/pkgs/container/mariadb/versions?filters%5Bversion_type%5D=tagged
    image: lscr.io/linuxserver/mariadb:latest
    container_name: bookstack-mariadb
    environment:
      - PUID=${PUID}
      - PGID=${PGID}
      - TZ=Etc/UTC
      # You may want to change the credentials used below,
      # but be aware the latter three options need to align
      # with the DB_* options for the BookStack container.
      - MYSQL_ROOT_PASSWORD=${DB_ROOTPASS}
      - MYSQL_DATABASE=${DB_DATABASE}
      - MYSQL_USER=${DB_USERNAME}
      - MYSQL_PASSWORD=${DB_PASSWORD}
    volumes:
      # You generally only ever need to map this one volume.
      # This maps it to a "bookstack_db_data" folder in the same
      # directory as this compose config file.
      - bookstack_db:/config

    # These ports are commented out as you don't really need this port
    # exposed for normal use, mainly only if connecting directly to the
    # database externally. Otherwise, this risks exposing access to the
    # database when not needed.
    # ports:
    #   - 3306:3306
    restart: unless-stopped
volumes:
  bookstack_app:
  bookstack_db:
@@ -1,6 +0,0 @@
version: '3.9'
services:
  rtraceio:
    image: quay.io/rtraceio/flink
    ports:
      - '8080:8080'
10
komodo/general-purpose/grocy/.env.sample
Normal file
@@ -0,0 +1,10 @@
# User and Group IDs
PUID=1000
PGID=100

# Timezone
TZ=Etc/UTC

# Access the web UI at http://localhost:9283
10
komodo/general-purpose/hortusfox/.env.sample
Normal file
@@ -0,0 +1,10 @@
# Database Configuration
DB_USERNAME=hortusfox
DB_PASSWORD=your_secure_db_password
DB_ROOT_PASSWORD=your_secure_root_password

# Application Configuration
APP_ADMIN_EMAIL=admin@yourdomain.com
APP_ADMIN_PASSWORD=your_secure_admin_password
48
komodo/general-purpose/hortusfox/compose.yaml
Normal file
@@ -0,0 +1,48 @@
services:
  app:
    image: ghcr.io/danielbrendel/hortusfox-web:latest
    ports:
      - "8282:80"
    volumes:
      - app_images:/var/www/html/public/img
      - app_logs:/var/www/html/app/logs
      - app_backup:/var/www/html/public/backup
      - app_themes:/var/www/html/public/themes
      - app_migrate:/var/www/html/app/migrations
    environment:
      APP_ADMIN_EMAIL: ${APP_ADMIN_EMAIL}
      APP_ADMIN_PASSWORD: ${APP_ADMIN_PASSWORD}
      APP_TIMEZONE: "UTC"
      DB_HOST: db
      DB_PORT: 3306
      DB_DATABASE: hortusfox
      DB_USERNAME: ${DB_USERNAME}
      DB_PASSWORD: ${DB_PASSWORD}
      DB_CHARSET: "utf8mb4"
      MARIADB_ROOT_PASSWORD: ${DB_ROOT_PASSWORD}
      MARIADB_DATABASE: hortusfox
      MARIADB_USER: ${DB_USERNAME}
      MARIADB_PASSWORD: ${DB_PASSWORD}
    depends_on:
      - db

  db:
    image: mariadb
    restart: always
    environment:
      MARIADB_ROOT_PASSWORD: ${DB_ROOT_PASSWORD}
      MARIADB_DATABASE: hortusfox
      MARIADB_USER: ${DB_USERNAME}
      MARIADB_PASSWORD: ${DB_PASSWORD}
    ports:
      - "3306:3306"
    volumes:
      - db_data:/var/lib/mysql

volumes:
  db_data:
  app_images:
  app_logs:
  app_backup:
  app_themes:
  app_migrate:
5
komodo/general-purpose/it-tools/.env.sample
Normal file
@@ -0,0 +1,5 @@
# IT Tools - No environment variables required
# This is a simple web-based tool collection
# Access the web UI at http://localhost:1234
22
komodo/general-purpose/mealie/.env.sample
Normal file
@@ -0,0 +1,22 @@
# User and Group IDs
PUID=1000
PGID=100

# Timezone
TZ=America/Toronto

# Base URL - set to the base URL where Mealie will be accessed
BASE_URL=http://localhost:9925

# Database Configuration (PostgreSQL)
POSTGRES_PASSWORD=your_secure_postgres_password

# OIDC/OAuth Configuration (for Authentik or other OIDC providers)
OIDC_CONFIGURATION_URL=https://authentik.yourdomain.com/application/o/mealie/.well-known/openid-configuration
OIDC_CLIENT_ID=your_oidc_client_id
OIDC_CLIENT_SECRET=your_oidc_client_secret

# OpenAI API Key (optional, for AI features)
OPENAI_API_KEY=your_openai_api_key
@@ -22,7 +22,7 @@ services:
      # Database Settings
      DB_ENGINE: postgres
      POSTGRES_USER: mealie
      POSTGRES_PASSWORD: mealie
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_SERVER: postgres
      POSTGRES_PORT: 5432
      POSTGRES_DB: mealie
@@ -52,7 +52,7 @@ services:
    volumes:
      - mealie-pgdata:/var/lib/postgresql/data
    environment:
      POSTGRES_PASSWORD: mealie
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_USER: mealie
      PGUSER: mealie
    healthcheck:
18
komodo/general-purpose/open-webui/.env.sample
Normal file
@@ -0,0 +1,18 @@
# API Keys
OPEN_API_KEY=your_openai_api_key
ANTHROPIC_API_KEY=your_anthropic_api_key

# OAuth Configuration (for Authentik or other OIDC providers)
OPENID_PROVIDER_URL=https://authentik.yourdomain.com/application/o/open-webui/.well-known/openid-configuration
OAUTH_CLIENT_ID=your_oauth_client_id
OAUTH_CLIENT_SECRET=your_oauth_client_secret
OPENID_REDIRECT_URI=http://localhost:11674/auth/oidc/callback

# OAuth Settings (optional, defaults shown)
# ENABLE_OAUTH_SIGNUP=true
# OAUTH_MERGE_ACCOUNTS_BY_EMAIL=true
# OAUTH_PROVIDER_NAME=Authentik
# OAUTH_SCOPES=openid email profile
# ENABLE_OAUTH_GROUP_MANAGEMENT=true
@@ -1,4 +1,3 @@
version: '3'
services:
  openwebui:
    container_name: Openweb-UI
146
komodo/general-purpose/sparkyfitness/.env.sample
Normal file
@@ -0,0 +1,146 @@
# SparkyFitness Environment Variables
# Copy this file to .env in the root directory and fill in your own values before running 'docker-compose up'.

# --- PostgreSQL Database Settings ---
# These values should match the ones used by your PostgreSQL container.
# For local development (running Node.js directly), use 'localhost' or '127.0.0.1' if PostgreSQL is on your host.
SPARKY_FITNESS_DB_NAME=sparkyfitness_db
# SPARKY_FITNESS_DB_USER is the super user for DB initialization and migrations.
SPARKY_FITNESS_DB_USER=sparky
SPARKY_FITNESS_DB_PASSWORD=changeme_db_password
# Application database user with limited privileges. It can be changed any time after initialization.
SPARKY_FITNESS_APP_DB_USER=sparky_app
SPARKY_FITNESS_APP_DB_PASSWORD=password

# For Docker Compose deployments, SPARKY_FITNESS_DB_HOST will be the service name (e.g., 'sparkyfitness-db').
SPARKY_FITNESS_DB_HOST=sparkyfitness-db
#SPARKY_FITNESS_DB_PORT=5432 # Optional. Defaults to 5432 if not specified.

# --- Backend Server Settings ---
# The hostname or IP address of the backend server.
# For Docker Compose, this is typically the service name (e.g., 'sparkyfitness-server').
# For local development or other deployments, this might be 'localhost' or a specific IP.
SPARKY_FITNESS_SERVER_HOST=sparkyfitness-server
# The external port the server will be exposed on.
SPARKY_FITNESS_SERVER_PORT=3010

# The public URL of your frontend (e.g., https://fitness.example.com). This is crucial for CORS security.
# For local development, use http://localhost:8080. For production, use your domain with https.
SPARKY_FITNESS_FRONTEND_URL=http://localhost:8080

# Allow CORS requests from private network addresses (10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16, localhost, etc.)
# SECURITY WARNING: Only enable this if you are running on a private/self-hosted network.
# Do NOT enable on shared hosting or cloud environments where other users might access your network.
# Default: false (secure default - only the configured SPARKY_FITNESS_FRONTEND_URL is allowed)
#ALLOW_PRIVATE_NETWORK_CORS=false

# A comma-separated list of additional URLs that Better Auth should trust.
# This is useful when accessing the app from a specific local IP on your network.
# Example: SPARKY_FITNESS_EXTRA_TRUSTED_ORIGINS=http://192.168.1.175:8080,http://10.0.0.5:8080
# SPARKY_FITNESS_EXTRA_TRUSTED_ORIGINS=

# Logging level for the server (e.g., INFO, DEBUG, WARN, ERROR)
SPARKY_FITNESS_LOG_LEVEL=ERROR

# Node.js environment mode (e.g., development, production, test)
# Set to 'production' for deployment to ensure optimal performance and security.
NODE_ENV=production

# Server timezone. Use a TZ database name (e.g., 'America/New_York', 'Etc/UTC').
# This affects how dates/times are handled by the server if not explicitly managed in code.
TZ=Etc/UTC

# --- Security Settings ---
# A 64-character hex string for data encryption.
# You can generate a secure key with the following command:
# openssl rand -hex 32
# or
# node -e "console.log(require('crypto').randomBytes(32).toString('hex'))"
# Changing this will invalidate existing encrypted data. You will need to delete and add External Data sources again.
SPARKY_FITNESS_API_ENCRYPTION_KEY=changeme_replace_with_a_64_character_hex_string
# For Docker Swarm/Kubernetes secrets, you can use a file-based secret:
# SPARKY_FITNESS_API_ENCRYPTION_KEY_FILE=/run/secrets/sparkyfitness_api_key

BETTER_AUTH_SECRET=changeme_replace_with_a_strong_better_auth_secret
# For Docker Swarm/Kubernetes secrets, you can use a file-based secret:
# BETTER_AUTH_SECRET_FILE=/run/secrets/sparkyfitness_better_auth_secret

# --- Signup Settings ---
# Set to 'true' to disable new user registrations.
SPARKY_FITNESS_DISABLE_SIGNUP=false

# --- Admin Settings ---
# Set the email of a user to automatically grant admin privileges on server startup.
# This is useful for development or initial setup.
# Example: SPARKY_FITNESS_ADMIN_EMAIL=admin@example.com
# Optional. If not set, no admin user will be created automatically.
# SPARKY_FITNESS_ADMIN_EMAIL=

# --- Login Management Fail-Safe ---
# Set to 'true' to force email/password login to be enabled, overriding any in-app settings.
# This is a fail-safe to prevent being locked out if OIDC is misconfigured.
SPARKY_FITNESS_FORCE_EMAIL_LOGIN=true

# --- Email Settings (Optional) ---
# Configure these variables if you want to enable email notifications (e.g., for password resets).
# If not configured, email functionality will be disabled.
# SPARKY_FITNESS_EMAIL_HOST=smtp.example.com
# SPARKY_FITNESS_EMAIL_PORT=587
# SPARKY_FITNESS_EMAIL_SECURE=true # Use 'true' for TLS/SSL, 'false' for plain text
# SPARKY_FITNESS_EMAIL_USER=your_email@example.com
# SPARKY_FITNESS_EMAIL_PASS=your_email_password
# SPARKY_FITNESS_EMAIL_FROM=no-reply@example.com

# --- Volume Paths (Optional) ---
# These paths define where Docker volumes will store persistent data on your host.
# If not set, Docker will manage volumes automatically in its default location.
# DB_PATH=../postgresql # Path for PostgreSQL database data
# SERVER_BACKUP_PATH=./backup # Path for server backups
# SERVER_UPLOADS_PATH=./uploads # Path for profile pictures and exercise images

# --- API Key Rate Limiting (Optional) ---
# Override the default rate limit for API key authentication (used by automation tools like n8n).
# Defaults to 100 requests per 60-second window if not set.
#SPARKY_FITNESS_API_KEY_RATELIMIT_WINDOW_MS=60000
#SPARKY_FITNESS_API_KEY_RATELIMIT_MAX_REQUESTS=100

# --- Start of Garmin Integration Settings ---
# Below variables are needed only for Garmin integration. If you don't use Garmin integration, you can remove them in your .env file.

# The URL for the Garmin microservice.
# For Docker Compose, this would typically be the service name and port (e.g., 'http://sparkyfitness-garmin:8000').
# For local development, use 'http://localhost:8000' or the port you've configured.
GARMIN_MICROSERVICE_URL=http://sparkyfitness-garmin:8000

# This is used for Garmin Connect synchronization.
# If you are not using Garmin integration, you don't need this. Make sure this matches with GARMIN_MICROSERVICE_URL.
GARMIN_SERVICE_PORT=8000

# set to true for China region. Everything else should be false. Optional - defaults to false
GARMIN_SERVICE_IS_CN=false

# --- End of Garmin Integration Settings ---

#----- Developers Section -----
# Data source for external integrations (fitbit, garmin, withings).
# By default, these use live APIs. Set to 'local' to use mock data from the mock_data directory.

#SPARKY_FITNESS_FITBIT_DATA_SOURCE=local
#SPARKY_FITNESS_WITHINGS_DATA_SOURCE=local
#SPARKY_FITNESS_GARMIN_DATA_SOURCE=local
#SPARKY_FITNESS_POLAR_DATA_SOURCE=local
#SPARKY_FITNESS_HEVY_DATA_SOURCE=local

# Set to 'true' to capture live API responses into mock data JSON files. Defaults to false.
#SPARKY_FITNESS_SAVE_MOCK_DATA=false

#-----------------------------
85
komodo/general-purpose/sparkyfitness/compose.yaml
Normal file
@@ -0,0 +1,85 @@
services:
  sparkyfitness-db:
    image: postgres:15-alpine
    restart: always
    environment:
      POSTGRES_DB: ${SPARKY_FITNESS_DB_NAME:?Variable is required and must be set}
      POSTGRES_USER: ${SPARKY_FITNESS_DB_USER:?Variable is required and must be set}
      POSTGRES_PASSWORD: ${SPARKY_FITNESS_DB_PASSWORD:?Variable is required and must be set}
    volumes:
      - ${DB_PATH:-../postgresql}:/var/lib/postgresql/data
    networks:
      - sparkyfitness-network # Use the new named network

  sparkyfitness-server:
    image: codewithcj/sparkyfitness_server:latest # Use pre-built image
    environment:
      SPARKY_FITNESS_LOG_LEVEL: ${SPARKY_FITNESS_LOG_LEVEL}
      ALLOW_PRIVATE_NETWORK_CORS: ${ALLOW_PRIVATE_NETWORK_CORS:-false}
      SPARKY_FITNESS_EXTRA_TRUSTED_ORIGINS: ${SPARKY_FITNESS_EXTRA_TRUSTED_ORIGINS:-}
      SPARKY_FITNESS_DB_USER: ${SPARKY_FITNESS_DB_USER:-sparky}
      SPARKY_FITNESS_DB_HOST: ${SPARKY_FITNESS_DB_HOST:-sparkyfitness-db} # Use the service name 'sparkyfitness-db' for inter-container communication
      SPARKY_FITNESS_DB_NAME: ${SPARKY_FITNESS_DB_NAME}
      SPARKY_FITNESS_DB_PASSWORD: ${SPARKY_FITNESS_DB_PASSWORD:?Variable is required and must be set}
      SPARKY_FITNESS_APP_DB_USER: ${SPARKY_FITNESS_APP_DB_USER:-sparkyapp}
      SPARKY_FITNESS_APP_DB_PASSWORD: ${SPARKY_FITNESS_APP_DB_PASSWORD:?Variable is required and must be set}
      SPARKY_FITNESS_DB_PORT: ${SPARKY_FITNESS_DB_PORT:-5432}
      SPARKY_FITNESS_API_ENCRYPTION_KEY: ${SPARKY_FITNESS_API_ENCRYPTION_KEY:?Variable is required and must be set}
      # Uncomment the line below and comment the line above to use a file-based secret
      # SPARKY_FITNESS_API_ENCRYPTION_KEY_FILE: /run/secrets/sparkyfitness_api_key

      BETTER_AUTH_SECRET: ${BETTER_AUTH_SECRET:?Variable is required and must be set}
      # Uncomment the line below and comment the line above to use a file-based secret
      # JWT_SECRET_FILE: /run/secrets/sparkyfitness_jwt_secret
      SPARKY_FITNESS_FRONTEND_URL: ${SPARKY_FITNESS_FRONTEND_URL:-http://0.0.0.0:3004}
      SPARKY_FITNESS_DISABLE_SIGNUP: ${SPARKY_FITNESS_DISABLE_SIGNUP}
      SPARKY_FITNESS_ADMIN_EMAIL: ${SPARKY_FITNESS_ADMIN_EMAIL} # User with this email can access the admin panel
      SPARKY_FITNESS_EMAIL_HOST: ${SPARKY_FITNESS_EMAIL_HOST}
      SPARKY_FITNESS_EMAIL_PORT: ${SPARKY_FITNESS_EMAIL_PORT}
      SPARKY_FITNESS_EMAIL_SECURE: ${SPARKY_FITNESS_EMAIL_SECURE}
      SPARKY_FITNESS_EMAIL_USER: ${SPARKY_FITNESS_EMAIL_USER}
      SPARKY_FITNESS_EMAIL_PASS: ${SPARKY_FITNESS_EMAIL_PASS}
      SPARKY_FITNESS_EMAIL_FROM: ${SPARKY_FITNESS_EMAIL_FROM}
      GARMIN_MICROSERVICE_URL: http://sparkyfitness-garmin:8000 # Add Garmin microservice URL
    networks:
      - sparkyfitness-network # Use the new named network
    restart: always
    depends_on:
      - sparkyfitness-db # Backend depends on the database being available
    volumes:
      - ${SERVER_BACKUP_PATH:-./backup}:/app/SparkyFitnessServer/backup # Mount volume for backups
      - ${SERVER_UPLOADS_PATH:-./uploads}:/app/SparkyFitnessServer/uploads # Mount volume for profile pictures and exercise images

  sparkyfitness-frontend:
    image: codewithcj/sparkyfitness:latest # Use pre-built image
    ports:
      - "3004:80" # Map host port 3004 to container port 80 (Nginx)
    environment:
      SPARKY_FITNESS_FRONTEND_URL: ${SPARKY_FITNESS_FRONTEND_URL}
      SPARKY_FITNESS_SERVER_HOST: sparkyfitness-server # Internal Docker service name for the backend
      SPARKY_FITNESS_SERVER_PORT: 3010 # Port the backend server listens on
    networks:
      - sparkyfitness-network # Use the new named network
    restart: always
    depends_on:
      - sparkyfitness-server # Frontend depends on the server
      #- sparkyfitness-garmin # Frontend depends on Garmin microservice. Enable if you are using Garmin Connect features.

  # Garmin integration is still work in progress. Enable once table is ready.
  # sparkyfitness-garmin:
  #   image: codewithcj/sparkyfitness_garmin:latest
  #   container_name: sparkyfitness-garmin
  #   environment:
  #     GARMIN_MICROSERVICE_URL: http://sparkyfitness-garmin:${GARMIN_SERVICE_PORT}
  #     GARMIN_SERVICE_PORT: ${GARMIN_SERVICE_PORT}
  #     GARMIN_SERVICE_IS_CN: ${GARMIN_SERVICE_IS_CN} # set to true for China region. Everything else should be false. Optional - defaults to false
  #   networks:
  #     - sparkyfitness-network
  #   restart: unless-stopped
  #   depends_on:
  #     - sparkyfitness-db
  #     - sparkyfitness-server

networks:
  sparkyfitness-network:
    driver: bridge
99
komodo/mastodon/.env.sample
Normal file
@@ -0,0 +1,99 @@
# Service configuration
# ---------------------
LOCAL_DOMAIN=example.com
# Use 'true' since you have an external proxy (Pangolin/Nginx) handling TLS
# This tells Mastodon to generate https:// links
LOCAL_HTTPS=true
ALTERNATE_DOMAINS=localhost,127.0.0.1

# Trusted Proxy Configuration
# ---------------------------
# Allow Mastodon to trust headers (X-Forwarded-For, X-Forwarded-Proto) from your reverse proxy.
# We whitelist standard private ranges so the proxy's internal IP is trusted.
TRUSTED_PROXY_IP=10.0.0.0/8,172.16.0.0/12,192.168.0.0/16

# OIDC / Authentik Integration
# ----------------------------
OIDC_ENABLED=true
OIDC_DISPLAY_NAME=Authentik
OIDC_DISCOVERY=true
# Fill these in from Authentik:
OIDC_ISSUER=https://auth.example.com/application/o/mastodon/
OIDC_AUTH_ENDPOINT=https://auth.example.com/application/o/authorize/
OIDC_CLIENT_ID=<YOUR_CLIENT_ID>
OIDC_CLIENT_SECRET=<YOUR_CLIENT_SECRET>
OIDC_SCOPE=openid,profile,email
OIDC_UID_FIELD=preferred_username
OIDC_REDIRECT_URI=https://social.example.com/auth/auth/openid_connect/callback
# Automatically verify emails from Authentik
OIDC_SECURITY_ASSUME_EMAIL_IS_VERIFIED=true
# To force users to log in with Authentik only:
# OMNIAUTH_ONLY=true

# Database configuration
# ----------------------
DB_HOST=db
DB_PORT=5432
DB_NAME=mastodon_production
DB_USER=mastodon
DB_PASS=<DB_PASSWORD>
# DB_PASS is used by the Mastodon application to connect

# Postgres container configuration (must match above)
POSTGRES_USER=mastodon
POSTGRES_PASSWORD=<DB_PASSWORD>
POSTGRES_DB=mastodon_production

# Redis configuration
# -------------------
REDIS_HOST=redis
REDIS_PORT=6379
# REDIS_PASSWORD=
# If you set a Redis password, also update REDIS_URL below

# Mastodon secrets
# ----------------
# Use `docker-compose run --rm web bundle exec rake secret` to generate new keys if needed
# Generate new secrets for production!
SECRET_KEY_BASE=<GENERATED_SECRET>
OTP_SECRET=<GENERATED_SECRET>

# VAPID keys (for push notifications)
# Required. Generate with `docker-compose run --rm web bundle exec rake mastodon:webpush:generate_vapid_key`
VAPID_PRIVATE_KEY=<GENERATED_VAPID_PRIVATE_KEY>
VAPID_PUBLIC_KEY=<GENERATED_VAPID_PUBLIC_KEY>

# ActiveRecord Encryption (Rails 7+)
# ----------------------------------
# Required. Do not change these once data is encrypted in the DB.
# Generate these!
ACTIVE_RECORD_ENCRYPTION_PRIMARY_KEY=<GENERATED_KEY>
ACTIVE_RECORD_ENCRYPTION_DETERMINISTIC_KEY=<GENERATED_KEY>
ACTIVE_RECORD_ENCRYPTION_KEY_DERIVATION_SALT=<GENERATED_SALT>

# S3 / Object Storage (Optional)
# ------------------------------
# S3_ENABLED=true
# S3_BUCKET=
# AWS_ACCESS_KEY_ID=
# AWS_SECRET_ACCESS_KEY=
# S3_REGION=
# S3_PROTOCOL=https
# S3_HOSTNAME=

# SMTP / Email
# ------------
SMTP_SERVER=smtp.gmail.com
SMTP_PORT=587
SMTP_LOGIN=notifications@example.com
SMTP_PASSWORD=<SMTP_PASSWORD>
SMTP_FROM_ADDRESS=notifications@example.com
SMTP_AUTH_METHOD=plain
SMTP_OPENSSL_VERIFY_MODE=require
# SMTP_ENABLE_STARTTLS_AUTO=true

# Application defaults
# --------------------
RAILS_ENV=production
NODE_ENV=production
RAILS_SERVE_STATIC_FILES=true
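All of the <GENERATED_*> placeholders can be produced with the commands the comments above reference; gathered in one place as a sketch (exact invocations can differ between Mastodon releases):

```
$ docker compose run --rm web bundle exec rake secret    # run twice: SECRET_KEY_BASE, then OTP_SECRET
$ docker compose run --rm web bundle exec rake mastodon:webpush:generate_vapid_key
$ docker compose run --rm web bundle exec rails db:encryption:init   # ACTIVE_RECORD_ENCRYPTION_* keys
```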
12
komodo/mastodon/Pangolin.md
Normal file
@@ -0,0 +1,12 @@
# Pangolin reverse-proxy guidance (concise)

- Pangolin handles TLS and obtains certs for masto.pcenicni.social.
- Create two upstreams on Pangolin:
  1) mastodon_web -> <Mastodon host IP>:3000
  2) mastodon_stream -> <Mastodon host IP>:4000
- Site rules:
  - Default proxy target: mastodon_web
  - If header "Upgrade" equals "websocket" OR "Connection" contains "Upgrade", route to mastodon_stream.
- Ensure these headers are forwarded to the Mastodon host:
  Host, X-Forwarded-For, X-Forwarded-Proto=https, X-Forwarded-Host
- Increase timeouts on the streaming upstream so long-lived websocket connections don't time out.
- If your Mastodon host is firewalled, allow inbound connections from the Pangolin VPS IP to ports 3000 and 4000 only.
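A quick smoke test for the header-based routing, sketched under the assumption that a request carrying websocket upgrade headers should reach the streaming upstream (the hostname is a placeholder; the real site above is masto.pcenicni.social):

```
# Direct check: the streaming container answers on its health endpoint
curl -fsS http://<Mastodon host IP>:4000/api/v1/streaming/health

# Through Pangolin: send websocket upgrade headers so the rule matches.
# A 401 (missing access token) from the streaming server suggests routing works;
# a 404 suggests the request still went to mastodon_web.
curl -i -N \
  -H "Connection: Upgrade" \
  -H "Upgrade: websocket" \
  -H "Sec-WebSocket-Version: 13" \
  -H "Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==" \
  https://social.example.com/api/v1/streaming/
```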
94
komodo/mastodon/compose.yaml
Normal file
@@ -0,0 +1,94 @@
version: '3.8'

services:
  db:
    image: postgres:14-alpine
    restart: unless-stopped
    shm_size: 256mb
    networks:
      - internal_network
    healthcheck:
      test: ["CMD", "pg_isready", "-U", "mastodon", "-d", "mastodon_production"]
    volumes:
      - postgres_data:/var/lib/postgresql/data
    env_file:
      - .env

  redis:
    image: redis:7-alpine
    restart: unless-stopped
    networks:
      - internal_network
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
    volumes:
      - redis_data:/data
    env_file:
      - .env

  web:
    image: ghcr.io/mastodon/mastodon:latest
    restart: unless-stopped
    depends_on:
      db:
        condition: service_healthy
      redis:
        condition: service_healthy
    networks:
      - internal_network
      - external_network
    env_file:
      - .env
    volumes:
      - mastodon_system:/mastodon/public/system
    healthcheck:
      test: ["CMD", "wget", "-q", "--spider", "http://localhost:3000/health"]
    ports:
      - "3000:3000"
    extra_hosts:
      - "auth.pcenicni.dev:192.168.50.160"
    command: bash -lc "rm -f /mastodon/tmp/pids/server.pid; bundle exec rails db:prepare; bundle exec puma -C config/puma.rb"

  sidekiq:
    image: ghcr.io/mastodon/mastodon:latest
    restart: unless-stopped
    depends_on:
      db:
        condition: service_healthy
      redis:
        condition: service_healthy
    networks:
      - internal_network
      - external_network
    env_file:
      - .env
    volumes:
      - mastodon_system:/mastodon/public/system
    command: bash -lc "bundle exec sidekiq"

  streaming:
    image: ghcr.io/mastodon/mastodon-streaming:latest
    restart: unless-stopped
    depends_on:
      db:
        condition: service_healthy
      redis:
        condition: service_healthy
    networks:
      - internal_network
      - external_network
    env_file:
      - .env
    ports:
      - "4000:4000"
    command: node ./streaming

networks:
  internal_network:
    internal: true
  external_network:

volumes:
  postgres_data:
  redis_data:
  mastodon_system:
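Once the stack is up, a sketch of creating the first admin account with `tootctl` (the username and email are placeholders):

```
# Create an owner account and print a one-time password
docker compose run --rm web bin/tootctl accounts create \
  admin --email admin@example.com --confirmed --role Owner
```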
10
komodo/media-server/audio-bookshelf/.env.sample
Normal file
@@ -0,0 +1,10 @@
# AudioBookShelf - No environment variables required in compose file
# Data volumes are mounted directly in the compose.yaml
# Access the web UI at http://localhost:13378
#
# Note: Media paths are hardcoded in compose.yaml; adjust there if needed:
# - /mnt/media/books:/ebooks
# - /mnt/media/audiobooks:/audiobooks
# - /mnt/media/podcasts:/podcasts
16
komodo/media-server/audio-bookshelf/compose.yaml
Normal file
@@ -0,0 +1,16 @@
services:
  audiobookshelf:
    image: ghcr.io/advplyr/audiobookshelf:latest
    container_name: audiobookshelf
    ports:
      - 13378:80
    volumes:
      - /mnt/media/books:/ebooks
      - /mnt/media/audiobooks:/audiobooks
      - /mnt/media/podcasts:/podcasts
      - audiobookshelf:/config
      - audiobookshelf_metadata:/metadata
    restart: unless-stopped

volumes:
  audiobookshelf:
  audiobookshelf_metadata:
22
komodo/media-server/booklore/.env.sample
Normal file
@@ -0,0 +1,22 @@
# User and Group IDs for the application
APP_USER_ID=1000
APP_GROUP_ID=100

# User and Group IDs for the database
DB_USER_ID=1000
DB_GROUP_ID=100

# Timezone
TZ=UTC

# Database Configuration
# Note: the username/password embedded in DATABASE_URL must match DB_USER/DB_PASSWORD below.
DATABASE_URL=mysql://db_user:db_password@mariadb:3306/booklore
DB_USER=booklore
DB_PASSWORD=your_secure_db_password
MYSQL_ROOT_PASSWORD=your_secure_root_password
MYSQL_DATABASE=booklore

# Booklore Application Port
BOOKLORE_PORT=8090
47
komodo/media-server/booklore/compose.yaml
Normal file
@@ -0,0 +1,47 @@
services:
  booklore:
    image: booklore/booklore:latest
    # Alternative: Use GitHub Container Registry
    # image: ghcr.io/booklore-app/booklore:latest
    container_name: booklore
    environment:
      - USER_ID=${APP_USER_ID}
      - GROUP_ID=${APP_GROUP_ID}
      - TZ=${TZ}
      - DATABASE_URL=${DATABASE_URL}
      - DATABASE_USERNAME=${DB_USER}
      - DATABASE_PASSWORD=${DB_PASSWORD}
      - BOOKLORE_PORT=${BOOKLORE_PORT}
    depends_on:
      mariadb:
        condition: service_healthy
    ports:
      - "${BOOKLORE_PORT}:${BOOKLORE_PORT}"
    volumes:
      - booklore_data:/app/data
      - /mnt/media/books:/books
      - /mnt/media/bookdrop:/bookdrop
    restart: unless-stopped

  mariadb:
    image: lscr.io/linuxserver/mariadb:11.4.5
    container_name: mariadb
    environment:
      - PUID=${DB_USER_ID}
      - PGID=${DB_GROUP_ID}
      - TZ=${TZ}
      - MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}
      - MYSQL_DATABASE=${MYSQL_DATABASE}
      - MYSQL_USER=${DB_USER}
      - MYSQL_PASSWORD=${DB_PASSWORD}
    volumes:
      - mariadb_config:/config
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "mariadb-admin", "ping", "-h", "localhost"]
      interval: 5s
      timeout: 5s
      retries: 10

volumes:
  mariadb_config:
  booklore_data:
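A quick sanity check that the credentials in `.env` line up, sketched under the assumption that the `mariadb` container above is running and `.env` sits in the current directory:

```
# Load the .env values and confirm the booklore user can reach its database
set -a; . ./.env; set +a
docker exec mariadb mariadb -u "$DB_USER" -p"$DB_PASSWORD" "$MYSQL_DATABASE" -e 'SELECT 1;'
```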
5
komodo/media-server/calibre/.env.sample
Normal file
@@ -0,0 +1,5 @@
# Calibre - No environment variables in compose file
# If you need to add environment variables, add them here
# Access the web UI at the configured port
3
komodo/media-server/deprecated/calibre/.env.sample
Normal file
@@ -0,0 +1,3 @@
# Calibre - No environment variables in compose file
# If you need to add environment variables, add them here
# Access the web UI at the configured port
@@ -8,8 +8,10 @@ services:
       - PGID=${PGID}
       - TZ=Canada/Eastern
     volumes:
-      - ${CONFIG_PATH}/jellyfin:/config
+      - jellyfin:/config
       - ${DATA_PATH}/media/:/data/media
     ports:
       - 8096:8096
     restart: unless-stopped
+volumes:
+  jellyfin:
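This hunk switches Jellyfin's config from the `${CONFIG_PATH}/jellyfin` bind mount to a named volume, so existing settings won't carry over by themselves. A one-off copy, sketched (stop the stack first; Compose usually prefixes the volume with the project name, so check `docker volume ls` for the exact name):

```
# Copy the old bind-mounted config into the new named volume (adjust the
# volume name if Compose created it as <project>_jellyfin)
docker run --rm \
  -v "${CONFIG_PATH}/jellyfin:/from:ro" \
  -v jellyfin:/to \
  alpine sh -c 'cp -a /from/. /to/'
```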
13
komodo/monitor/peekaping/compose.yaml
Normal file
@@ -0,0 +1,13 @@
services:
  peekaping-bundle-sqlite:
    image: '0xfurai/peekaping-bundle-sqlite:latest'
    container_name: peekaping
    volumes:
      - 'peekaping-data:/app/data'
    environment:
      - DB_NAME=/app/data/peekaping.db
    ports:
      - 8383:8383
    restart: always

volumes:
  peekaping-data:
14
komodo/monitor/tracearr/.env.sample
Normal file
@@ -0,0 +1,14 @@
# Port - default is 3000
PORT=3000

# Timezone
TZ=UTC

# Log Level - options: debug, info, warn, error
LOG_LEVEL=info

# Optional: Override auto-generated secrets (usually not needed)
# JWT_SECRET=your_jwt_secret
# COOKIE_SECRET=your_cookie_secret
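If you do override the secrets, a sketch of generating strong random values (any high-entropy string works; `openssl` is just a convenient generator):

```
# Generate random hex strings suitable for JWT_SECRET / COOKIE_SECRET
openssl rand -hex 32
openssl rand -hex 32
```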
12
komodo/monitor/uptime/compose.yaml
Normal file
@@ -0,0 +1,12 @@
---
services:
  uptime-kuma:
    image: 'louislam/uptime-kuma:2'
    container_name: uptime-kuma
    volumes:
      - 'uptime-kuma:/app/data'
    ports:
      - '3001:3001'
    restart: always

volumes:
  uptime-kuma:
259
macos/Jellyfin-NFS.md
Normal file
@@ -0,0 +1,259 @@
# Jellyfin + macOS: Persistent NFS Mount (Fix for Libraries Randomly “Clearing”)

This README documents the working fix I applied when Jellyfin (on a Mac mini) periodically “lost” or cleared my Movies/TV libraries that live on a NAS mounted over NFS.

It includes the exact commands, files, and rationale so I can reproduce it later.

---

## Problem Summary

- Symptom: Every day or two, Jellyfin showed empty Movies/TV libraries.
- Media location: NFS share at `/Volumes/media` from NAS `192.168.50.105:/media`.
- Root cause: macOS was using autofs (`/- /etc/auto_nfs`). autofs can unmount after inactivity or brief network blips. When the mount disappears during a Jellyfin scan/file-watch, Jellyfin sees files as missing and removes them from its DB.

## Solution Summary

- Stop using autofs for this path.
- Create a persistent mount at boot using a LaunchDaemon and a small network‑aware mount script.
- The script:
  - Is idempotent: does nothing if already mounted.
  - Checks NAS reachability first.
  - Logs to `/var/log/mount_media.(out|err)`.
  - Optionally restarts Jellyfin (Homebrew service) if the mount comes back.

---

## Prerequisites / Assumptions

- macOS with admin (sudo) access.
- NFS server: `192.168.50.105` exporting `/media` (adjust as needed).
- Mount point: `/Volumes/media` (adjust as needed).
- Jellyfin installed (optional Homebrew service restart in script).

> Tip: If your NAS requires privileged source ports for NFSv4, `resvport` helps. The script falls back to `noresvport` if needed.

---

## Steps (copy/paste commands)

### 1) Disable autofs for this path and unmount any automounted share

```
# Backup and comment out the direct map for NFS
sudo cp /etc/auto_master /etc/auto_master.bak.$(date +%F_%H%M%S)
sudo sed -i.bak 's|^/- /etc/auto_nfs|#/- /etc/auto_nfs|' /etc/auto_master

# Reload automountd (will unmount /Volumes/media if it was automounted)
sudo automount -vc

# Ensure the mountpoint is not currently mounted (ignore errors if already unmounted)
sudo umount /Volumes/media 2>/dev/null || sudo umount -f /Volumes/media 2>/dev/null || true
```

> Note: If `chown`/`chmod` say “Operation not permitted,” the path is still mounted (or your NAS has root-squash). Unmount first.
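For context, the direct map being disabled lived in `/etc/auto_nfs`. A representative entry, reconstructed (the original file isn't preserved here; the options shown are illustrative):

```
# /etc/auto_nfs - direct map entry that autofs was using
/Volumes/media  -fstype=nfs,resvport,hard,nfsvers=4.0  nfs://192.168.50.105/media
```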
---

### 2) Create the network‑aware mount script

```
sudo mkdir -p /usr/local/sbin

sudo tee /usr/local/sbin/mount_media_nfs.sh > /dev/null <<'SH'
#!/bin/sh
set -eu
LOG="/var/log/mount_media.out"
ERR="/var/log/mount_media.err"
MOUNT="/Volumes/media"
SERVER="192.168.50.105:/media"
HOST="${SERVER%%:*}"

# Ensure mountpoint exists
[ -d "$MOUNT" ] || mkdir -p "$MOUNT"

# If already mounted as NFS, exit quietly
if mount -t nfs | awk '{print $3}' | grep -qx "$MOUNT"; then
  echo "$(date) already mounted: $MOUNT" >>"$LOG"
  exit 0
fi

# Preflight: only try to mount when NFS port is reachable
if ! /usr/bin/nc -G 2 -z "$HOST" 2049 >/dev/null 2>&1; then
  echo "$(date) NAS not reachable on 2049, skipping mount" >>"$LOG"
  exit 0
fi

echo "$(date) mounting $SERVER -> $MOUNT" >>"$LOG"
/sbin/mount -t nfs -o resvport,hard,nfsvers=4.0 "$SERVER" "$MOUNT" >>"$LOG" 2>>"$ERR" || \
  /sbin/mount -t nfs -o noresvport,hard,nfsvers=4.0 "$SERVER" "$MOUNT" >>"$LOG" 2>>"$ERR"

# Verify mount succeeded via mount(8)
if mount -t nfs | awk '{print $3}' | grep -qx "$MOUNT"; then
  echo "$(date) mount OK: $MOUNT" >>"$LOG"
  # Optional: restart Jellyfin if installed via Homebrew
  if command -v brew >/dev/null 2>&1 && brew services list | grep -q '^jellyfin\b'; then
    echo "$(date) restarting Jellyfin (brew services)" >>"$LOG"
    brew services restart jellyfin >>"$LOG" 2>>"$ERR" || true
  fi
else
  echo "$(date) mount FAILED" >>"$ERR"
  exit 1
fi
SH

sudo chmod 755 /usr/local/sbin/mount_media_nfs.sh
```

---

### 3) Create the LaunchDaemon (mount at boot, re-check periodically, network‑aware)

```
sudo tee /Library/LaunchDaemons/com.local.mountmedia.plist > /dev/null <<'PLIST'
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>Label</key>
  <string>com.local.mountmedia</string>
  <key>ProgramArguments</key>
  <array>
    <string>/usr/local/sbin/mount_media_nfs.sh</string>
  </array>
  <key>RunAtLoad</key>
  <true/>
  <key>StartInterval</key>
  <integer>300</integer>
  <key>KeepAlive</key>
  <dict>
    <key>NetworkState</key>
    <true/>
  </dict>
  <key>StandardOutPath</key>
  <string>/var/log/mount_media.out</string>
  <key>StandardErrorPath</key>
  <string>/var/log/mount_media.err</string>
</dict>
</plist>
PLIST

sudo chown root:wheel /Library/LaunchDaemons/com.local.mountmedia.plist
sudo chmod 644 /Library/LaunchDaemons/com.local.mountmedia.plist
sudo plutil -lint /Library/LaunchDaemons/com.local.mountmedia.plist

sudo launchctl bootout system /Library/LaunchDaemons/com.local.mountmedia.plist 2>/dev/null || true
sudo launchctl bootstrap system /Library/LaunchDaemons/com.local.mountmedia.plist
sudo launchctl enable system/com.local.mountmedia
sudo launchctl kickstart -k system/com.local.mountmedia
```

---

### 4) Run once and verify

```
# Run once now (idempotent; logs "already mounted" if present)
sudo /usr/local/sbin/mount_media_nfs.sh

# LaunchDaemon status
sudo launchctl print system/com.local.mountmedia | egrep 'state|last exit|PID' || true

# Mount status (should NOT say "automounted")
mount | grep " on /Volumes/media "

# NFS mount parameters
nfsstat -m | sed -n '/\/Volumes\/media/,+12p'

# Script logs
tail -n 100 /var/log/mount_media.out /var/log/mount_media.err 2>/dev/null || true
```

---

### 5) Jellyfin settings

- Temporarily disable “Enable real-time monitoring” for libraries under `/Volumes/media` until you confirm the mount stays stable.
- Then run “Scan Library Files” to repopulate anything previously removed.

---

### 6) Reboot test (recommended)

```
sudo shutdown -r now
```

After reboot:

```
mount | grep " on /Volumes/media " || echo "Not mounted yet"
sudo launchctl print system/com.local.mountmedia | egrep 'state|last exit' || true
tail -n 100 /var/log/mount_media.out /var/log/mount_media.err 2>/dev/null || true
```

---

## Rationale for Key Choices

- Persistent mount (LaunchDaemon) instead of autofs:
  - autofs can unmount after inactivity; Jellyfin then removes items it thinks are gone.
  - LaunchDaemon ensures the mount is present before scans and remains mounted.
- NFS options:
  - `hard`: Blocks I/O until the server responds, avoiding spurious “file missing” errors.
  - `nfsvers=4.0`: Matches typical NAS defaults and the client’s chosen version.
  - `resvport` then fallback `noresvport`: Some servers require privileged ports; the script tries both.
- Network preflight:
  - Check TCP/2049 reachability to avoid “Network is unreachable” failures (exit code 51) at boot or during link flaps.
- Logging:
  - `/var/log/mount_media.out` and `.err` make it easy to correlate with Jellyfin logs.

---

## Troubleshooting

- “Operation not permitted” when `chown`/`chmod`:
  - The path is mounted over NFS, and root-squash likely prevents ownership changes from the client. Unmount first or change ownership on the NAS.
- LaunchDaemon errors:
  - Validate plist: `sudo plutil -lint /Library/LaunchDaemons/com.local.mountmedia.plist`
  - Service state: `sudo launchctl print system/com.local.mountmedia`
- Mount health:
  - `nfsstat -m` should show vers=4.0, hard, resvport/noresvport.
- Network/power:
  - Prevent system sleep that drops the NIC; enable “Wake for network access.”

---

## Optional: If you must keep autofs

Increase the autofs timeout so it doesn’t unmount on brief inactivity (less ideal than the LaunchDaemon approach):

```
sudo cp /etc/auto_master /etc/auto_master.bak.$(date +%F_%H%M%S)
sudo sed -i.bak -E 's|^/-[[:space:]]+/etc/auto_nfs$|/- -timeout=604800 /etc/auto_nfs|' /etc/auto_master
sudo automount -vc
```

---

## Reverting

To revert to autofs:

```
# Stop and remove LaunchDaemon
sudo launchctl bootout system /Library/LaunchDaemons/com.local.mountmedia.plist
sudo rm -f /Library/LaunchDaemons/com.local.mountmedia.plist

# Restore /etc/auto_master (uncomment direct map) and reload
sudo sed -i.bak 's|^#/- /etc/auto_nfs|/- /etc/auto_nfs|' /etc/auto_master
sudo automount -vc
```

---

## Notes

- Change permissions/ownership on the NFS export from the NAS, not the macOS client (root-squash).
- `showmount` may fail against NFSv4-only servers; it’s not needed here.
- Adjust `SERVER`, `MOUNT`, and `StartInterval` to suit your environment.
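One small addition worth considering: the script's logs grow without bound. A sketch of rotating them with macOS's built-in newsyslog (the path `/etc/newsyslog.d/mount_media.conf` and the size/count values are my suggestion, not part of the original setup):

```
sudo tee /etc/newsyslog.d/mount_media.conf > /dev/null <<'EOF'
# logfilename              [owner:group]  mode  count  size(KB)  when  flags
/var/log/mount_media.out   root:wheel     644   5      1024      *     J
/var/log/mount_media.err   root:wheel     644   5      1024      *     J
EOF
```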
316
macos/Jellyfin-SMB.md
Normal file
@@ -0,0 +1,316 @@
# Jellyfin + macOS: Persistent SMB Mount (Fix for Libraries Randomly “Clearing”)

This README documents the working fix I applied when Jellyfin (on a Mac mini) periodically “lost” or cleared my Movies/TV libraries that live on a NAS mounted at `/Volumes/media`. This variant of the fix mounts the share over SMB instead of NFS.

It includes the exact commands, files, and rationale so I can reproduce it later.

---

## Problem Summary

- Symptom: Every day or two, Jellyfin showed empty Movies/TV libraries.
- Media location: NAS share at `/Volumes/media`; this variant mounts it over SMB (see `HOST`/`SHARE` in the script below).
- Root cause: macOS was using autofs (`/- /etc/auto_nfs`). autofs can unmount after inactivity or brief network blips. When the mount disappears during a Jellyfin scan/file-watch, Jellyfin sees files as missing and removes them from its DB.

## Solution Summary

- Stop using autofs for this path.
- Create a persistent mount at boot using a LaunchDaemon and a small network‑aware mount script.
- The script:
  - Is idempotent: does nothing if already mounted.
  - Checks NAS reachability first.
  - Logs to `/var/log/mount_media.(out|err)`.
  - Optionally restarts Jellyfin (Homebrew service) if the mount comes back.

---

## Prerequisites / Assumptions

- macOS with admin (sudo) access.
- SMB server: `nas.example.local` sharing `media` (adjust as needed).
- Mount point: `/Volumes/media` (adjust as needed).
- Jellyfin installed (optional Homebrew service restart in script).

> Tip: The script tries an authenticated mount first (password from the environment or the Keychain), then falls back to a guest/anonymous session. Never put the password in the mount URL; it would leak into logs and the process list.

---
## Steps (copy/paste commands)

### 1) Disable autofs for this path and unmount any automounted share

```
# Backup and comment out the direct map for NFS
sudo cp /etc/auto_master /etc/auto_master.bak.$(date +%F_%H%M%S)
sudo sed -i.bak 's|^/- /etc/auto_nfs|#/- /etc/auto_nfs|' /etc/auto_master

# Reload automountd (will unmount /Volumes/media if it was automounted)
sudo automount -vc

# Ensure the mountpoint is not currently mounted (ignore errors if already unmounted)
sudo umount /Volumes/media 2>/dev/null || sudo umount -f /Volumes/media 2>/dev/null || true
```

> Note: If `chown`/`chmod` say “Operation not permitted,” the path is still mounted (or your NAS has root-squash). Unmount first.

---
### 2) Create the network‑aware mount script

```
sudo mkdir -p /usr/local/sbin

sudo tee /usr/local/sbin/mount_media_smb.sh > /dev/null <<'SH'
#!/bin/sh
set -eu

LOG="/var/log/mount_media.out"
ERR="/var/log/mount_media.err"
MOUNT="/Volumes/media"

# SMB server settings — use domain name (FQDN)
HOST="nas.example.local"   # <- change to your domain
SHARE="media"              # <- change share name if needed

# Optional auth:
# - If SMB_USER is set, the script will try an authenticated mount.
# - Supply SMB_PASS (environment) OR set SMB_KEYCHAIN_ITEM to fetch the password from the Keychain.
SMB_USER="${SMB_USER:-}"
SMB_PASS="${SMB_PASS:-}"
SMB_KEYCHAIN_ITEM="${SMB_KEYCHAIN_ITEM:-}"

# Ensure mountpoint exists
[ -d "$MOUNT" ] || mkdir -p "$MOUNT"

# If already mounted on the mountpoint, exit quietly
if mount | awk '{print $3}' | grep -qx "$MOUNT"; then
  echo "$(date) already mounted: $MOUNT" >>"$LOG"
  exit 0
fi

# Preflight: only try to mount when an SMB port is reachable (try 445, then 139)
if ! ( /usr/bin/nc -G 2 -z "$HOST" 445 >/dev/null 2>&1 || /usr/bin/nc -G 2 -z "$HOST" 139 >/dev/null 2>&1 ); then
  echo "$(date) NAS not reachable on SMB ports (445/139), skipping mount" >>"$LOG"
  exit 0
fi

# Helpful server listing for debugging (doesn't include credentials)
echo "$(date) smbutil listing for debugging" >>"$LOG" 2>>"$ERR"
smbutil view "//$HOST" >>"$LOG" 2>>"$ERR" || true
smbutil view "//guest@$HOST" >>"$LOG" 2>>"$ERR" || true

# Helper: verify the mount and restart Jellyfin if needed
verify_and_exit() {
  if mount | awk '{print $3}' | grep -qx "$MOUNT"; then
    echo "$(date) mount OK: $MOUNT" >>"$LOG"
    if command -v brew >/dev/null 2>&1 && brew services list | grep -q '^jellyfin\b'; then
      echo "$(date) restarting Jellyfin (brew services)" >>"$LOG"
      brew services restart jellyfin >>"$LOG" 2>>"$ERR" || true
    fi
    exit 0
  fi
}

# Try authenticated mount if SMB_USER provided
if [ -n "$SMB_USER" ]; then
  # Retrieve password from Keychain if requested and SMB_PASS not set
  if [ -z "$SMB_PASS" ] && [ -n "$SMB_KEYCHAIN_ITEM" ]; then
    # Read password from the Keychain (service name = SMB_KEYCHAIN_ITEM, account = SMB_USER)
    # The -w flag prints only the password
    SMB_PASS="$(security find-generic-password -s "$SMB_KEYCHAIN_ITEM" -a "$SMB_USER" -w 2>/dev/null || true)"
  fi

  if [ -n "$SMB_PASS" ]; then
    # Pass the password via stdin to avoid exposing it in the process list
    echo "$(date) attempting authenticated mount as user '$SMB_USER' -> $MOUNT" >>"$LOG"
    # Do NOT include the password in the URL or logs.
    MOUNT_URL="//${SMB_USER}@${HOST}/${SHARE}"
    # Send password followed by newline to mount_smbfs, which reads it from stdin
    if printf '%s\n' "$SMB_PASS" | /sbin/mount_smbfs "$MOUNT_URL" "$MOUNT" >>"$LOG" 2>>"$ERR"; then
      verify_and_exit
    else
      echo "$(date) authenticated mount attempt FAILED" >>"$ERR"
      # Fall through to guest attempts
    fi
  else
    # No password available for authenticated mount
    echo "$(date) SMB_USER set but no SMB_PASS or Keychain entry found -> will try guest" >>"$LOG"
  fi
fi

# If we reach here, try guest access (null/anonymous session)
echo "$(date) trying guest/null session (mount_smbfs -N) -> $MOUNT" >>"$LOG"
if /sbin/mount_smbfs -N "//$HOST/$SHARE" "$MOUNT" >>"$LOG" 2>>"$ERR"; then
  verify_and_exit
fi

echo "$(date) trying explicit guest user (guest@$HOST) -> $MOUNT" >>"$LOG"
if /sbin/mount_smbfs "//guest@$HOST/$SHARE" "$MOUNT" >>"$LOG" 2>>"$ERR"; then
  verify_and_exit
fi

# If we reached here, all attempts failed
echo "$(date) ALL SMB mount attempts FAILED" >>"$ERR"
echo "------ smbutil status ------" >>"$ERR"
smbutil statshares -a >>"$ERR" 2>&1 || true
echo "------ mount table ------" >>"$ERR"
mount >>"$ERR" 2>&1 || true

exit 1
SH

sudo chmod 755 /usr/local/sbin/mount_media_smb.sh
```
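To use the Keychain path, the item has to exist where root (the LaunchDaemon context) can read it, i.e. the System keychain. A sketch of creating it (the service name `smb-nas` is a placeholder; it must match `SMB_KEYCHAIN_ITEM`):

```
# Store the SMB password in the System keychain so the root-run script can read it
sudo security add-generic-password \
  -s smb-nas \
  -a "<SMB_USER>" \
  -w '<password>' \
  /Library/Keychains/System.keychain
```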

---

### 3) Create the LaunchDaemon (mount at boot, re-check periodically, network‑aware)

```
sudo tee /Library/LaunchDaemons/com.local.mountmedia.plist > /dev/null <<'PLIST'
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>Label</key>
  <string>com.local.mountmedia</string>
  <key>ProgramArguments</key>
  <array>
    <string>/usr/local/sbin/mount_media_smb.sh</string>
  </array>
  <key>RunAtLoad</key>
  <true/>
  <key>StartInterval</key>
  <integer>300</integer>
  <key>KeepAlive</key>
  <dict>
    <key>NetworkState</key>
    <true/>
  </dict>
  <key>StandardOutPath</key>
  <string>/var/log/mount_media.out</string>
  <key>StandardErrorPath</key>
  <string>/var/log/mount_media.err</string>
</dict>
</plist>
PLIST

sudo chown root:wheel /Library/LaunchDaemons/com.local.mountmedia.plist
sudo chmod 644 /Library/LaunchDaemons/com.local.mountmedia.plist
sudo plutil -lint /Library/LaunchDaemons/com.local.mountmedia.plist

sudo launchctl bootout system /Library/LaunchDaemons/com.local.mountmedia.plist 2>/dev/null || true
sudo launchctl bootstrap system /Library/LaunchDaemons/com.local.mountmedia.plist
sudo launchctl enable system/com.local.mountmedia
sudo launchctl kickstart -k system/com.local.mountmedia
```
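One gap worth noting: launchd does not pass your shell's `SMB_USER`/`SMB_KEYCHAIN_ITEM` to the script. A sketch of wiring them in via the plist's standard `EnvironmentVariables` dict (values are placeholders); add this inside the top-level `<dict>`:

```
<key>EnvironmentVariables</key>
<dict>
  <key>SMB_USER</key>
  <string>mediauser</string>
  <key>SMB_KEYCHAIN_ITEM</key>
  <string>smb-nas</string>
</dict>
```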

---

### 4) Run once and verify

```
# Run once now (idempotent; logs "already mounted" if present)
sudo /usr/local/sbin/mount_media_smb.sh

# LaunchDaemon status
sudo launchctl print system/com.local.mountmedia | egrep 'state|last exit|PID' || true

# Mount status (should NOT say "automounted")
mount | grep " on /Volumes/media "

# SMB share status for the mountpoint
smbutil statshares -m /Volumes/media

# Script logs
tail -n 100 /var/log/mount_media.out /var/log/mount_media.err 2>/dev/null || true
```

---

### 5) Jellyfin settings

- Temporarily disable “Enable real-time monitoring” for libraries under `/Volumes/media` until you confirm the mount stays stable.
- Then run “Scan Library Files” to repopulate anything previously removed.

---

### 6) Reboot test (recommended)

```
sudo shutdown -r now
```

After reboot:

```
mount | grep " on /Volumes/media " || echo "Not mounted yet"
sudo launchctl print system/com.local.mountmedia | egrep 'state|last exit' || true
tail -n 100 /var/log/mount_media.out /var/log/mount_media.err 2>/dev/null || true
```

---
## Rationale for Key Choices

- Persistent mount (LaunchDaemon) instead of autofs:
  - autofs can unmount after inactivity; Jellyfin then removes items it thinks are gone.
  - LaunchDaemon ensures the mount is present before scans and remains mounted.
- SMB mount strategy:
  - Authenticated mount first, then guest/null-session fallbacks, so one script covers both secured and open shares.
  - The password is read from the environment or the Keychain and fed to `mount_smbfs` via stdin, never embedded in the mount URL or logged.
- Network preflight:
  - Check TCP 445 (then 139) reachability to avoid mount failures at boot or during link flaps.
- Logging:
  - `/var/log/mount_media.out` and `.err` make it easy to correlate with Jellyfin logs.

---
## Troubleshooting

- “Operation not permitted” when `chown`/`chmod`:
  - The path is mounted from the NAS, and server-side permission mapping likely prevents ownership changes from the client. Unmount first or change ownership on the NAS.
- LaunchDaemon errors:
  - Validate plist: `sudo plutil -lint /Library/LaunchDaemons/com.local.mountmedia.plist`
  - Service state: `sudo launchctl print system/com.local.mountmedia`
- Mount health:
  - `smbutil statshares -m /Volumes/media` should show the share attached and the negotiated SMB version.
- Network/power:
  - Prevent system sleep that drops the NIC; enable “Wake for network access.”

---
## Optional: If you must keep autofs

Increase the autofs timeout so it doesn’t unmount on brief inactivity (less ideal than the LaunchDaemon approach):

```
sudo cp /etc/auto_master /etc/auto_master.bak.$(date +%F_%H%M%S)
sudo sed -i.bak -E 's|^/-[[:space:]]+/etc/auto_nfs$|/- -timeout=604800 /etc/auto_nfs|' /etc/auto_master
sudo automount -vc
```

---

## Reverting

To revert to autofs:

```
# Stop and remove LaunchDaemon
sudo launchctl bootout system /Library/LaunchDaemons/com.local.mountmedia.plist
sudo rm -f /Library/LaunchDaemons/com.local.mountmedia.plist

# Restore /etc/auto_master (uncomment direct map) and reload
sudo sed -i.bak 's|^#/- /etc/auto_nfs|/- /etc/auto_nfs|' /etc/auto_master
sudo automount -vc
```

---
## Notes

- Change permissions/ownership on the share from the NAS, not the macOS client.
- `smbutil view "//$HOST"` lists the server’s shares and is useful for debugging (the script already logs it).
- Adjust `HOST`, `SHARE`, `MOUNT`, and `StartInterval` to suit your environment.