Add Docker Compose configuration and environment sample for SparkyFitness

This commit is contained in:
Nikholas Pcenicni
2026-02-15 22:02:53 -05:00
parent 2eb458a169
commit 323ccd5a65
4 changed files with 547 additions and 9 deletions


@@ -1,9 +0,0 @@
services:
  ossm-configurator:
    image: ghcr.io/munchdev-oss/ossm-configurator:latest
    container_name: ossm-configurator
    ports:
      - "2121:80"
    restart: unless-stopped
    environment:
      - NODE_ENV=production


@@ -0,0 +1,146 @@
# SparkyFitness Environment Variables
# Copy this file to .env in the root directory and fill in your own values before running 'docker-compose up'.
# --- PostgreSQL Database Settings ---
# These values should match the ones used by your PostgreSQL container.
# For local development (running Node.js directly), use 'localhost' or '127.0.0.1' if PostgreSQL is on your host.
SPARKY_FITNESS_DB_NAME=sparkyfitness_db
# SPARKY_FITNESS_DB_USER is the superuser used for DB initialization and migrations.
SPARKY_FITNESS_DB_USER=sparky
SPARKY_FITNESS_DB_PASSWORD=changeme_db_password
# Application database user with limited privileges. It can be changed at any time after initialization.
SPARKY_FITNESS_APP_DB_USER=sparky_app
SPARKY_FITNESS_APP_DB_PASSWORD=password
# For Docker Compose deployments, SPARKY_FITNESS_DB_HOST will be the service name (e.g., 'sparkyfitness-db').
SPARKY_FITNESS_DB_HOST=sparkyfitness-db
#SPARKY_FITNESS_DB_PORT=5432 # Optional. Defaults to 5432 if not specified.
# --- Backend Server Settings ---
# The hostname or IP address of the backend server.
# For Docker Compose, this is typically the service name (e.g., 'sparkyfitness-server').
# For local development or other deployments, this might be 'localhost' or a specific IP.
SPARKY_FITNESS_SERVER_HOST=sparkyfitness-server
# The external port the server will be exposed on.
SPARKY_FITNESS_SERVER_PORT=3010
# The public URL of your frontend (e.g., https://fitness.example.com). This is crucial for CORS security.
# For local development, use http://localhost:8080. For production, use your domain with https.
SPARKY_FITNESS_FRONTEND_URL=http://localhost:8080
# Allow CORS requests from private network addresses (10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16, localhost, etc.)
# SECURITY WARNING: Only enable this if you are running on a private/self-hosted network.
# Do NOT enable on shared hosting or cloud environments where other users might access your network.
# Default: false (secure default - only the configured SPARKY_FITNESS_FRONTEND_URL is allowed)
#ALLOW_PRIVATE_NETWORK_CORS=false
# A comma-separated list of additional URLs that Better Auth should trust.
# This is useful when accessing the app from a specific local IP on your network.
# Example: SPARKY_FITNESS_EXTRA_TRUSTED_ORIGINS=http://192.168.1.175:8080,http://10.0.0.5:8080
# SPARKY_FITNESS_EXTRA_TRUSTED_ORIGINS=
# Logging level for the server (e.g., INFO, DEBUG, WARN, ERROR)
SPARKY_FITNESS_LOG_LEVEL=ERROR
# Node.js environment mode (e.g., development, production, test)
# Set to 'production' for deployment to ensure optimal performance and security.
NODE_ENV=production
# Server timezone. Use a TZ database name (e.g., 'America/New_York', 'Etc/UTC').
# This affects how dates/times are handled by the server if not explicitly managed in code.
TZ=Etc/UTC
# --- Security Settings ---
# A 64-character hex string for data encryption.
# You can generate a secure key with the following command:
# openssl rand -hex 32
# or
# node -e "console.log(require('crypto').randomBytes(32).toString('hex'))"
# Changing this will invalidate existing encrypted data. You will need to delete and add External Data sources again.
SPARKY_FITNESS_API_ENCRYPTION_KEY=changeme_replace_with_a_64_character_hex_string
# For Docker Swarm/Kubernetes secrets, you can use a file-based secret:
# SPARKY_FITNESS_API_ENCRYPTION_KEY_FILE=/run/secrets/sparkyfitness_api_key
BETTER_AUTH_SECRET=changeme_replace_with_a_strong_better_auth_secret
# For Docker Swarm/Kubernetes secrets, you can use a file-based secret:
# BETTER_AUTH_SECRET_FILE=/run/secrets/sparkyfitness_better_auth_secret
# --- Signup Settings ---
# Set to 'true' to disable new user registrations.
SPARKY_FITNESS_DISABLE_SIGNUP=false
# --- Admin Settings ---
# Set the email of a user to automatically grant admin privileges on server startup.
# This is useful for development or initial setup.
# Example: SPARKY_FITNESS_ADMIN_EMAIL=admin@example.com
# Optional. If not set, no admin user will be created automatically.
# SPARKY_FITNESS_ADMIN_EMAIL=
# --- Login Management Fail-Safe ---
# Set to 'true' to force email/password login to be enabled, overriding any in-app settings.
# This is a fail-safe to prevent being locked out if OIDC is misconfigured.
SPARKY_FITNESS_FORCE_EMAIL_LOGIN=true
# --- Email Settings (Optional) ---
# Configure these variables if you want to enable email notifications (e.g., for password resets).
# If not configured, email functionality will be disabled.
# SPARKY_FITNESS_EMAIL_HOST=smtp.example.com
# SPARKY_FITNESS_EMAIL_PORT=587
# SPARKY_FITNESS_EMAIL_SECURE=true # Use 'true' for TLS/SSL, 'false' for plain text
# SPARKY_FITNESS_EMAIL_USER=your_email@example.com
# SPARKY_FITNESS_EMAIL_PASS=your_email_password
# SPARKY_FITNESS_EMAIL_FROM=no-reply@example.com
# --- Volume Paths (Optional) ---
# These paths define where Docker volumes will store persistent data on your host.
# If not set, Docker will manage volumes automatically in its default location.
# DB_PATH=../postgresql # Path for PostgreSQL database data
# SERVER_BACKUP_PATH=./backup # Path for server backups
# SERVER_UPLOADS_PATH=./uploads # Path for profile pictures and exercise images
# --- API Key Rate Limiting (Optional) ---
# Override the default rate limit for API key authentication (used by automation tools like n8n).
# Defaults to 100 requests per 60-second window if not set.
#SPARKY_FITNESS_API_KEY_RATELIMIT_WINDOW_MS=60000
#SPARKY_FITNESS_API_KEY_RATELIMIT_MAX_REQUESTS=100
# --- Start of Garmin Integration Settings ---
# The variables below are needed only for Garmin integration. If you don't use Garmin integration, you can remove them from your .env file.
# The URL for the Garmin microservice.
# For Docker Compose, this would typically be the service name and port (e.g., 'http://sparkyfitness-garmin:8000').
# For local development, use 'http://localhost:8000' or the port you've configured.
GARMIN_MICROSERVICE_URL=http://sparkyfitness-garmin:8000
# This is used for Garmin Connect synchronization.
# If you are not using Garmin integration, you don't need this. Make sure this matches with GARMIN_MICROSERVICE_URL.
GARMIN_SERVICE_PORT=8000
# Set to 'true' for the China region; everything else should be 'false'. Optional, defaults to false
GARMIN_SERVICE_IS_CN=false
# --- End of Garmin Integration Settings ---
#----- Developers Section -----
# Data source for external integrations (fitbit, garmin, withings).
# By default, these use live APIs. Set to 'local' to use mock data from the mock_data directory.
#SPARKY_FITNESS_FITBIT_DATA_SOURCE=local
#SPARKY_FITNESS_WITHINGS_DATA_SOURCE=local
#SPARKY_FITNESS_GARMIN_DATA_SOURCE=local
#SPARKY_FITNESS_POLAR_DATA_SOURCE=local
#SPARKY_FITNESS_HEVY_DATA_SOURCE=local
# Set to 'true' to capture live API responses into mock data JSON files. Defaults to false.
#SPARKY_FITNESS_SAVE_MOCK_DATA=false
#-----------------------------
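The two `changeme` placeholders above must be replaced before first start. A sketch of generating them (assumes `openssl` is on PATH, as the comments in this file suggest; a hex string also satisfies BETTER_AUTH_SECRET):

```shell
# Generate the two required secrets; each is a 64-character hex string.
API_KEY="$(openssl rand -hex 32)"
AUTH_SECRET="$(openssl rand -hex 32)"
printf 'SPARKY_FITNESS_API_ENCRYPTION_KEY=%s\n' "$API_KEY"
printf 'BETTER_AUTH_SECRET=%s\n' "$AUTH_SECRET"
```

Append the two printed lines to your `.env` (e.g. redirect with `>> .env`). Remember that rotating SPARKY_FITNESS_API_ENCRYPTION_KEY invalidates existing encrypted data.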


@@ -0,0 +1,85 @@
services:
  sparkyfitness-db:
    image: postgres:15-alpine
    restart: always
    environment:
      POSTGRES_DB: ${SPARKY_FITNESS_DB_NAME:?Variable is required and must be set}
      POSTGRES_USER: ${SPARKY_FITNESS_DB_USER:?Variable is required and must be set}
      POSTGRES_PASSWORD: ${SPARKY_FITNESS_DB_PASSWORD:?Variable is required and must be set}
    volumes:
      - ${DB_PATH:-../postgresql}:/var/lib/postgresql/data
    networks:
      - sparkyfitness-network # Use the new named network

  sparkyfitness-server:
    image: codewithcj/sparkyfitness_server:latest # Use pre-built image
    environment:
      SPARKY_FITNESS_LOG_LEVEL: ${SPARKY_FITNESS_LOG_LEVEL}
      ALLOW_PRIVATE_NETWORK_CORS: ${ALLOW_PRIVATE_NETWORK_CORS:-false}
      SPARKY_FITNESS_EXTRA_TRUSTED_ORIGINS: ${SPARKY_FITNESS_EXTRA_TRUSTED_ORIGINS:-}
      SPARKY_FITNESS_DB_USER: ${SPARKY_FITNESS_DB_USER:-sparky}
      SPARKY_FITNESS_DB_HOST: ${SPARKY_FITNESS_DB_HOST:-sparkyfitness-db} # Use the service name 'sparkyfitness-db' for inter-container communication
      SPARKY_FITNESS_DB_NAME: ${SPARKY_FITNESS_DB_NAME}
      SPARKY_FITNESS_DB_PASSWORD: ${SPARKY_FITNESS_DB_PASSWORD:?Variable is required and must be set}
      SPARKY_FITNESS_APP_DB_USER: ${SPARKY_FITNESS_APP_DB_USER:-sparkyapp}
      SPARKY_FITNESS_APP_DB_PASSWORD: ${SPARKY_FITNESS_APP_DB_PASSWORD:?Variable is required and must be set}
      SPARKY_FITNESS_DB_PORT: ${SPARKY_FITNESS_DB_PORT:-5432}
      SPARKY_FITNESS_API_ENCRYPTION_KEY: ${SPARKY_FITNESS_API_ENCRYPTION_KEY:?Variable is required and must be set}
      # Uncomment the line below and comment the line above to use a file-based secret
      # SPARKY_FITNESS_API_ENCRYPTION_KEY_FILE: /run/secrets/sparkyfitness_api_key
      BETTER_AUTH_SECRET: ${BETTER_AUTH_SECRET:?Variable is required and must be set}
      # Uncomment the line below and comment the line above to use a file-based secret
      # BETTER_AUTH_SECRET_FILE: /run/secrets/sparkyfitness_better_auth_secret
      SPARKY_FITNESS_FRONTEND_URL: ${SPARKY_FITNESS_FRONTEND_URL:-http://0.0.0.0:3004}
      SPARKY_FITNESS_DISABLE_SIGNUP: ${SPARKY_FITNESS_DISABLE_SIGNUP}
      SPARKY_FITNESS_ADMIN_EMAIL: ${SPARKY_FITNESS_ADMIN_EMAIL} # User with this email can access the admin panel
      SPARKY_FITNESS_EMAIL_HOST: ${SPARKY_FITNESS_EMAIL_HOST}
      SPARKY_FITNESS_EMAIL_PORT: ${SPARKY_FITNESS_EMAIL_PORT}
      SPARKY_FITNESS_EMAIL_SECURE: ${SPARKY_FITNESS_EMAIL_SECURE}
      SPARKY_FITNESS_EMAIL_USER: ${SPARKY_FITNESS_EMAIL_USER}
      SPARKY_FITNESS_EMAIL_PASS: ${SPARKY_FITNESS_EMAIL_PASS}
      SPARKY_FITNESS_EMAIL_FROM: ${SPARKY_FITNESS_EMAIL_FROM}
      GARMIN_MICROSERVICE_URL: http://sparkyfitness-garmin:8000 # Garmin microservice URL
    networks:
      - sparkyfitness-network # Use the new named network
    restart: always
    depends_on:
      - sparkyfitness-db # Backend depends on the database being available
    volumes:
      - ${SERVER_BACKUP_PATH:-./backup}:/app/SparkyFitnessServer/backup # Mount volume for backups
      - ${SERVER_UPLOADS_PATH:-./uploads}:/app/SparkyFitnessServer/uploads # Mount volume for profile pictures and exercise images

  sparkyfitness-frontend:
    image: codewithcj/sparkyfitness:latest # Use pre-built image
    ports:
      - "3004:80" # Map host port 3004 to container port 80 (Nginx)
    environment:
      SPARKY_FITNESS_FRONTEND_URL: ${SPARKY_FITNESS_FRONTEND_URL}
      SPARKY_FITNESS_SERVER_HOST: sparkyfitness-server # Internal Docker service name for the backend
      SPARKY_FITNESS_SERVER_PORT: 3010 # Port the backend server listens on
    networks:
      - sparkyfitness-network # Use the new named network
    restart: always
    depends_on:
      - sparkyfitness-server # Frontend depends on the server
      #- sparkyfitness-garmin # Frontend depends on the Garmin microservice. Enable if you are using Garmin Connect features.

  # Garmin integration is still a work in progress. Enable once the table is ready.
  # sparkyfitness-garmin:
  #   image: codewithcj/sparkyfitness_garmin:latest
  #   container_name: sparkyfitness-garmin
  #   environment:
  #     GARMIN_MICROSERVICE_URL: http://sparkyfitness-garmin:${GARMIN_SERVICE_PORT}
  #     GARMIN_SERVICE_PORT: ${GARMIN_SERVICE_PORT}
  #     GARMIN_SERVICE_IS_CN: ${GARMIN_SERVICE_IS_CN} # Set to 'true' for the China region; everything else should be 'false'. Optional, defaults to false
  #   networks:
  #     - sparkyfitness-network
  #   restart: unless-stopped
  #   depends_on:
  #     - sparkyfitness-db
  #     - sparkyfitness-server

networks:
  sparkyfitness-network:
    driver: bridge

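The `${VAR:?...}` expansions above are what make `docker compose up` fail fast when a required variable is missing from `.env`. A standalone shell sketch of the same mechanism (`DEMO_PASSWORD` is a made-up name):

```shell
# ${VAR:?msg} aborts expansion when VAR is unset or empty, instead of
# silently substituting "" - the same guard Compose applies at config time.
unset DEMO_PASSWORD
if ( : "${DEMO_PASSWORD:?Variable is required and must be set}" ) 2>/dev/null; then
  echo "would start"
else
  echo "refused: DEMO_PASSWORD is unset"   # this branch runs
fi

DEMO_PASSWORD="s3cret"
( : "${DEMO_PASSWORD:?}" ) && echo "starts once DEMO_PASSWORD is set"
```

Running `docker compose config` should surface the same errors without starting any containers.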
macos/Jellyfin-SMB.md

@@ -0,0 +1,316 @@
# Jellyfin + macOS: Persistent SMB Mount (Fix for Libraries Randomly “Clearing”)
This README documents the working fix I applied when Jellyfin (on a Mac mini) periodically “lost” or cleared my Movies/TV libraries that live on a NAS mounted over NFS.
It includes the exact commands, files, and rationale so I can reproduce it later.
---
## Problem Summary
- Symptom: Every day or two, Jellyfin showed empty Movies/TV libraries.
- Media location: NFS share at `/Volumes/media` from NAS `192.168.50.105:/media`.
- Root cause: macOS was using autofs (`/- /etc/auto_nfs`). autofs can unmount after inactivity or brief network blips. When the mount disappears during a Jellyfin scan/file-watch, Jellyfin sees files as missing and removes them from its DB.
## Solution Summary
- Stop using autofs for this path.
- Create a persistent SMB mount at boot using a LaunchDaemon and a small network-aware mount script (replacing the flaky NFS automount).
- The script:
- Is idempotent: does nothing if already mounted.
- Checks NAS reachability first.
- Logs to `/var/log/mount_media.(out|err)`.
- Optionally restarts Jellyfin (Homebrew service) if the mount comes back.
---
## Prerequisites / Assumptions
- macOS with admin (sudo) access.
- NAS at `192.168.50.105` sharing `media` over SMB; the script uses a hostname (`nas.example.local`), so adjust as needed.
- Mount point: `/Volumes/media` (adjust as needed).
- Jellyfin installed (optional Homebrew service restart in script).
> Tip: If `SMB_USER` is set, the script attempts an authenticated mount first; otherwise (or on failure) it falls back to guest/null-session access.
---
## Steps (copy/paste commands)
### 1) Disable autofs for this path and unmount any automounted share
```
# Backup and comment out the direct map for NFS
sudo cp /etc/auto_master /etc/auto_master.bak.$(date +%F_%H%M%S)
sudo sed -i.bak 's|^/- /etc/auto_nfs|#/- /etc/auto_nfs|' /etc/auto_master
# Reload automountd (will unmount /Volumes/media if it was automounted)
sudo automount -vc
# Ensure the mountpoint is not currently mounted (ignore errors if already unmounted)
sudo umount /Volumes/media 2>/dev/null || sudo umount -f /Volumes/media 2>/dev/null || true
```
> Note: If `chown`/`chmod` say “Operation not permitted,” the path is still mounted (or your NAS has root-squash). Unmount first.
---
### 2) Create the network-aware mount script
```
sudo mkdir -p /usr/local/sbin
sudo tee /usr/local/sbin/mount_media_smb.sh > /dev/null <<'SH'
#!/bin/sh
set -eu

LOG="/var/log/mount_media.out"
ERR="/var/log/mount_media.err"
MOUNT="/Volumes/media"

# SMB server settings: use domain name (FQDN)
HOST="nas.example.local" # <- change to your domain
SHARE="media"            # <- change share name if needed

# Optional auth:
# - If SMB_USER is set, the script will try an authenticated mount.
# - Supply SMB_PASS (environment) OR set SMB_KEYCHAIN_ITEM to fetch the password from Keychain.
SMB_USER="${SMB_USER:-}"
SMB_PASS="${SMB_PASS:-}"
SMB_KEYCHAIN_ITEM="${SMB_KEYCHAIN_ITEM:-}"

# Ensure mountpoint exists
[ -d "$MOUNT" ] || mkdir -p "$MOUNT"

# If already mounted on the mountpoint, exit quietly
if mount | awk '{print $3}' | grep -qx "$MOUNT"; then
  echo "$(date) already mounted: $MOUNT" >>"$LOG"
  exit 0
fi

# Preflight: only try to mount when an SMB port is reachable (try 445, then 139)
if ! ( /usr/bin/nc -G 2 -z "$HOST" 445 >/dev/null 2>&1 || /usr/bin/nc -G 2 -z "$HOST" 139 >/dev/null 2>&1 ); then
  echo "$(date) NAS not reachable on SMB ports (445/139), skipping mount" >>"$LOG"
  exit 0
fi

# Helpful server listing for debugging (doesn't include credentials)
echo "$(date) smbutil listing for debugging" >>"$LOG" 2>>"$ERR"
smbutil view "//$HOST" >>"$LOG" 2>>"$ERR" || true
smbutil view "//guest@$HOST" >>"$LOG" 2>>"$ERR" || true

# Helper: verify the mount and restart Jellyfin if needed
verify_and_exit() {
  if mount | awk '{print $3}' | grep -qx "$MOUNT"; then
    echo "$(date) mount OK: $MOUNT" >>"$LOG"
    if command -v brew >/dev/null 2>&1 && brew services list | grep -q '^jellyfin\b'; then
      echo "$(date) restarting Jellyfin (brew services)" >>"$LOG"
      brew services restart jellyfin >>"$LOG" 2>>"$ERR" || true
    fi
    exit 0
  fi
}

# Try authenticated mount if SMB_USER provided
if [ -n "$SMB_USER" ]; then
  # Retrieve password from Keychain if requested and SMB_PASS not set
  if [ -z "$SMB_PASS" ] && [ -n "$SMB_KEYCHAIN_ITEM" ]; then
    # Read the password from Keychain (service name = SMB_KEYCHAIN_ITEM, account = SMB_USER);
    # the -w flag prints only the password
    SMB_PASS="$(security find-generic-password -s "$SMB_KEYCHAIN_ITEM" -a "$SMB_USER" -w 2>/dev/null || true)"
  fi
  if [ -n "$SMB_PASS" ]; then
    # Use the password via stdin to avoid exposing it in the process list
    echo "$(date) attempting authenticated mount as user '$SMB_USER' -> $MOUNT" >>"$LOG"
    # Do NOT include the password in the URL or logs.
    MOUNT_URL="//${SMB_USER}@${HOST}/${SHARE}"
    # Send the password followed by a newline to mount_smbfs, which reads it from stdin
    if printf '%s\n' "$SMB_PASS" | /sbin/mount_smbfs "$MOUNT_URL" "$MOUNT" >>"$LOG" 2>>"$ERR"; then
      verify_and_exit
    else
      echo "$(date) authenticated mount attempt FAILED" >>"$ERR"
      # Fall through to guest attempts
    fi
  else
    # No password available for authenticated mount
    echo "$(date) SMB_USER set but no SMB_PASS or Keychain entry found -> will try guest" >>"$LOG"
  fi
fi

# If we reach here, try guest access (null/anonymous session)
echo "$(date) trying guest/null session (mount_smbfs -N) -> $MOUNT" >>"$LOG"
if /sbin/mount_smbfs -N "//$HOST/$SHARE" "$MOUNT" >>"$LOG" 2>>"$ERR"; then
  verify_and_exit
fi

echo "$(date) trying explicit guest user (guest@$HOST) -> $MOUNT" >>"$LOG"
if /sbin/mount_smbfs "//guest@$HOST/$SHARE" "$MOUNT" >>"$LOG" 2>>"$ERR"; then
  verify_and_exit
fi

# If we reached here, all attempts failed
echo "$(date) ALL SMB mount attempts FAILED" >>"$ERR"
echo "------ smbutil status ------" >>"$ERR"
smbutil statshares -a >>"$ERR" 2>&1 || true
echo "------ mount table ------" >>"$ERR"
mount >>"$ERR" 2>&1 || true
exit 1
SH
sudo chmod 755 /usr/local/sbin/mount_media_smb.sh
```
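The script's `SMB_KEYCHAIN_ITEM` path needs a matching Keychain entry. One way to create it (macOS-only; `media-smb` and `mediauser` are example names, and the System keychain is used because the LaunchDaemon runs as root):

```shell
# macOS-only sketch: store the SMB password where root can read it with
# `security find-generic-password -s media-smb -a mediauser -w`.
# Guarded so it is a no-op on systems without the `security` tool.
if command -v security >/dev/null 2>&1; then
  sudo security add-generic-password \
    -s "media-smb" \
    -a "mediauser" \
    -w "changeme_smb_password" \
    /Library/Keychains/System.keychain
else
  echo "no security tool (not macOS); skipping"
fi
```

Then expose `SMB_USER=mediauser` and `SMB_KEYCHAIN_ITEM=media-smb` to the script, e.g. via an `EnvironmentVariables` dict in the LaunchDaemon plist.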
---
### 3) Create the LaunchDaemon (mount at boot, re-check periodically, network-aware)
```
sudo tee /Library/LaunchDaemons/com.local.mountmedia.plist > /dev/null <<'PLIST'
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.local.mountmedia</string>
    <key>ProgramArguments</key>
    <array>
        <string>/usr/local/sbin/mount_media_smb.sh</string>
    </array>
    <key>RunAtLoad</key>
    <true/>
    <key>StartInterval</key>
    <integer>300</integer>
    <key>KeepAlive</key>
    <dict>
        <key>NetworkState</key>
        <true/>
    </dict>
    <key>StandardOutPath</key>
    <string>/var/log/mount_media.out</string>
    <key>StandardErrorPath</key>
    <string>/var/log/mount_media.err</string>
</dict>
</plist>
PLIST
sudo chown root:wheel /Library/LaunchDaemons/com.local.mountmedia.plist
sudo chmod 644 /Library/LaunchDaemons/com.local.mountmedia.plist
sudo plutil -lint /Library/LaunchDaemons/com.local.mountmedia.plist
sudo launchctl bootout system /Library/LaunchDaemons/com.local.mountmedia.plist 2>/dev/null || true
sudo launchctl bootstrap system /Library/LaunchDaemons/com.local.mountmedia.plist
sudo launchctl enable system/com.local.mountmedia
sudo launchctl kickstart -k system/com.local.mountmedia
```
---
### 4) Run once and verify
```
# Run once now (idempotent; logs "already mounted" if present)
sudo /usr/local/sbin/mount_media_smb.sh
# LaunchDaemon status
sudo launchctl print system/com.local.mountmedia | egrep 'state|last exit|PID' || true
# Mount status (should NOT say "automounted")
mount | grep " on /Volumes/media "
# SMB mount parameters
smbutil statshares -m /Volumes/media
# Script logs
tail -n 100 /var/log/mount_media.out /var/log/mount_media.err 2>/dev/null || true
```
---
### 5) Jellyfin settings
- Temporarily disable “Enable real-time monitoring” for libraries under `/Volumes/media` until you confirm the mount stays stable.
- Then run “Scan Library Files” to repopulate anything previously removed.
---
### 6) Reboot test (recommended)
```
sudo shutdown -r now
```
After reboot:
```
mount | grep " on /Volumes/media " || echo "Not mounted yet"
sudo launchctl print system/com.local.mountmedia | egrep 'state|last exit' || true
tail -n 100 /var/log/mount_media.out /var/log/mount_media.err 2>/dev/null || true
```
---
## Rationale for Key Choices
- Persistent mount (LaunchDaemon) instead of autofs:
- autofs can unmount after inactivity; Jellyfin then removes items it thinks are gone.
- LaunchDaemon ensures the mount is present before scans and remains mounted.
- SMB mount behavior:
- The script tries an authenticated session first, reading the password from the environment or Keychain and passing it via stdin so it never appears in the process list or logs.
- It then falls back to a guest/null session (`mount_smbfs -N`) and an explicit `guest@` user, covering shares that allow anonymous access.
- Network preflight:
- Check TCP 445/139 reachability before mounting to avoid "network unreachable" failures at boot or during brief link flaps.
- Logging:
- `/var/log/mount_media.out` and `.err` make it easy to correlate with Jellyfin logs.
---
## Troubleshooting
- “Operation not permitted” when `chown`/`chmod`:
- The path is still network-mounted, and the server (root-squash on NFS, or server-side permissions on SMB) likely prevents ownership changes from the client. Unmount first or change ownership on the NAS.
- LaunchDaemon errors:
- Validate plist: `sudo plutil -lint /Library/LaunchDaemons/com.local.mountmedia.plist`
- Service state: `sudo launchctl print system/com.local.mountmedia`
- Mount health:
- `smbutil statshares -m /Volumes/media` should list the share and its session details.
- Network/power:
- Prevent system sleep that drops the NIC; enable “Wake for network access.”
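The sleep/wake settings above can be applied from the command line; a macOS-only sketch (flags per `man pmset`, guarded so it no-ops elsewhere):

```shell
# macOS-only: disable system sleep and enable "Wake for network access"
# so the SMB session is not dropped while idle.
if command -v pmset >/dev/null 2>&1; then
  sudo pmset -a sleep 0   # never sleep the whole system
  sudo pmset -a womp 1    # Wake on LAN / "Wake for network access"
  pmset -g | grep -E 'sleep|womp'
else
  echo "pmset not available (not macOS); skipping"
fi
```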
---
## Optional: If you must keep autofs
Increase the autofs timeout so it doesn't unmount on brief inactivity (less ideal than the LaunchDaemon approach):
```
sudo cp /etc/auto_master /etc/auto_master.bak.$(date +%F_%H%M%S)
sudo sed -i.bak -E 's|^/-[[:space:]]+/etc/auto_nfs$|/- -timeout=604800 /etc/auto_nfs|' /etc/auto_master
sudo automount -vc
```
---
## Reverting
To revert to autofs:
```
# Stop and remove LaunchDaemon
sudo launchctl bootout system /Library/LaunchDaemons/com.local.mountmedia.plist
sudo rm -f /Library/LaunchDaemons/com.local.mountmedia.plist
# Restore /etc/auto_master (uncomment direct map) and reload
sudo sed -i.bak 's|^#/- /etc/auto_nfs|/- /etc/auto_nfs|' /etc/auto_master
sudo automount -vc
```
---
## Notes
- Change permissions/ownership on the share from the NAS side, not the macOS client (servers commonly disallow it from clients).
- `showmount` may fail against NFSv4-only servers; it's not needed here.
- Adjust `HOST`, `SHARE`, `MOUNT`, and `StartInterval` to suit your environment.