Symptom: nixopus-auth or nixopus-api crash-loops with database authentication errors.
Cause: Containers were removed, but Docker volumes still hold the old database with the original password. The reinstall generated new credentials that don’t match.
Fix:
# Option 1: Use the original password from the backup
grep DB_PASSWORD /opt/nixopus/.env.bak
curl -fsSL install.nixopus.com | sudo DB_PASSWORD=<original_password> bash

# Option 2: Start fresh (destroys all data)
docker volume rm $(docker volume ls -q --filter name=nixopus)
curl -fsSL install.nixopus.com | sudo bash
Symptom: Installer hangs at “Waiting for services to start…” and times out after 180s.
Fix: Check which service is unhealthy:
nixopus status
nixopus logs
Common causes: port conflict (see Ports), DNS not configured (see HTTPS), or insufficient resources (see Requirements).
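To check for a port conflict directly, ss (from iproute2) shows what already listens on the HTTP/HTTPS ports Caddy needs; a quick sketch, assuming ports 80 and 443:

```shell
# Show anything already listening on the ports Caddy needs
# (run with sudo to resolve the owning process names).
for port in 80 443; do
  echo "--- port $port ---"
  ss -ltnp "sport = :$port"
done
```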
Symptom: Browser shows connection refused or timeout.
Fix:
  1. Verify services are running: nixopus status
  2. Check ports: nixopus config
  3. Ensure firewall rules are in place (see Firewall)
  4. If behind a cloud provider, check the security group / firewall rules in your cloud console
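From outside the machine, a raw TCP probe tells you whether the ports are reachable at all; a minimal sketch using bash’s /dev/tcp (SERVER_IP is a placeholder for your machine’s public IP):

```shell
# probe_port: succeeds when a TCP connection opens within 3 seconds
probe_port() {
  timeout 3 bash -c ">/dev/tcp/$1/$2" 2>/dev/null
}

SERVER_IP=203.0.113.10   # placeholder -- use your machine's public IP
for port in 80 443; do
  if probe_port "$SERVER_IP" "$port"; then
    echo "port $port reachable"
  else
    echo "port $port blocked or closed"
  fi
done
```

Run this from your laptop, not from the server itself, so the probe actually crosses the firewall.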
Symptom: Deploys fail with SSH connection refused or permission denied.
Cause: The API container SSHes back into the host to manage deployments. This requires:
  1. SSH service running on the host on the configured port (SSH_PORT, default 22)
  2. The Nixopus SSH public key in the host’s ~/.ssh/authorized_keys
  3. The host reachable from the Docker network
Fix:
# Verify the key is in authorized_keys (append it if missing)
grep -qF "$(cat /opt/nixopus/ssh/id_rsa.pub)" ~/.ssh/authorized_keys || \
  cat /opt/nixopus/ssh/id_rsa.pub >> ~/.ssh/authorized_keys

# Test SSH from the API container
docker exec nixopus-api ssh -i /etc/nixopus/ssh/id_rsa \
  -p ${SSH_PORT:-22} -o StrictHostKeyChecking=no \
  ${SSH_USER:-root}@${SSH_HOST} echo ok
Symptom: Containers fail to start or can’t access mounted volumes on RHEL-based systems.
Fix:
# Check if SELinux is enforcing
getenforce

# Option 1: Allow Docker to access the volume (recommended)
chcon -Rt svirt_sandbox_file_t /opt/nixopus

# Option 2: Set SELinux to permissive (less secure)
setenforce 0
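Note that a bare chcon is undone by the next filesystem relabel; to make Option 1 persistent you can record the context with semanage (from policycoreutils-python-utils, an assumption about your package set). A sketch, guarded so it only acts when SELinux is actually enforcing:

```shell
# Helper: true only when SELinux is present and enforcing
selinux_enforcing() {
  command -v getenforce >/dev/null 2>&1 && [ "$(getenforce)" = "Enforcing" ]
}

if selinux_enforcing; then
  # Record the context rule, then apply it; unlike chcon alone,
  # this survives future relabels (restorecon runs).
  semanage fcontext -a -t svirt_sandbox_file_t "/opt/nixopus(/.*)?"
  restorecon -Rv /opt/nixopus
fi
```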
Container logs are capped at 10MB per service (30MB with rotation). If disk still fills up:
# Docker disk usage
docker system df

# Clean unused images and build cache
docker system prune -f

# Check Postgres data size
docker exec nixopus-db psql -U nixopus \
  -c "SELECT pg_size_pretty(pg_database_size('nixopus'));"
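If Docker images and the database look fine, container log files are a common offender; a sketch that lists the largest json-file logs, assuming the default /var/lib/docker data root (read it as root/sudo to see all containers):

```shell
# Largest container log files first (the json-file driver writes
# /var/lib/docker/containers/<id>/<id>-json.log)
du -ah /var/lib/docker/containers 2>/dev/null \
  | grep -- '-json.log' | sort -rh | head -n 10
```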
Services use restart: unless-stopped, so they start automatically with Docker. If they don’t:
# Ensure Docker starts on boot
sudo systemctl enable docker

# Start services manually
nixopus restart
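You can also confirm the restart policy is actually set on each container; a sketch, assuming the containers are named with a nixopus prefix:

```shell
# Print each Nixopus container's restart policy (expect "unless-stopped")
docker ps -a --filter name=nixopus --format '{{.Names}}' 2>/dev/null \
  | while read -r name; do
      policy=$(docker inspect --format '{{.HostConfig.RestartPolicy.Name}}' "$name" 2>/dev/null)
      echo "$name: $policy"
    done
```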
Symptom: Apps crash unexpectedly, containers restart, or docker inspect shows OOMKilled: true.
Cause: A container exceeded its memory limit or the machine ran out of available memory.
Fix:
  1. Restart the services to reclaim memory:
    nixopus restart
    
  2. Check which containers are consuming the most memory:
    docker stats --no-stream --format "table {{.Name}}\t{{.MemUsage}}"
    
  3. If the issue persists, your workload may need more resources. Consider upgrading your machine — see machine resources for Nixopus Cloud, or add more RAM to your VPS for self-hosted.
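To find which container was OOM-killed, docker inspect exposes the flag directly; a sketch over all containers:

```shell
# Flag containers the kernel OOM-killed, with the configured memory limit
# (MemoryLimit=0 means no limit was set on that container)
docker ps -aq 2>/dev/null | while read -r id; do
  docker inspect --format \
    '{{.Name}}: OOMKilled={{.State.OOMKilled}} MemoryLimit={{.HostConfig.Memory}}' \
    "$id" 2>/dev/null
done
```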
Symptom: Domain shows a connection error or invalid certificate warning.
Fix:
  1. Confirm your DNS A record points to your machine’s public IP. DNS propagation can take up to 48 hours.
  2. Caddy provisions SSL automatically, but it needs ports 80 and 443 open. Check your firewall or cloud security group.
  3. View Caddy logs: nixopus logs nixopus-caddy
  4. If using Cloudflare, set to “DNS only” (grey cloud) during initial setup.
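Step 1 can be checked from the command line; a sketch comparing the DNS answer with the server’s public IP (example.com is a placeholder for your domain, dig comes from dnsutils/bind-utils, and api.ipify.org is just one public what-is-my-IP service):

```shell
DOMAIN=example.com   # placeholder -- use your Nixopus domain
dns_ip=$(dig +short A "$DOMAIN" 2>/dev/null | head -n 1)
public_ip=$(curl -fsS https://api.ipify.org 2>/dev/null)
echo "DNS answer: ${dns_ip:-<no record>}   server IP: ${public_ip:-<unknown>}"
if [ -n "$dns_ip" ] && [ "$dns_ip" = "$public_ip" ]; then
  echo "A record matches"
else
  echo "mismatch or missing record -- fix the A record or wait for propagation"
fi
```

If Cloudflare proxying (orange cloud) is on, dig returns Cloudflare’s edge IPs rather than your server’s, which is why step 4 recommends “DNS only” during initial setup.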