Database Migrations
Nixopus manages database schema changes with a migration system built in Go on top of Bun ORM and PostgreSQL. This document explains how migrations work in the project, how to create new migrations, and how to manage the migration lifecycle.
Overview
The migration system in Nixopus is designed to:
- Automatically apply schema changes when the server starts
- Support both forward (up) and backward (down) migrations
- Organize migrations by feature modules
- Provide transaction safety for all migration operations
- Track applied migrations in the database
Migration Architecture
Core Components
The migration system consists of several key components:
- Migrator: The main migration engine (api/internal/storage/migration.go)
- Migration Files: SQL files organized by feature modules
- Migration Table: A database table that tracks applied migrations
- Configuration: Automatic migration execution during server startup
Directory Structure
api/migrations/
├── applications/ # Application-related schema
├── audit/ # Audit logging schema
├── auth/ # Authentication and user management
├── containers/ # Container management schema
├── dashboard/ # Dashboard and analytics
├── domains/ # Domain management
├── feature-flags/ # Feature flag system
├── integrations/ # External integrations
├── notifications/ # Notification system
├── organizations/ # Organization management
├── rbac/ # Role-based access control
├── terminal/ # Terminal and shell access
└── users/ # User profile management
Migration File Naming Convention
Migration files follow a strict naming convention:
{sequence_no}_{descriptive_name}_{up|down}.sql
Examples:
001_create_users_up.sql
001_create_users_down.sql
002_create_refresh_token_up.sql
002_create_refresh_token_down.sql
How Migrations Work
Automatic Execution
Migrations are automatically executed when the Nixopus server starts. The process follows these steps:
- Server Initialization: During server startup, the migration system is initialized
- Migration Discovery: The system scans the ./migrations directory recursively
- Migration Loading: All migration files are loaded and parsed
- State Check: The system checks which migrations have already been applied
- Migration Execution: Pending migrations are applied in order
- Transaction Safety: Each migration runs within a database transaction
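The sketch below shows how this startup flow fits together. It reuses the Migrator API that appears later in this document (storage.NewMigrator and LoadMigrationsFromFS) and assumes the migrator is constructed from a *bun.DB handle; MigrateUp is an assumed counterpart to the MigrateDown method shown below, so the actual method name may differ.
// Sketch of the startup flow above; MigrateUp is an assumed method name.
func runStartupMigrations(db *bun.DB) error {
    // Initialize the migration engine (api/internal/storage/migration.go)
    migrator := storage.NewMigrator(db)
    // Discover and parse all *_up.sql / *_down.sql files under ./migrations
    migrator.LoadMigrationsFromFS("./migrations")
    // Apply any migrations not yet recorded in the migrations table,
    // each one inside its own transaction
    return migrator.MigrateUp()
}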
Migration Tracking
Bun ORM provides migration tooling but leaves tracking and management up to the application, so Nixopus maintains its own migrations table in the database to record applied migrations:
CREATE TABLE migrations (
    id BIGSERIAL PRIMARY KEY,
    name VARCHAR NOT NULL,
    applied_at TIMESTAMP WITH TIME ZONE NOT NULL
);
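The state check from the execution steps above boils down to comparing the loaded migration files against the rows in this table. A simplified sketch with illustrative names, not the actual implementation:
// Any loaded migration whose name is not yet in the migrations table is pending.
// The Migration type with a Name field mirrors the one used in the status
// examples later in this document.
func pendingMigrations(loaded []string, applied []Migration) []string {
    appliedSet := make(map[string]bool, len(applied))
    for _, m := range applied {
        appliedSet[m.Name] = true
    }
    var pending []string
    for _, name := range loaded {
        if !appliedSet[name] {
            pending = append(pending, name)
        }
    }
    return pending
}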
Transaction Safety
Each migration operation is wrapped in a database transaction to ensure atomicity:
- If a migration fails, the transaction is rolled back
- The migration table is only updated after successful execution
- Multiple migrations can be applied in a single transaction for performance
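A simplified sketch of this pattern using Bun's RunInTx helper; the actual implementation in api/internal/storage/migration.go may differ in detail:
// Run one migration and record it atomically: if the SQL fails, the whole
// transaction is rolled back and no row is written to the migrations table.
func applyMigrationInTx(ctx context.Context, db *bun.DB, name, upSQL string) error {
    return db.RunInTx(ctx, nil, func(ctx context.Context, tx bun.Tx) error {
        if _, err := tx.ExecContext(ctx, upSQL); err != nil {
            return err // rollback: schema change failed
        }
        // Record the migration only after its SQL has succeeded
        _, err := tx.ExecContext(ctx,
            "INSERT INTO migrations (name, applied_at) VALUES (?, ?)", name, time.Now())
        return err
    })
}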
Creating New Migrations
Step 1: Choose the Appropriate Module
Determine which feature module your migration belongs to:
- auth: User authentication, sessions, tokens
- users: User profiles, preferences
- organizations: Organization structure, membership
- rbac: Roles, permissions, access control
- applications: Application definitions, configurations
- containers: Container management, deployments
- terminal: Terminal access, shell configurations
- notifications: Alert systems, messaging
- integrations: External service connections
- dashboard: Analytics, reporting
- domains: DNS, domain management
- feature-flags: Feature toggle system
- audit: System logging, audit trails
Step 2: Determine Version Number
Find the highest version number and increment by 1:
# Check existing migrations in the auth module
ls api/migrations/auth/
# Output: 001_create_users_up.sql, 001_create_users_down.sql, ...
# Next version would be 002
Step 3: Create Migration Files
Create both up and down migration files:
Up Migration Example (003_add_user_preferences_up.sql):
-- Add user preferences table
CREATE TABLE user_preferences (
    id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
    user_id UUID NOT NULL REFERENCES users(id) ON DELETE CASCADE,
    theme VARCHAR(50) DEFAULT 'light',
    language VARCHAR(10) DEFAULT 'en',
    timezone VARCHAR(100) DEFAULT 'UTC',
    notifications_enabled BOOLEAN DEFAULT true,
    created_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP,
    updated_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP
);
-- Enforce one preferences row per user; this unique index also covers lookups by user_id
CREATE UNIQUE INDEX idx_user_preferences_user_id ON user_preferences(user_id);
-- Add trigger for updated_at
CREATE OR REPLACE FUNCTION update_updated_at_column()
RETURNS TRIGGER AS $$
BEGIN
    NEW.updated_at = CURRENT_TIMESTAMP;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER update_user_preferences_updated_at
    BEFORE UPDATE ON user_preferences
    FOR EACH ROW
    EXECUTE FUNCTION update_updated_at_column();
Down Migration Example (003_add_user_preferences_down.sql):
-- Remove user preferences table and related objects
DROP TRIGGER IF EXISTS update_user_preferences_updated_at ON user_preferences;
DROP FUNCTION IF EXISTS update_updated_at_column();
DROP TABLE IF EXISTS user_preferences;
Step 4: Test Your Migration
Before committing, test your migration:
- Apply the migration: Start the server to apply new migrations
- Verify schema: Check that tables and indexes are created correctly
- Test rollback: Use the migration tools to test the down migration
- Data integrity: Ensure existing data is preserved
Migration Operations
Running Migrations
Migrations run automatically when the server starts; for testing, simply starting the API in development applies any pending migrations:
# The server automatically runs migrations on startup
cd api
air # development environment
Rolling Back Migrations
To roll back the most recent migration:
// This functionality is available in the codebase
// but typically used for development/testing
migrator := storage.NewMigrator(db)
migrator.LoadMigrationsFromFS("./migrations")
migrator.MigrateDown()
Checking Migration Status
The migration system provides several utility functions:
// Get all applied migrations
applied, err := migrator.GetAppliedMigrations()
if err != nil {
    log.Fatal(err)
}
// Check if a specific migration is applied
for _, migration := range applied {
    if migration.Name == "001_create_users" {
        fmt.Println("Users table migration is applied")
    }
}
Resetting Migrations (Development Only)
For development purposes, you can reset the migration state:
// WARNING: This drops all tables - development only
storage.MigrateDownAll(db, "./migrations")
// Or reset just the migration tracking
storage.ResetMigrations(db)
Standard Design Practices
Migration Design
- Atomic Changes: Each migration should represent a single, complete change
- Backward Compatibility: Design migrations to be backward compatible when possible
- Data Preservation: Always consider existing data when modifying schemas
- Dependencies: Ensure migrations can run independently
Naming Conventions
- Descriptive Names: Use clear, descriptive names for migrations
- Consistent Format: Follow the established naming pattern
- Version Ordering: Use zero-padded numbers for proper ordering
- Module Organization: Keep related migrations in the same module
SQL Best Practices
- Use Transactions: Wrap complex operations in explicit transactions
- Add Indexes: Include necessary indexes for performance
- Constraints: Define proper foreign keys and constraints
- Comments: Add comments to explain complex operations
- Extensions: Check for required PostgreSQL extensions
Testing Guidelines
- Local Testing: Always test migrations locally before committing
- Rollback Testing: Verify that down migrations work correctly
- Data Integrity: Ensure data is preserved during migrations
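One way to exercise these checks locally is a small test against a disposable database. The sketch below assumes a hypothetical openTestDB helper and reuses the Migrator functions shown in the operations section above:
// Hypothetical rollback check: roll back the latest migration and confirm it
// is no longer recorded. openTestDB is an assumed helper, not part of Nixopus.
func TestLatestDownMigration(t *testing.T) {
    db := openTestDB(t) // assumed helper: opens a throwaway test database
    migrator := storage.NewMigrator(db)
    migrator.LoadMigrationsFromFS("./migrations")
    // Roll back the most recent migration
    migrator.MigrateDown()
    // Confirm the rolled-back migration is no longer tracked
    applied, err := migrator.GetAppliedMigrations()
    if err != nil {
        t.Fatal(err)
    }
    for _, m := range applied {
        if m.Name == "003_add_user_preferences" {
            t.Error("expected 003_add_user_preferences to be rolled back")
        }
    }
}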
Common Patterns
Creating Tables
-- Standard table creation pattern
CREATE TABLE example_table (
    id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
    name VARCHAR(255) NOT NULL,
    description TEXT,
    created_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP,
    updated_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP,
    deleted_at TIMESTAMP WITH TIME ZONE
);
-- Add indexes
CREATE INDEX idx_example_table_name ON example_table(name);
CREATE INDEX idx_example_table_created_at ON example_table(created_at);
Adding Columns
-- Add new column with default value
ALTER TABLE users ADD COLUMN status VARCHAR(50) DEFAULT 'active';
-- Add constraint
ALTER TABLE users ADD CONSTRAINT chk_user_status
CHECK (status IN ('active', 'inactive', 'suspended'));
Modifying Columns
-- Change column type
ALTER TABLE users ALTER COLUMN email TYPE VARCHAR(320);
-- Add NOT NULL constraint
ALTER TABLE users ALTER COLUMN email SET NOT NULL;
-- Add default value
ALTER TABLE users ALTER COLUMN created_at SET DEFAULT CURRENT_TIMESTAMP;
Foreign Key Relationships
-- Add foreign key constraint
ALTER TABLE user_preferences
ADD CONSTRAINT fk_user_preferences_user_id
FOREIGN KEY (user_id) REFERENCES users(id) ON DELETE CASCADE;
This migration system provides a robust foundation for managing database schema changes in Nixopus while maintaining data integrity and system reliability.