Running a single self-hosted Supabase instance is straightforward enough. But what happens when you need to safely test changes before they hit production? What about letting multiple developers work simultaneously without stepping on each other's toes?
If you've been pushing changes directly to production (we've all been there), this guide will help you set up proper environment separation for your self-hosted Supabase stack.
Why Multi-Environment Setup Matters for Self-Hosted Supabase
The shift from Supabase Cloud to self-hosted gives you full control—but that control comes with responsibility. On Supabase Cloud, you get branch databases and preview environments built-in. Self-hosted? You're building that infrastructure yourself.
Here's what typically goes wrong without proper environment separation:
- Untested migrations break production — A schema change that worked in development causes data loss in production
- No rollback path — Changes are irreversible because there's no pre-production testing ground
- Developer conflicts — Multiple team members making schema changes simultaneously creates chaos
- OAuth configuration differences — Redirect URIs work locally but break in production
The solution is a three-tier environment setup: Local → Staging → Production. Each serves a distinct purpose and catches different classes of bugs before they reach users.
Environment Architecture Overview
Before diving into implementation, let's clarify what each environment should handle:
| Environment | Purpose | Who Uses It | Data |
|---|---|---|---|
| Local | Rapid development, experimentation | Individual developers | Seed data, fake users |
| Staging | Integration testing, QA | Whole team, CI/CD | Sanitized production copy |
| Production | Real users | End users | Real data |
For self-hosted Supabase, each environment runs its own complete stack: PostgreSQL, GoTrue (Auth), Storage, PostgREST, and whatever other services you need. This differs from Supabase Cloud where environments share underlying infrastructure.
Setting Up Your Local Environment
The Supabase CLI provides an excellent local development experience, even when your production runs on self-hosted infrastructure. If you haven't already set up local development, check out our local development workflow guide for the basics.
Start with the CLI:
```bash
# Initialize Supabase in your project
npx supabase init

# Start local Supabase
npx supabase start
```
This spins up a complete Supabase stack in Docker—the same components running in your self-hosted production environment.
Configuring Local Environment Variables
Create a .env.local file for your application:
```bash
# .env.local
NEXT_PUBLIC_SUPABASE_URL=http://127.0.0.1:54321
NEXT_PUBLIC_SUPABASE_ANON_KEY=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...
SUPABASE_SERVICE_ROLE_KEY=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...
```
The local Supabase instance uses predictable keys, which makes setup easy but also means these should never be used anywhere except local development.
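You can print these local values at any time with the CLI:

```bash
# Show the local API URL, DB URL, anon key, and service_role key
npx supabase status
```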
Seed Data for Development
Create a supabase/seed.sql file that populates your local database with test data:
```sql
-- supabase/seed.sql

-- Create test users (these only exist locally)
INSERT INTO auth.users (id, email, encrypted_password, email_confirmed_at)
VALUES
  ('d0d12340-0000-0000-0000-000000000001', 'test@example.com',
   crypt('password123', gen_salt('bf')), now());

-- Add corresponding test data
INSERT INTO public.profiles (id, username, created_at)
VALUES
  ('d0d12340-0000-0000-0000-000000000001', 'testuser', now());
```
Run seed data automatically on database reset:
```bash
npx supabase db reset
```
Setting Up Your Staging Environment
Staging should mirror production as closely as possible while remaining isolated. For self-hosted Supabase, this typically means a separate server or a separate Docker Compose stack on the same infrastructure.
Option 1: Separate Server (Recommended)
Deploy a complete Supabase stack to a smaller VPS. This provides true isolation and lets you test infrastructure changes safely. If you're looking for cost-effective options, our VPS provider guide covers servers suitable for staging environments.
Option 2: Same Server, Separate Stack
If budget is tight, you can run staging alongside production using different ports and a separate Docker Compose project:
```bash
# Create staging directory
mkdir -p ~/supabase-staging
cd ~/supabase-staging

# Clone official Supabase repository
git clone --depth 1 https://github.com/supabase/supabase.git
cd supabase/docker

# Copy and modify .env for staging
cp .env.example .env
```
Modify the .env file for staging:
```bash
# .env (staging)
SITE_URL=https://staging.yourapp.com
API_EXTERNAL_URL=https://staging-api.yourapp.com

# Use non-default ports to avoid conflicts with production
KONG_HTTP_PORT=8001
KONG_HTTPS_PORT=8444
POSTGRES_PORT=5433
STUDIO_PORT=3001

# CRITICAL: Generate unique secrets
JWT_SECRET=your-staging-specific-jwt-secret
ANON_KEY=your-staging-anon-key
SERVICE_ROLE_KEY=your-staging-service-role-key
```
Start the staging stack:
```bash
docker compose -p supabase-staging up -d
```
The -p supabase-staging flag sets a separate Docker Compose project name, keeping staging containers, networks, and volumes isolated from production's.
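Because the project name scopes every Compose command, you can inspect or restart staging without touching production. For example (the auth service name matches the official docker-compose.yml; adjust if yours differs):

```bash
# List only the staging containers
docker compose -p supabase-staging ps

# Tail logs for the staging auth service
docker compose -p supabase-staging logs -f auth
```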
Configuring Staging OAuth
OAuth providers need separate credentials for staging. In your Google Cloud Console or GitHub OAuth settings, create a separate OAuth application with staging redirect URIs.
For self-hosted Supabase, configure these in your staging .env:
```bash
GOTRUE_EXTERNAL_GOOGLE_ENABLED=true
GOTRUE_EXTERNAL_GOOGLE_CLIENT_ID=staging-client-id
GOTRUE_EXTERNAL_GOOGLE_SECRET=staging-client-secret
GOTRUE_EXTERNAL_GOOGLE_REDIRECT_URI=https://staging-api.yourapp.com/auth/v1/callback
```
See our OAuth providers guide for detailed configuration steps.
Setting Up Your Production Environment
Your production self-hosted Supabase should already be running. If not, our deployment guide walks through the complete setup.
The key difference for production is that changes should only arrive through your migration workflow—never through direct database modifications.
Database Migration Workflow
With three environments in place, you need a consistent way to move schema changes through the pipeline. The Supabase CLI handles this through migration files.
Creating Migrations
When making schema changes, create them as migrations rather than editing the database directly:
```bash
# Create a new migration file
npx supabase migration new add_user_preferences

# This creates: supabase/migrations/20260501120000_add_user_preferences.sql
```
Write your migration:
```sql
-- supabase/migrations/20260501120000_add_user_preferences.sql
CREATE TABLE public.user_preferences (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  user_id UUID NOT NULL REFERENCES auth.users(id) ON DELETE CASCADE,
  theme TEXT DEFAULT 'light',
  notifications_enabled BOOLEAN DEFAULT true,
  created_at TIMESTAMPTZ DEFAULT now(),
  updated_at TIMESTAMPTZ DEFAULT now()
);

-- Enable RLS
ALTER TABLE public.user_preferences ENABLE ROW LEVEL SECURITY;

-- Policy: users can only access their own preferences
CREATE POLICY "Users can manage own preferences"
  ON public.user_preferences
  FOR ALL
  USING (auth.uid() = user_id);
```
Applying Migrations to Each Environment
Local (automatic via CLI):
```bash
npx supabase db reset  # Applies all migrations and seed data
```
Staging (via direct connection):
npx supabase db push --db-url "postgresql://postgres:staging-password@staging-server:5433/postgres"
Production (via direct connection with caution):
npx supabase db push --db-url "postgresql://postgres:prod-password@prod-server:5432/postgres"
Automated Deployment with GitHub Actions
For teams, automate migration deployment through CI/CD. Here's a workflow that deploys to staging on PR merge to develop and to production on merge to main:
```yaml
# .github/workflows/deploy-migrations.yml
name: Deploy Migrations

on:
  push:
    branches: [develop, main]
    paths:
      - 'supabase/migrations/**'

jobs:
  deploy-staging:
    if: github.ref == 'refs/heads/develop'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: supabase/setup-cli@v1
        with:
          version: latest
      - name: Deploy to Staging
        run: |
          supabase db push --db-url "${{ secrets.STAGING_DB_URL }}"

  deploy-production:
    if: github.ref == 'refs/heads/main'
    runs-on: ubuntu-latest
    environment: production  # Requires manual approval
    steps:
      - uses: actions/checkout@v4
      - uses: supabase/setup-cli@v1
        with:
          version: latest
      - name: Deploy to Production
        run: |
          supabase db push --db-url "${{ secrets.PRODUCTION_DB_URL }}"
```
The environment: production setting in GitHub Actions lets you require manual approval before production deployments.
Managing Configuration Across Environments
Your application needs to connect to the correct Supabase instance based on the environment. Structure your environment variables clearly:
```bash
# .env.local (development)
NEXT_PUBLIC_SUPABASE_URL=http://127.0.0.1:54321
NEXT_PUBLIC_SUPABASE_ANON_KEY=local-anon-key

# .env.staging
NEXT_PUBLIC_SUPABASE_URL=https://staging-api.yourapp.com
NEXT_PUBLIC_SUPABASE_ANON_KEY=staging-anon-key

# .env.production
NEXT_PUBLIC_SUPABASE_URL=https://api.yourapp.com
NEXT_PUBLIC_SUPABASE_ANON_KEY=production-anon-key
```
Validating Environment Configuration
Add a startup check to catch misconfiguration early:
```typescript
// lib/supabase/validate-env.ts
export function validateSupabaseEnv() {
  const url = process.env.NEXT_PUBLIC_SUPABASE_URL;
  const anonKey = process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY;

  if (!url || !anonKey) {
    throw new Error('Missing Supabase environment variables');
  }

  // Prevent accidentally using local config in production
  if (process.env.NODE_ENV === 'production' && url.includes('127.0.0.1')) {
    throw new Error('Local Supabase URL detected in production!');
  }
}
```
Synchronizing Data Between Environments
Staging works best with realistic data. Here's how to safely copy production data to staging:
Sanitizing Production Data
Never copy raw production data to staging—always sanitize it first:
```bash
# Export production data
pg_dump -h prod-server -U postgres -d postgres \
  --data-only --exclude-table=auth.users > prod_data.sql

# Apply to staging (after sanitizing)
psql -h staging-server -U postgres -d postgres < sanitized_data.sql
```
Create a sanitization script that does the following (a minimal sketch follows this list):
- Removes or hashes personal information
- Replaces emails with fake addresses
- Scrubs any PII from custom tables
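What this script contains depends entirely on your schema. One workable pattern is to restore the dump into a throwaway scratch database, scrub it with plain SQL, and re-export it. Here is a minimal sketch along those lines; the public.profiles email column is an assumption for illustration, so substitute your actual tables and PII columns:

```bash
#!/bin/bash
# sanitize-dump.sh -- minimal sketch, not a complete solution.
# Assumes a local scratch Postgres and a public.profiles table with an
# email column (illustrative; replace with your real PII columns).
set -euo pipefail

DUMP_FILE="$1"
SCRATCH_DB="sanitize_scratch"

# Restore into a scratch database nobody else can reach
createdb "$SCRATCH_DB"
psql -d "$SCRATCH_DB" -f "$DUMP_FILE"

# Scrub PII with plain SQL
psql -d "$SCRATCH_DB" <<'SQL'
UPDATE public.profiles
SET email = 'user-' || id || '@example.com'
WHERE email IS NOT NULL;
SQL

# Re-export the sanitized dump over the original file
pg_dump -d "$SCRATCH_DB" --clean --if-exists > "$DUMP_FILE"
dropdb "$SCRATCH_DB"
```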
Refreshing Staging Regularly
Schedule regular staging refreshes to keep it current:
```bash
#!/bin/bash
# refresh-staging.sh

# Dump production (schema and sanitized data)
pg_dump -h "$PROD_HOST" -U postgres -d postgres \
  --clean --if-exists > /tmp/prod_dump.sql

# Sanitize the dump
./sanitize-dump.sh /tmp/prod_dump.sql

# Apply to staging
psql -h "$STAGING_HOST" -U postgres -d postgres < /tmp/prod_dump.sql

# Run any staging-specific setup
psql -h "$STAGING_HOST" -U postgres -d postgres < ./staging-overrides.sql
```
How Supascale Simplifies Multi-Environment Management
Managing multiple self-hosted Supabase environments manually involves significant overhead: tracking configurations, coordinating backups, and ensuring consistency across instances.
Supascale addresses this by providing a unified interface for managing multiple Supabase projects. With a one-time purchase of $39.99, you can:
- Deploy multiple projects with distinct configurations for each environment
- Manage OAuth providers through a GUI instead of editing .env files for each instance
- Configure automated backups to S3-compatible storage, ensuring each environment is protected
- Set up custom domains with automatic SSL for staging and production environments
Rather than maintaining separate Docker Compose configurations and manual processes, Supascale lets you treat your environments as managed resources while retaining full control over your infrastructure.
Common Pitfalls and How to Avoid Them
Migration Order Issues
Migrations must run in the same order across all environments. Never rename or reorder migration files after they've been applied anywhere.
Fix: Use timestamps in migration names (the CLI default) and never modify committed migrations.
Environment Variable Leaks
Accidentally deploying with the wrong environment variables happens more often than you'd think.
Fix: Use environment-specific .env files and validate at startup. Never commit .env files to git.
Divergent Schemas
Direct database changes in one environment create schema drift that breaks migrations.
Fix: Enforce a "migrations only" policy. If you need to make an emergency production fix, immediately create a corresponding migration file.
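The CLI can help you backfill that migration. supabase db diff can compare a live database against your migration history and write the difference to a new migration file (flag names reflect recent CLI versions; verify against your installed version):

```bash
# Capture an emergency production change as a migration file
npx supabase db diff -f emergency_fix \
  --db-url "postgresql://postgres:prod-password@prod-server:5432/postgres"
```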
Storage Configuration Differences
Storage paths and S3 configurations often differ between environments, causing file access issues.
Fix: Use environment variables for all storage configuration and test file operations in staging before production.
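As a sketch, the storage service in the self-hosted stack is driven by environment variables along these lines; the exact names vary between storage-api versions, so treat them as illustrative and check your docker-compose.yml:

```bash
# .env (staging storage settings -- names illustrative)
STORAGE_BACKEND=s3
GLOBAL_S3_BUCKET=yourapp-staging-uploads  # one bucket per environment
REGION=eu-central-1
```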
Verification Checklist
Before considering your multi-environment setup complete, verify:
- [ ] Local Supabase starts and applies all migrations
- [ ] Staging is accessible at its own URL
- [ ] Production remains isolated from staging changes
- [ ] Migrations apply successfully to all environments in order
- [ ] OAuth works in each environment with environment-specific credentials
- [ ] CI/CD deploys to staging automatically and production with approval
- [ ] Backups are configured for both staging and production
- [ ] Application connects to the correct environment based on deployment target
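A small smoke-test script can cover several of these checks in one pass. GoTrue exposes a health endpoint, reachable through Kong at /auth/v1/health; depending on your Kong configuration the route may require the anon key, so this sketch (URLs and variable names are illustrative) sends it as the apikey header:

```bash
#!/bin/bash
# smoke-test.sh -- verify each environment's auth service responds
check() {
  local url="$1" anon_key="$2"
  if curl -fsS -H "apikey: $anon_key" "$url/auth/v1/health" > /dev/null; then
    echo "$url: OK"
  else
    echo "$url: FAILED"
  fi
}

check "http://127.0.0.1:54321"          "$LOCAL_ANON_KEY"
check "https://staging-api.yourapp.com" "$STAGING_ANON_KEY"
check "https://api.yourapp.com"         "$PROD_ANON_KEY"
```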
Further Reading
- CI/CD Pipelines for Self-Hosted Supabase — Detailed CI/CD implementation
- Database Schema Migrations Guide — Deep dive into migration patterns
- Environment Variables Guide — Complete configuration reference
- Supascale Pricing — Deploy and manage multiple environments with ease
