If you're running self-hosted Supabase, you've probably deployed it manually at least once. SSH into a server, install Docker, clone the repo, configure environment variables, start the containers. It works, but it doesn't scale—and it definitely doesn't survive the "hit by a bus" test.
Infrastructure as Code (IaC) solves this. With Terraform, you can version-control your entire Supabase deployment: the server, the Docker configuration, the environment variables, everything. One command recreates your entire infrastructure from scratch.
This guide covers provisioning cloud servers, deploying Supabase with Docker Compose, and managing secrets—all through Terraform.
Why Infrastructure as Code for Supabase?
Manual deployments work fine for a single project. But when you're managing multiple Supabase instances across staging and production, or need to recover from infrastructure failures, manual processes become liabilities.
Here's what IaC gives you:
Reproducibility: Your staging environment matches production exactly. No more "works on staging" bugs caused by configuration drift.
Disaster recovery: If your server dies, you can spin up an identical replacement in minutes, not hours. Combined with proper backups, you can recover from almost any failure.
Audit trail: Every infrastructure change lives in Git. You know exactly who changed what, when, and why.
Team scaling: New developers don't need tribal knowledge to understand the infrastructure. They read the Terraform files.
The trade-off is complexity. You're adding another tool to maintain. For a single small project, manual deployment might be simpler. But once you have multiple environments or team members, IaC pays for itself quickly.
Architecture Overview
We'll build a Terraform configuration that:
- Provisions a VPS on your cloud provider of choice
- Installs Docker and Docker Compose
- Deploys Supabase with production-ready configuration
- Manages secrets securely
- Configures networking and firewall rules
The full setup uses three Terraform files: main.tf for resources, variables.tf for configuration, and outputs.tf for connection information.
Setting Up Your Terraform Project
Start with the basic directory structure:
supabase-infrastructure/
├── main.tf
├── variables.tf
├── outputs.tf
├── terraform.tfvars # Your actual values (gitignore this)
├── terraform.tfvars.example
└── scripts/
    └── cloud-init.yaml
Defining Variables
Create variables.tf with your configuration options:
variable "do_token" {
  description = "DigitalOcean API token"
  type        = string
  sensitive   = true
}

variable "region" {
  description = "Deployment region"
  type        = string
  default     = "nyc1"
}

variable "droplet_size" {
  description = "Server size (minimum s-2vcpu-4gb for production)"
  type        = string
  default     = "s-2vcpu-4gb"
}

variable "supabase_version" {
  description = "Supabase Docker image tag"
  type        = string
  default     = "latest"
}

variable "postgres_password" {
  description = "PostgreSQL password"
  type        = string
  sensitive   = true
}

variable "jwt_secret" {
  description = "JWT signing secret (minimum 32 characters)"
  type        = string
  sensitive   = true
}

variable "anon_key" {
  description = "Supabase anon key"
  type        = string
  sensitive   = true
}

variable "service_role_key" {
  description = "Supabase service role key"
  type        = string
  sensitive   = true
}

variable "dashboard_username" {
  description = "Studio dashboard username"
  type        = string
  default     = "admin"
}

variable "dashboard_password" {
  description = "Studio dashboard password"
  type        = string
  sensitive   = true
}
The sensitive = true flag keeps these values out of Terraform's plan and apply output. It does not encrypt them: they are still written in plaintext to the state file, which is one more reason to protect your state.
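The anon_key and service_role_key values are ordinary JWTs signed with your jwt_secret, carrying role claims of anon and service_role respectively. The Supabase self-hosting docs include a key generator; as a rough local alternative, here's a sketch that mints them with openssl — the ten-year expiry and the exact claim set are assumptions, so adjust to your needs:

```shell
# Sketch: mint Supabase API keys as HS256 JWTs signed with jwt_secret.
JWT_SECRET="your-32-char-or-longer-jwt-secret"

# base64url-encode stdin (strip padding, swap URL-unsafe chars)
b64url() { openssl base64 -A | tr '+/' '-_' | tr -d '='; }

sign_key() {
  role="$1"
  header=$(printf '{"alg":"HS256","typ":"JWT"}' | b64url)
  # iat = now, exp = now + ~10 years (assumed lifetime)
  payload=$(printf '{"role":"%s","iss":"supabase","iat":%s,"exp":%s}' \
    "$role" "$(date +%s)" "$(( $(date +%s) + 315360000 ))" | b64url)
  signature=$(printf '%s.%s' "$header" "$payload" \
    | openssl dgst -sha256 -hmac "$JWT_SECRET" -binary | b64url)
  printf '%s.%s.%s\n' "$header" "$payload" "$signature"
}

ANON_KEY=$(sign_key anon)
SERVICE_ROLE_KEY=$(sign_key service_role)
echo "anon_key         = \"$ANON_KEY\""
echo "service_role_key = \"$SERVICE_ROLE_KEY\""
```

Whatever generator you use, the keys must be signed with the same jwt_secret you deploy, or Kong will reject every request.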
Provider Configuration
In main.tf, configure your cloud provider. This example uses DigitalOcean, but the pattern works for any provider:
terraform {
  required_providers {
    digitalocean = {
      source  = "digitalocean/digitalocean"
      version = "~> 2.0"
    }
  }
}

provider "digitalocean" {
  token = var.do_token
}
# SSH key for server access
resource "digitalocean_ssh_key" "supabase" {
  name = "supabase-terraform"
  # file() does not expand ~, so wrap the path in pathexpand()
  public_key = file(pathexpand("~/.ssh/id_rsa.pub"))
}
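If you'd rather not depend on a key already existing on disk, the hashicorp/tls provider can generate one inside Terraform — a sketch (note the private key is then stored in state, so treat the state file as sensitive):

```hcl
resource "tls_private_key" "supabase" {
  algorithm = "ED25519"
}

resource "digitalocean_ssh_key" "supabase_generated" {
  name       = "supabase-terraform"
  public_key = tls_private_key.supabase.public_key_openssh
}
```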
# Firewall rules
resource "digitalocean_firewall" "supabase" {
  name        = "supabase-firewall"
  droplet_ids = [digitalocean_droplet.supabase.id]

  # SSH access (restrict to your IP in production)
  inbound_rule {
    protocol         = "tcp"
    port_range       = "22"
    source_addresses = ["0.0.0.0/0", "::/0"]
  }

  # HTTP/HTTPS
  inbound_rule {
    protocol         = "tcp"
    port_range       = "80"
    source_addresses = ["0.0.0.0/0", "::/0"]
  }

  inbound_rule {
    protocol         = "tcp"
    port_range       = "443"
    source_addresses = ["0.0.0.0/0", "::/0"]
  }

  # Supabase Kong API Gateway
  inbound_rule {
    protocol         = "tcp"
    port_range       = "8000"
    source_addresses = ["0.0.0.0/0", "::/0"]
  }

  outbound_rule {
    protocol              = "tcp"
    port_range            = "all"
    destination_addresses = ["0.0.0.0/0", "::/0"]
  }

  outbound_rule {
    protocol              = "udp"
    port_range            = "all"
    destination_addresses = ["0.0.0.0/0", "::/0"]
  }
}
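To actually enforce the "restrict SSH to your IP" comment above, one approach is to drive the rule from a variable — a sketch, with allowed_ssh_cidrs as a hypothetical variable name:

```hcl
variable "allowed_ssh_cidrs" {
  description = "CIDR blocks allowed to SSH in (e.g. your office IP as a /32)"
  type        = list(string)
  default     = ["0.0.0.0/0"]  # tighten this in terraform.tfvars
}

# Then the SSH rule in the firewall becomes:
# inbound_rule {
#   protocol         = "tcp"
#   port_range       = "22"
#   source_addresses = var.allowed_ssh_cidrs
# }
```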
Provisioning the Server
The droplet resource creates your Supabase server:
resource "digitalocean_droplet" "supabase" {
  name     = "supabase-production"
  image    = "ubuntu-24-04-x64"
  size     = var.droplet_size
  region   = var.region
  ssh_keys = [digitalocean_ssh_key.supabase.fingerprint]

  # Cloud-init script for initial setup
  user_data = templatefile("${path.module}/scripts/cloud-init.yaml", {
    postgres_password  = var.postgres_password
    jwt_secret         = var.jwt_secret
    anon_key           = var.anon_key
    service_role_key   = var.service_role_key
    dashboard_username = var.dashboard_username
    dashboard_password = var.dashboard_password
    supabase_version   = var.supabase_version
  })

  tags = ["supabase", "production"]
}
Cloud-Init Script
The cloud-init.yaml script handles Docker installation and Supabase deployment:
#cloud-config
package_update: true
package_upgrade: true
packages:
  - apt-transport-https
  - ca-certificates
  - curl
  - gnupg
  - lsb-release
  - git
runcmd:
  # Install Docker
  - curl -fsSL https://get.docker.com | sh
  - systemctl enable docker
  - systemctl start docker
  # Install Docker Compose (standalone binary providing the docker-compose command)
  - curl -L "https://github.com/docker/compose/releases/latest/download/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
  - chmod +x /usr/local/bin/docker-compose
  # Clone Supabase
  - git clone --depth 1 https://github.com/supabase/supabase /opt/supabase
  # Configure environment and start. Each runcmd entry runs in its own
  # shell, so the cd must live in the same block as the commands that
  # depend on it. Using | as the sed delimiter avoids breaking on
  # secrets that contain slashes.
  - |
    cd /opt/supabase/docker
    cp .env.example .env
    sed -i 's|POSTGRES_PASSWORD=.*|POSTGRES_PASSWORD=${postgres_password}|' .env
    sed -i 's|JWT_SECRET=.*|JWT_SECRET=${jwt_secret}|' .env
    sed -i 's|ANON_KEY=.*|ANON_KEY=${anon_key}|' .env
    sed -i 's|SERVICE_ROLE_KEY=.*|SERVICE_ROLE_KEY=${service_role_key}|' .env
    sed -i 's|DASHBOARD_USERNAME=.*|DASHBOARD_USERNAME=${dashboard_username}|' .env
    sed -i 's|DASHBOARD_PASSWORD=.*|DASHBOARD_PASSWORD=${dashboard_password}|' .env
    docker-compose up -d
Managing Secrets Properly
Hardcoding secrets in Terraform files is a bad idea—they end up in state files and potentially in version control. Here are better approaches:
Using Environment Variables
Terraform automatically reads TF_VAR_ prefixed environment variables:
export TF_VAR_postgres_password="your-secure-password"
export TF_VAR_jwt_secret="your-32-char-jwt-secret"
terraform apply
Using a .tfvars File (Gitignored)
Create terraform.tfvars and add it to .gitignore:
postgres_password  = "your-secure-password"
jwt_secret         = "your-32-char-jwt-secret"
anon_key           = "your-anon-key"
service_role_key   = "your-service-role-key"
dashboard_password = "your-dashboard-password"
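Whichever mechanism you choose, use genuinely random values. A sketch for generating them locally with openssl — the lengths here are assumptions (hex keeps the values safe to embed in .env and sed commands):

```shell
# Generate strong random credentials as hex strings
POSTGRES_PASSWORD=$(openssl rand -hex 16)   # 32 characters
JWT_SECRET=$(openssl rand -hex 32)          # 64 characters, comfortably over the 32-char minimum
DASHBOARD_PASSWORD=$(openssl rand -hex 16)

# Emit in terraform.tfvars format
cat <<EOF
postgres_password  = "$POSTGRES_PASSWORD"
jwt_secret         = "$JWT_SECRET"
dashboard_password = "$DASHBOARD_PASSWORD"
EOF
```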
Using HashiCorp Vault
For production deployments, integrate with Vault or another secrets manager. This keeps secrets out of your Terraform state entirely. Check our secrets management guide for detailed Vault integration.
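As a sketch of what a Vault integration can look like — the mount name, secret path, and address below are hypothetical, and note that values read through a data source still end up in Terraform state:

```hcl
# Assumes a KV v2 mount named "secret" containing a "supabase" secret
provider "vault" {
  address = "https://vault.example.com:8200"
}

data "vault_kv_secret_v2" "supabase" {
  mount = "secret"
  name  = "supabase"
}

# Reference secrets instead of passing them as variables, e.g.:
# postgres_password = data.vault_kv_secret_v2.supabase.data["postgres_password"]
```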
Multi-Environment Setup
Real projects need staging and production environments. Use Terraform workspaces or separate state files:
# Use workspace name to differentiate environments
locals {
  environment = terraform.workspace

  droplet_sizes = {
    staging    = "s-1vcpu-2gb"
    production = "s-2vcpu-4gb"
  }

  droplet_size = local.droplet_sizes[local.environment]
}

resource "digitalocean_droplet" "supabase" {
  name = "supabase-${local.environment}"
  size = local.droplet_size
  # ... rest of configuration
}
Deploy to different environments:
# Staging
terraform workspace new staging      # first time only
terraform workspace select staging
terraform apply

# Production
terraform workspace new production   # first time only
terraform workspace select production
terraform apply
Each workspace maintains separate state, so your staging changes don't affect production.
Connecting to Your CI/CD Pipeline
Terraform fits naturally into CI/CD workflows. Here's a GitHub Actions example:
name: Deploy Supabase Infrastructure

on:
  push:
    branches: [main]
    paths:
      - 'infrastructure/**'

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v3
        with:
          terraform_version: 1.7.0

      - name: Terraform Init
        working-directory: ./infrastructure
        run: terraform init
        env:
          TF_VAR_do_token: ${{ secrets.DO_TOKEN }}

      - name: Terraform Plan
        working-directory: ./infrastructure
        run: terraform plan -out=tfplan
        env:
          TF_VAR_do_token: ${{ secrets.DO_TOKEN }}
          TF_VAR_postgres_password: ${{ secrets.POSTGRES_PASSWORD }}
          TF_VAR_jwt_secret: ${{ secrets.JWT_SECRET }}
          TF_VAR_anon_key: ${{ secrets.ANON_KEY }}
          TF_VAR_service_role_key: ${{ secrets.SERVICE_ROLE_KEY }}
          TF_VAR_dashboard_password: ${{ secrets.DASHBOARD_PASSWORD }}

      - name: Terraform Apply
        working-directory: ./infrastructure
        run: terraform apply -auto-approve tfplan
        env:
          TF_VAR_do_token: ${{ secrets.DO_TOKEN }}
For more CI/CD patterns, see our complete CI/CD guide.
Handling Supabase Updates
One challenge with IaC is updating Supabase versions. The Supabase team regularly releases updates with new features and security patches. Your Terraform configuration should handle this gracefully.
Add an update script that Terraform can trigger:
resource "null_resource" "supabase_update" {
  triggers = {
    version = var.supabase_version
  }

  connection {
    type        = "ssh"
    user        = "root"
    host        = digitalocean_droplet.supabase.ipv4_address
    private_key = file(pathexpand("~/.ssh/id_rsa"))
  }

  provisioner "remote-exec" {
    inline = [
      "cd /opt/supabase/docker",
      "docker-compose pull",
      "docker-compose up -d"
    ]
  }
}
When you change supabase_version, Terraform re-runs the provisioner, which pulls fresh images for whatever tags are pinned in docker/.env — the variable itself is only a trigger; the actual versions come from that file. For major version upgrades, check our version migration guide.
Outputs for Easy Access
Define outputs to get connection information after deployment:
output "supabase_ip" {
  description = "Supabase server IP address"
  value       = digitalocean_droplet.supabase.ipv4_address
}

output "api_url" {
  description = "Supabase API URL (Kong gateway)"
  value       = "http://${digitalocean_droplet.supabase.ipv4_address}:8000"
}

output "studio_url" {
  description = "Supabase Studio URL (served through Kong, behind the dashboard login)"
  value       = "http://${digitalocean_droplet.supabase.ipv4_address}:8000"
}

output "ssh_command" {
  description = "SSH command to connect"
  value       = "ssh root@${digitalocean_droplet.supabase.ipv4_address}"
}
After terraform apply, you'll see:
supabase_ip = "167.99.123.45"
api_url = "http://167.99.123.45:8000"
studio_url = "http://167.99.123.45:8000"
ssh_command = "ssh root@167.99.123.45"
Trade-Offs and Limitations
Terraform isn't perfect for every situation:
Learning curve: If your team doesn't know Terraform, there's ramp-up time. For a single personal project, this overhead might not be worth it.
State management: Terraform state must be stored securely, because it contains your secrets in plaintext. For teams, you need a remote backend such as Terraform Cloud or S3 with state locking.
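A minimal remote backend sketch for S3 — the bucket, key, and table names below are placeholders you'd replace with your own:

```hcl
terraform {
  backend "s3" {
    bucket         = "my-terraform-state"         # assumed bucket name
    key            = "supabase/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true
    dynamodb_table = "terraform-locks"            # enables state locking
  }
}
```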
Drift detection: Terraform doesn't automatically detect manual changes. If someone SSHs in and modifies the server, Terraform won't know until the next plan — running terraform plan -detailed-exitcode on a schedule can surface drift at the provider level, but changes inside the Docker containers stay invisible to it.
Provider limitations: Not every cloud provider has a mature Terraform provider. Check your provider's support before committing.
For teams already using IaC, adding Supabase is straightforward. For individuals or small teams, consider whether the added complexity justifies the benefits.
How Supascale Simplifies This
All this Terraform configuration gives you reproducible infrastructure—but you still need to manage backups, SSL certificates, OAuth providers, and updates yourself. That's a lot of moving parts.
Supascale handles the operational complexity for self-hosted Supabase. You get automated S3 backups with one-click restore, custom domains with free SSL certificates, and a UI for configuring OAuth providers—all for a one-time purchase of $39.99.
Your Terraform handles infrastructure provisioning. Supascale handles everything that comes after. They complement each other: Terraform for reproducible deployments, Supascale for day-to-day management.
Wrapping Up
Infrastructure as Code transforms self-hosted Supabase from a manual, error-prone process into a repeatable, auditable deployment. With Terraform, you can:
- Provision servers identically across environments
- Track every infrastructure change in Git
- Recover from failures in minutes
- Scale to multiple projects without tribal knowledge
The investment in learning Terraform pays off quickly, especially as your infrastructure grows or your team expands.
