Audit Logging for Self-Hosted Supabase: A Complete PGAudit Guide

Learn how to set up comprehensive audit logging for self-hosted Supabase using PGAudit to meet compliance requirements.


When running self-hosted Supabase in production, compliance requirements often mandate detailed audit trails of database activity. Whether you're working toward SOC 2, HIPAA, GDPR, or internal security policies, you need to know who did what, when, and to which data.

This is where PGAudit comes in. It's a PostgreSQL extension that builds on the server's native logging to provide detailed session- and object-level audit records. Unlike Supabase Cloud (which has some audit features built into the dashboard), self-hosted deployments require manual configuration—but this also gives you complete control over what gets logged and where those logs go.

Why Audit Logging Matters for Self-Hosted Deployments

Self-hosting Supabase is often driven by compliance requirements. Organizations in healthcare, finance, and government frequently can't use managed services due to data residency rules or internal security policies. But self-hosting alone doesn't make you compliant—you need proper audit trails.

The core requirements across most compliance frameworks converge on these essentials:

  • User identification: Link every action to a specific account
  • Timestamps: Precise, synchronized time records
  • Event types: Document whether data was created, read, updated, or deleted
  • Success/failure indicators: Track both successful operations and failed attempts
  • Source information: Know where requests originated

Without proper audit logging, you can't prove compliance during audits. Worse, you can't investigate security incidents effectively.

Understanding Supabase's Built-in Auth Audit Logs

Before diving into PGAudit, it's worth knowing that Supabase already captures authentication events automatically. These logs live in the auth.audit_log_entries table and track:

  • Sign-ins and sign-outs
  • Password changes
  • Token refreshes
  • OAuth events
  • Failed authentication attempts

You can query these directly:

SELECT 
  id,
  created_at,
  ip_address,
  payload->>'action' as action,
  payload->>'actor_id' as user_id
FROM auth.audit_log_entries
ORDER BY created_at DESC
LIMIT 100;
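
The same table supports quick aggregations; for example, counting events by type over the last day. The exact `action` values depend on your GoTrue version, so treat the names you see as version-specific:

```sql
-- Event counts by action over the last 24 hours
SELECT
  payload->>'action' AS action,
  COUNT(*) AS events
FROM auth.audit_log_entries
WHERE created_at > NOW() - INTERVAL '24 hours'
GROUP BY 1
ORDER BY events DESC;
```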

This covers authentication events, but it doesn't tell you what users did after they authenticated. For that, you need PGAudit.

Setting Up PGAudit for Self-Hosted Supabase

Since you're self-hosting, you have full control over PostgreSQL configuration. Here's how to enable and configure PGAudit properly.

Step 1: Enable the Extension

PGAudit must be present in shared_preload_libraries before the extension can be created; the Supabase Postgres image ships with it preloaded, so on a standard setup you can connect to your PostgreSQL instance and enable the extension directly:

CREATE EXTENSION IF NOT EXISTS pgaudit;

Step 2: Configure Logging Scope

PGAudit supports several logging modes. For most self-hosted deployments, you'll want to configure it at the session or global level first, then refine based on your needs.

Add these settings to your PostgreSQL configuration (or set them via SQL for testing):

-- Log DDL, write (INSERT/UPDATE/DELETE), and role/permission statements
ALTER SYSTEM SET pgaudit.log = 'ddl, write, role';

-- Include the fully qualified names of relations referenced by each statement
ALTER SYSTEM SET pgaudit.log_relation = 'on';

-- Log parameter values (be careful with sensitive data)
ALTER SYSTEM SET pgaudit.log_parameter = 'on';

Then reload the configuration:

SELECT pg_reload_conf();
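
To confirm the new values are active after the reload, inspect the standard pg_settings catalog view:

```sql
-- Current PGAudit settings
SELECT name, setting
FROM pg_settings
WHERE name LIKE 'pgaudit.%';
```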

Step 3: Configure for Docker Compose Deployments

If you're using the standard Docker Compose setup for Supabase, you'll need to modify your PostgreSQL configuration. Create a custom postgresql.conf or add to your existing one:

# PGAudit configuration
shared_preload_libraries = 'pgaudit'
pgaudit.log = 'ddl, write'
pgaudit.log_relation = on
pgaudit.log_parameter = on
pgaudit.log_statement_once = on

Mount this configuration in your docker-compose.yml:

services:
  db:
    image: supabase/postgres:15.1.0.147
    volumes:
      - ./volumes/db/data:/var/lib/postgresql/data
      - ./postgresql.conf:/etc/postgresql/postgresql.conf
    command: postgres -c config_file=/etc/postgresql/postgresql.conf
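
Note that changes to shared_preload_libraries only take effect after a full restart of the db container—a reload is not enough. After restarting, confirm the library is loaded:

```sql
-- Should list pgaudit among the preloaded libraries
SHOW shared_preload_libraries;
```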

Audit Logging Strategies for Different Compliance Needs

Not every deployment needs the same level of logging. Here's how to tailor PGAudit to your requirements.

Minimal Logging (Security Baseline)

For basic security monitoring, log DDL changes and write operations:

ALTER SYSTEM SET pgaudit.log = 'ddl, write';

This captures:

  • Table/schema changes
  • INSERT, UPDATE, DELETE operations
  • Role and permission changes

Comprehensive Logging (SOC 2 / HIPAA)

For stricter compliance, add read logging for sensitive tables:

-- Global settings
ALTER SYSTEM SET pgaudit.log = 'ddl, write, role';

-- Object-level auditing for sensitive tables
CREATE ROLE audit_reader;
GRANT SELECT ON auth.users TO audit_reader;

ALTER SYSTEM SET pgaudit.role = 'audit_reader';

This creates a dedicated role for object-level auditing. Any SELECT operations on tables granted to audit_reader will be logged.

Selective Table Auditing

You probably don't want to log every SELECT on every table—that would generate massive log volumes. Instead, target specific sensitive tables:

-- Create audit role
CREATE ROLE pgaudit_role NOLOGIN;

-- Grant access to tables you want to audit
GRANT ALL ON auth.users TO pgaudit_role;
GRANT ALL ON public.orders TO pgaudit_role;
GRANT ALL ON public.payments TO pgaudit_role;

-- Set the audit role
ALTER SYSTEM SET pgaudit.role = 'pgaudit_role';
SELECT pg_reload_conf();

Now only operations on auth.users, public.orders, and public.payments generate object-level audit entries; session-level logging from pgaudit.log still applies on top of this.
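
A quick way to sanity-check the setup is to run one statement against an audited table and one against an unaudited table, then watch the log. Here public.products is just a stand-in for any table not granted to pgaudit_role:

```sql
-- Touches an audited table: expect an AUDIT entry in the log
-- (object logging fires even when zero rows match)
UPDATE public.orders SET total = total WHERE id IS NULL;

-- Reads an unaudited table: no object-level entry is written
SELECT count(*) FROM public.products;
```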

Reading and Managing Audit Logs

PGAudit writes to the standard PostgreSQL log. Each entry is a comma-separated line: audit type (SESSION or OBJECT), statement ID, substatement ID, statement class (e.g. WRITE, DDL), command, object type, object name, the statement text, and any bound parameters:

AUDIT: SESSION,1,1,WRITE,INSERT,TABLE,public.orders,"INSERT INTO orders (user_id, total) VALUES ($1, $2)",<parameters>

Log Location in Docker Deployments

By default, logs go to stdout in Docker containers. To persist them, configure logging in your docker-compose.yml:

services:
  db:
    logging:
      driver: "json-file"
      options:
        max-size: "200m"
        max-file: "10"

Or mount a log directory:

volumes:
  - ./volumes/db/logs:/var/log/postgresql

Integrating with Log Management

For production deployments, ship logs to a centralized system. Common approaches:

Using Fluentd/Fluent Bit:

services:
  fluent-bit:
    image: fluent/fluent-bit:latest
    volumes:
      - ./fluent-bit.conf:/fluent-bit/etc/fluent-bit.conf
      - ./volumes/db/logs:/var/log/postgresql:ro

Direct to S3 or cloud storage: Use log rotation with upload scripts, or tools like Vector to ship logs directly to your storage backend.

For a complete log management setup, see our guide on log management for self-hosted Supabase.

Performance Considerations

Audit logging isn't free. Every logged statement adds overhead. Here's how to keep performance acceptable:

What to Avoid

Don't log everything:

-- This will destroy performance
ALTER SYSTEM SET pgaudit.log = 'all';

Don't log parameters for high-volume tables: If you're logging reads on frequently-accessed tables, disable parameter logging to reduce log volume:

ALTER SYSTEM SET pgaudit.log_parameter = 'off';

A Phased Rollout

Rather than enabling everything at once, ramp up logging in stages:

  1. Start with DDL and write operations only
  2. Add object-level auditing for sensitive tables
  3. Monitor log volume and query performance
  4. Adjust based on actual compliance needs

Storage Planning

Audit logs can grow quickly. A moderately active application might generate:

  • 10-50 MB/day with minimal logging (DDL + writes)
  • 100-500 MB/day with comprehensive logging
  • Multiple GB/day if logging all reads

Build log rotation and archival into your backup strategy.
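
If the server writes its own log files (logging_collector = on), you can check their on-disk footprint directly from SQL. pg_ls_logdir() requires superuser or pg_monitor membership:

```sql
-- Server log files, newest first
SELECT name, size, modification
FROM pg_ls_logdir()
ORDER BY modification DESC;
```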

Creating a Custom Audit Trail Table

Sometimes you need audit data queryable within the database itself—not just in log files. Here's a lightweight approach using triggers:

-- Create a dedicated schema for audit data
CREATE SCHEMA IF NOT EXISTS audit;

-- Create audit table
CREATE TABLE audit.activity_log (
  id BIGSERIAL PRIMARY KEY,
  table_name TEXT NOT NULL,
  operation TEXT NOT NULL,
  old_data JSONB,
  new_data JSONB,
  changed_by UUID,
  changed_at TIMESTAMPTZ DEFAULT NOW(),
  client_ip INET
);

-- Create index for common queries
CREATE INDEX idx_activity_log_table ON audit.activity_log(table_name);
CREATE INDEX idx_activity_log_changed_at ON audit.activity_log(changed_at);

-- Generic audit trigger function
CREATE OR REPLACE FUNCTION audit.log_changes()
RETURNS TRIGGER AS $$
BEGIN
  INSERT INTO audit.activity_log (
    table_name,
    operation,
    old_data,
    new_data,
    changed_by,
    client_ip
  ) VALUES (
    TG_TABLE_SCHEMA || '.' || TG_TABLE_NAME,
    TG_OP,
    CASE WHEN TG_OP IN ('UPDATE', 'DELETE') THEN to_jsonb(OLD) END,
    CASE WHEN TG_OP IN ('INSERT', 'UPDATE') THEN to_jsonb(NEW) END,
    auth.uid(),
    inet_client_addr()
  );
  
  RETURN COALESCE(NEW, OLD);
END;
$$ LANGUAGE plpgsql SECURITY DEFINER;

Apply to sensitive tables:

CREATE TRIGGER audit_orders
  AFTER INSERT OR UPDATE OR DELETE ON public.orders
  FOR EACH ROW EXECUTE FUNCTION audit.log_changes();

CREATE TRIGGER audit_payments  
  AFTER INSERT OR UPDATE OR DELETE ON public.payments
  FOR EACH ROW EXECUTE FUNCTION audit.log_changes();
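
To verify both triggers are attached, information_schema lists them (you'll see one row per event type per trigger):

```sql
-- Audit triggers currently installed
SELECT event_object_schema, event_object_table, trigger_name
FROM information_schema.triggers
WHERE trigger_name LIKE 'audit_%';
```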

This approach works alongside PGAudit—use both for defense in depth.

Querying Your Audit Data

With the trigger-based approach, you can easily query audit history:

-- Recent changes to orders
SELECT 
  operation,
  old_data->>'status' as old_status,
  new_data->>'status' as new_status,
  changed_by,
  changed_at
FROM audit.activity_log
WHERE table_name = 'public.orders'
ORDER BY changed_at DESC
LIMIT 50;

-- All changes by a specific user
SELECT *
FROM audit.activity_log
WHERE changed_by = 'user-uuid-here'
ORDER BY changed_at DESC;

-- Changes with no user context (e.g. service-role or direct database access)
SELECT *
FROM audit.activity_log
WHERE changed_by IS NULL
ORDER BY changed_at DESC;

Retention and Archival

Audit logs should be retained based on your compliance requirements:

  • SOC 2: Typically 1 year minimum
  • HIPAA: 6 years
  • GDPR: Varies, but "only as long as necessary"

For the trigger-based audit table, implement automatic archival:

-- Archive table mirroring the main log's columns
CREATE TABLE IF NOT EXISTS audit.activity_log_archive
  (LIKE audit.activity_log);

-- Archive old audit records
CREATE OR REPLACE FUNCTION audit.archive_old_logs()
RETURNS void AS $$
BEGIN
  -- Move records older than 90 days to the archive table
  INSERT INTO audit.activity_log_archive
  SELECT * FROM audit.activity_log
  WHERE changed_at < NOW() - INTERVAL '90 days';

  -- Delete the archived records from the main table
  -- (NOW() is stable within the transaction, so both statements use the same cutoff)
  DELETE FROM audit.activity_log
  WHERE changed_at < NOW() - INTERVAL '90 days';
END;
$$ LANGUAGE plpgsql;

Schedule this with pg_cron:

SELECT cron.schedule(
  'archive-audit-logs',
  '0 2 * * *',  -- Daily at 2 AM
  'SELECT audit.archive_old_logs()'
);
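
This assumes the pg_cron extension is installed and preloaded (the Supabase Postgres image ships with it). You can confirm the job was registered by querying pg_cron's metadata table:

```sql
-- List scheduled jobs
SELECT jobid, schedule, command FROM cron.job;
```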

How Supascale Simplifies Audit Log Management

Managing audit logging infrastructure is just one more operational burden for self-hosted Supabase. With Supascale, you can reduce this complexity:

  • Automated backups include your audit tables, ensuring compliance data is protected
  • One-click restore means you can recover audit history if something goes wrong
  • Multiple project management lets you maintain consistent audit policies across environments

At $39.99 for unlimited projects, you get the infrastructure management sorted so you can focus on compliance configuration rather than backup scripts.

Common Pitfalls and How to Avoid Them

Pitfall 1: Logging sensitive data. PGAudit with log_parameter = on will log actual values, including passwords, tokens, and PII. Use this carefully, or disable it for tables with sensitive columns.

Pitfall 2: Log storage filling up. Audit logs can grow unexpectedly. Set up monitoring and alerts on log volume before it becomes a production incident.

Pitfall 3: Forgetting to audit admin actions. Don't just audit application users; audit direct database access too. This catches both legitimate admin operations and potential breaches.

Pitfall 4: Not testing log recovery. Audit logs are useless if you can't read them when needed. Periodically test your ability to retrieve and analyze historical logs.

Conclusion

Audit logging for self-hosted Supabase requires deliberate configuration, but it's essential for compliance and security. Start with PGAudit for comprehensive database-level logging, add trigger-based audit tables for queryable history, and implement proper log management for long-term retention.

The key is matching your logging strategy to actual compliance requirements—not logging everything (which kills performance) or logging nothing (which fails audits).

For help managing the operational aspects of self-hosted Supabase, including backups of your audit data, check out Supascale's features or explore our documentation to get started.

