Uploading a 500MB video file to your self-hosted Supabase instance sounds straightforward until your user's network hiccups at 80% and the entire upload fails. Standard HTTP uploads don't handle interruptions gracefully, leaving users frustrated and your server wasting bandwidth on failed attempts.
This is where resumable uploads become essential. Supabase Storage implements the open tus resumable upload protocol (commonly written TUS), enabling uploads that can pause, resume, and survive network failures. For self-hosted instances, this requires specific configuration that differs from the managed platform.
Why Standard Uploads Fall Short
The default supabase.storage.from('bucket').upload() method works well for small files. However, it has hard limits and reliability issues:
- 6MB practical limit: While the API supports up to 5GB, anything over 6MB becomes increasingly unreliable
- No resume capability: Network interruption means starting over
- Memory pressure: Large files load entirely into memory before transmission
- Timeout risks: Slow connections may exceed server timeouts
For applications handling user-generated content, media files, or data exports, these limitations quickly become blockers.
Understanding TUS Protocol
TUS is an open protocol specifically designed for resumable file uploads. Unlike traditional multipart uploads, TUS:
- Creates a unique upload URL for each file
- Uploads data in configurable chunks (default 6MB in Supabase)
- Tracks upload progress server-side
- Allows clients to query upload state and resume from the last byte
Supabase Storage v3 implements TUS with support for files up to 50GB. The upload URL remains valid for 24 hours, giving users plenty of time to complete interrupted uploads.
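To make the resume mechanics concrete, here is a small sketch in plain TypeScript (no Supabase dependency; the 500MB figure is the example from the introduction) of what a TUS client recomputes after an interruption:

```typescript
const CHUNK_SIZE = 6 * 1024 * 1024; // Supabase's default 6MB chunk

// Number of PATCH requests needed to send a file of `totalBytes`
function chunkCount(totalBytes: number): number {
  return Math.ceil(totalBytes / CHUNK_SIZE);
}

// After a failure, the client issues a HEAD request for the server's
// confirmed Upload-Offset, and only the remainder is re-sent
function remainingBytes(totalBytes: number, uploadOffset: number): number {
  return totalBytes - uploadOffset;
}

const total = 500 * 1024 * 1024; // a 500MB video
console.log(chunkCount(total)); // 84 chunks in all
console.log(remainingBytes(total, Math.floor(total * 0.8))); // 100MB left after failing at 80%
```

Contrast this with a standard upload, where a failure at 80% discards all 400MB already transferred.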
Configuring Self-Hosted Supabase for Resumable Uploads
Before implementing resumable uploads in your application, ensure your self-hosted Supabase deployment is properly configured.
Storage Service Configuration
Check your docker-compose.yml for these critical settings in the storage service:
```yaml
storage:
  image: supabase/storage-api:v1.0.6
  environment:
    FILE_SIZE_LIMIT: "53687091200" # 50GiB in bytes
    TUS_URL_PATH: /storage/v1/upload/resumable
    TUS_URL_EXPIRY_MS: "86400000" # 24 hours
    TUS_ENABLE: "true"
```
The FILE_SIZE_LIMIT controls the maximum file size. Adjust this based on your use case—not every application needs 50GB uploads.
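If you prefer to derive the byte value rather than hard-code it, a throwaway helper like this (hypothetical, using binary units) keeps the limit readable:

```typescript
const GiB = 1024 ** 3;

// docker-compose environment values are strings, hence the String()
function fileSizeLimit(gib: number): string {
  return String(gib * GiB);
}

console.log(fileSizeLimit(50)); // "53687091200"
console.log(fileSizeLimit(5));  // "5368709120" -- plenty for most apps
```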
S3-Compatible Backend Considerations
If you're using MinIO, Cloudflare R2, or another S3-compatible storage backend (covered in our S3 storage guide), you may encounter a specific issue.
Cloudflare R2 and some other providers don't implement S3's object tagging feature. This causes TUS uploads to fail with HTTP 500 errors. The fix:
```yaml
storage:
  environment:
    TUS_ALLOW_S3_TAGS: "false"
```
This disables the tagging feature that TUS uses, resolving compatibility issues with R2 and similar providers.
Reverse Proxy Timeout Configuration
Large uploads take time. Your reverse proxy needs timeouts that accommodate this:
Nginx:
```nginx
location /storage/v1/upload/resumable {
    proxy_pass http://storage:5000;
    proxy_read_timeout 3600s;
    proxy_send_timeout 3600s;
    client_max_body_size 50G;

    # Required for chunked uploads
    proxy_http_version 1.1;
    proxy_set_header Connection "";
}
```
Traefik:
```yaml
http:
  middlewares:
    large-upload:
      buffering:
        maxRequestBodyBytes: 53687091200 # 50GB
        retryExpression: "IsNetworkError() && Attempts() < 3"
```
Implementing Resumable Uploads
With the server configured, implement resumable uploads in your client application.
Using tus-js-client Directly
The tus-js-client library provides fine-grained control over the upload process:
```typescript
import * as tus from 'tus-js-client';
import { createClient } from '@supabase/supabase-js';

const SUPABASE_URL = 'https://your-supabase-url.com';
const supabase = createClient(SUPABASE_URL, 'your-anon-key');

async function uploadLargeFile(file: File, bucket: string, path: string) {
  // Get the session for authentication
  const { data: { session } } = await supabase.auth.getSession();
  if (!session) {
    throw new Error('User must be authenticated');
  }

  return new Promise((resolve, reject) => {
    const upload = new tus.Upload(file, {
      endpoint: `${SUPABASE_URL}/storage/v1/upload/resumable`,
      retryDelays: [0, 1000, 3000, 5000, 10000],
      chunkSize: 6 * 1024 * 1024, // 6MB chunks
      headers: {
        authorization: `Bearer ${session.access_token}`,
        'x-upsert': 'true', // Overwrite if exists
      },
      uploadDataDuringCreation: true,
      removeFingerprintOnSuccess: true,
      metadata: {
        bucketName: bucket,
        objectName: path,
        contentType: file.type,
        cacheControl: '3600',
      },
      onError: (error) => {
        console.error('Upload failed:', error);
        reject(error);
      },
      onProgress: (bytesUploaded, bytesTotal) => {
        const percentage = ((bytesUploaded / bytesTotal) * 100).toFixed(2);
        console.log(`${percentage}%`);
      },
      onSuccess: () => {
        console.log('Upload complete');
        resolve(upload.url);
      },
    });

    // Check for previous upload to resume
    upload.findPreviousUploads().then((previousUploads) => {
      if (previousUploads.length > 0) {
        console.log('Resuming previous upload');
        upload.resumeFromPreviousUpload(previousUploads[0]);
      }
      upload.start();
    });
  });
}
```
Using Uppy for User-Friendly UI
For applications needing a polished upload interface, Uppy integrates seamlessly with TUS:
```typescript
import Uppy from '@uppy/core';
import Dashboard from '@uppy/dashboard';
import Tus from '@uppy/tus';
import { createClient } from '@supabase/supabase-js';

const supabase = createClient(SUPABASE_URL, SUPABASE_ANON_KEY);

async function initializeUppy() {
  const { data: { session } } = await supabase.auth.getSession();

  const uppy = new Uppy({
    restrictions: {
      maxFileSize: 50 * 1024 * 1024 * 1024, // 50GB
      allowedFileTypes: ['video/*', 'image/*', 'application/pdf'],
    },
  })
    .use(Dashboard, {
      inline: true,
      target: '#upload-area',
      showProgressDetails: true,
      proudlyDisplayPoweredByUppy: false,
    })
    .use(Tus, {
      endpoint: `${SUPABASE_URL}/storage/v1/upload/resumable`,
      headers: {
        authorization: `Bearer ${session?.access_token}`,
      },
      chunkSize: 6 * 1024 * 1024,
      allowedMetaFields: ['bucketName', 'objectName', 'contentType'],
    });

  uppy.on('file-added', (file) => {
    uppy.setFileMeta(file.id, {
      bucketName: 'uploads',
      objectName: `${Date.now()}-${file.name}`,
      contentType: file.type,
    });
  });

  uppy.on('complete', (result) => {
    console.log('Uploads complete:', result.successful);
  });

  return uppy;
}
```
Handling Concurrent Upload Conflicts
TUS generates unique URLs for each upload, but concurrent uploads to the same path create conflicts. The server handles this gracefully—only one client succeeds while others receive HTTP 409 Conflict.
Design your application to handle this:
```typescript
// In the tus.Upload options:
onError: (error) => {
  if (error.message.includes('409')) {
    // Another client is uploading to this object:
    // generate a new path and restart
    const newPath = `${originalPath}-${Date.now()}`;
    // Re-create the tus.Upload with metadata.objectName set to newPath
  }
},
```
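An alternative is to sidestep the conflict entirely by making object paths unique per attempt; a sketch with a hypothetical helper:

```typescript
// Hypothetical helper: append a timestamp before the extension so a
// retry never collides with an in-flight upload to the original path
function uniquePath(originalPath: string, now: number = Date.now()): string {
  const dot = originalPath.lastIndexOf('.');
  if (dot === -1) return `${originalPath}-${now}`;
  return `${originalPath.slice(0, dot)}-${now}${originalPath.slice(dot)}`;
}

console.log(uniquePath('videos/demo.mp4', 1700000000000));
// videos/demo-1700000000000.mp4
```

The trade-off: unique paths avoid 409s but defeat the x-upsert overwrite behavior, so use them only when each attempt should produce a distinct object.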
Monitoring Upload Progress Server-Side
For applications tracking upload status beyond the client session, store progress in your database:
```sql
CREATE TABLE upload_sessions (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  user_id UUID REFERENCES auth.users(id),
  file_name TEXT NOT NULL,
  file_size BIGINT NOT NULL,
  bytes_uploaded BIGINT DEFAULT 0,
  status TEXT DEFAULT 'in_progress',
  tus_upload_url TEXT,
  created_at TIMESTAMPTZ DEFAULT NOW(),
  updated_at TIMESTAMPTZ DEFAULT NOW()
);

-- Enable RLS
ALTER TABLE upload_sessions ENABLE ROW LEVEL SECURITY;

CREATE POLICY "Users can manage their own uploads"
  ON upload_sessions FOR ALL
  USING (auth.uid() = user_id);
```
Update this table from your client as chunks complete, allowing users to see pending uploads across devices.
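A sketch of that client-side bookkeeping, assuming the upload_sessions table above (the Supabase client is passed in untyped to keep the example dependency-free, and the 50MB threshold is an arbitrary choice):

```typescript
const PROGRESS_WRITE_INTERVAL = 50 * 1024 * 1024; // persist every 50MB, not every chunk

// Pure check so the database isn't hit on every onProgress callback
function shouldPersist(bytesUploaded: number, lastPersisted: number): boolean {
  return bytesUploaded - lastPersisted >= PROGRESS_WRITE_INTERVAL;
}

// `supabase` is an initialized supabase-js client; call this from onProgress
// whenever shouldPersist() returns true
async function persistProgress(supabase: any, sessionId: string, bytesUploaded: number) {
  await supabase
    .from('upload_sessions')
    .update({ bytes_uploaded: bytesUploaded, updated_at: new Date().toISOString() })
    .eq('id', sessionId);
}
```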
Performance Optimization for Large Files
Direct Storage Hostname
For uploads over 100MB, bypass Kong and connect directly to the storage service:
```typescript
// Instead of: https://your-domain.com/storage/v1/upload/resumable
// Use:        https://storage.your-domain.com/upload/resumable
```
This requires DNS configuration but reduces latency significantly.
Chunk Size Tuning
The default 6MB chunk size balances reliability with throughput. For high-bandwidth scenarios:
```typescript
// Faster connections can use larger chunks
const chunkSize = connectionSpeed > 100 // Mbps
  ? 25 * 1024 * 1024 // 25MB
  : 6 * 1024 * 1024; // 6MB
```
Larger chunks reduce HTTP overhead but mean more data must be re-sent after a failed chunk, since TUS resumes from the last fully confirmed offset.
Memory Management
For very large files, avoid loading the entire file into memory:
```typescript
// Use File objects directly, not ArrayBuffers
const file = event.target.files[0]; // File object, not read into memory

// DON'T do this for large files:
// const buffer = await file.arrayBuffer(); // Loads entire file into RAM
```
Troubleshooting Common Issues
"tus: failed to resume upload" in Dashboard
A known issue with self-hosted instances affects dashboard uploads over 6MB. The workaround: upload programmatically using the methods above rather than through Supabase Studio.
Uploads Stall at Specific Percentage
Check your reverse proxy client_max_body_size and timeout settings. Also verify the storage container has sufficient memory:
```yaml
storage:
  deploy:
    resources:
      limits:
        memory: 2G
```
Authentication Errors Mid-Upload
Long uploads may outlast JWT expiration. Refresh the token before starting:
```typescript
const { data: { session }, error } = await supabase.auth.refreshSession();
if (error || !session) {
  // Re-authenticate user
}
```
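You can also check proactively whether the token is close to expiring before kicking off a long upload. A sketch that decodes the JWT's standard exp claim (Node's Buffer is used for base64 decoding; in a browser you would use atob, and strictly speaking JWTs use base64url encoding):

```typescript
// Returns true if the JWT expires within `seconds` from `now`
// (decoding only -- no signature verification is needed for this check)
function jwtExpiresWithin(token: string, seconds: number, now: number = Date.now()): boolean {
  const payload = JSON.parse(Buffer.from(token.split('.')[1], 'base64').toString());
  return payload.exp * 1000 - now < seconds * 1000;
}
```

If it returns true, call supabase.auth.refreshSession() as above before constructing the tus.Upload.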
When to Use Resumable vs Standard Uploads
Not every upload needs TUS. Use standard uploads when:
- Files are under 5MB
- Network is reliable (internal services)
- Simplicity outweighs resilience
Use resumable uploads when:
- Files exceed 6MB
- Users are on mobile or unreliable networks
- Uploads represent significant user investment (video editing, data exports)
- Your application handles media or large documents
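Those criteria can be collapsed into a small routing helper (a sketch; the threshold mirrors the lists above):

```typescript
const SIX_MB = 6 * 1024 * 1024;

// Decide which upload path to take, per the criteria above
function shouldUseResumable(sizeBytes: number, reliableNetwork: boolean): boolean {
  return sizeBytes > SIX_MB || !reliableNetwork;
}

console.log(shouldUseResumable(2 * 1024 * 1024, true));   // false -- small file, stable network
console.log(shouldUseResumable(500 * 1024 * 1024, true)); // true  -- large file
```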
Wrapping Up
Resumable uploads transform unreliable large file transfers into resilient operations. For self-hosted Supabase, the key configuration points are:
- Enable TUS in storage service environment
- Disable S3 tags for R2/compatible backends
- Configure proxy timeouts appropriately
- Use tus-js-client or Uppy in your application
With proper setup, your self-hosted instance handles 50GB files as gracefully as Supabase Cloud—without the bandwidth costs of failed retries.
