AWS S3

Configure Amazon S3 as a backup storage provider. Storing backups in S3 gives you durable, scalable cloud storage off your server.

Prerequisites

  • AWS account
  • IAM user with S3 access
  • S3 bucket created

Create IAM User

  1. Go to AWS IAM Console
  2. Click Users > Create user
  3. Name: supascale-backups
  4. Attach the policy below (see IAM Policy)
  5. After creating the user, open Security credentials > Create access key
  6. Save the Access Key ID and Secret Access Key (the secret is shown only once)

IAM Policy

Create a custom policy that grants only the permissions the backup user needs:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::your-bucket-name",
        "arn:aws:s3:::your-bucket-name/*"
      ]
    }
  ]
}
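If you script your setup, the policy document can be generated rather than hand-edited for each bucket. A minimal sketch, assuming only the Python standard library (the backup_bucket_policy helper is hypothetical, not part of Supascale or AWS):

```python
import json

def backup_bucket_policy(bucket: str) -> dict:
    """Build the least-privilege IAM policy document for one backup bucket."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "s3:PutObject",
                    "s3:GetObject",
                    "s3:DeleteObject",
                    "s3:ListBucket",
                ],
                # s3:ListBucket applies to the bucket ARN itself; the object
                # actions apply to objects inside it, hence two resources.
                "Resource": [
                    f"arn:aws:s3:::{bucket}",
                    f"arn:aws:s3:::{bucket}/*",
                ],
            }
        ],
    }

print(json.dumps(backup_bucket_policy("your-bucket-name"), indent=2))
```

Paste the printed JSON into the IAM console, or pass it to `aws iam create-policy --policy-document`.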

Create S3 Bucket

  1. Go to S3 Console
  2. Click Create bucket
  3. Name: your-company-supascale-backups
  4. Region: Choose closest to your server
  5. Apply the recommended settings below, then create the bucket

Recommended settings:

  • Versioning: Enabled (protects against accidental deletes and overwrites)
  • Encryption: server-side, SSE-S3 or SSE-KMS
  • Block public access: all options enabled
  • Lifecycle rules: auto-delete old versions
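Default server-side encryption can also be applied from the CLI. This is the configuration JSON accepted by `aws s3api put-bucket-encryption` (via `--server-side-encryption-configuration`) for SSE-S3:

```json
{
  "Rules": [
    {
      "ApplyServerSideEncryptionByDefault": {
        "SSEAlgorithm": "AES256"
      }
    }
  ]
}
```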

Configure in Supascale

Via Web UI

  1. Navigate to Cloud Storage
  2. Click Add Provider
  3. Select AWS S3
  4. Enter:
    • Name: "Production S3"
    • Access Key ID
    • Secret Access Key
    • Region
    • Bucket name
  5. Click Test Connection
  6. Click Save

Via API

curl -X POST https://supascale.example.com/api/v1/cloud-storage \
  -H "X-API-Key: your-api-key" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Production S3",
    "type": "s3",
    "isDefault": true,
    "s3Config": {
      "accessKeyId": "AKIAXXXXXXXXXXXXXXXX",
      "secretAccessKey": "your-secret-key",
      "region": "us-east-1",
      "bucket": "your-bucket-name"
    }
  }'
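The same call can be scripted in Python with only the standard library. This sketch builds the request without sending it, reusing the placeholder endpoint, key, and payload from the curl example above:

```python
import json
import urllib.request

def build_register_request(base_url: str, api_key: str, config: dict) -> urllib.request.Request:
    """Build (but do not send) the POST that registers an S3 provider."""
    body = json.dumps(config).encode("utf-8")
    return urllib.request.Request(
        url=f"{base_url}/api/v1/cloud-storage",
        data=body,
        method="POST",
        headers={
            "X-API-Key": api_key,
            "Content-Type": "application/json",
        },
    )

req = build_register_request(
    "https://supascale.example.com",
    "your-api-key",
    {
        "name": "Production S3",
        "type": "s3",
        "isDefault": True,
        "s3Config": {
            "accessKeyId": "AKIAXXXXXXXXXXXXXXXX",
            "secretAccessKey": "your-secret-key",
            "region": "us-east-1",
            "bucket": "your-bucket-name",
        },
    },
)
# Once the placeholders are filled in, send with urllib.request.urlopen(req).
```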

Configuration Options

Option           Required  Description
accessKeyId      Yes       IAM user access key
secretAccessKey  Yes       IAM user secret key
region           Yes       AWS region (us-east-1, eu-west-1, etc.)
bucket           Yes       S3 bucket name
endpoint         No        Custom endpoint (for S3-compatible services)
pathStyle        No        Use path-style URLs instead of virtual-hosted-style
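A small sketch of validating these fields before submitting them (the validate_s3_config helper is hypothetical; the MinIO endpoint is an illustrative example of when the optional endpoint and pathStyle fields matter):

```python
REQUIRED_FIELDS = ("accessKeyId", "secretAccessKey", "region", "bucket")

def validate_s3_config(config: dict) -> dict:
    """Raise if a required s3Config field is missing; pass optional ones through."""
    missing = [field for field in REQUIRED_FIELDS if field not in config]
    if missing:
        raise ValueError(f"missing required s3Config fields: {missing}")
    return config

# S3-compatible services typically need a custom endpoint and path-style
# URLs; MinIO is used here purely as an example.
minio_config = validate_s3_config({
    "accessKeyId": "minio-access-key",
    "secretAccessKey": "minio-secret-key",
    "region": "us-east-1",
    "bucket": "supascale-backups",
    "endpoint": "https://minio.internal:9000",
    "pathStyle": True,
})
```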

S3 Regions

Common regions:

Region          Location
us-east-1       N. Virginia
us-west-2       Oregon
eu-west-1       Ireland
eu-central-1    Frankfurt
ap-southeast-1  Singapore
ap-northeast-1  Tokyo

Choose a region close to your server for faster uploads.

Test Connection

Verify the configuration works (replace provider-id in the URL with the ID of the provider you created):

curl -X POST https://supascale.example.com/api/v1/cloud-storage/provider-id/test \
  -H "X-API-Key: your-api-key"

Response:

{
  "success": true,
  "message": "Connection successful"
}

Lifecycle Rules

Configure automatic cleanup in S3:

  1. Go to bucket Management > Lifecycle rules
  2. Create rule:
    • Name: "Delete old backups"
    • Scope: Entire bucket or prefix
    • Action: Expire current versions
    • Days: 90

Example for tiered storage:

  • Move to Glacier after 30 days
  • Delete after 365 days
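The tiered example above corresponds to a lifecycle configuration like the following, in the JSON shape accepted by `aws s3api put-bucket-lifecycle-configuration`. If versioning is enabled, consider also adding a NoncurrentVersionExpiration action so old versions are cleaned up:

```json
{
  "Rules": [
    {
      "ID": "tiered-backups",
      "Status": "Enabled",
      "Filter": { "Prefix": "" },
      "Transitions": [
        { "Days": 30, "StorageClass": "GLACIER" }
      ],
      "Expiration": { "Days": 365 }
    }
  ]
}
```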

Cost Optimization

Storage Classes

Class        Use Case           Relative Cost
Standard     Frequent access    $$$
Standard-IA  Infrequent access  $$
Glacier      Long-term archive  $

Tips

  1. Use lifecycle rules to move old backups to cheaper tiers
  2. Enable S3 Intelligent-Tiering for automatic optimization
  3. Delete backups you no longer need
  4. Monitor costs with AWS Cost Explorer

Security

Encryption

Enable server-side encryption:

  • SSE-S3: Amazon-managed keys
  • SSE-KMS: Customer-managed keys (more control)

Access Control

  • Block all public access
  • Use IAM policies for access control
  • Enable bucket logging for audit trail
  • Consider VPC endpoints for private access
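One common hardening step, sketched here as a bucket policy (substitute your bucket name in the ARNs), is to deny any request made without TLS:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyInsecureTransport",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::your-bucket-name",
        "arn:aws:s3:::your-bucket-name/*"
      ],
      "Condition": {
        "Bool": { "aws:SecureTransport": "false" }
      }
    }
  ]
}
```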

Troubleshooting

"Access Denied"

  1. Verify access key is correct
  2. Check IAM policy permissions
  3. Verify bucket name and region
  4. Check bucket policy doesn't block access

"Bucket not found"

  1. Verify bucket name spelling
  2. Confirm bucket exists in specified region
  3. Check bucket wasn't deleted

"Slow uploads"

  1. Check network connectivity
  2. Consider using a closer region
  3. Enable S3 Transfer Acceleration
  4. Check for bandwidth limitations