Documentation Improvement - Connecting self-hosted Storage API to S3 #12919
mohannadhussain
started this conversation in
General
I posted about this in Discord and wanted to share my findings here in hopes of improving documentation for others going forward.
Connecting Supabase Storage API to AWS S3
The instructions here apply to the self-hosted version of Supabase, i.e. the one used via `docker-compose`.
Step 1: What you need
Step 2: Configuration
In your supabase directory, open `docker-compose.yml` in your favorite editor. Locate the `storage` section and change the following environment variables:
- `STORAGE_BACKEND: file` to `STORAGE_BACKEND: s3`
- `REGION` to your AWS region, e.g. `us-east-1`
- `GLOBAL_S3_BUCKET` to the name of your AWS S3 bucket
- `TENANT_ID` is the top-level directory (i.e. the S3 path prefix); you can change it if you want.

Now you need to set up AWS authentication, which can be done in one of two ways, depending on your preference:
- In `docker-compose.yml`, you can add environment variables for `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`, or
- Create a file named `credentials`, then map it in `docker-compose.yml` under `volumes` like so: `./credentials:/root/.aws/credentials`. The file's contents follow the standard AWS credentials format (a `[default]` profile with `aws_access_key_id` and `aws_secret_access_key`).

Step 3: Test it
Boot up your setup with `docker-compose up`, then use the UI to create a new bucket and upload a file into it. Now navigate to your S3 bucket in the AWS console and verify your file was uploaded there. The path looks like this: `{TENANT_ID from docker-compose.yml}/{your bucket name}/{your file name}`
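Putting Step 2 together, the relevant parts of the `storage` service in `docker-compose.yml` look roughly like this (a sketch, not the full service definition; the region, bucket name, tenant id, and keys are placeholder values you'd replace with your own):

```yaml
services:
  storage:
    # ...rest of the service definition unchanged...
    environment:
      STORAGE_BACKEND: s3
      REGION: us-east-1            # your AWS region
      GLOBAL_S3_BUCKET: my-bucket  # your S3 bucket name
      TENANT_ID: stub              # top-level directory (S3 path prefix)
      # Option 1: pass credentials directly as environment variables
      AWS_ACCESS_KEY_ID: your-access-key-id
      AWS_SECRET_ACCESS_KEY: your-secret-access-key
    # Option 2: mount a standard AWS credentials file instead
    volumes:
      - ./credentials:/root/.aws/credentials
```

If you go with the mounted file, the `credentials` file uses the standard AWS format, with placeholder values here:

```ini
[default]
aws_access_key_id = your-access-key-id
aws_secret_access_key = your-secret-access-key
```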
A few gotchas to keep in mind: