I'm interested in using Litestream to create backups of databases for different users in a SaaS application. Each user's database will be served from an isolated Docker/Kubernetes container.
As such, I'd like each user's container to have AWS credentials for Litestream which are restricted to just their area of the S3 bucket - if something goes wrong I want to ensure the credentials in a container cannot be used to access data for other users.
It looks like the best way to do that is using STS - Security Token Service - and assume_role. This lets me retrieve a temporary (valid for up to 12 hours) AWS AccessKeyId/SecretAccessKey/SessionToken and I can attach an additional JSON policy to it which will restrict access to just files in the S3 bucket that begin with /dedicated-prefix-for-that-account.
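As a sketch of that approach: the helper below builds an inline session policy scoped to one tenant's prefix and passes it to STS AssumeRole via boto3. The role ARN, bucket, and prefix names are illustrative assumptions, not values from this thread, and the 12-hour duration only works if the role's configured maximum session duration allows it.

```python
import json


def scoped_policy(bucket: str, prefix: str) -> dict:
    """Session policy restricting access to one tenant's prefix in the bucket."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
                "Resource": f"arn:aws:s3:::{bucket}/{prefix}/*",
            },
            {
                # Litestream also lists objects, so allow ListBucket but only
                # for keys under this tenant's prefix.
                "Effect": "Allow",
                "Action": "s3:ListBucket",
                "Resource": f"arn:aws:s3:::{bucket}",
                "Condition": {"StringLike": {"s3:prefix": [f"{prefix}/*"]}},
            },
        ],
    }


def mint_credentials(role_arn: str, bucket: str, prefix: str) -> dict:
    """Return temporary AccessKeyId/SecretAccessKey/SessionToken for one tenant."""
    import boto3  # AWS SDK for Python

    sts = boto3.client("sts")
    resp = sts.assume_role(
        RoleArn=role_arn,
        RoleSessionName=f"litestream-{prefix}",
        # The effective permissions are the intersection of the role's policy
        # and this inline session policy.
        Policy=json.dumps(scoped_policy(bucket, prefix)),
        DurationSeconds=12 * 3600,  # requires the role's max session duration to be 12h
    )
    return resp["Credentials"]
```

The key property is that even if the role itself can reach the whole bucket, credentials minted this way cannot: the session policy intersects with the role policy, so a leaked token from one container is confined to that tenant's prefix.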
Here's the challenge: if the tokens only last for 12 hours, what's the right way to feed them to Litestream?
I'm planning to run my application in a container using litestream replicate ... -exec "myapplication -p 8081" - so the obvious approach appears to be restarting the container with freshly injected secrets every twelve hours, but I'm not excited about the impact that will have on user experience if each container has several seconds of unavoidable downtime every twelve hours.
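For reference, the restart approach might look like the sketch below: put the temporary credentials into the standard AWS SDK environment variables and exec Litestream as the container entrypoint. This assumes Litestream's S3 client picks up AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_SESSION_TOKEN from the environment (the usual aws-sdk-go convention); whether the session token is honored is exactly the open question here. The application name and port are illustrative.

```python
import os


def build_env(creds: dict) -> dict:
    """Environment for the Litestream process, carrying the temporary STS credentials."""
    return dict(
        os.environ,
        AWS_ACCESS_KEY_ID=creds["AccessKeyId"],
        AWS_SECRET_ACCESS_KEY=creds["SecretAccessKey"],
        AWS_SESSION_TOKEN=creds["SessionToken"],
    )


def run_litestream(creds: dict) -> None:
    """Replace the current process with litestream, as a container entrypoint would."""
    os.execvpe(
        "litestream",
        ["litestream", "replicate", "-exec", "myapplication -p 8081"],
        build_env(creds),
    )
```

Every twelve hours the orchestrator would mint fresh credentials and recreate the container with them, which is where the unavoidable few seconds of downtime comes from.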
Are there better ways to handle this? Or is this a feature that Litestream could add, the ability to switch to different AWS S3 credentials without restarting the process?
On further thought I think this may be out of scope for Litestream - there are plenty of other unrelated situations where you might want to restart a container with fresh secrets, e.g. if you rotate your database credentials.
Yeah, I think restarting the container is probably your best option in Kubernetes. I'd like to add support for sending SIGHUP to reload the config file without restarting Litestream but that doesn't help if you're passing in credentials via environment variables.
If Litestream could reload on SIGHUP, I could change my implementation to run from a config file, then update that file and signal Litestream inside the container - so I'd definitely embrace that feature.
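Assuming SIGHUP reload were added, the in-container refresh could look something like this sketch: rewrite the config with freshly minted credentials, then signal the Litestream process. The access-key-id and secret-access-key replica keys match Litestream's documented config format, but the file path, bucket, and prefix are illustrative, and note the temporary SessionToken would also need to reach Litestream somehow - a config key for it is not confirmed, so that remains an open part of this proposal.

```python
import os
import signal

CONFIG_PATH = "/etc/litestream.yml"  # illustrative path


def render_config(creds: dict, bucket: str, prefix: str, db_path: str) -> str:
    """Render a Litestream config embedding the temporary credentials."""
    return f"""\
dbs:
  - path: {db_path}
    replicas:
      - url: s3://{bucket}/{prefix}
        access-key-id: {creds['AccessKeyId']}
        secret-access-key: {creds['SecretAccessKey']}
"""


def refresh(litestream_pid: int, creds: dict) -> None:
    """Write the new config, then ask Litestream to reload it (hypothetical SIGHUP support)."""
    with open(CONFIG_PATH, "w") as f:
        f.write(render_config(creds, "my-bucket", "tenant-42", "/data/app.db"))
    os.kill(litestream_pid, signal.SIGHUP)
```

A small sidecar or cron loop inside the container could call refresh shortly before each 12-hour expiry, avoiding the container restart entirely.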