
Thoughts on using Litestream with time-limited AWS credentials? #246

Closed
simonw opened this issue Nov 6, 2021 · 3 comments


simonw commented Nov 6, 2021

I'm interested in using Litestream to create backups of databases for different users in a SaaS application. Each user's database will be served from an isolated Docker/Kubernetes container.

As such, I'd like each user's container to have AWS credentials for Litestream which are restricted to just their area of the S3 bucket - if something goes wrong I want to ensure the credentials in a container cannot be used to access data for other users.

It looks like the best way to do that is using STS - Security Token Service - and assume_role. This lets me retrieve a temporary (valid for up to 12 hours) AWS AccessKeyId/SecretAccessKey/SessionToken and I can attach an additional JSON policy to it which will restrict access to just files in the S3 bucket that begin with /dedicated-prefix-for-that-account.
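For reference, here's a minimal sketch of that per-tenant session policy in Python (the helper name, bucket, prefix, and the role ARN in the comment are all hypothetical; the issue doesn't specify a language or concrete names):

```python
import json


def scoped_policy(bucket: str, prefix: str) -> str:
    """Inline session policy limiting temporary credentials to one tenant's prefix."""
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [
            {
                # Object access only under the tenant's prefix.
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
                "Resource": f"arn:aws:s3:::{bucket}/{prefix}/*",
            },
            {
                # Listing restricted to keys under that prefix.
                "Effect": "Allow",
                "Action": "s3:ListBucket",
                "Resource": f"arn:aws:s3:::{bucket}",
                "Condition": {"StringLike": {"s3:prefix": f"{prefix}/*"}},
            },
        ],
    })


# With boto3, the policy is attached at assume-role time, roughly:
#   creds = boto3.client("sts").assume_role(
#       RoleArn="arn:aws:iam::123456789012:role/litestream-tenant",  # hypothetical role
#       RoleSessionName="tenant-42",                                 # hypothetical name
#       Policy=scoped_policy("my-backups", "dedicated-prefix-for-that-account"),
#       DurationSeconds=12 * 3600,  # the 12-hour maximum mentioned above
#   )["Credentials"]  # -> AccessKeyId / SecretAccessKey / SessionToken
```

The inline `Policy` can only narrow (never widen) what the assumed role's own policy allows, which is what makes the per-tenant restriction safe.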

Here's the challenge: if the tokens only last for 12 hours, what's the right way to feed them to Litestream?

I'm planning to run my application in a container using litestream replicate ... -exec "myapplication -p 8081" - so the obvious approach is to restart the container with freshly injected secrets every twelve hours, but I'm not excited about the user-experience impact if each container suffers several seconds of unavoidable downtime on every restart.
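That restart approach can be sketched as a small supervisor loop (Python for illustration; `run_with_rotation` and `fetch_credentials` are hypothetical names, and in practice the child command would be the litestream replicate invocation above with `period_s` just under twelve hours):

```python
import os
import subprocess


def run_with_rotation(cmd, fetch_credentials, period_s, max_cycles):
    """Restart `cmd` with freshly fetched credentials every `period_s` seconds.

    `fetch_credentials` returns extra environment variables (e.g. the
    AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY / AWS_SESSION_TOKEN from a
    fresh STS assume-role call). Returns the child's exit code if it exits
    on its own, or None after `max_cycles` forced restarts.
    """
    for _ in range(max_cycles):
        env = {**os.environ, **fetch_credentials()}
        proc = subprocess.Popen(cmd, env=env)
        try:
            # Child exited by itself within the rotation window.
            return proc.wait(timeout=period_s)
        except subprocess.TimeoutExpired:
            # Window elapsed: stop the child; the next iteration restarts
            # it with newly fetched credentials.
            proc.terminate()
            proc.wait()
    return None
```

The gap between `terminate()` and the next `Popen` is exactly the per-rotation downtime the question is worried about.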

Are there better ways to handle this? Or is this a feature that Litestream could add, the ability to switch to different AWS S3 credentials without restarting the process?


simonw commented Nov 7, 2021

On further thought I think this may be out of scope for Litestream - there are plenty of other unrelated situations where you might want to restart a container with fresh secrets, e.g. if you rotate your database credentials.

I found this article about the more general problem: https://medium.com/devops-dudes/how-to-propagate-a-change-in-kubernetes-secrets-by-restarting-dependent-pods-b71231827656

@simonw simonw closed this as completed Nov 7, 2021
benbjohnson (Owner) commented

Yeah, I think restarting the container is probably your best option in Kubernetes. I'd like to add support for sending SIGHUP to reload the config file without restarting Litestream but that doesn't help if you're passing in credentials via environment variables.


simonw commented Nov 10, 2021

If Litestream could reload on SIGHUP, I could change my implementation to run from a config file, then update that file and signal Litestream inside the container - so I'd definitely embrace that feature.
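That rewrite-then-signal workflow might look like the following sketch (Python for illustration; the config keys, `rotate` helper, and the in-process handler standing in for Litestream's proposed SIGHUP reload are all assumptions):

```python
import os
import signal

# Hypothetical minimal config shape holding the rotating credentials.
CONFIG_TEMPLATE = """\
access-key-id: {key}
secret-access-key: {secret}
"""

reloaded = {"count": 0}


def on_sighup(signum, frame):
    # In Litestream itself this would re-read the config file;
    # here we just record that the signal arrived.
    reloaded["count"] += 1


def rotate(config_path, key, secret, pid):
    # 1. Rewrite the config file with the fresh credentials.
    with open(config_path, "w") as f:
        f.write(CONFIG_TEMPLATE.format(key=key, secret=secret))
    # 2. Tell the replicating process to reload its config.
    os.kill(pid, signal.SIGHUP)


signal.signal(signal.SIGHUP, on_sighup)
```

A sidecar or cron job inside the container could then call `rotate(...)` each time it fetches fresh STS credentials, with no restart of the replication process.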

2 participants