An attacker can flood the storage by directly uploading files #13
Comments
Fails in what way? If the attachment is changed on the record before the file is promoted, the stored file is deleted. However, if the record gets deleted, the promotion will error and the stored file won't get deleted; need to fix that.
When we are using a cloud storage (e.g. S3): presumably, if the S3 API call fails, the file isn't promoted, and the cache ultimately gets cleared, so we lose the file?
Yeah.
Back when attache was uploading asynchronously, I experienced that failure scenario: the retries didn't persist as long as the S3 outage, and that incident resulted in some data loss. For a while we investigated several methods to make the retry more robust, but eventually settled on synchronous upload instead: either the end user experiences an upload error, or the data is safe in S3; no silent data loss. So for promotion, I'm back to worrying about the failure state again, which is why I'm picking your brain on this. If there's no satisfactory solution, I wonder if there's another algorithm to address the original attack vector.
I think this can be fixed by just setting a long-enough timespan. For example, you make the Sidekiq job retry for 1 day, in increasing periods (e.g. 1st retry in 5 seconds, 2nd is in 15 seconds after last retry, 3rd in a minute etc.), and you also make the cache storage clear out only files that are more than 1 day old. It's statistically impossible that S3 is down for 24 hours. |
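The retry schedule described above could be sketched like this (a minimal model; the interval values beyond the first three are assumptions, not a prescribed Sidekiq configuration):

```ruby
# Seconds to wait before the 1st, 2nd, and 3rd retry.
RETRY_INTERVALS = [5, 15, 60]

# Delay in seconds before retry number `count` (0-based); after the
# first three quick retries, back off by whole hours so the schedule
# outlasts any plausible S3 outage.
def retry_delay(count)
  RETRY_INTERVALS[count] || (count + 1) * 3600
end

# The total wall-clock time covered by two dozen retries comfortably
# exceeds the 1-day window after which the cache clears old files.
total = (0...24).sum { |count| retry_delay(count) }
```

The key design point is that the cache's expiry window (1 day) must be at least as long as the retry schedule's total span, otherwise the cached file could be deleted while the promotion job is still retrying.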
Guess it'll have to be an option for
- if configured, calling `backup_file` will copy the file from the default bucket to a "backup bucket"
- addresses #13 (aka the "cache" vs "store" concept in refile)
- reference: simplified model of shrinerb/shrine#25 (comment)
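The proposed `backup_file` behavior could be sketched as follows (a hypothetical in-memory model; the bucket names and method signature are assumptions, not attache's actual API):

```ruby
# Buckets modelled as plain hashes keyed by object key.
DEFAULT_BUCKET = {}
BACKUP_BUCKET  = {}

# Copy a stored object from the default bucket into the backup bucket,
# so a later deletion from the default bucket cannot destroy the data.
def backup_file(key)
  BACKUP_BUCKET[key] = DEFAULT_BUCKET.fetch(key)  # raises if key is missing
end
```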
Though attache presign uploads offer the same protection as S3 direct upload, within that duration an attacker can still upload as many files as they like.

To mitigate that, we can adopt the refile and shrine procedure of always uploading to `cache`, then promoting to `store` only when the client app sends a confirmation ping.

Current proposal is for `/promote` to mimic the `/delete` endpoint.

@janko-m if async promotion fails in the background, what does a shrine user do?
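The cache-then-promote flow above can be sketched like this (an in-memory model with assumed names, not shrine's or attache's real API):

```ruby
CACHE = {}  # uploads waiting for the client app's confirmation ping
STORE = {}  # permanent storage; files arrive here only via promote

# Direct uploads always land in the cache, never straight in the store.
def upload(id, data)
  CACHE[id] = { data: data, uploaded_at: Time.now }
end

# Called when the client app confirms; moves the file cache -> store.
def promote(id)
  entry = CACHE.delete(id) || raise("unknown upload #{id}")
  STORE[id] = entry[:data]
end

# Periodic cleanup: unconfirmed uploads older than max_age seconds are
# dropped, bounding the storage an attacker can flood with direct uploads.
def clear_cache(max_age)
  CACHE.delete_if { |_, entry| Time.now - entry[:uploaded_at] > max_age }
end
```

With this split, an attacker who never sends the confirmation ping can only occupy cache space until the next cleanup run, which is what addresses the flooding attack in the issue title.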