HEAD - 503 Slow Down #783
Comments
I'm thinking this problem should be solved in the Flysystem driver, either by providing some kind of throttling functionality in the S3 driver or by a dedicated Spaces driver with the right throttling baked in.
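For reference, the AWS SDK for PHP v3 already ships a retry configuration whose `adaptive` mode adds client-side rate limiting on top of exponential backoff, which is close to the "throttling baked in" idea. A minimal sketch, assuming Laravel forwards extra disk options to the underlying `Aws\S3\S3Client`; the disk and env names are placeholders and this is untested against Spaces:

```php
// config/filesystems.php (sketch)
'disks' => [
    'spaces' => [
        'driver'   => 's3',
        'key'      => env('DO_SPACES_KEY'),
        'secret'   => env('DO_SPACES_SECRET'),
        'region'   => env('DO_SPACES_REGION', 'ams3'),
        'bucket'   => env('DO_SPACES_BUCKET'),
        'endpoint' => env('DO_SPACES_ENDPOINT'), // e.g. https://ams3.digitaloceanspaces.com

        // Real S3Client option: 'adaptive' retries back off exponentially
        // and throttle the client's own request rate.
        'retries'  => [
            'mode'         => 'adaptive',
            'max_attempts' => 10,
        ],
    ],
],
```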
@repat Did you come up with a solution? We are also experiencing this issue with Digital Ocean Spaces when executing the cleanup command. Would be happy to help.
Just had the same issue now
Also interested in possible throttling options, especially for the S3 driver https://github.com/thephpleague/flysystem-aws-s3-v3, which we use for Digital Ocean Spaces. Found https://stackoverflow.com/questions/54364522/how-to-optimise-aws-s3-usage-to-handle-please-reduce-your-request-rate-problem stating:
We use Digital Ocean Spaces with only one Space, which is 250 GB minimum, so plenty for us for now; we'd rather not distribute staging and production backups across two Spaces. Also, this only happened on backup cleanup, not during the actual backup itself. Funnily enough, cleaning the staging backups went through fine a minute later. And because cleanup did not work for production, we now have 26 backups instead of 25.
Any news about this?
Just for reference: still an issue. The problem seems to be a bug at DigitalOcean though: https://www.digitalocean.com/community/questions/rate-limiting-on-spaces
@EmilMoe They say
and I doubt Laravel Backup does more than 200 requests per second..? NB: they recommend a CDN and refer to https://www.digitalocean.com/docs/spaces/resources/performance-tips/
@jasperf I'm sure Spatie Backup is nowhere near 200 requests/s. Putting a CDN in front makes no sense either; it might work around the problem, but it just seems like something is wrong at DO. They must have a miscalculation in their API.
We do have files that are larger than 500 MB, like 550-600 MB for example, and 10-20 of them from two servers. Perhaps we could use multi-part uploads for those, as they recommend? Not sure if that is built into Laravel Backup. The other thing I will look into is timing: two servers should not back up on the same day and time either (see the scheduling sketch below). Just had another SlowDown response:
..
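On the timing point above: staggering the backup and cleanup runs per server is just a matter of scheduling. A sketch for `app/Console/Kernel.php`, with the times as placeholders (give the second server different slots, e.g. 03:00/03:30):

```php
// app/Console/Kernel.php (sketch) — stagger the runs so two machines
// never hit the same Space at the same time.
protected function schedule(Schedule $schedule)
{
    $schedule->command('backup:clean')->dailyAt('01:00');
    $schedule->command('backup:run')->dailyAt('01:30');
}
```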
Even now I had this error uploading only a few images to the origin (images are served from the EDGE URL), so not even a large zipped file. Opened another ticket at Digital Ocean on this to get to the bottom of it all. Also looking again into combining files smaller than 1 MB and uploading 500+ MB files in parts. That is besides possibly queueing. But I do hope there are easier ways here, and I also still believe rate limiting is triggered too soon.
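For the "uploading in parts" idea, here is a standalone sketch using the AWS SDK for PHP's `MultipartUploader` directly, as DO recommends for large objects. Bucket, key and path are placeholders, and this is not wired into Laravel Backup itself:

```php
<?php
require 'vendor/autoload.php';

use Aws\S3\S3Client;
use Aws\S3\MultipartUploader;
use Aws\Exception\MultipartUploadException;

$client = new S3Client([
    'version'  => 'latest',
    'region'   => 'us-east-1', // Spaces ignores this but the SDK requires it
    'endpoint' => 'https://ams3.digitaloceanspaces.com',
]);

// Split one ~600 MB object into 100 MB parts instead of a single PUT.
$uploader = new MultipartUploader($client, '/backups/backup-2019-01-01.zip', [
    'bucket'    => 'my-space',
    'key'       => 'backups/backup-2019-01-01.zip',
    'part_size' => 100 * 1024 * 1024,
]);

try {
    $result = $uploader->upload();
    echo "Uploaded to {$result['ObjectURL']}\n";
} catch (MultipartUploadException $e) {
    echo 'Upload failed: ' . $e->getMessage() . "\n";
}
```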
Super weird! I have the same issue and my files are only about 15 KB...
@pmochine Yeah, been at this for a few months now. Spaces is nice and not expensive, but as general storage for images I had to stop using them, as I ran into rate-limiting errors all the time despite using 4 buckets with them. As for backups, I do still use Digital Ocean Spaces, but my staging backup cleanup has been failing for 3 months now. Been talking to the S3 adapter package maintainer at https://github.com/thephpleague/flysystem-aws-s3-v3/issues/205, and perhaps exponential backoffs in combination with some other tricks can make this work, but it seems like a hassle and simply tough for me to implement. For image storage we are now considering volumes, and even moving elsewhere, to our Dutch provider TransIP to be precise. They have Big Storage you can attach to VPSs with relative ease, and they are not too pricey. I'd prefer not to though, as I liked Digital Ocean: their database management, floating IP addresses, API and so on. But this is a serious issue. NB: using Amazon S3 would also be an option, as they are more generous with their limits, but they are more expensive and calculating cost is a major pain.
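To make the exponential-backoff idea from that thread concrete, here is a generic sketch. `retryWithBackoff()` is a hypothetical helper, not part of any package; the only package API used is `AwsException::getStatusCode()` from the AWS SDK:

```php
<?php

use Aws\Exception\AwsException;

function retryWithBackoff(callable $operation, int $maxAttempts = 5)
{
    $attempt = 0;

    while (true) {
        try {
            return $operation();
        } catch (AwsException $e) {
            $attempt++;

            // Give up after $maxAttempts, or when it isn't a throttling response.
            if ($attempt >= $maxAttempts || !in_array($e->getStatusCode(), [429, 503], true)) {
                throw $e;
            }

            // Wait 1s, 2s, 4s, 8s... plus up to 1s of jitter before retrying.
            $delayMs = (2 ** ($attempt - 1)) * 1000 + random_int(0, 1000);
            usleep($delayMs * 1000);
        }
    }
}

// Usage: wrap whichever call is being throttled, e.g. a delete during cleanup.
// retryWithBackoff(fn () => $client->deleteObject(['Bucket' => 'my-space', 'Key' => $key]));
```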
Sorry for annoying everyone who is getting a notification, but @jasperf, do you have any experience with Object Storage by vultr.com? They have a limit of 400 requests per second, not 200 like DO.
@pmochine The 400 requests per second Vultr offers is better again. Have you used them? I have only worked with one of their servers in the past, not with Object Storage. But if their API is more accurate, besides allowing double the requests, it may be interesting. That, and possible CDN locations..
Almost a year later, I started to use B2 as a storage provider (since the cost is way below S3's), and I started to get the same errors. Has anyone arrived at an acceptable solution since the last time this issue was in the spotlight?
Hey @jpmurray, I found your comment through a Google search since we had the same issue. We also used Backblaze B2 as storage. We replaced the cleanup strategy with our own class, and finally everything went OK.
@TobyMaxham So you added a new cleanup strategy, if I understood correctly. Interesting. Would you mind sharing that with us, Toby?
I also ran into rate-limiting problems using this package and Digital Ocean Spaces. Is the issue related to this package? I will have a look at the DefaultStrategy class.
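For anyone going down that road: the package lets you point `backup.cleanup.strategy` at your own class. A rough sketch of a throttled strategy along the lines of what was described above; the base class, collection type and config hook are from spatie/laravel-backup, while the keep-count and the `sleep()` throttle are my own assumptions, untested:

```php
<?php
// app/Backup/ThrottledCleanupStrategy.php (sketch)

namespace App\Backup;

use Spatie\Backup\BackupDestination\BackupCollection;
use Spatie\Backup\Tasks\Cleanup\CleanupStrategy;

class ThrottledCleanupStrategy extends CleanupStrategy
{
    /** How many backups to keep per destination (assumption). */
    private $keep = 25;

    public function deleteOldBackups(BackupCollection $backups)
    {
        // The collection is ordered newest first; never touch the newest backup.
        $backups->shift();

        foreach ($backups->slice($this->keep - 1) as $backup) {
            $backup->delete();

            // Space the DELETE requests out so cleanup stays far below
            // the provider's request-rate limit.
            sleep(1);
        }
    }
}
```

Then register it in `config/backup.php` via `'cleanup' => ['strategy' => \App\Backup\ThrottledCleanupStrategy::class, ...]`.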
Same issue. No one debugged this yet? |
I think it's an issue with DO |
It is not; I have it with another provider (Backblaze B2).
related: #618
Any fixes so far? @mdavis1982 @freekmurze @leeuwd