Retrying Temporal Issues with S3 #1399

ralphleon opened this Issue Mar 18, 2013 · 2 comments

Our application performs many small S3 transactions throughout the day, and we see 2-3 4xx errors a day from Boto, of the form:

S3ResponseError: 400 Bad Request
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>RequestTimeout</Code><Message>Your socket connection to the server was not read from or written to within the timeout period. Idle connections will be closed.</Message><RequestId>...</RequestId><HostId>...</HostId></Error>
S3ResponseError: 403 Forbidden
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>RequestTimeTooSkewed</Code><Message>The difference between the request time and the current time is too large.</Message><MaxAllowedSkewMilliseconds>900000</MaxAllowedSkewMilliseconds><RequestId>...</RequestId><HostId>...</HostId><RequestTime>Wed, 13 Mar 2013 01:27:27 GMT</RequestTime><ServerTime>2013-03-13T01:42:40Z</ServerTime></Error>

These errors are always transient and succeed on retry. Is there a good way to tap into Boto's retry system to handle them? It seems silly to write a retry system on top of Boto's retry system.
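Until boto's own retry list covers these codes, one workaround is a thin retry wrapper around individual S3 calls (boto's `num_retries` config option raises the retry count, but doesn't change which errors qualify). The sketch below is only illustrative: `S3ResponseError` is stubbed so the example is self-contained, and the set of codes treated as transient is an assumption, not boto's.

```python
import time

class S3ResponseError(Exception):
    """Stand-in for boto's S3ResponseError: carries HTTP status and S3 error code."""
    def __init__(self, status, error_code):
        super().__init__(status, error_code)
        self.status = status
        self.error_code = error_code

# Assumed-transient S3 error codes (based on the errors quoted in this issue).
TRANSIENT_CODES = {"RequestTimeout", "RequestTimeTooSkewed"}

def retry_transient(fn, retries=3, backoff=0.5):
    """Wrap fn so transient S3 errors are retried with exponential backoff."""
    def wrapper(*args, **kwargs):
        for attempt in range(retries + 1):
            try:
                return fn(*args, **kwargs)
            except S3ResponseError as e:
                # Give up on the last attempt, or on non-transient errors.
                if attempt == retries or e.error_code not in TRANSIENT_CODES:
                    raise
                time.sleep(backoff * (2 ** attempt))
    return wrapper
```

In practice you would wrap the boto call itself, e.g. `retry_transient(key.set_contents_from_string)(data)`, keeping the retry policy out of the business logic.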

Owner

garnaat commented Mar 19, 2013

We can configure which errors are retried. I find this strange, though, because the HTTP spec says that 400 and 403 errors should NEVER be retried without modifying the request to address the issue. Also, if your clock is out of sync with S3 on this request, why would it not continue to be out of sync on the next request if you haven't done anything to adjust your clock in between?
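For reference, the skew can be computed directly from the two timestamps the RequestTimeTooSkewed body above reports; in this instance it exceeds the 15-minute (900000 ms) allowance by only 13 seconds, which is why a retry a moment later can succeed even without touching the clock:

```python
from datetime import datetime, timezone

# Timestamps copied verbatim from the RequestTimeTooSkewed error above.
request_time = datetime.strptime(
    "Wed, 13 Mar 2013 01:27:27 GMT", "%a, %d %b %Y %H:%M:%S %Z"
).replace(tzinfo=timezone.utc)
server_time = datetime.strptime(
    "2013-03-13T01:42:40Z", "%Y-%m-%dT%H:%M:%SZ"
).replace(tzinfo=timezone.utc)

skew_ms = abs((server_time - request_time).total_seconds()) * 1000
print(skew_ms)            # 913000.0
print(skew_ms > 900000)   # True: just over MaxAllowedSkewMilliseconds
```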

I'm following up with the S3 team to figure out whether they really want customers to retry on these errors. It seems completely inconsistent with the HTTP spec.

ralphleon commented

Sorry, I think I wasn't being specific enough with the word "retry". Our work is generated via SQS messages, so I meant retry at a logical level -- i.e. there isn't a bug in our stack. I'm not sure about retrying the exact message. I'm also talking to the S3 team about this.
