Describe the bug
When a push of an object to S3 with PutObjectRequest fails for any reason (network issue, latency, ...), the S3 client retries the push operation. If we provided an InputStream that does not implement markSupported(), mark(), and reset(), the retry will fail.
By default, when creating a new S3 client, retry is enabled with one attempt.
Expected Behavior
If markSupported() of the provided InputStream returns false, we should not retry the push operation and should fail fast. A retry attempt only makes sense when we have a guarantee that we can read the input data a second time.
We should either fail fast and raise an exception after the first failed push, or reject the input stream very early and always require an implementation that supports mark()/reset().
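A caller can enforce this contract today, before handing the stream to the SDK. Below is a minimal sketch using only java.io (the helper name requireReplayable is hypothetical, not an SDK API); it also shows that wrapping a raw stream in BufferedInputStream is enough to make markSupported() return true:

```java
import java.io.BufferedInputStream;
import java.io.ByteArrayInputStream;
import java.io.InputStream;

public class ReplayCheck {
    // Hypothetical pre-flight check: reject streams that cannot be re-read on retry.
    static void requireReplayable(InputStream in) {
        if (!in.markSupported()) {
            throw new IllegalArgumentException(
                "InputStream must support mark()/reset() for the upload to be retryable");
        }
    }

    public static void main(String[] args) {
        // ByteArrayInputStream supports mark()/reset() out of the box.
        requireReplayable(new ByteArrayInputStream("hello".getBytes())); // passes

        // A bare InputStream does not; InputStream.markSupported() defaults to false.
        InputStream raw = new InputStream() {
            @Override public int read() { return -1; }
        };
        System.out.println(raw.markSupported());
        // Wrapping it in BufferedInputStream makes it replayable.
        System.out.println(new BufferedInputStream(raw).markSupported());
    }
}
```

Requiring mark()/reset() up front would turn a slow, confusing double timeout into an immediate, explicit error.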
Current Behavior
Here is the stack trace raised:
software.amazon.awssdk.services.s3.model.S3Exception: Your socket connection to the server was not read from or written to within the timeout period. Idle connections will be closed. (Service: S3, Status Code: 400, Request ID: DF9X2DJMH6FN46B6, Extended Request ID: 9ch9A9wKThV//Q1tKuk8R5+0//gIeiQajbaLA/N1zWRH2tyDmEAy1a771XHhs1dqllgzP515Q1E=)
at software.amazon.awssdk.protocols.xml.internal.unmarshall.AwsXmlPredicatedResponseHandler.handleErrorResponse(AwsXmlPredicatedResponseHandler.java:156)
at software.amazon.awssdk.protocols.xml.internal.unmarshall.AwsXmlPredicatedResponseHandler.handleResponse(AwsXmlPredicatedResponseHandler.java:108)
at software.amazon.awssdk.protocols.xml.internal.unmarshall.AwsXmlPredicatedResponseHandler.handle(AwsXmlPredicatedResponseHandler.java:85)
at software.amazon.awssdk.protocols.xml.internal.unmarshall.AwsXmlPredicatedResponseHandler.handle(AwsXmlPredicatedResponseHandler.java:43)
at software.amazon.awssdk.awscore.client.handler.AwsSyncClientHandler$Crc32ValidationResponseHandler.handle(AwsSyncClientHandler.java:95)
at software.amazon.awssdk.core.internal.handler.BaseClientHandler.lambda$successTransformationResponseHandler$6(BaseClientHandler.java:234)
........
Reproduction Steps
See below a broken input stream that injects a failure during the first push. It calls sleep() to cause a timeout error in the middle of the push operation.
private static class S3BreakerInputStream extends InputStream {
    private final byte[] array;
    private int bytesRead;

    S3BreakerInputStream(byte[] array) {
        this.array = array;
    }

    @Override
    public int read() {
        if (bytesRead == 500) {
            try {
                // Inject an error to make the push fail
                Thread.sleep(30000L);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
        if (bytesRead >= array.length) {
            return -1;
        } else {
            return array[bytesRead++];
        }
    }
}
We can reproduce the bug by pushing any object using this input stream. Because of the sleep, the first push attempt fails as expected after ~30 seconds.
Then the retry also fails, after a ~60-second timeout, which is in fact the socket read timeout of the S3 client.
The test runtime is always a little more than 90 seconds. My understanding is that this is the sum of the 30-second sleep explicitly added in the test and the 60-second default socket read timeout.
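The retry fails because the first attempt already consumed the stream, so the second attempt starts at (or near) EOF instead of at the beginning. This can be illustrated with plain java.io, independently of the SDK (the drain helper is only for this demo):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class SecondReadDemo {
    // Read the stream to exhaustion and count the bytes, like an upload attempt would.
    static int drain(InputStream in) throws IOException {
        int n = 0;
        while (in.read() != -1) n++;
        return n;
    }

    public static void main(String[] args) throws IOException {
        byte[] payload = new byte[500];
        // Simulate a non-replayable stream by disabling mark()/reset() support.
        InputStream in = new ByteArrayInputStream(payload) {
            @Override public boolean markSupported() { return false; }
        };

        System.out.println(drain(in)); // first "attempt" reads the full payload
        System.out.println(drain(in)); // the "retry" finds no data left to send
    }
}
```

Without mark()/reset(), the SDK has no way to rewind before the retry, so the second attempt can only hang or send truncated data.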
As a workaround, if I disable the S3 client retry attempt with .overrideConfiguration(a -> a.retryPolicy(RetryPolicy.none())), the push fails as expected after 30 seconds.
Possible Solution
No response
Additional Information/Context
This error message has already been reported in many issues, but I found no issue that actually points to a non-replayable InputStream.
AWS Java SDK version used
2.17.121
JDK version used
openjdk_11.0.19.0.103_11.65.54_x64
Operating System and version
Darwin 21.6.0 (macOS)
What is the proposed solution for this issue, and what is the ETA for this fix?
Is there any plan to enable retry on non-replayable input streams, as was supported in AWS SDK v1 via the setReadLimit option?