
java.lang.IllegalStateException: The service request was not made within 10 seconds of doBlockingWrite being invoked. Make sure to invoke the service request BEFORE invoking doBlockingWrite if your caller is single-threaded. #4893

Open
ramakrishna-g1 opened this issue Feb 6, 2024 · 10 comments
Labels
bug This issue is a bug. p2 This is a standard priority issue

Comments

@ramakrishna-g1

Describe the bug

The service request was not made within 10 seconds of doBlockingWrite being invoked. Make sure to invoke the service request BEFORE invoking doBlockingWrite if your caller is single-threaded.
at software.amazon.awssdk.core.async.BlockingInputStreamAsyncRequestBody.waitForSubscriptionIfNeeded(BlockingInputStreamAsyncRequestBody.java:110) ~[sdk-core-2.22.2.jar!/:na]
at software.amazon.awssdk.core.async.BlockingInputStreamAsyncRequestBody.writeInputStream(BlockingInputStreamAsyncRequestBody.java:74) ~[sdk-core-2.22.2.jar!/:na]
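
For reference, the check that throws here enforces a call ordering: the async operation must be started (so the SDK can subscribe to the body) before the stream is written. A minimal sketch of the intended ordering, where s3AsyncClient, bucket, key, and inputStream are placeholder names, not from our code:

BlockingInputStreamAsyncRequestBody body = AsyncRequestBody.forBlockingInputStream(null);

// 1. Start the async request first; this is what eventually subscribes to 'body'.
CompletableFuture<PutObjectResponse> response =
        s3AsyncClient.putObject(req -> req.bucket(bucket).key(key), body);

// 2. Only then write the stream. If nothing has subscribed within the timeout
//    (10 seconds by default), this call throws the IllegalStateException above.
long bytes = body.writeInputStream(inputStream);
response.join();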

Expected Behavior

We are experiencing these failures very often, even after upgrading to the latest AWS CRT client (aws-crt-client 2.23.12).
We expect the SDK to wait longer, or to expose an option to increase the timeout; this would be helpful when transferring large volumes of data with large files.

Current Behavior

We are trying to stream a large number of files from a source system to Amazon S3 using the Transfer Manager, reading each stream via HttpURLConnection. Below is sample code:

URL targetURL = new URL("URL");
HttpURLConnection urlConnection = (HttpURLConnection) targetURL.openConnection();
urlConnection.setRequestMethod(HttpMethod.GET.toString());
urlConnection.setRequestProperty(HttpHeaders.ACCEPT, MediaType.ALL_VALUE);

if (urlConnection.getResponseCode() == HttpStatus.OK.value()) {
    // null content length: the size is unknown; the stream is provided later.
    BlockingInputStreamAsyncRequestBody body = AsyncRequestBody.forBlockingInputStream(null);

    // Start the upload first so the SDK can subscribe to the body.
    Upload upload = transferManager.upload(builder -> builder
            .requestBody(body)
            .addTransferListener(UploadProcessListener.create(fileTracker.getPath()))
            .putObjectRequest(req -> req.bucket(s3BucketName).key("v3/" + s3Key + "/" + fileTracker.getPath()))
            .build());

    // Then feed the HTTP response stream into the request body.
    long totalBytes = body.writeInputStream(urlConnection.getInputStream());
}
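
(One note on the snippet: it does not show waiting for the transfer to finish; a complete flow would typically also block on the completion future, e.g.:)

// After writeInputStream returns, wait for the upload itself to complete;
// the completion future also surfaces any upload failure.
CompletedUpload completed = upload.completionFuture().join();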

Reproduction Steps

(Same code as in Current Behavior above.)

Possible Solution

No response

Additional Information/Context

Last week I created a ticket (awslabs/aws-crt-java#754) under aws-crt-java; per the suggestion in the comments there, I'm creating this ticket here.

AWS Java SDK version used

2.23.12

JDK version used

11

Operating System and version

Windows / Linux

@debora-ito
Member

Hi @ramakrishna-g1, apologies for the silence.

We identified an issue with multipart uploads using BlockingInputStream where the client enters a bad state and doesn't recover from it. We are working on a fix.

We'll also consider creating a timeout configuration so this default value can be customized.

We'll keep this issue updated with the progress of the fix.

@debora-ito debora-ito added p2 This is a standard priority issue and removed needs-triage This issue or PR still needs to be triaged. labels Mar 4, 2024
@benarnao
Contributor

benarnao commented Mar 13, 2024

Would this apply when using S3AsyncClient, e.g. S3AsyncClient.crtBuilder().build()?

I am running into a similar issue.

Also, any ETA on a fix? Thanks.

@nredhefferprovidertrust

nredhefferprovidertrust commented Mar 13, 2024

Running into this issue with BlockingInputStreamAsyncRequestBody rather than the output-stream body.

Default S3AsyncClient setup and credentials.

BlockingInputStreamAsyncRequestBody body =
        AsyncRequestBody.forBlockingInputStream(null); // 'null' indicates a stream will be provided later.

CompletableFuture<PutObjectResponse> responseFuture =
        _s3AsyncClient.putObject(r -> r.bucket(bucketName).key(key), body);
body.writeInputStream(inputStream); // <- fails here
return responseFuture.get();

@zoewangg
Contributor

Hey all, we've exposed an option that allows users to configure subscribeTimeout via #5000; could you try it?

BlockingOutputStreamAsyncRequestBody.builder()
                                    .contentLength(1024L)
                                    .subscribeTimeout(Duration.ofSeconds(30))
                                    .build();
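
(A minimal usage sketch of where that body plugs in; s3AsyncClient, bucket, key, and data are placeholder names, not from this thread. The request must still be started before writing:)

BlockingOutputStreamAsyncRequestBody body =
        BlockingOutputStreamAsyncRequestBody.builder()
                                            .contentLength(1024L)
                                            .subscribeTimeout(Duration.ofSeconds(30))
                                            .build();

// Start the request first; subscribeTimeout bounds how long the body
// waits for the SDK to subscribe before it gives up.
CompletableFuture<PutObjectResponse> responseFuture =
        s3AsyncClient.putObject(r -> r.bucket(bucket).key(key), body);
try (OutputStream os = body.outputStream()) {
    os.write(data); // must write exactly contentLength (1024) bytes in total
}
responseFuture.join();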

@vswamy3

vswamy3 commented Mar 14, 2024

In which version is the fix available?

@nredhefferprovidertrust

In which version is the fix available?

2.25.8

@nredhefferprovidertrust

This issue describes the timeout problem in the BlockingInputStreamAsyncRequestBody, but the change made by #5000 adds the configuration option to BlockingOutputStreamAsyncRequestBody. Is a similar configuration option going to be exposed for the BlockingInputStreamAsyncRequestBody as well?

@zoewangg
Contributor

Hi @nredhefferprovidertrust, #4893 was created to add the same config for BlockingInputStreamAsyncRequestBody.

@vswamy3

vswamy3 commented Mar 15, 2024

What is the best fail-safe value for .subscribeTimeout() in a production environment where we are uploading thousands of messages per minute?

@mohithm2

Hi @zoewangg, I see that you have provided an option to extend the timeout, which is good, but it still doesn't solve the original issue of the client going into an unhealthy state.

So, is there going to be a fix for that?
