@aws-sdk/client-translate : socket hang up Error [TimeoutError] #4219

Closed
3 tasks done
bsriniv opened this issue Nov 22, 2022 · 14 comments
Assignees
Labels
bug This issue is a bug. p2 This is a standard priority issue

Comments


bsriniv commented Nov 22, 2022

Checkboxes for prior research

Describe the bug

We are experiencing intermittent socket hang up errors (TimeoutError) when using the SDK, specifically the TranslateClient.

Error in AWS translate service : socket hang up Error [TimeoutError]: socket hang up
    at connResetException (internal/errors.js:639:14)
    at TLSSocket.socketOnEnd (_http_client.js:499:23)
    at TLSSocket.emit (events.js:412:35)
    at TLSSocket.emit (domain.js:475:12)
    at /home/deploy/node_modules/newrelic/lib/shim/shim.js:1313:22
    at LegacyContextManager.runInContext (/home/deploy/node_modules/newrelic/lib/context-manager/legacy-context-manager.js:59:23)
    at Shim.applySegment (/home/deploy/node_modules/newrelic/lib/shim/shim.js:1303:25)
    at TLSSocket.wrapper [as emit] (/home/deploy/node_modules/newrelic/lib/shim/shim.js:1904:17)
    at endReadableNT (internal/streams/readable.js:1333:12)
    at processTicksAndRejections (internal/process/task_queues.js:82:21) {
  code: 'ECONNRESET',
  '$metadata': { attempts: 3, totalRetryDelay: 426 }
}

SDK version number

@aws-sdk/client-translate@3.192.0 and @aws-sdk/client-translate@3.213.0

Which JavaScript Runtime is this issue in?

Node.js

Details of the browser/Node.js/ReactNative version

v14.21.1

Reproduction Steps

const { TranslateClient, TranslateTextCommand } = require('@aws-sdk/client-translate')

const AwsTranslateClient = new TranslateClient({
  region: 'us-east-1'
})


const awsTranslateText = async ( originalText, targetLanguageCode, sourceLanguageCode ) => {
  let data = null
  try {
    const params = {
      Text: originalText,
      SourceLanguageCode: sourceLanguageCode || 'en',
      TargetLanguageCode: targetLanguageCode,
    }
    const command = new TranslateTextCommand(params)
    data = await AwsTranslateClient.send(command)
  } catch (err) {
    console.log(err, `Error in AWS translate service : ${err.message}`)
  }
  return data
}

Observed Behavior

Error in AWS translate service : socket hang up Error [TimeoutError]: socket hang up
    at connResetException (internal/errors.js:639:14)
    at TLSSocket.socketOnEnd (_http_client.js:499:23)
    at TLSSocket.emit (events.js:412:35)
    at TLSSocket.emit (domain.js:475:12)
    at /home/deploy/node_modules/newrelic/lib/shim/shim.js:1313:22
    at LegacyContextManager.runInContext (/home/deploy/node_modules/newrelic/lib/context-manager/legacy-context-manager.js:59:23)
    at Shim.applySegment (/home/deploy/node_modules/newrelic/lib/shim/shim.js:1303:25)
    at TLSSocket.wrapper [as emit] (/home/deploy/node_modules/newrelic/lib/shim/shim.js:1904:17)
    at endReadableNT (internal/streams/readable.js:1333:12)
    at processTicksAndRejections (internal/process/task_queues.js:82:21) {
  code: 'ECONNRESET',
  '$metadata': { attempts: 3, totalRetryDelay: 426 }
}
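The $metadata above shows the SDK's built-in retry already attempted the call 3 times. Where transient ECONNRESET failures are tolerable, one app-level mitigation is an additional retry with exponential backoff around the send call. A minimal sketch — retryOnHangUp and its parameters are hypothetical, not part of the SDK:

```javascript
// Hypothetical retry wrapper: fn stands in for a call such as
// () => AwsTranslateClient.send(command).
const retryOnHangUp = async (fn, maxAttempts = 3, baseDelayMs = 100) => {
  let lastErr
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await fn()
    } catch (err) {
      lastErr = err
      // Only retry connection resets; rethrow anything else immediately.
      if (err.code !== 'ECONNRESET') throw err
      // Exponential backoff: 100ms, 200ms, 400ms, ...
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** (attempt - 1)))
    }
  }
  throw lastErr
}
```

Usage would look like `const data = await retryOnHangUp(() => AwsTranslateClient.send(command))`.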

Expected Behavior

The API call should succeed without the socket hang up TimeoutError.

Possible Solution

No response

Additional Information/Context

We are still seeing the socket hang up error after updating to the latest translate client version (3.213.0) and enabling keepAlive via httpOptions, as shown below:

const { Agent } = require('https')

const AwsTranslateClient = new TranslateClient({
  region: 'us-east-1',
  httpOptions: {
    agent: new Agent({ keepAlive: true }),
  },
})
@bsriniv bsriniv added bug This issue is a bug. needs-triage This issue or PR still needs to be triaged. labels Nov 22, 2022
@yenfryherrerafeliz yenfryherrerafeliz self-assigned this Nov 27, 2022
@yenfryherrerafeliz
Copy link
Contributor

Hi @bsriniv, thanks for opening this issue. This error commonly occurs when there is a connectivity problem at the time of the request. I can see it retried 3 times and got the same result, which is odd. I tried to reproduce this error on my end but could not, so are there any other details you think I should consider to replicate it?

Thanks!

@yenfryherrerafeliz yenfryherrerafeliz added response-requested Waiting on additional info and feedback. Will move to "closing-soon" in 7 days. and removed needs-triage This issue or PR still needs to be triaged. labels Dec 9, 2022
@github-actions
Copy link

This issue has not received a response in 1 week. If you still think there is a problem, please leave a comment to prevent the issue from closing automatically.

@github-actions github-actions bot added the closing-soon and closed-for-staleness labels and removed the closing-soon label Dec 17, 2022

bsriniv commented Dec 22, 2022

Hi @yenfryherrerafeliz,
We had a discussion with Jake Izumi about this issue on 22 Dec 2022. As per Jake Izumi's suggestion, we need to reopen this issue, but the reopen option is not showing for me. Can you please reopen it?

@yenfryherrerafeliz
Contributor

Hi @bsriniv, I have reopened the issue. Do you have any other details that could help me in reproducing this on my end?

Thanks!

@github-actions github-actions bot removed the response-requested Waiting on additional info and feedback. Will move to "closing-soon" in 7 days. label Dec 24, 2022
@yenfryherrerafeliz yenfryherrerafeliz added the p2 This is a standard priority issue label Feb 10, 2023
@oielbanna

Hey, I'm having the same issue when using client-sns. Are there any updates?


benheymink commented Jun 19, 2023

Started seeing this as well (on the Lambda client) since transitioning to v3 of the SDK. I'm using 3.188.0, since that's the version bundled with the runtime. I can't pin down what is causing it, and we're manually destroying our Lambda client after each use:

    const { LambdaClient } = require('@aws-sdk/client-lambda');

    const lambdaClient = new LambdaClient({});
    try {
        return await lambdaClient.send(command);
    } finally {
        lambdaClient.destroy();
    }

"TimeoutError: socket hang up\n at connResetException (node:internal/errors:717:14)\n at TLSSocket.socketCloseListener (node:_http_client:475:25)\n at TLSSocket.emit (node:events:525:35)\n at node:net:322:12\n at TCP.done (node:_tls_wrap:588:7)\n at TCP.callbackTrampoline (node:internal/async_hooks:130:17)"

@yenfryherrerafeliz
Contributor

Hi @benheymink, @oielbanna, which version of the SDK are you using? If it is not the latest, could you please try with the latest version? We have introduced a change that makes the underlying sockets send keep-alive packets and keeps the connection alive longer, so I am wondering if this could be helpful for you.

Please let me know.

Thanks!

@yenfryherrerafeliz yenfryherrerafeliz added the response-requested Waiting on additional info and feedback. Will move to "closing-soon" in 7 days. label Sep 7, 2023
@benheymink

@yenfryherrerafeliz We're using the version made available by the AWS Lambda runtime, 3.188.0. (So I doubt it includes the fix you mentioned).

How often are the AWS runtimes/environments updated?

@github-actions github-actions bot removed the response-requested Waiting on additional info and feedback. Will move to \"closing-soon\" in 7 days. label Sep 9, 2023
@yenfryherrerafeliz
Contributor

@benheymink, I do not have this information available right now, but I will ask and get back to you. In the meantime, would it be possible for you to use a Lambda layer to deploy the latest SDK version? Here is the documentation that can help you accomplish this.
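For reference, a layer bundling a newer SDK roughly involves placing the packages under a top-level nodejs/ directory, zipping, and publishing. A sketch — the layer name, runtime, and package are illustrative, and the network/CLI steps are left as comments:

```shell
# Layer zips must put Node packages under a top-level nodejs/ directory.
mkdir -p layer/nodejs

# Install the newer client into the layer (needs network access):
# npm install --prefix layer/nodejs @aws-sdk/client-translate

# Zip and publish (requires the AWS CLI; names are illustrative):
# (cd layer && zip -r ../sdk-layer.zip nodejs)
# aws lambda publish-layer-version --layer-name aws-sdk-v3-layer \
#   --zip-file fileb://sdk-layer.zip --compatible-runtimes nodejs18.x

ls -d layer/nodejs
```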

Please let me know.

Thanks!

@yenfryherrerafeliz yenfryherrerafeliz added the response-requested Waiting on additional info and feedback. Will move to "closing-soon" in 7 days. label Sep 13, 2023
@github-actions

This issue has not received a response in 1 week. If you still think there is a problem, please leave a comment to prevent the issue from closing automatically.

@github-actions github-actions bot added the closing-soon This issue will automatically close in 4 days unless further comments are made. label Sep 21, 2023

dan-istoc-k24 commented Sep 21, 2023

Hi, we got this error as well yesterday with the SNSClient.

We didn't enable keepAlive with the requestHandler (we didn't pass a requestHandler at all).

Package version:

"@aws-sdk/client-sns": "^3.363.0"

Nodejs version:

v19

Stack trace:

TimeoutError: socket hang up
    at connResetException (node:internal/errors:717:14)
    at TLSSocket.socketCloseListener (node:_http_client:468:25)
    at TLSSocket.emit (node:events:525:35)
    at runInContextCb (/app/node_modules/newrelic/lib/shim/shim.js:1315:22)
    at LegacyContextManager.runInContext (/app/node_modules/newrelic/lib/context-manager/legacy-context-manager.js:59:23)
    at Shim.applySegment (/app/node_modules/newrelic/lib/shim/shim.js:1305:25)
    at TLSSocket.wrapper [as emit] (/app/node_modules/newrelic/lib/shim/shim.js:1861:17)
    at node:net:332:12
    at TCP.done (node:_tls_wrap:588:7)
    at TCP.callbackTrampoline (node:internal/async_hooks:130:17)

@github-actions github-actions bot removed the closing-soon This issue will automatically close in 4 days unless further comments are made. label Sep 22, 2023
@github-actions github-actions bot removed the response-requested Waiting on additional info and feedback. Will move to \"closing-soon\" in 7 days. label Sep 22, 2023
@yenfryherrerafeliz
Contributor

@dan-istoc-k24 this error is caused by network issues, such as a connection interruption or loss. Enabling keep-alive may sometimes help, but it all depends on what actually happens to the connection at that time. Also, keep-alive is enabled by default.

Thanks!

@yenfryherrerafeliz
Contributor

I am closing this for now, but if you still need help, please feel free to open a new issue.

Thanks!


This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs and link to relevant comments in this thread.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Nov 10, 2023
5 participants