receiving "connect EHOSTUNREACH 169.254.169.254:80" when following simple README #389
Comments
@michaelreinig @marklawlor if you set the
Thank you @bcoe. I will do as you have suggested until you release the fix. Thank you to you and your team for looking into this so quickly.
@michaelreinig if you update your dependencies, such that
Please feel free to reopen if you're continuing to have issues. Thank you for your patience.
Hello,
@robymes make sure you delete
If you continue to bump into issues, please go ahead and open a new issue 👍
@bcoe, I can confirm that the issue has been resolved. Thanks again for your help.
@michaelreinig awesome, thanks for reporting the issue; it helped us identify the problem quickly.
@michaelreinig Hey, the bug still appears in 5.3.1, but its behaviour is not consistent.
Yes, resolved. Thank you for your support, @bcoe!
This is still intermittently happening for me with the following config:
The other annoying thing is that simply importing the transport as above (and never even new'ing one up) is enough to cause open handle warnings for
I notice that this one is still closed; it's definitely still causing problems for me. It's intermittently failing with
@dan-turner is this mainly happening in the context of Jest? I think this might potentially be related to:
@bcoe no, it was happening in our now.sh environments. It was causing the first few requests to each lambda to fail with a 502. After hitting each route enough times for them to become warm, the problem would go away and logs would start to appear in StackDriver. Ultimately we couldn't accept this, though, and had to remove this package for now.
I was able to replicate this as well. If you hit the endpoint a few times it will start to work. Is there any way to make this work on the first request? This is definitely not fixed.
@chrisathook in what environment are you using
I am testing in a local Docker container based on the node:13.8 image. Below are the versions of logging-winston and logging I have installed on the project.
When I run the app in my local Docker container I get the error. If I run it a few times the error disappears, then if I wait a bit it returns. Oddly, if I run the app directly in Node on Windows 10 it does not happen.
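For reference, the environment described above can be reproduced with a minimal Dockerfile along these lines; `app.js` is a hypothetical entry point that initializes the transport:

```
FROM node:13.8
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
CMD ["node", "app.js"]
```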
@chrisathook could you share the exact exception that you're seeing?
Here is what I am seeing when testing:
It looks like
Pretty sure we have this one fixed now :) Please do let us know if you're still running into problems.
Environment details
@google-cloud/logging-winston version: 2.0.1

Steps to reproduce
This issue is very similar, if not identical, to:
googleapis/nodejs-logging-bunyan#353
The only resolution I have found so far is to revert to logging-winston version 0.11.1. In the environment above, everything then works.
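The workaround described, pinning the dependency to the last working release, amounts to a one-line package.json change (version taken from the report above):

```
{
  "dependencies": {
    "@google-cloud/logging-winston": "0.11.1"
  }
}
```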
I am seeing this issue whether I set the `GOOGLE_APPLICATION_CREDENTIALS` env variable, specify the `projectId` and `keyFilename`, or specify the `projectId` and `credentials` when creating the `LoggingWinston` object. So all 3 ways fail on v1.0.0+, but all 3 work on v0.11.1.
I have seen the error appear two different ways, with the first type of error message appearing from following the above steps.
Here is an alternative error message that we are seeing using the same environment outlined above, but on a remote server. I do not have reproducible steps for this error message, but this was the first error message we saw before digging deeper. The error message below led us to the reproducible error message above.
Thanks!