Unable to locate credentials #808
Comments
You can try the same code with the sync version (plain botocore) to see if it's a botocore or aiobotocore issue.
btw what do you mean by
btw @potatochip could you post an example using containers or something fully fleshed out? Also, can you test against a pre-1.x version of aiobotocore to see if this is a regression?
It's something to do with the kube IAM assume-role thing, I would guess. We'd need a way to reproduce it; my guess is it's triggered by a few env vars and files placed on the container. We'd need those (or at least those with the values somewhat anonymised).
Pre-1.x did not experience this. It only started after upgrading to 1.x.
@potatochip pre-1.x credential refresh was broken, so that's probably why you're starting to see this issue. If you could give us a full example with docker containers via moto, or some CloudFormation for us to repro, that would be best; otherwise there's no way for us to test.
I understand. I think it's best to close then. I can't replicate it reliably.
For future viewers, I'm attempting to solve it by increasing the timeout / number of retries botocore allocates to connecting to the metadata server.
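Botocore exposes this through two environment variables; bumping them is one way to give a freshly scheduled pod more time to reach the metadata endpoint (a sketch; the variable names are botocore's documented configuration, but the right values depend on your cluster):

```python
import os

# botocore's instance-metadata fetcher defaults to a 1-second timeout
# and a single attempt, which can be too tight right after a pod starts.
# These must be set before the first session/client is created.
os.environ["AWS_METADATA_SERVICE_TIMEOUT"] = "5"        # seconds per attempt
os.environ["AWS_METADATA_SERVICE_NUM_ATTEMPTS"] = "10"  # retry count
```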
btw at FBN we're running into an issue where botocore first tries to retrieve a task-based IAM role, times out, then falls back to the EC2 IAM role. So that scenario seems expected via botocore. I'm going to close for now, but please keep us apprised and let us know if this is specific to aiobotocore or happens with botocore as well. I'm interested either way. Thanks! btw metadata retries/timeout is deep in botocore; it has separate constants for the metadata calls.
I'm definitely interested in ensuring we don't have a bug, so please re-open when you have more details! Probably worth setting botocore to DEBUG level logging for more details so you can see what's going on. |
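Enabling botocore's debug logging is just the standard `logging` module; something like this (a sketch) surfaces which credential provider the chain tries, including the metadata-service calls:

```python
import logging

# A root handler at DEBUG so botocore's messages are actually emitted;
# the credential chain then logs each provider it tries and why it moves on.
logging.basicConfig(level=logging.DEBUG)
logging.getLogger("botocore").setLevel(logging.DEBUG)
```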
I have the same issue. Package version that I'm using:
I need to download one hundred files from S3 and I'm using
repeated many times and then:
Am I using it in the wrong way, or are we hitting some limit of the metadata service?
@fox91 try with botocore instead of aiobotocore to see if it's related to aiobotocore.
Also, are you opening a new client for each file? Opening one client and using it for all 60 S3 files would be better.
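A shared client also means credentials are resolved once instead of per file, which matters when each lookup can hit the metadata endpoint. A sketch of the pattern (the `download_all` helper, bucket, and keys are made up for illustration; the client creation follows aiobotocore 1.x's `get_session()` / `create_client` API):

```python
import asyncio

async def download_all(client, bucket, keys):
    """Fetch every key through ONE shared S3 client."""
    async def fetch(key):
        resp = await client.get_object(Bucket=bucket, Key=key)
        async with resp["Body"] as stream:  # aiobotocore streams the body
            return key, await stream.read()
    return dict(await asyncio.gather(*(fetch(k) for k in keys)))

async def main():
    # Hypothetical usage: one client context for all the files,
    # instead of a new client (and credential lookup) per file.
    from aiobotocore.session import get_session
    session = get_session()
    async with session.create_client("s3") as client:
        return await download_all(client, "my-bucket", ["file-0", "file-1"])
```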
Describe the bug
I get the following exception when awaiting the s3_put coro below. This only happens intermittently when creating a k8s pod running the app. Credentials come from an IAM role assigned to the pod. The pod will restart a couple of times and eventually the error will not raise. The error did not raise in the same environment when using version 0.x of aiobotocore (with a persistent client rather than a context-managed client). Maybe this is related to a change in the pinned botocore version?
pip freeze results
Environment: