
Unable to load AWS credentials from any provider in the chain #1324

Closed
poonamtr opened this issue Oct 2, 2017 · 20 comments
Labels
guidance Question that needs advice or information. response-requested Waiting on additional info or feedback. Will move to "closing-soon" in 5 days.

Comments


poonamtr commented Oct 2, 2017

I have my application running on EC2 and I am trying to access DynamoDB from my Java application via the SDK. But every operation fails with "Unable to load AWS credentials from any provider in the chain".

  • I have added DynamoDB permissions to the IAM role that is attached to the EC2 instance.
  • I have also configured the trust relationship so that EC2 can assume that role.

Code:

try {
    AWSCredentialsProvider provider = new DefaultAWSCredentialsProviderChain();
    AWSCredentials credentials = provider.getCredentials();
    if (credentials != null) {
        LOG.info("Credentials key: " + credentials.getAWSAccessKeyId());
        LOG.info("Credentials secret: " + credentials.getAWSSecretKey());
    }
} catch (Exception e) {
    // Pass the exception to the logger so the full stack trace is printed
    // (concatenating e.getStackTrace() into a String only prints the array reference).
    LOG.info("Exception in credentials; cause: " + e.getCause()
            + "; message: " + e.getMessage(), e);
}

The getCredentials() call itself throws an exception with the message "Unable to load AWS credentials from any provider in the chain".
If I skip getting the credentials here and instead pass the provider to the DynamoDB client:

dynamoDBClient = builder
        .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(url, region.getName()))
        .withCredentials(credentialProvider())
        .withClientConfiguration(clientConfig)
        .build();

Then it throws the same exception when I try to do any operation.

I tried setting up the proxy too:

ClientConfiguration clientConfig = new ClientConfiguration();
clientConfig.setProxyHost("yyy");
clientConfig.setProxyPort(port);
clientConfig.setNonProxyHosts("xxx");
clientConfig.setProtocol(Protocol.HTTP);

and then passing it while creating the client:

dynamoDBClient = builder
        .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(url, region.getName()))
        .withCredentials(credentialProvider())
        .withClientConfiguration(clientConfig)
        .build();

But no success.

Can someone guide me on what I am missing here?

@varunnvs92 (Contributor)

The SDK has credential resolution logic that determines which AWS credentials to use when interacting with AWS services. See the link below:
http://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/credentials.html

When you see the error "Unable to load AWS credentials from any provider in the chain", it means we could not find credentials in any of the places the DefaultAWSCredentialsProviderChain looks. Please make sure the credentials are located in at least one of the places mentioned in the above link.
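For illustration (not part of the original reply), a minimal v1 diagnostic sketch that checks the major providers the default chain consults, to see which lookups fail and why:

import com.amazonaws.auth.AWSCredentialsProvider;
import com.amazonaws.auth.EnvironmentVariableCredentialsProvider;
import com.amazonaws.auth.InstanceProfileCredentialsProvider;
import com.amazonaws.auth.SystemPropertiesCredentialsProvider;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;

public class CredentialsDiagnostic {
    public static void main(String[] args) {
        // Roughly mirrors the default chain's lookup order: environment
        // variables, JVM system properties, ~/.aws/credentials, and the
        // EC2 instance profile (the real chain checks a few other places too).
        AWSCredentialsProvider[] providers = {
                new EnvironmentVariableCredentialsProvider(),
                new SystemPropertiesCredentialsProvider(),
                new ProfileCredentialsProvider(),
                InstanceProfileCredentialsProvider.getInstance()
        };
        for (AWSCredentialsProvider provider : providers) {
            try {
                provider.getCredentials();
                System.out.println(provider.getClass().getSimpleName() + ": credentials found");
            } catch (Exception e) {
                System.out.println(provider.getClass().getSimpleName() + ": " + e.getMessage());
            }
        }
    }
}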

@varunnvs92 (Contributor)

Feel free to reopen if you still face the issue.


HSDen commented Nov 4, 2017

@varunnvs92 This issue still exists for me. I am running a plain Java application (not Spark) on an EMR cluster, and while trying to access S3 I am facing the same issue.

AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
        .withCredentials(new AWSStaticCredentialsProvider(credentials))
        .withRegion("us-east-1")
        .build();

try {
    s3Client.getObjectAsString(bucket_name, object_key);
} catch (AmazonClientException e) {
    log.error("AmazonClientException occurred: " + e.getMessage());
}

I am accessing S3 from EMR through an IAM role, not access/secret keys.
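For illustration: if the intent is to rely on the EMR instance role rather than static keys, one hedged sketch is to let the default chain resolve the role instead of passing an AWSStaticCredentialsProvider:

AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
        // Resolves the instance/role credentials via the metadata service
        .withCredentials(new DefaultAWSCredentialsProviderChain())
        .withRegion("us-east-1")
        .build();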

@srchase srchase added guidance Question that needs advice or information. needs-response and removed Question labels Jan 4, 2019
@LarsAlmgren

@HSDen did you find a solution to this problem?


AmitZoh commented Sep 15, 2019

If anybody stumbles upon this issue - in my case it turned out to be a missing trust relationship with the cluster nodes.

@venkateshkonduru1

I am still facing this issue in a Jenkins environment with the AWS SM service. Any solution?


monigala commented Feb 3, 2020

> If anybody stumbles upon this issue - in my case it turned out to be a missing trust relationship with the cluster nodes.

Can you expand on this solution? How was the fix implemented?

@debora-ito debora-ito added response-requested Waiting on additional info or feedback. Will move to "closing-soon" in 5 days. and removed needs-response labels Feb 25, 2020
@csllc-one

I encountered this issue too and resolved it by ensuring there was a literal default profile in the credentials file (~/.aws/credentials). The default profile can be any IAM user, but it must be defined under the default profile name.


oniseun commented Oct 19, 2020

Just as @CSLLCUser mentioned, I changed the profile name to default and it fixed the issue.
Go to the command line and open the credentials file:

open ~/.aws/credentials

Then change whatever name is in [whatevername] to [default]:

[default]
aws_access_key_id = *************************
aws_secret_access_key = *********************************************
aws_session_token =**************************************************

@albertoandreottiATgmail

I have set the secret information everywhere (in all the recommended places) and it is unable to find it anywhere. This is very broken.

@therealppk

I've set up a Spark standalone cluster on EC2 instances (1 master, 2 workers). I'm trying to deploy an application in cluster mode. The application jar is in S3. I'm getting the same error.

Command:

spark/bin/spark-submit --deploy-mode client --master spark://xxxx:7077 --class RawProcessingHandler s3a://xxxxx/spark.jar some args

Output:

21/01/05 09:15:38 INFO SecurityManager: Changing view acls to: root
21/01/05 09:15:38 INFO SecurityManager: Changing modify acls to: root
21/01/05 09:15:38 INFO SecurityManager: Changing view acls groups to: 
21/01/05 09:15:38 INFO SecurityManager: Changing modify acls groups to: 
21/01/05 09:15:38 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(root); groups with view permissions: Set(); users  with modify permissions: Set(root); groups with modify permissions: Set()
21/01/05 09:15:38 INFO Utils: Successfully started service 'driverClient' on port 43569.
21/01/05 09:15:38 INFO TransportClientFactory: Successfully created connection to xxxx:7077 after 84 ms (0 ms spent in bootstraps)
21/01/05 09:15:38 INFO ClientEndpoint: Driver successfully submitted as driver-20210105091538-0021
21/01/05 09:15:38 INFO ClientEndpoint: ... waiting before polling master for driver state
21/01/05 09:15:43 INFO ClientEndpoint: ... polling master for driver state
21/01/05 09:15:43 INFO ClientEndpoint: State of driver-20210105091538-0021 is ERROR
21/01/05 09:15:43 ERROR ClientEndpoint: Exception from cluster was: com.amazonaws.AmazonClientException: Unable to load AWS credentials from any provider in the chain
com.amazonaws.AmazonClientException: Unable to load AWS credentials from any provider in the chain
	at com.amazonaws.auth.AWSCredentialsProviderChain.getCredentials(AWSCredentialsProviderChain.java:117)
	at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3521)
	at com.amazonaws.services.s3.AmazonS3Client.headBucket(AmazonS3Client.java:1031)
	at com.amazonaws.services.s3.AmazonS3Client.doesBucketExist(AmazonS3Client.java:994)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:297)
	at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2669)
	at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:94)
	at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2703)
	at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2685)
	at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:373)
	at org.apache.spark.util.Utils$.getHadoopFileSystem(Utils.scala:1866)
	at org.apache.spark.util.Utils$.doFetchFile(Utils.scala:721)
	at org.apache.spark.util.Utils$.fetchFile(Utils.scala:509)
	at org.apache.spark.deploy.worker.DriverRunner.downloadUserJar(DriverRunner.scala:155)
	at org.apache.spark.deploy.worker.DriverRunner.prepareAndRunDriver(DriverRunner.scala:173)
	at org.apache.spark.deploy.worker.DriverRunner$$anon$1.run(DriverRunner.scala:92)
21/01/05 09:15:43 INFO ShutdownHookManager: Shutdown hook called
21/01/05 09:15:43 INFO ShutdownHookManager: Deleting directory /tmp/spark-40618d5d-af47-4700-b8db-1d4befea22eb

I've added the credentials as environment variables and also set them using aws configure. Can you please help?
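(Not something from this thread, but since the stack trace shows the S3A connector failing while the worker downloads the jar, one hedged workaround is to point Hadoop's S3A connector at the EC2 instance profile explicitly via the standard fs.s3a.aws.credentials.provider property; the master URL and jar path below are the poster's placeholders:)

spark/bin/spark-submit \
    --conf spark.hadoop.fs.s3a.aws.credentials.provider=com.amazonaws.auth.InstanceProfileCredentialsProvider \
    --deploy-mode client --master spark://xxxx:7077 \
    --class RawProcessingHandler s3a://xxxxx/spark.jar some args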


kwiecien commented Feb 9, 2021

I had the same error. In my case it helped to add the STS dependency. If you use an AWS profile, STS has to be on the class path.

My solution for a Maven project with AWS SDK for Java v2:

        <!-- necessary to be on the class path for ProfileCredentialsProvider -->
        <dependency>
            <groupId>software.amazon.awssdk</groupId>
            <artifactId>sts</artifactId>
        </dependency>

A solution for SDK v1 is similar: the aws-java-sdk-sts module must be on the class path.
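For reference, the v1 equivalent in Maven form would look like this (the version is just an example, taken from a later comment in this thread; pick a current one):

        <dependency>
            <groupId>com.amazonaws</groupId>
            <artifactId>aws-java-sdk-sts</artifactId>
            <version>1.11.956</version>
        </dependency>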

@rajeevprasanna

Adding this dependency solved the problem:
api("com.amazonaws:aws-java-sdk-sts:1.11.956")


jrichardsz commented Jul 9, 2021

If someone has problems with the credentials provider chain: I was able to connect to AWS with environment variables (a best practice) instead of the chain (properties or other sources):

EnvironmentVariableCredentialsProvider credentialsProvider = new EnvironmentVariableCredentialsProvider();
CloudWatchLogsClient logsClient = CloudWatchLogsClient.builder()
        .region(Region.of(regionId))
        .credentialsProvider(credentialsProvider)
        .build();

And these dependencies:

<dependency>
  <groupId>software.amazon.awssdk</groupId>
  <artifactId>logs</artifactId>
  <version>2.0.0-preview-4</version>
</dependency>

Just export these variables before execution:

AWS_ACCESS_KEY_ID=changeme
AWS_SECRET_ACCESS_KEY=dontseeme

@zoltangoendoes

Another cause could be that you are using a proxy. In that case, make sure you allow access to 169.254.170.2, which is the address used to query credentials when using a role/instance profile.
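For illustration, a hedged sketch of how that exclusion might look with the v1 ClientConfiguration (the proxy host and port are placeholders):

ClientConfiguration clientConfig = new ClientConfiguration();
clientConfig.setProxyHost("proxy.example.com"); // placeholder proxy host
clientConfig.setProxyPort(8080);                // placeholder port
// Let credential lookups bypass the proxy: 169.254.169.254 serves EC2
// instance metadata, 169.254.170.2 serves ECS container credentials.
clientConfig.setNonProxyHosts("169.254.169.254|169.254.170.2");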


RandLVT commented May 23, 2022

You can also go to your AWS account, select "Command line or programmatic access" on the right side, and copy option 2 into your credentials file. Next, remove [NUMBERS_PowerUseAceess] and replace it with the profile name. The only downside is that you will have to copy the session_token every four hours.

@babaralishah

Putting the default profile at the top of the AWS configuration file resolved my issue.
Thank you so much!

@DarkBitz

The problem is that the AWS credentials provider chain is simply unreliable; if you stick to the best practice of using IAM roles, you will run into this error sooner or later.

@ShanikaEdiriweera

I am experiencing this issue while an EKS container is trying to call cognitoidentityprovider.DefaultCognitoIdentityProviderClient.getUser. Do I need to grant any Cognito-related permission to the pod role?

I am curious because I did not have any Cognito-specific permissions when I ran the same app in ECS Fargate.

@ShanikaEdiriweera

This helped me: aws/aws-sdk-java-v2#2961 (adding the sts dependency).
