Unable to load AWS credentials from any provider in the chain #213
Spark doesn't look at `~/.aws/credentials`. What you need is to set the `fs.s3a.access.key` and `fs.s3a.secret.key` Hadoop properties. Better than managing credentials, though, would be to use IAM roles as described in the README. Then you don't need to specify secrets anywhere.
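For reference, one standard way to supply those S3A credentials cluster-wide is through Spark's own configuration, since Spark forwards any `spark.hadoop.*` property into the Hadoop configuration. A sketch for `conf/spark-defaults.conf` (key values are placeholders):

```
# conf/spark-defaults.conf on the master (placeholder values)
spark.hadoop.fs.s3a.access.key  AKIAXXXXXXXXXXXXXXXX
spark.hadoop.fs.s3a.secret.key  xxxxxxxxxxxxxxxxxxxx
```

With these set, plain `s3a://bucket/file.txt` paths work without embedding the ID and secret in the URL. IAM roles remain the cleaner option, since no secret ever touches disk.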
In the end I guessed it must not be looking there, though what I'd read seemed to imply that it should be, which kept me trying for longer. Do you have a suggestion for how to either automatically inject those XML config files, or "bake" them into the AMI without clashing with Flintrock's setup process? I'm finding Flintrock great because it's so speedy, and I'm avoiding (almost) all extra cluster setup steps to keep it speedy by having everything else already set up on a custom AMI.
The easiest thing to do is to put the settings in `core-site.xml`. My recommended solution is to not use credentials at all and instead use IAM roles. A potential future solution is coming in #202, where you'll be able to bring your own templates and have Flintrock use them during launch.
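If you do go the XML route, the two S3A credential properties can live in Hadoop's `core-site.xml` on every node. A sketch (values are placeholders; `fs.s3a.access.key` and `fs.s3a.secret.key` are the standard Hadoop S3A property names):

```xml
<!-- core-site.xml on master and slaves (placeholder values) -->
<configuration>
  <property>
    <name>fs.s3a.access.key</name>
    <value>AKIAXXXXXXXXXXXXXXXX</value>
  </property>
  <property>
    <name>fs.s3a.secret.key</name>
    <value>xxxxxxxxxxxxxxxxxxxx</value>
  </property>
</configuration>
```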
OK, that's helpful, thanks, and #202 looks great; hoping it gets merged!
Hi,
This may well be a Spark issue that doesn't concern Flintrock, but it's something that becomes a bit trickier with Flintrock.
I'm trying to use the access key ID and secret key of an IAM user to authenticate pulling data into a cluster from S3. I've confirmed that my ID and secret work when used inline:
`sqlCtx.read.csv('s3a://<ID>:<SECRET>@bucket/file.txt')`
But I get the error
Unable to load AWS credentials from any provider in the chain
when trying to load the ID and secret dynamically. Ways I've tried getting them to load, all resulting in the same error:

- adding `~/.aws/credentials` (and confirming it's found by the AWS CLI via `aws configure`) to the master and slaves
- running `!export AWS_ACCESS_KEY_ID=...` (and likewise for the secret) in a Jupyter notebook connected to the Spark cluster

Suggestions on how you solved this appreciated.
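On the `!export` attempt: a `!` command in Jupyter runs in a short-lived subshell, so the variable never reaches the driver JVM, let alone the executors on other nodes. For the environment-variable route to work, the variables have to be present in the environment of the Spark processes on every node, e.g. via `conf/spark-env.sh` (a sketch with placeholder values):

```shell
# conf/spark-env.sh on master and all slaves (placeholder values)
export AWS_ACCESS_KEY_ID=AKIAXXXXXXXXXXXXXXXX
export AWS_SECRET_ACCESS_KEY=xxxxxxxxxxxxxxxxxxxx
```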
Note: I want to rely solely on the ID and secret key, since I'm working in a team and want to easily share bucket access.
Thanks