
Retrieve default access key from standard AWS SDK credentials for opened links #10582

Closed
cyberduck opened this issue Jan 23, 2019 · 9 comments
Labels
bug s3 AWS S3 Protocol Implementation worksforme


cyberduck commented Jan 23, 2019

57e29aa created the issue

We are looking for an app that will let our users click on s3:// URLs and download the file the URL points to (like s3://bucket/path/file.txt). Cyberduck is almost there; the problem we're running into is that when the user clicks an s3:// URL, Cyberduck opens a sheet requesting the user's access and secret keys, even after those keys have been saved in the user's Keychain and in their ~/.aws/credentials file. It would be much better if no additional user interaction were required and Cyberduck could pull the access and secret keys from the credentials file or from the Keychain; ideally from the credentials file, with a profile set in the preferences.

Steps to reproduce:
Set up the user's account so that they can use "aws s3 cp s3://PATH/TO/FILE /PATH/TO/LOCAL_DIR" to copy files from S3 to a local directory. Nominally this means setting up ~/.aws/config and ~/.aws/credentials properly.
Run CyberDuck and have it save the user's AWS access and secret keys in the Keychain.
Set up a clickable s3:// URL (like s3://PATH/TO/FILE).
Click on the s3:// URL
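For reference, the default-profile layout that the steps above (and the aws CLI) assume in ~/.aws/credentials looks like this; the key values here are placeholders:

```ini
[default]
aws_access_key_id = AKIAEXAMPLEKEYID
aws_secret_access_key = wJalrEXAMPLESECRETKEY
```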

Expected Results:
CyberDuck downloads the file the s3:// URL points to with no user interaction required.

Actual Results:
CyberDuck opens a sheet on the transfers window requesting the user's access and secret keys.


cyberduck commented Feb 3, 2019

@dkocher commented

This should work if you include the access key in the URI such as s3://ACCESSKEY@container/PATH/TO/FILE.
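In generic URI terms this embeds the access key as the userinfo component of the authority, which standard parsers expose as the username. A quick illustration with the Python standard library (not Cyberduck's own parsing code):

```python
from urllib.parse import urlparse

# Parse an S3 URI that carries the access key in the userinfo position.
parts = urlparse("s3://ACCESSKEY@container/PATH/TO/FILE")

# The access key rides in front of the "@"; the container follows it.
access_key = parts.username   # "ACCESSKEY"
container = parts.hostname    # "container"
path = parts.path             # "/PATH/TO/FILE"
```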


cyberduck commented Feb 4, 2019

57e29aa commented

Replying to [comment:3 dkocher]:

This should work if you include the access key in the URI such as s3://ACCESSKEY@container/PATH/TO/FILE.

Good to know, though that will not work for our use case. We are looking to send out or post s3:// URLs for our internal users so they can get access to files. We will not know their access keys, nor can we send a single email to multiple people and have it work for all of them.

Please let me know if there's anything else we can do to help out.


cyberduck commented Feb 5, 2019

@dkocher commented

We will have a fix to obtain the AWS access key from the default profile in ~/.aws/credentials.
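The lookup being described resolves credentials the way the AWS SDKs do, by reading the default profile from the INI-style credentials file. A minimal sketch of that lookup, assuming the conventional ~/.aws/credentials path and key names (this is an illustration, not Cyberduck's actual implementation):

```python
import configparser
import os


def read_aws_credentials(profile="default",
                         path=os.path.expanduser("~/.aws/credentials")):
    """Return (access_key, secret_key) for a profile, or None if absent."""
    config = configparser.ConfigParser()
    # config.read returns the list of files successfully parsed.
    if not config.read(path) or profile not in config:
        return None
    section = config[profile]
    try:
        return (section["aws_access_key_id"],
                section["aws_secret_access_key"])
    except KeyError:
        # Profile exists but is missing one of the expected keys.
        return None
```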


cyberduck commented Feb 5, 2019

57e29aa commented

Sounds good.
It'd be really great if we could define a profile to use (rather than the default), but do understand that's a more complex thing to do.


cyberduck commented Feb 6, 2019

@ylangisc commented

Fixed in 11dce90.


cyberduck commented Mar 20, 2019

57e29aa commented

I tried this with Cyberduck 6.9.4 and it is not working for me; I'm still being prompted for an access key and secret when I click a valid s3:// URL. I've tried downloading the URL with the "aws s3" command and it works, and I verified that the "[default]" section of ~/.aws/credentials is valid and works with the command-line "aws s3" tool.
Perhaps I'm missing something or have something set incorrectly, but if so I don't know what it would be; if there's anything to check on my end, please let me know.


cyberduck commented Apr 16, 2019

@dkocher commented

Please use the format s3:/bucketname/ for the URI to refer to a bucket name and make use of the default hostname s3.amazonaws.com configured for S3. This then allows the lookup of the default credentials to work. The URI format is documented here.
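This behaviour follows generic URI parsing rules: after a double slash, the first segment becomes the authority (hostname), so in s3://bucket/key the "bucket" is read as a host; with a single slash there is no authority at all, which leaves room for a default hostname such as s3.amazonaws.com. A quick illustration with the Python standard library (not Cyberduck's own parser):

```python
from urllib.parse import urlparse

# Double slash: the first segment is parsed as the host, not a bucket name.
double = urlparse("s3://bucket/path/to/file.txt")

# Single slash: no authority, so the whole remainder is a path and a
# default S3 hostname can be filled in for the connection.
single = urlparse("s3:/bucket/path/to/file.txt")
```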


cyberduck commented Apr 16, 2019

57e29aa commented

Using that format does work for me, yay!

Unfortunately it does not align with what the AWS CLI (https://aws.amazon.com/cli/) uses, and we use that as well. Specifically, we send out an "aws s3 ..." command line, and the s3://... portion is generally rendered as a clickable URI, so people can either copy the command line or click the URI to do the download.

It'd be nice if Cyberduck and "aws s3" aligned, but that would mean S3 URI handling in Cyberduck would differ from all the other URIs you support (and that doesn't seem like a good idea offhand).

I'll see what I can do here to make something that can tweak the URI and pass it along, that might work for us.
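A minimal sketch of such a tweak, as a hypothetical helper (not part of Cyberduck or the aws CLI): rewrite the aws-cli-style double-slash URI into the single-slash form that Cyberduck resolves against the default S3 endpoint.

```python
def to_cyberduck_uri(uri: str) -> str:
    """Rewrite an aws-cli style s3://bucket/key URI to the single-slash
    s3:/bucket/key form that resolves against the default S3 hostname."""
    prefix = "s3://"
    if uri.startswith(prefix):
        return "s3:/" + uri[len(prefix):]
    return uri  # leave non-matching URIs untouched
```

Registering such a shim as the handler for clicked s3:// links, and having it pass the rewritten URI on to Cyberduck, would let the same URI work for both the CLI and clicks.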


cyberduck commented Jun 4, 2019

@dkocher commented

Ticket retargeted after milestone deleted

@iterate-ch iterate-ch locked as resolved and limited conversation to collaborators Nov 26, 2021