
Added :session-token and :client-id to enable temporary session credentials #28

Closed
neatonk wants to merge 2 commits

Conversation
Conversation


@neatonk commented Apr 29, 2013

AWS session credentials include a session token, which is now accepted in the cred map as :session-token.
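For illustration, a cred map carrying temporary session credentials might look like the following (a minimal sketch, assuming the library's usual pattern of passing the cred map as the first argument; all values are placeholders):

```clojure
(require '[aws.sdk.s3 :as s3])

;; Placeholder values. :session-token is the new key added by this PR;
;; :client-id is the optional cache key discussed below.
(def cred {:access-key    "AKIA..."
           :secret-key    "..."
           :session-token "AQoDYXdz..."
           :client-id     :my-app})

;; Same calling convention as before; only the cred map changes.
(s3/put-object cred "my-bucket" "some-key" "hello world")
```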

Because s3-client uses memoize to cache clients and temporary session credentials are... temporary, it was necessary to improve the client caching strategy to avoid creating a new client every time the session credentials expire and get refreshed. See #15 for the full rationale.

The caching strategy used in this pull request behaves just like the memoize solution unless a :client-id is included in the cred map. The :client-id is used as the cache key instead of the map itself, so only one client is cached per client-id. When a client is cached, the arguments used to create it are cached as well, and a new client is created whenever those arguments change.

This solution should continue to avoid issue #3, except in a very limited case where the same :client-id is used on multiple threads and the first client is evicted from the cache and GC'd too soon. A better solution may be to return a reference to the client used for each request in the response, so the caller can hold onto it until it is safe for the client to be GC'd.
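As a rough, self-contained sketch of the idea (not the PR's actual code; names such as `clients-by-id` and `memoized-client` are illustrative):

```clojure
(ns aws.sdk.s3.client-cache-sketch
  (:import [com.amazonaws.auth AWSCredentials BasicAWSCredentials BasicSessionCredentials]
           [com.amazonaws.services.s3 AmazonS3Client]))

;; Build the appropriate credentials object; the return type hint avoids
;; reflection when it is passed to the AmazonS3Client constructor below.
(defn- aws-creds ^AWSCredentials [{:keys [access-key secret-key session-token]}]
  (if session-token
    (BasicSessionCredentials. access-key secret-key session-token)
    (BasicAWSCredentials. access-key secret-key)))

;; One cached [cred-map client] pair per :client-id.
(def ^:private clients-by-id (atom {}))

;; Fallback when no :client-id is supplied: cache on the whole cred map, as before.
(def ^:private memoized-client
  (memoize (fn [cred] (AmazonS3Client. (aws-creds cred)))))

(defn s3-client ^AmazonS3Client [cred]
  (if-let [id (:client-id cred)]
    (let [[old-cred client] (get @clients-by-id id)]
      (if (= old-cred cred)
        client                                           ; creds unchanged: reuse
        (let [fresh (AmazonS3Client. (aws-creds cred))]  ; creds refreshed: replace
          (swap! clients-by-id assoc id [cred fresh])
          fresh)))
    (memoized-client cred)))
```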

* new private fn `aws-creds` instantiates creds
* importing `com.amazonaws.auth.BasicSessionCredentials`
* importing `com.amazonaws.auth.AWSCredentials`
* type hint on `aws-creds` to avoid reflection

Thanks to @marshallbrekka. Closes weavejester#21.
…ion...

Allows multiple clients created using temporary session credentials to be cached
under a single key by providing a :client-id in the `cred` map - Closes weavejester#15

Also:
* added `core.cache` and `core.memoize` to deps
* moved s3-client code to `aws.sdk.s3.client` ns
* added type hint to memoized s3-client
* added basic tests to verify cache behavior (a sketch of such a test follows)
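A cache-behaviour test could take roughly this shape (hypothetical; it assumes the new `aws.sdk.s3.client` ns exposes an `s3-client` fn, and the actual tests in the PR may differ):

```clojure
(ns aws.sdk.s3.client-test-sketch
  (:require [clojure.test :refer [deftest is]]
            [aws.sdk.s3.client :as client]))

(deftest client-id-caching
  (let [cred  {:access-key "ak" :secret-key "sk" :session-token "t1" :client-id ::app}
        cred' (assoc cred :session-token "t2")]
    ;; Unchanged creds: the same client instance is reused.
    (is (identical? (client/s3-client cred) (client/s3-client cred)))
    ;; Refreshed session token: a new client replaces the cached one.
    (is (not (identical? (client/s3-client cred) (client/s3-client cred'))))))
```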
@weavejester (Owner)

Why use such a complex caching scheme? The only reason we want to cache the client is for performance; we don't particularly care if we recreate the client, as long as we're not doing it too often. An LRU cache with a reasonable default size and an option to override it should be sufficient.
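A sketch of that suggestion using `clojure.core.cache` (threshold and names are illustrative, and session tokens are omitted for brevity):

```clojure
(ns aws.sdk.s3.lru-sketch
  (:require [clojure.core.cache :as cache])
  (:import [com.amazonaws.auth BasicAWSCredentials]
           [com.amazonaws.services.s3 AmazonS3Client]))

;; Bounded client cache keyed by the cred map; once more than :threshold distinct
;; cred maps have been seen, the least-recently-used client is evicted.
(def ^:private client-cache
  (atom (cache/lru-cache-factory {} :threshold 16)))

(defn s3-client [cred]
  (if-let [client (cache/lookup @client-cache cred)]
    (do (swap! client-cache cache/hit cred)            ; bump recency
        client)
    (let [client (AmazonS3Client.
                  (BasicAWSCredentials. (:access-key cred) (:secret-key cred)))]
      (swap! client-cache cache/miss cred client)      ; insert, evicting LRU if full
      client)))
```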

@neatonk (Author) commented May 9, 2013

Sorry, I thought memoization was added to solve #3, with the added benefit of performance. Switching to an LRU cache would reintroduce #3 by dropping old client references once the cache fills up. If an LRU cache is used, how would you address #3? As I commented above, another solution for #3 would be to stash a reference to the client in the response, or to make the client accessible to the caller in some other way, rather than relying on caching for this.

@weavejester (Owner)

Good point. I missed that mention of #3, and you're right, although it would only happen if the number of simultaneous downloads exceeded the size of the cache, which would be an unusual circumstance, and could be solved by manually tweaking the cache size.

Perhaps a better solution would be to add the client to the metadata of the response, or to the resulting InputStream. However, since #3 was reported a year ago, it's probably worth checking that the original bug in the AWS library still exists. If it doesn't, we don't need the workaround.
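A sketch of the metadata idea (illustrative only; the library's real `get-object` return shape may differ, and `get-object-stream` is a hypothetical name):

```clojure
(ns aws.sdk.s3.response-meta-sketch
  (:import [com.amazonaws.auth BasicAWSCredentials]
           [com.amazonaws.services.s3 AmazonS3Client]))

;; Keep the client reachable from the value handed back to the caller, so it
;; cannot be GC'd (closing its connections) while the stream is still being read.
(defn get-object-stream [cred ^String bucket ^String object-key]
  (let [client (AmazonS3Client.
                (BasicAWSCredentials. (:access-key cred) (:secret-key cred)))
        object (.getObject client bucket object-key)]
    (with-meta {:content (.getObjectContent object)}
      {::client client})))
```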

@sw1nn (Collaborator) commented Jan 13, 2014

See #49; please raise a new request if it doesn't cover your needs.

@sw1nn closed this Jan 13, 2014