support fog's new use_iam_profile option #203

Closed
wants to merge 1 commit

3 participants

@fcheung

EC2 recently gained the option for credentials to be fetched from the instance metadata service rather than embedded in a credentials file. This patch allows dragonfly to do this, via fog's support for it (currently on fog master but unreleased).
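With the patch applied, configuration might look like the following sketch (the app name and bucket name are illustrative; the constructor options mirror those accepted by the `S3DataStore` initializer in the diff):

```ruby
# Hypothetical usage sketch: with :use_iam_profile set, no access key or
# secret needs to be embedded - credentials come from the EC2 instance
# metadata service via the instance's IAM role.
Dragonfly[:images].datastore = Dragonfly::DataStorage::S3DataStore.new(
  :bucket_name     => 'my-bucket',
  :use_iam_profile => true
)
```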

@fcheung

This has been part of the recent releases of fog - it would be great if dragonfly were able to use it. Perhaps it would be better to make dragonfly less tightly coupled to fog: instead of the current setup, users could pass a hash of options that would be passed through verbatim to fog.
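The decoupling idea could be sketched in plain Ruby as follows (the class and method names here are hypothetical, not Dragonfly's actual API):

```ruby
# Hypothetical sketch of the decoupling fcheung suggests: rather than
# mirroring each fog credential as its own configurable attribute, the
# datastore accepts a single hash and forwards it to fog untouched, so
# new fog options (such as :use_iam_profile) need no datastore changes.
class FogOptionsStore
  attr_accessor :fog_options

  def initialize(fog_options = {})
    @fog_options = fog_options
  end

  # In a real datastore this hash would go straight to Fog::Storage.new;
  # user-supplied options override the datastore's defaults.
  def storage_options
    { :provider => 'AWS' }.merge(fog_options)
  end
end

store = FogOptionsStore.new(:use_iam_profile => true, :region => 'eu-west-1')
store.storage_options
# => {:provider=>"AWS", :use_iam_profile=>true, :region=>"eu-west-1"}
```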

@markevans
Owner

hi - sorry for the massively late response on the original request - I agree, passing through options verbatim simplifies things a bit. I'll merge this in soon, though FYI I will also at some point move the S3, Mongo and Couch datastores out of core and into their own self-contained gems.
Thanks again

@fcheung

That sounds like a great plan

@dpehrson

Just wanted to pop in here and give a +1 on this. I'm in the process of setting up the CloudFormation infrastructure for an app that will use Dragonfly, and IAM Instance Profiles are very attractive for credential management.

Thanks to both of you for working on this, can't wait to see it merged.

@markevans
Owner

sorry again for the delay - I've merged via a cherry-pick in master (still plan to separate out into a separate gem though!)

@markevans closed this
Commits on Jun 24, 2012
  1. @fcheung
17 lib/dragonfly/data_storage/s3data_store.rb
@@ -12,6 +12,7 @@ class S3DataStore
configurable_attr :access_key_id
configurable_attr :secret_access_key
configurable_attr :region
+ configurable_attr :use_iam_profile
configurable_attr :use_filesystem, true
configurable_attr :storage_headers, {'x-amz-acl' => 'public-read'}
configurable_attr :url_scheme, 'http'
@@ -33,6 +34,7 @@ def initialize(opts={})
self.access_key_id = opts[:access_key_id]
self.secret_access_key = opts[:secret_access_key]
self.region = opts[:region]
+ self.use_iam_profile = opts[:use_iam_profile]
end
def store(temp_object, opts={})
@@ -96,12 +98,13 @@ def domain
def storage
@storage ||= begin
- storage = Fog::Storage.new(
+ storage = Fog::Storage.new({
:provider => 'AWS',
:aws_access_key_id => access_key_id,
:aws_secret_access_key => secret_access_key,
- :region => region
- )
+ :region => region,
+ :use_iam_profile => use_iam_profile
+ }.reject {|name, option| option.nil?})
storage.sync_clock
storage
end
@@ -118,8 +121,12 @@ def bucket_exists?
def ensure_configured
unless @configured
- [:bucket_name, :access_key_id, :secret_access_key].each do |attr|
- raise NotConfigured, "You need to configure #{self.class.name} with #{attr}" if send(attr).nil?
+ if use_iam_profile
+ raise NotConfigured, "You need to configure #{self.class.name} with bucket_name" if bucket_name.nil?
+ else
+ [:bucket_name, :access_key_id, :secret_access_key].each do |attr|
+ raise NotConfigured, "You need to configure #{self.class.name} with #{attr}" if send(attr).nil?
+ end
end
@configured = true
end
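The nil-stripping in the `storage` hunk above can be illustrated with a standalone sketch (the function name is ours, not the datastore's):

```ruby
# Standalone illustration of the patch's pattern: options the user never
# configured stay nil and are dropped before the hash would reach
# Fog::Storage.new, so fog only ever sees what was actually set.
def fog_storage_options(opts)
  {
    :provider              => 'AWS',
    :aws_access_key_id     => opts[:access_key_id],
    :aws_secret_access_key => opts[:secret_access_key],
    :region                => opts[:region],
    :use_iam_profile       => opts[:use_iam_profile]
  }.reject { |name, option| option.nil? }
end

# With an IAM profile, the key options are nil and simply disappear:
fog_storage_options(:region => 'us-east-1', :use_iam_profile => true)
# => {:provider=>"AWS", :region=>"us-east-1", :use_iam_profile=>true}
```

Dropping the nil entries matters because passing explicit nil credentials to fog is not the same as omitting them; with the keys absent, fog can fall back to the instance profile.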
10 spec/dragonfly/data_storage/s3_data_store_spec.rb
@@ -176,6 +176,16 @@
@data_store.secret_access_key = nil
proc{ @data_store.retrieve('asdf') }.should raise_error(Dragonfly::Configurable::NotConfigured)
end
+
+ if !enabled # this would fail against real S3, since the specs are not running on an EC2 instance with an IAM role defined
+ it 'should allow missing secret key and access key on store if iam profiles are allowed' do
+ @data_store.use_iam_profile = true
+ @data_store.secret_access_key = nil
+ @data_store.access_key_id = nil
+ proc{ @data_store.store(@temp_object) }.should_not raise_error(Dragonfly::Configurable::NotConfigured)
+ end
+ end
+
end
describe "autocreating the bucket" do