Paperclip with Fog / Ceph #1577

Closed
mpearrow opened this Issue Jun 19, 2014 · 5 comments

mpearrow commented Jun 19, 2014

Paperclip's Fog support does not appear to allow the use of non-Amazon servers. For example, I can use the following code with the fog gem to write successfully to my Ceph cluster:

require 'fog'

Excon.defaults[:ssl_verify_peer] = false

conn = Fog::Storage.new({
  :aws_access_key_id     => 'xxx',
  :aws_secret_access_key => 'xxx',
  :host                  => 'ceph-host',
  :path_style            => true,
  :provider              => 'AWS'
})

This works just fine. I can't figure out how to do this in Paperclip, but if I use the following code, I can tell from tcpdump and the resulting error that my app is talking to Amazon, not the host I have specified:

has_attached_file :videofile,
  :storage => :fog,
  :fog_credentials => {
    :aws_access_key_id     => 'xxx',
    :aws_secret_access_key => 'xxx',
    :provider              => 'AWS'
  },
  :fog_public => true,
  :fog_directory => 'replay',
  :fog_host => 'my-hostname'

Is there a way to cause Paperclip to use something other than AWS S3?

mpearrow commented Jun 20, 2014

Reading fog.rb, it looks like the :fog storage module hardcodes the assumption that AWS S3 is being used - at least, I think that's what is happening:

def host_name_for_directory
  if @options[:fog_directory].to_s =~ Fog::AWS_BUCKET_SUBDOMAIN_RESTRICTON_REGEX
    "#{@options[:fog_directory]}.s3.amazonaws.com"
  else
    "s3.amazonaws.com/#{@options[:fog_directory]}"
  end
end

If that's the case, would it be preferable to patch fog.rb or to write a new storage module for Paperclip? I'll potentially need to do one or the other (or use an alternative, like Dragonfly).
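For reference, here is a rough, purely illustrative sketch of the first option: let an explicitly configured host win and keep the S3 hostnames as a fallback. The use of the :fog_host option here is an assumption borrowed from the configuration in the original report, not existing Paperclip behaviour:

# Illustrative sketch only, not actual Paperclip code.
def host_name_for_directory
  if @options[:fog_host]
    # An explicitly configured host (e.g. a Ceph gateway) takes precedence,
    # with the bucket appended path-style.
    "#{@options[:fog_host]}/#{@options[:fog_directory]}"
  elsif @options[:fog_directory].to_s =~ Fog::AWS_BUCKET_SUBDOMAIN_RESTRICTON_REGEX
    "#{@options[:fog_directory]}.s3.amazonaws.com"
  else
    "s3.amazonaws.com/#{@options[:fog_directory]}"
  end
end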

jyurek (Contributor) commented Jun 20, 2014

That is actually the case, unfortunately. I would say we should change the Fog module, as others would want this as well.

mpearrow commented Jun 20, 2014

Since the fog gem itself can cope with a non-AWS host, it seems like we just need to pass it through while keeping the existing logic in host_name_for_directory. Would it make sense to add a :provider of CEPH? Then we could check for that in :fog_credentials and construct the host/path appropriately.
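Sketching that idea concretely, with CEPH as a purely hypothetical marker value checked by Paperclip only (Fog itself would still be driven through its S3-compatible AWS provider, and none of these names are existing API):

# Hypothetical sketch of the proposed check.
def uses_ceph?
  @options[:fog_credentials][:provider].to_s.casecmp('CEPH').zero?
end

def host_name_for_directory
  return "#{@options[:fog_credentials][:host]}/#{@options[:fog_directory]}" if uses_ceph?
  # ...otherwise fall back to the existing S3 hostname logic quoted above.
end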

mpearrow commented Jun 20, 2014

Actually, if you pass :host in :fog_credentials, it is properly passed on to Fog. The bucket-name prepending throws things off if you specify :fog_directory, though. Currently I'm getting an error from (I think) Fog: undefined method 'gsub!' for nil:NilClass.

mpearrow commented Jun 30, 2014

Ok, it is all working now. Here's the deal: you need to pass in :host in the :fog_credentials and things work as one would expect. The problems I was having stemmed from us not having some important configuration bits on our Ceph cluster, but the good news for anyone who finds this thread is that Paperclip works out of the box if your cluster is properly configured.
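For anyone who lands here later, a minimal example of the kind of configuration described above. The model and attachment names mirror the snippet in the original report; the hostname, bucket, and keys are placeholders, and :path_style => true is carried over from the Fog snippet at the top of the thread as an assumption:

class Video < ActiveRecord::Base
  has_attached_file :videofile,
    :storage => :fog,
    :fog_public => true,
    :fog_directory => 'replay',              # bucket on the Ceph cluster
    :fog_credentials => {
      :provider              => 'AWS',       # Ceph's gateway speaks the S3 API
      :aws_access_key_id     => 'xxx',
      :aws_secret_access_key => 'xxx',
      :host                  => 'ceph-host', # the Ceph gateway, not s3.amazonaws.com
      :path_style            => true
    }
end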

mpearrow closed this Jun 30, 2014
