s3 perms diff for version when using custom filename #646

Closed
pdk opened this Issue Mar 6, 2012 · 16 comments

6 participants

@pdk
pdk commented Mar 6, 2012

I have an image uploader that resizes images to a "standard" version, and stores via fog on s3.

  version :standard do
    process :resize_to_limit => [200, 200]
  end

Everything is fine until I add this to my uploader:

  def filename
    "#{Time.now.to_i % 1000000}_#{super}" if original_filename
  end

The original file is still publicly readable on s3, but the standard version is not (access denied from s3 in web browser). If I comment out the filename method (ie, go back to default filename) the standard version is publicly readable.

@FGurevich

I believe you have a typo in your code. It should be Time.now.to_i unless you've extended the Time class with a 'toi' method

@pdk

I believe that's markdown syntax eating my underscores and displaying italics.

Let me try again.

  def filename
    "#{Time.now.to_i % 1000000}_#{super}" if original_filename
  end
@FGurevich

ahh, I see. What about this (probably not the best way to do this, but a quick solution which can be refactored later):

def filename
  super
end

version :standard do
  process :resize_to_limit => [200, 200]

  def full_filename(for_file = model.file)
    "#{Time.now.to_i % 1000000}_#{sanitize(original_filename)}" if original_filename
  end
end
@pdk

The problem is not that the filename doesn't get changed; it's that the permissions are messed up. I want both versions to have the timestamp thingy in the name (and my method did that). The problem is that when I changed the name like this, the permissions of the versioned file were different than when I didn't change the name.

@FGurevich

It seems to me that carrierwave has a problem with just that particular method. If you log into your s3 bucket, are you able to view the standard version file? Also, how are you querying that file?

@pdk

When I connect to the bucket with authentication, both the original and the version are there. They should both be public, but when the version is accessed via the public http url access is denied. The original is accessible via plain http.

@FGurevich

hmm... can you paste in your fog config initializer file?
I know the carrierwave fog instructions say to do this:

  config.fog_public = false

but I'm pretty sure it should be:

  config.fog_public = true

@pdk

Here you go. (Some bits changed to protect the guilty.)

CarrierWave.configure do |config|
  config.permissions = 0666
  config.storage = :fog

  config.s3_access_policy = :public_read

  config.fog_credentials = {
    :provider               => 'AWS',
    :aws_access_key_id      => 'blahblahblahackackack',
    :aws_secret_access_key  => 'morphalotaelephantsgoingroundandround'
  }

  if Rails.env.production?
    config.fog_directory = 'prod.oinkoink'
  elsif Rails.env.test?
    config.fog_directory = 'test.oinkoink'
  else
    config.fog_directory = 'dev.oinkoink'
  end

  config.fog_host = "http://#{config.fog_directory}.s3.amazonaws.com"  # optional, defaults to nil
  config.fog_public     = true                                      # optional, defaults to true
  # config.fog_attributes = {'Cache-Control'=>'max-age=315576000'}  # optional, defaults to {}

end
@FGurevich

What if you remove the permissions and s3_access_policy options?
As far as I know, fog does not support s3_access_policy (the way aws/s3 did); it sets the ACL via fog_public instead.
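
For reference, a pared-down initializer along those lines might look like the sketch below (a sketch only: the bucket name and credentials are placeholders, and it assumes fog storage is otherwise set up as in the carrierwave docs):

```ruby
# Sketch of a minimal initializer: no `permissions` or `s3_access_policy`
# (aws/s3-era options); `fog_public = true` is what tells fog to apply a
# public-read ACL to each uploaded object, versions included.
CarrierWave.configure do |config|
  config.storage         = :fog
  config.fog_credentials = {
    :provider              => 'AWS',
    :aws_access_key_id     => 'YOUR_KEY',    # placeholder
    :aws_secret_access_key => 'YOUR_SECRET'  # placeholder
  }
  config.fog_directory = 'your-bucket'       # placeholder
  config.fog_public    = true
end
```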

@bensie
CarrierWave member

You can also use config.fog_attributes - something like config.fog_attributes = { :acl => 'authenticated-read' } for example.

@jurisgalang

@bensie: I'm seeing the same issue as @pdk - setting config.fog_attributes = { :acl => 'authenticated-read' } does not appear to work. Reading through the code, it looks like the key has to be x-amz-acl - but that does not work either.

@bensie
CarrierWave member

Just to clarify - you all are only having issues if setting a custom filename, correct?

I want to make sure I'm narrowing this down correctly when figuring out what's wrong.

@jurisgalang

@bensie I can't say for sure. My app does mess around with generating a custom filename and path. The original file is not accessible either. In my case I'm trying to make the files on s3 read-accessible by anonymous users/clients, so I have been banging my head trying various combinations to communicate what the acl of the uploaded file and its versions should be (see my previous comment).

Here's the current incarnation of my carrier wave config:

s3 = YAML.load_file(Rails.root.join 'config/amazon_s3.yml')[Rails.env]
CarrierWave.configure do |config|
  config.permissions     = 0644
  config.fog_credentials = {
    provider:              'AWS',
    aws_access_key_id:     s3['access_key_id'],
    aws_secret_access_key: s3['secret_access_key']
  }
  config.fog_public     = true
  config.fog_attributes = { 'Cache-Control' => 'max-age=315576000', 'x-amz-acl' => 'public-read' }
end

If the posts I found on the web are to be believed, this should be setting the ACLs on the uploaded file the way I need (it doesn't).
I wonder if the issue lies in the way fog is used by carrierwave, or in fog itself...

@agibralter

I'm just seeing this issue -- I would like to use a custom filename as well, but doing that gives me permission issues. Has anyone found a way around this?

@agibralter

Ohhhh of course! It's because Time.now.to_i changes from call to call. You need to have the timestamp established in the model, not the uploader!
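
To make that failure mode concrete, here is a plain-Ruby sketch with no CarrierWave involved (the class names are made up): a filename recomputed from Time.now on every call can differ between the calls that store the original and the version, while a memoized one cannot.

```ruby
# Sketch of the two naming strategies (hypothetical classes, not CarrierWave).

class UnstableNamer
  # Recomputes the timestamp on every call, so the original file and each
  # version can be stored under different names (and different S3 keys).
  def filename(original)
    "#{Time.now.to_i % 1_000_000}_#{original}"
  end
end

class StableNamer
  # Computes the timestamp once per instance, so every call during a single
  # upload returns the same name.
  def filename(original)
    @stamp ||= Time.now.to_i % 1_000_000
    "#{@stamp}_#{original}"
  end
end

stable = StableNamer.new
first  = stable.filename("cat.png")
sleep 1.1                    # cross a second boundary, as a real upload might
second = stable.filename("cat.png")
puts first == second         # true: the memoized name is stable
```

In an uploader, the equivalent of the memoized version is what @agibralter describes: establish the timestamp once on the model (for instance in a before-save callback) and have filename read it from there, instead of calling Time.now inside filename.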

@dfurber

I wish I had an answer; however, I came here with the same problem. Eventually I noticed that the Amazon 403 errors weren't from permissions not being set properly, but from the URL being written wrong. It came down to my overridden filename method containing a reference to model.id. When I removed it, there was much rejoicing.
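
A tiny illustration of that trap (a hypothetical Model struct, not dfurber's actual code): model.id is nil until the record is saved, so a filename built from it at store time disagrees with the one rebuilt later for the URL, and S3 answers the mismatched key with a 403.

```ruby
# Hypothetical sketch: interpolating a nil id at store time yields one key,
# while rebuilding the filename after save yields another.
Model = Struct.new(:id)

def filename_for(model, original)
  "#{model.id}_#{original}"
end

stored_key = filename_for(Model.new(nil), "cat.png")  # key actually written
url_key    = filename_for(Model.new(42),  "cat.png")  # key in the public URL
puts stored_key  # "_cat.png"
puts url_key     # "42_cat.png"
```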

@bensie bensie closed this Oct 19, 2012