Also use AWS S3 subdomain URL when directory name contains a period. #809

Merged
merged 1 commit into from

4 participants

Douwe Maan, Don't Add Me To Your Organization a.k.a The Travis Bot, Thiago Fernandes Massa, Rémy Coutable
Douwe Maan

See #285 for info about the difference between AWS subdomain URLs (https://#{bucket_name}.s3.amazonaws.com) and path URLs (https://s3.amazonaws.com/#{bucket_name}).

Currently, the subdomain URL is not used for buckets whose names contain a period, such as assets.example.com, even though Amazon is perfectly fine with this and the subdomain style is actually required for buckets outside of the US Standard region.

This pull request modifies the regular expression to allow this.
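
For illustration, here is a rough sketch of the two URL styles in question (the bucket and object names are just examples, not taken from this PR):

bucket = "assets.example.com"
path   = "uploads/1/image.png"

# Subdomain style, required for buckets outside of US Standard:
"https://#{bucket}.s3.amazonaws.com/#{path}"
# => "https://assets.example.com.s3.amazonaws.com/uploads/1/image.png"

# Path style, which CarrierWave currently falls back to for this bucket:
"https://s3.amazonaws.com/#{bucket}/#{path}"
# => "https://s3.amazonaws.com/assets.example.com/uploads/1/image.png"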

Don't Add Me To Your Organization a.k.a The Travis Bot

This pull request fails (merged 0b4f79e0 into 2b32dbd).

Thiago Fernandes Massa
Owner

Hello @DouweM

Can you create a test case for it?

Thanks.

Douwe Maan

Test case added!

Thiago Fernandes Massa
Owner

Awesome, can you please squash your commits?

Douwe Maan

Done.

Thiago Fernandes Massa
Owner

@DouweM just giving you an update: it looks good. But for some reason travis-ci isn't running new pull request submissions from our repository; once it does and we're still green, I will merge.

Thanks a lot for your contribution.

Thiago Fernandes Massa thiagofm merged commit 56592f3 into from
Rémy Coutable

Actually, I think this was intentional, since it causes an SSL warning when a subdomain contains period(s).

Hmm. I've been using this for the last 8 months and haven't had any problems. Note that the old way wasn't working at all, because the subdomain style is required for buckets outside of the US Standard region. See the PR that introduced this commit: #809.

I see... For now, I guess the better trade-off is to not use bucket names with periods.

Unfortunately there's no way around bucket names with periods if you want to have your bucket accessible through a whitelabel URL such as assets.domain.com. In this case, your bucket has to be named "assets.domain.com" and CarrierWave has to connect to assets.domain.com.s3.amazonaws.com, even though this causes issues with SSL.
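
To illustrate the SSL concern (a sketch only; the helper name below is made up): S3's wildcard certificate covers *.s3.amazonaws.com, and a wildcard matches a single DNS label, so a bucket name containing periods produces a hostname the certificate does not cover.

# Hypothetical check: the wildcard covers exactly one label before s3.amazonaws.com.
def covered_by_s3_wildcard?(host)
  !!(host =~ /\A[^.]+\.s3\.amazonaws\.com\z/)
end

covered_by_s3_wildcard?("assets.s3.amazonaws.com")            #=> true
covered_by_s3_wildcard?("assets.domain.com.s3.amazonaws.com") #=> false, hence the browser warning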

Showing with 15 additions and 1 deletion.
  1. +1 −1  lib/carrierwave/storage/fog.rb
  2. +14 −0 spec/storage/fog_helper.rb
2  lib/carrierwave/storage/fog.rb
@@ -286,7 +286,7 @@ def public_url
case @uploader.fog_credentials[:provider]
when 'AWS'
# if directory is a valid subdomain, use that style for access
- if @uploader.fog_directory.to_s =~ /^(?:[a-z]|\d(?!\d{0,2}(?:\d{1,3}){3}$))(?:[a-z0-9]|(?![\-])|\-(?![\.])){1,61}[a-z0-9]$/
+ if @uploader.fog_directory.to_s =~ /^(?:[a-z]|\d(?!\d{0,2}(?:\d{1,3}){3}$))(?:[a-z0-9\.]|(?![\-])|\-(?![\.])){1,61}[a-z0-9]$/
"https://#{@uploader.fog_directory}.s3.amazonaws.com/#{path}"
else
# directory is not a valid subdomain, so use path style for access
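
For reference, a quick sketch (not part of the diff) of how the widened character class behaves against the bucket names discussed in this thread, using the updated pattern verbatim:

pattern = /^(?:[a-z]|\d(?!\d{0,2}(?:\d{1,3}){3}$))(?:[a-z0-9\.]|(?![\-])|\-(?![\.])){1,61}[a-z0-9]$/

!!("assets.example.com" =~ pattern) #=> true:  subdomain-style URL is used
!!("SiteAssets" =~ pattern)         #=> false: falls back to the path-style URL
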
14 spec/storage/fog_helper.rb
@@ -61,6 +61,20 @@ class FogSpec#{fog_credentials[:provider]}Uploader < CarrierWave::Uploader::Base
@fog_file.url.should_not be_nil
end
end
+
+ it "should use a subdomain URL for AWS if the directory is a valid subdomain" do
+ if @provider == 'AWS'
+ @uploader.stub(:fog_directory).and_return('assets.site.com')
+ @fog_file.public_url.should include('https://assets.site.com.s3.amazonaws.com')
+ end
+ end
+
+ it "should not use a subdomain URL for AWS if the directory is not a valid subdomain" do
+ if @provider == 'AWS'
+ @uploader.stub(:fog_directory).and_return('SiteAssets')
+ @fog_file.public_url.should include('https://s3.amazonaws.com/SiteAssets')
+ end
+ end
end
context "with fog_host" do