External services (S3 specifically) respond 403 Forbidden to VM-hosted HTML #192
Possible workaround: replace all of the S3-hosted static links so the files are served locally instead. Possible problems with the workaround:
My understanding is that S3 only holds uploaded files temporarily for Celery to snatch, and then Celery does magic with Google Drive and drops files locally rather than on S3. If that is true, then the browser's problem with S3 will not affect uploaded files.
You will either need to log in to S3 and find the setting that lists allowable domains, and add the new domain:port, or you need to disable the S3 staticfile management. It's not hard; just check out the differences between the production and development settings.
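For concreteness, here is a minimal sketch of what that production/development split typically looks like in a Django project using django-storages; the exact file layout and the bucket URL are assumptions, not this repo's actual settings.

```python
# settings/production.py -- collectstatic pushes to S3 and pages link to it
# (storage backend path is the django-storages one of this era; an assumption)
STATICFILES_STORAGE = 'storages.backends.s3boto.S3BotoStorage'
STATIC_URL = 'https://s3.amazonaws.com/karma-beta/'

# settings/development.py -- S3 staticfile management disabled;
# Django's staticfiles app serves everything locally
STATICFILES_STORAGE = 'django.contrib.staticfiles.storage.StaticFilesStorage'
STATIC_URL = '/static/'
```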
The super good news is that the static link was programmed appropriately, with just a single config entry as far as I can tell. Actually, there is a second instance in the secret file static_s3.py, which is imported in the above file. That's the only place the static_s3 version of the URL is used, and it is completely ignored by line 125 noted above, which overrides the variable imported from secret.
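To illustrate the override being described, the pattern looks roughly like this; every name here is an assumption based on the comment, not the actual repo contents.

```python
# settings sketch; file and variable names assumed from the comment above
from secret.static_s3 import STATIC_URL  # the second instance, in the secret file

# ... many other settings ...

# the assignment around "line 125": rebinding STATIC_URL here means the
# value imported from secret above is silently discarded
STATIC_URL = 'https://s3.amazonaws.com/karma-beta/'
```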
According to http://docs.aws.amazon.com/AmazonS3/latest/dev/WebsiteHosting.html, the URL format for static website hosting is http://&lt;bucket-name&gt;.s3-website-&lt;region&gt;.amazonaws.com. We appear to be using the S3 RESTful API instead (http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html). In other words, even though we're serving static files off S3 at present, we aren't actually using the AWS S3 Static Website Hosting feature.

The doc lists several differences between the two endpoints, but none of them hints at why I can copy/paste an S3 static link and load it in my browser, yet the same link returns 403 in the same browser when loaded from a web page hosted locally. Again, as far as I can tell, it all comes down to REFERER, but I can find no setting in my AWS Management Console that cares about it.
Following up from the previous comment, more info about the REST API: http://docs.aws.amazon.com/AmazonS3/latest/API/APIRest.html

After toying around a bit, I found out each individual file has its own permissions. There are grantees on each bucket, each directory, and each file. It wasn't obvious how the grantees are defined, but there's a doc: http://docs.aws.amazon.com/AmazonS3/latest/dev/ACLOverview.html

"When using ACLs, a grantee can be an AWS account or one of the predefined Amazon S3 groups. However, the grantee cannot be an IAM user."

So the good news is that grantees are either everybody, all authenticated AWS users, or particular AWS users. There is no possible way for REFERER to matter at all here.
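Those per-object grantees can also be inspected from code instead of clicking through the console; a sketch using the boto library of that era, with bucket and key names as placeholders:

```python
import boto

conn = boto.connect_s3()  # uses AWS credentials from the environment/config
bucket = conn.get_bucket('karma-beta')
key = bucket.get_key('path/to/asset.js')  # placeholder key name

# print every grant on this one object: grantee type, who, and permission
for grant in key.get_acl().acl.grants:
    print(grant.type, grant.uri or grant.id, grant.permission)
```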
Someone was having 403 Forbidden on S3 and solved the problem, but two of the answers mentioned that the CloudFront CDN needs to be configured in certain ways. So here's a question: does CloudFront host S3 transparently? If so, this black box is much harder to penetrate. It'd be super good to just hop on the group's AWS Management Console and poke around to see how exactly things are set up and whether anything needs to be changed. The alternative is to stop using S3 for static hosting and have nginx hijack static hosting locally.
Re: workaround. The page looks good on localhost, and jQuery/JavaScript is actually loading.
Did you try checking the root of the static directory on S3? It might be a permissions error there.
Bolo is related to Django access, but this problem is experienced directly between the browser and S3. Also ... no access to S3 perms :(
Nuked the VM and reinitialized it. But now the CSS and JS actually load from S3 as if there were no problem. More Alice in Wonderland craziness. Let's say it was the caterpillar getting my VM all smoky with his hookah. Closing ticket.
This is a problem again. It mysteriously solved itself last time, but I think we missed a step: we have not set up our S3 for static hosting at all! I think it's a fluke that we can use it the way we do. I looked over the original instructions, which didn't say what to do with the S3 setup, only how to make Django push to S3. I'm convinced we need to set our S3 up for static hosting.

There's a button. It says "You can host your static website entirely on Amazon S3. Once you enable your bucket for static website hosting, all your content is accessible to web browsers via the Amazon S3 website endpoint for your bucket." There are three options on our buckets: do not enable website hosting, enable website hosting, or redirect all requests to another host name.

Guess which one we've selected for all our buckets? "Do not enable website hosting." If we do enable website hosting, the URL will take the website-endpoint form noted earlier. This should be an easy thing to fix. Check that button. Replace our STATIC_URL with the resulting endpoint.
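The same switch can be flipped from code; a sketch with boto, assuming the karma-beta bucket (the printed endpoint is what STATIC_URL would point at):

```python
import boto

conn = boto.connect_s3()
bucket = conn.get_bucket('karma-beta')

# equivalent of checking "Enable website hosting" in the console
bucket.configure_website(suffix='index.html')

# the website endpoint for the bucket, e.g.
# karma-beta.s3-website-us-east-1.amazonaws.com (region depends on the bucket)
print(bucket.get_website_endpoint())
```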
Turns out each object does have its own permissions. Not sure why I didn't see that before; it used to just say "Details" when I selected an object, but now I see "Permissions." Each object in the bucket has an "Open/Download" permission, which must be assigned to Everyone.

I wonder if those permissions appeared because I enabled static S3 hosting? Let's disable it and find out. Nope. Somehow I just missed it the last go through.

Here's the S3 Policy Generator: http://awspolicygen.s3.amazonaws.com/policygen.html

According to this, Principal "AWS": "*" should be all anonymous users. Here's the policy I generated for Everyone to GetObject on all objects in karma-beta's S3:

```json
{
  "Id": "Policy1389912543493",
  "Statement": [
    {
      "Sid": "Stmt1389912533433",
      "Action": [
        "s3:GetObject"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::karma-beta",
      "Principal": {"AWS": "*.*"}
    }
  ]
}
```

That won't save on the S3 bucket; it says the principal is invalid. Unrelated: this might be worth looking into to prevent other sites from hosting directly out of our S3 bucket.
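For what it's worth, two things in that generated policy look off: the Principal for anonymous access is the single string "*" (not "*.*"), and s3:GetObject applies to objects, so the Resource needs a trailing /*. A sketch applying the corrected policy with boto; treat this as an assumption about the intended policy, not what was actually deployed:

```python
import json
import boto

# corrected policy: Principal "*" and Resource ending in /*
policy = {
    "Id": "Policy1389912543493",
    "Statement": [{
        "Sid": "Stmt1389912533433",
        "Action": ["s3:GetObject"],
        "Effect": "Allow",
        "Resource": "arn:aws:s3:::karma-beta/*",  # objects, not the bucket itself
        "Principal": "*",                         # all anonymous users
    }],
}

conn = boto.connect_s3()
conn.get_bucket('karma-beta').set_policy(json.dumps(policy))
```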
There has got to be a way to let Everyone Open/Download without setting it on every single file.
Selecting a directory shows only Details, no Permissions. Checking more than one object shows only Details, no Permissions. No wonder I missed it: permissions are only available on a single object, one at a time.
Alright, well, there's no nice way to batch this through S3's web interface, but the problem is clearly individual file permissions. Solution: go through each file, one by one, and add Everyone Read/Download. This is good enough for this ticket.
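That said, the batching the console lacks is only a few lines of boto, in case going file by file gets old; a sketch assuming default credentials and the karma-beta bucket:

```python
import boto

conn = boto.connect_s3()
bucket = conn.get_bucket('karma-beta')

# grant Everyone Open/Download (the 'public-read' canned ACL) on every
# object in the bucket, instead of one object at a time in the console
for key in bucket.list():
    key.set_acl('public-read')
    print('public-read set on', key.name)
```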
A number of assets are coded to `https://s3.amazonaws.com/fc_filepicker/`. According to the web console, Amazon is responding with 403 Forbidden to all of them, be they JavaScript, SVG, or CSS.

I thought it might be a REFERER filter on S3. Andrew gave me the idea to hack the DNS a bit. I modified `/etc/hosts` on my host system (where I run my browser) so that `karmanotes.org` points to 127.0.0.1. That worked in the sense that my URL bar said `karmanotes.org:6659` and loaded the VM server. However, Amazon still chucked 403 errors my way.

This is very perplexing. A number of the links can be loaded if copy/pasted into my browser, but when instructed to load the same links from HTML out of the VM, Amazon gives my browser 403.
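For anyone reproducing this, the REFERER theory can be tested directly by requesting the same asset with and without a Referer header; a sketch (the asset filename is a placeholder):

```python
import urllib.error
import urllib.request

URL = 'https://s3.amazonaws.com/fc_filepicker/example.js'  # placeholder asset

def status(request):
    # urlopen raises HTTPError on 403, but the error still carries the code
    try:
        return urllib.request.urlopen(request).getcode()
    except urllib.error.HTTPError as err:
        return err.code

bare = urllib.request.Request(URL)
referred = urllib.request.Request(
    URL, headers={'Referer': 'http://karmanotes.org:6659/'})

print('without Referer:', status(bare))
print('with Referer:   ', status(referred))
```

If both requests come back with the same status, a Referer filter is ruled out for that object.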