
Access without making Bucket public #11

Closed
vaulstein opened this issue Mar 6, 2018 · 16 comments

@vaulstein

Is there a way to use the index.html without making the bucket public?

@jflasher

jflasher commented Mar 6, 2018

@vaulstein can you explain a bit more the use case you have in mind?

@vaulstein
Author

vaulstein commented Mar 6, 2018 via email

@vaulstein
Author

@jflasher I tried using Cognito to achieve this, following the approach in this link - Cognito for S3 Access, but I still receive a Forbidden response.

@john-aws
Contributor

john-aws commented Mar 9, 2018

Hi, we're looking at some options to provide a variant that includes authentication using AWS credentials.

@vaulstein
Author

vaulstein commented Mar 9, 2018 via email

@john-aws
Contributor

While not a complete solution, a short-term workaround for this request is to make the bucket public but restrict access to whitelisted IPs with a condition in the S3 bucket policy, for example:

"Condition": {
    "IpAddress": {
        "aws:SourceIp": "1.2.3.4/32"
    }
}
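
A condition like this only takes effect inside a full policy statement. As a minimal sketch, assuming Python and placeholder bucket/CIDR values, the complete public-read-with-IP-whitelist policy could be built like so (the resulting JSON is what you would paste into the bucket policy editor):

```python
import json

def ip_restricted_read_policy(bucket, cidr):
    """Build a public-read S3 bucket policy restricted to one source CIDR.
    `bucket` and `cidr` are placeholders to replace with real values."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "IPWhitelistRead",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{bucket}/*",
            # The condition from the comment above, embedded in a statement:
            "Condition": {"IpAddress": {"aws:SourceIp": "%s" % cidr}},
        }],
    }

print(json.dumps(ip_restricted_read_policy("mybucket", "1.2.3.4/32"), indent=2))
```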

@davereinhart

I would still like to see a version that uses makeRequest instead of makeUnauthenticatedRequest, if possible, so that non-public buckets can be used. I'm hoping to use the Condition operator with a StringLike argument to restrict access to a specific subfolder within a bucket. It doesn't look like it's possible to combine two different condition operators, so I can't use IpAddress and StringLike together. If that's something you can add or help me implement, I'd really appreciate it!

@john-aws
Contributor

john-aws commented Jun 20, 2018

@geomapdev Apologies for the late response. Could you implement what you need as follows?

  1. have the user always visit https://s3.amazonaws.com/mybucket/index.html#myfolder/ rather than https://s3.amazonaws.com/mybucket/index.html

  2. implement the following S3 bucket policy for mybucket (replace mybucket, myfolder, and 1.2.3.4/32 as appropriate):

{
  "Version": "2012-10-17",
  "Id": "prefixpolicy",
  "Statement": [
      {
          "Sid": "index",
          "Effect": "Allow",
          "Principal": "*",
          "Action": "s3:GetObject",
          "Resource": [
              "arn:aws:s3:::mybucket/index.html"
          ],
          "Condition": {
              "IpAddress": {
                  "aws:SourceIp": "1.2.3.4/32"
              }
          }
      },
      {
          "Sid": "prefixlist",
          "Effect": "Allow",
          "Principal": "*",
          "Action": "s3:List*",
          "Resource": [
              "arn:aws:s3:::mybucket",
              "arn:aws:s3:::mybucket/*"
          ],
          "Condition": {
              "StringLike": {
                  "s3:prefix": "myfolder/*"
              },
              "IpAddress": {
                  "aws:SourceIp": "1.2.3.4/32"
              }
          }
      },
      {
          "Sid": "prefixobjects",
          "Effect": "Allow",
          "Principal": "*",
          "Action": "s3:Get*",
          "Resource": [
              "arn:aws:s3:::mybucket/myfolder",
              "arn:aws:s3:::mybucket/myfolder/*"
          ],
          "Condition": {
              "IpAddress": {
                  "aws:SourceIp": "1.2.3.4/32"
              }
          }
      }
  ]
}

Without modifications to the code, the user would still see, and could click, the breadcrumb for the top-level folder, but navigating there would fail with Access Denied.

This could potentially be extended to multiple unauthenticated users accessing different folders in the same bucket if you can differentiate them by source IP.
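
That extension can be sketched programmatically. Assuming a simple mapping from source CIDR to folder (names and Sids below are illustrative, not from the tool), something like this would generate the per-user statements following the same pattern as the policy above:

```python
def per_ip_folder_statements(bucket, ip_to_folder):
    """Build Allow statements granting each source CIDR access to its own
    folder, mirroring the prefixlist/prefixobjects pattern above."""
    statements = []
    for cidr, folder in ip_to_folder.items():
        # Object reads, restricted to the folder and to the source IP.
        statements.append({
            "Sid": f"read-{folder}",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:Get*",
            "Resource": [f"arn:aws:s3:::{bucket}/{folder}",
                         f"arn:aws:s3:::{bucket}/{folder}/*"],
            "Condition": {"IpAddress": {"aws:SourceIp": cidr}},
        })
        # Listing, restricted to the folder prefix and to the source IP.
        statements.append({
            "Sid": f"list-{folder}",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:List*",
            "Resource": f"arn:aws:s3:::{bucket}",
            "Condition": {
                "StringLike": {"s3:prefix": f"{folder}/*"},
                "IpAddress": {"aws:SourceIp": cidr},
            },
        })
    return statements
```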

Anyhow, hope this gives you some ideas.

@john-aws
Contributor

@vaulstein Hi, I have uploaded an alpha of version 2 of S3 Explorer that can be used with private S3 buckets.

This version is optimized for private buckets, so it always asks for a bucket name and credentials when loading, but it can also be used with public S3 buckets. You can host the tool in any S3 bucket you like and use it to explore any other bucket(s), assuming the chosen target bucket has appropriate CORS settings and your IAM credentials have sufficient S3 permissions.

Note some of the key features of this v2 alpha:

  • support for private buckets
  • support for file uploads
  • support for file deletion

If you choose to explore a private S3 bucket then you will need to supply AWS credentials. Credentials can be provided in one of the following forms:

  • IAM credentials: access key ID and secret access key
  • IAM credentials with MFA: access key ID, secret access key, and authentication code from an MFA device
  • STS credentials: access key ID, secret access key, and session token
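
For illustration only, a small helper showing which fields each of the three forms requires (the field names here are my own, not the tool's actual API):

```python
def classify_credentials(creds):
    """Return which supported credential form a dict of fields matches.
    Field names (access_key_id, etc.) are illustrative placeholders."""
    has = lambda key: bool(creds.get(key))
    if has("access_key_id") and has("secret_access_key"):
        if has("session_token"):
            return "sts"            # temporary STS credentials
        if has("mfa_code"):
            return "iam+mfa"        # long-term keys plus an MFA code
        return "iam"                # plain long-term IAM keys
    raise ValueError("access key ID and secret access key are always required")
```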

@john-aws
Contributor

@geomapdev Please note availability of an alpha of version 2 of S3 Explorer, supporting authentication for private S3 buckets.

@Pongchaiwat

Hi John,
I tried your files but I have a question: I need to put the HTML, JS, and CSS in the bucket, right?

@john-aws
Contributor

john-aws commented Jun 26, 2018

@Pongchaiwat Correct, you'll need all 3 files (HTML, CSS, and JS) in the same S3 bucket (they should all be publicly readable, but you can also configure an IP whitelist in your S3 bucket policy if desired).

Note that you could create a single HTML file containing all three inline for ease of distribution, but we chose to separate them for v2 because the file sizes were getting large.

@Pongchaiwat

I tried it but it shows an error like this:
(screenshot of error, 2018-06-26 3:14 PM)

@john-aws
Contributor

john-aws commented Jun 26, 2018

@Pongchaiwat Please ensure that your target S3 bucket (test-test-0001) has the correct CORS configuration, especially the AllowedOrigin. You have a few options when it comes to AllowedOrigin.

  1. To allow cross-origin requests from a web page at https://bucket1.s3.amazonaws.com/index.html to bucket2, supply the following in the CORS configuration on bucket2:
<AllowedOrigin>https://bucket1.s3.amazonaws.com</AllowedOrigin>
  2. Access your web page at https://s3.amazonaws.com/bucket1/index.html (path-style URL) instead of https://bucket1.s3.amazonaws.com/index.html (virtual-hosted-style URL) and supply the following in the CORS configuration on bucket2:
<AllowedOrigin>https://s3.amazonaws.com</AllowedOrigin>
  3. One final option is to allow all remote origins, if appropriate:
<AllowedOrigin>*</AllowedOrigin>
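
The same three options can also be expressed in the dict form that boto3's put_bucket_cors accepts. This is a sketch: the methods, headers, and max-age shown are illustrative choices, not requirements of the tool.

```python
def cors_config(allowed_origin):
    """Build a boto3-style CORSConfiguration allowing reads from one origin.
    Pass "*" to allow all origins (option 3 above)."""
    return {
        "CORSRules": [{
            "AllowedOrigins": [allowed_origin],
            "AllowedMethods": ["GET", "HEAD"],   # illustrative; add PUT/DELETE for uploads/deletes
            "AllowedHeaders": ["*"],
            "MaxAgeSeconds": 3000,
        }]
    }

# One config per option: virtual-hosted origin, path-style origin, any origin.
configs = {origin: cors_config(origin)
           for origin in ("https://bucket1.s3.amazonaws.com",
                          "https://s3.amazonaws.com",
                          "*")}
```

With real credentials this would be applied via s3.put_bucket_cors(Bucket="bucket2", CORSConfiguration=configs[origin]).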

@Pongchaiwat

Pongchaiwat commented Jun 26, 2018

I tried to use a private S3 bucket.
(screenshot of error, 2018-06-26 4:48 PM)

@john-aws
Contributor

@Pongchaiwat I've created #27 to track this issue with signature v4 regions.

@john-aws john-aws closed this as completed Jan 3, 2019