Upload a File to S3 #49

Closed
jayair opened this Issue Apr 10, 2017 · 70 comments

jayair commented Apr 10, 2017

@jayair jayair added the Discussion label Apr 10, 2017

geirman commented Apr 15, 2017

I see the following 400 error in my dev console (Network > XHR) under the name ?max-keys=0. I've searched my code for 'us-east-1' and find zero results — I've converted all those instances to 'us-east-2' to match the values I got back from AWS. Not too sure what is causing this, and I was initially concerned, but after checking the database... everything seems to have been inserted into DynamoDB and uploaded to S3 correctly. ¯\\_(ツ)_/¯

<Error>
    <Code>AuthorizationHeaderMalformed</Code>
    <Message>The authorization header is malformed; the region 'us-east-1' is wrong; expecting 'us-west-1'</Message>
    <Region>us-west-1</Region>
    <RequestId>39E99C1D42C6E600</RequestId>
    <HostId>tfuC/uhW4xhxPwMW+kqicWQxCdTznTrsYpM+lr40QGIyriIFysywMKlnnKqOGIKQ88SqN7SxWxE=</HostId>
</Error>

(screenshot: 2017-04-15_1609)


jayair commented Apr 16, 2017

@geirman still having this issue? I noticed you commented on the Delete Note chapter.

geirman commented Apr 16, 2017

Yes, it has something to do with the S3 upload. It works, but I get the error each time I attach something. No error when I just update the comment.


jayair commented Apr 16, 2017

You get the error every time you create a new note with a file as an attachment? But the file is uploaded successfully? That's strange.

Where are you seeing this error?

<Error>
    <Code>AuthorizationHeaderMalformed</Code>
    <Message>The authorization header is malformed; the region 'us-east-1' is wrong; expecting 'us-west-1'</Message>
    <Region>us-west-1</Region>
    <RequestId>39E99C1D42C6E600</RequestId>
    <HostId>tfuC/uhW4xhxPwMW+kqicWQxCdTznTrsYpM+lr40QGIyriIFysywMKlnnKqOGIKQ88SqN7SxWxE=</HostId>
</Error>
geirman commented Apr 16, 2017

Every time I create or update a note and attach something. It's successfully uploaded though, which I agree seems strange. So it's not blocking me. It would be nice to understand why it's happening though.

I found the error under Network > XHR > click on the item with the 400 error (?max-keys=0) > Preview > then expand the Error node.

(screenshot: 2017-04-15_2330)

It's deployed now, so you can see for yourself... http://notes-app-client-geirman.s3-website-us-east-1.amazonaws.com/


jayair commented Apr 17, 2017

I played around with your app, and I think I know what's going on. The AWS JS SDK has its region set to us-east-1, but your S3 file uploads bucket is in us-west-1. Apparently the SDK retries with the correct region, which is why the upload ends up working. You can set the correct region before you do the upload like so:

  const s3 = new AWS.S3({
    region: 'us-west-1',
    params: {
      Bucket: config.s3.BUCKET,
    }
  });

The tutorial doesn't need to do this because the region set for the AWS JS SDK through the AWS.config.update({ region: config.cognito.REGION }); call is the same as the region of the S3 file uploads bucket.

You can read more about it here: aws/aws-sdk-js#986 (comment)
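As a hedged illustration of the point above (the config shape here is made up, loosely mirroring a config.js, not the tutorial's exact code), the fix amounts to building the S3 client from the bucket's own region rather than the Cognito region the SDK was configured with:

```javascript
// Illustrative config shape (assumed, not the tutorial's exact file).
// The key point: the S3 client should use the bucket's own region,
// which may differ from the Cognito region used for the SDK default.
const config = {
  cognito: { REGION: 'us-east-1' },
  s3: { REGION: 'us-west-1', BUCKET: 'notes-app-uploads' },
};

// Hypothetical helper: derive the constructor params for `new AWS.S3(...)`.
function s3ClientParams(cfg) {
  return {
    region: cfg.s3.REGION, // NOT cfg.cognito.REGION
    params: { Bucket: cfg.s3.BUCKET },
  };
}

console.log(s3ClientParams(config).region); // prints "us-west-1"
```

With params built this way, the SDK signs the request for the bucket's region on the first try, so the AuthorizationHeaderMalformed 400 (and the silent retry) goes away.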

geirman commented Apr 17, 2017

Thanks @jayair, you've been a huge help. Setting the region to 'us-west-1' does resolve the problem, but I can't for the life of me figure out how that makes any sense. Everything I'm seeing indicates that the region should be 'us-east-2'. I tried 'us-east-2' for giggles, but it errored as well. Where should I have been looking to know that 'us-west-1' was the right value?

(screenshot: 2017-04-17_1113)


jayair commented Apr 17, 2017

What about the bucket that you set up for file uploads? The one we do in this chapter - http://serverless-stack.com/chapters/create-an-s3-bucket-for-file-uploads.html

geirman commented Apr 17, 2017

That's the one


jayair commented Apr 18, 2017

Thanks.

I don't think the region in the URL of the AWS Console is the region of the bucket. The console does show the correct region, either in the list of buckets or on the bucket page. You can see it in this screenshot.

(screenshot: select-created-s3-bucket)

And here is the US East (N. Virginia) - us-east-1 mapping http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region

nerdguru commented Apr 19, 2017

I'm getting a 403 on the PUT and rechecked my CORS settings on the bucket, which look OK. What else should I be looking at to troubleshoot here?

geirman commented Apr 20, 2017

@jayair Good call! The region thing seems like an important detail to AWS, but from my perspective it just gets in the way. I wish we could abstract it away.

(screenshot)

I created a CodeStar project this morning, and as I was checking it out I went back to my AWS console and navigated back to it... and it was gone! I was confused, so I created another. Both seemed to work, so I kept scratching my head and eventually figured out that my demo 1 was in a different region. Not sure why I switched regions, but it's a confusing detail that I continually seem to stumble on. (Sorry to get off topic.)

(screenshot)


jayair commented Apr 20, 2017

@nerdguru can I see the full error? I think you can expand the 403 error in the console and it might give you some info on why it's failing. Also, let's see what the URL endpoint is for the PUT request.

@geirman yeah, in the future we might look into building something that abstracts out these details and gotchas. If you come across some ideas, send them our way 😉

nerdguru commented Apr 20, 2017

@jayair Here's what gets output in the Chrome console:

pete-notes-app.s3.amazonaws.com/us-east-1%3A636ea0f9-5d92-41f2-86eb-93aa67b66968-1492639359454-addams.txt:1 PUT https://pete-notes-app.s3.amazonaws.com/us-east-1%3A636ea0f9-5d92-41f2-86eb-93aa67b66968-1492639359454-addams.txt 403 (Forbidden)

That path looks right to me, but your eyes might reveal something.

abagasra98 commented Apr 21, 2017

@nerdguru I had the same problem. AWS throws a 403 error because the user permissions associated with the authorized users (of your identity pool) do not grant them access to read/write S3 data.

The solution is to go into the IAM console, go to the Roles tab on the side, and click on the one associated with your Identity Pool. For reference, mine was called "Cognito_notesidentitypoolAuth_Role". Once you're on the Summary page, click Attach Policy and choose the following: AmazonS3FullAccess.


fwang commented Apr 21, 2017

@abagasra98 is correct in that a lack of S3 upload permission can cause the 403 error. Granting the identity pool AmazonS3FullAccess solves the problem, but it also grants a user access to edit/remove files uploaded by other users. A subtle tweak to the solution is to grant users edit/remove access only to files they uploaded themselves.

@nerdguru Let's first take a look at the IAM policies assigned to the identity pool. As @abagasra98 suggested, go to IAM console, click on Roles in the left menu, click on Cognito_notesidentitypoolAuth_Role, click on Show Policy near the bottom of the page.

nerdguru commented Apr 27, 2017

@abagasra98 and @fwang, that was it, thanks so much. I clearly missed that step when setting up the Identity Pool. I changed that policy to the one shown on that step and now it works like a champ. My .txt file I selected in the app shows up with the expected prefixed name in my bucket.

Sorry it took me so long to find the quiet time to try it out 8)

alpiepho commented May 11, 2017

@jayair adding the AmazonS3FullAccess policy allows me to upload files now. Two questions:

  1. I didn't follow the comment from @fwang. Is there a way to tighten that access? (Details would be appreciated.)
  2. Did I miss a step in the tutorial?

Thanks for all the help here.


fwang commented May 11, 2017

@alpiepho the policy allowing the Identity Pool to access S3 resources was defined in Create a Cognito Identity Pool chapter. When the Identity Pool was first created, we attached the following policy:

{
  "Version": "2012-10-17",
  "Statement": [
  ...,
    {
      "Effect": "Allow",
      "Action": [
        "s3:*"
      ],
      "Resource": [
        "arn:aws:s3:::YOUR_S3_UPLOADS_BUCKET_NAME/${cognito-identity.amazonaws.com:sub}*"
      ]
    }
  ]
}

This grants access to the YOUR_S3_UPLOADS_BUCKET_NAME bucket, and specifically to files in that bucket prefixed with the user's identity ID.
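To see why the prefix matters, here's a hedged sketch (the helper name and sample values are made up, not the tutorial's exact code) of how an upload key can be constructed so it falls under that `${cognito-identity.amazonaws.com:sub}*` Resource pattern:

```javascript
// Hypothetical helper: build an S3 object key that starts with the
// Cognito identity ID, so the per-user IAM Resource pattern matches it.
function makeUploadKey(identityId, fileName, timestamp) {
  return `${identityId}-${timestamp}-${fileName}`;
}

// Sample identity ID is illustrative; real ones look like
// "us-east-1:636ea0f9-5d92-41f2-...".
const key = makeUploadKey('us-east-1:636e-example-id', 'addams.txt', 1492639359454);
console.log(key); // prints "us-east-1:636e-example-id-1492639359454-addams.txt"
```

A key built this way is covered by the policy; a key without the identity prefix would be denied, which is one way these 403s arise.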

hutualive commented Aug 2, 2017

Why does

const uploadedFilename = (this.file)
 ? (await s3Upload(this.file, this.props.userToken)).Location
 : null;

return a URL?

In s3Upload, the returned object just has:

return s3.upload({
    Key: filename,
    Body: file,
    ContentType: file.type,
    ACL: 'public-read',
  }).promise();

I do not see a key like "Location".

thanks.


jayair commented Aug 2, 2017

@hutualive These are the AWS SDK docs for the upload method: http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#upload-property. It returns the Location property that we use. Our own s3Upload method is simply returning a Promise that will eventually give us the object containing the Location property.
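A minimal sketch of that flow with a stubbed client (the stub and bucket URL are made up; the real call is the `s3.upload(...).promise()` shown above, whose resolved value includes a Location field per the SDK docs):

```javascript
// Stub standing in for the AWS SDK S3 client, so the promise shape
// can be seen without credentials. upload(...).promise() resolves to
// an object that includes Location (the file's URL).
const fakeS3 = {
  upload(params) {
    return {
      promise: () =>
        Promise.resolve({
          Key: params.Key,
          Location: `https://example-bucket.s3.amazonaws.com/${params.Key}`,
        }),
    };
  },
};

async function s3Upload(file) {
  const result = await fakeS3
    .upload({ Key: file.name, Body: file, ContentType: file.type })
    .promise();
  return result.Location; // this is where the URL comes from
}

s3Upload({ name: 'addams.txt', type: 'text/plain' }).then((url) =>
  console.log(url)
);
```

So even though the code in s3Upload never mentions Location, the object the promise resolves to carries it.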

michaelcuneo commented Aug 10, 2017

I have an odd problem... my DynamoDB tables and S3 file uploads appear to be updating correctly: if I log in to the AWS console and look in the S3 bucket and the DynamoDB table, I see the proper data. But after a call, the Creating indicator just spins and spins; eventually I get these errors in my console.

PUT https://hrs-notes-uploads.s3-ap-southeast-2.amazonaws.com/undefined-1502368754577-IMG_0454.jpg net::ERR_CONNECTION_ABORTED
hrs-notes-uploads.s3-ap-southeast-2.amazonaws.com/?max-keys=0:1 GET https://hrs-notes-uploads.s3-ap-southeast-2.amazonaws.com/?max-keys=0 403 (Forbidden)
xhr.js?28e2:81 PUT https://hrs-notes-uploads.s3-ap-southeast-2.amazonaws.com/undefined-1502368754577-IMG_0454.jpg 403 (Forbidden)

No idea what I've done wrong.

michaelcuneo commented Aug 10, 2017

Now I've got a new error with seemingly no changes whatsoever. :o

POST https://5qyf9lxnte.execute-api.ap-southeast-2.amazonaws.com/prod/notes 403 ()
notes:1 Fetch API cannot load https://5qyf9lxnte.execute-api.ap-southeast-2.amazonaws.com/prod/notes. No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://192.168.0.10:3000' is therefore not allowed access. The response had HTTP status code 403. If an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled.
michaelcuneo commented Aug 10, 2017

Disregard all that, I did a stupid thing. Circularly tried to push /notes ... N.B. don't do that. :)


jayair commented Aug 10, 2017

@michaelcuneo Glad you figured it out.

designpressure commented Sep 19, 2017

@fwang I have the same 403 error. I've verified my IAM role's policy and it is exactly as requested, but I still have the problem... What should I check?
I have also verified CORS:

<CORSConfiguration>
<CORSRule>
   <AllowedOrigin>*</AllowedOrigin>
   <AllowedMethod>GET</AllowedMethod>
   <MaxAgeSeconds>3000</MaxAgeSeconds>
   <AllowedHeader>Authorization</AllowedHeader>
</CORSRule>
</CORSConfiguration>

and policy in IAM:

{
    "Effect": "Allow",
    "Action": [
        "s3:*"
    ],
    "Resource": [
        "arn:aws:s3:::notes-app-api-prod-ZzZZzzzZzzz/${cognito-identity.amazonaws.com:sub}*"
    ]
}

jayair commented Sep 19, 2017

@designpressure That CORS block that you posted is the default one. The one we use in the tutorial (https://serverless-stack.com/chapters/create-an-s3-bucket-for-file-uploads.html) looks like this:

<CORSConfiguration>
	<CORSRule>
		<AllowedOrigin>*</AllowedOrigin>
		<AllowedMethod>GET</AllowedMethod>
		<AllowedMethod>PUT</AllowedMethod>
		<AllowedMethod>POST</AllowedMethod>
		<AllowedMethod>HEAD</AllowedMethod>
		<MaxAgeSeconds>3000</MaxAgeSeconds>
		<AllowedHeader>*</AllowedHeader>
	</CORSRule>
</CORSConfiguration>

Not sure if you missed it but give that a try.

designpressure commented Sep 21, 2017

Yeah, that was the problem, now it uploads fine, thanks.

QuantumInformation commented Oct 5, 2017

Note: if you get an error that says AccessDenied, your policy for the auth role is likely incorrect; for me it was a wrong bucket setting.

picwell-mgeiser commented Dec 31, 2017

I checked and the permissions look correct.

If I go into IAM > Roles > Cognito_NotesAppPoolAuth_role and expand the policy and click on S3, under Write, the PutObject resource is:

BucketName: string like pickme-sandbox-reactapp-files
ObjectPath: string like ${cognito-identity.amazonaws.com:sub}*

Also, would the error specifically say CORS 'Access-Control-Allow-Origin' if it were a role config error? I know error messages aren't always 100% spot-on accurate, but I'd guess the CORS failure would happen before an ACL check, and probably in a different component, so I'm guessing it is unlikely that an ACL error would generate a CORS 'Access-Control-Allow-Origin' error... just sayin' :)

How can I better trace and diagnose where exactly this failed?


jayair commented Dec 31, 2017

@picwell-mgeiser AWS doesn't make it easy to debug these. The main thing I would check for is typos. Make sure the bucket name is correct both in the roles and in your upload code. And check the region as well.

jlissner commented Jan 18, 2018

@picwell-mgeiser I had the same issue; I had a typo in the CORS configuration on my bucket. Specifically, my last rule was <AllowedHeader>Authorization</AllowedHeader>
and it needed to be <AllowedHeader>*</AllowedHeader>.

danielkaczmarczyk commented Jan 25, 2018

I have the very same issue as @picwell-mgeiser. I have tried all the methods in this (and other) threads. I used an Allow-Control-Origin-* Chrome extension to bypass the requirement by enabling cross-origin resource sharing; it helps with the preflight authorization, but it doesn't fix the fact that I get the 301 result. The current error is: Response for preflight has invalid HTTP status code 301.


jayair commented Jan 26, 2018

@danielkaczmarczyk And does this happen just for file uploads? Or for other API calls as well?

danielkaczmarczyk commented Jan 27, 2018

@jayair Only on file uploads. I try to send a file; first I get a 200 GET from S3 with all CORS headers set correctly, and after that the OPTIONS call gets 301'd, the very same way @picwell-mgeiser described.

bharloe commented Jan 27, 2018

For what it's worth, the push to S3 still works fine for my app, which I haven't touched in months.


jayair commented Jan 28, 2018

@danielkaczmarczyk Can you go into the IAM section of your AWS Console and find the role that is being used? Here are some screenshots on how to do it.

(four screenshots: locating the role in the IAM console)

danielkaczmarczyk commented Jan 28, 2018

Here's my IAM policy for Cognito:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "mobileanalytics:PutEvents",
                "cognito-sync:*",
                "cognito-identity:*"
            ],
            "Resource": [
                "*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:*"
            ],
            "Resource": [
                "arn:aws:s3:::notes-app-uploads-dan/${cognito-identity.amazonaws.com:sub}*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "execute-api:Invoke"
            ],
            "Resource": [
                "arn:aws:execute-api:us-east-2:*:lg7wxu2y8b/*"
            ]
        }
    ]
}

jayair commented Jan 29, 2018

@danielkaczmarczyk That looks good. Can you also check the permissions for your S3 bucket?

(screenshot)

danielkaczmarczyk commented Jan 29, 2018

here:

<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedMethod>PUT</AllowedMethod>
    <AllowedMethod>POST</AllowedMethod>
    <AllowedMethod>HEAD</AllowedMethod>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
    <AllowedHeader>*</AllowedHeader>
</CORSRule>
</CORSConfiguration>


jayair commented Jan 29, 2018

@danielkaczmarczyk Yeah that looks okay too. I can try your repo out and see what I can find. Can you publish it?

danielkaczmarczyk commented Jan 29, 2018

Many thanks for the effort, @jayair! Here's the backend and here's the client, should you need it.


jayair commented Jan 29, 2018

@danielkaczmarczyk So I tried it out. I noticed that your bucket is in us-east-2. This isn't a problem but it needs a slight tweak to the code before we do the upload.

  const s3 = new S3({
    region: 'us-east-2',
    params: {
      Bucket: config.s3.BUCKET
    }
  });

I created a new S3 bucket in us-east-2 using the tutorial instructions. And used your code base and my credentials and the upload worked. But it still didn't work on your bucket.

I cannot check what the settings are on your bucket but can you try re-creating the bucket (and editing the IAM policy for Cognito with the new bucket)?

danielkaczmarczyk commented Jan 30, 2018

I have created a new bucket in us-east-2 (Ohio), amended config.js with the new value for BUCKET, and amended the IAM policy for Cognito to make sure I gave it access to write to this particular bucket, and I get a 403 Bucket not found, as if the authentication didn't work. No more error details are present.

danielkaczmarczyk commented Jan 30, 2018

OK, I have gone back and forth with different configurations and it has eventually worked. I'm not sure what exactly made the difference; upon inspecting the details, everything seems to be as it was during previous 'correct' versions. I blame human error — I may have mistyped something. The only takeaway I'm sure of is that if the bucket was in us-east-1 (N. Virginia), it would always 301.


jayair commented Jan 31, 2018

@danielkaczmarczyk Glad you figured it out. These can be tricky to debug!

mochfauz commented Apr 6, 2018

Does anyone know how to make our uploaded images publicly readable?


jayair commented Apr 6, 2018

@mochfauz There is a section in the Amplify docs that might help with this: https://aws.github.io/aws-amplify/media/storage_guide#s3image

mochfauz commented Apr 7, 2018

Thanks man

viccooper142 commented Apr 8, 2018

When I replace the handleSubmit method, I get "Line 7: 'API' is defined but never used no-unused-vars".
Also, when I try to create a note with an attachment I get an error: TypeError: this.createNote is not a function.

I fixed both of these errors by using the files from your repository. Seems like there is some sort of mismatch between the guide and the repositories?


jayair commented Apr 9, 2018

@viccooper142 I think you might have missed a step in the previous chapter - https://serverless-stack.com/chapters/call-the-create-api.html

There we add the createNote method, which in turn uses the API module. That will resolve both of your issues.
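For reference, the submit flow being discussed can be sketched like this (a hedged sketch with injected stand-ins for s3Upload and createNote; the function names follow the tutorial, but the wiring and stubs here are illustrative, not the tutorial's exact code):

```javascript
// Upload the attachment first (if any), then create the note with the
// returned key/URL. Dependencies are injected so the flow is testable.
async function submitNote({ content, file }, { s3Upload, createNote }) {
  const attachment = file ? await s3Upload(file) : null;
  return createNote({ content, attachment });
}

// Usage with stubs (made-up return shapes, just to show the data flow):
const stubs = {
  s3Upload: async (file) => `uploads/${file.name}`,
  createNote: async (note) => ({ ...note, noteId: 'stub-id' }),
};

submitNote({ content: 'hello', file: { name: 'a.txt' } }, stubs).then((n) =>
  console.log(n.attachment) // prints "uploads/a.txt"
);
```

If createNote isn't defined on the component (the missed step above), the second half of this flow is exactly what throws `TypeError: this.createNote is not a function`.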

19bharatvikram commented May 5, 2018

Hi Jay,

I have a <UnauthenticatedRoute ...> page where I want to display some public image from AWS S3 using <S3Image imgKey={temp.attachment}/> where S3Image component is from 'aws-amplify-react'

I get the error: cannot get guest credentials when mandatory signin enabled.

To solve it I set mandatorySignIn: false in my Amplify.configure settings in index.js, but I am still getting the error below.

StorageClass - ensure credentials error: NotAuthorizedException: Unauthenticated access is not supported for this identity pool.

Any insights will be appreciated. Please suggest if there is a way to load images for non authenticated users.

Thanks,
Bharat Chand


jayair commented May 7, 2018

@19bharatvikram Yeah this setup is a bit different. You need to create an Unauth role for your Identity Pool to specify the resources unauthenticated users have access to.
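For illustration, an Unauth role's policy for read-only access to uploaded files typically looks something like the sketch below (the bucket name is a placeholder and this is an illustrative policy, not the tutorial's exact one):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": ["arn:aws:s3:::YOUR_S3_UPLOADS_BUCKET_NAME/*"]
    }
  ]
}
```

Note it grants only s3:GetObject, so unauthenticated visitors can read images but cannot upload or delete anything.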

19bharatvikram commented May 9, 2018

Thanks Jay. Enabling access to unauthenticated identities and giving read-only S3 permission to the Unauthenticated role solved the issue.

(screenshot)

@jayair jayair closed this May 9, 2018

@jayair jayair reopened this May 9, 2018

@jayair jayair closed this May 9, 2018
