
boto3 lib of python, s3 image upload using presigned url with content type #1149

Closed
ashishgupta2014 opened this issue Jun 22, 2017 · 24 comments

@ashishgupta2014

ashishgupta2014 commented Jun 22, 2017

s3_con = boto3.client(
    's3',aws_access_key_id='xxxxx', aws_secret_access_key='xxxxx',
    config=Config(signature_version='s3v4'), region_name=AWS_SETUP['S3']['region']
)
url = s3_con.generate_presigned_url(
    'put_object', Params={
        'Bucket':AWS_SETUP['S3']['bucket_name'], 
        'Key':key,'ContentType':'image/jpg'
    },
    ExpiresIn=AWS_SETUP['S3']['expiresInsecs'],
    HttpMethod='PUT'
)
print(url)

The code above is my Python code that generates the signed URL, but when I try to upload an image using that URL, AWS returns an error:
SignatureDoesNotMatch
If I remove ContentType from the code, I can upload the image, but the content type is then set to application/x-www-form-urlencoded; charset=UTF-8.

I have to set the content type to image/jpg or image/png, because the file is later passed to a third-party application that needs the content type set properly.

I am very new to AWS integration.

@kyleknap
Contributor

How are you trying to upload the image? Could you show a sample code snippet or command you ran to get the SignatureDoesNotMatch?

@kyleknap kyleknap added closing-soon This issue will automatically close in 4 days unless further comments are made. question labels Jun 22, 2017
@ashishgupta2014
Author

ashishgupta2014 commented Jun 23, 2017

<html>
<head>
<script src="http://code.jquery.com/jquery-latest.min.js"></script>
<script type="text/javascript">
// Remember to include jQuery somewhere.
$(document).ready(function(){

	prisigned_url=[REDACTED];
	$(function() {

	  $('#theForm').on('submit', sendFile);
	});

	function sendFile(e) {
	    e.preventDefault();

	    // get the reference to the actual file in the input
	    var theFormFile = $('#theFile').get()[0].files[0];

	    $.ajax({
	      type: 'PUT',
	      url:prisigned_url, //server will send presigned url to upload image expires in 3600
	      // Content type must match the parameter you signed your URL with
	      //contentType: 'binary/octet-stream',
	      // this flag is important, if not set, it will try to send data as a form
	      //ContentType: 'image/jpg',
	      processData: false,
	      // the actual file is sent raw
	      data: theFormFile
	    })
	    .success(function(file,response) {
	    	console.log("file=>",file);
	    	console.log("response=>",response);

	      alert('File uploaded');
	    })
	    
	    .error(function() {
	      alert('File NOT uploaded');
	      console.log( arguments);
	    });

	    return false;
  
	  }
});
</script>
</head>
<body>
<form id="theForm" method="POST" enctype="multipart/form-data" >
    <input id="theFile" name="file" type="file"/> 
    <button id="theButton" type="submit">send 1</button>
</form>
</body>
</html>

@ashishgupta2014
Author

The code above is my JavaScript AJAX call and HTML, where I am using the presigned URL.

@stealthycoin
Contributor

You need to make sure the bucket's CORS configuration is set to accept the Content-Type header.

When you make the PUT request, AJAX first makes a preflight OPTIONS request to check that the request it is about to make is allowed. S3 checks the preflight headers against that bucket's CORS configuration to ensure everything is allowed. Since the Content-Type header is not in the default list of allowed headers, the preflight request fails and AJAX never makes the PUT request.

Here is a cors config document that works for me with your script:

<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedMethod>PUT</AllowedMethod>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
    <ExposeHeader>GET</ExposeHeader>
    <ExposeHeader>PUT</ExposeHeader>
    <AllowedHeader>Authorization</AllowedHeader>
    <AllowedHeader>Content-Type</AllowedHeader>
</CORSRule>
</CORSConfiguration>

All I did was add that last <AllowedHeader>Content-Type</AllowedHeader> line to the default one.
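For anyone making the same change from code rather than the console: the following is a sketch (the bucket name is a placeholder, and valid credentials are assumed) of the equivalent rules applied via boto3's put_bucket_cors instead of editing the XML document by hand.

```python
# Sketch: the CORS rules above expressed as a boto3 put_bucket_cors call.
CORS_CONFIGURATION = {
    'CORSRules': [{
        'AllowedOrigins': ['*'],
        'AllowedMethods': ['GET', 'PUT'],
        # Content-Type must be listed here or the preflight request fails.
        'AllowedHeaders': ['Authorization', 'Content-Type'],
        'MaxAgeSeconds': 3000,
    }]
}

def apply_cors(bucket_name):
    """Apply the rules to a bucket (performs a real API call)."""
    import boto3  # imported here so the dict above is usable without boto3
    s3 = boto3.client('s3')
    s3.put_bucket_cors(Bucket=bucket_name,
                       CORSConfiguration=CORS_CONFIGURATION)
```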

That config worked for the following two scripts to presign the url and upload the image:

Generate the presigned URL:

import boto3
import botocore


s3_con = boto3.client('s3')
url=s3_con.generate_presigned_url('put_object',
                                  Params={'Bucket': 'bucket_name',
                                          'Key':'img.jpg',
                                          'ContentType': 'image/jpg'
                                  },
                                  ExpiresIn=600)
print(url)

Ajax call to upload:

presigned_url = "...";

function sendFile(e) {
    e.preventDefault();

    // get the reference to the actual file in the input
    var theFormFile = $('#theFile').get()[0].files[0];

    $.ajax({
      type: 'PUT',
      url:presigned_url, 
      contentType: 'image/jpg',
      processData: false,
      data: theFormFile
    }).success(function(file,response) {
      console.log("file=>",file);
      console.log("response=>",response);

      alert('File uploaded');
    }).error(function() {
      alert('File NOT uploaded');
      console.log( arguments);
    });
  }

@ashishgupta2014
Author

ashishgupta2014 commented Jun 28, 2017 via email

@ashishgupta2014
Author

ashishgupta2014 commented Jun 30, 2017 via email

@stealthycoin
Contributor

I don't know what the particular issue with your code is. I would suggest opening a question on Stack Overflow about it. We don't have the bandwidth here to handle general support questions.

@stealthycoin stealthycoin added move-to-stackoverflow and removed closing-soon This issue will automatically close in 4 days unless further comments are made. labels Aug 15, 2017
@kcstewart

You need to make sure the file is already uploaded before you ask for the presigned url.

@ashishgupta2014
Author

ashishgupta2014 commented Mar 26, 2018 via email

@Owen045

Owen045 commented Aug 29, 2018

Hi @ashishgupta2014, I am experiencing exactly the same issue as you. Is there any chance you could elaborate on how you fixed it? Thanks :)

@joshkpeterson

joshkpeterson commented Aug 29, 2018

@Owen045 I came to this thread with the same issue and fixed it for myself by setting up the boto3 client similarly to what the OP had above:

s3 = boto3.client('s3', region_name=settings.AWS_S3_REGION_NAME)

boto3 will pick up the other settings from environment variables, but not the signature version or region name; in my case those live in Django settings.

I spent 12 hours finding this fix. To make this thread more useful for people troubleshooting similar issues: is there a way to confirm that boto3 is sending everything you expect? I know you can use the following to see the exact request being sent to Amazon, but the logs still don't state explicitly which region it is redirecting to.
boto3.set_stream_logger(name='botocore')
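One low-tech way to answer that question without wading through botocore logs: the presigned URL itself records what was signed in its query string, so parsing it shows which headers the upload must match and which region the credential was scoped to. A stdlib-only sketch (the URL below is made up, shaped like the ones boto3 emits):

```python
from urllib.parse import urlparse, parse_qs

def signed_params(presigned_url):
    """Return the X-Amz-* query parameters baked into a presigned URL."""
    query = parse_qs(urlparse(presigned_url).query)
    return {k: v[0] for k, v in query.items() if k.startswith('X-Amz-')}

# Made-up URL with the same shape boto3 produces:
url = ('https://bucket.s3.amazonaws.com/img.jpg'
       '?X-Amz-Algorithm=AWS4-HMAC-SHA256'
       '&X-Amz-Credential=AKIA...%2F20180830%2Feu-west-2%2Fs3%2Faws4_request'
       '&X-Amz-SignedHeaders=content-type%3Bhost'
       '&X-Amz-Expires=500')

# X-Amz-SignedHeaders lists the headers the upload request must send with
# exactly the signed values, and X-Amz-Credential embeds the signing region.
print(signed_params(url)['X-Amz-SignedHeaders'])  # content-type;host
```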

@Owen045

Owen045 commented Aug 30, 2018

@joshkpeterson Thanks for the help Josh, although I had already been hardcoding the region for testing purposes.

I am still getting the following error:

SignatureDoesNotMatch: The request signature we calculated does not match the signature you provided. Check your key and signing method.

Interestingly, the canonical request still reads as a GET request:

GET
/005.jpg
X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAJB7L5VWCPYFXDA4A%2F20180830%2Feu-west-2%2Fs3%2Faws4_request&X-Amz-Date=20180830T090828Z&X-Amz-Expires=500&X-Amz-SignedHeaders=content-type%3Bhost
content-type:
host:imgictesting-localtesting.s3.amazonaws.com

content-type;host
UNSIGNED-PAYLOAD

My code is as follows:

class GetS3SignedUrl(View):
    """Generate a signed URL for S3."""

    @csrf_exempt
    def get(self, request):
        bucket = 'imgictesting-localtesting'
        s3 = boto3.client('s3', region_name='eu-west-2')
        file_name = str(request.GET.get('file_name'))
        print(file_name)
        url = s3.generate_presigned_url(
            'put_object',
            Params={'Bucket': bucket, 'Key': file_name, 'ContentType': 'image/jpg'},
            ExpiresIn=500, HttpMethod='PUT'
        )
        out_url = 'https://%s.s3.amazonaws.com/%s' % (bucket, file_name)
        return JsonResponse({'signed_request': url, 'url': out_url, 's3_key': file_name, 'status': 'ok'})

JS/AJAX Function:

$.ajax({
    type: 'PUT',
    url: url,
    processData: false,
    data: theFile,
    contentType: 'image/jpg',
    success: function(file,response) {
        alert('File uploaded');
        console.log('file=>', file);
        console.log('response=>', response);
    },
    error: function() {
        alert('File NOT uploaded');
        console.log(arguments);
        console.log(arguments[0]);
        console.log(arguments[1]);
    }
});

Any ideas would be much appreciated, Thanks!

@claweyenuk

claweyenuk commented Sep 6, 2018

This one took me a while to figure out. It turned out the issue was that I was passing the wrong region to the S3 client call on the server. I didn't dig into why this fixed it; my bucket is encrypted, while a test bucket I first used was unencrypted and worked with the original region.

I tried all the different tricks (creating a dummy file, trying different content types, etc.), but fixing the region is what did it for me. Before that I got the vague message "The bucket you are attempting to access must be addressed using the specified endpoint".

In hopes of helping others, this is what I had:

Server (note: my AWS credentials are in environment variables):

        client = boto3.client('s3', region_name=AWS_REGION,
                              config=botocore.client.Config(signature_version='s3v4'))
        return client.generate_presigned_url('get_object', Params = {'Bucket': bucket, 'Key': path})

Bucket CORS config:

<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedMethod>PUT</AllowedMethod>
    <AllowedMethod>HEAD</AllowedMethod>
    <MaxAgeSeconds>3600</MaxAgeSeconds>
    <ExposeHeader>ETag</ExposeHeader>
    <AllowedHeader>*</AllowedHeader>
</CORSRule>
</CORSConfiguration>

Client:

  function uploadToS3(file, url) {
	//The fun begins, first we get a token to upload to S3
	var xhr = new XMLHttpRequest();
	xhr.open("PUT", url, true);
	xhr.onload = function() {
		if (xhr.readyState == 4) {
			if (xhr.status == 200) {
				//nothing to do yet
			} else if (xhr.status >= 400) {
				loadErrorPage(xhr.responseText);
			}
		}
	}
	xhr.send(file);
  }

@eguven

eguven commented Sep 7, 2018

After looking through all the related comments, I got this working by setting the correct region_name and endpoint_url according to the AWS Regions and Endpoints documentation:

s3_client = session.client(
    's3', region_name='eu-central-1', endpoint_url='https://s3.eu-central-1.amazonaws.com'
)
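A tiny helper (hypothetical, not from this thread's code) can keep the two arguments from disagreeing, since for the standard AWS partition the endpoint is derived directly from the region name:

```python
def s3_endpoint(region):
    """Build the regional S3 endpoint URL for a standard AWS region."""
    return f'https://s3.{region}.amazonaws.com'

def make_s3_client(session, region):
    """Create an S3 client whose endpoint always matches its region."""
    return session.client('s3', region_name=region,
                          endpoint_url=s3_endpoint(region))
```

Note this assumes the standard aws partition; China and GovCloud regions use different domain suffixes.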

@lukasz-madon

I had a similar problem: I was getting a 403 error with no message. The cause was a missing HTTP header in Params.

  getSignedUrl = (file, callback) => {
    axios.get("/api/s3/sign-upload", {
      params: {
        objectName: file.name,
        contentType: file.type,
        dir: this.props.dir,
      }
    })
    .then(res => {
      callback(res.data);
    })
    .catch(error => {
      console.error(error);
    });
  }

              <ReactS3Uploader
                className={"todo"}
                getSignedUrl={this.getSignedUrl}
                accept="image/*"
                onProgress={this.onProgress}
                onError={this.onError}
                onFinish={this.onFinish}
                uploadRequestHeaders={{
                  "x-amz-acl": "public-read"
                }}
                contentDisposition="auto"
              />

Server:

import boto3
from flask import current_app, Blueprint, request, jsonify
import os


FIVE_MINUTES = 5 * 60
s3_blueprint = Blueprint("s3", __name__)


@s3_blueprint.route("/sign-upload")
def sign_s3_upload():
    s3 = boto3.client(
        "s3",
        aws_access_key_id=current_app.config["AWS_ACCESS_KEY_ID"],
        aws_secret_access_key=current_app.config["AWS_SECRET_ACCESS_KEY"],
    )
    object_name = request.args.get("objectName")
    dir = request.args.get("dir")
    content_type = request.args.get("contentType")

    url = s3.generate_presigned_url(
        ClientMethod="put_object",
        Params={
            "Bucket": current_app.config["AWS_USER_UPLOAD_BUCKET"],
            "Key": f"{dir}/{object_name}",
            "ContentType": content_type,
            "ACL": "public-read",
        },
        ExpiresIn=FIVE_MINUTES,
    )
    return jsonify({"signedUrl": url})

@brydavis

For those who stumble across this post, I solved something similar (after much struggle) over here.

https://stackoverflow.com/questions/57932243/javascript-not-working-when-uploading-file-to-aws-s3-via-presigned-url/57932593#57932593

@dino-cell

I managed to generate a pre-signed URL, but I am getting a SignatureDoesNotMatch error message.
This is my code; I am new to Python:

import boto3
import botocore

aws_access_key_id = 'xxxxxx'
aws_secret_access_key = 'xxxxx'

s3_con = boto3.client('s3', 'us-west-2')
url = s3_con.generate_presigned_url(
    'put_object',
    Params={'Bucket': 'xxxx',
            'Key': 'xxx',
            'ContentType': 'image/jpg'},
    ExpiresIn=3600)
print(url)

@dino-cell

Can anybody help, please?

@eguven

eguven commented Jan 22, 2020

@dino-cell #1149 (comment) Try adding endpoint_url to client:

s3_con = boto3.client(
    's3', region_name='us-west-2', endpoint_url='https://s3.us-west-2.amazonaws.com',
)

@dino-cell

Hi eguven, thank you for responding, but I still get:

SignatureDoesNotMatch
The request signature we calculated does not match the signature you provided. Check your key and signing method.


@StefanTheWiz

StefanTheWiz commented Jan 24, 2021

(after 3+ hours of debugging and almost smashing the keyboard...)

In the error response, S3 tells you which header is missing:

<Error>
    <Code>SignatureDoesNotMatch</Code>
    <Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message>
    <!-- .... -->
    <CanonicalRequest>PUT (your pre-signed url)
    content-type:image/jpeg
    host:s3.eu-west-2.amazonaws.com
    x-amz-acl:

    content-type;host;x-amz-acl
    UNSIGNED-PAYLOAD</CanonicalRequest>

I needed an x-amz-acl header matching the ACL set when generating the pre-signed URL:

def python_presign_url():
    return s3.generate_presigned_url('put_object', Params={
        'Bucket': bucket_name,
        'Key': filename,
        'ContentType': type,
        'ACL':'public-read' # your x-amz-acl
    })

curl -X PUT \
    -H "content-type: image/jpeg" \
    -H "Host: s3.eu-west-2.amazonaws.com" \
    -H "x-amz-acl: public-read" \
    -d @/path/to/upload/file.jpg "$PRE_SIGNED_URL"
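The same pairing can be expressed in Python. A sketch (the function names are mine, and the upload helper assumes the third-party requests library) that mirrors the curl headers above:

```python
def upload_headers(content_type, acl=None):
    """Headers that must match exactly what the URL was presigned with."""
    headers = {'Content-Type': content_type}
    if acl is not None:
        headers['x-amz-acl'] = acl  # only needed when 'ACL' was in Params
    return headers

def upload_via_presigned_url(presigned_url, path, content_type,
                             acl='public-read'):
    """PUT the file to S3 (performs a real HTTP request)."""
    import requests  # third-party: pip install requests
    with open(path, 'rb') as f:
        return requests.put(presigned_url, data=f,
                            headers=upload_headers(content_type, acl))
```

Any header listed in the error's SignedHeaders line has to appear here with the signed value, or S3 recomputes a different signature.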

@vijaykumar1356

After looking through all the related comments, how I got this working was by setting the correct region_name and endpoint_url according to AWS Regions and Endpoints

s3_client = session.client(
    's3', region_name='eu-central-1', endpoint_url='https://s3.eu-central-1.amazonaws.com'
)

Dude, you made my day. Thank you.

@Blacksuan19

Blacksuan19 commented Jul 22, 2022

s3_con = boto3.client(
    's3', region_name='us-west-2', endpoint_url='https://s3.us-west-2.amazonaws.com',
)

This works, but why? I fixed this same issue before by specifying signature v4, and another time it was fixed by specifying region_name only; it makes no sense.
