100 Continue behaviour is not correct #840

Open
fgalan opened this Issue Jun 26, 2012 · 7 comments

fgalan commented Jun 26, 2012

Hi,

I would like to report a problem with Boto: it does not seem to implement the 100-Continue HTTP mechanism correctly. In particular, this is what I see on the wire (captured with Wireshark) when I attempt to create an object in a bucket:

PUT /anarcardo.bucket5/examples/prueba3 HTTP/1.1
Host: s3.amazonaws.com
Accept-Encoding: identity
Content-Length: 20
Content-MD5: 9Os31TZ1KGziiXV7Zp9p3Q==
Expect: 100-Continue
Date: Thu, 21 Jun 2012 16:27:33 GMT
User-Agent: Boto/2.5.1 (linux2)
Content-Type: text/plain
Authorization: AWS AKIAJZ77MCZLJV2GR7XQ:X2eBTLMfADR4daPlYYOu+2sJegE=

Contenido de prueba
HTTP/1.1 100 Continue
HTTP/1.1 200 OK
x-amz-id-2: iXt2TQrQFJGrE5TklW2XDwKgEFl34oHcdhm0DvOeMHbipBvXLyKXgSTBGBAg7v3C
x-amz-request-id: 9C2353D5A7D138B3
Date: Thu, 21 Jun 2012 16:27:35 GMT
ETag: "f4eb37d53675286ce289757b669f69dd"
Content-Length: 0
Server: AmazonS3

In summary, although Boto includes the "Expect: 100-Continue" header in the request, it does not honour it: it starts sending the content of the request (i.e. the 20-byte string "Contenido de prueba") before it receives the "HTTP/1.1 100 Continue" from the server. I have also tested with large objects (around 4 MBytes) and the behaviour is the same.

I'm using Boto 2.5.1, creating the connection in the following way:

import boto.s3.connection
from boto.s3.connection import S3Connection

# Credentials are taken from the environment / boto config.
calling_format = boto.s3.connection.OrdinaryCallingFormat()
conn = S3Connection(is_secure=False, path="/", calling_format=calling_format)

and I create the object in the following way ('bucket' is an existing bucket obtained through the connection above):

key = bucket.new_key('examples/prueba3')
key.set_contents_from_filename('/boot/vmlinuz-2.6.32-220.el6.x86_64')

I'm not sure whether this is actually a bug in Boto or whether I'm missing something in the API (e.g. a parameter to one of the methods). Any help or information related to this problem would be very welcome.

Thanks!

Best regards,


Fermín

garnaat (Owner) commented Jul 2, 2012

This goes back a long, long time. See https://forums.aws.amazon.com/thread.jspa?messageID=72324&#72324 for one thread on the AWS forums.

Basically, a long time ago there seemed to be an issue related to uploading very large files to S3. Mysteriously, adding the "100 Continue" header seemed to solve the problem. There was never really any official confirmation from AWS but there was lots of anecdotal data that adding that header solved the problem. So, I added the header to boto.

Is this causing problems for you?

fgalan commented Jul 20, 2012

Hi!

Well, I'm not sure "problem" is the right word, but I think that if an HTTP client (and an S3 client is one) includes an Expect: 100-Continue header, it should honour that behaviour, as described in the HTTP 1.1 RFC. Note that other S3 clients (such as the Java SDK provided by Amazon) do actually wait for the 100 Continue response before sending the body of the message.

I mean, including the header just as a "quick fix", without implementing the functionality that its inclusion implies, is not the right way of doing things, IMHO.

Thanks!


Fermín

bjunix commented Aug 25, 2014

As far as I can tell there is a fix for this in botocore (boto/botocore@9e59c4e) that needs to be backported to the boto package. This commit is also referenced in ticket #2207.

@jclanoe added a commit to sminteractive/boto that referenced this issue Jan 8, 2015

@jclanoe Added a temporary fix to prevent errors when S3 returns a 100 Continue response

For more info: boto#840
59b0d66

nside commented Apr 24, 2015

Had the same issue today. Had to comment out that Expect header too.

@MattFaus added a commit to Khan/boto that referenced this issue Apr 28, 2015

@MattFaus Comment out 100-Continue expect header to hack image upload working again

On 4/24/2015, content creators reported that they could no longer upload images at /devadmin/content/items/new. The stack trace of the server error was:

```
S3ResponseError: 100 Continue
api/errors.py:76 in api_errors_formatted
api/auth/decorators.py:313 in wrapper
api/decorators.py:435 in jsonp_enabled
api/decorators.py:295 in wrapper
api/internal/assessment_items.py:941 in create_assessment_item_image
api/internal/assessment_items.py:923 in _upload_image_to_s3
third_party/boto/s3/key.py:1172 in set_contents_from_file
third_party/boto/s3/key.py:710 in send_file
third_party/boto/s3/key.py:882 in _send_file_internal
third_party/boto/s3/connection.py:543 in make_request
third_party/boto/connection.py:937 in make_request
third_party/boto/connection.py:837 in _mexe
third_party/boto/s3/key.py:839 in sender
```

Due to ancient mysteries of working with S3 uploads, boto decided to add this header even though it does not implement the full HTTP workflow that it implies. The full workflow was eventually added to the boto master branch (called botocore), but it has not been pushed into the boto package available for install, yet. Other people who have encountered this error state that commenting out this header relieves the issue, so that's what we're going to try.

Read more at these links.
Another report of the error and the workaround: boto#840
The "real" fix in botocore, that is not yet available in the install package: boto/botocore@9e59c4e

Auditors: marcos

Test Plan:
Deploy to production, then:

1. Go to /devadmin/content/items/new
2. Click "Add Image"
3. Click "Choose files"
4. Select a file to upload
5. Click "Add Image"

You should not see a "Failed to upload image :(" error message.
da8d0d4

@MattFaus added a second commit to Khan/boto that referenced this issue Apr 28, 2015, with the same commit message as above: d1f5185

I also encountered this issue. To fix it, I commented out this line: https://github.com/boto/boto/blob/develop/boto/s3/key.py#L943

headers['Expect'] = '100-Continue'

@jjmurre added a commit to jjmurre/boto that referenced this issue May 5, 2015

@jjmurre Commented out 100-Continue Expect header. cbd892f

@nunogt added a commit to nunogt/docker-registry that referenced this issue May 26, 2015

@nunogt Upgraded boto boto/boto#840 3272b14

@nunogt added a commit to nunogt/boto that referenced this issue May 26, 2015

@nunogt Patch as per boto#840 e2a755f

wernerb commented May 27, 2015

Also encountered a bug with timeouts, transfers not completing, and lots of TCP resets. Removing the 100-continue header fixes it.

@yofreke added a commit to weebygames/boto that referenced this issue Jun 1, 2015

@yofreke Workaround #840 152a7dc

@yofreke added a commit to weebygames/boto that referenced this issue Jun 1, 2015

@yofreke Workaround #840 712d107

soby commented Jul 1, 2015

+1 to the workaround, I also hit this

winks referenced this issue in tbarbugli/cassandra_snapshotter Sep 25, 2015

Closed

Strange boto error on `upload_node_backups` #54

@yofreke added a commit to weebygames/boto that referenced this issue Sep 28, 2015

@yofreke Workaround #840 90a6af6

thehesiod referenced this issue in aio-libs/aiobotocore Feb 15, 2016

Closed

variety of fixes #28
