
uploading file to s3 results in error 104 (connection reset) #2207

Open · mjpan opened this issue Apr 8, 2014 · 108 comments

Comments
@mjpan

mjpan commented Apr 8, 2014

This is on boto 2.27.0. Uploading a 14-byte file works, but a 512 KB file causes this error.
Uploading the same 512 KB file with the AWS CLI works.

File "/usr/local/lib/python2.7/dist-packages/boto-2.27.0-py2.7.egg/boto/s3/key.py", line 1315, in set_contents_from_filename
encrypt_key=encrypt_key)
File "/usr/local/lib/python2.7/dist-packages/boto-2.27.0-py2.7.egg/boto/s3/key.py", line 1246, in set_contents_from_file
chunked_transfer=chunked_transfer, size=size)
File "/usr/local/lib/python2.7/dist-packages/boto-2.27.0-py2.7.egg/boto/s3/key.py", line 725, in send_file
chunked_transfer=chunked_transfer, size=size)
File "/usr/local/lib/python2.7/dist-packages/boto-2.27.0-py2.7.egg/boto/s3/key.py", line 914, in _send_file_internal
query_args=query_args
File "/usr/local/lib/python2.7/dist-packages/boto-2.27.0-py2.7.egg/boto/s3/connection.py", line 633, in make_request
retry_handler=retry_handler
File "/usr/local/lib/python2.7/dist-packages/boto-2.27.0-py2.7.egg/boto/connection.py", line 1046, in make_request
retry_handler=retry_handler)
File "/usr/local/lib/python2.7/dist-packages/boto-2.27.0-py2.7.egg/boto/connection.py", line 919, in _mexe
request.body, request.headers)
File "/usr/local/lib/python2.7/dist-packages/boto-2.27.0-py2.7.egg/boto/s3/key.py", line 815, in sender
http_conn.send(chunk)
File "/usr/lib/python2.7/httplib.py", line 794, in send
self.sock.sendall(data)
File "/usr/lib/python2.7/ssl.py", line 229, in sendall
v = self.send(data[count:])
File "/usr/lib/python2.7/ssl.py", line 198, in send
v = self._sslobj.write(data)
error: [Errno 104] Connection reset by peer

@fbeister

fbeister commented May 8, 2014

Just wanted to say "me too". I use boto through duplicity to access S3. On another machine I have boto 2.2.2, and it works there. After downgrading boto from 2.27 to 2.2.2 the error persists, so I suspect the problem lies either in ssl.py or the OS (Ubuntu Linux 14.04).

@fbeister

fbeister commented May 8, 2014

Found a solution (at least for duplicity):
aws/aws-cli#634

instead of "s3+http://bucketname" use "s3:///s3.amazonaws.com/bucketname"

@danielgtaylor
Member

From the CLI issue:

I'm reopening this issue. After debugging this I can confirm what others have said. This problem exists when trying to upload a large file to a newly created bucket that's not in the classic region.

From what I can tell the CLI is properly retrying requests and following 307 redirects. The problem, however, is that the CLI sends the entire request and then waits for a response. However, S3 will immediately send the 307 response before we've finished sending the body. Eventually it will just close the connection, and if we're still in the process of streaming the body, we will not see the response. Instead we get the ConnectionError as shown in the various debug logs above.

The normal way to address this would be to use the Expect: 100-continue header. However, the HTTP client we use (requests) does not support it. There might be a way to work around this, but I'll need to do some digging into the requests library to find the best way to fix this issue.

This will require adding some support for currently unsupported HTTP headers.
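For illustration, here is a rough, self-contained sketch (not boto's or botocore's actual code) of the Expect: 100-continue handshake described above, done over a raw TLS socket. `parse_status` and `put_with_expect` are hypothetical names, and real code would need proper response parsing and redirect handling:

```python
import select
import socket
import ssl

def parse_status(status_line):
    # b"HTTP/1.1 100 Continue" -> 100
    return int(status_line.split()[1])

def put_with_expect(host, path, body, timeout=5.0):
    """Send the headers with Expect: 100-continue, and only stream the
    body once the server answers 100. An early 307 (or any other
    status) is returned to the caller instead of being lost mid-upload."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, 443), timeout) as raw:
        with ctx.wrap_socket(raw, server_hostname=host) as s:
            head = ("PUT {path} HTTP/1.1\r\n"
                    "Host: {host}\r\n"
                    "Content-Length: {length}\r\n"
                    "Expect: 100-continue\r\n"
                    "\r\n").format(path=path, host=host, length=len(body))
            s.sendall(head.encode("ascii"))
            # Wait briefly for an interim response before sending the body.
            ready, _, _ = select.select([s], [], [], timeout)
            if ready:
                status = parse_status(s.recv(4096).split(b"\r\n", 1)[0])
                if status != 100:
                    return status  # e.g. 307: caller should follow the redirect
            s.sendall(body)  # server said 100 Continue (or stayed silent)
            return parse_status(s.recv(4096).split(b"\r\n", 1)[0])
```

The key point is that the body is only sent after giving the server a chance to respond, which is exactly the window in which S3 emits its 307 redirect.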

@avandendorpe

I think the fix for awscli addresses this, as it was merged into botocore. @jamesls, is that correct?

@danielgtaylor
Member

The fix for Botocore was in boto/botocore@9e59c4e and seems to have solved the issue.

We need to backport it to Boto. I'll see about trying to get it into the next release.

@j0hnsmith

I'm seeing a lot of these errors; please backport the fix ASAP.

@unwitting

@danielgtaylor any update on getting it ported back into boto?

@jvantuyl

+1

@anna-buttfield-sirca

This issue hit us too. The workaround that worked for us was explicitly connecting to the bucket's region. Because we can't guarantee which region buckets are in, we use this code:

import boto
import boto.s3

# Start with a default connection, then reconnect to the bucket's
# actual region if S3 reports a location constraint for it.
conn = boto.connect_s3()
bucket = conn.get_bucket(bucket_name)
bucket_location = bucket.get_location()
if bucket_location:
    conn = boto.s3.connect_to_region(bucket_location)
    bucket = conn.get_bucket(bucket_name)

@charles-vdulac

+1

4 similar comments
@gpg90

gpg90 commented Oct 29, 2014

+1

@whitequark

+1

@dwijnand

👍

@eberle1080

+1

@wengole

wengole commented Nov 21, 2014

This doesn't quite work as-is, because regions and location constraints have different names; however, the concept (connecting to the correct region) does help.
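One way to bridge that naming gap in the region workaround is a small translation table. A minimal sketch, covering the two mismatches I'm aware of (an empty location constraint for the classic region, and the legacy "EU" name for Ireland); `LOCATION_TO_REGION` and `region_for` are illustrative names:

```python
# get_location() returns an S3 "location constraint", which is not
# always a valid region name for boto.s3.connect_to_region().
LOCATION_TO_REGION = {
    "": "us-east-1",    # classic region reports an empty constraint
    "EU": "eu-west-1",  # legacy constraint name for Ireland
}

def region_for(location_constraint):
    # Fall back to the constraint itself, which usually matches the region.
    return LOCATION_TO_REGION.get(location_constraint, location_constraint)
```

Any further mismatches would need adding to the table by hand.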

@fangpenlin

+1

2 similar comments
@volnt

volnt commented Jan 5, 2015

+1

@hwkns

hwkns commented Jan 9, 2015

+1

@antonagestam

Come on ppl, commenting +1 isn't helping anyone.

@hwkns

hwkns commented Jan 10, 2015

@antonagestam, my intent was to help provide a sense of the number of people affected by this bug, which should hopefully lend it some priority. If there's something more I can do to help, please let me know. This is a sponsored project now, and this issue has been around for at least 9 months already, affecting quite a few users. I'm just trying not to let it languish.

@hellwolf

+1

1 similar comment
@ledmonster

+1

@spikewilliams

I am hitting this bug. Just as someone referenced above, I am also on Ubuntu Linux 14.04.

@slavpetroff

Hello guys! The solution for me was, first, to change the S3 location from Frankfurt to London, and then to add AWS_S3_HOST = 's3-eu-west-2.amazonaws.com' to the settings file. This solved my problem. Hope this helps!

@dvl

dvl commented Mar 27, 2017

@ilmesi solution from 2 years ago still works on the latest boto version... so next time I find this issue on Google I'll know what to do.

adrpar pushed a commit to adrpar/incubator-airflow that referenced this issue Apr 25, 2017
This fixes an issue with the S3 hook when uploading local files to a
S3 bucket that does not reside in the standard US location (e.g.
eu-west-1).

According to

boto/boto#2207

this is an issue in the boto library and does not seem to get fixed
anytime soon.

A work around is proposed in the boto issue and is implemented here.
adrpar pushed a commit to adrpar/incubator-airflow that referenced this issue May 18, 2017
erinzm pushed a commit to erinzm/NEXT that referenced this issue Jun 12, 2017
@jcampbell05

:) It's 2017 and this still needed to be used as a workaround.

@karthikvadla

@jcampbell05 I see the same error:

error: [Errno 104] Connection reset by peer

I'm trying to upload files from my local machine to S3 using boto, calling set_contents_from_filename(filename). It fails for files larger than 5 GB.

Did you use upload_file from boto3? What workaround did you use?

@jcampbell05

Rolling back my entire server to an older OS and version of boto. Not really a viable long-term solution.

@shanx

shanx commented Jul 19, 2017

If you want to transfer files larger than 5 GB you need to use multipart uploading (unfortunately boto2 doesn't abstract this away from us). I've done an implementation using the example from http://boto.cloudhackers.com/en/latest/s3_tut.html#storing-large-data in the bakthat project (a project similar to duplicity); see my pull request tsileo/bakthat#84.
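A condensed sketch of that multipart pattern, following the boto tutorial linked above (this is not bakthat's actual code; `multipart_upload` and `part_count` are illustrative names, and the caller supplies a boto2 Bucket object):

```python
import math
import os

def part_count(total_size, part_size):
    # Number of parts needed to cover total_size bytes.
    return int(math.ceil(total_size / float(part_size)))

def multipart_upload(bucket, key_name, path, part_size=50 * 1024 * 1024):
    # `bucket` is a boto2 Bucket; note S3 requires every part except
    # the last to be at least 5 MB.
    mp = bucket.initiate_multipart_upload(key_name)
    try:
        size = os.stat(path).st_size
        with open(path, "rb") as fp:
            for i in range(part_count(size, part_size)):
                mp.upload_part_from_file(
                    fp,
                    part_num=i + 1,  # part numbers are 1-based
                    size=min(part_size, size - i * part_size),
                )
        mp.complete_upload()
    except Exception:
        mp.cancel_upload()  # avoid paying for orphaned parts
        raise
```

Cancelling on failure matters: incomplete multipart uploads are billed until they are aborted.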

@karthikvadla

@jcampbell05 @shanx: thank you for your responses. I migrated from boto 2.x to boto3, which has built-in support for multipart file upload via its upload_file feature. I followed
http://boto3.readthedocs.io/en/latest/reference/services/s3.html#S3.Client.upload_file .
This made my work simpler and easier, and I tested that it works fine. :)
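For reference, the boto3 call is essentially a one-liner. A minimal sketch, assuming boto3 is installed and credentials are configured; `upload_with_boto3` and the bucket/key names in the usage comment are placeholders:

```python
def upload_with_boto3(filename, bucket, key):
    # boto3's managed transfer switches to multipart automatically once
    # the file exceeds the configured threshold (8 MB by default).
    import boto3  # imported lazily so the sketch parses without boto3 installed
    boto3.client("s3").upload_file(filename, bucket, key)

# Usage (placeholder names):
# upload_with_boto3("backup.tar.gz", "my-bucket", "backups/backup.tar.gz")
```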

armbox added a commit to armbox/edx-platform that referenced this issue Nov 27, 2018
armbox added a commit to armbox/edx-video-pipeline that referenced this issue Nov 27, 2018
armbox added a commit to armbox/edx-video-worker that referenced this issue Nov 27, 2018
yorickpeterse pushed a commit to inko-lang/inko that referenced this issue Feb 19, 2019
We might be running into boto/boto#2207
when building releases, so let's see if this solves the problem.
@tambakoo

So is this resolved? I still cannot push a 1.9 GB file to S3 from an EC2 instance; I get ConnectionResetError(104, 'Connection reset by peer').

@jcampbell05

@tambakoo short answer: this isn't supported in boto2; upgrade to boto3 to get a method that supports large files :)

@kenny1323

On Ubuntu 18, install Python 3's pip, install the AWS CLI, and sync:

sudo apt-get install python3-pip
pip3 install awscli --upgrade --user
aws s3 sync ./ s3://mybucket01

https://linuxhint.com/install_aws_cli_ubuntu/

@StanSilas

@tambakoo How did you resolve it?

@jcampbell05

short answer is this isn't supported in boto2 upgrade to boto3 to get a method which supports large files :)

@StanSilas
See above 😀

@jiangwei-yu-pony

Amazing to find out the solution (setting the region name) in a 9-year old thread!
