
Download from s3 is interrupted if client downloads data at low speed #2541

Closed
aoberest opened this issue Dec 29, 2021 · 5 comments

@aoberest

Hello,

If the filer and s3 are running on the same host, downloading a file may be interrupted when the client downloads at a very low speed (curl --limit-rate 100K).

My test via Docker:

I brought up SeaweedFS via docker-compose, created and uploaded a file, then ran curl to download it, and the transfer was interrupted again.

Server: SeaweedFS Filer 30GB 2.83

curl error: curl: (18) transfer closed with 396369981 bytes remaining to read

check.sh

#!/bin/bash

DOCKER_HOST="127.0.0.1"
FILER_URL="http://${DOCKER_HOST}:8888"
S3_URL="http://${DOCKER_HOST}:8333"
BUCKET=test
FILE_NAME=$(date "+%Y-%m-%d_%H-%M-%S").bin
DOWNLOAD_URL=${S3_URL}/${BUCKET}/${FILE_NAME}
ERRORS=errors.log

function prepareFile() {
    dd if=/dev/zero of="${FILE_NAME}" bs=100M count=1

    curl -v \
      -F "file=@${FILE_NAME}" \
      "${FILER_URL}/buckets/${BUCKET}/"

    rm "${FILE_NAME}"
}

function dl() {
    curl -v "$DOWNLOAD_URL" \
        --limit-rate 100K \
        --http1.0 \
        --insecure \
        --output /dev/null \
        -H "Connection: close" \
        -w "%{time_total},%{size_download},%{speed_download}\n" \
        || echo "Exit code for request $1 is $?" >> "$ERRORS"
}


echo "Failed requests:" > $ERRORS


prepareFile
dl 1 &
dl 2 &
dl 3 &
dl 4 &
dl 5 &
dl 6 &
dl 7 &
dl 8 &
dl 9 &
dl 10 &

wait

docker-compose.yml

version: '2'

services:
  master:
    image: chrislusf/seaweedfs # use a remote image
    ports:
      - 9333:9333
      - 19333:19333
    command: "master -ip=master"
  volume:
    image: chrislusf/seaweedfs # use a remote image
    ports:
      - 8080:8080
      - 18080:18080
      - 9325:9325
    command: 'volume -mserver="master:9333" -port=8080  -metricsPort=9325'
    depends_on:
      - master
  filer:
    image: chrislusf/seaweedfs # use a remote image
    ports:
      - 8888:8888
      - 18888:18888
      - 9326:9326
    command: 'filer -master="master:9333"  -metricsPort=9326'
    tty: true
    stdin_open: true
    depends_on:
      - master
      - volume
  s3:
    image: chrislusf/seaweedfs # use a remote image
    ports:
      - 8333:8333
      - 9327:9327
    command: 's3 -filer="filer:8888" -metricsPort=9327'
    depends_on:
      - master
      - volume
      - filer

@chrislusf
Please try similar steps on your own. Are you downloading the file using curl?

Originally posted by @AlekseyFicht in #2538 (comment)

@aoberest aoberest changed the title Download from s3 is interrupted if client downloads data at low speed <100kb / s Download from s3 is interrupted if client downloads data at low speed Dec 29, 2021
@aoberest
Author

Additional Information:

We ran tests on our own servers, in the same cluster.

We split the filer and s3 services: we ran s3 on its own node and repeated the tests.

curl -> s3(host1) -> filer (host1) - check failed
curl -> s3(host2) -> filer (host2) - check failed
curl -> s3(host1) -> filer (host2) - ok
curl -> s3(host2) -> filer (host1) - ok
curl -> filer (host1) - ok
curl -> filer (host2) - ok

Maybe this will help.

@chrislusf
Collaborator

chrislusf commented Dec 30, 2021

[screenshot: Screen Shot 2021-12-29 at 6 10 19 PM]

I ran weed server -s3 on my Mac and it was fine. It should be the same as using a local filer and a local s3 server. Could you please try it out?

I can reproduce it with the docker-compose setup.

@aoberest
Author

Hello.

I ran weed server -s3 on my Ubuntu machine and it wasn't fine.
Please look at the output.

alex@alex-hd:~$ curl http://localhost:8333/test/virtio-win-0.1.185.iso --output /dev/null
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  393M  100  393M    0     0   786M      0 --:--:-- --:--:-- --:--:--  785M

alex@alex-hd:~$ curl --limit-rate 100K http://localhost:8333/test/virtio-win-0.1.185.iso --output /dev/null
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  2  393M    2 9757k    0     0    99k      0  1:07:10  0:01:37  1:05:33   99k
curl: (18) transfer closed with 402487340 bytes remaining to read

alex@alex-hd:~$ weed version
version 30GB 2.83 c935b9669e6b18a07c28939b1bd839552e7d2cf5 linux amd64

alex@alex-hd:~$ lsb_release -a
No LSB modules are available.
Distributor ID:	Ubuntu
Description:	Ubuntu 20.04.3 LTS
Release:	20.04
Codename:	focal

[screenshot: Screenshot from 2021-12-30 10-34-33]

curl -V
curl 7.68.0 (x86_64-pc-linux-gnu) libcurl/7.68.0 OpenSSL/1.1.1f zlib/1.2.11 brotli/1.0.7 libidn2/2.2.0 libpsl/0.21.0 (+libidn2/2.2.0) libssh/0.9.3/openssl/zlib nghttp2/1.40.0 librtmp/2.3
Release-Date: 2020-01-08
Protocols: dict file ftp ftps gopher http https imap imaps ldap ldaps pop3 pop3s rtmp rtsp scp sftp smb smbs smtp smtps telnet tftp 
Features: AsynchDNS brotli GSS-API HTTP2 HTTPS-proxy IDN IPv6 Kerberos Largefile libz NTLM NTLM_WB PSL SPNEGO SSL TLS-SRP UnixSockets

chrislusf added a commit that referenced this issue Dec 30, 2021
@chrislusf
Collaborator

Added a fix to increase the timeout setting.

chrislusf added a commit that referenced this issue Dec 30, 2021
@aoberest
Author

I ran the tests - it works well. Thank you. Happy New Year!
