[s3] post to filer http: read on closed response body #1609

Closed
kmlebedev opened this issue Nov 11, 2020 · 12 comments
@kmlebedev (Contributor) commented Nov 11, 2020

Describe the bug
The S3 gateway's request to the filer fails with `http: read on closed response body`:

s3_1         | E1111 06:44:18     1 s3api_object_handlers.go:317] post to filer: Put "http://filer:8888/buckets/registry/.uploads/8ed6ddca-e16f-476f-8fb4-57cc2ac3b268/0001.part?collection=registry": http: read on closed response body
s3_1         | I1111 06:44:18     1 s3api_handlers.go:89] status 500 application/xml: <?xml version="1.0" encoding="UTF-8"?>
s3_1         | <Error><Code>InternalError</Code><Message>We encountered an internal error, please try again.</Message><Resource>/registry/docker/registry/v2/blobs/sha256/75/75f829a71a1c5277a7abf55495ac8d16759691d980bf1d931795e5eb68a294c0/data</Resource><RequestId>1605077058120997200</RequestId></Error>
filer_1      | I1111 06:44:18     1 common.go:53] response method:PUT URL:/buckets/registry/.uploads/8ed6ddca-e16f-476f-8fb4-57cc2ac3b268/0001.part?collection=registry with httpStatus:500 and JSON:{"error":"read input: unexpected EOF"}

System Setup
SeaweedFS version 2.09
local-registry-compose.yml:

version: '2'

services:
  master:
    image: chrislusf/seaweedfs:local
    ports:
      - 9333:9333
      - 19333:19333
    command: "master -ip=master"
  volume:
    image: chrislusf/seaweedfs:local
    ports:
      - 8080:8080
      - 18080:18080
    command: "volume -mserver=master:9333 -port=8080 -ip=volume -max=0"
    depends_on:
      - master
  filer:
    image: chrislusf/seaweedfs:local
    ports:
      - 8888:8888
      - 18888:18888
    command: 'filer -master="master:9333"'
    depends_on:
      - master
      - volume
  s3:
    image: chrislusf/seaweedfs:local
    ports:
      - 8333:8333
    command: '-v 9 s3 -filer="filer:8888"'
    depends_on:
      - master
      - volume
      - filer
  minio:
    image: minio/minio
    ports:
      - 9000:9000
    command: 'minio server /data'
    environment:
      MINIO_ACCESS_KEY: "some_access_key1"
      MINIO_SECRET_KEY: "some_secret_key1"
    depends_on:
      - master
  registry1:
    image: registry:2
    environment:
      REGISTRY_HTTP_ADDR: "0.0.0.0:5001" # seaweedfs s3
      REGISTRY_LOG_LEVEL: "debug"
      REGISTRY_STORAGE: "s3"
      REGISTRY_STORAGE_S3_REGION: "us-east-1"
      REGISTRY_STORAGE_S3_REGIONENDPOINT: "http://s3:8333"
      REGISTRY_STORAGE_S3_BUCKET: "registry"
      REGISTRY_STORAGE_S3_ACCESSKEY: "some_access_key1"
      REGISTRY_STORAGE_S3_SECRETKEY: "some_secret_key1"
      REGISTRY_STORAGE_S3_V4AUTH: "true"
      REGISTRY_STORAGE_S3_SECURE: "false"
      REGISTRY_STORAGE_S3_SKIPVERIFY: "true"
      REGISTRY_STORAGE_S3_ROOTDIRECTORY: "/"
      REGISTRY_STORAGE_DELETE_ENABLED: "true"
      REGISTRY_STORAGE_REDIRECT_DISABLE: "true"
      REGISTRY_VALIDATION_DISABLED: "true"
    ports:
      - 5001:5001
    depends_on:
      - s3
      - minio
  registry2:
    image: registry:2
    environment:
      REGISTRY_HTTP_ADDR: "0.0.0.0:5002" # minio
      REGISTRY_LOG_LEVEL: "debug"
      REGISTRY_STORAGE: "s3"
      REGISTRY_STORAGE_S3_REGION: "us-east-1"
      REGISTRY_STORAGE_S3_REGIONENDPOINT: "http://minio:9000"
      REGISTRY_STORAGE_S3_BUCKET: "registry"
      REGISTRY_STORAGE_S3_ACCESSKEY: "some_access_key1"
      REGISTRY_STORAGE_S3_SECRETKEY: "some_secret_key1"
      REGISTRY_STORAGE_S3_V4AUTH: "true"
      REGISTRY_STORAGE_S3_SECURE: "false"
      REGISTRY_STORAGE_S3_SKIPVERIFY: "true"
      REGISTRY_STORAGE_S3_ROOTDIRECTORY: "/"
      REGISTRY_STORAGE_DELETE_ENABLED: "true"
      REGISTRY_STORAGE_REDIRECT_DISABLE: "true"
      REGISTRY_VALIDATION_DISABLED: "true"
    ports:
      - 5002:5002
    depends_on:
      - s3
      - minio

Run:
docker-compose -f local-registry-compose.yml -p seaweedfs up

Expected behavior
The push succeeds without httpStatus:500 errors.

Additional context

  1. docker pull logstash:7.9.3
  2. docker tag logstash:7.9.3 127.0.0.1:5001/logstash:7.9.3
  3. docker push 127.0.0.1:5001/logstash:7.9.3
The push refers to repository [127.0.0.1:5001/logstash]
d531f1620717: Pushed 
da29e51f4b5a: Pushed 
e130db055e76: Pushed 
d01d03eef5d2: Pushed 
11689358ee84: Pushed 
39d2e7f98e7d: Pushed 
174f912d511e: Pushed 
99de5206293b: Pushing [=================>                                 ]    102MB/290.2MB
51ff5420b316: Pushed 
2e0e4cba5ae9: Retrying in 8 seconds 
613be09ab3c0: Retrying in 15 seconds 
  4. But for MinIO (registry2) the push succeeds:
The push refers to repository [127.0.0.1:5002/logstash]
d531f1620717: Pushed 
da29e51f4b5a: Pushed 
e130db055e76: Pushed 
d01d03eef5d2: Pushed 
11689358ee84: Pushed 
39d2e7f98e7d: Pushed 
174f912d511e: Pushed 
99de5206293b: Pushed 
51ff5420b316: Pushed 
2e0e4cba5ae9: Pushed 
613be09ab3c0: Pushed 
7.9.3: digest: sha256:cbd441b072dbd5271ecd02de560d7780594672321f90036c9ace1618be41f042 size: 2823

@chrislusf (Collaborator):

Can you try this? Add the `-max=0`:

    command: "volume -mserver=master:9333 -port=8080 -ip=volume -max=0"

@kmlebedev (Contributor, author):

> can you try this? Add the -max=0

Tried it; the error remained.

@chrislusf (Collaborator):

Add the "-max=0", not "-max 0"

@kmlebedev (Contributor, author):

> Add the "-max=0", not "-max 0"

It did not help.

@kmlebedev (Contributor, author):

I disabled multipart copy chunks on the registry by raising the size thresholds to ~1 GB:

      REGISTRY_STORAGE_S3_MULTIPARTCOPYCHUNKSIZE: "1048576000"
      REGISTRY_STORAGE_S3_MULTIPARTCOPYTHRESHOLDSIZE: "1048576000"

https://docs.docker.com/registry/configuration/#list-of-configuration-options

After that, the push worked:

docker push 127.0.0.1:5001/logstash:7.9.3
The push refers to repository [127.0.0.1:5001/logstash]
d531f1620717: Layer already exists 
da29e51f4b5a: Layer already exists 
e130db055e76: Layer already exists 
d01d03eef5d2: Layer already exists 
11689358ee84: Layer already exists 
39d2e7f98e7d: Layer already exists 
174f912d511e: Layer already exists 
99de5206293b: Pushed 
51ff5420b316: Layer already exists 
2e0e4cba5ae9: Pushed 
7.9.3: digest: sha256:cbd441b072dbd5271ecd02de560d7780594672321f90036c9ace1618be41f042 size: 2823

@kmlebedev (Contributor, author) commented Nov 11, 2020

It seems related to how the file size is determined. With multipart copy chunks, the registry sees the layer size as 290.2MB:
99de5206293b: Pushing [============================================> ] 259.9MB/290.2MB
but ends up pushing 301.9MB:
99de5206293b: Pushing [==================================================>] 301.9MB

@chrislusf (Collaborator) commented Nov 11, 2020

Yes, I can reproduce this. What was the previous chunk size? May need to test with a simpler case. Maybe using aws cli?

@kmlebedev (Contributor, author) commented Nov 11, 2020

By default the chunk size is 32 MiB:

    multipartcopychunksize: 33554432
    multipartcopythresholdsize: 33554432

There were no problems with the aws cli during the whole testing period.

@chrislusf (Collaborator):

Found the problem: the reader was being closed too early with s3 UploadPartCopy.

Thanks for the detailed bug reporting!

@kmlebedev (Contributor, author):

I cannot confirm that this fixes it.

@chrislusf (Collaborator):

Can you apply the latest commits and show the logs?

@kmlebedev (Contributor, author) commented Nov 11, 2020

> can you apply the latest commits and show the logs?

My apologies; I double-checked and everything works. Thank you very much!
