
Upload Files read timeout (with Persist data) #1306

Closed
JJRdec opened this issue Aug 16, 2023 · 3 comments

JJRdec commented Aug 16, 2023

Following on from this - #1115 (comment)

Uploading large files (e.g. 1 GB) to a container that does not persist data works great!

But for our local development we need the data uploaded to the local S3 mock to persist. We are doing this by mounting a volume (a folder on the dev's local environment).

Run Container (and mount volume):

docker run -d -e "retainFilesOnExit=true" -e "root=data" -e "initialBuckets=local-bucket" --name s3-local -p 9090:9090 -p 9191:9191 -v "C:\s3":/data --restart=always adobe/s3mock

Upload Steps:

1. truncate -s 1G output.file
2. aws s3 cp ./output.file s3://local-bucket --endpoint-url=http://localhost:9090

Output:

upload failed: ./output.file to s3://local-bucket/output.file Read timeout on endpoint URL: "http://localhost:9090/local-bucket/output.file?uploadId=403b4a75-3646-4abf-ba24-6b822fedcd59"

afranken self-assigned this Aug 16, 2023

afranken (Member) commented

@JJRdec
I have an M1 MacBook here and the test case is running without problems:

$ cat docker-compose.yml
services:
  s3mock:
    image: adobe/s3mock:3.1.0
    environment:
      - initialBuckets=bucket1,bucket2
      - root=/data
      - debug=true
      - retainFilesOnExit=true
    ports:
      - 9090:9090
    volumes:
      - ./data:/data
$ docker compose up -d
$ truncate -s 1G output.file
$ aws s3 cp ./output.file s3://bucket1/ --endpoint-url=http://localhost:9090
Completed 1.0 GiB/1.0 GiB (134.0 MiB/s) with 1 file(s) remaining
upload: ./output.file to s3://bucket1/output.file

This took 38s.
Without a mounted volume it takes about half that time.

I see that you're running Docker on Windows.
Unfortunately, volumes are known to be very slow on Windows; search the web and you'll find many people complaining about this.

Can't do much here in the short term, I'm afraid.
My advice is to use smaller files for testing, or to move to a different operating system.
Docker on Mac introduced VirtioFS, which is much faster than the previously available file-sharing implementations.
Supposedly, Docker on Linux is also much faster than Docker on Windows.

At some point I plan to refactor the way we handle multipart uploads (I can't fix #1205 without a refactoring), but I can't say yet whether that will speed up the process. Currently we persist every part separately, and when the multipart upload is completed we synchronously copy all parts into a new file and delete the parts.
The speed of that operation is highly dependent on the speed of the disk.
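
For a 1 GiB upload through the AWS CLI (default part size is 8 MiB, so roughly 128 parts), the completion step is, on disk, roughly analogous to the following shell sketch. This is an illustration only, not S3Mock's actual code, and the file names are made up:

$ cat parts/part-* > object.bin    # re-read every persisted part and copy it into one new 1 GiB file
$ rm -r parts/                     # then delete the individual part files

So on a slow mounted volume, the full gigabyte gets rewritten a second time, synchronously, before the upload can complete.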

JJRdec (Author) commented Aug 17, 2023

Thanks for the detail around this issue. I see now that this is more of a Docker-on-Windows issue!

For others on Windows we see two workaround options (examples below):

  1. Set the read timeout to infinite (only for local development, of course).
  2. Use WSL2 and mount the volume on the Linux subsystem instead of on Windows.
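
For option 1 (illustrative command; --cli-read-timeout is the AWS CLI's global option, and 0 disables the read timeout):

aws s3 cp ./output.file s3://local-bucket --endpoint-url=http://localhost:9090 --cli-read-timeout 0

For option 2, run the container from a WSL2 shell and keep the mounted folder on the Linux filesystem, e.g. (the host path is a placeholder; the other settings mirror the compose example above):

docker run -d -e "retainFilesOnExit=true" -e "root=/data" -e "initialBuckets=local-bucket" --name s3-local -p 9090:9090 -v "$HOME/s3-data":/data adobe/s3mock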

@afranken I'll leave it up to you if you want to close this issue, thanks!

afranken (Member) commented

@JJRdec, sorry I can't do more right now.
I'll close this issue; you should open an issue in the Docker for Windows repo if you haven't already done so:
https://github.com/docker/for-win
Maybe, if enough people ask about it, the Docker team will implement a speed improvement for Windows in the future :)
