
Add S3 backend #156

Merged
merged 11 commits into andreimarcu:master from mutantmonkey:s3_backend on Jan 25, 2019

Conversation

@mutantmonkey (Contributor) commented Jan 24, 2019

No description provided.

mutantmonkey added 11 commits Jan 1, 2019
This new backend currently isn't hooked up; new and existing installs
will continue to use the localfs backend.

* Rework torrent generation to be backend-dependent so we can use S3's
  existing torrent API.
* Remove the torrent test cases, which broke with this torrent rework;
  they will need to be added back later.
* Use `http.MaxBytesReader` for better max-size handling (see the sketch
  after this list).
* Allow backends to return errors from `ServeFile` if needed.
* Bail out earlier on files that are too large, when possible.
* Return 400 instead of 500 for empty files and files that are too large
  (when we can bail out early).
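
A minimal sketch (not the PR's actual handler) of how `http.MaxBytesReader` enables the earlier bail-out: once the request body exceeds the cap, further reads fail, so the handler can answer 400 up front instead of surfacing a 500 later. The limit constant and handler name are illustrative.

```go
package main

import (
	"io"
	"net/http"
)

const maxUploadBytes = 128 * 1024 * 1024 // hypothetical size limit

func uploadHandler(w http.ResponseWriter, r *http.Request) {
	// MaxBytesReader caps the body; reads past the limit return an error.
	r.Body = http.MaxBytesReader(w, r.Body, maxUploadBytes)

	n, err := io.Copy(io.Discard, r.Body) // stand-in for the real upload path
	if err != nil {
		// Body exceeded the cap: reject with 400, not 500.
		http.Error(w, "file too large", http.StatusBadRequest)
		return
	}
	if n == 0 {
		// Empty uploads are also a client error.
		http.Error(w, "empty file", http.StatusBadRequest)
		return
	}
	w.WriteHeader(http.StatusOK)
}
```
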
Although S3 offers a GetObjectTorrent API call to generate a torrent
file on their end, it doesn't look like any similar systems with
S3-compatible APIs actually support it. Notably, Minio and Wasabi do not.
To remain compatible with these, it's better not to rely on the storage
backend to handle torrent creation.

Previously, missing files would return a "corrupt metadata" error
because errors were not being properly handled in the S3 backend. This
change catches not-found errors and passes them up to be handled
accordingly.
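
A minimal sketch of the idea, assuming the aws-sdk-go v1 client: translate S3's "object missing" responses into a distinct sentinel error the caller can map to a 404, and pass everything else through. `NotFoundErr` and `translateS3Error` are illustrative names, not the PR's actual identifiers.

```go
package backend

import (
	"errors"

	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/service/s3"
)

// NotFoundErr lets the handler answer 404 instead of a generic
// "corrupt metadata" 500.
var NotFoundErr = errors.New("file not found")

// translateS3Error converts S3 not-found responses into NotFoundErr
// and returns every other error unchanged.
func translateS3Error(err error) error {
	if aerr, ok := err.(awserr.Error); ok {
		switch aerr.Code() {
		case s3.ErrCodeNoSuchKey, "NotFound": // HeadObject reports the bare "NotFound" code
			return NotFoundErr
		}
	}
	return err
}
```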

Alpine Linux puts the CA cert bundle at /etc/ssl/cert.pem by default; to
ensure that Go looks in that location, we now set the SSL_CERT_FILE
environment variable.
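
A minimal sketch of the workaround. Go's crypto/x509 package honors SSL_CERT_FILE when it loads system roots on Linux, so pointing it at Alpine's bundle before the first TLS connection is enough; whether the PR sets the variable in code or in the Docker image isn't shown here, so this in-code version is just one way to do it.

```go
package main

import "os"

func init() {
	// Alpine ships its CA bundle at /etc/ssl/cert.pem. Go reads
	// SSL_CERT_FILE when it first builds the system cert pool, so
	// this must run before any TLS handshake.
	if os.Getenv("SSL_CERT_FILE") == "" {
		os.Setenv("SSL_CERT_FILE", "/etc/ssl/cert.pem")
	}
}
```
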
When generating a torrent, we need the SHA1 hash of each chunk of the
file. Because the S3 backend streams the data, `Read` doesn't always
fill the chunk buffer; using `ReadFull` fixes this.
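
A minimal sketch of why `io.ReadFull` matters here: it keeps reading until the piece buffer is full (or the stream ends), so each SHA1 piece hash covers exactly one full-length chunk even when the underlying reader returns short reads. The piece size and function name are illustrative.

```go
package main

import (
	"crypto/sha1"
	"io"
)

const pieceLength = 256 * 1024 // hypothetical torrent piece size

// hashPieces returns the concatenated SHA1 piece hashes for r.
func hashPieces(r io.Reader) ([]byte, error) {
	var pieces []byte
	buf := make([]byte, pieceLength)
	for {
		// ReadFull reads until buf is full; a short final piece
		// comes back with io.ErrUnexpectedEOF.
		n, err := io.ReadFull(r, buf)
		if n > 0 {
			sum := sha1.Sum(buf[:n])
			pieces = append(pieces, sum[:]...)
		}
		if err == io.EOF || err == io.ErrUnexpectedEOF {
			return pieces, nil
		}
		if err != nil {
			return nil, err
		}
	}
}
```
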
@andreimarcu merged commit 5d9a93b into andreimarcu:master on Jan 25, 2019
1 check passed

continuous-integration/travis-ci/pr: The Travis CI build passed
@andreimarcu (Owner) commented Jan 25, 2019

Thanks!

@mutantmonkey deleted the mutantmonkey:s3_backend branch on Feb 2, 2019