Support Google Cloud Storage #501
Conversation
Force-pushed from f43b656 to b1e1cee
e2e/testdata/backup_gcs.yaml (outdated)

```yaml
    value: fake-gcs-server.default.svc:4443
bucketConfig:
  bucketName: moco
  endpointURL: http://fake-gcs-server.default.svc:4443
```
`endpointURL` is not used with the `gcs` backend. Please remove this field.
Thanks, I fixed it: bf6744d
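Presumably the fix just drops the field from both backup_gcs.yaml and restore_gcs.yaml, leaving something like the following sketch (bf6744d has the actual change):

```yaml
    value: fake-gcs-server.default.svc:4443
bucketConfig:
  bucketName: moco
```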
e2e/testdata/restore_gcs.yaml (outdated)

```yaml
    value: fake-gcs-server.default.svc:4443
bucketConfig:
  bucketName: moco
  endpointURL: http://fake-gcs-server.default.svc:4443
```
`endpointURL` is not used with the `gcs` backend. Please remove this field.
fixed: bf6744d
pkg/bucket/gcs.go (outdated)

```go
bucket := b.client.Bucket(b.name)

w := bucket.Object(key).NewWriter(ctx)
w.ChunkSize = int(decidePartSize(objectSize))
```
Does GCS have a limit on the maximum number of parts per upload? If GCS's limit differs from Amazon S3's, we need to adjust accordingly. Amazon S3 has the limit below, which is why MOCO adjusts the chunk size according to the backup file size; please refer to #318.

https://docs.aws.amazon.com/AmazonS3/latest/userguide/qfacts.html

| Limit | Value |
| --- | --- |
| Maximum number of parts per upload | 10,000 |
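For context, the adjustment discussed in #318 can be sketched roughly like this; the constants are assumptions, and MOCO's actual decidePartSize may differ:

```go
package bucket

const (
	defaultPartSize = 16 << 20 // assumed 16 MiB starting point
	maxParts        = 10_000   // S3's documented maximum number of parts per upload
)

// decidePartSize doubles the part size until objectSize fits within
// maxParts parts, so a multipart upload never exceeds the S3 limit.
func decidePartSize(objectSize int64) int64 {
	partSize := int64(defaultPartSize)
	for objectSize > partSize*maxParts {
		partSize *= 2
	}
	return partSize
}
```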
As far as I've researched, I haven't found any specific limit for GCS. However, specifying the chunk size involves a trade-off with memory, and the client library's default already seems well tuned. I think it's okay to leave the chunk size to the library default and remove this setting. What do you think? From the Go client documentation:

> The Go client library uses a buffer size that's equal to the chunk size. The buffer size must be a multiple of 256 KiB (256 x 1024 bytes). Larger buffer sizes typically make uploads faster, but note that there's a tradeoff between speed and memory usage. If you're running several resumable uploads concurrently, you should set Writer.ChunkSize to a value that's smaller than 16 MiB to avoid memory bloat.
I added a comment about the chunk size: 816f1e4
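One plausible shape of that change, assuming the explicit setting was dropped in favor of the library default (see 816f1e4 for the actual diff):

```go
w := bucket.Object(key).NewWriter(ctx)
// Leave w.ChunkSize at the client library's default. The library buffers
// one chunk per upload in memory, so chunk size trades upload speed
// against memory; Google recommends values below 16 MiB when running
// several resumable uploads concurrently.
```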
Signed-off-by: d-kuro <kurosawa7620@gmail.com>
LGTM. Thank you!
refs: #427
Add support for Google Cloud Storage (GCS) to MOCO, based on the proposal.
We use a mock GCS server for testing.
refs: https://github.com/fsouza/fake-gcs-server
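As a rough illustration of how tests can talk to the mock server (not code from this PR; the endpoint mirrors the e2e test data, and the /storage/v1/ path is an assumption about how fake-gcs-server exposes the JSON API):

```go
package bucket

import (
	"context"

	"cloud.google.com/go/storage"
	"google.golang.org/api/option"
)

// newTestClient returns a *storage.Client that talks to a local
// fake-gcs-server instead of the real GCS API.
func newTestClient(ctx context.Context) (*storage.Client, error) {
	return storage.NewClient(ctx,
		option.WithEndpoint("http://fake-gcs-server.default.svc:4443/storage/v1/"),
		option.WithoutAuthentication(),
	)
}
```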
To verify actual operation against GCP, the steps we performed locally are described below.
Steps
1. Create GCP secret (an example manifest sketch follows this list)
2. Apply manifests
3. Create backup job
4. Backup job logs
5. Check GCP console
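For step 1, a minimal sketch of the secret, assuming a GCP service-account key file; the secret name and key below are hypothetical and must match whatever the backup manifests reference:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: gcp-credentials      # hypothetical name
  namespace: default
type: Opaque
stringData:
  service-account.json: |    # hypothetical key; paste the service-account key JSON here
    {"type": "service_account"}
```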