
Commit

gcs: upload in chunks of bufferSize if the content is more than `bufferSize`.

When uploading large objects in one go, the remote server might time out
and return an error. Splitting larger files into multiple upload chunks
avoids this.
andir committed Aug 1, 2019
1 parent 17a82e8 commit 37297df
Showing 1 changed file with 26 additions and 10 deletions.
src/libstore/gcs-binary-cache-store.cc
@@ -93,16 +93,32 @@ struct GCSBinaryCacheStoreImpl : public GCSBinaryCacheStore
     {
         const auto size = data.size();
         const auto now1 = std::chrono::steady_clock::now();
-        const auto metadata = client->InsertObject(
-            bucketName, path, std::move(data),
-            gcs::WithObjectMetadata(
-                gcs::ObjectMetadata()
-                    .set_content_type(mimeType)
-                    .set_content_encoding(contentEncoding)
-            ));
 
-        if (!metadata) {
-            throw Error(format("GCS error uploading '%s': %s") % path % metadata.status().message());
-        }
+        if (size < bufferSize) {
+
+            const auto metadata = client->InsertObject(
+                bucketName, path, std::move(data),
+                gcs::WithObjectMetadata(
+                    gcs::ObjectMetadata()
+                        .set_content_type(mimeType)
+                        .set_content_encoding(contentEncoding)
+                ));
+            if (!metadata) {
+                throw Error(format("GCS error uploading '%s': %s") % path % metadata.status().message());
+            }
+
+        } else {
+            auto stream = client->WriteObject(bucketName, path);
+            for (size_t n = 0; n < size; n += bufferSize) {
+                const auto slice = data.substr(n, bufferSize);
+                stream << slice;
+            }
+            stream.Close();
+
+            const auto metadata = std::move(stream).metadata();
+            if (!metadata) {
+                throw Error(format("GCS error uploading '%s': %s") % path % metadata.status().message());
+            }
+        }
 
         const auto now2 = std::chrono::steady_clock::now();
