Fixes default chunk size for Azure repositories #22577

Merged: 2 commits merged into elastic:master on Jan 12, 2017

Conversation

abeyad commented Jan 12, 2017

Before, the default chunk size for Azure repositories was
-1 bytes, which meant that if the chunk_size was not set on
the Azure repository, nor as a node setting, then no data
files would get written as part of the snapshot (because
the BlobStoreRepository's PartSliceStream does not know
how to process negative chunk sizes).

This commit fixes the default chunk size for Azure repositories
to be the same as the maximum chunk size. This commit also
adds tests for both the Azure and Google Cloud repositories to
ensure only valid chunk sizes can be set.

Closes #22513
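To make the fix concrete, here is a minimal standalone sketch of the intended rule, not the actual plugin code; the 64mb maximum and all names below are assumptions for illustration. The default now falls back to the maximum chunk size, and values outside (0, max] are rejected instead of flowing through as -1 bytes.

// Minimal standalone sketch (not the actual AzureRepository code).
// The 64mb maximum is an assumption for illustration; the point is that the
// default equals the maximum and out-of-range values are rejected up front.
public final class ChunkSizeSketch {
    static final long MAX_CHUNK_SIZE_BYTES = 64L * 1024 * 1024;
    // Before the fix the effective default was -1 bytes; now it is the maximum.
    static final long DEFAULT_CHUNK_SIZE_BYTES = MAX_CHUNK_SIZE_BYTES;

    static long resolveChunkSize(Long configuredBytes) {
        long chunkSize = (configuredBytes == null) ? DEFAULT_CHUNK_SIZE_BYTES : configuredBytes;
        if (chunkSize <= 0 || chunkSize > MAX_CHUNK_SIZE_BYTES) {
            throw new IllegalArgumentException(
                "invalid chunk_size [" + chunkSize + "], must be > 0 and <= " + MAX_CHUNK_SIZE_BYTES);
        }
        return chunkSize;
    }

    public static void main(String[] args) {
        System.out.println(resolveChunkSize(null));          // 67108864 (the maximum), never -1
        System.out.println(resolveChunkSize(1024L * 1024L)); // 1048576, an explicit 1mb is accepted
    }
}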

jasontedor (Member) left a comment
I left one question, but it LGTM.

// default chunk size
AzureRepository azureRepository = azureRepository(Settings.EMPTY);
assertEquals(AzureStorageService.MAX_CHUNK_SIZE, azureRepository.chunkSize());
assertThat(azureRepository.chunkSize().getBytes(), greaterThanOrEqualTo(0L));
jasontedor (Member):

I don't understand the value of this assertion given the assertion on the previous line?

abeyad (Author):

Good catch; this is left over, and the assertion is wrong anyway (the chunk size should always be > 0, not >= 0). I'll remove this one and the one below.
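For context, a hedged sketch of what an invalid chunk_size case in these tests could look like, reusing the azureRepository(Settings) helper from the snippet above and assuming the test extends ESTestCase (so expectThrows is available); the exact exception type and message are assumptions:

// Hypothetical test body; assumes setting validation rejects a non-positive
// chunk_size with an IllegalArgumentException when the repository is constructed.
Settings zeroChunk = Settings.builder().put("chunk_size", "0").build();
IllegalArgumentException e = expectThrows(IllegalArgumentException.class, () -> azureRepository(zeroChunk));
assertTrue(e.getMessage().contains("chunk_size"));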

RepositoryMetaData repositoryMetaData = new RepositoryMetaData("repo", GoogleCloudStorageRepository.TYPE, Settings.EMPTY);
ByteSizeValue chunkSize = GoogleCloudStorageRepository.getSetting(GoogleCloudStorageRepository.CHUNK_SIZE, repositoryMetaData);
assertEquals(GoogleCloudStorageRepository.MAX_CHUNK_SIZE, chunkSize);
assertThat(chunkSize.getBytes(), greaterThanOrEqualTo(0L));
jasontedor (Member):

Same question here?

abeyad merged commit bdf836a into elastic:master on Jan 12, 2017
abeyad commented Jan 12, 2017

Thanks @jasontedor

abeyad pushed two commits that referenced this pull request on Jan 12, 2017, each with the same commit message as above.
abeyad commented Jan 12, 2017

5.x commit: 14fd14d
5.2 commit: 784fe34

jasontedor (Member) commented

@abeyad Can you relabel this as v5.2.0?

abeyad added the v5.2.0 label and removed the v5.2.1 label on Jan 12, 2017
abeyad commented Jan 12, 2017

@jasontedor done

Labels
>bug, :Distributed/Snapshot/Restore (anything directly related to the `_snapshot/*` APIs), v5.2.0, v6.0.0-alpha1
Development

Successfully merging this pull request may close these issues.

Snapshot restore failed after a successful snapshot with ES 5.1.1
3 participants