
During recovery, mark last file chunk to fail fast if payload is truncated #7830

Merged
merged 1 commit into elastic:master from last_chunk on Sep 23, 2014

Conversation

s1monw
Contributor

@s1monw s1monw commented Sep 23, 2014

Today we rely on the metadata length of the file we are recovering
to indicate when the last chunk has been received. Yet this might hide
bugs in the compression layer if payloads are truncated. We should
explicitly mark the last chunk so that we can validate checksums
accordingly where possible.
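The idea above can be sketched as follows. This is a minimal, hypothetical illustration, not the actual Elasticsearch recovery code: the `FileChunk` and `ChunkReceiver` names are invented for the example, and a CRC32 stands in for the real per-file checksum. The point is that the sender sets an explicit last-chunk flag, so the receiver can verify the checksum the moment the final chunk arrives instead of inferring completion from the metadata length:

```java
import java.util.Arrays;
import java.util.zip.CRC32;

public class ChunkRecoverySketch {

    // Simplified stand-in for a recovery file-chunk message.
    static final class FileChunk {
        final byte[] content;
        final boolean lastChunk; // explicit marker instead of relying on file length

        FileChunk(byte[] content, boolean lastChunk) {
            this.content = content;
            this.lastChunk = lastChunk;
        }
    }

    // Receiver side: accumulate chunks and fail fast on the last one
    // if the running checksum does not match the expected value.
    static final class ChunkReceiver {
        private final CRC32 crc = new CRC32();
        private final long expectedChecksum;

        ChunkReceiver(long expectedChecksum) {
            this.expectedChecksum = expectedChecksum;
        }

        /** Returns true once the transfer is complete and verified. */
        boolean receive(FileChunk chunk) {
            crc.update(chunk.content, 0, chunk.content.length);
            if (chunk.lastChunk) {
                if (crc.getValue() != expectedChecksum) {
                    // A truncated payload is caught here, on the last chunk,
                    // rather than surfacing later as an unrelated failure.
                    throw new IllegalStateException("checksum mismatch: payload truncated?");
                }
                return true;
            }
            return false;
        }
    }

    public static void main(String[] args) {
        byte[] data = "some file contents to recover".getBytes();
        CRC32 full = new CRC32();
        full.update(data, 0, data.length);
        long expected = full.getValue();

        // Happy path: two chunks, the second marked as last.
        ChunkReceiver ok = new ChunkReceiver(expected);
        ok.receive(new FileChunk(Arrays.copyOfRange(data, 0, 10), false));
        boolean done = ok.receive(new FileChunk(Arrays.copyOfRange(data, 10, data.length), true));
        System.out.println("verified=" + done);

        // Truncated payload: the last chunk is missing bytes, so the
        // checksum check fails immediately instead of going unnoticed.
        ChunkReceiver bad = new ChunkReceiver(expected);
        bad.receive(new FileChunk(Arrays.copyOfRange(data, 0, 10), false));
        try {
            bad.receive(new FileChunk(Arrays.copyOfRange(data, 10, data.length - 3), true));
            System.out.println("truncation-undetected");
        } catch (IllegalStateException e) {
            System.out.println("truncation-detected");
        }
    }
}
```

Without the flag, a truncated final chunk could simply look like a transfer that never finished, or be masked by the compression layer; the explicit marker turns it into an immediate, local failure.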

@bleskes
Contributor

bleskes commented Sep 23, 2014

LGTM. Good catch.

s1monw added a commit that referenced this pull request Sep 23, 2014
Today we rely on the metadata length of the file we are recovering
to indicate when the last chunk has been received. Yet this might hide
bugs in the compression layer if payloads are truncated. We should
explicitly mark the last chunk so that we can validate checksums
accordingly where possible.

Closes #7830
s1monw added a commit that referenced this pull request Sep 23, 2014
Today we rely on the metadata length of the file we are recovering
to indicate when the last chunk has been received. Yet this might hide
bugs in the compression layer if payloads are truncated. We should
explicitly mark the last chunk so that we can validate checksums
accordingly where possible.

Closes #7830
@s1monw s1monw merged commit 6c8aa5f into elastic:master Sep 23, 2014
s1monw added a commit that referenced this pull request Sep 23, 2014
This test sometimes corrupts a file by truncating a chunk on the
network layer. Yet this was not detected and required changes in the
wire protocol. It's fixed in 1.4; see #7830 for details.

Relates to #7830
@s1monw s1monw deleted the last_chunk branch September 23, 2014 09:48
@s1monw s1monw removed the review label Sep 23, 2014
@clintongormley clintongormley changed the title [RECOVERY] Mark last file chunk to fail fast if payload is truncated Resiliency: During recovery, mark last file chunk to fail fast if payload is truncated Sep 26, 2014
@clintongormley clintongormley added the :Distributed/Recovery Anything around constructing a new shard, either from a local or a remote source. label Jun 7, 2015
@clintongormley clintongormley changed the title Resiliency: During recovery, mark last file chunk to fail fast if payload is truncated During recovery, mark last file chunk to fail fast if payload is truncated Jun 7, 2015
mute pushed a commit to mute/elasticsearch that referenced this pull request Jul 29, 2015
Today we rely on the metadata length of the file we are recovering
to indicate when the last chunk has been received. Yet this might hide
bugs in the compression layer if payloads are truncated. We should
explicitly mark the last chunk so that we can validate checksums
accordingly where possible.

Closes elastic#7830
mute pushed a commit to mute/elasticsearch that referenced this pull request Jul 29, 2015
This test sometimes corrupts a file by truncating a chunk on the
network layer. Yet this was not detected and required changes in the
wire protocol. It's fixed in 1.4; see elastic#7830 for details.

Relates to elastic#7830
Labels
:Distributed/Recovery Anything around constructing a new shard, either from a local or a remote source. >enhancement resiliency v1.4.0.Beta1 v2.0.0-beta1