Do not alloc full buffer for small change requests #35158

Merged: 5 commits into elastic:master from dnhatn:snapshot-buffer-size, Nov 2, 2018

Conversation

@dnhatn (Member) commented Nov 1, 2018

Today we always allocate a full buffer (1024 elements) in a LuceneChangesSnapshot even though the requested size is often much smaller. With this change, we use the requested size as the buffer size if it is smaller than the default batch size; otherwise we use the default batch size.
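
For reference, here is a minimal runnable sketch of the sizing rule described above. The class and method names are hypothetical; only the `requestingSize` expression comes from the diff discussed below.

```java
public class BufferSizing {
    static final int DEFAULT_BATCH_SIZE = 1024;

    static int bufferSize(long fromSeqNo, long toSeqNo) {
        // Guard first: when the range spans the full sequence-number space
        // (e.g. peer recovery asks for [0, Long.MAX_VALUE]), adding 1 to the
        // difference would overflow a long.
        final long requestingSize = (toSeqNo - fromSeqNo) == Long.MAX_VALUE
                ? Long.MAX_VALUE
                : (toSeqNo - fromSeqNo + 1L);
        // Use the requested size when it is smaller than the default batch size.
        return requestingSize < DEFAULT_BATCH_SIZE
                ? Math.toIntExact(requestingSize)
                : DEFAULT_BATCH_SIZE;
    }

    public static void main(String[] args) {
        System.out.println(bufferSize(0, 9));               // 10: small request, small buffer
        System.out.println(bufferSize(0, 100_000));         // 1024: capped at the default
        System.out.println(bufferSize(0, Long.MAX_VALUE));  // 1024: full range, no overflow
    }
}
```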

@dnhatn added the :Distributed/Engine (Anything around managing Lucene and the Translog in an open shard.) label Nov 1, 2018

@elasticmachine (Collaborator) commented:

Pinging @elastic/es-distributed

@jasontedor (Member) left a review comment:

LGTM.

@bleskes (Contributor) left a review comment:

LGTM

```diff
@@ -95,14 +95,15 @@
         }
     };
     this.mapperService = mapperService;
     this.searchBatchSize = searchBatchSize;
+    final long requestingSize = (toSeqNo - fromSeqNo) == Long.MAX_VALUE ? Long.MAX_VALUE : (toSeqNo - fromSeqNo + 1L);
```
A reviewer (Member) commented on this line:

`(toSeqNo - fromSeqNo) == Long.MAX_VALUE`: this would practically always evaluate to false?

@dnhatn (Member, Author) replied:

Yes for CCR, but there is another usage (i.e., peer recovery) where to_seq_no=MAX_VALUE and from_seq_no can be zero:
https://github.com/elastic/elasticsearch/blob/master/server/src/main/java/org/elasticsearch/index/engine/InternalEngine.java#L496.

The reviewer (Member) replied:

Got it, thanks for explaining!
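
To make that edge case concrete, here is a minimal sketch (assuming the peer-recovery bounds from_seq_no=0, to_seq_no=Long.MAX_VALUE mentioned above) showing why the guard is needed:

```java
public class OverflowGuard {
    public static void main(String[] args) {
        long fromSeqNo = 0L;            // peer recovery can start at 0...
        long toSeqNo = Long.MAX_VALUE;  // ...and request everything up to MAX_VALUE

        // The naive count overflows: Long.MAX_VALUE + 1 wraps to Long.MIN_VALUE.
        long naive = toSeqNo - fromSeqNo + 1L;

        // The guarded expression from the diff avoids the wrap-around.
        long requestingSize = (toSeqNo - fromSeqNo) == Long.MAX_VALUE
                ? Long.MAX_VALUE
                : (toSeqNo - fromSeqNo + 1L);

        System.out.println(naive);          // -9223372036854775808
        System.out.println(requestingSize); //  9223372036854775807
    }
}
```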

@martijnvg (Member) left a review comment:

LGTM

@dnhatn (Member, Author) commented Nov 2, 2018:

Thanks everyone :)

@dnhatn merged commit e753e12 into elastic:master Nov 2, 2018
@dnhatn deleted the snapshot-buffer-size branch November 2, 2018 12:50
dnhatn added two commits that referenced this pull request on Nov 2, 2018, both with the message:

Today we always allocate a full buffer (1024 elements) in a
LuceneChangesSnapshot even though the requested size is smaller. With
this change, we use the requested size as the buffer size if it is
smaller than the default batch size; otherwise we use the default batch
size.
@colings86 added the v6.5.0 label and removed the v6.5.1 label Nov 9, 2018
Labels: :Distributed/Engine (Anything around managing Lucene and the Translog in an open shard.), >enhancement, v6.5.0, v6.6.0, v7.0.0-beta1