
PERFORMANCE: Flatten logic for write batch instantiation #8163

Conversation

original-brownbear (Member)

Follow up to #8155:

The get_new_batch methods on the memory queue and the acked queue are both internal APIs and completely redundant, since plugins only ever see WrappedWriteClient.
=> flattened this out

Also, this actually comes with a measurable, statistically significant gain of ~3% over the baseline (likely removing one level of stack depth got us under the threshold at which RubyArray is affected by jruby/jruby#4763 here).
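Roughly, the flattening looks like this (a minimal sketch; the class and method bodies here are hypothetical simplifications, not the actual Logstash source — only WrappedWriteClient and get_new_batch are real names from the PR):

```ruby
# Hypothetical sketch of the change, not the actual Logstash implementation.

# Before: each queue's write client exposed its own get_new_batch,
# and WrappedWriteClient delegated to it -- one extra stack frame
# per batch instantiation.
class QueueWriteClient
  def get_new_batch
    []   # a write batch modeled here as a plain array of events
  end
end

class WrappedWriteClientBefore
  def initialize(write_client)
    @write_client = write_client
  end

  def get_new_batch
    @write_client.get_new_batch   # redundant indirection, removed by the PR
  end
end

# After: since plugins only ever see WrappedWriteClient, the batch is
# instantiated directly in the wrapper and the per-queue methods go away.
class WrappedWriteClientAfter
  def get_new_batch
    []
  end
end
```

Both versions return the same empty batch; the difference is purely one level of call depth per instantiation.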

@jordansissel (Contributor)

@guyboertje can you review? My thoughts are that this goes backwards with respect to some anticipated future work where we have a firmer API around batches.

@guyboertje (Contributor)

It does look like it removes some anticipated work. I am satisfied that, when the future arrives, we will do a better job in Java for the Queue clients and Batches.

@guyboertje (Contributor) left a review comment

LGTM

@original-brownbear (Member, Author)

@guyboertje thanks! :)

@elasticsearch-bot
Armin Braun merged this into the following branches!

Branch   Commit
6.x      89af438
master   2514c6b
