Core: only one optimize operation should run at once #9638

Closed · mikemccand opened this issue Feb 10, 2015 · 3 comments · Fixed by #9640

@mikemccand (Contributor)

Even though we have the ThreadPool.OPTIMIZE pool with size=1, if the incoming optimize request does not set wait_for_completion, then Lucene's IndexWriter runs the optimize in the background and the request returns immediately, freeing the thread pool to run another optimize.

So if the application submits 10 optimize requests (without wait_for_completion), all 10 will run concurrently, which is bad.

If the optimize is for an upgrade, or a flush is requested, InternalEngine.optimize does submit a waitForMerges call back to the OPTIMIZE pool, but that happens at the end, so all 10 incoming requests will still run at once, I think?
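A minimal, self-contained illustration of the problem (plain java.util.concurrent, not the actual Elasticsearch ThreadPool or InternalEngine code): a size-1 pool does not serialize the work when each task only kicks the real work off to a background thread and returns, analogous to IndexWriter merging with doWait=false.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class AsyncOptimizeDemo {
    public static void main(String[] args) throws InterruptedException {
        // Stand-in for the size-1 OPTIMIZE pool.
        ExecutorService optimizePool = Executors.newFixedThreadPool(1);
        AtomicInteger running = new AtomicInteger();

        for (int i = 0; i < 10; i++) {
            optimizePool.execute(() -> {
                // Like a non-waiting merge: start the work on a background
                // thread and return immediately, freeing the pool thread.
                new Thread(() -> {
                    System.out.println("merges running concurrently: " + running.incrementAndGet());
                    try {
                        Thread.sleep(1000); // simulated merge work
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                    running.decrementAndGet();
                }).start();
            });
        }

        optimizePool.shutdown();
        optimizePool.awaitTermination(10, TimeUnit.SECONDS);
        // The non-daemon background threads keep the JVM alive until they finish.
    }
}
```

Running this shows the concurrency count climbing toward 10; if the pool task did the merge itself (the wait_for_completion path), the count would stay at 1.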

@rjernst (Member) commented Feb 10, 2015

> If the optimize is for an upgrade, or a flush is requested, InternalEngine.optimize does submit a waitForMerges call back to the OPTIMIZE pool, but that happens at the end, so all 10 incoming requests will still run at once, I think?

I think that is correct; or rather, whatever other optimize requests have already been queued will run before the blocking wait-for-merges of the first request that ran. So I think we need to somehow push the waiting thread to the front of the queue, and always do that regardless of the optimize settings (except for wait_for_completion, which of course just runs in the foreground and holds the single optimize thread).
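To make the ordering concrete, here is a small sketch (illustrative only, not the real ThreadPool or InternalEngine code) of a size-1 queue where each optimize task re-submits its waitForMerges step; the wait lands behind every optimize request already in the queue:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class WaitForMergesOrderingDemo {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService optimizePool = Executors.newFixedThreadPool(1);
        CountDownLatch done = new CountDownLatch(3);

        for (int i = 1; i <= 3; i++) {
            final int request = i;
            optimizePool.execute(() -> {
                System.out.println("optimize " + request + ": merge kicked off in background");
                // The wait is re-submitted to the same FIFO queue, so it runs
                // only after the optimize requests that were already queued.
                optimizePool.execute(() -> {
                    System.out.println("optimize " + request + ": waitForMerges");
                    done.countDown();
                });
            });
        }

        done.await();
        optimizePool.shutdown();
    }
}
```

All three "merge kicked off" lines print before any "waitForMerges" line, which is why the waiting work would need to jump the queue (or simply block in the foreground) to actually throttle concurrent merges.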

@rjernst (Member) commented Feb 10, 2015

OK, here is my proposal after speaking with Shay:

  • For 1.4.3: Change the default for wait_for_completion to true
  • For 1.5.0: Remove wait_for_completion (and wait_for_merges in the optimize API)
  • For 2.0: Once we have the task API, try to add back some of this async functionality as a long-running task that can be managed

rjernst added the v1.4.3 label and removed the v1.4.4 label on Feb 10, 2015
@mikemccand (Contributor, Author)

+1 for this plan.

Separately, it would be nice if we could simply call IW.forceMerge(), which waits itself. This would fix #8923 ... must we really hold the readLock when calling forceMerge? Anyway, that can be done separately...
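For context, a minimal sketch of the blocking Lucene call being suggested (plain Lucene with 5.x–8.x style APIs, not Elasticsearch's InternalEngine; the in-memory directory and analyzer are just placeholders):

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.RAMDirectory;

import java.io.IOException;

public class ForceMergeSketch {
    public static void main(String[] args) throws IOException {
        Directory dir = new RAMDirectory();
        try (IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig(new StandardAnalyzer()))) {
            // forceMerge(1) blocks the calling thread until the index has been
            // merged down to a single segment.
            writer.forceMerge(1);

            // By contrast, forceMerge(1, false) only kicks off the merge and
            // returns, which is the non-waiting behavior that lets multiple
            // optimize requests pile up concurrently.
            // writer.forceMerge(1, false);
        }
    }
}
```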

rjernst added a commit to rjernst/elasticsearch that referenced this issue Feb 10, 2015
This has ended up being very trappy.  Most people don't realize the
parameter is there, and using a wildcard on index names for upgrade
will end up essentially bypassing the optimize concurrency controls
through its threadpool.

See elastic#9638
rjernst added a commit to rjernst/elasticsearch that referenced this issue Feb 11, 2015
This has been very trappy. Rather than continue to allow buggy behavior
of having upgrade/optimize requests sidestep the single shard per node
limits optimize is supposed to be subject to, this removes
the ability to run the upgrade/optimize async.

closes elastic#9638
rjernst added a commit that referenced this issue Feb 11, 2015
This has been very trappy. Rather than continue to allow buggy behavior
of having upgrade/optimize requests sidestep the single shard per node
limits optimize is supposed to be subject to, this removes
the ability to run the upgrade/optimize async.

closes #9638
mute pushed a commit to mute/elasticsearch that referenced this issue Jul 29, 2015
This has ended up being very trappy.  Most people don't realize the
parameter is there, and using a wildcard on index names for upgrade
will end up essentially bypassing the optimize concurrency controls
through its threadpool.

See elastic#9638