
Reduce contention in DocumentsWriterPerThreadPool. #12199

Merged (9 commits, Mar 15, 2023)

Conversation

@jpountz (Contributor) commented Mar 10, 2023

Obtaining a DWPT and putting it back into the pool is subject to contention. This change reduces contention by using 8 sub pools that are tried sequentially. When applied on top of #12198, this reduces the time to index geonames with 20 threads from ~19s to ~16-17s.
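To make the sub-pool idea concrete, here is a minimal, illustrative sketch of the striping technique the description refers to; the class and method names are hypothetical and not Lucene's actual `DocumentsWriterPerThreadPool` code. Each sub pool has its own lock, and threads try the sub pools sequentially starting from a thread-derived slot, so concurrent threads mostly contend on different locks:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical sketch of a striped free list: 8 sub pools tried sequentially.
final class StripedPool<T> {
  private static final int CONCURRENCY = 8; // number of sub pools

  private final ReentrantLock[] locks = new ReentrantLock[CONCURRENCY];
  @SuppressWarnings("unchecked")
  private final Deque<T>[] queues = new Deque[CONCURRENCY];

  StripedPool() {
    for (int i = 0; i < CONCURRENCY; ++i) {
      locks[i] = new ReentrantLock();
      queues[i] = new ArrayDeque<>();
    }
  }

  /** Derive a starting slot from the current thread to spread contention. */
  private int startSlot() {
    return Math.floorMod(Thread.currentThread().hashCode(), CONCURRENCY);
  }

  void add(T item) {
    int start = startSlot();
    // First pass: only use sub pools whose lock is immediately available.
    for (int i = 0; i < CONCURRENCY; ++i) {
      int slot = (start + i) % CONCURRENCY;
      ReentrantLock lock = locks[slot];
      if (lock.tryLock()) {
        try {
          queues[slot].addLast(item);
          return;
        } finally {
          lock.unlock();
        }
      }
    }
    // Fallback: block on the preferred sub pool.
    ReentrantLock lock = locks[start];
    lock.lock();
    try {
      queues[start].addLast(item);
    } finally {
      lock.unlock();
    }
  }

  /** Returns a pooled item, or null if all sub pools are empty. */
  T poll() {
    int start = startSlot();
    for (int i = 0; i < CONCURRENCY; ++i) {
      int slot = (start + i) % CONCURRENCY;
      ReentrantLock lock = locks[slot];
      lock.lock();
      try {
        T item = queues[slot].pollFirst();
        if (item != null) {
          return item;
        }
      } finally {
        lock.unlock();
      }
    }
    return null; // caller creates a new writer instead
  }
}
```

In the real pool, a `poll()` miss is what triggers creating a fresh DWPT, as discussed in the review below.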

```java
dwpt = newWriter();
ensureOpen();
DocumentsWriterPerThread dwpt = freeList.poll(DocumentsWriterPerThread::tryLock);
if (dwpt == null) {
```
Contributor:

For some reason, double locking always makes me cringe. There are lengthy discussions about this idiom all around the web (visibility of partially constructed objects).

Contributor Author:

I don't think this is a case of double-checked locking. In general, double-checked locking tries to reduce the overhead of acquiring a lock by adding a quick check before taking the lock. Here the logic is different: it would be legal to remove the call to poll under the lock. I only added it because a thread may have had to wait on the lock, so we can check whether another DWPT was added to the queue in the meantime and save creating a new DWPT. I don't think it's actually important; I could remove the second call to poll to make it look less like double-checked locking.

Contributor:

It's not about the poll. It's about whether dwpt = newWriter(); can be expanded and reordered by the compiler so that dwpt gets assigned a value before the constructor (or whatever initialization inside newWriter) takes place. Any thread checking dwpt == null outside synchronized could, at least in theory, see such a "partially constructed" object.

Contributor:

Aleksey Shipilëv had a nice writeup about it - found it here: https://shipilev.net/blog/2014/safe-public-construction/
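For readers following the discussion, here is a minimal sketch of the pattern being debated; the class names are illustrative only, not Lucene code. In classic double-checked locking, without `volatile` the write to the shared field can become visible to another thread before the constructor's writes, so an unsynchronized reader may observe a partially constructed object. Declaring the field `volatile` (on Java 5+) is the standard fix from that write-up:

```java
// Illustrative object with some state initialized in its constructor.
final class Holder {
  private int value;

  Holder() {
    value = 42;
  }

  int value() {
    return value;
  }
}

// Double-checked locking made safe by the volatile field. Without volatile,
// the unsynchronized first read could see a partially constructed Holder.
final class DoubleCheckedLazy {
  private static volatile Holder instance;

  static Holder get() {
    Holder local = instance; // read the field once into a local
    if (local == null) { // first (unsynchronized) check
      synchronized (DoubleCheckedLazy.class) {
        local = instance;
        if (local == null) { // second check, under the lock
          local = instance = new Holder();
        }
      }
    }
    return local;
  }
}
```

As jpountz argues below, the PR's code differs from this pattern because `getAndLock()` does not publish the new writer to other threads through an unsynchronized field.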

Contributor Author:

Thanks for sharing this blog post; I remember reading it in the past, and it was a good re-read. I mentioned the polls because I thought they were what made this code look like double-checked locking to you: there is a first call before the lock and another one under the lock, like the null checks in double-checked locking with singletons. I need to head into the weekend; I'll try to post a convincing explanation of why this is not a case of double-checked locking and why it is safe next week. By the way, thanks for looking!

Contributor:

No need to convince me, @jpountz - I just expressed my reluctance at it because, well, it requires convincing. :) Unless there's really a huge gain, I typically just go with what I understand works (making the field volatile, for example).

Contributor Author:

I still have the feeling that you are making incorrect assumptions about what this piece of code is doing (this method itself doesn't publish the object to other threads), but I took this thread as a call to keep things simple, so I removed the retry and added a few more comments about the logic of this pool.

Contributor:

I certainly am! But anyone looking at this code without jumping deep will have the same doubts, I think. Unless there is a convincing (performance) argument that it's worth it, I like the updated version a lot better - it's clear and simple.

Contributor:

Thanks, the code looks much better. I was also irritated by the long chain of "tries" with and without synchronized. To me the actual code is easier to read.

@uschindler (Contributor) commented Mar 13, 2023

Thanks for the updates. Sorry for nitpicking, I just prefer code that goes sequentially and uses the pattern "exit the method as soon as the condition is met". This makes it easier to understand.

Nice that there's no double locking anymore.

@s1monw (Member) left a comment:

LGTM just a bunch of nits


```java
// Only used for assertions
boolean contains(Object o) {
  for (int i = 0; i < CONCURRENCY; ++i) {
```
Member:

nitpick can you add a check that assertions are enabled?
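A common Java idiom for this kind of check, sketched below with an illustrative helper name (not necessarily what the PR ended up using): the assignment inside the `assert` statement only executes when assertions are enabled with `-ea`, so the method reports whether assertions are on.

```java
// Hypothetical helper to detect whether assertions are enabled at runtime.
final class AssertionCheck {
  static boolean assertionsEnabled() {
    boolean enabled = false;
    // The assignment expression below only runs when assertions are enabled,
    // because assert conditions are skipped entirely without -ea.
    assert enabled = true;
    return enabled;
  }
}
```

A caller can then guard an assertion-only method with something like `assert assertionsEnabled() == false || contains(o)` patterns, or simply `assert AssertionCheck.assertionsEnabled() : "call only from assertions";` to document intent.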

```java
// Only used for assertions
boolean contains(Object o) {
  for (int i = 0; i < CONCURRENCY; ++i) {
    locks[i].lock();
```
Member:

this is really a nit pick but for methods that use a lock I'd prefer to assign the lock to a local var instead of dereferencing it again in the finally block.


```java
boolean remove(Object o) {
  for (int i = 0; i < CONCURRENCY; ++i) {
    locks[i].lock();
```
Member:

same here, maybe use a local var for the lock. It really looks cleaner
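The reviewer's suggestion can be sketched as follows; the class is illustrative, not the actual PR code. Capturing the lock in a local variable before `lock()` guarantees that the `finally` block unlocks the exact same lock instance, and avoids re-dereferencing the array:

```java
import java.util.concurrent.locks.ReentrantLock;

// Illustrative striped container showing the local-variable lock idiom.
final class Slots {
  private static final int CONCURRENCY = 8;

  private final ReentrantLock[] locks = new ReentrantLock[CONCURRENCY];
  private final Object[] slots = new Object[CONCURRENCY];

  Slots() {
    for (int i = 0; i < CONCURRENCY; ++i) {
      locks[i] = new ReentrantLock();
    }
  }

  boolean contains(Object o) {
    for (int i = 0; i < CONCURRENCY; ++i) {
      ReentrantLock lock = locks[i]; // local var, as suggested in review
      lock.lock();
      try {
        if (slots[i] == o) {
          return true;
        }
      } finally {
        lock.unlock(); // provably the same lock that was acquired
      }
    }
    return false;
  }

  void put(int i, Object o) {
    ReentrantLock lock = locks[i];
    lock.lock();
    try {
      slots[i] = o;
    } finally {
      lock.unlock();
    }
  }
}
```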

@jpountz jpountz merged commit f324204 into apache:main Mar 15, 2023
@jpountz jpountz deleted the reduced_contention_dwpt_pool branch March 15, 2023 12:17
jpountz added a commit that referenced this pull request Mar 15, 2023
@jpountz (Contributor Author) commented Mar 17, 2023

I suspect this change is the source of the speedup when indexing vectors on https://home.apache.org/~mikemccand/lucenebench/indexing.html, but maybe more because of the introduced affinity between indexing threads and DWPTs than because of reduced contention, since contention generally shows up when indexing is fast, which isn't the case with vectors?

jpountz added a commit to jpountz/lucene that referenced this pull request Mar 29, 2023
After upgrading Elasticsearch to a recent Lucene snapshot, we observed a few
indexing slowdowns when indexing with low numbers of cores. This appears to be
due to the fact that we lost too much of the bias towards larger DWPTs in
apache#12199. This change tries to add back more ordering by adjusting
the concurrency of `DWPTPool` to the number of cores that are available on the
local node.
jpountz added a commit to jpountz/lucene that referenced this pull request Mar 29, 2023
jpountz added a commit that referenced this pull request Mar 31, 2023
jpountz added a commit that referenced this pull request Mar 31, 2023
waziqi89 pushed a commit to waziqi89/lucene that referenced this pull request Apr 3, 2023
@vsop-479 (Contributor) commented Apr 14, 2023

```java
DocumentsWriterPerThread getAndLock() {
  ensureOpen();
  DocumentsWriterPerThread dwpt = freeList.poll(DocumentsWriterPerThread::tryLock);
  if (dwpt != null) {
    return dwpt;
  }
  // newWriter() adds the DWPT to the dwpts set as a side-effect. However it is not added to
  // freeList at this point, it will be added later on once DocumentsWriter has indexed a
  // document into this DWPT and then gives it back to the pool by calling
  // #marksAsFreeAndUnlock.
  return newWriter();
}
```

@jpountz Will this change generate more small segments than the synchronized version?
