Check if a block is already in the emerge queue before checking occlusion culling and trying to reemerge #15949
Conversation
@sfan5 How about this? This now has the exact same condition as the emerge later, and sets:
// if the block is already in the emerge queue we don't have to check again
if (want_emerge && emerge->isBlockInQueue(p)) {
	nearest_emerged_d = d;
There's a bug here actually: the if (nearest_emerged_d == -1) check is missing, so the variable will contain the furthest position in the end.
Partially restore the existing logic and try to enqueue the block as before; if the queue is full, it will be handled correctly.
I was looking into why the server throughput is lower when emergequeue_limit_diskonly / emergequeue_limit_generate are set to large values. I found that in that case much time is spent doing occlusion culling over and over again for the same blocks until they are finally loaded. For large queue sizes (5000 or 10000) this can reduce server throughput by 25x (from 19k to 800 or fewer blocks/s) and increase CPU load by the same factor.
The fix is to check whether the block is already enqueued; in that case occlusion culling is no longer performed.
No attempt is made to re-enqueue the block either (the only possible bit we could lose is the generate flag, which has no impact in this case because the block is already in the queue). This is similar to checking blocks_sending and blocks_sent before we attempt to handle them again.
Note that this will be even worse if retrieving blocks from the DB takes longer (as might be the case with the Postgres backend).
(With this I see throughput close to the theoretical maximum. On my machine it takes about 45 µs to load/deserialize a block, i.e. ~22k blocks/s; what I see is about 19k blocks/s. Pretty close!)
Increase server loading throughput
See description
To do
This PR is Ready for Review.
How to test
Load any world and dig around. Set emergequeue_limit_diskonly / emergequeue_limit_generate to large values and notice that throughput is not reduced.
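For example, in minetest.conf (the values here are an assumption, chosen to match the large queue sizes mentioned in the description):

```
emergequeue_limit_diskonly = 10000
emergequeue_limit_generate = 10000
```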