HDDS-8888. Consider Datanode queue capacity when sending DeleteBlocks command #4939
Conversation
…ns to Datanode whose queue is full
List<DatanodeDetails> datanodes) throws NodeNotFoundException {
    final Set<DatanodeDetails> included = new HashSet<>();
    for (DatanodeDetails dn : datanodes) {
      if (nodeManager.getTotalDatanodeCommandCount(dn,
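The quoted diff is truncated, so here is a self-contained sketch of the same filtering idea. The class, the `pendingCounts` map, and `QUEUE_LIMIT` are hypothetical stand-ins for the real `nodeManager.getTotalDatanodeCommandCount(dn, ...)` lookup in the patch:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch only: the map of pending counts stands in for the real
// nodeManager.getTotalDatanodeCommandCount(dn, ...) call in the PR.
public class DatanodeFilter {
  static final int QUEUE_LIMIT = 5; // default DN delete-command queue limit discussed below

  // Keep only datanodes whose pending DeleteBlocksCommand count is below the limit.
  public static List<String> selectAvailableDatanodes(Map<String, Integer> pendingCounts) {
    List<String> included = new ArrayList<>();
    for (Map.Entry<String, Integer> e : pendingCounts.entrySet()) {
      if (e.getValue() < QUEUE_LIMIT) {
        included.add(e.getKey());
      }
    }
    return included;
  }
}
```

A DN whose reported count has reached the limit is simply skipped for this round and becomes eligible again once a later heartbeat reports a drained queue.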
The basic idea looks good, but I have a concern that the queued cmd size from the last DN heartbeat has a potential delay. If the delete operation is frequent, there might be a case where the service cannot find idle datanodes to send the cmd to until the next heartbeat comes, so the deletion process might not be as smooth as you expect.
Please correct me if I am wrong.
By default the SCM generates a delete command every 60s and sends it to the DN, while the DN reports a heartbeat every 30s, so normally the SCM has reasonably fresh DN status. `getTotalDatanodeCommandCount` returns the number of queued `DeleteBlocksCommand`s, and each time the SCM executes `DeletedBlockTransactionScanner`, only one `DeleteBlocksCommand` is sent to a specific DN. The scanner's execution frequency is fixed and the limit here is 5, so the SCM must execute at least 5 times before the DN's queue is full, which takes 5 minutes. As long as the DN sends a heartbeat before all these commands are executed, the SCM can continue to send delete commands to that DN.
If the queues of all DNs are full, the SCM should not continue to send new commands to any DN.
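The timing argument above can be checked with simple arithmetic. This is a sketch under the stated assumptions (one `DeleteBlocksCommand` per scanner run, a 60s scanner interval, a queue limit of 5); the class and method names are illustrative only:

```java
// Sketch: with one command sent per DeletedBlockTransactionScanner run,
// the DN queue only reaches the limit after `queueLimit` runs.
public class QueueFillTime {
  public static int secondsUntilQueueFull(int queueLimit, int scannerIntervalSeconds) {
    return queueLimit * scannerIntervalSeconds;
  }
}
```

With the defaults discussed here, `secondsUntilQueueFull(5, 60)` is 300 seconds, i.e. the 5 minutes mentioned above, and any 30s heartbeat arriving inside that window refreshes the count the SCM sees.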
I see. Most of the time it should be fine.
The worst case:
- A: the DN reported the HB at time A with a full cmd queue.
- A+29.9s: SCM's DeletedBlockTransactionScanner executes and cannot send a cmd to the DN; it needs to wait for the next round.
- A+90s: SCM has received the latest HB from the DN, DeletedBlockTransactionScanner executes, and finally sends a cmd to the DN again.
It could lead to at most a 90-second gap.
Right now that seems trivial compared to the interval of DeletedBlockTransactionScanner.
Thanks for the explanation.
Yes, and during A+29.9s ~ A+90s the DN can continue to process the `DeleteBlocksCommand`s in its command queue. Since the DN's command queue is full, the DN will not be idle. This PR determines whether to continue sending `DeleteBlocksCommand`s to a DN according to the length of that DN's command queue.
- Without processing the DN HB, SCM will get the same command again, so it is not useful to keep a queue at the SCM side; the SCM would only add duplicate commands, and the DN would only be processing duplicate commands.
- The DN queue is limited to "5" by default (to limit duplicate requests queueing up when the DN is slow) and to bound queue memory.
Considering parallel HB and SCM deleteBlock processing, they are synchronized using a lock:
- HB: take the lock, get commands, process the HB response, release the lock
- SCM deleteBlock: getCommandQueueCount() takes the lock, gets the count, and then releases the lock
So for the sequences below:
Scenario 1: No issue, as the queue is empty and the next command can be added
- HB comes first
- SCM delete block processing next
Scenario 2: Here, adding the same command would be a duplicate, and there is a retry
- SCM delete block processing first
- HB comes next
Considering this, we need not have a queue at the SCM, and the above changes are not required.
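The lock discipline described above could be sketched like this (a hypothetical class, not the actual SCM code; `synchronized` stands in for the shared lock, and the string commands stand in for real `SCMCommand` objects):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Sketch of the two lock-protected paths: the HB path drains the queue,
// while the deleteBlock path only reads the count before deciding to add.
public class CommandQueueSketch {
  private final Deque<String> queue = new ArrayDeque<>();

  // HB path: take lock, get (drain) commands, release lock.
  public synchronized List<String> drainForHeartbeat() {
    List<String> out = new ArrayList<>(queue);
    queue.clear();
    return out;
  }

  // SCM deleteBlock path: getCommandQueueCount() under the same lock.
  public synchronized int getCommandQueueCount() {
    return queue.size();
  }

  public synchronized void addCommand(String cmd) {
    queue.add(cmd);
  }
}
```

Because both paths hold the same lock, the count read by the deleteBlock path is always consistent with either the pre-HB or the post-HB state, which is exactly why the two interleavings reduce to the two scenarios above.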
@Xushaohong PTAL. Thanks.
Hi @xichen01, not sure if you forgot to push the commit or comment.
Sorry, the comment was in Pending status; I have committed the comments.
Thanks @xichen01 for working on this. The license header is missing (reported by
https://github.com/xichen01/ozone/actions/runs/5322342178/jobs/9638901446#step:6:831), and the new config property needs to be added (reported by
https://github.com/xichen01/ozone/actions/runs/5322342178/jobs/9638902642#step:5:3834).
The license header has been added and the new config property has been added.
…on instead of new added configuration
@adoroszlai PTAL. Thanks.
@xichen01 IMO, the change is not required, as having the queue will not provide any benefit. As per SCM deleteBlock processing, it will retry the same blocks for delete until the response in the HB is received.
The SCM sends duplicate BlockIDs to the DN, and we should avoid this problem because it affects the speed of the DN marking blocks. In our cluster, only about 1/3 of the blocks sent by the SCM to the DN are valid; the remaining 2/3 are duplicates. A common scenario: our business requires a deletion speed of at least 1000+ QPS, and the current Ozone deletion speed is not stable enough to reach it.
I have seen another PR for solving duplicate delete-block handling. IMO, we can do the below changes to improve deletion speed,
Thank you for the suggestion; we have increased its size. Also, some DNs are slower than others, which causes most (sometimes all) of the "Block deletion transaction"s to be sent to the slow DNs, with the result that the other, normal DNs cannot receive new transactions (figure below). This PR can resolve this issue.
For duplicate handling, HDDS-8882 is raised, so with that this might not be an issue. For the slow-DN part, another improvement can be done at the SCM: if the number of commitTransactions for a DN is high, the SCM can control sending new blocks, but it will be tricky to choose between retrying and sending new blocks to the DN.
Instead of queue capacity at the SCM, some analysis can be done on a strategy for controlling retries and sending new blocks to the DN.
Do you mean that we should consider "does the DN have too many unexecuted block delete transactions" inside HDDS-8882? But this is actually a similar implementation: in this PR we monitor the length of the queue on the DN to determine whether to send a new delete transaction to the DN. This PR does not add any new queue on the SCM side. If a DN is down or suffers some other failure, the change in its queue length can be monitored by the SCM, so that shouldn't be a problem either. Based on HDDS-8882, it is acceptable to implement transmit control; do you have any suggestions?
@xichen01 Sorry for the late reply. I re-looked over this. It checks the count of the DN's queue length together with the pending command queue at the SCM and takes a decision:
nodeManager.getTotalDatanodeCommandCount() -- returns the command count at the DN and at the SCM together. Case: if the DN command queue was previously "0" or less than 5 and the DN is also down, this will cause the same command to be added at the SCM until the limit "5" (duplicating exactly up to 5 overall). So we can have the above as an additional check, if required, to avoid sending a command when the DN queue has already reached "5".
Thank you for your reply. Yes, currently, if the DN is down, duplicate commands are put into the SCM command queue until the limit. However, if HDDS-8882 is merged, the SCM will not place duplicate commands into the SCM command queue, because if a command is not sent to the DN, its status will reflect that. So if HDDS-8882 is also merged, while the DN is down, the SCM will generate unduplicated commands.
If HDDS-8882 can be merged, maybe this doesn't need to be added.
@sumitagrawl @adoroszlai, PTAL, Thanks |
@xichen01 My concern was: We can optimize the logic as, |
@sumitagrawl |
@xichen01 I got your point: one command needs to be queued in advance. The reason for using only one in advance is that, when the HB comes from the DN, the SCM will send all items in its command queue to the DN together; keeping only one avoids a big message size in the response, which could happen if someone configures a higher value of queueLen. But this still solves the idle problem: with one command in the queue, the next command can be prepared in parallel.
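The "at most one command in advance" rule discussed here could be sketched as a simple predicate (hypothetical class and parameter names; `queueLen` in the comment refers to the configurable limit mentioned above):

```java
// Sketch: allow at most one command queued at the SCM per DN, so the DN is
// never idle, but the HB response (which sends all queued items together)
// stays small regardless of the configured queueLen.
public class AdvanceCommandPolicy {
  public static boolean canQueueAnotherCommand(int queuedAtScmForDn) {
    return queuedAtScmForDn < 1;
  }
}
```

Under this rule, the SCM queues a new command only when the previous one has already been shipped with a heartbeat, so at most one command rides each HB response.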
Done. |
@xichen01 LGTM
Thanks @xichen01 for the patch, @sumitagrawl, @Xushaohong for the review.
… command (apache#4939) (cherry picked from commit 43c9565)
What changes were proposed in this pull request?
Currently, one of the conditions for the SCMBlockDeletingService to decide whether to send a deletion transaction to a DN is whether the SCM's CommandQueue is empty. A better way to determine this is based on whether there is enough space in the DN's queue rather than the SCM's queue. In some cases, even if a DN is able to get a delete transaction from the SCM, it cannot complete the task fast enough and commands accumulate in the DN's queue; in that case, this strategy better controls the load on the DN.
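The change in decision condition could be sketched like this (illustrative only; the limit value and method names are assumptions, not the actual Ozone API):

```java
public class SendPolicy {
  static final int DN_QUEUE_LIMIT = 5; // assumed default, per the review discussion

  // Old condition (sketch): send only when the SCM's CommandQueue for the DN is empty.
  public static boolean oldCondition(int scmQueuedForDn) {
    return scmQueuedForDn == 0;
  }

  // New condition (sketch): send while the DN-side queue, as reported via
  // heartbeat, still has room.
  public static boolean newCondition(int dnReportedQueueLength) {
    return dnReportedQueueLength < DN_QUEUE_LIMIT;
  }
}
```

The new condition throttles on the consumer's (DN's) actual backlog instead of the producer's (SCM's) local queue, which is what lets it shield slow DNs from overload.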
What is the link to the Apache JIRA
https://issues.apache.org/jira/browse/HDDS-8888
How was this patch tested?
Unit test.