
HDDS-8888. Consider Datanode queue capacity when sending DeleteBlocks command #4939

Merged
merged 9 commits into from
Jan 6, 2024

Conversation

xichen01
Contributor

What changes were proposed in this pull request?

Currently, one of the conditions the SCMBlockDeletingService uses to decide whether to send a deletion transaction to a Datanode (DN) is whether the SCM's CommandQueue is empty. A better criterion is whether there is enough space in the DN's own queue, rather than in the SCM's queue.

In some cases, even if a DN can receive delete transactions from the SCM, it cannot complete them fast enough, and commands accumulate in the DN's queue. In such cases this strategy gives better control over the load on the DN.
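The proposed check can be sketched as a standalone snippet (illustrative only: the class, method name, and the assumed default limit of 5 are hypothetical, not the actual Ozone API):

```java
// Hypothetical sketch of the proposed selection logic. In Ozone the queued
// count would come from nodeManager.getTotalDatanodeCommandCount(); here it
// is passed in directly so the example is self-contained.
public class DatanodeQueueFilter {
    // Assumed default limit on pending DeleteBlocksCommands per Datanode.
    static final int PENDING_COMMAND_LIMIT = 5;

    /** Keep only DNs whose reported delete-command queue still has room. */
    public static boolean hasQueueCapacity(int queuedDeleteCommands) {
        return queuedDeleteCommands < PENDING_COMMAND_LIMIT;
    }
}
```

The point of the change is that this predicate is evaluated against the DN's reported queue length, not against the SCM's own CommandQueue.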

What is the link to the Apache JIRA?

https://issues.apache.org/jira/browse/HDDS-8888


How was this patch tested?

unit test

List<DatanodeDetails> datanodes) throws NodeNotFoundException {
final Set<DatanodeDetails> included = new HashSet<>();
for (DatanodeDetails dn : datanodes) {
if (nodeManager.getTotalDatanodeCommandCount(dn,
Contributor


The basic idea looks good, but I have a concern: the queued command count from the last DN heartbeat may be stale. If delete operations are frequent, the service might not find any idle Datanode to send commands to until the next heartbeat arrives, so the deletion process might not be as smooth as you expect.
Please correct me if I am wrong.

Contributor Author


By default the SCM generates a delete command every 60 s and sends it to the DN, while the DN reports a heartbeat every 30 s, so normally the SCM sees reasonably fresh DN status.

getTotalDatanodeCommandCount returns the number of queued DeleteBlocksCommands. Each time the SCM runs the DeletedBlockTransactionScanner, only one DeleteBlocksCommand is sent to a given DN. The scanner runs at a fixed frequency, and the limit here is 5, so the SCM must run at least 5 times (about 5 minutes) before a DN's queue fills up. As long as the DN sends a heartbeat before all of those commands are executed, the SCM can keep sending delete commands to it.

If the queues of all DNs are full, the SCM should not continue sending new commands to them.
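The timing argument above can be put into a toy calculation (the 60 s scanner interval and queue limit of 5 are the defaults mentioned in the comment; the class below is illustrative, not Ozone code):

```java
// Toy estimate of how long the SCM takes to fill a DN's delete-command queue
// if no heartbeat drains it in the meantime.
public class QueueFillEstimate {
    /** Seconds until the DN queue fills, at one command per scanner run. */
    public static int secondsToFill(int queueLimit, int scannerIntervalSec) {
        return queueLimit * scannerIntervalSec;
    }

    public static void main(String[] args) {
        // 5 commands at one per 60 s run: 300 s to fill, while heartbeats
        // arrive every 30 s, so the queue is normally reported (and drained)
        // many times before it can fill.
        System.out.println(secondsToFill(5, 60));
    }
}
```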

Contributor


I see. Most of the time it should be fine.
The worst case:

  • At time A, the DN reports a heartbeat with a full command queue.
  • At A + 29.9 s, the SCM's DeletedBlockTransactionScanner executes but cannot send a command to the DN, so it waits for the next round.
  • At A + 90 s, the SCM has received a newer heartbeat from the DN, DeletedBlockTransactionScanner executes, and a command is finally sent to the DN again.

This could lead to a gap of at most 90 seconds.
Right now that seems trivial compared to the interval of DeletedBlockTransactionScanner.
Thanks for the explanation.

Contributor Author


Yes, and between A + 29.9 s and A + 90 s the DN can continue processing the DeleteBlocksCommands already in its queue. Since the DN's command queue is full, the DN will not be idle. This PR decides whether to continue sending DeleteBlocksCommands to a DN according to the length of that DN's command queue.

Contributor


@xichen01

  1. Without processing a DN heartbeat, the SCM will generate the same command again, so keeping the command queued at the SCM is not useful: the SCM would only add duplicate commands, and the DN would only process duplicates.
  2. The DN queue is limited to 5 by default (to limit duplicate requests queueing up when the DN is slow) and to bound queue memory.

Considering parallel heartbeat (HB) and SCM deleteBlock processing, they are synchronized using a lock:

  • HB: take the lock, get commands, process the HB response, release the lock
  • SCM deleteBlock: getCommandQueueCount() takes the lock, gets the count, then releases the lock

So for the sequences below:
Scenario 1: no issue, as the queue is empty and the next command can be added

  1. HB comes first
  2. SCM delete block processing next

Scenario 2: here, adding the same command creates a duplicate, and there is a retry

  1. SCM delete block processing first
  2. HB comes next

Considering this, we need not keep a queue at the SCM, and the above changes are not required.

@xichen01
Contributor Author

@Xushaohong PTAL Thanks.

@Xushaohong
Contributor


Hi @xichen01, not sure if you forget to push the commit or comment.

@xichen01
Contributor Author


Sorry, the comments were in Pending status; I have submitted them now.

@adoroszlai
Contributor

Thanks @xichen01 for working on this. License header is missing (reported by ratcheck):

hadoop-hdds/server-scm/target/rat.txt: !????? /home/runner/work/ozone/ozone/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/block/TestSCMBlockDeletingService.java

https://github.com/xichen01/ozone/actions/runs/5322342178/jobs/9638901446#step:6:831

And new config property needs to be added to ozone-default.xml, too:

Tests run: 4, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.47 s <<< FAILURE! - in org.apache.hadoop.ozone.TestOzoneConfigurationFields
Error:  org.apache.hadoop.ozone.TestOzoneConfigurationFields.testCompareConfigurationClassAgainstXml  Time elapsed: 0.033 s  <<< FAILURE!
... 1 variables missing in ozone-default.xml Entries:   ozone.block.deleting.pending.command.limit expected:<0> but was:<1>

https://github.com/xichen01/ozone/actions/runs/5322342178/jobs/9638902642#step:5:3834

@xichen01
Contributor Author


License header has been added and the new config property has been added to ozone-default.xml.

@adoroszlai adoroszlai changed the title HDDS-8888. Limit SCMBlockDeletingService to sending delete transactions to Datanode whose queue is full HDDS-8888. Consider Datanode queue capacity when sending DeleteBlocks command Jun 29, 2023
@xichen01
Contributor Author

xichen01 commented Jul 6, 2023

@adoroszlai PTAL. Thanks

@xichen01 xichen01 requested a review from adoroszlai July 6, 2023 03:42
Contributor

@sumitagrawl sumitagrawl left a comment


@xichen01 IMO, the changes are not required, as having the queue will not provide any benefit. As per SCM deleteBlock processing, it will retry the same blocks for deletion until a response is received in the HB.

@xichen01
Contributor Author

xichen01 commented Jul 11, 2023


The SCM sends duplicate BlockIDs to the DN; we should avoid this because it slows down the DN's block marking. In our cluster, only about 1/3 of the blocks the SCM sends to a DN are valid, and the remaining 2/3 are duplicates.
However, preventing duplicate transactions is not the problem this PR solves. This PR addresses slow DNs: if a DN is slow and cannot finish the tasks already in its queue, we should not send it new tasks, and that is what this PR does.

A common scenario: a DN's DeleteBlocksCommandHandler thread is blocked for some reason, so the DN cannot report delete-transaction status to the SCM for a long time. The SCM then keeps sending duplicate delete transactions to the stuck DN, and if the DN is stuck long enough, the SCM ends up sending delete transactions only to the stuck DN (all of them duplicates), while other healthy DNs receive no new delete transactions (because they are skipped).

Our business requires a deletion speed of at least 1000+ QPS, and the current Ozone deletion speed is not stable enough to reach it.

@sumitagrawl
Contributor


I have seen another PR for solving duplicate delete blocks handling.

IMO, we can make the changes below to improve deletion speed:

  1. Increase the batch size, block.deletion.per-interval.max.
  2. Reduce the interval for fetching blocks, ozone.block.deleting.service.timeout, e.g. to 30 s in your case.

Tuning the parameters above can achieve the same result as adding code to keep a queue of commands.

@xichen01
Contributor Author


Thank you for the suggestion. We have increased block.deletion.per-interval.max, but we also increased ozone.block.deleting.service.timeout, because if we just reduce that value from 60 s to 30 s, the valid (non-duplicate) block deletion transactions drop from 1/3 to 1/6 or even less; maybe 5/6 of the block deletion transactions would be duplicates.

Also, some DNs are slower than others, which causes most (sometimes all) of the block deletion transactions to be sent to the slow DNs, so the other, healthy DNs cannot receive new transactions (figure below). This PR resolves that issue.

image

@sumitagrawl
Contributor

For duplicate handling, HDDS-8882 has been raised, so with that this might not be an issue.

For the slow-DN part, another improvement could be done at the SCM: if the number of pending commitTransactions for a DN is high, the SCM can throttle sending new blocks. But it will be tricky to choose between retrying and sending new blocks to the DN, since:

  • the DN can go down or hit other failures

Instead of tracking queue capacity at the SCM, some analysis could be done on a strategy for controlling retries and sending new blocks to DNs.

@xichen01
Contributor Author

xichen01 commented Jul 12, 2023


Do you mean that we should consider "does the DN have too many unexecuted block delete transactions" inside HDDS-8882? If the DN has too many unexecuted delete transactions, we do not send it new block delete transactions.

But that is actually a similar implementation. In this PR we monitor the length of the queue on the DN to decide whether to send it a new delete transaction; this PR does not add any new queue on the SCM side. If a DN goes down or hits another failure, the change in its queue length can be observed by the SCM, so that should not be a problem either.

If it is acceptable to implement transmit control on top of HDDS-8882, do you have any suggestions?

@sumitagrawl
Contributor


@xichen01 Sorry for the late reply. I re-looked over this. It checks the count of the DN's queue plus the pending command queue at the SCM and takes a decision:

  1. Checking the DN's queue length is good, to avoid resending the same command and having it rejected by the DN.

  2. For commands queued at the SCM, this check can still cause the same command to be added again if the DN is down.

nodeManager.getTotalDatanodeCommandCount() -- returns the command count at the DN and at the SCM together
nodeManager.getCommandQueueCount() -- returns the command count present at the SCM only, as used earlier

Case: if the DN's command queue was previously 0 (or less than 5) and the DN is down, this will cause the same command to be added at the SCM until the limit of 5 is reached (duplicating up to 5 overall).

So we can add the above as an additional check, if required, to avoid sending a command when the DN's queue has already reached 5.
Maybe we can check for a smaller queue size at the SCM, such as a limit of 2, to avoid duplicates.

@xichen01
Contributor Author

//...

Case: if the DN's command queue was previously 0 (or less than 5) and the DN is down, this will cause the same command to be added at the SCM until the limit of 5 is reached (duplicating up to 5 overall).
//...

Thank you for your reply.

Yes, currently, if the DN is down, duplicate commands are put into the SCM command queue until the limit is reached.

However, if HDDS-8882 is merged, the SCM will not place duplicate commands into its command queue, because a command that has not been sent to the DN will be in TO_BE_SENT status, and TO_BE_SENT commands are skipped when the SCM generates a DeletedBlocksTransaction command.

So if HDDS-8882 is also merged, while the DN is down the SCM will only generate non-duplicated DeletedBlocksTransaction commands into the command queue.

Maybe we can check for a smaller queue size at the SCM, such as a limit of 2, to avoid duplicates.

If HDDS-8882 can be merged, maybe this doesn't need to be added.
What do you think?

@xichen01
Contributor Author

@sumitagrawl @adoroszlai, PTAL, Thanks

@sumitagrawl
Contributor


@xichen01
HDDS-8882 is merged now. Related to this, we need some improvement in the check for this PR.

My concern was:
getTotalDatanodeCommandCount() returns the DN queue count plus the command count at the SCM, e.g.:
DN count = 2
SCM count = 1
Total count = 3
Default max count = 5
-- in this case, a new set of commands will still be added at the SCM because the criterion is met, resulting in SCM count = 2 in the SCM command queue.

We can optimize the logic as:
getTotalDatanodeCommandCount() < 5 && getCommandQueueCount() == 0
<-- this ensures that no new command is added at the SCM until the previous one has been sent to the DN.
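The suggested condition can be sketched as follows (illustrative only; the accessor names mirror the NodeManager methods discussed above, but this standalone class and its signature are hypothetical):

```java
// Sketch of the stricter check proposed here: only send a new
// DeleteBlocksCommand when the combined DN + SCM count is under the limit
// AND nothing is still waiting in the SCM command queue.
public class ThrottleCheck {
    static final int MAX_PENDING = 5; // assumed default DN queue limit

    public static boolean shouldSend(int totalDatanodeCommandCount,
                                     int scmCommandQueueCount) {
        return totalDatanodeCommandCount < MAX_PENDING
            && scmCommandQueueCount == 0;
    }
}
```

With DN count = 2 and SCM count = 1 (total 3), this version refuses to enqueue another command, avoiding the duplicate described above.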

@xichen01
Contributor Author


@sumitagrawl
I understand, but isn't it too strict to require getCommandQueueCount() == 0? Because there may be a delay in updating the DN's queue information, this could leave the DN idle. I think it could be getTotalDatanodeCommandCount() < queueLen && getCommandQueueCount() < ceil(queueLen/2).
What do you think?

@sumitagrawl
Contributor


@xichen01 I got your point: you want one command to be queued in advance. We can then have it as
getTotalDatanodeCommandCount() < queueLen && getCommandQueueCount() < 2

The reason for allowing only one command in advance is that when a HB comes from the DN, the SCM sends all items in its command queue to the DN together. Keeping the SCM queue short avoids a large response message, which could otherwise happen if someone configures a higher queueLen.

This still solves the idle problem: with one command already in the queue, the next command can be prepared in parallel.
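The final form of the check agreed here can be sketched as (a hypothetical standalone class; in the PR the counts would come from the NodeManager accessors):

```java
// Final agreed throttle: total pending (DN + SCM) must stay under the DN's
// queue length, and at most one command may wait at the SCM in advance.
public class FinalThrottleCheck {
    public static boolean shouldSend(int totalCommandCount,
                                     int scmQueuedCount,
                                     int queueLen) {
        return totalCommandCount < queueLen && scmQueuedCount < 2;
    }
}
```

Compared with requiring scmQueuedCount == 0, the `< 2` bound keeps one command staged so a stale heartbeat does not leave the DN idle.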

@xichen01
Contributor Author

xichen01 commented Jan 4, 2024


Done.

Contributor

@sumitagrawl sumitagrawl left a comment


@xichen01 LGTM

@adoroszlai adoroszlai merged commit 43c9565 into apache:master Jan 6, 2024
34 checks passed
@adoroszlai
Contributor

Thanks @xichen01 for the patch, @sumitagrawl, @Xushaohong for the review.

xichen01 added a commit to xichen01/ozone that referenced this pull request Jul 17, 2024
xichen01 added a commit to xichen01/ozone that referenced this pull request Jul 17, 2024