
POC: Distribute to waiting from auction based on leaving nodes #6114

Merged

Conversation


@mariusmihaic mariusmihaic commented Apr 11, 2024

Reasoning behind the pull request

  • In case active nodes (waiting/eligible) leave in an epoch: as long as the number of leaving nodes per shard is smaller than both the number of active nodes in that shard and the number of nodes to shuffle per shard, those nodes will not be shuffled out; they can simply leave. If more nodes per shard are leaving than the number of nodes to shuffle per shard, the remaining ones will be forced to stay (see the sketch below)
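A minimal sketch of this per-shard rule as a standalone helper; the function and its names are hypothetical, not the PR's actual code:

```go
package main

import "fmt"

// splitLeavingNodes sketches the per-shard rule described above: at most
// numToShufflePerShard leaving nodes (and never more than the shard's active
// nodes) may actually leave; every additional leaving node is forced to stay.
// Hypothetical helper, not the PR's actual implementation.
func splitLeavingNodes(numLeaving, numActive, numToShufflePerShard int) (canLeave, forcedToStay int) {
	maxAllowedToLeave := numToShufflePerShard
	if numActive < maxAllowedToLeave {
		maxAllowedToLeave = numActive
	}
	if numLeaving <= maxAllowedToLeave {
		return numLeaving, 0
	}
	return maxAllowedToLeave, numLeaving - maxAllowedToLeave
}

func main() {
	// 3 nodes leave a shard with 8 active nodes and 2 nodes to shuffle out:
	canLeave, forcedToStay := splitLeavingNodes(3, 8, 2)
	fmt.Println(canLeave, forcedToStay) // Output: 2 1
}
```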

Proposed changes

  • Fix the auction list selection algorithm to take leaving active nodes and nodes forced to stay into account when computing the number of available slots for selecting qualified nodes (sketched below)
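A sketch of the fixed slot computation under that change; the names (maxNumNodesFromConfig, numRemainingActiveNodes, numForcedToStay) are illustrative assumptions, not the PR's actual identifiers:

```go
package auction

// availableSlots sketches the corrected computation: the number of auction
// slots equals the max nodes from config minus the nodes that still occupy
// the network (remaining active nodes plus nodes forced to stay).
func availableSlots(maxNumNodesFromConfig, numRemainingActiveNodes, numForcedToStay uint32) uint32 {
	occupied := numRemainingActiveNodes + numForcedToStay
	if occupied >= maxNumNodesFromConfig {
		return 0
	}
	return maxNumNodesFromConfig - occupied
}
```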

Testing procedure

  • One easy starting point would be to do something similar to this test: TestChainSimulator_UnStakeOneActiveNodeAndCheckAPIAuctionList

Scenario 1:

  • Have, every epoch, an auction list with enough nodes (say 8 qualified, 2 unqualified)
  • Unstake 1 or more eligible/waiting nodes (no more than all the nodes in a shard and no more than all the eligible ones) and call the auction list API; we should still see 8 qualified nodes. Next epoch, check that exactly 8 nodes were qualified. After some epochs, the leaving nodes will leave and be replaced by the auction nodes (a check sketch follows this list)
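A hedged sketch of the Scenario 1 qualified-count check; auctionNode and requireQualifiedCount are hypothetical simplifications of the auction list API response, not the chain simulator's real types:

```go
package scenario1

import (
	"testing"

	"github.com/stretchr/testify/require"
)

// auctionNode is a hypothetical, simplified view of one auction list entry.
type auctionNode struct {
	qualified bool
}

// requireQualifiedCount sketches the Scenario 1 assertion: after unstaking
// eligible/waiting nodes, the auction list must still report exactly
// `expected` qualified nodes (8 in the scenario above).
func requireQualifiedCount(t *testing.T, list []auctionNode, expected int) {
	count := 0
	for _, node := range list {
		if node.qualified {
			count++
		}
	}
	require.Equal(t, expected, count)
}
```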

Scenario 2:

  • Have, every epoch, an auction list with MORE nodes (say 8 qualified, 20 unqualified)
  • Assume the number of nodes to shuffle per shard = 2, with 8 eligible nodes + 4 waiting nodes
  • Unstake >= 2 nodes from one or more shards. In the auction list API we should then see numQualified = maxNumNodesFromConfig - (numRemainingActiveNodes + numForcedToStay), where a forced node is every unstaked node beyond the 2 allowed to leave per shard (see the worked example below)
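A worked numeric example of the Scenario 2 formula; every constant here (including maxNumNodesFromConfig = 20) is made up for illustration:

```go
package main

import "fmt"

func main() {
	// Illustrative numbers only; maxNumNodesFromConfig is an assumed value,
	// not taken from the PR or any real config.
	const maxNumNodesFromConfig = 20
	const numToShufflePerShard = 2
	numActive := 12 // 8 eligible + 4 waiting in the shard

	numUnstaked := 3 // unstake 3 active nodes from this shard

	// At most numToShufflePerShard of them can actually leave;
	// every unstaked node beyond that is forced to stay.
	canLeave := numUnstaked
	if canLeave > numToShufflePerShard {
		canLeave = numToShufflePerShard
	}
	forcedToStay := numUnstaked - canLeave // 1

	numRemainingActive := numActive - numUnstaked // 9

	// Expected qualified count reported by the auction list API:
	numQualified := maxNumNodesFromConfig - (numRemainingActive + forcedToStay)
	fmt.Println(numQualified) // 20 - (9 + 1) = 10
}
```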

At all times during these tests, we should have in the system eligible + waiting nodes <= maxNumNodes from the config.
Please verify this invariant at every step (a helper is sketched below).
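A minimal sketch of that invariant as a reusable test helper, assuming the node counts are already available; the helper name is hypothetical:

```go
package invariantcheck

import (
	"testing"

	"github.com/stretchr/testify/require"
)

// requireNodesInvariant asserts the invariant described above after every
// step: eligible + waiting must never exceed maxNumNodes from the config.
func requireNodesInvariant(t *testing.T, numEligible, numWaiting, maxNumNodes uint32) {
	require.LessOrEqual(t, numEligible+numWaiting, maxNumNodes,
		"eligible+waiting exceeded maxNumNodes from config")
}
```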

@mariusmihaic mariusmihaic self-assigned this Apr 11, 2024
@mariusmihaic mariusmihaic marked this pull request as ready for review April 11, 2024 13:34
@raduchis raduchis self-requested a review April 12, 2024 08:25
@@ -88,6 +88,14 @@ func (sdps *StakingDataProviderStub) GetNumOfValidatorsInCurrentEpoch() uint32 {
return 0
}

// GetCurrentEpochValidatorStats -
func (sdps *StakingDataProviderStub) GetCurrentEpochValidatorStats() epochStart.ValidatorStatsInEpoch {
Contributor:

add comment.

Contributor Author:

Added comment.

}

// If no previous list is set, means that staking v4 is not activated or node is leaving right before activation
// and this node will be considered as eligible by the nodes coordinator with legacy bug.
Contributor:

with old code.

Contributor Author:

Refactored.

// If no previous list is set, means that staking v4 is not activated or node is leaving right before activation
// and this node will be considered as eligible by the nodes coordinator with legacy bug.
// Otherwise, it will have it set, and we should check its previous list in the current epoch
if len(validatorPreviousList) == 0 || validatorPreviousList == common.EligibleList || validatorPreviousList == common.WaitingList {
Contributor:

return directly. no need for if.

Contributor Author:

Right, good suggestion.
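A sketch of how the suggested early return might look (a guess at the final shape, not the exact merged code):

```go
// No branch needed: return the condition directly.
return len(validatorPreviousList) == 0 ||
	validatorPreviousList == common.EligibleList ||
	validatorPreviousList == common.WaitingList
```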

sasurobert previously approved these changes Apr 14, 2024
Base automatically changed from MX-15351-backwards-compat-issues to rc/v1.7.0 April 16, 2024 07:18
@mariusmihaic mariusmihaic dismissed sasurobert’s stale review April 16, 2024 07:18

The base branch was changed.

@mariusmihaic mariusmihaic merged commit b0f7a5c into rc/v1.7.0 Apr 23, 2024
8 checks passed
@mariusmihaic mariusmihaic deleted the MX-15360-waiting-list-distribution-leaving-nodes branch April 23, 2024 10:50