Filter auction nodes list #3822

Merged Mar 9, 2022 (26 commits; changes shown are from the first 18 commits)
a0cdfc5
FEAT: Add first ugly version
mariusmihaic Feb 18, 2022
200f17d
Merge branch 'EN-11661-staking-v4-init' into EN-11662-filter-new-nodes
mariusmihaic Feb 22, 2022
60e4e3a
FEAT: Add temporary test
mariusmihaic Feb 23, 2022
bd9d101
FEAT: Change ProcessSystemSmartContract func interface to accept rand
mariusmihaic Feb 23, 2022
a838a7f
FEAT: Sort by pubKey XOR rand if multiple nodes have same top up per …
mariusmihaic Feb 23, 2022
ef30472
FIX: Top up per node in tests
mariusmihaic Feb 23, 2022
caa682d
FEAT: Display auction list, refactor interface + tests
mariusmihaic Feb 24, 2022
40ff5a7
FIX: Refactor tests
mariusmihaic Feb 24, 2022
02160ad
FIX: Refactor code pt. 1
mariusmihaic Feb 24, 2022
5ae3d73
FIX: Refactor code pt. 2
mariusmihaic Feb 25, 2022
2a2dc29
FEAT: Add tests for error paths
mariusmihaic Feb 28, 2022
473896e
FIX: Small refactor
mariusmihaic Feb 28, 2022
e51f952
FEAT: Add flag in toml file
mariusmihaic Feb 28, 2022
638a23b
Merge branch 'EN-11661-staking-v4-init' into EN-11662-filter-new-nodes
mariusmihaic Feb 28, 2022
a8fef77
Merge remote-tracking branch 'origin/EN-11661-staking-v4-init' into E…
mariusmihaic Mar 1, 2022
27aa5f8
Merge branch 'EN-11661-staking-v4-init' into EN-11662-filter-new-nodes
mariusmihaic Mar 1, 2022
69bc7c5
FIX: Review findings
mariusmihaic Mar 1, 2022
53befc7
Merge branch 'feat/liquid-staking' into EN-11662-filter-new-nodes
mariusmihaic Mar 2, 2022
23675b0
FIX: Review findings
mariusmihaic Mar 3, 2022
9639aa5
FIX: Review findings pt. 2
mariusmihaic Mar 4, 2022
196ffdb
Merge branch 'feat/liquid-staking' into EN-11662-filter-new-nodes
mariusmihaic Mar 4, 2022
42b0528
FIX: One review finding
mariusmihaic Mar 4, 2022
3df6cfb
FIX: Integration test
mariusmihaic Mar 9, 2022
c3abbdb
FIX: Broken tests
mariusmihaic Mar 9, 2022
47c7712
FEAT: Move selected nodes from AuctionList to SelectedFromAuctionList
mariusmihaic Mar 9, 2022
20535f3
FIX: Review findings
mariusmihaic Mar 9, 2022
3 changes: 3 additions & 0 deletions cmd/node/config/enableEpochs.toml
@@ -191,6 +191,9 @@
# all nodes from staking queue are moved in the auction list
StakingV4InitEnableEpoch = 4

# StakingV4EnableEpoch represents the epoch when staking v4 is enabled. Should have a greater value than StakingV4InitEnableEpoch
StakingV4EnableEpoch = 5
Contributor:
so this should always be StakingV4InitEnableEpoch + 1? What happens if we wrongly configure this?

Contributor (author):
Theoretically, the code should not panic or anything. If EnableEpoch < InitEpoch, the auction list will be empty, so no sorting/selection will be made.
Logic-wise, if they are inverted it shouldn't cause any bugs for NOW. I'm not sure how the implementation on the nodes coordinator side will look, so I can't give an educated opinion; depending on how it is designed, it might cause some bugs.
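The ordering constraint discussed in this thread could be guarded at config-load time. A minimal sketch, assuming we want a hard failure on an inverted config; `validateStakingV4Epochs` and `errInvalidStakingV4Config` are hypothetical names, only the two struct fields come from the diff:

```go
package main

import (
	"errors"
	"fmt"
)

// EnableEpochs mirrors the two staking v4 fields from config/epochConfig.go.
type EnableEpochs struct {
	StakingV4InitEnableEpoch uint32
	StakingV4EnableEpoch     uint32
}

var errInvalidStakingV4Config = errors.New("StakingV4EnableEpoch must be greater than StakingV4InitEnableEpoch")

// validateStakingV4Epochs rejects configs where the activation order is inverted,
// which would otherwise silently produce an empty auction list.
func validateStakingV4Epochs(cfg EnableEpochs) error {
	if cfg.StakingV4EnableEpoch <= cfg.StakingV4InitEnableEpoch {
		return fmt.Errorf("%w: init=%d, enable=%d",
			errInvalidStakingV4Config, cfg.StakingV4InitEnableEpoch, cfg.StakingV4EnableEpoch)
	}
	return nil
}

func main() {
	fmt.Println(validateStakingV4Epochs(EnableEpochs{StakingV4InitEnableEpoch: 4, StakingV4EnableEpoch: 5}))        // <nil>
	fmt.Println(validateStakingV4Epochs(EnableEpochs{StakingV4InitEnableEpoch: 5, StakingV4EnableEpoch: 4}) != nil) // true
}
```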


# MaxNodesChangeEnableEpoch holds configuration for changing the maximum number of nodes and the enabling epoch
MaxNodesChangeEnableEpoch = [
{ EpochEnable = 0, MaxNumNodes = 36, NodesToShufflePerShard = 4 },
1 change: 1 addition & 0 deletions config/epochConfig.go
@@ -76,6 +76,7 @@ type EnableEpochs struct {
DoNotReturnOldBlockInBlockchainHookEnableEpoch uint32
StakeLimitsEnableEpoch uint32
StakingV4InitEnableEpoch uint32
StakingV4EnableEpoch uint32
}

// GasScheduleByEpochs represents a gas schedule toml entry that will be applied from the provided epoch
3 changes: 3 additions & 0 deletions epochStart/errors.go
@@ -334,3 +334,6 @@ var ErrNilScheduledDataSyncerFactory = errors.New("nil scheduled data syncer fac

// ErrCouldNotInitLiquidStakingSystemSC signals that liquid staking system sc init failed
var ErrCouldNotInitLiquidStakingSystemSC = errors.New("could not init liquid staking system sc")

// ErrSortAuctionList signals that an error occurred while trying to sort auction list
var ErrSortAuctionList = errors.New("error while trying to sort auction list")
3 changes: 2 additions & 1 deletion epochStart/interface.go
@@ -158,9 +158,10 @@ type StakingDataProvider interface {
GetTotalStakeEligibleNodes() *big.Int
GetTotalTopUpStakeEligibleNodes() *big.Int
GetNodeStakedTopUp(blsKey []byte) (*big.Int, error)
PrepareStakingDataForRewards(keys map[uint32][][]byte) error
PrepareStakingData(keys map[uint32][][]byte) error
FillValidatorInfo(blsKey []byte) error
ComputeUnQualifiedNodes(validatorInfos map[uint32][]*state.ValidatorInfo) ([][]byte, map[string][][]byte, error)
GetBlsKeyOwner(blsKey []byte) (string, error)
Clean()
IsInterfaceNil() bool
}
11 changes: 6 additions & 5 deletions epochStart/metachain/stakingDataProvider.go
@@ -91,7 +91,7 @@ func (sdp *stakingDataProvider) GetTotalTopUpStakeEligibleNodes() *big.Int {

// GetNodeStakedTopUp returns the staked top-up per node for the owner of the provided bls key, for the current epoch
func (sdp *stakingDataProvider) GetNodeStakedTopUp(blsKey []byte) (*big.Int, error) {
owner, err := sdp.getBlsKeyOwner(blsKey)
owner, err := sdp.GetBlsKeyOwner(blsKey)
if err != nil {
log.Debug("GetOwnerStakingStats", "key", hex.EncodeToString(blsKey), "error", err)
return nil, err
@@ -105,8 +105,8 @@ func (sdp *stakingDataProvider) GetNodeStakedTopUp(blsKey []byte) (*big.Int, err
return ownerInfo.topUpPerNode, nil
}

// PrepareStakingDataForRewards prepares the staking data for the given map of node keys per shard
func (sdp *stakingDataProvider) PrepareStakingDataForRewards(keys map[uint32][][]byte) error {
// PrepareStakingData prepares the staking data for the given map of node keys per shard
func (sdp *stakingDataProvider) PrepareStakingData(keys map[uint32][][]byte) error {
sdp.Clean()

for _, keysList := range keys {
@@ -163,7 +163,7 @@ func (sdp *stakingDataProvider) FillValidatorInfo(blsKey []byte) error {
}

func (sdp *stakingDataProvider) getAndFillOwnerStatsFromSC(blsKey []byte) (*ownerStats, error) {
owner, err := sdp.getBlsKeyOwner(blsKey)
owner, err := sdp.GetBlsKeyOwner(blsKey)
if err != nil {
log.Debug("error fill owner stats", "step", "get owner from bls", "key", hex.EncodeToString(blsKey), "error", err)
return nil, err
@@ -195,7 +195,8 @@ func (sdp *stakingDataProvider) loadDataForBlsKey(blsKey []byte) error {
return nil
}

func (sdp *stakingDataProvider) getBlsKeyOwner(blsKey []byte) (string, error) {
// GetBlsKeyOwner returns the owner address of the provided bls key
func (sdp *stakingDataProvider) GetBlsKeyOwner(blsKey []byte) (string, error) {
vmInput := &vmcommon.ContractCallInput{
VMInput: vmcommon.VMInput{
CallerAddr: vm.ValidatorSCAddress,
2 changes: 1 addition & 1 deletion epochStart/metachain/stakingDataProvider_test.go
@@ -366,7 +366,7 @@ func TestStakingDataProvider_PrepareStakingDataForRewards(t *testing.T) {

keys := make(map[uint32][][]byte)
keys[0] = append(keys[0], []byte("owner"))
err := sdp.PrepareStakingDataForRewards(keys)
err := sdp.PrepareStakingData(keys)
require.NoError(t, err)
}

168 changes: 166 additions & 2 deletions epochStart/metachain/systemSCs.go
@@ -13,7 +13,9 @@ import (
"github.com/ElrondNetwork/elrond-go-core/core/check"
"github.com/ElrondNetwork/elrond-go-core/data"
"github.com/ElrondNetwork/elrond-go-core/data/block"
"github.com/ElrondNetwork/elrond-go-core/display"
"github.com/ElrondNetwork/elrond-go-core/marshal"
logger "github.com/ElrondNetwork/elrond-go-logger"
"github.com/ElrondNetwork/elrond-go/common"
vInfo "github.com/ElrondNetwork/elrond-go/common/validatorInfo"
"github.com/ElrondNetwork/elrond-go/config"
@@ -73,6 +75,7 @@ type systemSCProcessor struct {
governanceEnableEpoch uint32
builtInOnMetaEnableEpoch uint32
stakingV4InitEnableEpoch uint32
stakingV4EnableEpoch uint32
maxNodesEnableConfig []config.MaxNodesChangeConfig
maxNodes uint32
flagSwitchJailedWaiting atomic.Flag
@@ -89,6 +92,7 @@ type systemSCProcessor struct {
flagBuiltInOnMetaEnabled atomic.Flag
flagInitStakingV4Enabled atomic.Flag
flagStakingQueueEnabled atomic.Flag
flagStakingV4Enabled atomic.Flag
esdtOwnerAddressBytes []byte
mapNumSwitchedPerShard map[uint32]uint32
mapNumSwitchablePerShard map[uint32]uint32
@@ -186,6 +190,7 @@ func NewSystemSCProcessor(args ArgsNewEpochStartSystemSCProcessing) (*systemSCPr
governanceEnableEpoch: args.EpochConfig.EnableEpochs.GovernanceEnableEpoch,
builtInOnMetaEnableEpoch: args.EpochConfig.EnableEpochs.BuiltInFunctionOnMetaEnableEpoch,
stakingV4InitEnableEpoch: args.EpochConfig.EnableEpochs.StakingV4InitEnableEpoch,
stakingV4EnableEpoch: args.EpochConfig.EnableEpochs.StakingV4EnableEpoch,
}

log.Debug("systemSC: enable epoch for switch jail waiting", "epoch", s.switchEnableEpoch)
@@ -197,7 +202,8 @@
log.Debug("systemSC: enable epoch for save jailed always", "epoch", s.saveJailedAlwaysEnableEpoch)
log.Debug("systemSC: enable epoch for governanceV2 init", "epoch", s.governanceEnableEpoch)
log.Debug("systemSC: enable epoch for create NFT on meta", "epoch", s.builtInOnMetaEnableEpoch)
log.Debug("systemSC: enable epoch for staking v4", "epoch", s.stakingV4InitEnableEpoch)
log.Debug("systemSC: enable epoch for initializing staking v4", "epoch", s.stakingV4InitEnableEpoch)
log.Debug("systemSC: enable epoch for staking v4", "epoch", s.stakingV4EnableEpoch)

s.maxNodesEnableConfig = make([]config.MaxNodesChangeConfig, len(args.MaxNodesEnableConfig))
copy(s.maxNodesEnableConfig, args.MaxNodesEnableConfig)
@@ -214,6 +220,7 @@ func (s *systemSCProcessor) ProcessSystemSmartContract(
validatorInfos map[uint32][]*state.ValidatorInfo,
nonce uint64,
epoch uint32,
randomness []byte,
) error {
if s.flagHystNodesEnabled.IsSet() {
err := s.updateSystemSCConfigMinNodes()
Contributor:
we need a sort of refactor here - like adding a separate function with the old / inactive flags - all those things which were once activated on the mainnet. That way we make it easier to read this function. like flagHystNodes, flagSetOwner, flagChangeMaxNodes, flagCorrectLastUnJailed, flagDelegationEnabled, flagCorrectNumNodesToStake

Contributor:
Those are called only once - so you could add flagESDTEnabled there. Move to a new function the activations which are not yet enabled, like flagGovernance, flagBuiltInOnMetachain, flagStakingV4Init

Contributor (author):
Decided to split them in 2 functions with pretty "trivial" names:

  • checkOldFlags (which contains: flagHystNodes, flagSetOwner,...)
  • checkNewFlags(which contains flagGovernance, flagStakingV4Init,...)

@@ -327,9 +334,149 @@
}
}

if s.flagStakingV4Enabled.IsSet() {
allNodesKeys := s.getAllNodeKeys(validatorInfos)

err := s.stakingDataProvider.PrepareStakingData(allNodesKeys)
if err != nil {
return err
}

err = s.selectNodesFromAuctionList(validatorInfos, randomness)
if err != nil {
return err
}
}

return nil
}

func (s *systemSCProcessor) selectNodesFromAuctionList(validatorInfoMap map[uint32][]*state.ValidatorInfo, randomness []byte) error {
auctionList, numOfValidators := getAuctionListAndNumOfValidators(validatorInfoMap)
err := s.sortAuctionList(auctionList, randomness)
if err != nil {
return err
}

auctionListSize := uint32(len(auctionList))
numOfAvailableNodeSlots := core.MinUint32(auctionListSize, s.maxNodes-numOfValidators)
Contributor:
make a check on numValidators < s.maxNodes -> return -> do not do anything - it means there are not enough nodes.

Contributor:
still add the sanity check.

Contributor (author):
Added sanity check as an early exit point

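The early exit agreed on in this thread can be sketched as follows; the function name `availableSlots` and its exact placement inside `selectNodesFromAuctionList` are assumptions, while the slot arithmetic mirrors the diff's `core.MinUint32(auctionListSize, s.maxNodes-numOfValidators)`:

```go
package main

import "fmt"

// minUint32 mirrors core.MinUint32 from elrond-go-core.
func minUint32(a, b uint32) uint32 {
	if a < b {
		return a
	}
	return b
}

// availableSlots returns how many auction nodes can be promoted, with the
// sanity check discussed above: if the network already holds maxNodes or more
// validators, there is nothing to select and the caller should return early.
func availableSlots(auctionListSize, numOfValidators, maxNodes uint32) (uint32, bool) {
	if numOfValidators >= maxNodes {
		return 0, false // early exit: no free slots, and avoids uint32 underflow below
	}
	return minUint32(auctionListSize, maxNodes-numOfValidators), true
}

func main() {
	fmt.Println(availableSlots(10, 30, 36)) // 6 true
	fmt.Println(availableSlots(10, 36, 36)) // 0 false
}
```

Without the early exit, `maxNodes-numOfValidators` would wrap around on unsigned subtraction whenever validators exceed the cap, which is the subtle failure the sanity check prevents.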
s.displayAuctionList(auctionList, numOfAvailableNodeSlots)

for i := uint32(0); i < numOfAvailableNodeSlots; i++ {
auctionList[i].List = string(common.NewList)
Contributor:
I think we need to somewhat rewrite the map[uint32][]*state.ValidatorInfo - like use an interface and set new values there. It is ok for now - but it is super easy to make a mistake - modifying a value under a wrong pointer - hard to catch. Add a task for this rework.

Contributor (author):
Agree, the current implementation is pretty prone to errors; however I'm not sure what this interface might look like -> seems a bit of overhead; will add a TODO

Contributor:
add Jira task as well.

Contributor (author):
Will add task in Jira as well

}

return nil
}

func getAuctionListAndNumOfValidators(validatorInfoMap map[uint32][]*state.ValidatorInfo) ([]*state.ValidatorInfo, uint32) {
auctionList := make([]*state.ValidatorInfo, 0)
numOfValidators := uint32(0)

for _, validatorsInShard := range validatorInfoMap {
for _, validator := range validatorsInShard {
if validator.List == string(common.AuctionList) {
auctionList = append(auctionList, validator)
continue
}
if isValidator(validator) {
numOfValidators++
}
}
}

return auctionList, numOfValidators
}

func (s *systemSCProcessor) sortAuctionList(auctionList []*state.ValidatorInfo, randomness []byte) error {
validatorTopUpMap, err := s.getValidatorTopUpMap(auctionList)
if err != nil {
return fmt.Errorf("%w: %v", epochStart.ErrSortAuctionList, err)
}

sort.SliceStable(auctionList, func(i, j int) bool {
pubKey1 := auctionList[i].PublicKey
pubKey2 := auctionList[j].PublicKey

nodeTopUpPubKey1 := validatorTopUpMap[string(pubKey1)]
nodeTopUpPubKey2 := validatorTopUpMap[string(pubKey2)]

if nodeTopUpPubKey1.Cmp(nodeTopUpPubKey2) == 0 {
return compareByXORWithRandomness(pubKey1, pubKey2, randomness)
}

return nodeTopUpPubKey1.Cmp(nodeTopUpPubKey2) == 1
Contributor:
0 instead of == 1

Contributor (author):
Why 0?

We want to compare which node has the highest top up. Cmp documentation says:

// Cmp compares x and y and returns:
//
//   -1 if x <  y
//    0 if x == y
//   +1 if x >  y

Contributor:
use bigger than 0. like ">0"

Contributor (author):
Changed it to >0

})

return nil
}

func (s *systemSCProcessor) getValidatorTopUpMap(validators []*state.ValidatorInfo) (map[string]*big.Int, error) {
ret := make(map[string]*big.Int, len(validators))

for _, validator := range validators {
pubKey := validator.PublicKey
topUp, err := s.stakingDataProvider.GetNodeStakedTopUp(pubKey)
Contributor:
discussion is needed here in order to see if we want the following: one entity might register 10 nodes with 3000eGLD, meaning 500 topup per node. but if the topUp limit is at 2500 - that entity might have 5 nodes which enter and 5 which will not. However the first auction system was designed like that - selecting the most possible for an account. So add a task for this - let's discuss and it should be another task if we decide that.

Contributor (author):
Will leave this topic open and create a new Jira task

if err != nil {
return nil, fmt.Errorf("%w when trying to get top up per node for %s", err, hex.EncodeToString(pubKey))
}

ret[string(pubKey)] = topUp
}

return ret, nil
}

func compareByXORWithRandomness(pubKey1, pubKey2, randomness []byte) bool {
minLen := core.MinInt(len(pubKey1), len(randomness))

key1Xor := make([]byte, minLen)
key2Xor := make([]byte, minLen)

for idx := 0; idx < minLen; idx++ {
key1Xor[idx] = pubKey1[idx] ^ randomness[idx]
key2Xor[idx] = pubKey2[idx] ^ randomness[idx]
}

return bytes.Compare(key1Xor, key2Xor) == 1
}

func (s *systemSCProcessor) displayAuctionList(auctionList []*state.ValidatorInfo, numOfSelectedNodes uint32) {
if log.GetLevel() > logger.LogDebug {
return
}

tableHeader := []string{"Owner", "Registered key", "TopUp per node"}
lines := make([]*display.LineData, 0, len(auctionList))
horizontalLine := false
for idx, validator := range auctionList {
pubKey := validator.GetPublicKey()

owner, err := s.stakingDataProvider.GetBlsKeyOwner(pubKey)
log.LogIfError(err)

topUp, err := s.stakingDataProvider.GetNodeStakedTopUp(pubKey)
log.LogIfError(err)

horizontalLine = uint32(idx) == numOfSelectedNodes-1
line := display.NewLineData(horizontalLine, []string{
hex.EncodeToString([]byte(owner)),
hex.EncodeToString(pubKey),
topUp.String(),
})
lines = append(lines, line)
}

table, err := display.CreateTableString(tableHeader, lines)
if err != nil {
log.Error("could not create table", "error", err)
return
}

message := fmt.Sprintf("Auction list\n%s", table)
log.Debug(message)
}

// ToggleUnStakeUnBond will pause/unPause the unStake/unBond functions on the validator system sc
func (s *systemSCProcessor) ToggleUnStakeUnBond(value bool) error {
if !s.flagStakingV2Enabled.IsSet() {
@@ -568,7 +715,7 @@ func (s *systemSCProcessor) prepareStakingDataForRewards(eligibleNodesKeys map[u
log.Debug("systemSCProcessor.prepareStakingDataForRewards time measurements", sw.GetMeasurements()...)
}()

return s.stakingDataProvider.PrepareStakingDataForRewards(eligibleNodesKeys)
return s.stakingDataProvider.PrepareStakingData(eligibleNodesKeys)
}

func (s *systemSCProcessor) getEligibleNodesKeyMapOfType(
@@ -587,6 +734,20 @@
return eligibleNodesKeys
}

func (s *systemSCProcessor) getAllNodeKeys(
validatorsInfo map[uint32][]*state.ValidatorInfo,
) map[uint32][][]byte {
nodeKeys := make(map[uint32][][]byte)
for shardID, validatorsInfoSlice := range validatorsInfo {
nodeKeys[shardID] = make([][]byte, 0, s.nodesConfigProvider.ConsensusGroupSize(shardID))
for _, validatorInfo := range validatorsInfoSlice {
nodeKeys[shardID] = append(nodeKeys[shardID], validatorInfo.PublicKey)
}
}

return nodeKeys
}
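The helper above is a straightforward flatten of the per-shard validator map into per-shard key lists. A runnable sketch; `ValidatorInfo` is reduced to the one field the helper reads, and the capacity hint uses the slice length instead of the diff's `ConsensusGroupSize(shardID)` (a capacity hint only, so behavior is unchanged):

```go
package main

import "fmt"

// ValidatorInfo is a minimal stand-in for state.ValidatorInfo.
type ValidatorInfo struct {
	PublicKey []byte
}

// getAllNodeKeys flattens a per-shard validator map into per-shard key lists,
// mirroring the helper added in systemSCs.go.
func getAllNodeKeys(validatorsInfo map[uint32][]*ValidatorInfo) map[uint32][][]byte {
	nodeKeys := make(map[uint32][][]byte)
	for shardID, validators := range validatorsInfo {
		nodeKeys[shardID] = make([][]byte, 0, len(validators))
		for _, v := range validators {
			nodeKeys[shardID] = append(nodeKeys[shardID], v.PublicKey)
		}
	}
	return nodeKeys
}

func main() {
	in := map[uint32][]*ValidatorInfo{
		0: {{PublicKey: []byte("k0")}, {PublicKey: []byte("k1")}},
		1: {{PublicKey: []byte("k2")}},
	}
	out := getAllNodeKeys(in)
	fmt.Println(len(out[0]), len(out[1])) // 2 1
}
```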

func getRewardsMiniBlockForMeta(miniBlocks block.MiniBlockSlice) *block.MiniBlock {
for _, miniBlock := range miniBlocks {
if miniBlock.Type != block.RewardsBlock {
@@ -1605,4 +1766,7 @@ func (s *systemSCProcessor) EpochConfirmed(epoch uint32, _ uint64) {

s.flagStakingQueueEnabled.SetValue(epoch < s.stakingV4InitEnableEpoch)
log.Debug("systemProcessor: staking queue on meta", "enabled", s.flagStakingQueueEnabled.IsSet())

s.flagStakingV4Enabled.SetValue(epoch >= s.stakingV4EnableEpoch)
log.Debug("systemProcessor: staking v4", "enabled", s.flagStakingV4Enabled.IsSet())
}