Bug fixes staking v4 #3993
Conversation
…ve-unjailed-new-nodes-to-auction # Conflicts: # state/validatorsInfoMap_test.go
…ve-unjailed-new-nodes-to-auction # Conflicts: # factory/coreComponents.go # sharding/nodesCoordinator/indexHashedNodesCoordinator_test.go
isValidator := account.GetList() == string(common.EligibleList) || account.GetList() == string(common.WaitingList)
if !stakingData.Jailed {
	if stakingData.StakedNonce == nonce && !isValidator {
		log.Debug("node is staked, changed status to new", "blsKey", blsPubKey)
		account.SetListAndIndex(account.GetShardId(), string(common.NewList), uint32(stakingData.StakedNonce))
		log.Debug(fmt.Sprintf("node is staked, changed status to %s list", newNodesList), "blsKey", blsPubKey)
why fmt ?
Right, I missed it; could've used args... Changed it.
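The point of the exchange above: the logger already accepts variadic key/value args, so pre-formatting the message with fmt.Sprintf is unnecessary. A minimal sketch of the difference, where the debug helper is a hypothetical stand-in for the repo's structured logger, not its real API:

```go
package main

import "fmt"

// debug is a hypothetical stand-in for a structured logger: it takes a
// message plus variadic key/value pairs and renders them as "k=v".
func debug(msg string, kv ...interface{}) string {
	out := msg
	for i := 0; i+1 < len(kv); i += 2 {
		out += fmt.Sprintf(" %v=%v", kv[i], kv[i+1])
	}
	return out
}

func main() {
	newNodesList := "new"
	blsPubKey := "abcd"

	// Pre-formatting the message with fmt.Sprintf:
	withSprintf := debug(fmt.Sprintf("node is staked, changed status to %s list", newNodesList), "blsKey", blsPubKey)

	// The list name can instead be passed as one more key/value arg:
	withArgs := debug("node is staked, changed status", "list", newNodesList, "blsKey", blsPubKey)

	fmt.Println(withSprintf)
	fmt.Println(withArgs)
}
```

With the args form, the list name stays machine-readable in the log output instead of being baked into the message string.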
if ncf.flagStakingV4.IsSet() {
	return ncf.createRegistryWithAuction(buff)
registry, err := ncf.createRegistryWithAuction(buff)
if err == nil {
It does not look good; it was a little better with the flag check here directly.
Agree, but this change is actually a fix, since flag activation wouldn't work. I've discovered a chicken-and-egg problem with this flag in case of node restart (also raised this problem to @iulianpascalau and @AdoAdoAdo). After node restart, calling getLastBootstrapData was done with some dummy values and nodesCoordinatorRegistryFactory wouldn't have the flag correctly set, causing an unmarshal error.
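The chicken-and-egg problem described above: a flag set by an epoch notifier callback may not be set yet at restart time, whereas an epoch passed explicitly by the caller is always available. A hypothetical sketch of the shape of the fix (names and formats are simplified, not the repo's real types):

```go
package main

import "fmt"

// registryFactory sketches a component that must pick a serialization
// format based on the staking v4 activation epoch.
type registryFactory struct {
	stakingV4EnableEpoch uint32
}

// createRegistry decides from an explicit epoch parameter rather than
// from a notifier-driven flag, so it behaves correctly even when called
// during bootstrap, before any EpochConfirmed callback has fired.
func (f *registryFactory) createRegistry(epoch uint32) string {
	if epoch >= f.stakingV4EnableEpoch {
		return "registry-with-auction" // non-json format after staking v4
	}
	return "legacy-registry" // json format before staking v4
}

func main() {
	f := &registryFactory{stakingV4EnableEpoch: 10}
	fmt.Println(f.createRegistry(5))  // legacy-registry
	fmt.Println(f.createRegistry(12)) // registry-with-auction
}
```

The design choice: state derived from an explicit parameter is restart-safe, while state derived from a subscription is only valid after the subscription has been replayed.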
Codecov Report
@@ Coverage Diff @@
## feat/liquid-staking #3993 +/- ##
=======================================================
- Coverage 75.40% 75.38% -0.02%
=======================================================
Files 623 622 -1
Lines 83343 83275 -68
=======================================================
- Hits 62845 62779 -66
Misses 15773 15773
+ Partials 4725 4723 -2
Continue to review full report at Codecov.
epochStart/bootstrap/process_test.go
Outdated
@@ -90,8 +91,7 @@ func createMockEpochStartBootstrapArgs(
) ArgsEpochStartBootstrap {
	generalCfg := testscommon.GetGeneralConfig()
	nodesCoordinatorRegistryFactory, _ := nodesCoordinator.NewNodesCoordinatorRegistryFactory(
		&testscommon.MarshalizerMock{},
		&epochNotifier.EpochNotifierStub{},
		&marshal.GogoProtoMarshalizer{},
is there a reason for using the protobuf here?
Yes, because MarshalizerMock uses a json marshaller. NodesCoordinatorRegistryFactory uses 2 marshallers:
- a hard-coded json marshaller
- another marshaller which is expected to be != json (in our case, proto)

Decided to change it to use this mock instead: NodesCoordinatorRegistryFactoryMock
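The two-marshaller setup described above can be sketched as follows; the types and the "proto:" prefix are hypothetical simplifications (the real code uses GogoProtoMarshalizer as the second marshaller), here chosen only to keep the example self-contained:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// registryFactory sketches a factory holding two serialization paths:
// the json one for the legacy registry format, and a second, non-json
// one (faked here with a "proto:" prefix) for the staking-v4 format.
type registryFactory struct {
	stakingV4EnableEpoch uint32
}

type registry struct {
	Epoch uint32 `json:"epoch"`
}

// marshal picks the format from the epoch: json before staking v4,
// the second marshaller from then on.
func (f *registryFactory) marshal(r *registry, epoch uint32) ([]byte, error) {
	buff, err := json.Marshal(r)
	if err != nil {
		return nil, err
	}
	if epoch >= f.stakingV4EnableEpoch {
		// stand-in for the non-json (proto) marshaller
		return append([]byte("proto:"), buff...), nil
	}
	return buff, nil
}

func main() {
	f := &registryFactory{stakingV4EnableEpoch: 4}
	before, _ := f.marshal(&registry{Epoch: 2}, 2)
	after, _ := f.marshal(&registry{Epoch: 5}, 5)
	fmt.Println(string(before))
	fmt.Println(string(after))
}
```

This also shows why a plain json mock breaks the test: a unit test that stubs both marshallers with json can no longer distinguish the two formats.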
@@ -321,7 +323,12 @@ func (s *legacySystemSCProcessor) unStakeNodesWithNotEnoughFunds(
	continue
}

validatorInfo.SetList(string(common.LeavingList))
validatorLeaving := validatorInfo.ShallowClone()
validatorLeaving.SetList(string(common.LeavingList))
All the occurrences of this SetList cast the list to string; might change the function and the interface so the param is PeerType, not string.
Yes, but this is because we had to force the ValidatorInfo DTO into an interface. ValidatorInfo uses proto, which auto-generates structs with a List field of type string, so GetList() will return string by default upon proto file generation. If I change SetList, I would also have to change GetList, which is an auto-generated func (it returns string), and that is not possible.
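The constraint described above can be illustrated with simplified, hypothetical definitions (PeerType and the struct below are stand-ins for the real common.PeerType and the proto-generated ValidatorInfo):

```go
package main

import "fmt"

// PeerType mimics a string-based list type like common.PeerType.
type PeerType string

const (
	EligibleList PeerType = "eligible"
	LeavingList  PeerType = "leaving"
)

// validatorInfo mimics a proto-generated struct: List is plain string,
// because protobuf has no way to emit a custom Go string type here.
type validatorInfo struct {
	List string
}

// GetList mirrors the auto-generated getter, so it must return string.
func (vi *validatorInfo) GetList() string { return vi.List }

// SetList therefore also takes string to stay symmetric with GetList;
// callers cast from PeerType, which is the cast the reviewer noticed.
func (vi *validatorInfo) SetList(list string) { vi.List = list }

func main() {
	vi := &validatorInfo{}
	vi.SetList(string(LeavingList))
	fmt.Println(vi.GetList())
}
```

Changing SetList to take PeerType while GetList still returns string would leave the interface asymmetric, which is why the cast stays at the call sites.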
log.Debug("unStake at end of epoch for node", "blsKey", blsKey)
err = s.unStakeOneNode(blsKey, epoch)
if err != nil {
	return err
What happens if we have 5 nodes in nodesToUnStake and, for the third node, unStakeOneNode returns err? The first 2 processed nodes remain unstaked, and this can lead to inconsistencies.
Right, this would be pretty bad: execution would fail here and we wouldn't be able to also update the delegation contracts (by calling updateDelegationContracts). This code is from legacy, but also updated for staking v4, so yes, it would lead to inconsistencies.
It will fail only if we make a mistake in the code.
factory/blockProcessorCreator.go
Outdated
@@ -712,7 +713,12 @@ func (pcf *processComponentsFactory) newMetaBlockProcessor(
}

// TODO: in case of changing the minimum node price, make sure to update the staking data provider
stakingDataProvider, err := metachainEpochStart.NewStakingDataProvider(systemVM, pcf.systemSCConfig.StakingSystemSCConfig.GenesisNodePrice)
stakingDataProvider, err := metachainEpochStart.NewStakingDataProvider(
	systemVM,
could have created an args struct
Right, I was also thinking about it, but was not sure it was worth the refactor. However, I've created an args struct per your suggestion.
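The args-struct pattern suggested here is a common Go constructor idiom: once a constructor takes several positional parameters, collecting them into a struct keeps call sites readable and lets fields be added later without breaking every caller. A hypothetical sketch (field names and types are illustrative, not the real StakingDataProvider signature):

```go
package main

import "fmt"

// StakingDataProviderArgs groups the constructor's parameters; new
// fields can be appended without changing existing call sites.
type StakingDataProviderArgs struct {
	SystemVM     string // stand-in for the real VM execution handler
	MinNodePrice string
}

type stakingDataProvider struct {
	args StakingDataProviderArgs
}

// NewStakingDataProvider validates the args struct in one place.
func NewStakingDataProvider(args StakingDataProviderArgs) (*stakingDataProvider, error) {
	if args.SystemVM == "" {
		return nil, fmt.Errorf("nil system VM")
	}
	return &stakingDataProvider{args: args}, nil
}

func main() {
	sdp, err := NewStakingDataProvider(StakingDataProviderArgs{
		SystemVM:     "vm",
		MinNodePrice: "2500",
	})
	fmt.Println(sdp != nil, err)
}
```

Named fields at the call site also make it obvious which argument is which, something positional parameters lose as their count grows.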
@@ -58,6 +59,8 @@ type stakingToPeer struct {
	flagStaking atomic.Flag
	validatorToDelegationEnableEpoch uint32
	flagValidatorToDelegation atomic.Flag
	stakingV4InitEpoch uint32
	flagStakingV4 atomic.Flag
rename to flagStakingV4Init
No, actually flagStakingV4 is correct in this component. UpdateProtocol(body *block.Body, nonce uint64) is called whenever a new tx is processed in a block. If a new node is registered or unjailed by a tx in epoch == stakingV4Init, we still want to save it with list == AuctionList.
func (ihnc *indexHashedNodesCoordinator) NodesCoordinatorToRegistry() NodesCoordinatorRegistryHandler {
	if ihnc.flagStakingV4.IsSet() {
func (ihnc *indexHashedNodesCoordinator) NodesCoordinatorToRegistry(epoch uint32) NodesCoordinatorRegistryHandler {
	if epoch >= ihnc.stakingV4EnableEpoch {
could have used polymorphism
I honestly don't see any good solution to this problem using polymorphism; I actually think it would only increase the overhead. This is the purpose of this factory: to create registry data depending on the epoch.
@@ -109,6 +109,16 @@ func (vi *ValidatorInfo) SetTotalValidatorIgnoredSignatures(totalValidatorIgnore
	vi.TotalValidatorIgnoredSignatures = totalValidatorIgnoredSignatures
}

// ShallowClone returns a clone of the object
func (vi *ValidatorInfo) ShallowClone() ValidatorInfoHandler {
please rename the test TestValidatorInfo_IsInterfaceNile
I deleted that file. It had no benefit.
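The ShallowClone added in this diff follows a standard Go pattern: dereference the pointer to get a field-by-field value copy, then return a pointer to that copy. A minimal hypothetical version (the real ValidatorInfo has many more fields and returns an interface):

```go
package main

import "fmt"

// validatorInfo is a cut-down stand-in for the real ValidatorInfo DTO.
type validatorInfo struct {
	PublicKey []byte
	List      string
}

// ShallowClone copies the struct value so the caller can mutate the
// clone (e.g. set LeavingList) without touching the original entry.
// Note: slice fields like PublicKey still share their backing arrays,
// which is what makes this a shallow, not deep, clone.
func (vi *validatorInfo) ShallowClone() *validatorInfo {
	if vi == nil {
		return nil
	}
	copied := *vi
	return &copied
}

func main() {
	original := &validatorInfo{PublicKey: []byte("bls"), List: "eligible"}
	leaving := original.ShallowClone()
	leaving.List = "leaving"
	fmt.Println(original.List, leaving.List)
}
```

This is exactly the usage from the unStakeNodesWithNotEnoughFunds hunk above: clone first, then SetList on the clone.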
What's new
- StorageHandlerArgs struct placeholder for all arguments required to create either a shard or a meta storage handler, which now also contains NodesCoordinatorRegistryFactory
- StakingDataProvider now subscribes to EpochNotifier. This is used so that after staking v4, we expect newly staked nodes to be added in list = AuctionList, instead of list = NewList
- In stakingToPeer, all newly staked and unjailed nodes are moved to the auction list starting with staking v4 init

Bugfixes
- nodeCoordinator.NodesCoordinatorToRegistry func now takes as input param epoch, so that it knows which marshaller to use depending on the epoch
- In saveNodesCoordinatorRegistry from epochStart/bootStrap, nodes config is saved using NodesCoordinatorRegistryFactory, instead of a hard-coded json marshaller
- Split unStakeNonEligibleNodesWithNotEnoughFunds in separate functions: fillStakingDataForNonEligible + unStakeNodesWithNotEnoughFunds / unStakeNodesWithNotEnoughFundsWithStakingV4, depending on epoch
- Fixed legacySystemSC after node restart. After restart, when EpochConfirmed is called with epoch = startInEpoch, this would cause maxNumNodesConfig to be = maxNumNodesConfig in startInEpoch, instead of the latest config; the fix is one line of code, see L1366 + also added unit tests
- Changed saveState in indexHashedNodesCoordinatorRegistry.go to use the epoch to save the nodes config registry instead of the flag. This is because we cannot rely on the flag after node restarts.
- Fixed shardValidatorsInfoMap -> GetValidator func where the getter used to return the pointer to the internal validator member instead of a copy.
- staking.go and legacySystemSC.go / systemSC.go
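The GetValidator fix in the list above targets a classic Go pitfall: returning a stored pointer from a map getter lets callers mutate the map's internal state. A hypothetical sketch of the fixed shape (simplified types, not the real shardValidatorsInfoMap):

```go
package main

import "fmt"

type validatorInfo struct {
	List string
}

// validatorsMap stores validators keyed by BLS public key.
type validatorsMap struct {
	validators map[string]*validatorInfo
}

// GetValidator returns a copy of the stored entry, so callers cannot
// mutate the map's internal state through the returned pointer.
func (m *validatorsMap) GetValidator(key string) *validatorInfo {
	v, ok := m.validators[key]
	if !ok {
		return nil
	}
	copied := *v
	return &copied
}

func main() {
	m := &validatorsMap{validators: map[string]*validatorInfo{
		"bls1": {List: "eligible"},
	}}
	got := m.GetValidator("bls1")
	got.List = "leaving" // mutates only the copy
	fmt.Println(m.validators["bls1"].List)
}
```

Mutations then have to go through an explicit setter, which keeps all state changes to the map auditable in one place.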