epochStart: fix pending miniBlocks for scheduled #3717

Merged: 37 commits merged into feat/scheduled-sc-execution from fix-processed-mbs-scheduled on Jan 28, 2022

Conversation

AdoAdoAdo (Contributor)

No description provided.

return pendingMbHashes
}

func (ssh *shardStorageHandler) updateProcessedMiniBlocksForScheduled(
Contributor:

unit tests for all the new methods?

Contributor:

👍

Contributor (Author):

done

processedMbHashes := make([][]byte, 0)
miniBlocksDstMe := getAllMiniBlocksWithDst(metaBlock, destShardID)
for hash, mb := range miniBlocksDstMe {
if _, hashExists := pendingMBsMap[hash]; hashExists {
Contributor:

multiple lines

Contributor (Author):

done

return pendingMbHashes
}

func (ssh *shardStorageHandler) updateProcessedMiniBlocksForScheduled(
Contributor:

👍

noPendingMbs := make(map[string]struct{})
remainingPendingMiniBlocks := make([]bootstrapStorage.PendingMiniBlocksInfo, 0)
for index, metaBlockHash := range referencedMetaBlockHashes {
if index == 0 {
Contributor:

index == 0 means that this is the oldest block and that's why there are no pending miniblocks, everything is processed?

Contributor (Author):

The first referenced metablock in the notarized shard block header will be pending when trying to re-process the notarized shard block header.
Here we want to remove only the pending miniblocks of the follow-up referenced MetaHeaders from the initial pending list, as these will be naturally added when processing the notarized shard block.

Contributor:

ok
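
A minimal sketch of the intent described above, continuing the loop from the excerpt; the getMiniBlockHashesForMetaBlock helper is hypothetical, not the actual elrond-go API:

for index, metaBlockHash := range referencedMetaBlockHashes {
	if index == 0 {
		// the first referenced metablock stays pending: its miniblocks are
		// re-added naturally when the notarized shard block is re-processed
		continue
	}
	// hypothetical helper resolving the cross-shard miniblock hashes of a metablock
	for _, mbHash := range getMiniBlockHashesForMetaBlock(metaBlockHash) {
		noPendingMbs[string(mbHash)] = struct{}{}
	}
}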

if metaBlock.ShardInfo[i].ShardID == destId {
continue
}

for _, val := range metaBlock.ShardInfo[i].ShardMiniBlockHeaders {
isCrossShardDestMe := val.ReceiverShardID == destId && val.SenderShardID != destId
Contributor:

should the isCrossShardDestMe := (val.ReceiverShardID == destId || val.ReceiverShardID == core.AllShardId) && val.SenderShardID != destId from line 616 be used also here?

AdoAdoAdo (Contributor, Author), Jan 28, 2022:

The name of the function here is a bit misleading.

The intent of this method is to return only the miniBlocks with destId as receiver shard, and only the ones that are pending for the destId shard due to their inclusion in this metablock header.

The miniBlocks with core.AllShardId as destination become pending for a shard only when included in MetaBlock.MiniBlockHeaders, not when processed by other shards.

Renamed to getNewPendingMiniBlocksForDst.

Contributor:

ok
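
A hedged sketch of the renamed helper as described above; the signature and field accesses are assumed from the excerpts in this PR, not copied from elrond-go:

func getNewPendingMiniBlocksForDst(metaBlock *block.MetaBlock, destId uint32) map[string]block.MiniBlockHeader {
	pending := make(map[string]block.MiniBlockHeader)
	for i := range metaBlock.ShardInfo {
		if metaBlock.ShardInfo[i].ShardID == destId {
			// miniblocks created by destId itself are not pending for it
			continue
		}
		for _, val := range metaBlock.ShardInfo[i].ShardMiniBlockHeaders {
			// deliberately no core.AllShardId check here: broadcast miniblocks
			// become pending via MetaBlock.MiniBlockHeaders, not via ShardInfo
			isCrossShardDestMe := val.ReceiverShardID == destId && val.SenderShardID != destId
			if isCrossShardDestMe {
				pending[string(val.Hash)] = val
			}
		}
	}
	return pending
}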

gc.mutGasProvided.Unlock()
}

// SetGasProvidedAsScheduled sets gas provided as scheduled for a given hash
func (gc *gasComputation) SetGasProvidedAsScheduled(gasProvided uint64, hash []byte) {
gc.mutGasProvided.Lock()
gc.gasProvidedAsScheduled[string(hash)] = gasProvided
gc.txHashesWithGasProvidedAsScheduledSinceLastReset = append(gc.txHashesWithGasProvidedAsScheduledSinceLastReset, hash)
Contributor:

no need for the temporary debug log also for this line?

//TODO: Remove it or set log level to Trace
log.Debug("gasComputation.SetGasProvidedAsScheduled", "tx hash", hash, "gas gasProvided", gasProvided)

Contributor:

Actually the problem happens only with the gas refunded and gas penalized

}

txs.gasHandler.RestoreGasSinceLastReset()
Contributor:

the logic before, in scheduled mode, only removed the gasProvidedAsScheduled. Now there will be a reset for all gas computations. Is that ok?

Contributor:

The mechanism has changed: the Reset method, which is called before processing starts, resets all four lists of tx hashes for gasProvided, gasProvidedAsScheduled, gasRefunded and gasPenalized. Then, when processing starts, the Set method for gasProvided, gasProvidedAsScheduled, gasRefunded and gasPenalized is called according to the current mode (scheduled or not). So RestoreGasSinceLastReset will have nothing to restore for gasProvidedAsScheduled when scheduled mode is false and, vice versa, nothing to restore for gasProvided, gasRefunded and gasPenalized when scheduled mode is true. When scheduled mode is true, the transaction is not really executed, only the basic checks are done, so the only thing that can be computed is gasProvidedAsScheduled. Afterwards, the scheduled txs are executed and the resulting SCRs, accumulated fees, developer fees, etc. are added in the next block.

Contributor:

ok!
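
A simplified sketch of the reset/restore mechanism described above; the method names follow the excerpts in this PR, but the bodies are illustrative, not the actual implementation:

// Reset is called before processing starts and clears the per-reset tx hash lists
func (gc *gasComputation) Reset() {
	gc.mutGasProvided.Lock()
	gc.txHashesWithGasProvidedSinceLastReset = make([][]byte, 0)
	gc.txHashesWithGasProvidedAsScheduledSinceLastReset = make([][]byte, 0)
	gc.mutGasProvided.Unlock()
	// ...the gasRefunded and gasPenalized lists are reset the same way
}

// RestoreGasSinceLastReset removes only what was recorded since the last Reset,
// so a restore in scheduled mode cannot touch entries recorded in normal mode
func (gc *gasComputation) RestoreGasSinceLastReset() {
	gc.RemoveGasProvided(gc.txHashesWithGasProvidedSinceLastReset)
	gc.RemoveGasProvidedAsScheduled(gc.txHashesWithGasProvidedAsScheduledSinceLastReset)
	// ...same for RemoveGasRefunded and RemoveGasPenalized
}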

@@ -672,7 +672,7 @@
 # Consensus type which will be used (the current implementation can manage "bn" and "bls")
 # When consensus type is "bls" the multisig hasher type should be "blake2b"
 [Consensus]
-ScheduledExecutionMilliseconds = 1500
+ScheduledExecutionMilliseconds = 6000 # round time duration in milliseconds
Contributor:

This is confusing. Is it supposed to mean "maximum duration to execute scheduled transactions", or a "general timestamp" at which you start counting?

Contributor:

Removed, as we don't need it anymore after the refactor of the scheduled txs execution timing mechanism.

mutGasRefunded sync.RWMutex
gasPenalized map[string]uint64
mutGasPenalized sync.RWMutex
gasProvided map[string]uint64
Contributor:

Refactoring idea for next PRs, in order to avoid having a lot of fields along with their protective mutexes (which, improperly used, can cause deadlocks):
create a ConcurrentSafeBytesSlice or a ConcurrentSafeStringUint64Map that can internally hold the mutex and expose functions that safely lock/unlock it.

Contributor:

Yep, should be done this way
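
A minimal sketch of the suggested wrapper (a proposal from this review, not an existing elrond-go type), assuming only the standard library:

import "sync"

// ConcurrentSafeStringUint64Map hides the mutex so callers cannot misuse it
type ConcurrentSafeStringUint64Map struct {
	mut    sync.RWMutex
	values map[string]uint64
}

func NewConcurrentSafeStringUint64Map() *ConcurrentSafeStringUint64Map {
	return &ConcurrentSafeStringUint64Map{values: make(map[string]uint64)}
}

func (m *ConcurrentSafeStringUint64Map) Set(key string, value uint64) {
	m.mut.Lock()
	m.values[key] = value
	m.mut.Unlock()
}

func (m *ConcurrentSafeStringUint64Map) Get(key string) (uint64, bool) {
	m.mut.RLock()
	defer m.mut.RUnlock()
	value, ok := m.values[key]
	return value, ok
}

This would replace pairs like gasPenalized + mutGasPenalized with a single field and no exposed locking.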

gc.mutGasRefunded.Unlock()

//TODO: Remove it or set log level to Trace
Contributor:

resolve TODO?

Contributor:

We are hunting a rare situation, so I will keep this log on Trace, which anyway should not appear too often, just to catch this error which seems to come from sc execution
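
An illustration of the resolution above, with the log kept at Trace level; the message and keys are illustrative, not the actual log line:

// kept on purpose at Trace level, to catch the rare error suspected to come from sc execution
log.Trace("gasComputation.RemoveGasRefunded", "tx hash", hash)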

gc.mutGasPenalized.Unlock()

//TODO: Remove it or set log level to Trace
Contributor:

resolve TODO?

Contributor:

We are hunting a rare situation, so I will keep this log on Trace, which anyway should not appear too often, just to catch this error which seems to come from sc execution

scr.gasHandler.RemoveGasRefunded(processedTxHashes)
scr.gasHandler.RemoveGasPenalized(processedTxHashes)
for _, hash := range processedTxHashes {
//TODO: Remove it or set log level to Trace
Contributor:

resolve TODO?

Contributor:

We are hunting a rare situation, so I will keep this log on Trace, which anyway should not appear too often, just to catch this error which seems to come from sc execution

txs.gasHandler.RemoveGasRefunded(processedTxHashes)
txs.gasHandler.RemoveGasPenalized(processedTxHashes)
for _, hash := range processedTxHashes {
//TODO: Remove it or set log level to Trace
Contributor:

resolve TODO?

Contributor:

We are hunting a rare situation, so I will keep this log on Trace, which anyway should not appear too often, just to catch this error which seems to come from sc execution

SebastianMarian and others added 7 commits January 28, 2022 12:38
* Fixed a situation where the proposer would not include SCRs from the previous block which had scheduled txs execution, because it was running out of time even before starting the creation of the block
…k/elrond-go into fix-processed-mbs-scheduled
iulianpascalau previously approved these changes Jan 28, 2022
log.Debug("elapsed time to create mbs to me",
"time [s]", elapsedTime,
)
log.Debug("elapsed time to create mbs to me", "time [s]", elapsedTime)
Contributor:

maybe remove [s]? the time will contain the measuring unit by default

Contributor (Author):

done

Contributor:

Done

log.Debug("elapsed time to create mbs from me",
"time [s]", elapsedTime,
)
log.Debug("elapsed time to create mbs from me", "time [s]", elapsedTime)
Contributor:

remove [s]

Contributor (Author):

done

Contributor:

Done

@@ -828,7 +828,7 @@ func getCrossShardMiniblockKeys(miniBlockHdrs []data.MiniBlockHeaderHandler, sel
 keys := make([][]byte, 0)
 for _, miniBlockHdr := range miniBlockHdrs {
 	receiverShard := miniBlockHdr.GetReceiverShardID()
-	receiverIsSelfShard := receiverShard == selfShardID || receiverShard == core.AllShardId && processingShard == core.MetachainShardId
+	receiverIsSelfShard := (receiverShard == selfShardID || receiverShard == core.AllShardId) && processingShard == core.MetachainShardId
Contributor:

👍
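
The parentheses matter because in Go && binds tighter than ||, so the old line was parsed as selfShard || (allShard && metachain). A tiny, self-contained illustration of the precedence (values are made up):

package main

import "fmt"

func main() {
	a, b, c := true, true, false
	fmt.Println(a || b && c)   // true: parsed as a || (b && c)
	fmt.Println((a || b) && c) // false: the grouping intended by the fix
}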

iulianpascalau previously approved these changes Jan 28, 2022
raduchis previously approved these changes Jan 28, 2022
LucianMincu (Contributor):

System tests passed.

LucianMincu merged commit 8c07de7 into feat/scheduled-sc-execution on Jan 28, 2022
LucianMincu deleted the fix-processed-mbs-scheduled branch on January 28, 2022, 19:51