Fix/certora fixes #78
Conversation
We've introduced a rule that finds counterexamples for all functions that change balances. This rule will always fail by definition, so we're commenting it out to get CI green again.
Since we're implementing rules for `StakeManager` migrations, we need multiple instances inside the certora specs. This results in the prover trying to run rules on the other `StakeManager` instance as well, which isn't always desired, as it causes some rules to fail, even though they'd pass if they were executed only on the `currentContract`. This commit makes the filter condition for relevant rules stronger, such that the prover will not run them on the `newStakeManager` contract instance.
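The strengthened filter described above can be sketched in CVL roughly as follows. The rule name, the `relevantFunction` definition, and the `balancesConsistent` assertion are illustrative placeholders, not the exact spec:

```
// Sketch: restrict a parametric rule to the primary contract instance,
// so the prover does not also run it on the newStakeManager instance.
// `relevantFunction` and `balancesConsistent` are hypothetical names.
rule someBalanceRule(method f) filtered {
    f -> relevantFunction(f) && f.contract == currentContract
} {
    env e;
    calldataarg args;
    f(e, args);
    assert balancesConsistent();
}
```

Without the `f.contract == currentContract` conjunct, the prover would also instantiate `f` with methods of the linked `newStakeManager` instance, which is what caused the spurious failures.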
This is actually a bug that the certora prover found. The rule `epochStaysSameOnMigration` failed because a previous `StakeManager` could call `migrationInitialize` and change `currentEpoch` on a next `StakeManager`, even though the next `StakeManager` might be in migration itself (which means the `currentEpoch` is not allowed to change). This commit fixes this by ensuring `migrationInitialize()` will revert if the `StakeManager` already has an ongoing `migration`.
This rule is only used for debugging purposes and serves no function for production formal verification. Hence we're commenting it out.
This was failing due to `migrationInitialize()` allowing for resetting or decreasing a `StakeManager`s `currentEpoch`. In practice, however, this is not possible because a new manager can only be called from an old manager and the old manager can only migrate once. So if `migrationInitialize()` is called from an old manager, we can safely assume it's the first time this is called, meaning the new manager's `currentEpoch` must be `0` at this point in time.
There were changes in the contracts that caused this rule to fail. Namely, `migrateTo` shouldn't revert, so it has been removed from the rule, and `transferNonPending` has been added as it was missing.
```diff
-    f.selector == sig:migrateFrom(address,bool,StakeManager.Account).selector ||
-    f.selector == sig:migrateTo(bool).selector;
+    f.selector == sig:migrateTo(bool).selector ||
+    f.selector == sig:transferNonPending().selector;
```
Needed to remove `migrateFrom` here, because that one shouldn't revert when a migration is happening.
```
require currentContract.migration == 0;
require currentContract.oldManager == 0;
require e.msg.sender != 0;
```
This one here is necessary to account for cases that theoretically let old managers reset or decrease a new manager's `currentEpoch`. Having these requirements causes the prover to find no non-reverting cases for such functions.
```diff
@@ -32,7 +32,8 @@ definition blockedWhenMigrating(method f) returns bool = (
     f.selector == sig:unstake(uint256).selector ||
     f.selector == sig:lock(uint256).selector ||
     f.selector == sig:executeEpoch().selector ||
-    f.selector == sig:startMigration(address).selector
+    f.selector == sig:startMigration(address).selector ||
+    f.selector == sig:migrationInitialize(uint256,uint256,uint256,uint256).selector
```
This one was simply missing after breaking changes were introduced.
```diff
@@ -68,7 +69,7 @@ rule allowWhenMigrating(method f) filtered {

 rule rejectWhenNotMigrating(method f) filtered {
-    f -> blockedWhenNotMigrating(f)
+    f -> blockedWhenNotMigrating(f) && f.contract == currentContract
```
These changes were already reviewed in another PR, but it had been merged into a non-develop branch.
```
method f;

rule migrationLockedIn(method f) filtered {
    f -> !blockedWhenMigrating(f) && f.contract == currentContract
} {
```
This one was also already reviewed in another PR
…anager `simplification()` is used to have some rules make certain assumptions so that they can pass. We need an additional simplification stating that `oldManager == address(0)`. This means `oldManager` isn't set, so `migrationInitialize()` and similar functions have no non-reverting path.
A previous manager can only migrate once, because the migration address is locked in. A **new** manager is always aware of its previous manager. This means, when a migration happens and is initialized, we know for sure it's always the first time this is happening. We probably don't want a migration to take place if the new manager has already processed epochs, so we're adding a check that its `currentEpoch` must be `0`. This also ensures two of its invariants aren't violated: `epochsOnlyIncrease` and `highEpochsAreNull`.
```solidity
    revert StakeManager__PendingMigration();
}
if (currentEpoch > 0) {
    revert StakeManager__AlreadyProcessedEpochs();
```
This change, @3esmit, needs to be carefully reviewed by you.
There's an assumption here: when a `StakeManager` migrates to a new one, that new one should not have processed any epochs yet.
The reason I've introduced this change is so that its currently defined invariants don't fail. Namely:
- `epochsOnlyIncrease` (this one fails when `migrationInitialize()` is called with arbitrary values), and
- `highEpochsAreNull` (can also fail when `migrationInitialize()` is called with arbitrary values).
Obviously, this prevents a `StakeManager` from migrating to a new one if the new one has already started processing accounts itself.
I guess this can happen in practice, when we put out a second `StakeManager`, and before we initialize the migration, some account starts staking, for example.
At this point, I'm even wondering if it makes sense to introduce some `migrationFinalized` flag and use that instead. Something for us to say: "this stakemanager is meant to be migrated to, please don't stake on this one yet". We could use that flag instead of a `currentEpoch > 0` check.
Lemme know what you think.
This certainly should be further discussed.
I think it is valid to consider that, if the contract is constructed with migration, meaning it expects a coming migration, it should first wait for the migration to start before accepting the processing of epochs and accounts.
This refactors the spec to no longer rely on the `simplification()` but instead filter out the vacuous rules from the get-go. Using the `simplification()` previously was needed so that the prover would ignore cases that revert by design. This made some invariants vacuous. Having vacuous rules or invariants is still considered a failure, so to make the prover happy, we're using filtered invariants instead, which renders the `simplification` obsolete.
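A filtered invariant, as opposed to a spec-wide `simplification()`, might look roughly like this in CVL. The invariant body and the `ghostLastEpoch` ghost are illustrative assumptions, not the exact spec; only the `filtered` pattern itself is the point:

```
// Sketch: exclude functions that revert by design from the invariant's
// induction step, instead of assuming them away via simplification().
// `ghostLastEpoch` is a hypothetical ghost tracking the last seen epoch.
invariant epochsOnlyIncrease()
    currentContract.currentEpoch >= ghostLastEpoch
    filtered {
        f -> !blockedWhenNotMigrating(f) && f.contract == currentContract
    }
```

Because the excluded methods always revert in the filtered-out configurations, dropping them from the invariant check removes the vacuity without weakening what is actually verified.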
Description
This includes a bunch of changes and fixes to the certora rules so that they at least no longer have a failing rule.
Closes #77
Checklist
Ensure you completed all of the steps below before submitting your pull request:
- Ran `forge snapshot`?
- Ran `pnpm gas-report`?
- Ran `pnpm lint`?
- Ran `forge test`?
- Ran `pnpm verify`?