feat(orchestrator): return subnet assignment also on upgrade loop errors #7868
base: master
Conversation
Note: I would also argue for changing both of these `false` values to `true`. I.e., in case the registry version is unavailable locally, or the field is somehow empty or not deserializable, I would prefer not to accidentally remove the state, to keep the subnet's SSH keys a bit too long, and to try to rotate IDKG keys (in which case the registry should deny the request anyway, because we would have left the subnet), rather than the opposite.
As of today, I cannot see a way for the registry version to be unavailable locally, since it is always lower than the latest version we have. But if this function gets re-used somewhere else, I feel that returning `true` is more fail-safe than `false`. What do you guys think?
Note that changing this to `true` could also mean launching the replica even though we are unassigned. But again, I do not think it hurts much if a single node does so, since the other nodes would ignore it.
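A minimal sketch of the fail-safe default being argued for here, assuming a membership check of this rough shape (all names below are illustrative, not the actual orchestrator code):

```rust
// Illustrative only: if the data needed for the check cannot be read
// (registry version unavailable locally, field empty, not deserializable),
// default to treating the node as still assigned rather than tearing state down.
struct SubnetMembership {
    contains_this_node: bool,
}

fn treat_node_as_assigned(membership: Option<SubnetMembership>) -> bool {
    match membership {
        Some(m) => m.contains_this_node,
        // Fail-safe default: keep the state and the subnet's SSH keys a bit
        // too long; a spurious replica launch on a single unassigned node is
        // ignored by the rest of the subnet.
        None => true,
    }
}
```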
rs/orchestrator/src/upgrade.rs (outdated)

```diff
 #[must_use = "This may be a `Stop` variant, which should be handled"]
-pub(crate) enum OrchestratorControlFlow {
+pub(crate) enum UpgradeCheckResult {
```
This is called "UpgradeCheckResult", but it is not actually a Result type, which is confusing. The enum does have different Ok and Err variants, however. Could we just turn it into a "real" Result? For instance, could we have a new error type that wraps both an OrchestratorError and SubnetAssignment, or similar?
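One way this suggestion could look, as a sketch only (the error type's name and fields are assumptions, not code from this PR):

```rust
// Sketch: a "real" Result whose error carries both the original error and the
// subnet assignment that could still be determined despite the failure.
pub(crate) struct UpgradeError {
    pub error: OrchestratorError,
    pub assignment: SubnetAssignment,
}

pub(crate) type UpgradeResult<T> = Result<T, UpgradeError>;
```

The caller could then update the shared assignment in both the `Ok` and the `Err` arm before handling the error itself.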
rs/orchestrator/src/upgrade.rs (outdated)

```diff
     Stop,
+    /// There was an error while checking for an upgrade, but we still successfully determined that
+    /// the node is assigned to the given subnet.
+    ErrorAsAssigned(SubnetId, OrchestratorError),
```
Should there also be an ErrorAsLeaving variant?
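For illustration, such a variant could mirror `ErrorAsAssigned`; this is only a sketch of a partial enum, with the remaining variants elided:

```rust
#[must_use = "This may be a `Stop` variant, which should be handled"]
pub(crate) enum UpgradeCheckResult {
    Stop,
    /// There was an error while checking for an upgrade, but we still successfully
    /// determined that the node is assigned to the given subnet.
    ErrorAsAssigned(SubnetId, OrchestratorError),
    /// Hypothetical: there was an error while checking for an upgrade, but we still
    /// determined that the node is leaving the subnet.
    ErrorAsLeaving(SubnetId, OrchestratorError),
    // ... remaining variants elided ...
}
```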
rs/orchestrator/src/upgrade.rs (outdated)

```rust
) -> OrchestratorResult<OrchestratorControlFlow> {
    let registry_version = self.registry.get_latest_version();
```

```rust
    registry_version: RegistryVersion,
) -> OrchestratorResult<bool> {
```
Does this make a difference?
Are you talking about the return type being `bool`? Not much; I just did not want to return some `ErrorAsUnassigned` here, and instead kept a simple `Err` to be more generic (i.e. the function does not need to "know" about `UpgradeCheckResult`). This was reverted anyway.
rs/orchestrator/src/orchestrator.rs (outdated)

```diff
-Ok(Ok(control_flow)) => {
+Ok(upgrade_result) => {
+    // Update the subnet assignment based on the latest upgrade result.
+    *subnet_assignment.write().unwrap() = upgrade_result.as_subnet_assignment();
```
If returning the subnet assignment is too much of a hassle, especially since it needs to be updated in both the Ok and the Err case, maybe another idea could be to have the Upgrade struct own a copy of this RwLock and mutate it directly there?
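A rough sketch of that alternative, assuming hypothetical field and method names (not the actual code in this PR):

```rust
use std::sync::{Arc, RwLock};

// The upgrade task owns a handle to the shared assignment and records it
// itself, so the caller no longer threads the value back in both the Ok and
// the Err path.
struct Upgrade {
    subnet_assignment: Arc<RwLock<SubnetAssignment>>,
    // ... other fields elided ...
}

impl Upgrade {
    fn record_assignment(&self, assignment: SubnetAssignment) {
        // Hold the write lock only for this assignment, never across an await.
        *self.subnet_assignment.write().unwrap() = assignment;
    }
}
```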
rs/orchestrator/src/upgrade.rs (outdated)

```diff
-        }
-        (None, None) => match self.registry.get_subnet_id(latest_registry_version) {
+        let maybe_local_cup_proto = self.cup_provider.get_local_cup_proto();
+        let (subnet_id, maybe_local_cup) = 'block_subnet_id_local_cup: {
```
I think this block is a bit hard to understand. If this refactoring is needed, is there a way to avoid the named blocks and break statements? We could also consider extracting some functions if that makes it easier.
Indeed. I reverted this and kept most of what was already there, while improving it to return just the subnet ID out of the block, instead of also the CUP and CUP proto. LMK what you think. (I recommend reviewing with whitespace changes hidden.)
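If the extraction route were taken instead, a sketch could look like the following (the helper name and signature are assumptions; the real code needs more context such as the registry handle and the registry version):

```rust
// Hypothetical helper: decide the subnet ID in one place, so the caller does
// not need a labeled block with break statements.
fn determine_subnet_id(
    subnet_id_from_local_cup: Option<SubnetId>,
    subnet_id_from_registry: Option<SubnetId>,
) -> Option<SubnetId> {
    // Prefer the ID recoverable from the local CUP (e.g. via its NiDkgId),
    // falling back to the registry view otherwise.
    subnet_id_from_local_cup.or(subnet_id_from_registry)
}
```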
Commits:
perf: grab exclusive lock at the start of the function
Revert "perf: grab exclusive lock at the start of the function" (this reverts commit bbf44232d18db279c35a21ef033cb76a21d0bade)
perf: less locking
style: use ?
refactor: allow harmless race condition
Thanks for the review @eichhorl, I agree with all your remarks, that function has quite a high coupling so it’s tricky 😅. I’ll cover the ones that I haven’t answered directly here since they are interconnected.
> […]
and
> […]
I think these translate to keeping the current …
Which comes to:
> […]
I think this is the cleanest solution. Unfortunately, it is impossible to hold the lock during an …
The upgrade loop in the orchestrator is responsible both for executing upgrades and for determining the subnet ID of the node, which is used to provision SSH keys and rotate IDKG keys. However, there are multiple code flows in which the orchestrator determines the subnet ID but hits an error later in the loop, so the function returns an error and the caller does not apply the subnet ID. This prevents SSH keys from being provisioned even though the subnet ID had been correctly identified.

An example of such a code flow is when the local CUP is not deserializable but its NiDkgId is, which still allows the subnet ID to be correctly determined (i.e. we hit here). But since the CUP is not deserializable and currently has the highest height compared to a recovery or peers' CUP (imagine we are at the very start of a recovery, before applying SSH keys, so there is no recovery CUP yet), we return an error here, the subnet ID is not updated, and SSH keys are not provisioned. If it does not have the highest height (i.e. there is a recovery CUP), then we can use the latter, which explains why we can still recover.
Note: the existing system test `sr_app_no_upgrade_with_chain_keys_test` is testing that we can recover a subnet in exactly that case (the CUP is not deserializable but the NiDkgId is). As explained, nodes can see the recovery CUP, but we do not apply readonly keys even though we could. In a parallel PR, I distinguish the cases where the NiDkgId is corrupted or not. If it is, then there is indeed no way of provisioning SSH keys, but there is also no way of seeing the recovery CUP, so failover nodes must be used. If it is not, then we should be able to provision SSH keys. When the second case runs on the current implementation, it fails because we cannot provision SSH keys. When this branch is merged into it, the test succeeds, which is a positive sign for the added value of this change.

Another example is when we detect that we need to leave the subnet but removing the state fails (i.e. we hit here). Then we would return an error again and fail to remove the subnet's SSH keys.
This PR is not supposed to bring any functional change to the upgrade logic; instead, it moves the responsibility of setting the subnet assignment from the caller of `check_for_upgrade` to the latter directly.

PS: The PR also uses the same registry version for the entire loop, instead of determining the latest registry version multiple times (in the functions `prepare_upgrade_if_scheduled`, `check_for_upgrade_as_unassigned`, `should_node_become_unassigned`), in order to have a more consistent and predictable behaviour.
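As a rough, non-authoritative sketch of the intended shape (helper names such as `determine_subnet_id`, `record_assignment`, `execute_upgrade_steps` and the `SubnetAssignment::Assigned` variant are placeholders, not the actual code):

```rust
impl Upgrade {
    async fn check_for_upgrade(&self) -> OrchestratorResult<()> {
        // One registry version for the whole check, so prepare_upgrade_if_scheduled,
        // check_for_upgrade_as_unassigned and should_node_become_unassigned all see
        // the same registry view.
        let registry_version = self.registry.get_latest_version();

        let subnet_id = self.determine_subnet_id(registry_version)?;

        // Record the assignment as soon as it is known, so SSH keys can still be
        // provisioned even if a later step in this function fails.
        self.record_assignment(SubnetAssignment::Assigned(subnet_id));

        self.execute_upgrade_steps(subnet_id, registry_version).await
    }
}
```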