FIX - Enforced downtime state calculation after retention load #1990
There is a race condition when the retention data is dumped to the retention backend:

The downtime depth is calculated by incrementing or decrementing the `scheduled_downtime_depth` attribute in the `Downtime` class. If the `update_retention_file()` thread runs while a downtime is being processed, the value stored in the retention backend may not be up to date, because it is read during the `enter()` or `exit()` execution:

```
dt.exit()
...
STOP: update_retention_file() runs -> dumps an improper value
...
dt.ref.scheduled_downtime_depth -= 1
```
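To make the failure mode concrete, here is a minimal, self-contained sketch of the race. The `Host`, `Downtime`, and `update_retention_file` names mirror the description above but are simplified stand-ins, not the actual Alignak code:

```python
import threading
import time

class Host:
    """Simplified monitored object carrying the downtime depth."""
    def __init__(self):
        self.scheduled_downtime_depth = 0

class Downtime:
    """Simplified downtime: enter() increments the depth, exit() decrements it."""
    def __init__(self, ref):
        self.ref = ref

    def enter(self):
        self.ref.scheduled_downtime_depth += 1

    def exit(self):
        # A retention dump scheduled between the start of exit() and the
        # decrement below reads a stale depth.
        time.sleep(0.1)  # widen the race window so the demo is deterministic
        self.ref.scheduled_downtime_depth -= 1

def update_retention_file(host, snapshot):
    """Simplified retention dump: copies the current (possibly stale) depth."""
    snapshot['scheduled_downtime_depth'] = host.scheduled_downtime_depth

host = Host()
dt = Downtime(host)
dt.enter()                       # depth == 1 while the downtime is active

snapshot = {}
exiter = threading.Thread(target=dt.exit)
exiter.start()
time.sleep(0.05)                 # let exit() begin but not finish
update_retention_file(host, snapshot)  # the dump lands inside the exit() window
exiter.join()

print(host.scheduled_downtime_depth)         # 0 -> the live, correct value
print(snapshot['scheduled_downtime_depth'])  # 1 -> stale value persisted to the backend
```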
The consequence of this race is that an object state can become inconsistent when the retention data is reloaded: `scheduled_downtime_depth` remains `> 0` because of the stale value stored in the backend.

This PR enforces the downtime state evaluation when an object state is restored from the retention backend.