static resources removed from the configuration may never be deleted #20
Comments

Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

Rotten issues close after 30d of inactivity. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

@fejta-bot: Closing this issue.

/reopen

@ixdy: Reopened this issue.

Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/remove-lifecycle stale
/assign

I'm going to test this out by shrinking the k8s-infra-prow-build pool temporarily, specifically choosing some resources that are currently leased and not leased.

/close

@spiffxp: Closing this issue.

Thanks for confirming this has been fixed!
Originally filed as kubernetes/test-infra#17282
I mentioned this tangentially in kubernetes/test-infra#16047 (comment), but I want to pull it out into a separate issue so it can be highlighted more easily.
Boskos doesn't delete static resources that are removed from the configuration if they are in use, to ensure that jobs don't fail, and to ensure that such resources are properly cleaned up by the janitor.
Originally, this was a reasonable decision, since Boskos periodically synced its storage against the configuration, and most likely such resources would eventually be free and thus deleted from storage.
After kubernetes/test-infra#13990, Boskos only syncs its storage against the configuration when the configuration changes (or when Boskos restarts). As a result, it may take a long time for static resources to be deleted, if ever.
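For illustration, here is a minimal, self-contained Go sketch of the behavior described above. The types and function names (`resource`, `syncResources`) are hypothetical, not the actual Boskos code: resources missing from the config are deleted only when they are not currently leased, and since this sync only runs on config changes or restarts, a leased resource removed from the config can linger indefinitely.

```go
// Minimal sketch (illustrative names, not the real Boskos source) of the
// sync behavior: storage entries missing from the config are deleted only
// if they are not currently leased.
package main

import "fmt"

// resource is a simplified stand-in for a Boskos static resource.
type resource struct {
	Name  string
	Owner string // non-empty while the resource is leased
}

// syncResources removes storage entries that are no longer in the config,
// but deliberately skips leased ones so running jobs are not broken.
// Because this only runs on config changes (or restart), a skipped
// resource may never be revisited.
func syncResources(storage map[string]resource, configNames map[string]bool) {
	for name, res := range storage {
		if configNames[name] {
			continue // still configured, keep it
		}
		if res.Owner != "" {
			fmt.Printf("skipping %s: still leased by %s\n", name, res.Owner)
			continue // removed from config but in use: not deleted here
		}
		delete(storage, name)
	}
}

func main() {
	storage := map[string]resource{
		"node-pool-1": {Name: "node-pool-1", Owner: "pull-kubernetes-e2e"},
		"node-pool-2": {Name: "node-pool-2"},
	}
	// Both resources have been removed from the configuration.
	syncResources(storage, map[string]bool{})
	fmt.Println(len(storage), "resource(s) left in storage") // node-pool-1 remains
}
```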
There was a similar issue for DRLCs that I addressed in kubernetes/test-infra#16021, effectively by putting the DRLCs into lame-duck mode.
There isn't a clear way to indicate that static resources are in lame-duck mode, though.
Possible ways to address this bug, in increasing order of complexity:
a. Add a field into the `UserData` for static resources. (Currently `UserData` is not used for static resources.)
b. Set an `ExpirationDate` on static resources. (Currently `ExpirationDate` is not used for static resources; a sketch of this option follows below.)
c. Add a new field on the `ResourceStatus` indicating resources are in lame-duck mode.

Workaround until this bug is fixed: admins with access to the cluster where Boskos is running can just delete the resources manually using `kubectl`.
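To make option (b) concrete, here is a minimal Go sketch using illustrative types rather than the real Boskos API (`resource`, `markRemoved`, and `reapExpired` are hypothetical names): static resources that disappear from the config while leased get an expiration timestamp, and a later pass deletes them once they are released.

```go
// Sketch of option (b): mark removed-but-leased static resources with an
// expiration time so a later pass can delete them once released.
// Types and field names are illustrative, not the actual Boskos API.
package main

import (
	"fmt"
	"time"
)

type resource struct {
	Name       string
	Owner      string     // non-empty while leased
	Expiration *time.Time // assumption: unused today for static resources
}

// markRemoved flags resources that are gone from the config but still
// leased, instead of silently leaving them in storage forever.
func markRemoved(storage map[string]*resource, configNames map[string]bool, now time.Time) {
	for _, res := range storage {
		if configNames[res.Name] || res.Owner == "" {
			continue
		}
		if res.Expiration == nil {
			t := now
			res.Expiration = &t // lame-duck: delete once released
		}
	}
}

// reapExpired deletes lame-duck resources once they are no longer leased.
func reapExpired(storage map[string]*resource) {
	for name, res := range storage {
		if res.Expiration != nil && res.Owner == "" {
			delete(storage, name)
		}
	}
}

func main() {
	storage := map[string]*resource{
		"node-pool-1": {Name: "node-pool-1", Owner: "pull-kubernetes-e2e"},
	}
	markRemoved(storage, map[string]bool{}, time.Now()) // removed from config while leased
	storage["node-pool-1"].Owner = ""                   // lease released later
	reapExpired(storage)
	fmt.Println(len(storage), "resource(s) left in storage") // 0
}
```

The same marking approach could equally be stored in `UserData` (option a) or a new `ResourceStatus` field (option c); the key point is recording the lame-duck state somewhere persistent so a later cleanup pass can act on it without waiting for another config change.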