Description
This issue was initially found by @amisevsk here.
If the common PVC cleanup job fails and all workspaces are then deleted, the workspace whose PVC cleanup job failed gets stuck in an error state and is never removed.
How To Reproduce
First, ensure the common PVC cleanup job will fail by modifying the PVC cleanup job spec in cleanup.go:
  Args: []string{
      "-c",
-     fmt.Sprintf(cleanupCommandFmt, path.Join(pvcClaimMountPath, workspaceId)),
+     "exit 1",
  },
Then do the following (a sketch of the resulting, always-failing container spec is shown after the steps):
1. oc apply -f samples/theia-next.yaml
2. yq '.metadata.name="theia-next-2"' samples/theia-next.yaml | kubectl apply -f -
   - Wait for the workspaces to start, or at least to get their finalizers
3. oc delete dw theia-next
   - Wait for the deletion to hit the error
4. oc delete dw --all
   - The workspace which had the failed PVC cleanup job should be stuck in the error state, visible when doing kubectl get devworkspace -n $NAMESPACE
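For context, the modification above simply makes the cleanup Job's container exit non-zero, so the common PVC cleanup Job always fails. A minimal sketch of what the patched container amounts to, using the standard corev1 types; the container name, image, and shell command below are illustrative assumptions, not values taken from cleanup.go:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Illustrative values only; cleanup.go sets its own name, image, and
	// command for the common PVC cleanup Job.
	container := corev1.Container{
		Name:    "cleanup",
		Image:   "registry.access.redhat.com/ubi8/ubi-minimal",
		Command: []string{"/bin/sh"}, // assumption: the cleanup command is run through a shell
		Args: []string{
			"-c",
			"exit 1", // the patched args: the Job's pod now always fails
		},
	}
	fmt.Printf("cleanup container runs: %v %v\n", container.Command, container.Args)
}
```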
(Thank you to @amisevsk for the instructions on how to reproduce)
Expected behavior
Workspaces that end up in the error state because the PVC cleanup job failed should still be removed automatically.
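One possible direction, sketched here only as an illustration and not as the operator's actual fix: during workspace finalization, detect that the shared PVC cleanup Job has a Failed condition and delete it so a later reconcile can recreate it, rather than leaving every deleted DevWorkspace blocked behind its storage finalizer. The package, function names, and overall flow are hypothetical; only the Kubernetes and controller-runtime APIs used are real.

```go
package cleanup

import (
	"context"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// cleanupJobFailed reports whether the PVC cleanup Job has a Failed condition.
func cleanupJobFailed(job *batchv1.Job) bool {
	for _, cond := range job.Status.Conditions {
		if cond.Type == batchv1.JobFailed && cond.Status == corev1.ConditionTrue {
			return true
		}
	}
	return false
}

// retryFailedCleanup (hypothetical helper) deletes a failed cleanup Job so the
// next reconcile can recreate it, instead of leaving deleted workspaces stuck
// in the error state with their finalizers still set.
func retryFailedCleanup(ctx context.Context, c client.Client, job *batchv1.Job) error {
	if !cleanupJobFailed(job) {
		return nil
	}
	return c.Delete(ctx, job, client.PropagationPolicy(metav1.DeletePropagationBackground))
}
```

Whether the right behavior is to retry the job or to surface the failure while still allowing the DevWorkspace objects to be deleted is up to the maintainers; the point is only that a failed cleanup Job should not leave workspaces permanently stuck.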
Additional context
This bug arose from #846 and will only be apparent when that PR is merged.