Force delete a service-catalog resource #666
Comments
+1. If you think about the developer loop, most developers will be deploying a failing APB 90% of the time. We need to figure out a setup that accommodates this fact and provides useful feedback when it happens.
@rthallisey I might take a go at implementing this, if that's OK?
I believe this falls under orphan mitigation: https://github.com/openservicebrokerapi/servicebroker/blob/master/spec.md#orphans. So being able to send a success status to the catalog is important.
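For context, orphan mitigation in the spec means the *platform* (here, the service catalog) issues a deprovision request itself when it believes a resource may have been left behind after a failed or ambiguous provision. A minimal sketch of that request, assuming a placeholder broker host and IDs (none of these values come from this issue):

```sh
# Hypothetical orphan-mitigation call the catalog would make; the host,
# credentials, instance ID, service_id, and plan_id are all placeholders.
curl -i -X DELETE \
  -H 'X-Broker-API-Version: 2.13' \
  -u "$BROKER_USER:$BROKER_PASS" \
  "https://broker.example.com/v2/service_instances/$INSTANCE_ID?accepts_incomplete=true&service_id=$SERVICE_ID&plan_id=$PLAN_ID"
```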
@philipgough sure! Any issue that isn't assigned or doesn't have an open PR is fair game. Just assign yourself so folks are aware of who's working on it.
@rthallisey, @jmrodri I've read the API with regard to orphan mitigation, so I understand we can clean up these failed objects within the broker. What I am unsure of is how this will be implemented from the client using the current async flow. From what I can see, we currently just return a 202 when the job is successfully kicked off, so according to the spec this should be seen as a failure, even if the provision results in success. So how is the client to handle this orphan mitigation? Any suggestions?
@philipgough let's not worry about orphan mitigation just yet. I think we need to figure out exactly what the broker is doing when a deprovision fails. From there, we can figure out how we'll signal the catalog to delete the serviceinstance and how we'll handle deleting APB resources.
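To make the async flow above concrete: after the broker returns a 202, the catalog polls the `last_operation` endpoint, and the `state` field in that response is what actually signals success or failure. A sketch, with placeholder host, credentials, and IDs:

```sh
# Hypothetical poll of the last_operation endpoint following a 202;
# the catalog keeps polling until state is "succeeded" or "failed".
curl -s \
  -H 'X-Broker-API-Version: 2.13' \
  -u "$BROKER_USER:$BROKER_PASS" \
  "https://broker.example.com/v2/service_instances/$INSTANCE_ID/last_operation?operation=$OPERATION_TOKEN"
# Example response shape (per the OSB spec; values are illustrative):
# {"state": "failed", "description": "deprovision job failed"}
```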
I did a little initial investigation. See if this helps @philipgough. Here's my patch:

```diff
--- a/pkg/handler/handler.go
+++ b/pkg/handler/handler.go
@@ -434,15 +434,19 @@ func (h handler) deprovision(w http.ResponseWriter, r *http.Request, params map[
 	resp, err := h.broker.Deprovision(serviceInstance, planID, nsDeleted, async)
+	log.Errorf("Logging Deprovision errors %s", err)
 	if err != nil {
 		switch err {
 		case broker.ErrorNotFound:
+			log.Error("ErrorNotFound")
 			writeResponse(w, http.StatusGone, broker.DeprovisionResponse{})
 			return
 		case broker.ErrorBindingExists:
+			log.Error("ErrorBindingExists")
 			writeResponse(w, http.StatusBadRequest, broker.DeprovisionResponse{})
 			return
 		case broker.ErrorDeprovisionInProgress:
+			log.Error("ErrorDeprovisionInProgress")
 			writeResponse(w, http.StatusAccepted, broker.DeprovisionResponse{})
 			return
 		default:
@@ -450,8 +454,10 @@ func (h handler) deprovision(w http.ResponseWriter, r *http.Request, params map[
 			return
 		}
 	} else if async {
+		log.Error("Using async write response 202")
 		writeDefaultResponse(w, http.StatusAccepted, resp, err)
 	} else {
+		log.Error("ELSE write response 201")
 		writeDefaultResponse(w, http.StatusCreated, resp, err)
 	}
 }
```

When I ran the broker with this additional logging, I got the following output:
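For anyone reproducing this, another quick way to see what the catalog recorded for a failed instance is to dump its status conditions. A sketch, with placeholder namespace and resource names:

```sh
# Hypothetical inspection of a stuck ServiceInstance; check
# status.conditions for the failure reason reported by the catalog.
oc get serviceinstance -n my-project -o yaml
oc describe serviceinstance my-instance -n my-project
```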
Hey @rthallisey, yeah, I've made a first pass at this already. I'll do an initial test and push it up as soon as I get some free time, and we can go from there.
Awesome @philipgough! I look forward to reviewing it.
Temporary workaround to clean up a terminating namespace with a failed deprovisioned serviceinstance:

```sh
for i in $(oc get projects | grep Terminating | awk '{print $1}'); do
  echo $i
  oc get serviceinstance -n $i -o yaml | sed "/kubernetes-incubator/d" | oc apply -f -
done
```
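The `sed` in that loop works by stripping the service-catalog finalizer (whose name contains `kubernetes-incubator`) before re-applying the object, which lets the namespace finish terminating. An equivalent, more targeted sketch for a single stuck instance, with placeholder names:

```sh
# Hypothetical targeted variant: clear the finalizers on one stuck
# ServiceInstance so it can actually be deleted.
oc patch serviceinstance my-instance -n my-project \
  --type merge -p '{"metadata":{"finalizers":null}}'
```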
Feature:
If you fail to deprovision a resource using the service-catalog and broker, the resource will stay in the service-catalog and you can't delete it. This forces the user to rename the resource in the template, or redeploy the service-catalog and broker, in order to re-run an APB.
Solving this in all scenarios isn't going to be in the broker's scope, but for a developer creating an APB, if there's a failure during deprovision then I think it's worth deleting the service instance object.
Here's a way we can handle the resource deletion in dev mode:
This behaviour can be triggered by a broker config option, `force_delete`:

```sh
kubectl delete service-instance --force
```
Related issue: kubernetes-retired/service-catalog#1551
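As a sketch of how the proposed option might be wired up (the ConfigMap name, namespace, and key placement below are assumptions, not a settled design):

```sh
# Hypothetical: enable the proposed force_delete flag in the broker's
# config; ConfigMap name and namespace are guesses for illustration.
oc edit configmap broker-config -n ansible-service-broker
# ...then under the broker section of the config:
#   broker:
#     force_delete: true   # proposed option from this issue
```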