
Deleting an app with the red [x] restarts the app instead of deleting it completely #6334

Closed
davsclaus opened this issue Sep 8, 2016 · 16 comments

@davsclaus
Member

I wonder if it's the new deployment config that makes k8s restart the app.

If you want to delete an app (and all its services/pods/RCs/deployments etc.), you can just click the red [x] to delete it from the app page.

But now it restarts the app instead.

(screenshot: app page, 2016-09-08 10:23 AM)

@davsclaus
Member Author

And if you scale the app to 0 and then try to delete it from the overview page, the app is still listed there.

@gashcrumb
Contributor

Yeah, looks like that 'x' tries to delete replicationcontrollers/item, and that call fails for me here because it's a replica set. @jstrachan do we label everything so that we could gather up all the objects that should be deleted easily enough client-side?

@jimmidyson
Contributor

And don't forget the deployment....

@gashcrumb
Contributor

It looks like adding a function to KubernetesModel to get all the objects using the project label would be the way to go; currently that view only specifically deals with pods, services and RCs.

@gashcrumb
Contributor

Most objects had an app label, but deployments only had a project label, so I wound up looking for both for now.
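
For reference, a rough sketch of what that client-side lookup amounts to with plain API tooling (the app name, namespace and the "app"/"project" label keys here are placeholders taken from this discussion, not verified against the actual manifests):

NAME=ruby-hello-world      # hypothetical app name
NAMESPACE=default          # hypothetical namespace

# Gather every object belonging to the app, trying the "app" label first
# and falling back to the "project" label (deployments only carried the latter).
for selector in "app=${NAME}" "project=${NAME}"; do
  kubectl get deployments,replicasets,replicationcontrollers,services,pods \
    -l "${selector}" -n "${NAMESPACE}" --no-headers
done | sort -u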

@jstrachan
Contributor

FWIW, deleting the Deployment/DeploymentConfig first often seems to remove all the other RCs/RSs/pods for you.

@jimmidyson
Contributor

I don't think dependent deletes are in until 1.4. Until then it's client-side logic to cascade deletes.
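
So until that lands, the caller has to walk the hierarchy itself, roughly top-down. A sketch only; the resource name and the "app" label are assumptions for illustration:

NAME=ruby-hello-world   # hypothetical app name
NAMESPACE=default

# Delete the controller first so nothing respawns what gets deleted next,
# then its RCs/RSs, then any orphaned pods.
oc delete deploymentconfig "${NAME}" -n "${NAMESPACE}"
kubectl delete replicationcontrollers,replicasets -l "app=${NAME}" -n "${NAMESPACE}"
kubectl delete pods -l "app=${NAME}" -n "${NAMESPACE}"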

@jstrachan
Contributor

ah ok thx. I guess oc delete dc foo is doing that under the covers

@jimmidyson
Contributor

Yep. We do the same in the Java client. Best is to scale down & wait rather than delete directly, to prevent orphans, I've found.

@jstrachan
Contributor

I guess the same still applies though: try deleting the Deployment/DeploymentConfig first; that should then do the heavy lifting of scaling down RC/RS etc., then tidy up the empty stuff at the end.

@jimmidyson
Contributor

Deleting the deployment via the API doesn't do any scaling IIRC. Let me double check, but like I said above, cascading deletes are client-side logic.

@jstrachan
Contributor

Sure - I just mean we should always scale down/delete in this order: D/DC, then RC/RS, then pods, to avoid fighting with the underlying controllers in kubernetes/openshift.

@jimmidyson
Contributor

If the app has a D/DC then you should scale that down, which will in turn scale down the RC/RS in normal deployment-controller fashion. Wait until the D's running pods == 0 and then delete the D.
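
A sketch of that flow with the CLI (the "app" label used to find the app's pods is an assumption; adjust it to whatever the manifests actually set):

NAME=ruby-hello-world   # hypothetical DC name
NAMESPACE=default

# 1. Scale the DeploymentConfig down to zero replicas.
oc scale dc "${NAME}" --replicas=0 -n "${NAMESPACE}"

# 2. Wait until no pods for the app remain.
while [ -n "$(kubectl get pods -l "app=${NAME}" -n "${NAMESPACE}" --no-headers 2>/dev/null)" ]; do
  sleep 2
done

# 3. Only now delete the DC (and tidy up the remaining objects).
oc delete dc "${NAME}" -n "${NAMESPACE}"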

@jimmidyson
Contributor

jimmidyson commented Sep 13, 2016

Yeah scaling the D/DC is the best approach IMO. An example scale request would be:

$ body=$(cat << EOF
{
    "kind": "Scale",
    "apiVersion": "extensions/v1beta1",
    "metadata": {
        "namespace": "default",
        "name": "ruby-hello-world"
    },
    "spec": {
        "replicas": 0
    }
}
EOF
)

$ curl -k -H "Content-Type: application/json" -X PUT -d "$body" https://<API_SERVER_ADDRESS>/oapi/v1/namespaces/<DC_NAMESPACE>/deploymentconfigs/<DC_NAME>/scale
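
Once that request succeeds and the scale status reports zero replicas, the DC itself can be removed. A sketch of the follow-up calls, reusing the same placeholders (the GET returns a Scale object whose status.replicas reflects the observed pod count):

$ curl -k https://<API_SERVER_ADDRESS>/oapi/v1/namespaces/<DC_NAMESPACE>/deploymentconfigs/<DC_NAME>/scale

$ curl -k -X DELETE https://<API_SERVER_ADDRESS>/oapi/v1/namespaces/<DC_NAMESPACE>/deploymentconfigs/<DC_NAME>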

@gashcrumb
Contributor

Ah, k, will work over that logic some more then...


@gashcrumb gashcrumb reopened this Sep 13, 2016
@gashcrumb
Contributor

Damn, put the wrong issue # in my commit message :-) I have it scaling down stuff in this commit -> hawtio/hawtio-kubernetes@5849c3a
