Openstack LBaaSv2 - Kubernetes fails to delete Load Balancer #48094

Closed
zioproto opened this Issue Jun 26, 2017 · 8 comments

zioproto (Contributor) commented Jun 26, 2017

/sig openstack
Is this a BUG REPORT or FEATURE REQUEST?: BUG

/kind bug

What happened:

I delete a service:

kubectl delete service my-nginx

Kubernetes is never able to delete the load balancer from OpenStack:

$ kubectl get events
LASTSEEN   FIRSTSEEN   COUNT     NAME       KIND      SUBOBJECT   TYPE      REASON                       SOURCE               MESSAGE
3m         3m          1         my-nginx   Service               Normal    CreatingLoadBalancer         service-controller   Creating load balancer
2m         2m          1         my-nginx   Service               Normal    CreatedLoadBalancer          service-controller   Created load balancer
2s         1m          5         my-nginx   Service               Normal    DeletingLoadBalancer         service-controller   Deleting load balancer
42s        1m          4         my-nginx   Service               Warning   DeletingLoadBalancerFailed   service-controller   Error deleting load balancer (will retry): Resource not found

The problem is that there are no health monitors in OpenStack, and Kubernetes refuses to go on and delete the pools, listeners, and load balancer.
On the Neutron server I see DELETE requests without a health-monitor UUID, because the list of health monitors is empty.

What you expected to happen:
The load balancer is deleted.

If the health monitor list is empty, Kubernetes should go on and delete the rest of the load balancer (see the sketch below).
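
A minimal Go sketch of that expected behavior, under stated assumptions: the names deleteMonitors and deleteMonitor are hypothetical stand-ins, not the actual cloud-provider code. The point is that when no monitor ID exists, the Neutron DELETE should be skipped entirely so the rest of the teardown can proceed.

```go
package main

import "fmt"

// deleteMonitors skips IDs that were never created instead of issuing
// a DELETE without a UUID (which Neutron answers with "Resource not found").
// deleteMonitor is a stand-in for the real Neutron API call.
func deleteMonitors(monitorIDs []string, deleteMonitor func(id string) error) error {
	for _, id := range monitorIDs {
		if id == "" {
			continue // create-monitor=false: nothing was created, nothing to delete
		}
		if err := deleteMonitor(id); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	// With an empty (nil) monitor list the loop is a no-op, so deletion
	// of pools, listeners, and the load balancer itself can continue.
	err := deleteMonitors(nil, func(id string) error {
		return fmt.Errorf("DELETE /lbaas/healthmonitors/%s: Resource not found", id)
	})
	fmt.Println(err) // prints <nil>
}
```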

How to reproduce it (as minimally and precisely as possible):

kubectl run my-nginx --image=nginx --replicas=2 --port=80
kubectl expose deployment my-nginx --port=80 --type=LoadBalancer
kubectl delete service my-nginx
kubectl get events

Anything else we need to know?:

Environment:

  • Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.2", GitCommit:"477efc3cbe6a7effca06bd1452fa356e2201e1ee", GitTreeState:"clean", BuildDate:"2017-04-19T20:33:11Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.2", GitCommit:"477efc3cbe6a7effca06bd1452fa356e2201e1ee", GitTreeState:"clean", BuildDate:"2017-04-19T20:22:08Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}

k8s-merge-robot (Contributor) commented Jun 26, 2017

@zioproto There are no sig labels on this issue. Please add a sig label by:
(1) mentioning a sig: @kubernetes/sig-<team-name>-misc
e.g., @kubernetes/sig-api-machinery-misc for API Machinery
(2) specifying the label manually: /sig <label>
e.g., /sig scalability for sig/scalability

Note: method (1) will trigger a notification to the team. You can find the team list here and label list here

kargakis (Member) commented Jun 26, 2017

/sig openstack
/sig network
/remove-sig api-machinery

FengyunPan commented Jun 30, 2017

@zioproto Judging from the event message "Error deleting load balancer (will retry): Resource not found", I suspect that you deleted the service's LB resource in Neutron yourself. Is that the case?
If not, could you provide a more detailed kube-controller-manager log?

zioproto (Contributor) commented Jun 30, 2017

@FengyunPan I have not deleted anything in OpenStack. The problem is that Kubernetes never creates health monitors, but then it tries to delete them forever.

This is the log you requested:

E0630 07:31:51.577371       1 servicecontroller.go:772] Failed to process service. Retrying in 5s: Resource not found
I0630 07:31:56.578075       1 servicecontroller.go:759] Service has been deleted default/my-nginx
E0630 07:31:59.069556       1 servicecontroller.go:772] Failed to process service. Retrying in 10s: Resource not found
I0630 07:32:09.069908       1 servicecontroller.go:759] Service has been deleted default/my-nginx
E0630 07:32:11.247040       1 servicecontroller.go:772] Failed to process service. Retrying in 20s: Resource not found

FengyunPan commented Jun 30, 2017

@zioproto Maybe your Kubernetes version is too old. Can you try it on Kubernetes v1.6.4+? My test looks as follows:
-bash-4.3# kubectl get events | grep test
5m    5m    1   test   Service   Normal   CreatingLoadBalancer   service-controller   Creating load balancer
4m    4m    1   test   Service   Normal   CreatedLoadBalancer    service-controller   Created load balancer
15s   15s   1   test   Service   Normal   DeletingLoadBalancer   service-controller   Deleting load balancer
9s    9s    1   test   Service   Normal   DeletedLoadBalancer    service-controller   Deleted load balancer

zioproto (Contributor) commented Jun 30, 2017

@FengyunPan I tested with v1.6.4 and hit the same problem as with v1.6.2. I am running OpenStack Newton. Are you running the same version of OpenStack as well? Thank you.

FengyunPan commented Jun 30, 2017

@zioproto Yes, and it works fine.

FengyunPan commented Jun 30, 2017

According to the discussion on Slack, the LB of @zioproto's service has no health monitor on his cluster, but the controller-manager still attempts a deletion for the empty monitor.

I have opened PR #48336 to prevent the deletion of empty monitors.

FengyunPan pushed a commit to FengyunPan/kubernetes that referenced this issue Jun 30, 2017

Fix deleting empty monitors
Fix #48094
When create-monitor in cloud-config is false, the pool has no monitor
and there is no need to delete an empty monitor.

zioproto added a commit to zioproto/k8s-on-openstack that referenced this issue Jun 30, 2017

Openstack LBaaS: enable healthmonitors
This is not strictly necessary, but it works as a workaround
for bug kubernetes/kubernetes#48094.
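
For context, this workaround amounts to turning on health-monitor creation in the OpenStack cloud provider's cloud.conf. The [LoadBalancer] keys below are the provider's monitor settings; the concrete values are illustrative, not taken from this issue:

```ini
[LoadBalancer]
# Create an LBaaSv2 health monitor for each pool, so the controller's
# delete path has a real monitor UUID to remove (example values only).
create-monitor = true
monitor-delay = 1m
monitor-timeout = 30s
monitor-max-retries = 3
```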

FengyunPan pushed a commit to FengyunPan/kubernetes that referenced this issue Jun 30, 2017

Fix deleting empty monitors
Fix #48094
When create-monitor in cloud-config is false, the pool has no monitor
and an empty monitor cannot be deleted.

k8s-merge-robot added a commit that referenced this issue Jul 8, 2017

Merge pull request #48336 from FengyunPan/fix-delete-empty-monitors
Automatic merge from submit-queue

Fix deleting empty monitors

Fix #48094
When create-monitor in cloud-config is false, the pool has no monitor
and an empty monitor cannot be deleted.

**Release note**:
```release-note
NONE
```

RemingtonReackhof added a commit to RemingtonReackhof/kubernetes that referenced this issue Jul 11, 2017

Fix deleting empty monitors
Fix #48094
When create-monitor in cloud-config is false, the pool has no monitor
and an empty monitor cannot be deleted.

afritzler added a commit to afritzler/kubernetes that referenced this issue Aug 30, 2017

Fix deleting empty monitors
Fix #48094
When create-monitor in cloud-config is false, the pool has no monitor
and an empty monitor cannot be deleted.

mandelsoft pushed a commit to mandelsoft/kubernetes that referenced this issue Sep 20, 2017

Fix deleting empty monitors
Fix #48094
When create-monitor in cloud-config is false, the pool has no monitor
and an empty monitor cannot be deleted.

dims pushed a commit to dims/kubernetes that referenced this issue Feb 8, 2018

Fix deleting empty monitors
Fix #48094
When create-monitor in cloud-config is false, the pool has no monitor
and an empty monitor cannot be deleted.

dims pushed a commit to dims/kubernetes that referenced this issue Feb 8, 2018

Merge pull request #48336 from FengyunPan/fix-delete-empty-monitors
Automatic merge from submit-queue

Fix deleting empty monitors

Fix #48094
When create-monitor in cloud-config is false, the pool has no monitor
and an empty monitor cannot be deleted.

**Release note**:
```release-note
NONE
```