Controller Manager - Service Controller - Clean up load balancer error (OpenStack) #54864
Comments
/sig openstack
@piwi91 I'm wondering if the API is returning
Hi @jamiehannaford, I would love to, but I don't know which URL it is calling. Any thoughts on how I can log all the calls so I can try to reproduce it with curl?
@piwi91 I think it's the lbaas v2 endpoint for the Neutron service. The URL will be in your service catalog.
@jamiehannaford OK, I ran the neutron client in verbose mode, which showed me the JSON response:
@piwi91 What's the JSON output for listing load balancers? The one you posted retrieves a single LB.
@jamiehannaford Here you go:
@jamiehannaford Any update? :)
@piwi91 Hmm, if you look at the JSON response you're getting back from the loadbalancers call, it has this for the listeners field: The OpenStack docs are super unclear. The description says "array" but makes no mention of what's inside the array except "The associated listener IDs, if any." @xgerman Any idea what the correct JSON response should be for Octavia?
OK, I ran the curl against a Newton cloud and I am seeing the same output as in the docs: "listeners": [{"id": "4b01b6b0-959b-4012-a3a5-40c729835842"}] -- I'm not sure why the output shown above is different.
There is an LBaaS v1 interface which has long been deprecated, and lb-version=v2 above indicates we are not using it. If we can get the exact URL being called and the OpenStack version, we can investigate further. To my knowledge the LBaaS v2 API hasn't changed in several years (and the API contract is to only add new fields, never change existing ones)...
Yeah, I ran a
which matches the Octavia API output.
@piwi91 This looks like a problem with your OpenStack setup. I'm not sure what K8s can do here, since gophercloud expects a specific output.
@jamiehannaford I'm using LBaaS v2. I will contact my OpenStack provider and refer to this issue so they can investigate further. They're using Mirantis OpenStack (https://www.mirantis.com/). The only thing I can think of is that Mirantis changed the API in their implementation. EDIT: I contacted my OpenStack provider; they're going to investigate and have informed Mirantis about this issue. I will update this ticket when I have more information.
I can confirm that this issue only occurred with my OpenStack provider. After I reached out to their support, they changed the API, and that resolved the issue!
I'm closing this issue. Thank you all for the support!
Is this a BUG REPORT or FEATURE REQUEST?:
/kind bug
What happened:
A load balancer in OpenStack isn't removed after deleting a LoadBalancer service, and the controller manager logs an error:
service_controller.go:749] Failed to process service. Retrying in 5s: json: cannot unmarshal string into Go struct field LoadBalancer.listeners of type listeners.Listener
What you expected to happen:
The load balancer should be removed.
How to reproduce it (as minimally and precisely as possible):
Boot up a Kubernetes cluster with the OpenStack cloud provider enabled. Add a LoadBalancer service (this creates the load balancer in OpenStack) and delete it once the load balancer comes online. The load balancer won't be removed.
Anything else we need to know?:
Cloud.conf
Environment:
Kubernetes version (kubectl version): Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.1", GitCommit:"f38e43b221d08850172a9a4ea785a86a3ffa3b3a", GitTreeState:"clean", BuildDate:"2017-10-11T23:27:35Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Cloud provider: OpenStack
OS: CentOS Linux 7 (Core) 3.10.0-693.5.2.el7.x86_64
Kernel (uname -a): Linux xxx 3.10.0-693.5.2.el7.x86_64 #1 SMP Fri Oct 20 20:32:50 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux