
Cloud Controller Manager - Error from server: no preferred addresses found; known addresses: [] #709

Closed
silverbackdan opened this issue Jul 29, 2019 · 8 comments
Labels
kind/bug Categorizes issue or PR as related to a bug.

Comments

@silverbackdan

silverbackdan commented Jul 29, 2019

/kind bug

What happened:
The openstack-cloud-controller-manager pod returns an error:
Error from server: no preferred addresses found; known addresses: []

I believe this is because my instance sits on a network where its public IP address is not visible to the machine itself. I'm not sure that's the cause, but after following the guides, the above error is logged and the controller manager goes into CrashLoopBackOff.
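For anyone else hitting this, the full error can be pulled from the pod itself; a minimal check, assuming the namespace and labels used by the upstream daemonset manifest:

# recent logs from the cloud controller manager
kubectl -n kube-system logs -l k8s-app=openstack-cloud-controller-manager --tail=50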

What you expected to happen:
When following the guides, the cloud-controller-manager should start up without error.

How to reproduce it (as minimally and precisely as possible):

vi /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
# add --cloud-provider=external to kubelet args
# Environment="KUBELET_KUBECONFIG_ARGS=--cloud-provider=external ....
systemctl daemon-reload
systemctl restart kubelet

# cloud config
cat > /etc/kubernetes/cloud.conf <<EOF
[Global]
region=RegionOne
username=***
password=***
auth-url=https://***:5000/v3
tenant-name=Project-***
tenant-id=***

[LoadBalancer]
subnet-id=***
floating-network-id=***
create-monitor=true
monitor-delay=5
monitor-timeout=5
monitor-max-retries=3
EOF

# Kubeadm config
cat > ~/kubernetes/kubeadm.conf <<EOF
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
nodeRegistration:
  kubeletExtraArgs:
    cloud-provider: external
    cloud-config: "/etc/kubernetes/cloud.conf"
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
useHyperKubeImage: true
apiServer:
  extraVolumes:
  - name: cloud-config
    hostPath: "/etc/kubernetes/cloud.conf"
    mountPath: "/etc/kubernetes/cloud.conf"
    pathType: FileOrCreate
    readOnly: true
controllerManager:
  extraVolumes:
  - name: cloud-config
    hostPath: "/etc/kubernetes/cloud.conf"
    mountPath: "/etc/kubernetes/cloud.conf"
    pathType: FileOrCreate
    readOnly: true
networking:
  podSubnet: 10.244.0.0/16
EOF

kubeadm init --config ~/kubernetes/kubeadm.conf

logout
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/62e44c867a2846fefb68bd5f178daf4da3095ccb/Documentation/kube-flannel.yml
# wait for node to become ready
kubectl get nodes -w

CLOUD_CONFIG=/etc/kubernetes/cloud.conf
kubectl create secret -n kube-system generic cloud-config --from-literal=cloud.conf="$(cat $CLOUD_CONFIG)" --dry-run -o yaml > ~/kubernetes/cloud-config-secret.yaml
kubectl apply -f ~/kubernetes/cloud-config-secret.yaml

kubectl apply -f https://raw.githubusercontent.com/kubernetes/cloud-provider-openstack/master/cluster/addons/rbac/cloud-controller-manager-roles.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/cloud-provider-openstack/master/cluster/addons/rbac/cloud-controller-manager-role-bindings.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/cloud-provider-openstack/master/manifests/controller-manager/openstack-cloud-controller-manager-ds.yaml
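After applying the manifests above, this is roughly where the crash loop appears. A simple way to watch for it (resource labels again assumed from the upstream manifest):

# watch the controller manager pod enter CrashLoopBackOff
kubectl -n kube-system get pods -w
# then inspect the events for the failing pod
kubectl -n kube-system describe pod -l k8s-app=openstack-cloud-controller-manager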

Anything else we need to know?:

Environment:

I'm sorry if I've done something silly. I've been trying to get this working for a very long time and, being new to Kubernetes, it's possible I've made a mistake somewhere. I've been on this for days now, though, so if you're able to say whether this is a bug, or perhaps what addresses I should be seeing here, it would be extremely helpful. Thank you in advance!

** edit for clarification **
As mentioned above, my OpenStack instance does not know its own external IP address because it sits behind a router/gateway. I can't work out whether that is the main problem, how I would define the address for the cloud controller if it is, or whether that is a feature that still needs adding.

@k8s-ci-robot added the kind/bug label Jul 29, 2019
@silverbackdan
Author

I have been looking into this more, and it may be because, when I have the cloud provider set to 'external', the nodes do not get an internal IP address assigned.

NAME                           STATUS   ROLES    AGE     VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
kube-controller---production   Ready    master   5m29s   v1.15.1   <none>        <none>        Ubuntu 18.04.2 LTS   4.15.0-48-generic   docker://18.6.2

When I do not use an external cloud provider in the kubeletExtraArgs, an internal IP address is found, so I think this is what's missing. I'm still unsure why that is, though; I'll keep trying to figure it out in the meantime.
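For anyone debugging the same symptom: with --cloud-provider=external the kubelet registers the node with the node.cloudprovider.kubernetes.io/uninitialized taint, and the cloud controller manager is expected to populate the node addresses from the cloud API. A minimal sketch of how to inspect that state (node name taken from my output above):

# show the uninitialized taint, if still present
kubectl describe node kube-controller---production | grep -A2 Taints
# show the (currently empty) address list the CCM should fill in
kubectl get node kube-controller---production -o jsonpath='{.status.addresses}'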

@silverbackdan
Author

Maybe someone will stumble across this one day and it'll help them. I ended up going back to the in-tree provider. With that, I got an error about my cloud.conf file: the value 5 should be 5s for the monitor delay and timeout.
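For anyone hitting the same parse error, the monitor durations need units. A corrected version of the [LoadBalancer] section from my first post (only the two duration values changed):

[LoadBalancer]
subnet-id=***
floating-network-id=***
create-monitor=true
monitor-delay=5s
monitor-timeout=5s
monitor-max-retries=3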

I then had issues again; what solved them was setting the DNS servers. I took a look at the OpenStack network I was on, checked its DNS addresses, and configured my InitConfiguration object like so:

apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
nodeRegistration:
  kubeletExtraArgs:
    cloud-provider: "openstack"
    cloud-config: "/etc/kubernetes/cloud.conf"
    cluster-dns: "81.201.138.244,80.244.179.244,94.229.163.244"

And then the first node came up with an internal and external IP address.
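A quick way to verify is the same wide listing that showed <none> above:

# both INTERNAL-IP and EXTERNAL-IP should now be populated
kubectl get nodes -o wide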

I'm sure it'll be much the same when using this external (out-of-tree) provider, as the code is probably the same or very similar. I don't know how I would have found those errors without having used the in-tree cloud provider, though. Perhaps I missed it in the docs; perhaps a doc about debugging would be helpful?

I'll leave this open for a maintainer to consider whether there is a way to get these kinds of errors out when using this cloud controller and perhaps start an issue for a docs update.

If there's no time or inclination to do that, this issue can be closed.

@lingxiankong
Contributor

I have been looking into this more and it may be due to when I have the cloud provider set as 'external' the nodes do not get an internal IP address assigned.

NAME                           STATUS   ROLES    AGE     VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
kube-controller---production   Ready    master   5m29s   v1.15.1   <none>        <none>        Ubuntu 18.04.2 LTS   4.15.0-48-generic   docker://18.6.2

From the output here, your OpenStack VM doesn't have either a fixed IP or a floating IP. Can you show me the output of openstack server show <vm-id>?
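If it helps, the relevant field can be pulled on its own; -c selects a single column in the openstack CLI:

# show just the fixed/floating IPs attached to the instance
openstack server show <vm-id> -c addresses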

@silverbackdan
Author

Thanks for the reply @lingxiankong. After adding the cluster DNS settings, the internal and external IP addresses were both found; I think that was a key part of it. I looked in my OpenStack dashboard at the network I was on to find the DNS settings.

@lingxiankong
Contributor

network I was on to find the DNS settings

Glad to hear the issue was solved. Can I close it now, or do you think it's necessary to add something to the docs?

@silverbackdan
Author

silverbackdan commented Jul 30, 2019

What would have been nice is knowing where the log files that help trace errors are located, and whether they exist at all. My first issue was a malformed cloud.conf file. When using the deprecated in-tree cloud provider, kubelet failed to start and I saw the reason why.
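For others searching later: on a systemd-based install like mine, the kubelet errors show up in the journal rather than in a log file. A minimal sketch of what I used:

# check whether kubelet is running and why it last failed
systemctl status kubelet
# follow the kubelet logs live
journalctl -u kubelet -f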

I then looked at the default configuration used by kubelet in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. Reading through the configuration being used, I saw a DNS server IP address defined, and it was that which made me look for the cluster-dns setting.
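On a kubeadm install, the drop-in mostly points at environment files and the kubelet config file, so the actual values live elsewhere. Roughly what I looked through (paths as kubeadm creates them on my system; worth double-checking on yours):

# the systemd drop-in and the files it references
cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
cat /var/lib/kubelet/kubeadm-flags.env
# the DNS address is under clusterDNS in the kubelet config file
grep -A3 clusterDNS /var/lib/kubelet/config.yaml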

As I am new to this, it seemed a really roundabout way to find the problem, and I'm not sure there would have been anything in any logs for the second issue.

I think docs to help with debugging would be extremely helpful. That's assuming I haven't just missed some documentation that explains all of this, in which case I'd be very sorry!

@lingxiankong
Contributor

I've not played with the in-tree cloud provider for a long time because it's already deprecated and not recommended.

I don't know how you deployed your k8s cluster, but I am using the Ansible playbook here, which deploys the openstack-cloud-controller-manager.

@silverbackdan
Author

Thanks for the comments. I was certainly trying to use this out-of-tree provider when I raised this ticket (as seen in the files I create with the commands in my initial post).

It was moving to the deprecated in-tree provider that surfaced the error about cloud.conf, and that started me down the road to getting it fixed and finding the cluster-dns setting.

It's likely that switching back to the kubeadm init config I have in the initial post would now work.

I think I was just suggesting some docs about debugging, and possibly some updated manifests for example here: https://github.com/kubernetes/cloud-provider-openstack/blob/master/manifests/controller-manager/kubeadm.conf

I'm afraid I don't use a deployment tool; at the moment I'm just learning, having installed kubeadm. I have seen a lot about Ansible and must do more research into it! I've only been trying to work out Kubernetes for four weeks or so, including the DevOps side, so I'm sorry if I just missed something. I'll close this ticket off, though. Thank you for your time and help!
