This repository has been archived by the owner on Jul 3, 2021. It is now read-only.

Integration with openstack #363

Closed
rauizab opened this issue Nov 22, 2018 · 6 comments

Comments


rauizab commented Nov 22, 2018

What happened:
When running apply-specs, it tries to deploy kube-dns. The pod fails with "1 node(s) had taints that the pod didn't tolerate."

What you expected to happen:
kube-dns should be up and running.

How to reproduce it (as minimally and precisely as possible):
OpenStack 6.
Version v0.24.0

  • bosh int manifests/cfcr.yml \
    -o manifests/ops-files/misc/single-master.yml \
    -o manifests/ops-files/add-hostname-to-master-certificate.yml \
    -o manifests/ops-files/allow-privileged-containers.yml \
    -o manifests/ops-files/enable-podsecuritypolicy.yml \
    -o manifests/ops-files/iaas/openstack/cloud-provider.yml \  <--- changed the value "openstack" to "external"
    --vars-store creds.yml \
    --vars-file vars.yml \
    -v kubernetes_master_host=*** \
    -v api-hostname=***** > manifest
  • deploy
  • run apply-specs

Anything else we need to know?:
kubectl edit node nodename****

...
spec:
  taints:
  - effect: NoSchedule
    key: node.cloudprovider.kubernetes.io/uninitialized
    value: "true"
...
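For anyone checking their own cluster, the same taint can be listed across all nodes in one shot (a sketch; requires access to the cluster, and node names will differ):

```shell
# Show each node's taint keys; the uninitialized taint appears here
# until an external cloud-controller-manager initializes the node.
kubectl get nodes -o custom-columns='NAME:.metadata.name,TAINTS:.spec.taints[*].key'
```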

Environment:

  • Deployment Info (bosh -d <deployment> deployment):
Name  Release(s)       Stemcell(s)                                      Config(s)            Team(s)
cfcr  bosh-dns/1.10.0  bosh-openstack-kvm-ubuntu-xenial-go_agent/97.28  238 cloud/default    -
      bpm/0.13.0                                                        236 runtime/default
      cfcr-etcd/1.5.0
      docker/32.1.0
      kubo/0.24.0
  • Environment Info (bosh -e <environment> environment):
Name      ******
UUID      33971492-adc7-4b9d-81cb-fa7a7f70eca4
Version   268.1.0 (00000000)
CPI       openstack_cpi
Features  compiled_package_cache: disabled
          config_server: enabled
          dns: disabled
          snapshots: disabled
User      admin
  • Kubernetes version (kubectl version):
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.3", GitCommit:"a4529464e4629c21224b3d52edfe0ea91b072862", GitTreeState:"clean", BuildDate:"2018-09-10T11:44:36Z", GoVersion:"go1.11", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.3", GitCommit:"a4529464e4629c21224b3d52edfe0ea91b072862", GitTreeState:"clean", BuildDate:"2018-09-09T17:53:03Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
  • Cloud provider (e.g. aws, gcp, vsphere): openstack 6
@cf-gitbot

We have created an issue in Pivotal Tracker to manage this:

https://www.pivotaltracker.com/story/show/162150020

The labels on this github issue will be updated when the story is started.

@rauizab rauizab changed the title Kube-dns fails on deployment Kube-dns fails on apply specs Nov 22, 2018

rauizab commented Nov 22, 2018

Without the ops file manifests/ops-files/iaas/openstack/cloud-provider.yml, the command "kubectl edit node nodename****" does not show any taints field.


rauizab commented Nov 22, 2018

I see this in the worker logs:

kubelet/kubelet_ctl.stderr.log:W1122 09:48:12.286929    7966 container_manager_linux.go:792] CPUAccounting not enabled for pid: 7966
kubelet/kubelet_ctl.stderr.log:W1122 09:48:12.286959    7966 container_manager_linux.go:795] MemoryAccounting not enabled for pid: 7966

Maybe that explains why the worker never becomes ready to accept pods. After I removed the taint from the node, the worker accepted pods again.
I don't know whether this is a bug in the OpenStack installation.
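For anyone hitting the same thing, the manual workaround I used looks like this (a sketch; substitute your own node name, and note this is only safe if no cloud-controller-manager is supposed to initialize the node):

```shell
# Remove the uninitialized taint so the scheduler places pods again.
# The trailing "-" after the effect deletes the taint.
kubectl taint nodes <node-name> node.cloudprovider.kubernetes.io/uninitialized:NoSchedule-
```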


rauizab commented Nov 22, 2018

From: https://kubernetes.io/docs/tasks/administer-cluster/running-cloud-controller/#running-cloud-controller-manager
and
https://kubernetes.io/docs/tasks/administer-cluster/developing-cloud-controller-manager/

kubelets specifying --cloud-provider=external will add a taint node.cloudprovider.kubernetes.io/uninitialized with an effect NoSchedule during initialization. This marks the node as needing a second initialization from an external controller before it can be scheduled work. Note that in the event that cloud controller manager is not available, new nodes in the cluster will be left unschedulable. The taint is important since the scheduler may require cloud specific information about nodes such as their region or type (high cpu, gpu, high memory, spot instance, etc).
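In other words, system pods that must run before node initialization (the cloud-controller-manager itself, for example) carry a matching toleration. A minimal sketch of what that looks like in a pod spec, matching the taint shown earlier:

```yaml
# Toleration letting a pod schedule onto a node that still carries the
# uninitialized taint set by --cloud-provider=external.
tolerations:
- key: node.cloudprovider.kubernetes.io/uninitialized
  value: "true"
  effect: NoSchedule
```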

@rauizab rauizab reopened this Nov 22, 2018
@cf-gitbot

We have created an issue in Pivotal Tracker to manage this:

https://www.pivotaltracker.com/story/show/162153418

The labels on this github issue will be updated when the story is started.

@rauizab rauizab changed the title Kube-dns fails on apply specs Deployment in openstack Nov 22, 2018
@rauizab rauizab closed this as completed Nov 22, 2018
@rauizab rauizab changed the title Deployment in openstack Integration with openstack Nov 22, 2018

rauizab commented Nov 23, 2018

I don't fully understand the problem. Since this is a test environment, we just provisioned a disk with BOSH directly and use a local storage class. That is enough for our use case.
