
Kubeadm 1.6.0 release RPM has malformed versioning information #43819

Closed
civik opened this issue Mar 29, 2017 · 18 comments
@civik

civik commented Mar 29, 2017

Bug

Kubernetes version (use kubectl version):
1.6.0

Environment:

  • Cloud provider or hardware configuration: Bare metal
  • OS (e.g. from /etc/os-release): Centos 7.3
  • Kernel (e.g. uname -a): 3.10.0-514.6.2.el7.x86_64
  • Install tools: yum
  • Others:

Using baseurl: http://yum.kubernetes.io/repos/kubernetes-el7-x86_64

What happened:

Installation of kubeadm defaults to the 1.6.0-0.alpha.0.2074.a092d8e0f95f52 package

What you expected to happen:

Yum should default to the 1.6.0-0 package.

How to reproduce it (as minimally and precisely as possible):

$ yum install kubeadm
Resolving Dependencies
--> Running transaction check
---> Package kubeadm.x86_64 0:1.6.0-0.alpha.0.2074.a092d8e0f95f52 will be installed

Anything else we need to know:

Versioning metadata appears to be malformed (n-e:v-r.a):
1.6.0-0 : kubeadm (name) - <null> (epoch) - 1.6.0 (version) - 0 (release)
1.6.0-0.alpha.0.2074 : kubeadm (name) - <null> (epoch) - 1.6.0 (version) - 0.alpha... (release)

Yum is treating the -0.something release as newer than just -0
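Yum decides this with RPM's EVR (epoch-version-release) comparison, which walks the release string segment by segment; when one release is a prefix of the other ("0" vs "0.alpha.0.2074..."), the longer one compares as newer. GNU `sort -V` applies similar ordering rules, so it can illustrate the problem (a rough illustration only; `rpmdev-vercmp` from rpmdevtools gives the authoritative RPM answer):

```shell
# Version-sort the two candidate release strings; whichever sorts last
# is the one yum treats as newer.
printf '%s\n' '1.6.0-0' '1.6.0-0.alpha.0.2074.a092d8e0f95f52' | sort -V
# 1.6.0-0 sorts first, i.e. the plain release is considered OLDER than
# the alpha build - exactly the behavior reported above.
```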

@grodrigues3
Contributor

/assign

@grodrigues3
Contributor

/unassign

@mikedanese
Member

Fixed even though 1.6.0 is still broken.

@sdake

sdake commented Mar 30, 2017

This wasn't fixed - instead, the kubeadm 1.6.0 alpha package that worked was deleted. Can we get an older version of kubeadm into the Kubernetes RPM repositories? TIA!

@mikedanese mikedanese reopened this Mar 30, 2017
@mikedanese mikedanese self-assigned this Mar 30, 2017
@civik
Author

civik commented Mar 30, 2017

@sdake Agreed. In general keeping n-1 released version packages in the repo would be appreciated, for situations just like this.
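Until the repo is fixed, one workaround is to exclude the malformed alpha build in the repo definition so plain `yum install kubeadm` can only resolve to a sane version. A sketch (the path and exclude pattern are assumptions; the example writes to a scratch file to stay side-effect free, whereas on a real host you would edit /etc/yum.repos.d/kubernetes.repo directly):

```shell
# Sketch: exclude the malformed alpha build from the Kubernetes repo so
# plain "yum install kubeadm" skips it. Writing a scratch copy here; on
# a real host edit /etc/yum.repos.d/kubernetes.repo instead.
repo=$(mktemp)
cat > "$repo" <<'EOF'
[kubernetes]
name=Kubernetes
baseurl=http://yum.kubernetes.io/repos/kubernetes-el7-x86_64
enabled=1
exclude=kubeadm-1.6.0-0.alpha*
EOF
grep '^exclude=' "$repo"
```

Alternatively, pin the version explicitly on the command line, e.g. `yum install kubeadm-1.6.0-0` (assuming that build is still published in the repo).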

@sdake

sdake commented Mar 30, 2017

For others suffering from the fact that kubeadm 1.6.0 is broken out of the box, I built the "n-1" packages (at least for CentOS) and use them in this review:

https://review.openstack.org/#/c/451556/

If you're wondering how the images were built or how they are installed, that is covered in this review.

Upstream in kolla-kubernetes we have a parallel effort to get kubeadm 1.6.0 + Kubernetes 1.6.0 going. The gate is looking promising (some green), although this review is far from complete: https://review.openstack.org/#/c/451391/

@csarora

csarora commented Mar 30, 2017

@sdake Could you provide info on how you installed an older version of kubeadm? When I try to list/install an older kubeadm, yum won't allow it - or where can I get the RPM for an older kubeadm?

@sdake

sdake commented Mar 30, 2017

@csarora I had to build it myself using these commands:
git clone https://github.com/kubernetes/release.git
cd release
git checkout efd57b86a69051b70cf08a73df0e1d672bc61272
./docker-build.sh

For future time travelers, I'm not sure I built the RPMs correctly as a result of the issue raised here: kubernetes/release#305 Instead reference this review for latest instructions: https://review.openstack.org/#/c/451556/

@Dieken
Contributor

Dieken commented Mar 30, 2017

"Broken out of the box", haha!

I upgraded from v1.5.4 to v1.6.0 and was bitten by the CNI issue in kubeadm, then I rolled back to 1.5.6 because that is the only 1.5.x left in the apt repo, then... kubelet 1.5.6 couldn't start due to a wrong default RBAC config....

Oh, indeed broken out of the box!

@sl4dy

sl4dy commented Mar 31, 2017

If anybody needs working kubeadm 1.6 alpha with all related packages:

kubeadm-1.6.0-0.alpha.0.2074.a092d8e0f95f52.x86_64.rpm
kubernetes-cni-0.3.0.1-0.07a8a2.x86_64.rpm
kubelet-1.5.4-0.x86_64.rpm
rkt-1.25.0-1.x86_64.rpm
kubectl-1.5.4-0.x86_64.rpm

I have them in our local Pulp mirror.
Link: https://drive.google.com/open?id=0Bz7FKhrf1vTQV2ZKcE54Q2pmdWM

@githubvick

Hi,

I tried the above and it always hangs at "Created API client, waiting for the control plane to become ready", with these in the logs:

error: failed to run Kubelet: failed to create kubelet: misconfiguration: kubelet cgroup driver: "cgroupfs" is different from docker cgroup driver: "systemd"

Sent message type=signal sender=n/a destination=n/a object=/org/freedesktop/systemd1/unit/kubelet_2eservice interface=org.freedesktop.DBus.Properties member=PropertiesChanged cookie=66015 reply_cookie=0 error=n/a
Apr 01 14:07:32 kubeadmmaster systemd[1]: Sent message type=signal sender=n/a destination=n/a object=/org/freedesktop/systemd1/unit/kubelet_2eservice interface=org.freedesktop.DBus.Properties member=PropertiesChanged cookie=66016 reply_cookie=0 error=n/a
Apr 01 14:07:32 kubeadmmaster systemd[1]: Got message type=signal sender=org.freedesktop.DBus destination=n/a object=/org/freedesktop/DBus interface=org.freedesktop.DBus member=NameOwnerChanged cookie=19998 reply_cookie=0 error=n/a

I had also added the systemd cgroup driver to this parameter:

Environment="KUBELET_KUBECONFIG_ARGS=--kubeconfig=/etc/kubernetes/kubelet.conf --require-kubeconfig=true --cgroup-driver=systemd"

Is it because I'm running behind a corporate proxy?

Thanks,

@githubvick

Seems like we have to restart the machine itself after setting the systemd parameter, or I don't know which service to restart for it to apply. I restarted the machine and ran init again; at least it doesn't hang now, and I'm moving on to the next hurdle:

[apiclient] First node has registered, but is not ready yet
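A full reboot shouldn't be required: systemd caches unit files, so after editing the kubelet drop-in, a `systemctl daemon-reload` followed by `systemctl restart kubelet` applies the new flag. A minimal sketch (writing to a scratch file so it is side-effect free; the real file on these packages is /etc/systemd/system/kubelet.service.d/10-kubeadm.conf):

```shell
# Sketch of the kubelet drop-in carrying the cgroup-driver flag.
# Scratch file here; on a real host this is
# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, followed by:
#   systemctl daemon-reload && systemctl restart kubelet
dropin=$(mktemp)
cat > "$dropin" <<'EOF'
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--kubeconfig=/etc/kubernetes/kubelet.conf --require-kubeconfig=true --cgroup-driver=systemd"
EOF
grep -o 'cgroup-driver=[a-z]*' "$dropin"
```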

@gtirloni
Contributor

gtirloni commented Apr 4, 2017

1.6.1 packages have been published for CentOS

# cat /etc/redhat-release 
CentOS Linux release 7.3.1611 (Core) 

# kubeadm version 
kubeadm version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.1", GitCommit:"b0b7a323cc5a4a2019b2e9520c21c7830b7f708e", GitTreeState:"clean", BuildDate:"2017-04-03T20:33:27Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}

# kubectl version
Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.1", GitCommit:"b0b7a323cc5a4a2019b2e9520c21c7830b7f708e", GitTreeState:"clean", BuildDate:"2017-04-03T20:44:38Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.0", GitCommit:"fff5156092b56e6bd60fff75aad4dc9de6b6ef37", GitTreeState:"clean", BuildDate:"2017-03-28T16:24:30Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}

# rpm -qa |grep kube
kubectl-1.6.1-0.x86_64
kubernetes-cni-0.5.1-0.x86_64
kubelet-1.6.1-0.x86_64
kubeadm-1.6.1-0.x86_64

# cat /etc/yum.repos.d/kubernetes.repo 
[kubernetes]
name=Kubernetes
baseurl=http://yum.kubernetes.io/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
        https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg

EDIT: That server version doesn't look right.

@mikedanese
Member

Server versions are not packaged in the RPM. kubeadm is responsible for deploying them in Docker containers, so this can be closed.

@ReSearchITEng

Adding "--cgroup-driver=systemd" causes a new issue on CentOS/RHEL 7.3 (fully up to date):

Apr 12 14:23:25 machine01 kubelet[3026]: W0412 14:23:25.542322    3026 docker_service.go:196] No cgroup driver is set in Docker
Apr 12 14:23:25 machine01 kubelet[3026]: W0412 14:23:25.542343    3026 docker_service.go:197] Falling back to use the default driver: "cgroupfs"
Apr 12 14:23:25 machine01 kubelet[3026]: error: failed to run Kubelet: failed to create kubelet: misconfiguration: kubelet cgroup driver: "systemd" is different from docker cgroup driver: "cgroupfs"

while we can see clearly that native.cgroupdriver=systemd is set in the docker daemon:

$ ps -ef | grep -i docker
root      4365     1  3 14:30 ?        00:00:33 /usr/bin/docker-current daemon --authorization-plugin=rhel-push-plugin --exec-opt native.cgroupdriver=systemd --selinux-enabled --log-driver=journald --insecure-registry 172.30.0.0/16 --storage-driver devicemapper --storage-opt dm.fs=xfs --storage-opt dm.thinpooldev=/dev/mapper/vg.docker--pool --storage-opt dm.use_deferred_removal=true --storage-opt dm.use_deferred_deletion=true
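To compare what each side is actually configured with, you can extract the driver names and diff them. A sketch using sample strings standing in for the live `ps` output and kubelet flags (on a real host, pipe from `ps` and read the kubelet drop-in):

```shell
# Sketch: pull the cgroup driver out of the docker daemon command line
# and out of the kubelet flags, then compare. Sample strings here stand
# in for the live ps output / kubelet drop-in.
docker_cmdline='/usr/bin/docker-current daemon --exec-opt native.cgroupdriver=systemd --selinux-enabled'
kubelet_args='--kubeconfig=/etc/kubernetes/kubelet.conf --cgroup-driver=systemd'

docker_driver=$(echo "$docker_cmdline" | grep -o 'native\.cgroupdriver=[a-z]*' | cut -d= -f2)
kubelet_driver=$(echo "$kubelet_args" | grep -o 'cgroup-driver=[a-z]*' | cut -d= -f2)

echo "docker=$docker_driver kubelet=$kubelet_driver"
if [ "$docker_driver" = "$kubelet_driver" ]; then
    echo "drivers match"
else
    echo "drivers differ - kubelet will refuse to start"
fi
```

Note that the kubelet log above says "No cgroup driver is set in Docker" even though the daemon flags show systemd, so what docker reports over its API appears to disagree with its command line here; checking both is worthwhile.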

@danodob

danodob commented Jul 14, 2017

@ReSearchITEng, have you had any luck with this issue? I ran into it as well. Moving the kubelet cgroup driver to cgroupfs removed the error, but everything else I read points to both docker and kubelet using systemd.

@qubusp

qubusp commented Jul 24, 2017

Just a side note - what is the correct repository to install Kubernetes RPMs from? Thank you
