
release a new version of kubeadm #34884

Closed
luxas opened this Issue Oct 15, 2016 · 53 comments

Comments

luxas (Member) commented Oct 15, 2016

For the last two weeks I've been focusing on getting kubeadm master more stable so we can do a release, and on implementing new features. I'm collecting here what still needs to be done:

Changelog for the new release:

  • Switch to the 10.12.0.0/16 subnet: #33668
  • Fix kubeadm on AWS by including /etc/ssl/certs in the controller-manager #33681
  • The API was refactored and is now componentconfig: #33728, #34147 and #34555
  • Allow kubeadm to get config options from a file: #34501, #34885 and #34891
  • Implement preflight checks: #34341
  • Using kubernetes v1.4.1 by default for arm support: #34419
  • Make api and discovery ports configurable and default to 6443: #34719
  • Implement kubeadm reset: #34807
  • Make kubeadm poll/wait for endpoints instead of directly fail when the master isn't available #34703 and #34718
  • Bug fixes: #34352, #34558, #34573, #34834 and #34607

My plan is to get #34718, #34719 and #34807 merged over the weekend so we can do the release on Monday.

Now that kubernetes/test-infra#670 is fixed, we will also be able to just download the kubeadm binaries for all architectures right away when building the debs and rpms.
I'll make a PR when we have a commit that includes all of the above PRs.

I'd like to document this changelog somewhere; where do you think it would fit best?

@kubernetes/sig-cluster-lifecycle

k8s-merge-robot added a commit that referenced this issue Oct 15, 2016

Merge pull request #34885 from apprenda/kubeadm_join_configuration
Automatic merge from submit-queue

kubeadm join: Added support for config file.

As more behavior (#34719, #34807, fix for #33641) is added to `kubeadm join`, this will eventually be very much needed. It makes sense to go in sooner rather than later.

Also references #34501 and #34884.

/cc @luxas @mikedanese

errordeveloper (Member) commented Oct 17, 2016

Yes, I also would like to fix #34927.

luxas (Member) commented Oct 17, 2016

We should aim to get the release out the door today.
I think we'll manage without #34927, but #34719 and #34907 need to get merged.

Please fix the rebase failure and I'll merge it.

pires (Member) commented Oct 17, 2016

I think we'll manage without #34927, but #34719 and #34907 need to get merged.

Agreed.

errordeveloper (Member) commented Oct 17, 2016

OK, it turns out I used an old build when I created #34927. Pre-flight check errors look OK when I build from master.

errordeveloper (Member) commented Oct 17, 2016

Now that we have all PRs merged, we need to agree on the process. First of all, we need to release the latest binaries into the unstable channel so we can test end-to-end from packages. I believe we have a URL for binaries built in CI; whoever knows what it is should make a PR to update the URL first.

pires (Member) commented Oct 17, 2016

we need to release latest binaries into the unstable channel, so we can test end-to-end from packages.

I definitely agree we must test all scenarios manually before releasing. If we can’t, we shouldn’t release.

That said, I can test on Ubuntu 16.04, both with self-hosted and external etcd (with TLS + client-auth).

luxas (Member) commented Oct 17, 2016

The packaging scripts in kubernetes/release now use the latest version: kubernetes/release#160

When you've verified they are working properly, @mikedanese should push them to unstable and stable

pires (Member) commented Oct 17, 2016

@dmmcquay please test as well and provide your feedback here.

mikedanese (Member) commented Oct 17, 2016

@luxas debs are pushed automatically to stable. We need to pick a point to tag unstable as new stable.

mikedanese (Member) commented Oct 17, 2016

I think we should pick the v1.5-alpha.2 release for promotion to the stable repo, not a CI build off master. We should be tapping into the main Kubernetes release cycle now that we are in the main repository.

mikedanese (Member) commented Oct 17, 2016

@saad-ali when are you planning on cutting v1.5-alpha.2?

luxas (Member) commented Oct 17, 2016

v1.5.0-alpha.1 just got released, so we should not wait ~2 weeks for the next release for this.

This time building from a specific commit is OK, but next time, sure.

mikedanese (Member) commented Oct 17, 2016

According to the release calendar, there should be a v1.5-alpha.2 today. I think v1.5-alpha.1 slipped a week. Is that correct, @saad-ali?

mikedanese (Member) commented Oct 17, 2016

Oops, I misread. Looks like the next one is on 10/31.

saad-ali (Member) commented Oct 19, 2016

@saad-ali when are you planning on cutting v1.5-alpha.2?

v1.5.0-alpha.1 went out on Oct 13. Keeping a two-week cadence, v1.5.0-alpha.2 is planned for the 27th.

errordeveloper (Member) commented Oct 19, 2016

I believe test instructions for someone who has already installed kubeadm would be something like:

On Ubuntu 16.04

# cat <<EOF > /etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial-unstable main
EOF
# apt-get update
# apt-get upgrade -y kubelet kubeadm kubectl kubernetes-cni

On CentOS or other EL7-based distros

# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://yum.kubernetes.io/repos/kubernetes-el7-x86_64-unstable
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
       https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
# yum upgrade -y docker kubelet kubeadm kubectl kubernetes-cni

And finally:

# kubeadm reset
# systemctl restart kubelet
# kubeadm init

_Can anyone here test this, so we can point folks who had issues at these instructions?_

thaume commented Oct 19, 2016

Hey, thanks for the update!

@errordeveloper I just ran your script; here are the results.

After updating the master, the other node disappeared (I guess this is pretty normal, since kubeadm reset and kubeadm init create a new "join" token).

NAME       STATUS    AGE
sd-83688   Ready     40s

After running the update on the second node and joining, my apiserver and k8s dashboard are fiercely running on port :6443.

A quick note: one preflight check reported:

preflight check errors:
    ethtool not found in system path

It was an apt-get install away, but it validates that the preflight checks are working :P
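
For anyone else hitting this, the missing tool is just a standard package, e.g. on Ubuntu:

# apt-get install -y ethtool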

Thanks again for the quick update!

pesho commented Oct 19, 2016

Only the DEB packages have been pushed:
https://packages.cloud.google.com/apt/dists/kubernetes-xenial-unstable/main/binary-amd64/Packages

errordeveloper (Member) commented Oct 19, 2016

It is possible there is no automation for RPMs yet.


n-marton (Contributor) commented Oct 21, 2016

The generated /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (on Ubuntu; I didn't check other distros) ignores a custom --service-cidr and contains 10.12.0.10 as --cluster-dns.
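
For context, the DNS flag in the generated drop-in ends up looking something like the line below (reconstructed from the 10.12.0.10 value above and the KUBELET_DNS_ARGS variable used in the workaround in the next comment; not copied verbatim from the package), regardless of the --service-cidr passed to kubeadm:

Environment="KUBELET_DNS_ARGS=--cluster-dns=10.12.0.10 --cluster-domain=cluster.local"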

errordeveloper (Member) commented Oct 21, 2016

I suspect there may be an issue with the recent change that went in with fbd5032.

It appears that ipallocator.GetIndexedIP(...) has a mode where it ends up ignoring the range it was given; I'm looking into it now.

OK, I just got my CIDR math wrong. The bug is really in the package; we need to update the --cluster-dns flag.

Here is how to fix this in the meantime:

root@kube-1:~# mkdir /etc/systemd/system/kubelet.service.d/
root@kube-1:~# printf '[Service]\nEnvironment="KUBELET_DNS_ARGS=--cluster-dns=10.0.0.10 --cluster-domain=cluster.local"\n' > /etc/systemd/system/kubelet.service.d/20-dns-fix.conf
root@kube-1:~# apt install kubelet kubeadm
...

If you have already installed the packages, you can run kubeadm reset, create /etc/systemd/system/kubelet.service.d/20-dns-fix.conf as shown above, and then run systemctl daemon-reload && systemctl restart kubelet.
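
Put together, the already-installed path would look roughly like this (same drop-in as above; the 10.0.0.10 address matches the default service range discussed earlier in this thread):

# kubeadm reset
# mkdir -p /etc/systemd/system/kubelet.service.d/
# printf '[Service]\nEnvironment="KUBELET_DNS_ARGS=--cluster-dns=10.0.0.10 --cluster-domain=cluster.local"\n' > /etc/systemd/system/kubelet.service.d/20-dns-fix.conf
# systemctl daemon-reload && systemctl restart kubelet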

errordeveloper (Member) commented Oct 21, 2016

Having said that, I am not happy with 10.12.0.0/12 being the default range; it's too likely to clash. For example, there will be a major overlap with 10.0.0.0/16. I'd like to change the default.

Starefossen (Contributor) commented Oct 21, 2016

I applied the DNS fix on my kubeadm cluster, but the errors still persist from within the pods:

Curling the kube-apiserver from a node:

[root@node-01 ~]# curl -k https://10.0.0.1
Unauthorized

Curling the kube-apiserver from within a pod:

[root@node-01 ~]# kubectl exec test-701078429-ael0b -- curl --verbose -k https://10.0.0.1
curl: (7) Failed to connect to 10.0.0.1 port 443: No route to host
[root@node-01 ~]# kubectl exec test-701078429-ael0b -- cat /etc/resolv.conf 
nameserver 10.0.0.10
[root@node-01 ~]# kubectl get svc --all-namespaces
NAMESPACE     NAME         CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
default       kubernetes   10.0.0.1     <none>        443/TCP         23m
kube-system   kube-dns     10.0.0.10    <none>        53/UDP,53/TCP   22m

errordeveloper (Member) commented Oct 24, 2016

@Starefossen this looks like a completely separate issue; let's discuss it on Slack.

errordeveloper (Member) commented Oct 24, 2016

I am waiting for at least kubernetes/release#168 and #35270 to get merged; #35290 would also be good.

vganapathy1 commented Oct 24, 2016

@errordeveloper I'm also facing the same issue as @Starefossen; can you please advise?

Starefossen (Contributor) commented Oct 24, 2016

@vganapathy1 I managed to get my Kubernetes cluster working with internal and external DNS queries on Oracle Linux 7.2 (kernel 4.1.12-61.1.14.el7uek.x86_64) after a complete reset:

# reset kubeadm
kubeadm reset

# reset docker
# warning: this deletes all containers, images, and volumes!!
docker rm -f $(docker ps -qa)
docker rmi -f $(docker images -qa)
systemctl stop docker
rm -rf /var/lib/docker/*

# stop and disable firewalld
systemctl disable firewalld
systemctl stop firewalld

# reset iptables
iptables -P INPUT ACCEPT
iptables -P FORWARD ACCEPT
iptables -P OUTPUT ACCEPT 
iptables -t nat -F        
iptables -t mangle -F     
iptables -F               
iptables -X    
iptables-save > /etc/sysconfig/iptables
systemctl restart iptables

# start docker and kubelet
systemctl start docker
systemctl start kubelet

errordeveloper (Member) commented Oct 24, 2016

@vganapathy1 is it possible you have some firewall configuration that gets in the way? What distro are you using?

errordeveloper (Member) commented Oct 24, 2016

Looks like we are also blocked on kubernetes/release#171.

vganapathy1 commented Oct 25, 2016

@errordeveloper, I'm using Ubuntu 16.04.1 LTS (GNU/Linux 4.4.0-43-generic x86_64) and the firewall is disabled by default. I even tried the reset steps suggested by @Starefossen and the issue still persists!

FYI, when I look at kube-proxy-amd64 I can see the messages below:

I1025 11:48:32.834269 1 iptables.go:339] running iptables-restore [--noflush --counters]
I1025 11:48:32.837936 1 proxier.go:751] syncProxyRules took 21.882689ms
I1025 11:48:32.837961 1 proxier.go:523] OnEndpointsUpdate took 22.098958ms for 4 endpoints
I1025 11:48:33.107442 1 config.go:99] Calling handler.OnEndpointsUpdate()
I1025 11:48:33.107485 1 proxier.go:758] Syncing iptables rules
I1025 11:48:33.107496 1 iptables.go:362] running iptables -N [KUBE-SERVICES -t filter]
I1025 11:48:33.108745 1 healthcheck.go:86] LB service health check mutation request Service: default/kubernetes - 0 Endpoints []
I1025 11:48:33.109389 1 iptables.go:362] running iptables -N [KUBE-SERVICES -t nat]
I1025 11:48:33.111039 1 iptables.go:362] running iptables -C [OUTPUT -t filter -m comment --comment kubernetes service portals -j KUBE-SERVICES]
I1025 11:48:33.113422 1 iptables.go:362] running iptables -C [OUTPUT -t nat -m comment --comment kubernetes service portals -j KUBE-SERVICES]
I1025 11:48:33.115332 1 iptables.go:362] running iptables -C [PREROUTING -t nat -m comment --comment kubernetes service portals -j KUBE-SERVICES]
I1025 11:48:33.117222 1 iptables.go:362] running iptables -N [KUBE-POSTROUTING -t nat]
I1025 11:48:33.118631 1 iptables.go:362] running iptables -C [POSTROUTING -t nat -m comment --comment kubernetes postrouting rules -j KUBE-POSTROUTING]
I1025 11:48:33.120569 1 iptables.go:298] running iptables-save [-t filter]
I1025 11:48:33.123196 1 iptables.go:298] running iptables-save [-t nat]
I1025 11:48:33.125567 1 proxier.go:1244] Restoring iptables rules: *filter
:KUBE-SERVICES - [0:0]
-A KUBE-SERVICES -m comment --comment "kube-system/kube-dns:dns has no endpoints" -m udp -p udp -d 10.0.0.10/32 --dport 53 -j REJECT
-A KUBE-SERVICES -m comment --comment "kube-system/kube-dns:dns-tcp has no endpoints" -m tcp -p tcp -d 10.0.0.10/32 --dport 53 -j REJECT
COMMIT
*nat
:KUBE-SERVICES - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-POSTROUTING - [0:0]
:KUBE-MARK-MASQ - [0:0]
:KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
:KUBE-SEP-TKCQEPMSAIXROJ4U - [0:0]
:KUBE-SVC-TCOU7JCQXEZGVUNU - [0:0]
:KUBE-SVC-ERIFXISQEP7F7OF4 - [0:0]
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x00004000/0x00004000 -j MASQUERADE
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x00004000/0x00004000
-A KUBE-SERVICES -m comment --comment "default/kubernetes:https cluster IP" -m tcp -p tcp -d 10.0.0.1/32 --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment default/kubernetes:https -m recent --name KUBE-SEP-TKCQEPMSAIXROJ4U --rcheck --seconds 180 --reap -j KUBE-SEP-TKCQEPMSAIXROJ4U
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment default/kubernetes:https -j KUBE-SEP-TKCQEPMSAIXROJ4U
-A KUBE-SEP-TKCQEPMSAIXROJ4U -m comment --comment default/kubernetes:https -s 10.63.33.46/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-TKCQEPMSAIXROJ4U -m comment --comment default/kubernetes:https -m recent --name KUBE-SEP-TKCQEPMSAIXROJ4U --set -m tcp -p tcp -j DNAT --to-destination 10.63.33.46:6443
-A KUBE-SERVICES -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp -p udp -d 10.0.0.10/32 --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
-A KUBE-SERVICES -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp -p tcp -d 10.0.0.10/32 --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
COMMIT

errordeveloper (Member) commented Oct 25, 2016

Does 'curl -vk https://10.0.0.1:443' work from a node and master?


luxas (Member) commented Oct 25, 2016

Please move the debug session to another thread so we can focus on release tasks here instead :)

errordeveloper (Member) commented Oct 25, 2016

OK, it looks like there is a bug; I wonder why we missed it so far, it may be an edge case. Tobias, if you can give me access, that would be the easiest path to debug this. I wonder why the /etc/kubernetes directory was not empty in your VM... Before I get to this, could you check what was in it, and whether there may be any other state, e.g. data in etcd?

On Fri, 21 Oct 2016, 10:59, Tobias Bradtke wrote:

Same problem as @pesho. See over there for details:
https://gist.github.com/webwurst/e65839c4889b8c3c88051ffd7b072168
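
For example, to capture that state before it gets wiped (the /var/lib/etcd path is an assumption based on kubeadm's default etcd data directory, not something stated in this thread):

# ls -lA /etc/kubernetes
# ls -lA /var/lib/etcd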

errordeveloper (Member) commented Oct 26, 2016

Let's release another unstable snapshot ASAP: kubernetes/release#177 👍

errordeveloper (Member) commented Oct 26, 2016

Please move the debug session to another thread so we can focus on release tasks here instead :)

Thanks @luxas. @vganapathy1 if you are still having problems, please open another issue or find me on Slack.

luxas (Member) commented Oct 31, 2016

Some more PRs will be in the next release:

In my opinion, the last piece we need before we can push the release to stable is #35796.

cc @pires @errordeveloper @mikedanese

bulletRush (Contributor) commented Nov 1, 2016

In the kubeadm source code, some places use "k8s.io/kubernetes/pkg/api" and some places use "k8s.io/kubernetes/pkg/api/v1", and v1.PodSecurityContext doesn't have a HostNetwork field.
You can find more information in my branch: pre pull images and configurable pod
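
To illustrate the mismatch, here is a minimal Go sketch (assuming the pkg/api and pkg/api/v1 type layouts of that era; this is not code taken from the kubeadm tree): the internal API carries HostNetwork on PodSecurityContext, while v1 exposes it directly on PodSpec.

package main

import (
	"fmt"

	"k8s.io/kubernetes/pkg/api"
	"k8s.io/kubernetes/pkg/api/v1"
)

func main() {
	// Internal API: host networking is requested via PodSecurityContext.
	internal := api.Pod{
		Spec: api.PodSpec{
			SecurityContext: &api.PodSecurityContext{HostNetwork: true},
		},
	}

	// Versioned v1 API: HostNetwork is a field on PodSpec itself;
	// v1.PodSecurityContext has no such field.
	versioned := v1.Pod{
		Spec: v1.PodSpec{
			HostNetwork: true,
		},
	}

	fmt.Println(internal.Spec.SecurityContext.HostNetwork, versioned.Spec.HostNetwork)
}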

@luxas luxas self-assigned this Nov 2, 2016

luxas (Member) commented Nov 2, 2016

Now all required PRs are merged, so we can produce a new stable release.
kubernetes/release#186 promotes the current HEAD version to unstable, and after that we're going to mark it as stable as well.

A docs update is coming soon.

dgoodwin (Contributor) commented Nov 2, 2016

I have found a bug with kubeadm join on CentOS; we fixed master init but failed to think about join:

$ kubeadm join --token=736a96.7d67243bc4d6137a 192.168.122.176
Running pre-flight checks
preflight check errors:
        kubelet service is not active, please run 'systemctl start kubelet.service'
        /etc/kubernetes is not empty

Fixing now. (#36083)

luxas (Member) commented Nov 2, 2016

Oops, yeah. We need tests 😅
I'm merging that one now

luxas (Member) commented Nov 3, 2016

@mikedanese Please promote the unstable packages to stable today.
I expect it to take you about 5 minutes, so I think it can be done under the 2-hour limit 😄

luxas (Member) commented Nov 7, 2016

The release is out now and documented; closing...
