Pods fail to mount secret on k8s 1.3.0 + GKE #372
Comments
I also saw these errors with k8s 1.3.0 using Workflow v2.1.0. The first try was with kube-aws 1.3.0 / hyperkube v1.3.0_coreos.0 / CoreOS Alpha with Docker 1.11.2, and the second on kube-aws 1.3.0 / hyperkube v1.3.0_coreos.1 / CoreOS Beta with Docker 1.10.3. Here's an example of the logged error events, taken from the Tectonic console:
A workaround in GKE is to choose the "Change" link for your Node Pool and roll it back to k8s 1.2.5.
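For reference, the same rollback can be scripted instead of done through the Console UI. This is a minimal sketch that shells out to the gcloud CLI from Python; the cluster and node-pool names are assumptions, and whether a downgrade is permitted depends on your cluster's master version:

```python
import subprocess

# Roll the node pool back to Kubernetes 1.2.5 on GKE.
# "my-cluster" and "default-pool" are placeholder names.
subprocess.run(
    [
        "gcloud", "container", "clusters", "upgrade", "my-cluster",
        "--node-pool", "default-pool",
        "--cluster-version", "1.2.5",
    ],
    check=True,  # raise if gcloud exits non-zero
)
```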
I'm not seeing the issue with Workflow v2.2.0 on GKE with Kubernetes v1.3.2 and GCS storage.
Chances are you just haven't hit the bug yet. The bugfix was merged into Kubernetes master two days ago (see kubernetes/kubernetes#28939) and will likely land in k8s 1.3.3. The problem is triggered when mounting secrets, so it doesn't matter what kind of storage you are using.
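Since any pod that mounts a secret volume should trigger the bug, a minimal repro sketch using the official Python Kubernetes client might look like the following; the pod name, namespace, image, and mount path are assumptions, and the secret must already exist:

```python
from kubernetes import client, config

# Load credentials from the local kubeconfig.
config.load_kube_config()
v1 = client.CoreV1Api()

# Minimal pod that mounts the "objectstorage-keyfile" secret.
# Pod name, namespace, image, and mount path are placeholders.
pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="secret-mount-test"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="test",
                image="busybox",
                command=["sh", "-c", "ls /var/run/secrets/test && sleep 5"],
                volume_mounts=[
                    client.V1VolumeMount(
                        name="keyfile",
                        mount_path="/var/run/secrets/test",
                    )
                ],
            )
        ],
        volumes=[
            client.V1Volume(
                name="keyfile",
                secret=client.V1SecretVolumeSource(
                    secret_name="objectstorage-keyfile"
                ),
            )
        ],
    ),
)
v1.create_namespaced_pod(namespace="deis", body=pod)
```

On an affected node the pod should stay stuck in ContainerCreating with FailedMount events, regardless of the storage backend.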
@felixbuenemann thank you, I got hit by that bug on the fourth cluster on GKE.
Just checked with 1.3.2, and indeed it's still an issue there. I will retry later this week when 1.3.3 is available on GKE.
Looks like it will land in 1.3.4.
Yes, apparently @mboersma received tentative acknowledgement that it will land in 1.3.4.
If someone wants to check whether it's fixed, k8s 1.3.4 was released a couple of hours ago.
I'll be testing this out tomorrow.
Yes, I manually tested this and it was fixed with 1.3.4-beta.0.
Excellent! Thanks @felixbuenemann, we'll test again to make sure. (I've also manually tested with k8s v1.4.0-beta2, and the bug stayed fixed.)
The builder, database, minio, and registry components all mount the "objectstorage-keyfile" secret volume. In k8s 1.3 on GKE, this began to fail (see the final Events listing):
This appears to be related to kubernetes/kubernetes#28750 and maybe kubernetes/kubernetes#28898 and kubernetes/kubernetes#28616.
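Since the Events listing referenced above didn't survive here, this is a hedged sketch of how one could pull the mount-failure events for an affected pod with the Python Kubernetes client; the namespace and pod name are assumptions:

```python
from kubernetes import client, config

# Load credentials from the local kubeconfig (e.g. set up by gcloud).
config.load_kube_config()
v1 = client.CoreV1Api()

# "deis" / "deis-builder" are placeholder namespace and pod names.
events = v1.list_namespaced_event(
    namespace="deis",
    field_selector="involvedObject.name=deis-builder",
)
for ev in events.items:
    # FailedMount events carry the secret-mount error from the kubelet.
    print(ev.reason, ev.message)
```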