
Service account not working in hyperkube since v1.3.0-alpha.5 #26943

Closed
cheld opened this issue Jun 7, 2016 · 35 comments
Labels
kind/bug Categorizes issue or PR as related to a bug. priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release.

Comments

cheld (Contributor) commented Jun 7, 2016

Problem
Service account seems to be broken in Hyperkube

Steps:

export K8S_VERSION=v1.3.0-alpha.5
export ARCH=amd64
docker run -d \
    --volume=/:/rootfs:ro \
    --volume=/sys:/sys:rw \
    --volume=/var/lib/docker/:/var/lib/docker:rw \
    --volume=/var/lib/kubelet/:/var/lib/kubelet:rw \
    --volume=/var/run:/var/run:rw \
    --net=host \
    --pid=host \
    --privileged \
    gcr.io/google_containers/hyperkube-${ARCH}:${K8S_VERSION} \
    /hyperkube kubelet \
        --containerized \
        --hostname-override=127.0.0.1 \
        --api-servers=http://localhost:8080 \
        --config=/etc/kubernetes/manifests \
        --cluster-dns=10.0.0.10 \
        --cluster-domain=cluster.local \
        --allow-privileged --v=2

kubectl run debain2 --image debian sleep 1000000
kubectl exec debain2-1279483658-wakm7 ls /var/run/secrets/kubernetes.io/serviceaccount

The result is an empty directory.

Comment:
It works in v1.3.0-alpha.4

export K8S_VERSION=v1.3.0-alpha.4
export ARCH=amd64
docker run ...
kubectl run debain2 --image debian sleep 1000000
kubectl exec debain2-1279483658-wakm7 ls /var/run/secrets/kubernetes.io/serviceaccount
ca.crt
namespace
token
luxas (Member) commented Jun 7, 2016

Can you provide interesting parts of the kubelet log?

cheld (Contributor, Author) commented Jun 7, 2016

Not sure what is interesting, so I uploaded the whole log file.
log.txt

luxas (Member) commented Jun 7, 2016

Can you also kindly test this with the shared volume mount solution?

luxas added the kind/bug, help-wanted, and priority/important-soon labels Jun 7, 2016
luxas (Member) commented Jun 7, 2016

cc @jsafrane @fgrzadkowski @lavalamp @ncdc @kubernetes/sig-storage @kubernetes/rh-storage

I think something is wrong with NsenterWriter, did someone touch it between alpha.4 and alpha.5?

ncdc (Member) commented Jun 7, 2016

@cheld What OS? What Docker version?

cheld (Contributor, Author) commented Jun 7, 2016

Docker: 1.10.3 (client+server)

OS: Ubuntu 15.10

cheld (Contributor, Author) commented Jun 7, 2016

CC @batikanu

ncdc (Member) commented Jun 7, 2016

@cheld have you tried --volume=/var/lib/kubelet:/var/lib/kubelet:rw,rslave?

cheld (Contributor, Author) commented Jun 7, 2016

@ncdc it works!

docker run -d \
    --volume=/:/rootfs:ro \
    --volume=/sys:/sys:rw \
    --volume=/var/lib/docker/:/var/lib/docker:rw \
    --volume=/var/lib/kubelet/:/var/lib/kubelet:rw,rslave \
    --volume=/var/run:/var/run:rw \
    --net=host \
    --pid=host \
    --privileged \
    gcr.io/google_containers/hyperkube-${ARCH}:${K8S_VERSION} \
    /hyperkube kubelet \
        --containerized \
        --hostname-override=127.0.0.1 \
        --api-servers=http://localhost:8080 \
        --config=/etc/kubernetes/manifests \
        --cluster-dns=10.0.0.10 \
        --cluster-domain=cluster.local \
        --allow-privileged --v=2
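Whether the rw,rslave flag actually took effect can be checked from inside the kubelet container by inspecting /proc/self/mountinfo. A minimal sketch (the helper name classify_propagation is made up; the field layout follows proc(5), where propagation appears in the optional fields before the "-" separator):

```shell
# classify_propagation: read one /proc/self/mountinfo line on stdin and print
# its mount propagation mode: "shared", "slave", or "private".
classify_propagation() {
  awk '{
    prop = "private"
    # Optional fields start at field 7 and end at the "-" separator.
    for (i = 7; i <= NF && $i != "-"; i++) {
      if ($i ~ /^shared:/) { prop = "shared"; break }
      if ($i ~ /^master:/) { prop = "slave"; break }
    }
    print prop
  }'
}

# Example mountinfo line for a slave mount (fields per proc(5)):
echo '36 25 0:32 / /var/lib/kubelet rw,relatime master:1 - tmpfs tmpfs rw' \
  | classify_propagation   # prints "slave"
```

On a live node, the line to classify would come from `grep ' /var/lib/kubelet ' /proc/self/mountinfo` inside the container; in the failing setup it carries no propagation field, i.e. the mount is private.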

ncdc (Member) commented Jun 7, 2016

@cheld is it ok to close this?

luxas (Member) commented Jun 7, 2016

Okay, this is going to push our minimum Docker version up to 1.10, but I think it's okay.
@cheld Could you run conformance tests on alpha.4 and alpha.5 with this change?
That would be a great way to see if anything else has changed...

@cheld Does everything work "normally" (for this solution) with rslave? emptyDir, downward API, service accounts, etc.?

ncdc (Member) commented Jun 7, 2016

rslave is how we do it for OpenShift. Everything should work just fine.


cheld (Contributor, Author) commented Jun 7, 2016

@ncdc Can you shed some light on the rslave option?

So, we still have to adapt documentation and a couple of scripts. I can do this in a separate PR.

@zreigz could you please run the conformance tests?

luxas (Member) commented Jun 7, 2016

Does rslave work with versions earlier than alpha.5? @cheld, could you verify?

cheld (Contributor, Author) commented Jun 7, 2016

Manually tested emptyDir, downward API, and service accounts with v1.2.4, alpha.4 and alpha.5. Everything seems to work fine.

BTW: the shared volume solution is not really working on my machine. (Error: Path /var/lib/kubelet is mounted on /var/lib/kubelet but it is not a shared mount.)

   umount $(mount | grep /var/lib/kubelet | awk '{print $3}')
   rm -R /var/lib/kubelet
   mkdir -p /var/lib/kubelet
   mount --bind /var/lib/kubelet /var/lib/kubelet
   mount --make-shared /var/lib/kubelet

   docker run \
       --name=kubelet \
       --volume=/:/rootfs:ro \
       --volume=/sys:/sys:ro \
       --volume=/var/lib/docker/:/var/lib/docker:rw \
       --volume=/var/run:/var/run:rw \
       --volume=/var/lib/kubelet:/var/lib/kubelet:shared \
       --net=host \
       --pid=host \
       --privileged=true \
       -d \
       ${HYPERKUBE_IMAGE} \
       /hyperkube kubelet \
           --hostname-override=${MASTER_IP} \
           --address="0.0.0.0" \
           --api-servers=http://localhost:8080 \
           --config=/etc/kubernetes/manifests-multi \
           --cluster-dns=10.0.0.10 \
           --cluster-domain=cluster.local \
           --allow-privileged=true --v=2 \
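
For repeated experiments, the preparation steps above can be wrapped in a small helper. A sketch only: the function name is made up, the stale-mount cleanup (umount/rm) from the original steps is omitted, and executing it for real requires root; pass `echo` as the second argument to preview the commands instead of running them:

```shell
# prepare_kubelet_dir: bind-mount a directory onto itself and mark it shared,
# so mounts created under it by the kubelet propagate to other containers.
prepare_kubelet_dir() {
  run=${2:-}                       # e.g. "echo" for a dry run, empty to execute
  dir=${1:-/var/lib/kubelet}
  $run mkdir -p "$dir"
  $run mount --bind "$dir" "$dir"
  $run mount --make-shared "$dir"
}

prepare_kubelet_dir /var/lib/kubelet echo   # dry run: prints the three commands
```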

zreigz (Contributor) commented Jun 7, 2016

In your docker service file you have to either clear MountFlags or set it to shared:

[Service]
Type=notify
ExecStart=/usr/bin/docker daemon -H fd://
MountFlags=shared
LimitNOFILE=1048576
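
A quick way to check what a unit file sets is a small filter like the following (a hedged sketch; the helper name mountflags_of is made up, and "unset" simply means the directive is absent so the service manager's default propagation applies):

```shell
# mountflags_of: print the MountFlags= value from a systemd unit file read on
# stdin, or "unset" if the directive is absent.
mountflags_of() {
  awk -F= '/^MountFlags=/ { print $2; found = 1 } END { if (!found) print "unset" }'
}

printf '[Service]\nMountFlags=shared\n' | mountflags_of   # prints "shared"
```

After changing the unit file, the usual `systemctl daemon-reload` plus a docker restart is needed for the flag to take effect.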

ncdc (Member) commented Jun 7, 2016

Did your Docker unit file change between v1.3.0-alpha.4 and v1.3.0-alpha.5? Perhaps you had MountFlags=shared (or slave) when running v1.3.0-alpha.4 but it somehow was removed or changed to private for v1.3.0-alpha.5?

ncdc (Member) commented Jun 7, 2016

@pmorie fyi in case you want to explain the various mount propagation modes 😄

pmorie (Member) commented Jun 7, 2016

I guess what I really need to do is write some doc on this subject.

luxas (Member) commented Jun 7, 2016

@pmorie That would be really nice!

@cheld May I assign this to you, to document the docker 1.10 requirement and update the commands to include rslave?

cheld (Contributor, Author) commented Jun 8, 2016

Yes, I will do that.

luxas (Member) commented Jun 8, 2016

@pmorie @ncdc For me it does create new mounts every few seconds. Now my /proc/mounts consists of ~5500 lines, e.g.:

tmpfs /var/lib/kubelet/pods/1afe6545-2cb2-11e6-b895-f0761c62f136/volumes/kubernetes.io~secret/default-token-ijd7i tmpfs rw,relatime 0 0
tmpfs /var/lib/kubelet/pods/1afe6545-2cb2-11e6-b895-f0761c62f136/volumes/kubernetes.io~secret/default-token-ijd7i tmpfs rw,relatime 0 0
tmpfs /var/lib/kubelet/pods/ad4dfc21-2cb2-11e6-b895-f0761c62f136/volumes/kubernetes.io~secret/default-token-mnfqf tmpfs rw,relatime 0 0
tmpfs /var/lib/kubelet/pods/ad4dfc21-2cb2-11e6-b895-f0761c62f136/volumes/kubernetes.io~secret/default-token-mnfqf tmpfs rw,relatime 0 0
tmpfs /var/lib/kubelet/pods/ad4de825-2cb2-11e6-b895-f0761c62f136/volumes/kubernetes.io~secret/default-token-mnfqf tmpfs rw,relatime 0 0
tmpfs /var/lib/kubelet/pods/ad4de825-2cb2-11e6-b895-f0761c62f136/volumes/kubernetes.io~secret/default-token-mnfqf tmpfs rw,relatime 0 0
tmpfs /var/lib/kubelet/pods/ad4dd246-2cb2-11e6-b895-f0761c62f136/volumes/kubernetes.io~secret/default-token-mnfqf tmpfs rw,relatime 0 0
tmpfs /var/lib/kubelet/pods/ad4dd246-2cb2-11e6-b895-f0761c62f136/volumes/kubernetes.io~secret/default-token-mnfqf tmpfs rw,relatime 0 0
...
the same volume is mounted ~600 times.

pmorie (Member) commented Jun 8, 2016

@luxas what is the propagation mode of the /var/lib/kubelet mount in the kubelet container?

Inside the container, run this command:

findmnt -o +PROPAGATION
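
For a single mount point the output can be narrowed down, e.g. (assumes the util-linux findmnt, whose PROPAGATION column reports shared/slave/private):

```shell
# Show only the kubelet directory's mount and its propagation mode.
findmnt -o TARGET,PROPAGATION --target /var/lib/kubelet
```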

luxas (Member) commented Jun 8, 2016

@pmorie That was when I was testing rslave I think...

zreigz (Contributor) commented Jun 9, 2016

Summarizing 3 Failures:

[Fail] [k8s.io] Secrets [It] should be consumable from pods in volume [Conformance] 
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1670

[Fail] [k8s.io] Kubectl client [k8s.io] Kubectl describe [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] [Flaky] 
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1291

[Fail] [k8s.io] Kubectl client [k8s.io] Kubectl run job [It] should create a job from an image when restart is Never [Conformance] 
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1190

Ran 94 of 293 Specs in 458.371 seconds
FAIL! -- 91 Passed | 3 Failed | 0 Pending | 199 Skipped 

dims (Member) commented Jun 10, 2016

Being worked on in #26996 (also see #26864).

tristanz commented Jul 8, 2016

I can confirm this remains broken in 1.3. The break appeared between alpha.4 and alpha.5.

luxas (Member) commented Jul 8, 2016

Use the shared mount without --containerized.
That works with v1.3.0, and is the only supported/working solution.

cheld (Contributor, Author) commented Jul 12, 2016

The official documentation for launching hyperkube is still not correct. I see two options to fix it:

  1. Use the rslave hack to make the containerized hack work. Advantage: easy for the user. Disadvantage: seems to have some bugs.
  2. Use shared mount propagation. Advantage: nice. Disadvantage: requires some kind of setup by the user; either we document it or we provide a setup script.

sekka1 commented Jul 12, 2016

Also having this issue. The shared mount did not fix it for me.

tristanz commented

@sekka1 Did you use a shared mount for /var/lib/kubelet and remove the --containerized flag? I believe this does solve the issue, as @luxas suggests.

sekka1 commented Jul 12, 2016

@tristanz It doesn't resolve the issue for me. I am following these docs with a modified kubelet start command (this did work for me in v1.2.4):

http://kubernetes.io/docs/getting-started-guides/docker-multinode/worker/

docker run \
    --volume=/:/rootfs:ro \
    --volume=/sys:/sys:ro \
    --volume=/dev:/dev \
    --volume=/var/lib/docker/:/var/lib/docker:rw \
    --volume=/var/lib/kubelet/:/var/lib/kubelet:rw \
    --volume=/var/run:/var/run:rw \
    --net=host \
    --privileged=true \
    --pid=host \
    -d \
    gcr.io/google_containers/hyperkube-amd64:v${K8S_VERSION} \
    /hyperkube kubelet \
        --allow-privileged=true \
        --api-servers=http://${MASTER_IP}:8080 \
        --v=2 \
        --address=0.0.0.0 \
        --enable-server \
        --cluster-dns=10.0.0.10 \
        --cluster-domain=cluster.local

When I start up a pod, I see these logs in that kubelet container. Not sure what to make of them.

2016-07-12T22:03:23.566055660Z I0712 22:03:23.565710    1682 kubelet.go:2534] SyncLoop (ADD, "api"): "aa-test-http-server-x5ups_prod(70816d38-487c-11e6-97e9-0050569525aa)"
2016-07-12T22:03:23.691929109Z I0712 22:03:23.691583    1682 reconciler.go:253] MountVolume operation started for volume "kubernetes.io/secret/default-token-yhdrg" (spec.Name: "default-token-yhdrg") to pod "70816d38-487c-11e6-97e9-0050569525aa" (UID: "70816d38-487c-11e6-97e9-0050569525aa"). 
2016-07-12T22:03:23.705684806Z I0712 22:03:23.705355    1682 operation_executor.go:720] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/default-token-yhdrg" (spec.Name: "default-token-yhdrg") pod "70816d38-487c-11e6-97e9-0050569525aa" (UID: "70816d38-487c-11e6-97e9-0050569525aa").
2016-07-12T22:03:23.867969069Z I0712 22:03:23.867614    1682 docker_manager.go:1735] Need to restart pod infra container for "aa-test-http-server-x5ups_prod(70816d38-487c-11e6-97e9-0050569525aa)" because it is not found
2016-07-12T22:03:24.397589348Z I0712 22:03:24.396673    1682 reconciler.go:253] MountVolume operation started for volume "kubernetes.io/secret/default-token-97yu5" (spec.Name: "default-token-97yu5") to pod "bd9e4a13-487b-11e6-97e9-0050569525aa" (UID: "bd9e4a13-487b-11e6-97e9-0050569525aa"). Volume is already mounted to pod, but remount was requested.
2016-07-12T22:03:24.408126259Z I0712 22:03:24.407818    1682 operation_executor.go:720] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/default-token-97yu5" (spec.Name: "default-token-97yu5") pod "bd9e4a13-487b-11e6-97e9-0050569525aa" (UID: "bd9e4a13-487b-11e6-97e9-0050569525aa").


2016-07-12T22:03:24.791296154Z I0712 22:03:24.790897    1682 kubelet.go:2561] SyncLoop (PLEG): "aa-test-http-server-x5ups_prod(70816d38-487c-11e6-97e9-0050569525aa)", event: &pleg.PodLifecycleEvent{ID:"70816d38-487c-11e6-97e9-0050569525aa", Type:"ContainerStarted", Data:"04a0f2fc8e21f57d1579029750acb623865fc2241c39b66ddc4ebcdbc7ba6817"}
2016-07-12T22:03:25.806406744Z I0712 22:03:25.806024    1682 kubelet.go:2561] SyncLoop (PLEG): "aa-test-http-server-x5ups_prod(70816d38-487c-11e6-97e9-0050569525aa)", event: &pleg.PodLifecycleEvent{ID:"70816d38-487c-11e6-97e9-0050569525aa", Type:"ContainerStarted", Data:"d9007694d9153be3e44b723f25bd535009ebcc59b6183854f7faea9319fd0ae4"}
2016-07-12T22:03:25.903979870Z I0712 22:03:25.903611    1682 reconciler.go:253] MountVolume operation started for volume "kubernetes.io/secret/default-token-yhdrg" (spec.Name: "default-token-yhdrg") to pod "70816d38-487c-11e6-97e9-0050569525aa" (UID: "70816d38-487c-11e6-97e9-0050569525aa"). Volume is already mounted to pod, but remount was requested.
2016-07-12T22:03:25.912781049Z I0712 22:03:25.912574    1682 operation_executor.go:720] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/default-token-yhdrg" (spec.Name: "default-token-yhdrg") pod "70816d38-487c-11e6-97e9-0050569525aa" (UID: "70816d38-487c-11e6-97e9-0050569525aa").


2016-07-12T22:03:26.908559241Z I0712 22:03:26.908027    1682 reconciler.go:253] MountVolume operation started for volume "kubernetes.io/secret/default-token-yhdrg" (spec.Name: "default-token-yhdrg") to pod "70816d38-487c-11e6-97e9-0050569525aa" (UID: "70816d38-487c-11e6-97e9-0050569525aa"). Volume is already mounted to pod, but remount was requested.
2016-07-12T22:03:26.913523187Z I0712 22:03:26.913374    1682 operation_executor.go:720] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/default-token-yhdrg" (spec.Name: "default-token-yhdrg") pod "70816d38-487c-11e6-97e9-0050569525aa" (UID: "70816d38-487c-11e6-97e9-0050569525aa").
2016-07-12T22:03:31.328176710Z I0712 22:03:31.327736    1682 reconciler.go:253] MountVolume operation started for volume "kubernetes.io/secret/default-token-yhdrg" (spec.Name: "default-token-yhdrg") to pod "bdb157a8-487b-11e6-97e9-0050569525aa" (UID: "bdb157a8-487b-11e6-97e9-0050569525aa"). Volume is already mounted to pod, but remount was requested.
2016-07-12T22:03:31.338621403Z I0712 22:03:31.338433    1682 operation_executor.go:720] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/default-token-yhdrg" (spec.Name: "default-token-yhdrg") pod "bdb157a8-487b-11e6-97e9-0050569525aa" (UID: "bdb157a8-487b-11e6-97e9-0050569525aa").

Happy to try more things, just let me know. I'm not really sure what to do from this point.

cheld (Contributor, Author) commented Jul 13, 2016

You need to add the shared flag to the kubelet volume mount. This is required to make mounts created by the kubelet visible to other containers (e.g. service account secrets):

  --volume=/var/lib/kubelet/:/var/lib/kubelet:rw,shared

sekka1 commented Jul 15, 2016

@cheld Cool! That works! The token and ca.crt are there now. Thanks!

However, something is still different here. The ingress doesn't complain about not finding the token and crt now, but it is timing out on something. Same setup as v1.2.5, except for this change to the minion nodes running the kubelet.

F0715 04:35:17.177010       1 main.go:125] unexpected error getting runtime information: timed out waiting for the condition

This could be just my setup. I will have to troubleshoot it some more to see what it is actually trying to do.
