Getting errors when attempting token authentication from kubelet #10297

Closed
DreadPirateShawn opened this Issue Jun 24, 2015 · 7 comments

DreadPirateShawn commented Jun 24, 2015

I'm having difficulty getting token-based authentication configured without degrading the behavior of the kubelets.

I've got a master and 3 nodes, running processes like so:

== master (10.0.0.2) ==

/opt/kubernetes/bin/kube-apiserver --v=5 --logtostderr=false --stderrthreshold=1
    --log_dir=/logs/kube-master1
    --insecure-bind-address=10.0.0.2
    --etcd-servers=http://localhost:4001
    --secure_port=6443
    --token-auth-file=/config/kubernetes/known_tokens.csv
    --insecure-port=8080
    --kubelet_port=10250
    --portal_net=11.1.1.0/24
/opt/kubernetes/bin/kube-controller-manager --v=5 --logtostderr=false --stderrthreshold=1
    --log_dir=/logs/kube-master1
    --master=http://10.0.0.2:8080
    --address=10.0.0.2
    --machines=10.0.0.3,10.0.0.4,10.0.0.7,
    --port=10252
/opt/kubernetes/bin/kube-scheduler --v=5 --logtostderr=false --stderrthreshold=1
    --log_dir=/logs/kube-master1
    --address=10.0.0.2
    --port=10251
    --master=http://10.0.0.2:8080

== nodes (10.0.0.3, 10.0.0.4, 10.0.0.7) ==

/opt/kubernetes/bin/kube-proxy --v=5 --logtostderr=false --stderrthreshold=1
    --log_dir=/logs/kube-minion2
    --kubeconfig=/config/kubernetes/kube_proxy_config
    --master=https://10.0.0.2:6443
/opt/kubernetes/bin/kubelet --v=5 --logtostderr=false --stderrthreshold=1
    --log_dir=/logs/kube-minion2 --port=10250
    --address=0.0.0.0
    --hostname_override=10.0.0.4
    --cadvisor-port=4194
    --api-servers=http://10.0.0.2:8080

My token files are here, with both auth file and kubeconfig variations:

== known_tokens.csv ==

abcdefgtoken1randomstuffhere,kubelet,kubelet
abcdefgtoken2randomstuffagain,kube-proxy,kube-proxy

== kubernetes_auth ==

{"BearerToken": "abcdefgtoken1randomstuffhere", "Insecure": true }

== kubelet_config ==

apiVersion: v1
kind: Config
users:
- name: kubelet
  user:
    token: abcdefgtoken1randomstuffhere
clusters:
- name: local
  cluster:
    insecure-skip-tls-verify: true

== kube_proxy_config ==

apiVersion: v1
kind: Config
users:
- name: kube-proxy
  user:
    token: abcdefgtoken2randomstuffagain
clusters:
- name: local
  cluster:
     insecure-skip-tls-verify: true

My problem:

For kube-proxy, everything seems to behave the same regardless of whether I use --master=https://10.0.0.2:6443 --kubeconfig=/config/kubernetes/kube_proxy_config or simply --master=http://10.0.0.2:8080.

For kubelet, I'm having more difficulty. I can get things to work if I use http, the insecure port, and no auth_path or kubeconfig.

However, when using https + the secure port for --api-servers, certificate errors occur unless I include the --auth_path flag (which does not appear in kubelet --help and which I believe may be deprecated)... but even with --auth_path to address the certificate errors, I still see other errors, which vary depending on whether --kubeconfig is provided.

(As a baseline question -- can token-based authentication be used with either "http + insecure port" or "https + secure port"? If not, that might chop the following table in half right away.)

Below is the summary of the flag combinations I've attempted for kubelet, followed by the resulting excerpts from the logs.

Can you help clarify which combination I should be aiming for in the first place, and weigh in on what appears to be going awry? And for that matter, is the "Steady data flow" output correct, or is that a red herring and still not behaving correctly?

--api-servers=           --auth_path  --kubeconfig  result
http://10.0.0.2:8080     no           no            Steady data flow.
https://10.0.0.2:6443    no           no            Certificate errors.
http://10.0.0.2:8080     yes          no            Rejected event, then no data flow.
https://10.0.0.2:6443    yes          no            Rejected event, then no data flow.
http://10.0.0.2:8080     no           yes           Initial event, then no data flow.
https://10.0.0.2:6443    no           yes           Certificate errors.
http://10.0.0.2:8080     yes          yes           Error getting node.
https://10.0.0.2:6443    yes          yes           Error getting node.

Note that "yes" and "no" for auth_path and kubeconfig is shorthand for the presence or absence of --auth_path=/config/kubernetes/kubernetes_auth and --kubeconfig=/config/kubernetes/kubelet_config, the contents of which files are included earlier in this ticket.

== Error getting node: ==

I0624 17:33:27.566388    3831 kubelet.go:1616] Starting kubelet main sync loop.
I0624 17:33:37.567446    3831 kubelet.go:1634] Periodic sync
E0624 17:33:37.567642    3831 kubelet.go:1538] error getting node: node 10.0.0.4 not found
I0624 17:33:37.567717    3831 kubelet.go:1325] Desired: []*api.Pod(nil)
I0624 17:33:47.574430    3831 kubelet.go:1634] Periodic sync
E0624 17:33:47.574592    3831 kubelet.go:1538] error getting node: node 10.0.0.4 not found
I0624 17:33:47.574640    3831 kubelet.go:1325] Desired: []*api.Pod(nil)
I0624 17:33:57.577779    3831 kubelet.go:1634] Periodic sync
E0624 17:33:57.578211    3831 kubelet.go:1538] error getting node: node 10.0.0.4 not found
I0624 17:33:57.578447    3831 kubelet.go:1325] Desired: []*api.Pod(nil)

== Rejected event, then no data flow: ==

I0624 17:31:51.338356    3512 kubelet.go:1616] Starting kubelet main sync loop.
I0624 17:31:51.452650    3512 kubelet.go:1790] Recording NodeReady event message for node 10.0.0.4
I0624 17:31:51.453042    3512 kubelet.go:749] Attempting to register node 10.0.0.4
I0624 17:31:51.453393    3512 event.go:203] Event(api.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.4", UID:"10.0.0.4", APIVersion:"", ResourceVersion:"", FieldPath:""}): reason: 'NodeReady' Node 10.0.0.4 status is now: NodeReady
I0624 17:31:51.876909    3512 kubelet.go:762] Node 10.0.0.4 was previously registered
I0624 17:31:51.877228    3512 kubelet.go:782] Starting node status updates
I0624 17:32:01.338844    3512 kubelet.go:1634] Periodic sync
I0624 17:32:01.339456    3512 kubelet.go:1325] Desired: []*api.Pod(nil)
I0624 17:32:01.880804    3512 kubelet.go:1790] Recording NodeReady event message for node 10.0.0.4
I0624 17:32:01.880900    3512 event.go:203] Event(api.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.4", UID:"10.0.0.4", APIVersion:"", ResourceVersion:"", FieldPath:""}): reason: 'NodeReady' Node 10.0.0.4 status is now: NodeReady
I0624 17:32:11.343847    3512 kubelet.go:1634] Periodic sync
I0624 17:32:11.344304    3512 kubelet.go:1325] Desired: []*api.Pod(nil)
I0624 17:32:12.379752    3512 kubelet.go:1790] Recording NodeReady event message for node 10.0.0.4
I0624 17:32:12.380633    3512 event.go:203] Event(api.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.4", UID:"10.0.0.4", APIVersion:"", ResourceVersion:"", FieldPath:""}): reason: 'NodeReady' Node 10.0.0.4 status is now: NodeReady
E0624 17:32:12.569829    3512 event.go:185] Server rejected event '&api.Event{TypeMeta:api.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:api.ObjectMeta{Name:"10.0.0.4.13eabce24f7afa93", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"424098", CreationTimestamp:util.Time{Time:time.Time{sec:0, nsec:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*util.Time)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil)}, InvolvedObject:api.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.4", UID:"10.0.0.4", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeReady", Message:"Node 10.0.0.4 status is now: NodeReady", Source:api.EventSource{Component:"kubelet", Host:"10.0.0.4"}, FirstTimestamp:util.Time{Time:time.Time{sec:63570763911, nsec:0, loc:(*time.Location)(0x1b94040)}}, LastTimestamp:util.Time{Time:time.Time{sec:63570763932, nsec:380042143, loc:(*time.Location)(0x1b94040)}}, Count:3}': 'events "10.0.0.4.13eabce24f7afa93" cannot be updated: 101: Compare failed ([424098 != 519900]) [519900]' (will not retry!)
I0624 17:32:21.348021    3512 kubelet.go:1634] Periodic sync
I0624 17:32:21.348308    3512 kubelet.go:1325] Desired: []*api.Pod(nil)
I0624 17:32:31.351645    3512 kubelet.go:1634] Periodic sync
I0624 17:32:31.351936    3512 kubelet.go:1325] Desired: []*api.Pod(nil)
I0624 17:32:41.355545    3512 kubelet.go:1634] Periodic sync
I0624 17:32:41.356078    3512 kubelet.go:1325] Desired: []*api.Pod(nil)

== Initial event, then no data flow: ==

I0624 17:34:35.678212    4065 kubelet.go:1616] Starting kubelet main sync loop.
I0624 17:34:35.790459    4065 kubelet.go:1790] Recording NodeReady event message for node 10.0.0.4
I0624 17:34:35.790667    4065 kubelet.go:749] Attempting to register node 10.0.0.4
I0624 17:34:35.790918    4065 event.go:203] Event(api.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.4", UID:"10.0.0.4", APIVersion:"", ResourceVersion:"", FieldPath:""}): reason: 'NodeReady' Node 10.0.0.4 status is now: NodeReady
I0624 17:34:35.809017    4065 kubelet.go:762] Node 10.0.0.4 was previously registered
I0624 17:34:35.809218    4065 kubelet.go:782] Starting node status updates
I0624 17:34:45.678469    4065 kubelet.go:1634] Periodic sync
I0624 17:34:45.679193    4065 kubelet.go:1325] Desired: []*api.Pod(nil)
I0624 17:34:45.814787    4065 kubelet.go:1790] Recording NodeReady event message for node 10.0.0.4
I0624 17:34:45.814947    4065 event.go:203] Event(api.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.4", UID:"10.0.0.4", APIVersion:"", ResourceVersion:"", FieldPath:""}): reason: 'NodeReady' Node 10.0.0.4 status is now: NodeReady
I0624 17:34:55.685604    4065 kubelet.go:1634] Periodic sync
I0624 17:34:55.686360    4065 kubelet.go:1325] Desired: []*api.Pod(nil)
I0624 17:35:05.691533    4065 kubelet.go:1634] Periodic sync
I0624 17:35:05.691738    4065 kubelet.go:1325] Desired: []*api.Pod(nil)
I0624 17:35:15.694277    4065 kubelet.go:1634] Periodic sync
I0624 17:35:15.694630    4065 kubelet.go:1325] Desired: []*api.Pod(nil)

== Certificate errors: ==

I0624 17:38:49.927555    5012 kubelet.go:1616] Starting kubelet main sync loop.
I0624 17:38:50.028404    5012 kubelet.go:1790] Recording NodeReady event message for node 10.0.0.4
I0624 17:38:50.028529    5012 kubelet.go:749] Attempting to register node 10.0.0.4
I0624 17:38:50.028691    5012 event.go:203] Event(api.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.4", UID:"10.0.0.4", APIVersion:"", ResourceVersion:"", FieldPath:""}): reason: 'NodeReady' Node 10.0.0.4 status is now: NodeReady
I0624 17:38:50.031165    5012 kubelet.go:766] Unable to register 10.0.0.4 with the apiserver: Post https://10.0.0.2:6443/api/v1/nodes: x509: certificate signed by unknown authority
I0624 17:38:50.231512    5012 kubelet.go:1790] Recording NodeReady event message for node 10.0.0.4
I0624 17:38:50.231609    5012 kubelet.go:749] Attempting to register node 10.0.0.4
I0624 17:38:50.231776    5012 event.go:203] Event(api.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.4", UID:"10.0.0.4", APIVersion:"", ResourceVersion:"", FieldPath:""}): reason: 'NodeReady' Node 10.0.0.4 status is now: NodeReady
I0624 17:38:50.233549    5012 kubelet.go:766] Unable to register 10.0.0.4 with the apiserver: Post https://10.0.0.2:6443/api/v1/nodes: x509: certificate signed by unknown authority
I0624 17:38:50.634005    5012 kubelet.go:1790] Recording NodeReady event message for node 10.0.0.4
I0624 17:38:50.634087    5012 kubelet.go:749] Attempting to register node 10.0.0.4
I0624 17:38:50.634114    5012 event.go:203] Event(api.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.4", UID:"10.0.0.4", APIVersion:"", ResourceVersion:"", FieldPath:""}): reason: 'NodeReady' Node 10.0.0.4 status is now: NodeReady
I0624 17:38:50.635950    5012 kubelet.go:766] Unable to register 10.0.0.4 with the apiserver: Post https://10.0.0.2:6443/api/v1/nodes: x509: certificate signed by unknown authority
W0624 17:38:50.671938    5012 request.go:302] field selector: v1 - nodes - metadata.name - 10.0.0.4: need to check if this is versioned correctly.
W0624 17:38:50.672248    5012 request.go:302] field selector: v1 - pods - spec.host - 10.0.0.4: need to check if this is versioned correctly.
E0624 17:38:50.675072    5012 reflector.go:136] Failed to list *api.Node: Get https://10.0.0.2:6443/api/v1/nodes?fieldSelector=metadata.name%3D10.0.0.4: x509: certificate signed by unknown authority
E0624 17:38:50.675268    5012 reflector.go:136] Failed to list *api.Service: Get https://10.0.0.2:6443/api/v1/services: x509: certificate signed by unknown authority
E0624 17:38:50.689752    5012 reflector.go:136] Failed to list *api.Pod: Get https://10.0.0.2:6443/api/v1/pods?fieldSelector=spec.host%3D10.0.0.4: x509: certificate signed by unknown authority
I0624 17:38:51.436414    5012 kubelet.go:1790] Recording NodeReady event message for node 10.0.0.4

== Steady data flow: ==

I0624 17:36:05.445298    4371 kubelet.go:1616] Starting kubelet main sync loop.
I0624 17:36:05.445334    4371 pod_manager.go:91] SET: Containers changed
I0624 17:36:05.457273    4371 kubelet.go:1325] Desired: []*api.Pod{(*api.Pod)(0xc208108d20)}
I0624 17:36:05.458123    4371 container.go:291] Start housekeeping for container "/kube-proxy"
I0624 17:36:05.458608    4371 container.go:291] Start housekeeping for container "/docker"
I0624 17:36:05.458992    4371 container.go:291] Start housekeeping for container "/docker/5ff164f903315ac702e5df4e174c1e162a1238709bf2aa767ba29754e14351aa"
I0624 17:36:05.459553    4371 container.go:291] Start housekeeping for container "/docker/d8e82364d31f4c2a1c8d7e4f1ffa95eae30b07cbba5bdd86ef92776ccff1c0ec"
I0624 17:36:05.462344    4371 kubelet.go:2061] Generating status for "mystuff-1jq7j_ns-mystuff"
I0624 17:36:05.472919    4371 manager.go:273] Container inspect result: {ID:864e232f42424f798e7a6c57da42e1226bcefa1d77d07b43aba10a8c3696b137 Created:2015-06-24 17:13:05.393004643 +0000 UTC Path:/bin/sh Args:[-c "$STARTUP_DIR/startup.sh"] Config:0xc20819ca80 State:{Running:false Paused:false Restarting:false OOMKilled:false Pid:0 ExitCode:137 Error: StartedAt:2015-06-24 17:13:05.49787239 +0000 UTC FinishedAt:2015-06-24 17:15:03.332783292 +0000 UTC} Image:2f674ca96f8214053a91a0b30f5eb5914e74f3146f9546e06f7e9c489b32b62b Node:<nil> NetworkSettings:0xc2084781e0 SysInitPath: ResolvConfPath:/var/lib/docker/containers/e5b892c46f0cf3bdc10d53e3ff39d9462b3cba0cecccd79261d0684ba56de18c/resolv.conf HostnamePath:/var/lib/docker/containers/e5b892c46f0cf3bdc10d53e3ff39d9462b3cba0cecccd79261d0684ba56de18c/hostname HostsPath:/var/lib/docker/containers/e5b892c46f0cf3bdc10d53e3ff39d9462b3cba0cecccd79261d0684ba56de18c/hosts Name:/k8s_mystuff.a7674930_mystuff-1jq7j_ns-mystuff_09cd418d-19f6-11e5-b231-00505629c58f_72b49219 Driver:aufs Volumes:map[/dev/termination-log:/var/lib/kubelet/pods/09cd418d-19f6-11e5-b231-00505629c58f/containers/mystuff/864e232f42424f798e7a6c57da42e1226bcefa1d77d07b43aba10a8c3696b137] VolumesRW:map[/dev/termination-log:true] HostConfig:0xc2081085a0 ExecIDs:[] AppArmorProfile:}
E0624 17:36:05.473133    4371 manager.go:310] Error on reading termination-log /var/lib/kubelet/pods/09cd418d-19f6-11e5-b231-00505629c58f/containers/mystuff/864e232f42424f798e7a6c57da42e1226bcefa1d77d07b43aba10a8c3696b137: open /var/lib/kubelet/pods/09cd418d-19f6-11e5-b231-00505629c58f/containers/mystuff/864e232f42424f798e7a6c57da42e1226bcefa1d77d07b43aba10a8c3696b137: no such file or directory
I0624 17:36:05.476533    4371 manager.go:273] Container inspect result: {ID:e5b892c46f0cf3bdc10d53e3ff39d9462b3cba0cecccd79261d0684ba56de18c Created:2015-06-24 17:13:05.167502862 +0000 UTC Path:/pause Args:[] Config:0xc20819d680 State:{Running:false Paused:false Restarting:false OOMKilled:false Pid:0 ExitCode:0 Error: StartedAt:2015-06-24 17:13:05.313470175 +0000 UTC FinishedAt:2015-06-24 17:15:03.488593132 +0000 UTC} Image:2c40b0526b6358710fd09e7b8c022429268cc61703b4777e528ac9d469a07ca1 Node:<nil> NetworkSettings:0xc208478410 SysInitPath: ResolvConfPath:/var/lib/docker/containers/e5b892c46f0cf3bdc10d53e3ff39d9462b3cba0cecccd79261d0684ba56de18c/resolv.conf HostnamePath:/var/lib/docker/containers/e5b892c46f0cf3bdc10d53e3ff39d9462b3cba0cecccd79261d0684ba56de18c/hostname HostsPath:/var/lib/docker/containers/e5b892c46f0cf3bdc10d53e3ff39d9462b3cba0cecccd79261d0684ba56de18c/hosts Name:/k8s_POD.e4cc795_mystuff-1jq7j_ns-mystuff_09cd418d-19f6-11e5-b231-00505629c58f_249286ca Driver:aufs Volumes:map[] VolumesRW:map[] HostConfig:0xc2081094a0 ExecIDs:[] AppArmorProfile:}
I0624 17:36:05.479024    4371 manager.go:273] Container inspect result: {ID:19e08a5e6dee810c700ae288e4b645e2680701ae84bab10e36ba09b5d45e2144 Created:2015-06-24 17:09:19.243973885 +0000 UTC Path:/bin/sh Args:[-c "$STARTUP_DIR/startup.sh"] Config:0xc20819db00 State:{Running:false Paused:false Restarting:false OOMKilled:false Pid:0 ExitCode:137 Error: StartedAt:2015-06-24 17:09:19.364811931 +0000 UTC FinishedAt:2015-06-24 17:11:33.632115788 +0000 UTC} Image:2f674ca96f8214053a91a0b30f5eb5914e74f3146f9546e06f7e9c489b32b62b Node:<nil> NetworkSettings:0xc208478730 SysInitPath: ResolvConfPath:/var/lib/docker/containers/14b81c9ec56b8983d131d68a6d48d7725ed706942802bbae3748344b54b1d6fb/resolv.conf HostnamePath:/var/lib/docker/containers/14b81c9ec56b8983d131d68a6d48d7725ed706942802bbae3748344b54b1d6fb/hostname HostsPath:/var/lib/docker/containers/14b81c9ec56b8983d131d68a6d48d7725ed706942802bbae3748344b54b1d6fb/hosts Name:/k8s_mystuff.a7674930_mystuff-1jq7j_ns-mystuff_09cd418d-19f6-11e5-b231-00505629c58f_c5322e0c Driver:aufs Volumes:map[/dev/termination-log:/var/lib/kubelet/pods/09cd418d-19f6-11e5-b231-00505629c58f/containers/mystuff/19e08a5e6dee810c700ae288e4b645e2680701ae84bab10e36ba09b5d45e2144] VolumesRW:map[/dev/termination-log:true] HostConfig:0xc208109680 ExecIDs:[] AppArmorProfile:}
E0624 17:36:05.479142    4371 manager.go:310] Error on reading termination-log /var/lib/kubelet/pods/09cd418d-19f6-11e5-b231-00505629c58f/containers/mystuff/19e08a5e6dee810c700ae288e4b645e2680701ae84bab10e36ba09b5d45e2144: open /var/lib/kubelet/pods/09cd418d-19f6-11e5-b231-00505629c58f/containers/mystuff/19e08a5e6dee810c700ae288e4b645e2680701ae84bab10e36ba09b5d45e2144: no such file or directory
I0624 17:36:05.481304    4371 manager.go:273] Container inspect result: {ID:14b81c9ec56b8983d131d68a6d48d7725ed706942802bbae3748344b54b1d6fb Created:2015-06-24 17:09:19.032803235 +0000 UTC Path:/pause Args:[] Config:0xc208282300 State:{Running:false Paused:false Restarting:false OOMKilled:false Pid:0 ExitCode:0 Error: StartedAt:2015-06-24 17:09:19.191252504 +0000 UTC FinishedAt:2015-06-24 17:11:33.741571588 +0000 UTC} Image:2c40b0526b6358710fd09e7b8c022429268cc61703b4777e528ac9d469a07ca1 Node:<nil> NetworkSettings:0xc208478960 SysInitPath: ResolvConfPath:/var/lib/docker/containers/14b81c9ec56b8983d131d68a6d48d7725ed706942802bbae3748344b54b1d6fb/resolv.conf HostnamePath:/var/lib/docker/containers/14b81c9ec56b8983d131d68a6d48d7725ed706942802bbae3748344b54b1d6fb/hostname HostsPath:/var/lib/docker/containers/14b81c9ec56b8983d131d68a6d48d7725ed706942802bbae3748344b54b1d6fb/hosts Name:/k8s_POD.e4cc795_mystuff-1jq7j_ns-mystuff_09cd418d-19f6-11e5-b231-00505629c58f_f5c7e7db Driver:aufs Volumes:map[] VolumesRW:map[] HostConfig:0xc208109a40 ExecIDs:[] AppArmorProfile:}
I0624 17:36:05.481428    4371 manager.go:1296] Syncing Pod &{TypeMeta:{Kind: APIVersion:} ObjectMeta:{Name:mystuff-1jq7j GenerateName:mystuff- Namespace:ns-mystuff SelfLink:/api/v1/namespaces/ns-mystuff/pods/mystuff-1jq7j UID:09cd418d-19f6-11e5-b231-00505629c58f ResourceVersion:519827 CreationTimestamp:2015-06-23 22:20:22 +0000 UTC DeletionTimestamp:<nil> Labels:map[app:hosting_platform name:mystuff] Annotations:map[kubernetes.io/config.source:api kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"ns-mystuff","name":"mystuff","uid":"09c5b028-19f6-11e5-b231-00505629c58f","apiVersion":"v1","resourceVersion":"509186"}}]} Spec:{Volumes:[] Containers:[{Name:mystuff Image:dev.foo.com:5000/foo/mystuff:314159a10cee8edd3059d0fce4036c72c2527392 Command:[] Args:[] WorkingDir: Ports:[] Env:[{Name:CONFIGS_REVISION Value:tip ValueFrom:<nil>}] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[] LivenessProbe:<nil> ReadinessProbe:<nil> Lifecycle:<nil> TerminationMessagePath:/dev/termination-log ImagePullPolicy:IfNotPresent SecurityContext:<nil>}] RestartPolicy:Always TerminationGracePeriodSeconds:<nil> ActiveDeadlineSeconds:<nil> DNSPolicy:ClusterFirst NodeSelector:map[] ServiceAccount: NodeName:10.0.0.4 HostNetwork:false ImagePullSecrets:[]} Status:{Phase:Running Conditions:[{Type:Ready Status:False}] Message: HostIP:10.0.0.4 PodIP:10.244.71.4 StartTime:2015-06-23 22:20:23 +0000 UTC ContainerStatuses:[{Name:mystuff State:{Waiting:<nil> Running:0xc208300d20 Terminated:<nil>} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:0xc2081edea0} Ready:false RestartCount:1 Image:dev.foo.com:5000/foo/mystuff:314159a10cee8edd3059d0fce4036c72c2527392 ImageID:docker://2f674ca96f8214053a91a0b30f5eb5914e74f3146f9546e06f7e9c489b32b62b ContainerID:docker://864e232f42424f798e7a6c57da42e1226bcefa1d77d07b43aba10a8c3696b137}]}}, podFullName: "mystuff-1jq7j_ns-mystuff", uid: "09cd418d-19f6-11e5-b231-00505629c58f"
I0624 17:36:05.481648    4371 manager.go:1316] Need to restart pod infra container for "mystuff-1jq7j_ns-mystuff" because it is not found
I0624 17:36:05.481772    4371 manager.go:1335] Container {Name:mystuff Image:dev.foo.com:5000/foo/mystuff:314159a10cee8edd3059d0fce4036c72c2527392 Command:[] Args:[] WorkingDir: Ports:[] Env:[{Name:CONFIGS_REVISION Value:tip ValueFrom:<nil>}] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[] LivenessProbe:<nil> ReadinessProbe:<nil> Lifecycle:<nil> TerminationMessagePath:/dev/termination-log ImagePullPolicy:IfNotPresent SecurityContext:<nil>} is dead, but RestartPolicy says that we should restart it.
I0624 17:36:05.481831    4371 manager.go:1442] Got container changes for pod "mystuff-1jq7j_ns-mystuff": {StartInfraContainer:true InfraContainerId: ContainersToStart:map[0:{}] ContainersToKeep:map[]}
I0624 17:36:05.482234    4371 manager.go:1451] Killing Infra Container for "mystuff-1jq7j_ns-mystuff", will start new one
I0624 17:36:05.482287    4371 manager.go:1476] Creating pod infra container for "mystuff-1jq7j_ns-mystuff"
I0624 17:36:05.504232    4371 event.go:203] Event(api.ObjectReference{Kind:"Pod", Namespace:"ns-mystuff", Name:"mystuff-1jq7j", UID:"09cd418d-19f6-11e5-b231-00505629c58f", APIVersion:"v1", ResourceVersion:"519827", FieldPath:"implicitly required container POD"}): reason: 'pulled' Successfully pulled image "gcr.io/google_containers/pause:0.8.0"
I0624 17:36:05.504465    4371 manager.go:601] Container ns-mystuff/mystuff-1jq7j/POD: setting entrypoint "[]" and command "[]"
I0624 17:36:05.557391    4371 kubelet.go:1790] Recording NodeReady event message for node 10.0.0.4
I0624 17:36:05.557707    4371 kubelet.go:749] Attempting to register node 10.0.0.4
I0624 17:36:05.557740    4371 event.go:203] Event(api.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.4", UID:"10.0.0.4", APIVersion:"", ResourceVersion:"", FieldPath:""}): reason: 'NodeReady' Node 10.0.0.4 status is now: NodeReady
I0624 17:36:05.579841    4371 event.go:203] Event(api.ObjectReference{Kind:"Pod", Namespace:"ns-mystuff", Name:"mystuff-1jq7j", UID:"09cd418d-19f6-11e5-b231-00505629c58f", APIVersion:"v1", ResourceVersion:"519827", FieldPath:"implicitly required container POD"}): reason: 'created' Created with docker id 83a4687f0beab015752a50f43cc5cac20272ab9f79d654763cfe003ca770570a
I0624 17:36:05.668454    4371 kubelet.go:762] Node 10.0.0.4 was previously registered
I0624 17:36:05.668510    4371 kubelet.go:782] Starting node status updates
mbforbes commented Jun 24, 2015

+cc @roberthbailey -- you might know something about this, or who to point @DreadPirateShawn to.

roberthbailey commented Jun 24, 2015

(As a baseline question -- can token-based authentication be used with either "http + insecure port" or "https + secure port"? If not, that might chop the following table in half right away.)

For the insecure port, the apiserver doesn't do any authentication checking. So it's both insecure in the sense that the traffic is unencrypted and also in the sense that all requests are accepted without any extra credentials (this is why it's configured to only listen on localhost by default).

Since you set insecure-bind-address to 10.0.0.2 rather than leaving as the default value of 127.0.0.1, anyone who can connect to 10.0.0.2:8080 will have unauthenticated and unencrypted access to your master (this is why passing --master=http://10.0.0.2:8080 to the kube-proxy is working).

For GCE, we use a kubeconfig file (without passing the flag because it's in the default location) along with passing the --api_servers flag on the command line. The kubeconfig file contains client credentials (certificate/key) for the kubelet as well as the ca certificate for the cluster (so that the kubelet can verify the master's certificate). This isn't much different from what you are doing, except that instead of using certificates you are disabling the server cert checking in the kubelet and supplying bearer token credentials to the server. This should work just as well as the certificate configuration that is used on GCE (although the connections can be man-in-the-middled).
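
As a quick sanity check of the two ports (a sketch, not part of the GCE setup), you can hit the apiserver directly with curl: the insecure port answers plain HTTP with no credentials, while the secure port expects TLS plus a bearer token from known_tokens.csv. The -k flag skips server certificate verification, mirroring insecure-skip-tls-verify in your kubeconfig; with a proper CA you'd pass --cacert instead.

# insecure port: unencrypted, unauthenticated
curl http://10.0.0.2:8080/api/v1/nodes

# secure port: TLS + bearer token
TOKEN=abcdefgtoken1randomstuffhere
curl -k -H "Authorization: Bearer $TOKEN" https://10.0.0.2:6443/api/v1/nodes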

It seems like what you are missing in your kubeconfig files is the context which pulls together the cluster definition with a user definition and the specification of the current context. Can you try adding the following to the bottom of your kubelet's kubeconfig file:

contexts:
- context:
    cluster: local
    user: kubelet
  name: service-account-context
current-context: service-account-context

and see if that helps?
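
For clarity, the combined kubelet kubeconfig would then look roughly like this (the user and cluster stanzas you already have, plus the context wiring):

apiVersion: v1
kind: Config
users:
- name: kubelet
  user:
    token: abcdefgtoken1randomstuffhere
clusters:
- name: local
  cluster:
    insecure-skip-tls-verify: true
contexts:
- context:
    cluster: local
    user: kubelet
  name: service-account-context
current-context: service-account-context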

goltermann commented Aug 25, 2015

We’re going through old support issues and asking everyone to direct their questions to Stack Overflow.

We are trying to consolidate the channels to which questions for help/support are posted so that we can improve our efficiency in responding to your requests, and to make it easier for you to find answers to frequently asked questions and how to address common use cases.

We regularly see messages posted in multiple forums, with the full response thread only in one place or, worse, spread across multiple forums. Also, the large volume of support issues on GitHub is making it difficult for us to use issues to identify real bugs.

The Kubernetes team scans Stack Overflow on a regular basis, and will try to ensure your questions don't go unanswered.

goltermann closed this Aug 25, 2015

DreadPirateShawn commented Sep 15, 2015

Updating the closed ticket, in case anyone finds this while looking for documentation on how to use tokens for cluster authentication.

First -- adding the context info (per @roberthbailey's advice) was a key breakthrough -- thanks Robert!

Now, to recap:

Per a buried example, tokens can be generated using:

dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64 | tr -d "=+/" | dd bs=32 count=1 2>/dev/null
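
For example (a small sketch), that output can be captured and appended as a row in the token,user,uid format that --token-auth-file expects:

TOKEN=$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64 | tr -d "=+/" | dd bs=32 count=1 2>/dev/null)
# each known_tokens.csv row is: token,user-name,user-uid
echo "${TOKEN},k8s,k8s" >> known_tokens.csv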

Next, I generated a single token for all of the user processes to share -- kubelet, kube-proxy, kube-scheduler, kube-controller-manager, and kubectl.

known_tokens.csv

token_goes_here,k8s,k8s

config

apiVersion: v1
kind: Config
clusters:
- name: local
  cluster:
    insecure-skip-tls-verify: true
contexts:
- context:
    cluster: local
    user: k8s
  name: service-account-context
current-context: service-account-context
users:
- name: k8s
  user:
    token: token_goes_here

Then, pass --token-auth-file to kube-apiserver and --kubeconfig to everything else, using https://MASTER_IP:SECURE_PORT for all references to the master. Remember that kubectl also needs the auth config; in my case we're running heapster outside of the cluster, and its auth param worked just fine.

kube-apiserver \
        --insecure-bind-address=127.0.0.1 \
        --etcd-servers=http://localhost:4001 \
        --secure_port=6443 \
        --token-auth-file=known_tokens.csv \
        --insecure-port=8080 \
        --kubelet_port=10250 \
        --admission_control=NamespaceLifecycle,NamespaceExists \
        --portal_net=11.1.1.0/24

kube-controller-manager \
        --master=https://10.0.0.2:6443 \
        --kubeconfig=config \
        --address=10.0.0.2 \
        --port=10252

kube-scheduler \
        --master=https://10.0.0.2:6443 \
        --kubeconfig=config \
        --address=10.0.0.2 \
        --port=10251

kubelet \
        --api-servers=https://10.0.0.2:6443 \
        --kubeconfig=config \
        --port=10250 \
        --address=0.0.0.0 \
        --hostname_override=10.0.0.3 \
        --cadvisor-port=4194

kube-proxy \
        --master=https://10.0.0.2:6443 \
        --kubeconfig=config

kubectl \
        --server=https://10.0.0.2:6443 \
        --kubeconfig=config \
        get pods

heapster \
        --port=8082 \
        --source="kubernetes:https://10.0.0.2:6443?inClusterConfig=false&auth=config"

I couldn't find any documentation containing a token-usage walkthrough like this, so I figured I'd at least share the initial spread of touchpoints which worked for us. For posterity. :-)

(EDIT: I initially included a separate token for a user matching each process, e.g. one token for a "kubelet" user, one for a "kube-scheduler" user, etc., but then realized that I can simply use a single token -- naming a user to match each process is not necessary. It's still possible, in which case each process would load a different auth config file with its own token, but that's not essential to the core example.)
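
For reference, that per-process variant would have looked something like this known_tokens.csv, with one row per component and a matching kubeconfig for each (token values are placeholders):

kubelet_token_here,kubelet,kubelet
kube-proxy_token_here,kube-proxy,kube-proxy
kube-scheduler_token_here,kube-scheduler,kube-scheduler
kube-controller-manager_token_here,kube-controller-manager,kube-controller-manager
kubectl_token_here,kubectl,kubectl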

roberthbailey commented Sep 15, 2015

I'm glad you got it working. :)

workhardcc commented Jun 1, 2016

This helped me a lot! But I think we don't need to add --kubeconfig=config to every service config file -- only kubelet, kube-proxy, and kubectl?

harshal-shah commented May 29, 2017

@DreadPirateShawn Thanks so much for writing this. I was trying to create a token-auth-based k8s cluster for a peculiar use case, and these steps helped me a lot.
