Running from Docker Container Fails #5366

Closed
smugcloud opened this issue Oct 23, 2015 · 5 comments
I'm trying to get Origin up and running on an AWS Linux AMI and am getting some strange errors when I attempt to start the container from these instructions. Here is my start command, followed by the meat of the log output (the last SyncLoop section keeps looping):

Start command
sudo docker run -d --name "openshift-origin" --net=host --privileged -v /var/run/docker.sock:/var/run/docker.sock -v /tmp/openshift:/tmp/openshift openshift/origin start --loglevel=5
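Before the log, it may help to check what the container can actually see, since the failures further down all point at missing host interfaces (cgroup mounts, the D-Bus socket). A small diagnostic sketch, meant to be run inside the container (e.g. via `sudo docker exec -it openshift-origin sh`); the specific paths are taken from the error messages in the log, not from any documentation:

```shell
# Diagnostic sketch: check for the host interfaces the log's errors complain
# about. Both paths are assumptions lifted from the error messages below.
if grep -q cgroup /proc/mounts; then
    echo "cgroup filesystems are mounted"
else
    echo "no cgroup mounts visible (kubelet/cAdvisor registration will fail)"
fi

if [ -S /var/run/dbus/system_bus_socket ]; then
    echo "D-Bus system bus socket present"
else
    echo "D-Bus system bus socket missing (the iptables D-Bus check will warn)"
fi
```

If the first check fails inside the container but succeeds on the host, the container simply isn't being given a view of the host's cgroup hierarchy.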

Log Output

I1023 17:56:32.286091       1 crypto.go:275] Generating client cert in openshift.local.config/master/openshift-router.crt and key in openshift.local.config/master/openshift-router.key
I1023 17:56:35.849010       1 create_servercert.go:122] Generated new server certificate as openshift.local.config/master/etcd.server.crt, key as openshift.local.config/master/etcd.server.key
I1023 17:56:36.149595       1 create_clientcert.go:69] Generated new client cert as openshift.local.config/master/openshift-router.crt and key as openshift.local.config/master/openshift-router.key
I1023 17:56:36.149755       1 create_kubeconfig.go:132] creating a .kubeconfig with: admin.CreateKubeConfigOptions{APIServerURL:"https://172.31.46.11:8443", PublicAPIServerURL:"https://172.31.46.11:8443", APIServerCAFile:"openshift.local.config/master/ca.crt", CertFile:"openshift.local.config/master/openshift-router.crt", KeyFile:"openshift.local.config/master/openshift-router.key", ContextNamespace:"default", KubeConfigFile:"openshift.local.config/master/openshift-router.kubeconfig", Output:(*os.File)(0xc20802e008)}
I1023 17:56:36.438740       1 create_kubeconfig.go:206] Generating 'system:openshift-router/172-31-46-11:8443' API client config as openshift.local.config/master/openshift-router.kubeconfig
I1023 17:56:36.440504       1 create_clientcert.go:53] Creating a client cert with: admin.CreateClientCertOptions{SignerCertOptions:(*admin.SignerCertOptions)(0xc2084e72c0), CertFile:"openshift.local.config/master/openshift-registry.crt", KeyFile:"openshift.local.config/master/openshift-registry.key", User:"system:openshift-registry", Groups:util.StringList{"system:registries"}, Overwrite:false, Output:(*os.File)(0xc20802e008)} and &admin.SignerCertOptions{CertFile:"openshift.local.config/master/ca.crt", KeyFile:"openshift.local.config/master/ca.key", SerialFile:"openshift.local.config/master/ca.serial.txt", lock:sync.Mutex{state:0, sema:0x0}, ca:(*crypto.CA)(0xc20843c900)}
I1023 17:56:36.440620       1 crypto.go:275] Generating client cert in openshift.local.config/master/openshift-registry.crt and key in openshift.local.config/master/openshift-registry.key
I1023 17:56:38.219644       1 create_clientcert.go:69] Generated new client cert as openshift.local.config/master/openshift-registry.crt and key as openshift.local.config/master/openshift-registry.key
I1023 17:56:38.219803       1 create_kubeconfig.go:132] creating a .kubeconfig with: admin.CreateKubeConfigOptions{APIServerURL:"https://172.31.46.11:8443", PublicAPIServerURL:"https://172.31.46.11:8443", APIServerCAFile:"openshift.local.config/master/ca.crt", CertFile:"openshift.local.config/master/openshift-registry.crt", KeyFile:"openshift.local.config/master/openshift-registry.key", ContextNamespace:"default", KubeConfigFile:"openshift.local.config/master/openshift-registry.kubeconfig", Output:(*os.File)(0xc20802e008)}
I1023 17:56:38.519166       1 create_kubeconfig.go:206] Generating 'system:openshift-registry/172-31-46-11:8443' API client config as openshift.local.config/master/openshift-registry.kubeconfig
I1023 17:56:40.326580       1 plugins.go:71] No cloud provider specified.
I1023 17:56:40.640303       1 start_master.go:388] Starting master on 0.0.0.0:8443 (v1.0.6-882-g8e1bbb5)
I1023 17:56:40.640440       1 start_master.go:389] Public master address is https://172.31.46.11:8443
I1023 17:56:40.640856       1 start_master.go:393] Using images from "openshift/origin-<component>:v1.0.6"
I1023 17:56:40.934371       1 server.go:65] etcd: peerTLS: cert = openshift.local.config/master/etcd.server.crt, key = openshift.local.config/master/etcd.server.key, ca = openshift.local.config/master/ca.crt
I1023 17:56:41.238607       1 server.go:76] etcd: listening for peers on https://0.0.0.0:7001
I1023 17:56:41.238739       1 server.go:87] etcd: clientTLS: cert = openshift.local.config/master/etcd.server.crt, key = openshift.local.config/master/etcd.server.key, ca = openshift.local.config/master/ca.crt
I1023 17:56:41.539500       1 server.go:98] etcd: listening for client requests on https://0.0.0.0:4001
2015-10-23 17:56:41.540086 I | etcdserver: name = openshift.local
2015-10-23 17:56:41.540149 I | etcdserver: data dir = openshift.local.etcd
2015-10-23 17:56:41.540282 I | etcdserver: member dir = openshift.local.etcd/member
2015-10-23 17:56:41.540573 I | etcdserver: heartbeat = 100ms
2015-10-23 17:56:41.540704 I | etcdserver: election = 1000ms
2015-10-23 17:56:41.540853 I | etcdserver: snapshot count = 0
2015-10-23 17:56:41.541000 I | etcdserver: advertise client URLs = https://172.31.46.11:4001
2015-10-23 17:56:41.541150 I | etcdserver: initial advertise peer URLs = https://172.31.46.11:7001
2015-10-23 17:56:41.541298 I | etcdserver: initial cluster = openshift.local=https://172.31.46.11:7001
2015-10-23 17:56:41.561391 I | etcdserver: starting member 180d5abbbee658cf in cluster 4767314281816d7a
2015-10-23 17:56:41.561471 I | raft: 180d5abbbee658cf became follower at term 0
2015-10-23 17:56:41.561632 I | raft: newRaft 180d5abbbee658cf [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
2015-10-23 17:56:41.561771 I | raft: 180d5abbbee658cf became follower at term 1
2015-10-23 17:56:41.561953 I | etcdserver: set snapshot count to default 10000
2015-10-23 17:56:41.562089 I | etcdserver: starting server... [version: 2.1.2, cluster version: to_be_decided]
2015-10-23 17:56:41.563267 N | etcdserver: added local member 180d5abbbee658cf [https://172.31.46.11:7001] to cluster 4767314281816d7a
I1023 17:56:41.563586       1 etcd.go:68] Started etcd at 172.31.46.11:4001
I1023 17:56:41.617819       1 run_components.go:183] Using default project node label selector:
I1023 17:56:42.245614       1 master.go:369] Setting master service IP to "172.30.0.1" (read-write).
2015-10-23 17:56:42.962115 I | raft: 180d5abbbee658cf is starting a new election at term 1
2015-10-23 17:56:42.962288 I | raft: 180d5abbbee658cf became candidate at term 2
2015-10-23 17:56:42.962618 I | raft: 180d5abbbee658cf received vote from 180d5abbbee658cf at term 2
2015-10-23 17:56:42.962819 I | raft: 180d5abbbee658cf became leader at term 2
2015-10-23 17:56:42.962987 I | raft: raft.node: 180d5abbbee658cf elected leader 180d5abbbee658cf at term 2
2015-10-23 17:56:42.963572 I | etcdserver: setting up the initial cluster version to 2.1.0
2015-10-23 17:56:42.969854 N | etcdserver: set the initial cluster version to 2.1.0
2015-10-23 17:56:42.970415 I | etcdserver: published {Name:openshift.local ClientURLs:[https://172.31.46.11:4001]} to cluster 4767314281816d7a
W1023 17:56:43.004180       1 controller.go:248] Resetting endpoints for master service "kubernetes" to &{{ } {kubernetes  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[]} [{[{172.31.46.11 <nil>}] [] [{https 8443 TCP}]}]}
I1023 17:56:43.293435       1 plugin.go:27] Route plugin initialized with suffix=router.default.svc.cluster.local
I1023 17:56:43.456524       1 master.go:232] Started Kubernetes API at 0.0.0.0:8443/api/v1
I1023 17:56:43.456652       1 master.go:232] Started Kubernetes API Extensions at 0.0.0.0:8443/apis/extensions/v1beta1
I1023 17:56:43.456906       1 master.go:232] Started Origin API at 0.0.0.0:8443/oapi/v1
I1023 17:56:43.457063       1 master.go:232] Started OAuth2 API at 0.0.0.0:8443/oauth
I1023 17:56:43.457210       1 master.go:232] Started Login endpoint at 0.0.0.0:8443/login
I1023 17:56:43.457359       1 master.go:232] Started Web Console 0.0.0.0:8443/console/
I1023 17:56:43.457485       1 master.go:232] Started Swagger Schema API at 0.0.0.0:8443/swaggerapi/
I1023 17:56:43.486209       1 net.go:105] Got error &net.OpError{Op:"dial", Net:"tcp4", Addr:(*net.TCPAddr)(0xc20953ef60), Err:0x6f}, trying again: "0.0.0.0:8443"
I1023 17:56:43.636076       1 net.go:105] Got error &net.OpError{Op:"dial", Net:"tcp4", Addr:(*net.TCPAddr)(0xc20953ff50), Err:0x6f}, trying again: "0.0.0.0:8443"
I1023 17:56:43.801814       1 ensure.go:173] No cluster policy found.  Creating bootstrap policy based on: openshift.local.config/master/policy.json
I1023 17:56:43.801957       1 decoder.go:141] decoding stream as JSON
I1023 17:56:45.455673       1 ensure.go:156] No security context constraints detected, adding defaults
E1023 17:56:45.713071       1 ensure.go:142] Error recording adding service account roles to "default" namespace: namespaces "default" cannot be updated: the object has been modified; please apply your changes to the latest version and try again
I1023 17:56:45.900699       1 ensure.go:77] Added [{ServiceAccount openshift-infra deployment-controller    }] subjects to the system:deployment-controller cluster role: <nil>
I1023 17:56:46.022708       1 ensure.go:77] Added [{ServiceAccount openshift-infra replication-controller    }] subjects to the system:replication-controller cluster role: <nil>
I1023 17:56:46.765100       1 ensure.go:77] Added [{ServiceAccount openshift-infra build-controller    }] subjects to the system:build-controller cluster role: <nil>
E1023 17:56:47.063234       1 ensure.go:142] Error recording adding service account roles to "openshift-infra" namespace: namespaces "openshift-infra" cannot be updated: the object has been modified; please apply your changes to the latest version and try again
E1023 17:56:47.284778       1 ensure.go:142] Error recording adding service account roles to "openshift" namespace: invalid update object, missing resource version: &{{ } {openshift      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[openshift.io/sa.initialized-roles:true]} {[]} {}}
2015-10-23 17:56:47.285581 I | skydns: ready for queries on cluster.local. for tcp4://0.0.0.0:53 [rcache 0]
2015-10-23 17:56:47.285646 I | skydns: ready for queries on cluster.local. for udp4://0.0.0.0:53 [rcache 0]
I1023 17:56:47.305148       1 net.go:105] Got error &net.OpError{Op:"dial", Net:"tcp", Addr:(*net.TCPAddr)(0xc20a15bb60), Err:0x6f}, trying again: "0.0.0.0:53"
I1023 17:56:47.405479       1 run_components.go:178] DNS listening at 0.0.0.0:53
I1023 17:56:47.405591       1 start_node.go:146] Generating node configuration
Generating node credentials ...
I1023 17:56:47.405962       1 create_clientcert.go:53] Creating a client cert with: admin.CreateClientCertOptions{SignerCertOptions:(*admin.SignerCertOptions)(0xc20aa4ae80), CertFile:"openshift.local.config/node-ip-172-31-46-11/master-client.crt", KeyFile:"openshift.local.config/node-ip-172-31-46-11/master-client.key", User:"system:node:ip-172-31-46-11", Groups:util.StringList{"system:nodes"}, Overwrite:false, Output:(*os.File)(0xc20802e008)} and &admin.SignerCertOptions{CertFile:"openshift.local.config/master/ca.crt", KeyFile:"openshift.local.config/master/ca.key", SerialFile:"openshift.local.config/master/ca.serial.txt", lock:sync.Mutex{state:0, sema:0x0}, ca:(*crypto.CA)(nil)}
I1023 17:56:47.416786       1 start_master.go:519] Controllers starting (*)
I1023 17:56:48.103738       1 trace.go:57] Trace "List *api.NamespaceList" (started 2015-10-23 17:56:47.481281362 +0000 UTC):
[1.938µs] [1.938µs] About to list directory
[622.42961ms] [622.427672ms] List extracted
[622.430225ms] [615ns] END
I1023 17:56:48.161158       1 crypto.go:275] Generating client cert in openshift.local.config/node-ip-172-31-46-11/master-client.crt and key in openshift.local.config/node-ip-172-31-46-11/master-client.key
I1023 17:56:48.169726       1 trace.go:57] Trace "List *api.NamespaceList" (started 2015-10-23 17:56:47.48179311 +0000 UTC):
[977ns] [977ns] About to list directory
[687.911643ms] [687.910666ms] List extracted
[687.912321ms] [678ns] END
I1023 17:56:48.416924       1 trace.go:57] Trace "List *api.ServiceAccountList" (started 2015-10-23 17:56:47.48218425 +0000 UTC):
[1.078µs] [1.078µs] About to list directory
[934.712233ms] [934.711155ms] List extracted
[934.712924ms] [691ns] END
I1023 17:56:50.198551       1 trace.go:57] Trace "Get *api.ServiceAccount" (started 2015-10-23 17:56:49.099128665 +0000 UTC):
[1.658µs] [1.658µs] About to read object
[1.099395433s] [1.099393775s] Object read
[1.099396892s] [1.459µs] END
I1023 17:56:51.068767       1 create_clientcert.go:69] Generated new client cert as openshift.local.config/node-ip-172-31-46-11/master-client.crt and key as openshift.local.config/node-ip-172-31-46-11/master-client.key
I1023 17:56:51.068905       1 create_servercert.go:107] Creating a server cert with: admin.CreateServerCertOptions{SignerCertOptions:(*admin.SignerCertOptions)(0xc20aa4ae80), CertFile:"openshift.local.config/node-ip-172-31-46-11/server.crt", KeyFile:"openshift.local.config/node-ip-172-31-46-11/server.key", Hostnames:util.StringList{"ip-172-31-46-11"}, Overwrite:false, Output:(*os.File)(0xc20802e008)}
I1023 17:56:51.069227       1 crypto.go:249] Generating server certificate in openshift.local.config/node-ip-172-31-46-11/server.crt, key in openshift.local.config/node-ip-172-31-46-11/server.key
I1023 17:56:52.131904       1 create_dockercfg_secrets.go:177] View of ServiceAccount openshift-infra/deployment-controller is not up to date, skipping dockercfg creation
I1023 17:56:53.303762       1 nodecontroller.go:119] Sending events to api server.
I1023 17:56:53.304360       1 factory.go:126] Creating scheduler from algorithm provider 'DefaultProvider'
I1023 17:56:53.304418       1 factory.go:161] creating scheduler with fit predicates 'map[HostName:{} PodFitsHostPorts:{} PodFitsResources:{} NoDiskConflict:{} MatchNodeSelector:{}]' and priority functions 'map[LeastRequestedPriority:{} BalancedResourceAllocation:{} SelectorSpreadPriority:{}]
I1023 17:56:53.305860       1 persistentvolume_claim_binder_controller.go:345] Starting PersistentVolumeClaimBinder
I1023 17:56:53.305955       1 plugins.go:262] Loaded volume plugin "kubernetes.io/host-path"
I1023 17:56:53.306121       1 plugins.go:262] Loaded volume plugin "kubernetes.io/nfs"
I1023 17:56:53.306274       1 persistentvolume_recycler_controller.go:212] Starting PersistentVolumeRecycler
I1023 17:56:53.306420       1 start_master.go:546] Started Kubernetes Controllers
I1023 17:56:53.943116       1 trace.go:57] Trace "List *api.PodList" (started 2015-10-23 17:56:53.335923159 +0000 UTC):
[1.74µs] [1.74µs] About to list directory
[607.169112ms] [607.167372ms] List extracted
[607.169688ms] [576ns] END
I1023 17:56:53.944702       1 trace.go:57] Trace "List *api.ReplicationControllerList" (started 2015-10-23 17:56:53.336336252 +0000 UTC):
[1.338µs] [1.338µs] About to list directory
[608.344775ms] [608.343437ms] List extracted
[608.345524ms] [749ns] END
I1023 17:56:53.945423       1 trace.go:57] Trace "List *api.ServiceList" (started 2015-10-23 17:56:53.336730118 +0000 UTC):
[1.23µs] [1.23µs] About to list directory
[608.672908ms] [608.671678ms] List extracted
[608.673593ms] [685ns] END
I1023 17:56:53.946116       1 trace.go:57] Trace "List *api.NodeList" (started 2015-10-23 17:56:53.337135763 +0000 UTC):
[1.201µs] [1.201µs] About to list directory
[608.960064ms] [608.958863ms] List extracted
[608.960975ms] [911ns] END
I1023 17:56:53.954941       1 trace.go:57] Trace "List *api.NodeList" (started 2015-10-23 17:56:53.337933087 +0000 UTC):
[1.223µs] [1.223µs] About to list directory
[616.988622ms] [616.987399ms] List extracted
[616.989382ms] [760ns] END
I1023 17:56:54.007757       1 trace.go:57] Trace "List *api.ResourceQuotaList" (started 2015-10-23 17:56:53.338357856 +0000 UTC):
[1.116µs] [1.116µs] About to list directory
[669.376611ms] [669.375495ms] List extracted
[669.377212ms] [601ns] END
I1023 17:56:54.008405       1 trace.go:57] Trace "List *api.PodList" (started 2015-10-23 17:56:53.337531514 +0000 UTC):
[1.263µs] [1.263µs] About to list directory
[670.854054ms] [670.852791ms] List extracted
[670.854619ms] [565ns] END
I1023 17:56:54.023817       1 create_servercert.go:122] Generated new server certificate as openshift.local.config/node-ip-172-31-46-11/server.crt, key as openshift.local.config/node-ip-172-31-46-11/server.key
I1023 17:56:54.023957       1 create_kubeconfig.go:132] creating a .kubeconfig with: admin.CreateKubeConfigOptions{APIServerURL:"https://172.31.46.11:8443", PublicAPIServerURL:"", APIServerCAFile:"openshift.local.config/node-ip-172-31-46-11/ca.crt", CertFile:"openshift.local.config/node-ip-172-31-46-11/master-client.crt", KeyFile:"openshift.local.config/node-ip-172-31-46-11/master-client.key", ContextNamespace:"default", KubeConfigFile:"openshift.local.config/node-ip-172-31-46-11/node.kubeconfig", Output:(*os.File)(0xc20802e008)}
I1023 17:56:54.654884       1 endpoints_controller.go:256] Finished syncing service "default/kubernetes" endpoints. (10.979µs)
I1023 17:56:55.214988       1 trace.go:57] Trace "Update *api.ServiceAccount" (started 2015-10-23 17:56:54.208173802 +0000 UTC):
[1.006792258s] [1.006792258s] END
I1023 17:56:55.263541       1 factory.go:436] Checking for deleted builds
I1023 17:56:55.263985       1 factory.go:521] Checking for deleted build pods
I1023 17:56:55.503982       1 create_kubeconfig.go:206] Generating 'system:node:ip-172-31-46-11/172-31-46-11:8443' API client config as openshift.local.config/node-ip-172-31-46-11/node.kubeconfig
Created node config for ip-172-31-46-11 in openshift.local.config/node-ip-172-31-46-11
I1023 17:56:55.923932       1 trace.go:57] Trace "List *api.BuildConfigList" (started 2015-10-23 17:56:55.321038672 +0000 UTC):
[1.274µs] [1.274µs] About to list directory
[602.868216ms] [602.866942ms] List extracted
[602.868756ms] [540ns] END
I1023 17:56:55.924979       1 trace.go:57] Trace "List *api.BuildList" (started 2015-10-23 17:56:55.321427997 +0000 UTC):
[1.792µs] [1.792µs] About to list directory
[603.531617ms] [603.529825ms] List extracted
[603.532163ms] [546ns] END
I1023 17:56:55.925809       1 create_dockercfg_secrets.go:177] View of ServiceAccount openshift-infra/replication-controller is not up to date, skipping dockercfg creation
I1023 17:56:56.037181       1 trace.go:57] Trace "List *api.BuildList" (started 2015-10-23 17:56:55.32229084 +0000 UTC):
[48.498µs] [48.498µs] About to list directory
[714.86146ms] [714.812962ms] List extracted
[714.862073ms] [613ns] END
I1023 17:56:56.385138       1 start_node.go:181] Starting a node connected to https://172.31.46.11:8443
I1023 17:56:56.387923       1 server.go:318] Running kubelet in containerized mode (experimental)
I1023 17:56:56.403507       1 plugins.go:71] No cloud provider specified.
I1023 17:56:56.403574       1 start_node.go:276] Starting node ip-172-31-46-11 (v1.0.6-882-g8e1bbb5)
I1023 17:56:56.799674       1 start_master.go:565] Started Origin Controllers
W1023 17:56:57.272108       1 node.go:121] Error running 'chcon' to set the kubelet volume root directory SELinux context: exit status 1
I1023 17:56:57.361984       1 node.go:56] Connecting to Docker at unix:///var/run/docker.sock
I1023 17:56:57.443419       1 manager.go:127] cAdvisor running in container: "/docker/acbecbd3ee718c407768f47bb05aa63efa57d5077fd67e13c4e8aac7b9f70925"
I1023 17:56:57.444042       1 fs.go:93] Filesystem partitions: map[/dev/mapper/docker-202:1-263693-acbecbd3ee718c407768f47bb05aa63efa57d5077fd67e13c4e8aac7b9f70925:{mountpoint:/ major:253 minor:2} /dev/xvda1:{mountpoint:/tmp/openshift major:202 minor:1}]
I1023 17:56:57.446425       1 iptables.go:173] Could not connect to D-Bus system bus: dial unix /var/run/dbus/system_bus_socket: no such file or directory
I1023 17:56:57.446553       1 util.go:508] Default route transits interface "eth0"
I1023 17:56:57.446932       1 util.go:353] Interface eth0 is up
I1023 17:56:57.455205       1 util.go:398] Interface "eth0" has 2 addresses :[172.31.46.11/20 fe80::10ec:35ff:fe0c:c7af/64].
I1023 17:56:57.455227       1 util.go:365] Checking addr  172.31.46.11/20.
I1023 17:56:57.455235       1 util.go:374] IP found 172.31.46.11
I1023 17:56:57.455242       1 util.go:404] valid IPv4 address for interface "eth0" found as 172.31.46.11.
I1023 17:56:57.455248       1 util.go:514] Choosing IP 172.31.46.11
I1023 17:56:57.455261       1 proxier.go:156] Setting proxy IP to 172.31.46.11 and initializing iptables
I1023 17:56:57.455541       1 iptables.go:368] running iptables -N [KUBE-PORTALS-CONTAINER -t nat]
I1023 17:56:57.460764       1 iptables.go:368] running iptables -C [PREROUTING -t nat -m comment --comment handle ClusterIPs; NOTE: this must be before the NodePort rules -j KUBE-PORTALS-CONTAINER]
I1023 17:56:57.462348       1 machine.go:49] Couldn't collect info from any of the files in "/etc/machine-id,/var/lib/dbus/machine-id"
I1023 17:56:57.462462       1 manager.go:158] Machine: {NumCores:1 CpuFrequency:2500092 MemoryCapacity:3950567424 MachineID: SystemUUID:EC2DD66C-1C29-752A-E92E-608E2C408DA3 BootID:9fab29dd-d589-4ac5-88d3-2999cf633ebe Filesystems:[{Device:/dev/mapper/docker-202:1-263693-acbecbd3ee718c407768f47bb05aa63efa57d5077fd67e13c4e8aac7b9f70925 Capacity:10434699264} {Device:/dev/xvda1 Capacity:8318783488}] DiskMap:map[253:0:{Name:dm-0 Major:253 Minor:0 Size:107374182400 Scheduler:none} 253:1:{Name:dm-1 Major:253 Minor:1 Size:10737418240 Scheduler:none} 253:2:{Name:dm-2 Major:253 Minor:2 Size:10737418240 Scheduler:none} 202:0:{Name:xvda Major:202 Minor:0 Size:8589934592 Scheduler:noop}] NetworkDevices:[{Name:eth0 MacAddress:12:ec:35:0c:c7:af Speed:0 Mtu:9001}] Topology:[{Id:0 Memory:3950567424 Cores:[{Id:0 Threads:[0] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2}]}] Caches:[{Size:26214400 Type:Unified Level:3}]}] CloudProvider:Unknown InstanceType:Unknown}
I1023 17:56:57.462872       1 iptables.go:368] running iptables -N [KUBE-PORTALS-HOST -t nat]
I1023 17:56:57.464965       1 iptables.go:368] running iptables -C [OUTPUT -t nat -m comment --comment handle ClusterIPs; NOTE: this must be before the NodePort rules -j KUBE-PORTALS-HOST]
I1023 17:56:57.466864       1 iptables.go:368] running iptables -N [KUBE-NODEPORT-CONTAINER -t nat]
I1023 17:56:57.468302       1 iptables.go:368] running iptables -C [PREROUTING -t nat -m addrtype --dst-type LOCAL -m comment --comment handle service NodePorts; NOTE: this must be the last rule in the chain -j KUBE-NODEPORT-CONTAINER]
I1023 17:56:57.469921       1 iptables.go:368] running iptables -N [KUBE-NODEPORT-HOST -t nat]
I1023 17:56:57.479523       1 iptables.go:368] running iptables -C [OUTPUT -t nat -m addrtype --dst-type LOCAL -m comment --comment handle service NodePorts; NOTE: this must be the last rule in the chain -j KUBE-NODEPORT-HOST]
I1023 17:56:57.481105       1 iptables.go:368] running iptables -F [KUBE-PORTALS-CONTAINER -t nat]
I1023 17:56:57.482731       1 iptables.go:368] running iptables -F [KUBE-PORTALS-HOST -t nat]
I1023 17:56:57.484230       1 iptables.go:368] running iptables -F [KUBE-NODEPORT-CONTAINER -t nat]
I1023 17:56:57.485729       1 iptables.go:368] running iptables -F [KUBE-NODEPORT-HOST -t nat]
I1023 17:56:57.487315       1 node.go:238] Started Kubernetes Proxy on 0.0.0.0
I1023 17:56:57.537308       1 manager.go:164] Version: {KernelVersion:4.1.7-15.23.amzn1.x86_64 ContainerOsVersion:CentOS Linux 7 (Core) DockerVersion:1.7.1 CadvisorVersion:0.16.0}
I1023 17:56:57.538688       1 server.go:467] Using root directory: openshift.local.volumes
I1023 17:56:57.538851       1 server.go:717] Sending events to api server.
I1023 17:56:57.539215       1 server.go:801] Watching apiserver
I1023 17:56:57.846599       1 plugins.go:56] Registering credential provider: .dockercfg
I1023 17:56:57.923494       1 config.go:242] Setting services (config.ServiceUpdate) {
 Services: ([]api.Service) (len=1 cap=1) {
  (api.Service) {
   TypeMeta: (unversioned.TypeMeta) {
    Kind: (string) "",
    APIVersion: (string) ""
   },
   ObjectMeta: (api.ObjectMeta) {
    Name: (string) (len=10) "kubernetes",
    GenerateName: (string) "",
    Namespace: (string) (len=7) "default",
    SelfLink: (string) (len=46) "/api/v1/namespaces/default/services/kubernetes",
    UID: (types.UID) (len=36) "6b396f3e-79af-11e5-ae9e-12ec350cc7af",
    ResourceVersion: (string) (len=1) "7",
    Generation: (int64) 0,
    CreationTimestamp: (unversioned.Time) 2015-10-23 17:56:42 +0000 UTC,
    DeletionTimestamp: (*unversioned.Time)(<nil>),
    DeletionGracePeriodSeconds: (*int64)(<nil>),
    Labels: (map[string]string) (len=2) {
     (string) (len=9) "component": (string) (len=9) "apiserver",
     (string) (len=8) "provider": (string) (len=10) "kubernetes"
    },
    Annotations: (map[string]string) <nil>
   },
   Spec: (api.ServiceSpec) {
    Type: (api.ServiceType) (len=9) "ClusterIP",
    Ports: ([]api.ServicePort) (len=1 cap=1) {
     (api.ServicePort) {
      Name: (string) (len=5) "https",
      Protocol: (api.Protocol) (len=3) "TCP",
      Port: (int) 443,
      TargetPort: (util.IntOrString) 443,
      NodePort: (int) 0
     }
    },
    Selector: (map[string]string) <nil>,
    ClusterIP: (string) (len=10) "172.30.0.1",
    ExternalIPs: ([]string) <nil>,
    LoadBalancerIP: (string) "",
    SessionAffinity: (api.ServiceAffinity) (len=4) "None"
   },
   Status: (api.ServiceStatus) {
    LoadBalancer: (api.LoadBalancerStatus) {
     Ingress: ([]api.LoadBalancerIngress) <nil>
    }
   }
  }
 },
 Op: (config.Operation) 0
}
I1023 17:56:57.925203       1 config.go:194] Calling handler.OnServiceUpdate()
I1023 17:56:57.925283       1 proxier.go:350] Received update notice: [{TypeMeta:{Kind: APIVersion:} ObjectMeta:{Name:kubernetes GenerateName: Namespace:default SelfLink:/api/v1/namespaces/default/services/kubernetes UID:6b396f3e-79af-11e5-ae9e-12ec350cc7af ResourceVersion:7 Generation:0 CreationTimestamp:2015-10-23 17:56:42 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[component:apiserver provider:kubernetes] Annotations:map[]} Spec:{Type:ClusterIP Ports:[{Name:https Protocol:TCP Port:443 TargetPort:{Kind:0 IntVal:443 StrVal:} NodePort:0}] Selector:map[] ClusterIP:172.30.0.1 ExternalIPs:[] LoadBalancerIP: SessionAffinity:None} Status:{LoadBalancer:{Ingress:[]}}}]
I1023 17:56:57.925616       1 proxier.go:390] Adding new service "default/kubernetes:https" at 172.30.0.1:443/TCP
I1023 17:56:57.925911       1 proxier.go:332] Proxying for service "default/kubernetes:https" on TCP port 32849
I1023 17:56:57.926093       1 proxier.go:403] info: &{isAliveAtomic:1 portal:{ip:[0 0 0 0 0 0 0 0 0 0 255 255 172 30 0 1] port:443 isExternal:false} protocol:TCP proxyPort:32849 socket:0xc2092e9f20 timeout:250000000 activeClients:0xc20969e6e0 nodePort:0 loadBalancerStatus:{Ingress:[]} sessionAffinityType:None stickyMaxAgeMinutes:180 externalIPs:[]}
I1023 17:56:57.926490       1 iptables.go:368] running iptables -C [KUBE-PORTALS-CONTAINER -t nat -m comment --comment default/kubernetes:https -p tcp -m tcp --dport 443 -d 172.30.0.1/32 -j REDIRECT --to-ports 32849]
I1023 17:56:57.938030       1 iptables.go:368] running iptables -A [KUBE-PORTALS-CONTAINER -t nat -m comment --comment default/kubernetes:https -p tcp -m tcp --dport 443 -d 172.30.0.1/32 -j REDIRECT --to-ports 32849]
I1023 17:56:57.939867       1 proxier.go:506] Opened iptables from-containers portal for service "default/kubernetes:https" on TCP 172.30.0.1:443
I1023 17:56:57.939968       1 iptables.go:368] running iptables -C [KUBE-PORTALS-HOST -t nat -m comment --comment default/kubernetes:https -p tcp -m tcp --dport 443 -d 172.30.0.1/32 -j DNAT --to-destination 172.31.46.11:32849]
I1023 17:56:57.941823       1 iptables.go:368] running iptables -A [KUBE-PORTALS-HOST -t nat -m comment --comment default/kubernetes:https -p tcp -m tcp --dport 443 -d 172.30.0.1/32 -j DNAT --to-destination 172.31.46.11:32849]
I1023 17:56:57.943596       1 proxier.go:539] Opened iptables from-host portal for service "default/kubernetes:https" on TCP 172.30.0.1:443
I1023 17:56:57.943684       1 roundrobin.go:99] LoadBalancerRR service "default/kubernetes:https" did not exist, created
I1023 17:56:57.987173       1 plugins.go:262] Loaded volume plugin "kubernetes.io/aws-ebs"
I1023 17:56:57.987259       1 plugins.go:262] Loaded volume plugin "kubernetes.io/empty-dir"
I1023 17:56:57.987444       1 plugins.go:262] Loaded volume plugin "kubernetes.io/gce-pd"
I1023 17:56:57.987589       1 plugins.go:262] Loaded volume plugin "kubernetes.io/git-repo"
I1023 17:56:57.987739       1 plugins.go:262] Loaded volume plugin "kubernetes.io/host-path"
I1023 17:56:57.987863       1 plugins.go:262] Loaded volume plugin "kubernetes.io/nfs"
I1023 17:56:57.988015       1 plugins.go:262] Loaded volume plugin "kubernetes.io/secret"
I1023 17:56:57.988153       1 plugins.go:262] Loaded volume plugin "kubernetes.io/iscsi"
I1023 17:56:57.988299       1 plugins.go:262] Loaded volume plugin "kubernetes.io/glusterfs"
I1023 17:56:57.988425       1 plugins.go:262] Loaded volume plugin "kubernetes.io/persistent-claim"
I1023 17:56:57.988567       1 plugins.go:262] Loaded volume plugin "kubernetes.io/rbd"
I1023 17:56:57.988711       1 plugins.go:262] Loaded volume plugin "kubernetes.io/cinder"
I1023 17:56:57.988845       1 plugins.go:262] Loaded volume plugin "kubernetes.io/cephfs"
I1023 17:56:57.988967       1 plugins.go:262] Loaded volume plugin "kubernetes.io/downward-api"
I1023 17:56:57.989107       1 plugins.go:262] Loaded volume plugin "kubernetes.io/fc"
I1023 17:56:57.989246       1 plugins.go:262] Loaded volume plugin "kubernetes.io/flocker"
I1023 17:56:57.989457       1 server.go:760] Started kubelet
E1023 17:56:57.989677       1 kubelet.go:782] Image garbage collection failed: unable to find data for container /
W1023 17:56:57.990208       1 kubelet.go:801] Failed to move Kubelet to container "/kubelet": mountpoint for cgroup not found
I1023 17:56:57.990260       1 kubelet.go:803] Running in container "/kubelet"
I1023 17:56:57.990424       1 server.go:102] Starting to listen on 0.0.0.0:10250
I1023 17:56:57.991208       1 server.go:715] Event(api.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-31-46-11", UID:"ip-172-31-46-11", APIVersion:"", ResourceVersion:"", FieldPath:""}): reason: 'Starting' Starting kubelet.
I1023 17:56:58.059546       1 docker.go:368] Docker Container: /openshift-origin is not managed by kubelet.
I1023 17:56:58.060750       1 kubelet.go:2255] Recording NodeReady event message for node ip-172-31-46-11
I1023 17:56:58.060822       1 kubelet.go:898] Attempting to register node ip-172-31-46-11
I1023 17:56:58.092110       1 server.go:715] Event(api.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-31-46-11", UID:"ip-172-31-46-11", APIVersion:"", ResourceVersion:"", FieldPath:""}): reason: 'NodeReady' Node ip-172-31-46-11 status is now: NodeReady
I1023 17:56:58.297649       1 docker.go:368] Docker Container: /openshift-origin is not managed by kubelet.
I1023 17:56:58.299057       1 config.go:143] Setting endpoints (config.EndpointsUpdate) {
 Endpoints: ([]api.Endpoints) (len=1 cap=1) {
  (api.Endpoints) {
   TypeMeta: (unversioned.TypeMeta) {
    Kind: (string) "",
    APIVersion: (string) ""
   },
   ObjectMeta: (api.ObjectMeta) {
    Name: (string) (len=10) "kubernetes",
    GenerateName: (string) "",
    Namespace: (string) (len=7) "default",
    SelfLink: (string) (len=47) "/api/v1/namespaces/default/endpoints/kubernetes",
    UID: (types.UID) (len=36) "6b3cc680-79af-11e5-ae9e-12ec350cc7af",
    ResourceVersion: (string) (len=1) "8",
    Generation: (int64) 0,
    CreationTimestamp: (unversioned.Time) 2015-10-23 17:56:43 +0000 UTC,
    DeletionTimestamp: (*unversioned.Time)(<nil>),
    DeletionGracePeriodSeconds: (*int64)(<nil>),
    Labels: (map[string]string) <nil>,
    Annotations: (map[string]string) <nil>
   },
   Subsets: ([]api.EndpointSubset) (len=1 cap=1) {
    (api.EndpointSubset) {
     Addresses: ([]api.EndpointAddress) (len=1 cap=1) {
      (api.EndpointAddress) {
       IP: (string) (len=12) "172.31.46.11",
       TargetRef: (*api.ObjectReference)(<nil>)
      }
     },
     NotReadyAddresses: ([]api.EndpointAddress) <nil>,
     Ports: ([]api.EndpointPort) (len=1 cap=1) {
      (api.EndpointPort) {
       Name: (string) (len=5) "https",
       Port: (int) 8443,
       Protocol: (api.Protocol) (len=3) "TCP"
      }
     }
    }
   }
  }
 },
 Op: (config.Operation) 0
}
I1023 17:56:58.310226       1 config.go:95] Calling handler.OnEndpointsUpdate()
I1023 17:56:58.310320       1 roundrobin.go:263] LoadBalancerRR: Setting endpoints for default/kubernetes:https to [172.31.46.11:8443]
I1023 17:56:58.310508       1 roundrobin.go:220] Delete endpoint 172.31.46.11:8443 for service "default/kubernetes:https"
I1023 17:56:58.586984       1 config.go:277] Setting pods for source api
I1023 17:56:58.912808       1 docker.go:368] Docker Container: /openshift-origin is not managed by kubelet.
E1023 17:56:58.919804       1 manager.go:201] Docker container factory registration failed: failed to get cgroup subsystems: failed to find cgroup mounts.
E1023 17:56:58.920231       1 manager.go:207] Registration of the raw container factory failed: failed to get cgroup subsystems: failed to find cgroup mounts
I1023 17:56:59.348194       1 kubelet.go:929] Successfully registered node ip-172-31-46-11
I1023 17:56:59.537239       1 manager.go:1001] Started watching for new ooms in manager
I1023 17:56:59.547986       1 oomparser.go:183] oomparser using systemd
E1023 17:56:59.548106       1 kubelet.go:818] Failed to start ContainerManager, system may not be properly isolated: system validation failed - open /rootfs/proc/mounts: no such file or directory
I1023 17:56:59.548363       1 manager.go:104] Starting to sync pod status with apiserver
I1023 17:56:59.548536       1 kubelet.go:1917] Starting kubelet main sync loop.
I1023 17:56:59.548697       1 kubelet.go:1969] SyncLoop (ADD, "api"): ""
I1023 17:56:59.582496       1 server.go:715] Event(api.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-31-46-11", UID:"ip-172-31-46-11", APIVersion:"", ResourceVersion:"", FieldPath:""}): reason: 'KubeletSetupFailed' Failed to start ContainerManager system validation failed - open /rootfs/proc/mounts: no such file or directory
I1023 17:56:59.587529       1 nodecontroller.go:251] NodeController observed a new Node: api.Node{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:api.ObjectMeta{Name:"ip-172-31-46-11", GenerateName:"", Namespace:"", SelfLink:"/api/v1/nodes/ip-172-31-46-11", UID:"7486496f-79af-11e5-ae9e-12ec350cc7af", ResourceVersion:"126", Generation:0, CreationTimestamp:unversioned.Time{Time:time.Time{sec:63581219818, nsec:0, loc:(*time.Location)(0x49c6aa0)}}, DeletionTimestamp:(*unversioned.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"kubernetes.io/hostname":"ip-172-31-46-11"}, Annotations:map[string]string(nil)}, Spec:api.NodeSpec{PodCIDR:"", ExternalID:"ip-172-31-46-11", ProviderID:"", Unschedulable:false}, Status:api.NodeStatus{Capacity:api.ResourceList{"pods":resource.Quantity{Amount:40.000, Format:"DecimalSI"}, "cpu":resource.Quantity{Amount:1.000, Format:"DecimalSI"}, "memory":resource.Quantity{Amount:3950567424.000, Format:"BinarySI"}}, Phase:"", Conditions:[]api.NodeCondition{api.NodeCondition{Type:"Ready", Status:"True", LastHeartbeatTime:unversioned.Time{Time:time.Time{sec:63581219818, nsec:0, loc:(*time.Location)(0x49c6aa0)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63581219818, nsec:0, loc:(*time.Location)(0x49c6aa0)}}, Reason:"KubeletReady", Message:"kubelet is posting ready status"}}, Addresses:[]api.NodeAddress{api.NodeAddress{Type:"LegacyHostIP", Address:"172.31.46.11"}, api.NodeAddress{Type:"InternalIP", Address:"172.31.46.11"}}, DaemonEndpoints:api.NodeDaemonEndpoints{KubeletEndpoint:api.DaemonEndpoint{Port:10250}}, NodeInfo:api.NodeSystemInfo{MachineID:"", SystemUUID:"EC2DD66C-1C29-752A-E92E-608E2C408DA3", BootID:"9fab29dd-d589-4ac5-88d3-2999cf633ebe", KernelVersion:"4.1.7-15.23.amzn1.x86_64", OsImage:"CentOS Linux 7 (Core)", ContainerRuntimeVersion:"docker://1.7.1", KubeletVersion:"v1.2.0-alpha.1-1107-g4c8e6f4", KubeProxyVersion:"v1.2.0-alpha.1-1107-g4c8e6f4"}}}
I1023 17:56:59.587923       1 nodecontroller.go:399] Recording Registered Node ip-172-31-46-11 in NodeController event message for node ip-172-31-46-11
W1023 17:56:59.588011       1 nodecontroller.go:466] Missing timestamp for Node ip-172-31-46-11. Assuming now as a timestamp.
I1023 17:56:59.589415       1 event.go:216] Event(api.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-31-46-11", UID:"ip-172-31-46-11", APIVersion:"", ResourceVersion:"", FieldPath:""}): reason: 'RegisteredNode' Node ip-172-31-46-11 event: Registered Node ip-172-31-46-11 in NodeController
I1023 17:56:59.609425       1 create_dockercfg_secrets.go:177] View of ServiceAccount default/default is not up to date, skipping dockercfg creation
I1023 17:56:59.932784       1 create_dockercfg_secrets.go:177] View of ServiceAccount default/builder is not up to date, skipping dockercfg creation
I1023 17:57:00.099032       1 create_dockercfg_secrets.go:177] View of ServiceAccount default/deployer is not up to date, skipping dockercfg creation
I1023 17:57:00.183701       1 create_dockercfg_secrets.go:177] View of ServiceAccount openshift/default is not up to date, skipping dockercfg creation
I1023 17:57:00.289322       1 create_dockercfg_secrets.go:177] View of ServiceAccount openshift-infra/build-controller is not up to date, skipping dockercfg creation
I1023 17:57:00.290805       1 create_dockercfg_secrets.go:177] View of ServiceAccount openshift/builder is not up to date, skipping dockercfg creation
I1023 17:57:00.393320       1 create_dockercfg_secrets.go:177] View of ServiceAccount openshift/deployer is not up to date, skipping dockercfg creation
I1023 17:57:00.478357       1 create_dockercfg_secrets.go:177] View of ServiceAccount openshift-infra/default is not up to date, skipping dockercfg creation
I1023 17:57:00.557843       1 create_dockercfg_secrets.go:177] View of ServiceAccount openshift-infra/builder is not up to date, skipping dockercfg creation
I1023 17:57:00.626501       1 create_dockercfg_secrets.go:177] View of ServiceAccount openshift-infra/deployer is not up to date, skipping dockercfg creation
I1023 17:57:04.590363       1 nodecontroller.go:501] Nodes ReadyCondition updated. Updating timestamp: {Capacity:map[cpu:{Amount:1.000 Format:DecimalSI} memory:{Amount:3950567424.000 Format:BinarySI} pods:{Amount:40.000 Format:DecimalSI}] Phase: Conditions:[{Type:Ready Status:True LastHeartbeatTime:2015-10-23 17:56:58 +0000 UTC LastTransitionTime:2015-10-23 17:56:58 +0000 UTC Reason:KubeletReady Message:kubelet is posting ready status}] Addresses:[{Type:LegacyHostIP Address:172.31.46.11} {Type:InternalIP Address:172.31.46.11}] DaemonEndpoints:{KubeletEndpoint:{Port:10250}} NodeInfo:{MachineID: SystemUUID:EC2DD66C-1C29-752A-E92E-608E2C408DA3 BootID:9fab29dd-d589-4ac5-88d3-2999cf633ebe KernelVersion:4.1.7-15.23.amzn1.x86_64 OsImage:CentOS Linux 7 (Core) ContainerRuntimeVersion:docker://1.7.1 KubeletVersion:v1.2.0-alpha.1-1107-g4c8e6f4 KubeProxyVersion:v1.2.0-alpha.1-1107-g4c8e6f4}}
 vs {Capacity:map[memory:{Amount:3950567424.000 Format:BinarySI} pods:{Amount:40.000 Format:DecimalSI} cpu:{Amount:1.000 Format:DecimalSI}] Phase: Conditions:[{Type:Ready Status:True LastHeartbeatTime:2015-10-23 17:56:59 +0000 UTC LastTransitionTime:2015-10-23 17:56:58 +0000 UTC Reason:KubeletReady Message:kubelet is posting ready status}] Addresses:[{Type:LegacyHostIP Address:172.31.46.11} {Type:InternalIP Address:172.31.46.11}] DaemonEndpoints:{KubeletEndpoint:{Port:10250}} NodeInfo:{MachineID: SystemUUID:EC2DD66C-1C29-752A-E92E-608E2C408DA3 BootID:9fab29dd-d589-4ac5-88d3-2999cf633ebe KernelVersion:4.1.7-15.23.amzn1.x86_64 OsImage:CentOS Linux 7 (Core) ContainerRuntimeVersion:docker://1.7.1 KubeletVersion:v1.2.0-alpha.1-1107-g4c8e6f4 KubeProxyVersion:v1.2.0-alpha.1-1107-g4c8e6f4}}.
I1023 17:57:09.549044       1 kubelet.go:1983] SyncLoop (periodic sync)
I1023 17:57:09.549216       1 kubelet.go:1949] SyncLoop (housekeeping)
I1023 17:57:09.550416       1 docker.go:368] Docker Container: /openshift-origin is not managed by kubelet.
I1023 17:57:09.551085       1 docker.go:368] Docker Container: /openshift-origin is not managed by kubelet.
I1023 17:57:09.657855       1 docker.go:368] Docker Container: /openshift-origin is not managed by kubelet.
I1023 17:57:09.758875       1 docker.go:368] Docker Container: /openshift-origin is not managed by kubelet.
I1023 17:57:09.859969       1 docker.go:368] Docker Container: /openshift-origin is not managed by kubelet.
I1023 17:57:09.961009       1 docker.go:368] Docker Container: /openshift-origin is not managed by kubelet.
I1023 17:57:10.061939       1 docker.go:368] Docker Container: /openshift-origin is not managed by kubelet.
I1023 17:57:10.162602       1 docker.go:368] Docker Container: /openshift-origin is not managed by kubelet.
I1023 17:57:10.263431       1 docker.go:368] Docker Container: /openshift-origin is not managed by kubelet.
I1023 17:57:10.364234       1 docker.go:368] Docker Container: /openshift-origin is not managed by kubelet.
I1023 17:57:10.465213       1 docker.go:368] Docker Container: /openshift-origin is not managed by kubelet.
I1023 17:57:10.566855       1 docker.go:368] Docker Container: /openshift-origin is not managed by kubelet.
I1023 17:57:10.667830       1 docker.go:368] Docker Container: /openshift-origin is not managed by kubelet.
I1023 17:57:10.768853       1 docker.go:368] Docker Container: /openshift-origin is not managed by kubelet.
I1023 17:57:10.869906       1 docker.go:368] Docker Container: /openshift-origin is not managed by kubelet.
I1023 17:57:10.970980       1 docker.go:368] Docker Container: /openshift-origin is not managed by kubelet.
I1023 17:57:11.072267       1 docker.go:368] Docker Container: /openshift-origin is not managed by kubelet.
I1023 17:57:11.173316       1 docker.go:368] Docker Container: /openshift-origin is not managed by kubelet.
I1023 17:57:11.274248       1 docker.go:368] Docker Container: /openshift-origin is not managed by kubelet.
I1023 17:57:11.375337       1 docker.go:368] Docker Container: /openshift-origin is not managed by kubelet.
I1023 17:57:11.476356       1 docker.go:368] Docker Container: /openshift-origin is not managed by kubelet.
I1023 17:57:11.577408       1 docker.go:368] Docker Container: /openshift-origin is not managed by kubelet.
@smarterclayton
Contributor

You're missing some mount instructions; you need to use the full command:

sudo docker run -d --name "origin" \
        --privileged --net=host \
        -v /:/rootfs:ro -v /var/run:/var/run:rw -v /sys:/sys:ro -v /var/lib/docker:/var/lib/docker:rw \
        -v /var/lib/openshift/openshift.local.volumes:/var/lib/openshift/openshift.local.volumes \
        openshift/origin start
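
For anyone verifying the fix: a rough sketch (my own, not the kubelet's actual code) of the check that fails without the -v /:/rootfs:ro mount. The kubelet's system validation reads /rootfs/proc/mounts, which only exists inside the container when the host root is bind-mounted there:

```shell
# Sketch only: mirrors why "open /rootfs/proc/mounts: no such file or directory"
# appears when the -v /:/rootfs:ro mount is omitted. The function name and
# layout are illustrative, not the kubelet's actual source.
check_rootfs_mount() {
  local root="${1:-/rootfs}"
  if [ -r "$root/proc/mounts" ]; then
    echo "ok: $root/proc/mounts is readable"
  else
    echo "missing: $root/proc/mounts (bind-mount / to $root)"
  fi
}

check_rootfs_mount /nonexistent   # reports the mount as missing
```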

@smugcloud
Author

Thanks @smarterclayton. That resolved the mount issue, but now I'm getting Failed to ensure state of "/docker-daemon": failed to find pid of Docker container: exit status 1 every minute. I'm not sure what's actually failing; here is the last part of the logs.

I1026 15:58:01.638834       1 start_master.go:565] Started Origin Controllers
Created node config for ip-172-31-46-11 in openshift.local.config/node-ip-172-31-46-11
I1026 15:58:05.318606       1 start_node.go:181] Starting a node connected to https://172.31.46.11:8443
I1026 15:58:05.326783       1 plugins.go:71] No cloud provider specified.
I1026 15:58:05.326860       1 start_node.go:276] Starting node ip-172-31-46-11 (v1.0.6-882-g8e1bbb5)
W1026 15:58:05.434502       1 node.go:121] Error running 'chcon' to set the kubelet volume root directory SELinux context: exit status 1
I1026 15:58:05.451914       1 node.go:56] Connecting to Docker at unix:///var/run/docker.sock
I1026 15:58:05.464558       1 manager.go:127] cAdvisor running in container: "/docker/87172024c0fd4ed27fd07e427f1261e88e47640890b55f5e3aece65f059ed85a"
I1026 15:58:05.466165       1 fs.go:93] Filesystem partitions: map[/dev/mapper/docker-202:1-263693-87172024c0fd4ed27fd07e427f1261e88e47640890b55f5e3aece65f059ed85a:{mountpoint:/ major:253 minor:2} /dev/xvda1:{mountpoint:/rootfs major:202 minor:1}]
I1026 15:58:05.483193       1 machine.go:49] Couldn't collect info from any of the files in "/etc/machine-id,/var/lib/dbus/machine-id"
I1026 15:58:05.483297       1 manager.go:158] Machine: {NumCores:1 CpuFrequency:2500092 MemoryCapacity:3950567424 MachineID: SystemUUID:EC2DD66C-1C29-752A-E92E-608E2C408DA3 BootID:9fab29dd-d589-4ac5-88d3-2999cf633ebe Filesystems:[{Device:/dev/mapper/docker-202:1-263693-87172024c0fd4ed27fd07e427f1261e88e47640890b55f5e3aece65f059ed85a Capacity:10434699264} {Device:/dev/xvda1 Capacity:8318783488}] DiskMap:map[202:0:{Name:xvda Major:202 Minor:0 Size:8589934592 Scheduler:noop} 253:0:{Name:dm-0 Major:253 Minor:0 Size:107374182400 Scheduler:none} 253:1:{Name:dm-1 Major:253 Minor:1 Size:10737418240 Scheduler:none} 253:2:{Name:dm-2 Major:253 Minor:2 Size:10737418240 Scheduler:none}] NetworkDevices:[{Name:eth0 MacAddress:12:ec:35:0c:c7:af Speed:0 Mtu:9001}] Topology:[{Id:0 Memory:3950567424 Cores:[{Id:0 Threads:[0] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2}]}] Caches:[{Size:26214400 Type:Unified Level:3}]}] CloudProvider:Unknown InstanceType:Unknown}
I1026 15:58:05.485451       1 manager.go:164] Version: {KernelVersion:4.1.7-15.23.amzn1.x86_64 ContainerOsVersion:CentOS Linux 7 (Core) DockerVersion:1.7.1 CadvisorVersion:0.16.0}
I1026 15:58:05.496505       1 server.go:801] Watching apiserver
I1026 15:58:05.728338       1 node.go:238] Started Kubernetes Proxy on 0.0.0.0
I1026 15:58:05.789449       1 plugins.go:56] Registering credential provider: .dockercfg
I1026 15:58:05.986897       1 server.go:760] Started kubelet
E1026 15:58:05.987074       1 kubelet.go:782] Image garbage collection failed: unable to find data for container /
I1026 15:58:06.013208       1 server.go:102] Starting to listen on 0.0.0.0:10250
I1026 15:58:06.088558       1 kubelet.go:803] Running in container "/kubelet"
I1026 15:58:07.166629       1 factory.go:235] Registering Docker factory
I1026 15:58:07.168449       1 factory.go:93] Registering Raw factory
I1026 15:58:07.588382       1 trace.go:57] Trace "Update *api.ServiceAccount" (started 2015-10-26 15:58:06.12853807 +0000 UTC):
[1.459824764s] [1.459824764s] END
I1026 15:58:07.834642       1 trace.go:57] Trace "Create *api.Node" (started 2015-10-26 15:58:06.789157086 +0000 UTC):
[239.701807ms] [239.701807ms] About to create object
[1.045457654s] [805.755847ms] Object created
[1.045459713s] [2.059µs] END
I1026 15:58:07.932888       1 kubelet.go:929] Successfully registered node ip-172-31-46-11
I1026 15:58:07.944291       1 manager.go:1001] Started watching for new ooms in manager
I1026 15:58:07.947053       1 oomparser.go:183] oomparser using systemd
I1026 15:58:07.979542       1 manager.go:245] Starting recovery of all containers
I1026 15:58:08.055651       1 manager.go:250] Recovery completed
I1026 15:58:08.063397       1 manager.go:104] Starting to sync pod status with apiserver
I1026 15:58:08.063532       1 kubelet.go:1917] Starting kubelet main sync loop.
W1026 15:58:08.073503       1 container_manager_linux.go:278] [ContainerManager] Failed to ensure state of "/docker-daemon": failed to find pid of Docker container: exit status 1
W1026 15:58:09.041631       1 nodecontroller.go:466] Missing timestamp for Node ip-172-31-46-11. Assuming now as a timestamp.
I1026 15:58:09.042202       1 event.go:216] Event(api.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-31-46-11", UID:"ip-172-31-46-11", APIVersion:"", ResourceVersion:"", FieldPath:""}): reason: 'RegisteredNode' Node ip-172-31-46-11 event: Registered Node ip-172-31-46-11 in NodeController
W1026 15:59:08.077315       1 container_manager_linux.go:278] [ContainerManager] Failed to ensure state of "/docker-daemon": failed to find pid of Docker container: exit status 1
W1026 16:00:08.081896       1 container_manager_linux.go:278] [ContainerManager] Failed to ensure state of "/docker-daemon": failed to find pid of Docker container: exit status 1
W1026 16:01:08.085812       1 container_manager_linux.go:278] [ContainerManager] Failed to ensure state of "/docker-daemon": failed to find pid of Docker container: exit status 1
I1026 16:02:03.778017       1 trace.go:57] Trace "List *api.ReplicationControllerList" (started 2015-10-26 16:02:03.067277322 +0000 UTC):
[2.605µs] [2.605µs] About to list directory
[710.707889ms] [710.705284ms] List extracted
[710.708681ms] [792ns] END
W1026 16:02:08.089753       1 container_manager_linux.go:278] [ContainerManager] Failed to ensure state of "/docker-daemon": failed to find pid of Docker container: exit status 1
W1026 16:03:08.093801       1 container_manager_linux.go:278] [ContainerManager] Failed to ensure state of "/docker-daemon": failed to find pid of Docker container: exit status 1
W1026 16:04:08.098551       1 container_manager_linux.go:278] [ContainerManager] Failed to ensure state of "/docker-daemon": failed to find pid of Docker container: exit status 1
W1026 16:05:08.102575       1 container_manager_linux.go:278] [ContainerManager] Failed to ensure state of "/docker-daemon": failed to find pid of Docker container: exit status 1
W1026 16:06:08.108110       1 container_manager_linux.go:278] [ContainerManager] Failed to ensure state of "/docker-daemon": failed to find pid of Docker container: exit status 1
W1026 16:07:08.112139       1 container_manager_linux.go:278] [ContainerManager] Failed to ensure state of "/docker-daemon": failed to find pid of Docker container: exit status 1
W1026 16:08:08.116886       1 container_manager_linux.go:278] [ContainerManager] Failed to ensure state of "/docker-daemon": failed to find pid of Docker container: exit status 1
W1026 16:09:08.121036       1 container_manager_linux.go:278] [ContainerManager] Failed to ensure state of "/docker-daemon": failed to find pid of Docker container: exit status 1
W1026 16:10:08.125926       1 container_manager_linux.go:278] [ContainerManager] Failed to ensure state of "/docker-daemon": failed to find pid of Docker container: exit status 1
W1026 16:11:08.130220       1 container_manager_linux.go:278] [ContainerManager] Failed to ensure state of "/docker-daemon": failed to find pid of Docker container: exit status 1
W1026 16:12:08.150115       1 container_manager_linux.go:278] [ContainerManager] Failed to ensure state of "/docker-daemon": failed to find pid of Docker container: exit status 1

@smarterclayton
Contributor

That's a known limitation of cadvisor / kubernetes / openshift inside of a docker container (kubernetes/kubernetes#9689). It's not yet resolved. You can ignore the error for now.
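
For anyone curious where the message comes from: roughly, the container manager looks up the Docker daemon's PID in the process table it can see. A hypothetical sketch (not the actual kubelet source) of why this fails without --pid=host, since dockerd lives in the host's PID namespace:

```shell
# Illustrative only; the function name is made up. Without --pid=host the
# host's dockerd is outside the container's PID namespace, so any PID lookup
# fails and the container manager logs the warning seen above.
find_daemon_pid() {
  pgrep -x "$1"   # exits non-zero when no matching process is visible
}

if ! find_daemon_pid dockerd >/dev/null 2>&1; then
  echo 'failed to find pid of Docker container: exit status 1'
fi
```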

@smarterclayton
Contributor

We've updated the docs on docs.openshift.org with additional flags to pass to the docker run command (adding --pid=host and removing 'ro' from /sys:/sys). I believe this can now be closed.

@ChadCorsentino

ChadCorsentino commented May 25, 2017

This still happens with Docker v17.04.0.
The new command provided in the OpenShift docs causes the container to run, but it exits shortly after.

$ sudo docker run -t -i --name "origin" --privileged --pid=host --net=host -v /:/rootfs:ro -v /var/run:/var/run:rw -v /sys:/sys -v /sys/fs/cgroup:/sys/fs/cgroup:rw -v /var/lib/docker:/var/lib/docker:rw -v /var/lib/origin/openshift.local.volumes:/var/lib/origin/openshift.local.volumes:rslave openshift/origin start

Here's the output if you run it with -t -i instead of -d:

W0525 17:27:56.608088 18769 start_master.go:288] Warning: assetConfig.loggingPublicURL: Invalid value: "": required to view aggregated container logs in the console, master start will continue.
W0525 17:27:56.608161 18769 start_master.go:288] Warning: assetConfig.metricsPublicURL: Invalid value: "": required to view cluster metrics in the console, master start will continue.
W0525 17:27:56.608170 18769 start_master.go:288] Warning: auditConfig.auditFilePath: Required value: audit can now be logged to a separate file, master start will continue.
I0525 17:27:56.613275 18769 logs.go:41] warning: ignoring ServerName for user-provided CA for backwards compatibility is deprecated
2017-05-25 17:27:56.614019 I | etcdserver/api/v3rpc: grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp 10.34.61.217:4001: getsockopt: connection refused"; Reconnecting to {10.34.61.217:4001 }
I0525 17:27:56.614075 18769 logs.go:41] warning: ignoring ServerName for user-provided CA for backwards compatibility is deprecated
2017-05-25 17:27:56.614589 I | etcdserver/api/v3rpc: grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp 10.34.61.217:4001: getsockopt: connection refused"; Reconnecting to {10.34.61.217:4001 }
I0525 17:27:56.614745 18769 logs.go:41] warning: ignoring ServerName for user-provided CA for backwards compatibility is deprecated
2017-05-25 17:27:56.615038 I | etcdserver/api/v3rpc: grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp 10.34.61.217:4001: getsockopt: connection refused"; Reconnecting to {10.34.61.217:4001 }
I0525 17:27:56.615425 18769 logs.go:41] warning: ignoring ServerName for user-provided CA for backwards compatibility is deprecated
2017-05-25 17:27:56.615844 I | etcdserver/api/v3rpc: grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp 10.34.61.217:4001: getsockopt: connection refused"; Reconnecting to {10.34.61.217:4001 }
I0525 17:27:56.616145 18769 logs.go:41] warning: ignoring ServerName for user-provided CA for backwards compatibility is deprecated
2017-05-25 17:27:56.616584 I | etcdserver/api/v3rpc: grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp 10.34.61.217:4001: getsockopt: connection refused"; Reconnecting to {10.34.61.217:4001 }
I0525 17:27:56.619724 18769 logs.go:41] warning: ignoring ServerName for user-provided CA for backwards compatibility is deprecated
I0525 17:27:56.620366 18769 logs.go:41] warning: ignoring ServerName for user-provided CA for backwards compatibility is deprecated
I0525 17:27:56.621069 18769 logs.go:41] warning: ignoring ServerName for user-provided CA for backwards compatibility is deprecated
I0525 17:27:56.621776 18769 logs.go:41] warning: ignoring ServerName for user-provided CA for backwards compatibility is deprecated
I0525 17:27:56.622033 18769 plugins.go:101] No cloud provider specified.
2017-05-25 17:27:56.624003 I | etcdserver/api/v3rpc: grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp 10.34.61.217:4001: getsockopt: connection refused"; Reconnecting to {10.34.61.217:4001 }
2017-05-25 17:27:56.624036 I | etcdserver/api/v3rpc: grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp 10.34.61.217:4001: getsockopt: connection refused"; Reconnecting to {10.34.61.217:4001 }
2017-05-25 17:27:56.624061 I | etcdserver/api/v3rpc: grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp 10.34.61.217:4001: getsockopt: connection refused"; Reconnecting to {10.34.61.217:4001 }
2017-05-25 17:27:56.624085 I | etcdserver/api/v3rpc: grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp 10.34.61.217:4001: getsockopt: connection refused"; Reconnecting to {10.34.61.217:4001 }
I0525 17:27:56.746174 18769 logs.go:41] warning: ignoring ServerName for user-provided CA for backwards compatibility is deprecated
2017-05-25 17:27:56.746674 I | etcdserver/api/v3rpc: grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp 10.34.61.217:4001: getsockopt: connection refused"; Reconnecting to {10.34.61.217:4001 }
I0525 17:27:56.746855 18769 logs.go:41] warning: ignoring ServerName for user-provided CA for backwards compatibility is deprecated
I0525 17:27:56.746963 18769 start_master.go:427] Starting master on 0.0.0.0:8443 (v3.6.0-alpha.1+4b3473a-839)
I0525 17:27:56.746971 18769 start_master.go:428] Public master address is https://10.34.61.217:8443
I0525 17:27:56.746985 18769 start_master.go:432] Using images from "openshift/origin-:v3.6.0-alpha.1"
2017-05-25 17:27:56.747060 I | embed: peerTLS: cert = openshift.local.config/master/etcd.server.crt, key = openshift.local.config/master/etcd.server.key, ca = openshift.local.config/master/ca.crt, trusted-ca = , client-cert-auth = true
2017-05-25 17:27:56.747283 I | etcdserver/api/v3rpc: grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp 10.34.61.217:4001: getsockopt: connection refused"; Reconnecting to {10.34.61.217:4001 }
2017-05-25 17:27:56.747681 I | embed: listening for peers on https://0.0.0.0:7001
2017-05-25 17:27:56.747719 I | embed: listening for client requests on 0.0.0.0:4001
I0525 17:27:56.766803 18769 logs.go:41] warning: ignoring ServerName for user-provided CA for backwards compatibility is deprecated
2017-05-25 17:27:56.766882 I | etcdserver: name = openshift.local
2017-05-25 17:27:56.766891 I | etcdserver: data dir = openshift.local.etcd
2017-05-25 17:27:56.766896 I | etcdserver: member dir = openshift.local.etcd/member
2017-05-25 17:27:56.766901 I | etcdserver: heartbeat = 100ms
2017-05-25 17:27:56.766904 I | etcdserver: election = 1000ms
2017-05-25 17:27:56.766908 I | etcdserver: snapshot count = 10000
2017-05-25 17:27:56.766918 I | etcdserver: advertise client URLs = https://10.34.61.217:4001
2017-05-25 17:27:56.766975 I | etcdserver: initial advertise peer URLs = https://10.34.61.217:7001
2017-05-25 17:27:56.766987 I | etcdserver: initial cluster = openshift.local=https://10.34.61.217:7001
2017-05-25 17:27:56.770586 I | etcdserver: starting member 5f841db091715cd3 in cluster 41af6c40e8e4df61
2017-05-25 17:27:56.770619 I | raft: 5f841db091715cd3 became follower at term 0
2017-05-25 17:27:56.770633 I | raft: newRaft 5f841db091715cd3 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
2017-05-25 17:27:56.770638 I | raft: 5f841db091715cd3 became follower at term 1
I0525 17:27:56.779014 18769 logs.go:41] warning: ignoring ServerName for user-provided CA for backwards compatibility is deprecated
I0525 17:27:56.779661 18769 logs.go:41] warning: ignoring ServerName for user-provided CA for backwards compatibility is deprecated
2017-05-25 17:27:56.779686 I | etcdserver: starting server... [version: 3.1.0, cluster version: to_be_decided]
2017-05-25 17:27:56.779716 I | embed: ClientTLS: cert = openshift.local.config/master/etcd.server.crt, key = openshift.local.config/master/etcd.server.key, ca = openshift.local.config/master/ca.crt, trusted-ca = , client-cert-auth = true
2017-05-25 17:27:56.780372 I | etcdserver/membership: added member 5f841db091715cd3 [https://10.34.61.217:7001] to cluster 41af6c40e8e4df61
2017-05-25 17:27:57.070905 I | raft: 5f841db091715cd3 is starting a new election at term 1
2017-05-25 17:27:57.070978 I | raft: 5f841db091715cd3 became candidate at term 2
2017-05-25 17:27:57.071015 I | raft: 5f841db091715cd3 received MsgVoteResp from 5f841db091715cd3 at term 2
2017-05-25 17:27:57.071030 I | raft: 5f841db091715cd3 became leader at term 2
2017-05-25 17:27:57.071037 I | raft: raft.node: 5f841db091715cd3 elected leader 5f841db091715cd3 at term 2
2017-05-25 17:27:57.071317 I | etcdserver: setting up the initial cluster version to 3.1
2017-05-25 17:27:57.072054 N | etcdserver/membership: set the initial cluster version to 3.1
2017-05-25 17:27:57.072101 I | etcdserver/api: enabled capabilities for version 3.1
2017-05-25 17:27:57.072128 I | etcdserver: published {Name:openshift.local ClientURLs:[https://10.34.61.217:4001]} to cluster 41af6c40e8e4df61
I0525 17:27:57.072149 18769 run.go:85] Started etcd at 10.34.61.217:4001
2017-05-25 17:27:57.072284 I | embed: ready to serve client requests
2017-05-25 17:27:57.072610 I | embed: serving client requests on [::]:4001
2017-05-25 17:27:57.072756 I | etcdserver/api/v3rpc: grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp [::]:4001: connect: network is unreachable"; Reconnecting to {[::]:4001 }
2017-05-25 17:27:57.072811 I | etcdserver/api/v3rpc: grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp [::]:4001: connect: network is unreachable"; Reconnecting to {[::]:4001 }
2017-05-25 17:27:57.072859 I | etcdserver/api/v3rpc: grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp [::]:4001: connect: network is unreachable"; Reconnecting to {[::]:4001 }
2017-05-25 17:27:57.072905 I | etcdserver/api/v3rpc: grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp [::]:4001: connect: network is unreachable"; Reconnecting to {[::]:4001 }
2017-05-25 17:27:57.072998 I | etcdserver/api/v3rpc: grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp [::]:4001: connect: network is unreachable"; Reconnecting to {[::]:4001 }
2017-05-25 17:27:57.073017 I | etcdserver/api/v3rpc: grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp [::]:4001: connect: network is unreachable"; Reconnecting to {[::]:4001 }
I0525 17:27:57.109721 18769 run_components.go:249] Using default project node label selector:
I0525 17:27:57.110997 18769 clusterquotamapping.go:101] Starting ClusterQuotaMappingController controller
E0525 17:27:57.111221 18769 reflector.go:190] github.com/openshift/origin/pkg/project/cache/cache.go:109: Failed to list *api.Namespace: Get https://10.34.61.217:8443/api/v1/namespaces?resourceVersion=0: dial tcp 10.34.61.217:8443: getsockopt: connection refused
E0525 17:27:57.111265 18769 reflector.go:201] github.com/openshift/origin/pkg/controller/shared/shared_informer.go:109: Failed to list *api.ClusterResourceQuota: Get https://10.34.61.217:8443/oapi/v1/clusterresourcequotas?resourceVersion=0: dial tcp 10.34.61.217:8443: getsockopt: connection refused
I0525 17:27:57.111457 18769 logs.go:41] warning: ignoring ServerName for user-provided CA for backwards compatibility is deprecated
W0525 17:27:57.274075 18769 genericapiserver.go:315] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
I0525 17:27:57.600896 18769 master.go:194] Started OAuth2 API at 0.0.0.0:8443/oauth
I0525 17:27:57.600912 18769 master.go:194] Started Web Console 0.0.0.0:8443/console/
I0525 17:27:57.600916 18769 master.go:194] Started Kubernetes API at 0.0.0.0:8443/api
I0525 17:27:57.600920 18769 master.go:194] Started Kubernetes API apps/v1beta1 at 0.0.0.0:8443/apis
I0525 17:27:57.600939 18769 master.go:194] Started Kubernetes API authentication.k8s.io/v1beta1 at 0.0.0.0:8443/apis
I0525 17:27:57.600943 18769 master.go:194] Started Kubernetes API authentication.k8s.io/v1 at 0.0.0.0:8443/apis
I0525 17:27:57.600947 18769 master.go:194] Started Kubernetes API authorization.k8s.io/v1 at 0.0.0.0:8443/apis
I0525 17:27:57.600950 18769 master.go:194] Started Kubernetes API authorization.k8s.io/v1beta1 at 0.0.0.0:8443/apis
I0525 17:27:57.600954 18769 master.go:194] Started Kubernetes API autoscaling/v1 at 0.0.0.0:8443/apis
I0525 17:27:57.600958 18769 master.go:194] Started Kubernetes API autoscaling/v2alpha1 at 0.0.0.0:8443/apis
I0525 17:27:57.600961 18769 master.go:194] Started Kubernetes API batch/v1 at 0.0.0.0:8443/apis
I0525 17:27:57.600965 18769 master.go:194] Started Kubernetes API batch/v2alpha1 at 0.0.0.0:8443/apis
I0525 17:27:57.600969 18769 master.go:194] Started Kubernetes API certificates.k8s.io/v1beta1 at 0.0.0.0:8443/apis
I0525 17:27:57.600972 18769 master.go:194] Started Kubernetes API extensions/v1beta1 at 0.0.0.0:8443/apis
I0525 17:27:57.600976 18769 master.go:194] Started Kubernetes API policy/v1beta1 at 0.0.0.0:8443/apis
I0525 17:27:57.600980 18769 master.go:194] Started Kubernetes API rbac.authorization.k8s.io/v1beta1 at 0.0.0.0:8443/apis
I0525 17:27:57.600984 18769 master.go:194] Started Kubernetes API settings.k8s.io/v1alpha1 at 0.0.0.0:8443/apis
I0525 17:27:57.600987 18769 master.go:194] Started Kubernetes API storage.k8s.io/v1 at 0.0.0.0:8443/apis
I0525 17:27:57.600991 18769 master.go:194] Started Kubernetes API storage.k8s.io/v1beta1 at 0.0.0.0:8443/apis
I0525 17:27:57.600995 18769 master.go:194] Started Swagger Schema API at 0.0.0.0:8443/swaggerapi/
I0525 17:27:57.600999 18769 master.go:194] Started OpenAPI Schema at 0.0.0.0:8443/swagger.json
[restful] 2017/05/25 17:27:57 log.go:30: [restful/swagger] listing is available at https://10.34.61.217:8443/swaggerapi/
[restful] 2017/05/25 17:27:57 log.go:30: [restful/swagger] https://10.34.61.217:8443/swaggerui/ is mapped to folder /swagger-ui/
I0525 17:27:57.804468 18769 trace.go:61] Trace "cacher *api.PolicyBinding: List" (started 2017-05-25 17:27:57.110772001 +0000 UTC):
[693.624469ms] [693.624469ms] Ready
[693.629977ms] [5.508µs] watchCache locked acquired
[693.630183ms] [206ns] watchCache fresh enough
[693.633457ms] [3.274µs] Listed 0 items from cache
[693.634421ms] [964ns] Filtered 0 items
"cacher *api.PolicyBinding: List" [693.644501ms] [10.08µs] END
I0525 17:27:57.805857 18769 trace.go:61] Trace "cacher *api.ClusterPolicy: List" (started 2017-05-25 17:27:57.110964047 +0000 UTC):
[694.85531ms] [694.85531ms] Ready
[694.859343ms] [4.033µs] watchCache locked acquired
[694.859536ms] [193ns] watchCache fresh enough
[694.862317ms] [2.781µs] Listed 0 items from cache
[694.86316ms] [843ns] Filtered 0 items
"cacher *api.ClusterPolicy: List" [694.868321ms] [5.161µs] END
I0525 17:27:57.807822 18769 trace.go:61] Trace "cacher *api.Policy: List" (started 2017-05-25 17:27:57.109987852 +0000 UTC):
[697.799389ms] [697.799389ms] Ready
[697.803087ms] [3.698µs] watchCache locked acquired
[697.803301ms] [214ns] watchCache fresh enough
[697.806041ms] [2.74µs] Listed 0 items from cache
[697.806785ms] [744ns] Filtered 0 items
"cacher *api.Policy: List" [697.8115ms] [4.715µs] END
I0525 17:27:57.808828 18769 trace.go:61] Trace "cacher *api.ClusterPolicyBinding: List" (started 2017-05-25 17:27:57.110590534 +0000 UTC):
[698.196535ms] [698.196535ms] Ready
[698.201251ms] [4.716µs] watchCache locked acquired
[698.201445ms] [194ns] watchCache fresh enough
[698.204237ms] [2.792µs] Listed 0 items from cache
[698.205641ms] [1.404µs] Filtered 0 items
"cacher *api.ClusterPolicyBinding: List" [698.215336ms] [9.695µs] END
I0525 17:27:57.827433 18769 trace.go:61] Trace "cacher *api.Group: List" (started 2017-05-25 17:27:57.110990378 +0000 UTC):
[716.394304ms] [716.394304ms] Ready
[716.408621ms] [14.317µs] watchCache locked acquired
[716.408824ms] [203ns] watchCache fresh enough
[716.411971ms] [3.147µs] Listed 0 items from cache
[716.412877ms] [906ns] Filtered 0 items
"cacher *api.Group: List" [716.419711ms] [6.834µs] END
I0525 17:27:57.951510 18769 serve.go:86] Serving securely on 0.0.0.0:8443
W0525 17:27:58.057544 18769 run_components.go:218] Binding DNS on port 8053 instead of 53, which may not be resolvable from all clients
I0525 17:27:58.057802 18769 logs.go:41] skydns: ready for queries on cluster.local. for tcp4://0.0.0.0:8053 [rcache 0]
I0525 17:27:58.057813 18769 logs.go:41] skydns: ready for queries on cluster.local. for udp4://0.0.0.0:8053 [rcache 0]
2017-05-25 17:27:58.072933 I | etcdserver/api/v3rpc: grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp [::]:4001: connect: network is unreachable"; Reconnecting to {[::]:4001 }
2017-05-25 17:27:58.073091 I | etcdserver/api/v3rpc: grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp [::]:4001: connect: network is unreachable"; Reconnecting to {[::]:4001 }
2017-05-25 17:27:58.073133 I | etcdserver/api/v3rpc: grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp [::]:4001: connect: network is unreachable"; Reconnecting to {[::]:4001 }
2017-05-25 17:27:58.073145 I | etcdserver/api/v3rpc: grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp [::]:4001: connect: network is unreachable"; Reconnecting to {[::]:4001 }
2017-05-25 17:27:58.073191 I | etcdserver/api/v3rpc: grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp [::]:4001: connect: network is unreachable"; Reconnecting to {[::]:4001 }
2017-05-25 17:27:58.073209 I | etcdserver/api/v3rpc: grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp [::]:4001: connect: network is unreachable"; Reconnecting to {[::]:4001 }
I0525 17:27:58.158229 18769 run_components.go:244] DNS listening at 0.0.0.0:8053
I0525 17:27:58.171180 18769 ensure.go:223] No cluster policy found. Creating bootstrap policy based on: openshift.local.config/master/policy.json
W0525 17:27:58.250308 18769 lease_endpoint_reconciler.go:176] Resetting endpoints for master service "kubernetes" to [10.34.61.217]
I0525 17:27:58.958707 18769 storage_rbac.go:168] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0525 17:27:58.978855 18769 storage_rbac.go:168] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0525 17:27:58.995140 18769 storage_rbac.go:168] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0525 17:27:59.001917 18769 storage_rbac.go:168] created clusterrole.rbac.authorization.k8s.io/admin
I0525 17:27:59.017619 18769 storage_rbac.go:168] created clusterrole.rbac.authorization.k8s.io/edit
I0525 17:27:59.023127 18769 storage_rbac.go:168] created clusterrole.rbac.authorization.k8s.io/view
I0525 17:27:59.025862 18769 storage_rbac.go:168] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0525 17:27:59.028565 18769 storage_rbac.go:168] created clusterrole.rbac.authorization.k8s.io/system:node
I0525 17:27:59.031249 18769 storage_rbac.go:168] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0525 17:27:59.045821 18769 storage_rbac.go:168] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0525 17:27:59.053013 18769 storage_rbac.go:168] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0525 17:27:59.056196 18769 storage_rbac.go:168] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0525 17:27:59.067780 18769 storage_rbac.go:168] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0525 17:27:59.070703 18769 storage_rbac.go:168] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0525 17:27:59.073684 18769 storage_rbac.go:168] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0525 17:27:59.076533 18769 storage_rbac.go:168] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0525 17:27:59.112395 18769 storage_rbac.go:168] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0525 17:27:59.176337 18769 storage_rbac.go:168] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0525 17:27:59.214732 18769 storage_rbac.go:168] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0525 17:27:59.217514 18769 storage_rbac.go:168] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0525 17:27:59.225276 18769 storage_rbac.go:168] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0525 17:27:59.228169 18769 storage_rbac.go:168] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0525 17:27:59.230864 18769 storage_rbac.go:168] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0525 17:27:59.233434 18769 storage_rbac.go:168] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0525 17:27:59.236901 18769 storage_rbac.go:168] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0525 17:27:59.239696 18769 storage_rbac.go:168] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0525 17:27:59.242361 18769 storage_rbac.go:168] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0525 17:27:59.246549 18769 storage_rbac.go:168] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0525 17:27:59.253187 18769 storage_rbac.go:168] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0525 17:27:59.255739 18769 storage_rbac.go:168] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0525 17:27:59.258411 18769 storage_rbac.go:168] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0525 17:27:59.274253 18769 storage_rbac.go:168] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0525 17:27:59.281516 18769 storage_rbac.go:168] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0525 17:27:59.284225 18769 storage_rbac.go:168] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0525 17:27:59.286986 18769 storage_rbac.go:168] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0525 17:27:59.289522 18769 storage_rbac.go:168] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0525 17:27:59.292337 18769 storage_rbac.go:168] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0525 17:27:59.295100 18769 storage_rbac.go:168] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0525 17:27:59.319139 18769 storage_rbac.go:168] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0525 17:27:59.329939 18769 storage_rbac.go:196] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0525 17:27:59.332544 18769 storage_rbac.go:196] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0525 17:27:59.335153 18769 storage_rbac.go:196] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0525 17:27:59.341375 18769 storage_rbac.go:196] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0525 17:27:59.356298 18769 storage_rbac.go:196] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0525 17:27:59.359318 18769 storage_rbac.go:196] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0525 17:27:59.366900 18769 storage_rbac.go:196] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0525 17:27:59.380369 18769 storage_rbac.go:196] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0525 17:27:59.389840 18769 storage_rbac.go:196] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0525 17:27:59.392305 18769 storage_rbac.go:196] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0525 17:27:59.395099 18769 storage_rbac.go:196] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0525 17:27:59.397796 18769 storage_rbac.go:196] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0525 17:27:59.411262 18769 storage_rbac.go:196] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0525 17:27:59.431606 18769 storage_rbac.go:196] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
2017-05-25 17:27:59.444067 I | etcdserver/api/v3rpc: grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp [::]:4001: connect: network is unreachable"; Reconnecting to {[::]:4001 }
I0525 17:27:59.445360 18769 storage_rbac.go:196] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0525 17:27:59.448856 18769 storage_rbac.go:196] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0525 17:27:59.488319 18769 storage_rbac.go:196] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0525 17:27:59.569685 18769 storage_rbac.go:196] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0525 17:27:59.572750 18769 storage_rbac.go:196] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0525 17:27:59.585520 18769 storage_rbac.go:196] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0525 17:27:59.601451 18769 storage_rbac.go:196] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0525 17:27:59.604185 18769 storage_rbac.go:196] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0525 17:27:59.606809 18769 storage_rbac.go:196] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0525 17:27:59.609388 18769 storage_rbac.go:196] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0525 17:27:59.611922 18769 storage_rbac.go:196] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0525 17:27:59.620200 18769 storage_rbac.go:196] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
2017-05-25 17:27:59.628134 I | etcdserver/api/v3rpc: grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp [::]:4001: connect: network is unreachable"; Reconnecting to {[::]:4001 }
I0525 17:27:59.628428 18769 storage_rbac.go:196] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0525 17:27:59.631085 18769 storage_rbac.go:196] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0525 17:27:59.633804 18769 storage_rbac.go:196] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0525 17:27:59.636413 18769 storage_rbac.go:196] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0525 17:27:59.644157 18769 storage_rbac.go:227] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0525 17:27:59.652876 18769 storage_rbac.go:227] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0525 17:27:59.661020 18769 storage_rbac.go:227] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0525 17:27:59.673267 18769 storage_rbac.go:227] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0525 17:27:59.676312 18769 storage_rbac.go:257] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0525 17:27:59.678961 18769 storage_rbac.go:257] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0525 17:27:59.681562 18769 storage_rbac.go:257] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
2017-05-25 17:27:59.790699 I | etcdserver/api/v3rpc: grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp [::]:4001: connect: network is unreachable"; Reconnecting to {[::]:4001 }
2017-05-25 17:27:59.902668 I | etcdserver/api/v3rpc: grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp [::]:4001: connect: network is unreachable"; Reconnecting to {[::]:4001 }
2017-05-25 17:27:59.922349 I | etcdserver/api/v3rpc: grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp [::]:4001: connect: network is unreachable"; Reconnecting to {[::]:4001 }
2017-05-25 17:27:59.966203 I | etcdserver/api/v3rpc: grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp [::]:4001: connect: network is unreachable"; Reconnecting to {[::]:4001 }
I0525 17:28:00.528860 18769 ensure.go:208] Created default security context constraint privileged
I0525 17:28:00.530522 18769 ensure.go:208] Created default security context constraint nonroot
I0525 17:28:00.533225 18769 ensure.go:208] Created default security context constraint hostmount-anyuid
I0525 17:28:00.534978 18769 ensure.go:208] Created default security context constraint hostaccess
I0525 17:28:00.536437 18769 ensure.go:208] Created default security context constraint restricted
I0525 17:28:00.538225 18769 ensure.go:208] Created default security context constraint anyuid
I0525 17:28:00.551349 18769 ensure.go:208] Created default security context constraint hostnetwork
I0525 17:28:00.835984 18769 start_master.go:601] Controllers starting (
)
E0525 17:28:00.837012 18769 util.go:45] Metric for serviceaccount_controller already registered
I0525 17:28:00.837973 18769 serviceaccounts_controller.go:122] Starting ServiceAccount controller
I0525 17:28:00.943643 18769 create_dockercfg_secrets.go:219] Dockercfg secret controller initialized, starting.
2017-05-25 17:28:01.876221 I | etcdserver/api/v3rpc: grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp [::]:4001: connect: network is unreachable"; Reconnecting to {[::]:4001 }
2017-05-25 17:28:01.885125 I | etcdserver/api/v3rpc: grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp [::]:4001: connect: network is unreachable"; Reconnecting to {[::]:4001 }
W0525 17:28:01.891904 18769 start_master.go:690] "ttl" is skipped
2017-05-25 17:28:01.997390 I | etcdserver/api/v3rpc: grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp [::]:4001: connect: network is unreachable"; Reconnecting to {[::]:4001 }
I0525 17:28:02.038973 18769 start_master.go:708] Started "endpoint"
2017-05-25 17:28:02.222421 I | etcdserver/api/v3rpc: grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp [::]:4001: connect: network is unreachable"; Reconnecting to {[::]:4001 }
W0525 17:28:02.341713 18769 shared_informer.go:298] resyncPeriod 300000000000 is smaller than resyncCheckPeriod 600000000000 and the informer has already started. Changing it to 600000000000
I0525 17:28:02.341743 18769 start_master.go:708] Started "namespace"
W0525 17:28:02.341755 18769 start_master.go:690] "serviceaccount" is skipped
I0525 17:28:02.341919 18769 namespace_controller.go:177] Starting the NamespaceController
E0525 17:28:02.392625 18769 watch.go:212] unable to encode watch object: http2: stream closed (&streaming.encoder{writer:(*framer.lengthDelimitedFrameWriter)(0xc42e10b2c0), encoder:(*versioning.codec)(0xc42f960630), buf:(*bytes.Buffer)(0xc4329a83f0)})
2017-05-25 17:28:02.435252 I | etcdserver/api/v3rpc: grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp [::]:4001: connect: network is unreachable"; Reconnecting to {[::]:4001 }
I0525 17:28:02.480763 18769 start_master.go:708] Started "garbagecollector"
I0525 17:28:02.480854 18769 garbagecollector.go:111] Garbage Collector: Initializing
I0525 17:28:02.670622 18769 docker.go:364] Connecting to docker on unix:///var/run/docker.sock
I0525 17:28:02.670643 18769 docker.go:384] Start docker client with request timeout=2m0s
W0525 17:28:02.675842 18769 cni.go:157] Unable to update cni config: No networks found in /etc/cni/net.d
I0525 17:28:02.691151 18769 garbagecollector.go:116] Garbage Collector: All resource monitors have synced. Proceeding to collect garbage
I0525 17:28:02.730218 18769 start_master.go:708] Started "statefuleset"
I0525 17:28:02.730338 18769 stateful_set.go:144] Starting statefulset controller
2017-05-25 17:28:02.761608 I | etcdserver/api/v3rpc: grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp [::]:4001: connect: network is unreachable"; Reconnecting to {[::]:4001 }
I0525 17:28:02.790158 18769 node_config.go:360] DNS Bind to 10.34.61.217:53
I0525 17:28:02.790179 18769 start_node.go:344] Starting node ip-10-34-61-217 (v3.6.0-alpha.1+4b3473a-839)
I0525 17:28:02.791520 18769 start_node.go:353] Connecting to API server https://10.34.61.217:8443
I0525 17:28:02.802457 18769 start_master.go:708] Started "replicaset"
I0525 17:28:02.802536 18769 replica_set.go:155] Starting ReplicaSet controller
W0525 17:28:02.813293 18769 node.go:207] Error running 'chcon' to set the kubelet volume root directory SELinux context: exit status 1
I0525 17:28:02.813314 18769 docker.go:364] Connecting to docker on unix:///var/run/docker.sock
I0525 17:28:02.813322 18769 docker.go:384] Start docker client with request timeout=2m0s
I0525 17:28:02.817508 18769 node.go:143] Connecting to Docker at unix:///var/run/docker.sock
I0525 17:28:02.873973 18769 feature_gate.go:144] feature gates: map[]
I0525 17:28:02.874370 18769 manager.go:143] cAdvisor running in container: "/docker/6f8f842bf2a8562c72efb38447915e7cbae0ee8755f2b31d36b1972afc4d4731"
I0525 17:28:02.903224 18769 node.go:364] Using iptables Proxier.
W0525 17:28:02.907756 18769 manager.go:151] unable to connect to Rkt api service: rkt: cannot tcp Dial rkt api service: dial tcp 127.0.0.1:15441: getsockopt: connection refused
W0525 17:28:02.913564 18769 node.go:501] Failed to retrieve node info: nodes "ip-10-34-61-217" not found
W0525 17:28:02.913634 18769 proxier.go:309] invalid nodeIP, initializing kube-proxy with 127.0.0.1 as nodeIP
W0525 17:28:02.913641 18769 proxier.go:314] clusterCIDR not specified, unable to distinguish between internal and external traffic
I0525 17:28:02.913657 18769 node.go:393] Tearing down userspace rules.
I0525 17:28:02.961912 18769 fs.go:117] Filesystem partitions: map[/dev/mapper/vgroot-lvapps:{mountpoint:/rootfs/apps major:253 minor:7 fsType:ext3 blockSize:0} /dev/mapper/vgroot-lvITT:{mountpoint:/rootfs/apps/tools major:253 minor:4 fsType:ext3 blockSize:0} /dev/mapper/vgroot-lvusr:{mountpoint:/rootfs/usr major:253 minor:2 fsType:ext3 blockSize:0} /dev/xvda1:{mountpoint:/rootfs/boot major:202 minor:1 fsType:ext3 blockSize:0} /dev/mapper/vgroot-lvvar:{mountpoint:/var/lib/docker/overlay major:253 minor:8 fsType:ext3 blockSize:0} /dev/mapper/vgroot-lvcrash:{mountpoint:/rootfs/var/crash major:253 minor:6 fsType:ext3 blockSize:0} /dev/mapper/vgroot-lvhome:{mountpoint:/rootfs/home major:253 minor:5 fsType:ext3 blockSize:0} /dev/xvda3:{mountpoint:/rootfs/UTS major:202 minor:3 fsType:ext3 blockSize:0} /dev/mapper/vgroot-lvroot:{mountpoint:/rootfs major:253 minor:1 fsType:ext3 blockSize:0} /dev/mapper/vgroot-lvlocal:{mountpoint:/rootfs/usr/local major:253 minor:3 fsType:ext3 blockSize:0} /dev/mapper/vgroot-lvtmp:{mountpoint:/rootfs/tmp major:253 minor:9 fsType:ext3 blockSize:0}]
E0525 17:28:02.992515 18769 certificates.go:38] Failed to start certificate controller: open : no such file or directory
W0525 17:28:02.992556 18769 start_master.go:705] Skipping "certificatesigningrequests"
W0525 17:28:02.992570 18769 start_master.go:690] "bootstrapsigner" is skipped
I0525 17:28:03.031227 18769 manager.go:198] Machine: {NumCores:2 CpuFrequency:2400217 MemoryCapacity:7933206528 MachineID:9bb9faf500524e0fb00601fddbc33aca SystemUUID:EC2C81F3-5B17-7EAA-6C6E-ECF2D11E09D4 BootID:7d908860-dae5-4075-9ec4-7197753220ad Filesystems:[{Device:/dev/mapper/vgroot-lvITT DeviceMajor:253 DeviceMinor:4 Capacity:5150212096 Type:vfs Inodes:327680 HasInodes:true} {Device:/dev/mapper/vgroot-lvusr DeviceMajor:253 DeviceMinor:2 Capacity:4093313024 Type:vfs Inodes:262144 HasInodes:true} {Device:/dev/xvda1 DeviceMajor:202 DeviceMinor:1 Capacity:511647744 Type:vfs Inodes:32768 HasInodes:true} {Device:/dev/mapper/vgroot-lvvar DeviceMajor:253 DeviceMinor:8 Capacity:4093313024 Type:vfs Inodes:262144 HasInodes:true} {Device:/dev/mapper/vgroot-lvapps DeviceMajor:253 DeviceMinor:7 Capacity:4093313024 Type:vfs Inodes:262144 HasInodes:true} {Device:/dev/mapper/vgroot-lvhome DeviceMajor:253 DeviceMinor:5 Capacity:511647744 Type:vfs Inodes:32768 HasInodes:true} {Device:/dev/xvda3 DeviceMajor:202 DeviceMinor:3 Capacity:279942930432 Type:vfs Inodes:17367040 HasInodes:true} {Device:overlay DeviceMajor:0 DeviceMinor:38 Capacity:4093313024 Type:vfs Inodes:262144 HasInodes:true} {Device:/dev/mapper/vgroot-lvroot DeviceMajor:253 DeviceMinor:1 Capacity:2046640128 Type:vfs Inodes:131072 HasInodes:true} {Device:/dev/mapper/vgroot-lvlocal DeviceMajor:253 DeviceMinor:3 Capacity:251575296 Type:vfs Inodes:65536 HasInodes:true} {Device:/dev/mapper/vgroot-lvtmp DeviceMajor:253 DeviceMinor:9 Capacity:4160421888 Type:vfs Inodes:262144 HasInodes:true} {Device:/dev/mapper/vgroot-lvcrash DeviceMajor:253 DeviceMinor:6 Capacity:2046640128 Type:vfs Inodes:131072 HasInodes:true}] DiskMap:map[253:5:{Name:dm-5 Major:253 Minor:5 Size:536870912 Scheduler:none} 253:8:{Name:dm-8 Major:253 Minor:8 Size:4294967296 Scheduler:none} 253:9:{Name:dm-9 Major:253 Minor:9 Size:4294967296 Scheduler:none} 253:3:{Name:dm-3 Major:253 Minor:3 Size:268435456 Scheduler:none} 253:4:{Name:dm-4 Major:253 Minor:4 Size:5368709120 Scheduler:none} 253:6:{Name:dm-6 Major:253 Minor:6 Size:2147483648 Scheduler:none} 253:7:{Name:dm-7 Major:253 Minor:7 Size:4294967296 Scheduler:none} 202:0:{Name:xvda Major:202 Minor:0 Size:322122547200 Scheduler:deadline} 253:0:{Name:dm-0 Major:253 Minor:0 Size:2147483648 Scheduler:none} 253:1:{Name:dm-1 Major:253 Minor:1 Size:2147483648 Scheduler:none} 253:2:{Name:dm-2 Major:253 Minor:2 Size:4294967296 Scheduler:none}] NetworkDevices:[{Name:eth0 MacAddress:02:d1:c1:da:7d:6c Speed:0 Mtu:9001}] Topology:[{Id:0 Memory:8589529088 Cores:[{Id:0 Threads:[0] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2}]} {Id:1 Threads:[1] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2}]}] Caches:[{Size:31457280 Type:Unified Level:3}]}] CloudProvider:AWS InstanceType:t2.large InstanceID:i-069000410987b51ce}
I0525 17:28:03.047641 18769 manager.go:204] Version: {KernelVersion:3.10.0-514.6.1.el7.x86_64 ContainerOsVersion:CentOS Linux 7 (Core) DockerVersion:17.04.0-ce DockerAPIVersion:1.28 CadvisorVersion: CadvisorRevision:}
I0525 17:28:03.050231 18769 server.go:509] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /
W0525 17:28:03.089418 18769 container_manager_linux.go:217] Running with swap on is not supported, please disable swap! This will be a fatal error by default starting in K8s v1.6! In the meantime, you can opt-in to making this a fatal error by enabling --experimental-fail-swap-on.
I0525 17:28:03.089522 18769 container_manager_linux.go:244] container manager verified user specified cgroup-root exists: /
I0525 17:28:03.089537 18769 container_manager_linux.go:249] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd ProtectKernelDefaults:false EnableCRI:true NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:}]} ExperimentalQOSReserved:map[]}
I0525 17:28:03.089718 18769 kubelet.go:265] Watching apiserver
W0525 17:28:03.226664 18769 kubelet_network.go:70] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
I0525 17:28:03.226704 18769 kubelet.go:494] Hairpin mode set to "hairpin-veth"
W0525 17:28:03.248728 18769 cni.go:157] Unable to update cni config: No networks found in /etc/cni/net.d
I0525 17:28:03.298065 18769 start_master.go:708] Started "replicationcontroller"
I0525 17:28:03.298165 18769 replication_controller.go:150] Starting RC Manager
I0525 17:28:03.455792 18769 start_master.go:708] Started "podgc"
W0525 17:28:03.455821 18769 start_master.go:690] "resourcequota" is skipped
I0525 17:28:03.486567 18769 docker_service.go:184] Docker cri networking managed by kubernetes.io/no-op
F0525 17:28:03.510872 18769 node.go:297] failed to run Kubelet: failed to create kubelet: misconfiguration: kubelet cgroup driver: "systemd" is different from docker cgroup driver: "cgroupfs"
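The fatal error on the last line is the actual failure: the kubelet in this build expects the `systemd` cgroup driver while the Docker daemon on the host is running with `cgroupfs`. One common way to reconcile the two (a sketch under the assumption that switching Docker's driver is acceptable on this host, not something the logs confirm) is to set Docker's exec option in `/etc/docker/daemon.json` and restart the daemon:

```json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
```

After `systemctl restart docker`, both sides report `systemd` and the kubelet can start. Going the other direction, passing `--cgroup-driver=cgroupfs` through the node config's kubelet arguments to match Docker, should also resolve the mismatch.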
