Stack Build Error #22
Hello, I just tried running it and found no issues. The command I issued is the same one as in the readme. Are you still running into issues? If so, could you provide some more info/logs of the issue?
Hi,

The mentioned issue disappeared again (looks like Rackspace made some changes/fixes under the hood...). But I still have problems: now the CoreOS cluster is not working:

```
kubernetes-master ~ # fleetctl list-units
Error retrieving list of units from repository: googleapi: Error 503: fleet server unable to communicate with etcd
kubernetes-master ~ # journalctl -u etcd.service
-- Logs begin at Sat 2015-09-19 22:49:30 UTC, end at Sat 2015-09-19 22:54:39 UTC. --
Sep 19 22:49:42 kubernetes-master systemd[1]: Started etcd.
Sep 19 22:49:42 kubernetes-master systemd[1]: Starting etcd...
Sep 19 22:49:42 kubernetes-master etcd[1073]: [etcd] Sep 19 22:49:42.559 INFO | Discovery via http://10.182.65.214:2379 using prefix discovery/<TOKEN>.
Sep 19 22:49:42 kubernetes-master systemd[1]: etcd.service: Main process exited, code=exited, status=1/FAILURE
Sep 19 22:49:42 kubernetes-master systemd[1]: etcd.service: Unit entered failed state.
Sep 19 22:49:42 kubernetes-master systemd[1]: etcd.service: Failed with result 'exit-code'.
Sep 19 22:49:53 kubernetes-master systemd[1]: etcd.service: Service hold-off time over, scheduling restart.
```

Does it work for you @metral?
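One way to narrow down failures like the one above is to probe the discovery endpoint that the etcd unit logged. This is only a sketch: the IP comes from the log above, `DISCOVERY_TOKEN` stands in for the redacted `<TOKEN>`, and the etcd v2 keys layout is an assumption about how corekube's private discovery service stores registrations.

```shell
#!/bin/sh
# Hedged sketch: probe the discovery endpoint the etcd log points at
# (http://10.182.65.214:2379). DISCOVERY_TOKEN is a placeholder for the
# redacted <TOKEN>; the IP and the v2 keys layout are assumptions.
DISCOVERY_URL="http://10.182.65.214:2379/v2/keys/discovery/${DISCOVERY_TOKEN}"

# count_registered: count machine entries in an etcd v2 keys JSON response
# read from stdin (each registered member shows up as one "key" field).
count_registered() {
  grep -o '"key":' | wc -l | tr -d ' '
}

# Live check (only meaningful while the stack is up):
#   curl -fsS "$DISCOVERY_URL" | count_registered
```

If the count stays at zero (or the `curl` fails outright), the discovery service itself is the problem rather than the master's etcd unit.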
Hi @metral, any updates on this? Does etcd work for you (e.g. on the Kubernetes master)? Thx, Sven
Apologies for my lack of a response. Have you tried starting from a clean stack?

-Mike Metral
Hi @metral,

Yep, I always destroy the old stack and create a new one using the heat template (repeated it a couple of times to see if it is reproducible). After the stack is ready, I ssh into the Kubernetes master node. There I can see that there are issues with etcd:

```
kubernetes-master ~ # journalctl -u etcd.service
-- Logs begin at Sat 2015-09-19 22:49:30 UTC, end at Sat 2015-09-19 22:54:39 UTC. --
Sep 19 22:49:42 kubernetes-master systemd[1]: Started etcd.
Sep 19 22:49:42 kubernetes-master systemd[1]: Starting etcd...
Sep 19 22:49:42 kubernetes-master etcd[1073]: [etcd] Sep 19 22:49:42.559 INFO | Discovery via http://10.182.65.214:2379 using prefix discovery/<TOKEN>.
Sep 19 22:49:42 kubernetes-master systemd[1]: etcd.service: Main process exited, code=exited, status=1/FAILURE
Sep 19 22:49:42 kubernetes-master systemd[1]: etcd.service: Unit entered failed state.
Sep 19 22:49:42 kubernetes-master systemd[1]: etcd.service: Failed with result 'exit-code'.
Sep 19 22:49:53 kubernetes-master systemd[1]: etcd.service: Service hold-off time over, scheduling restart.
```

Thx for the support :)
This is very odd - I've done 2 clean deployments, just now and when you originally opened the issue, but I still am not running into the issues you're describing. My steps from beginning to end in the ORD region:

Can you provide the steps you're taking? From your issues it seems that your discovery node is not setting up the private etcd server that both the overlord and k8s use/depend on, but I am not sure why it's having issues. Could you try deploying from scratch again, or provide me with more information from your discovery node's log files for the container running it:
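The exact command was lost from the comment above, so the following is only a guess at how to pull the discovery container's logs. The container name `discovery` is an assumption; list the running containers first to find the real one. The small helper just trims the paste down to failure lines.

```shell
#!/bin/sh
# Hedged sketch: the container name "discovery" is an assumption. Find the
# actual name first, then pull its recent log lines:
#   docker ps --format '{{.Names}}'
#   docker logs discovery 2>&1 | tail -n 50

# errors_only: filter a log stream down to lines mentioning errors/failures,
# to keep what gets pasted into the issue short.
errors_only() {
  grep -iE 'error|fail'
}

# e.g.  docker logs discovery 2>&1 | errors_only
```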
Closing due to inactivity. Please reopen if the issue persists.
Hi,

When I use the latest version of https://github.com/metral/corekube/blob/master/corekube-cloudservers.yaml I get an error when creating the stack (Rackspace). Any idea?

```
Resource CREATE failed: resources.kubernetes_minions: Property error: resources[1].properties.networks[0].network: Error validating value '00000000-0000-0000-0000-000000000000': SSL certificate validation has failed: [Errno 1] _ssl.c:504: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
```

Thanks,
Sven
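The reported failure is Python-side certificate verification (`_ssl.c:504`), so inspecting the certificate chain the API endpoint actually serves is a reasonable first step. This is a sketch only: the hostname below is an assumption (substitute your region's API host), and the second part is just an offline illustration of the same "certificate verify failed" class of error using a throwaway self-signed cert.

```shell
#!/bin/sh
# Hedged sketch: inspect the chain served by the API endpoint the heat client
# is talking to. The hostname is an assumption; use your region's host.
#   HOST=identity.api.rackspacecloud.com
#   echo | openssl s_client -connect "$HOST:443" -servername "$HOST" 2>/dev/null \
#     | openssl x509 -noout -issuer -dates

# Offline illustration of the same failure class: a certificate that no
# trusted CA signed fails verification, like the heat client reported.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo-key.pem \
  -out /tmp/demo-cert.pem -days 1 -subj "/CN=demo.invalid" 2>/dev/null
openssl verify /tmp/demo-cert.pem 2>&1 | grep -i 'self.signed'
```

If the live endpoint's issuer is an internal or expired CA, the fix is on the client trust-store side rather than in the heat template.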