This repository has been archived by the owner on Oct 22, 2020. It is now read-only.

timeout for overlord and discovery #24

Closed
v1k0d3n opened this issue Apr 13, 2016 · 2 comments

Comments

@v1k0d3n

v1k0d3n commented Apr 13, 2016

first, i wanted to say that this is a great project you have here. i work with your folks on the openstack-ansible side, and you rackspace folks are awesome with the community!

i'm running into a bit of an issue with overlord and discovery timing out, and i was wondering: should this work for private installs of openstack-ansible as well? i tried to load the openstack.yml file without much luck; CoreOS was reporting failed units among other things. getting this working would be a huge win, since i'd like to demo some coreos things for our folks internally. any ideas what could be causing the issues? if you need logs or anything else, just tell me what to grab and i'll provide it.
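
in case it helps, here's roughly what i've been running on the CoreOS nodes to see what's failing (just a sketch; the overlord unit name is my guess from the repo, not confirmed):

```sh
# list any units that failed to start on this node
systemctl --failed

# tail the logs for the suspect unit (unit name assumed, not confirmed)
journalctl -u overlord.service --no-pager -n 100

# scan the current boot log for discovery-related errors
journalctl -b --no-pager | grep -i discovery
```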

thanks for everything, including the awesome project!

@metral
Owner

metral commented Apr 13, 2016

Thank you for the kind words, and for reporting the timeout you're experiencing.

Could you please specify where you are in the process when you see the timeout, and include any logs or information that could help me recreate the problem?

The corekube-openstack.yaml stack has been tested and does work for a private install via openstack-ansible, though the system it was tested against is based on an older deployment that should be updated. In any case, the OpenStack components used are Glance, Nova, Neutron, and Heat, so as long as those are all functional there shouldn't be a reason for any timeout.
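
For reference, the basic flow for standing the stack up looks roughly like this (a sketch only; the image name and the keyname parameter are illustrative, so check the README for the exact values):

```sh
# upload the CoreOS image the template expects (image name illustrative)
glance image-create --name coreos --disk-format qcow2 \
  --container-format bare --file coreos_production_openstack_image.img

# create the stack from the Heat template in this repo
# (keyname is an illustrative parameter; see the README for the real ones)
heat stack-create corekube -f corekube-openstack.yaml -P keyname=mykey

# watch for CREATE_COMPLETE; resource timeouts will show up in the events
heat stack-list
heat event-list corekube
```

If the stack never reaches CREATE_COMPLETE, the event list is usually the first place a Glance/Nova/Neutron problem surfaces.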

@v1k0d3n
Author

v1k0d3n commented Apr 15, 2016

so it seems like the cluster is up and operational. i may have created this issue a little too quickly, while things were still building out. i read more about the overlord node, and i think i understand it a little better now. thanks for the quick reply, and sorry for not getting back to you sooner. i have some questions about your thoughts on SDN (related to CNI) and service exposure, but i may just hit you up on the k8s Slack, if you don't mind? great job...nice, clean deployment/repo!

v1k0d3n closed this as completed Apr 15, 2016