
Be smarter about initial reconfigure #78

Open
dcosson opened this issue Jul 6, 2014 · 3 comments


dcosson commented Jul 6, 2014

Synapse is designed to rewrite the haproxy config and reload haproxy when it starts (set here).

This means that any time synapse is restarted, there is a window between startup and the watchers' first check-in during which the defaults are used (and if there are no defaults, haproxy will return 503s).

I can see why you would want an initial reconfigure, so that any time you restart synapse you know that any changes unrelated to backends get picked up. But it seems like it should be smarter and wait until all watchers have checked in once before doing its initial reconfiguration. I'm happy to submit a PR if this sounds reasonable.
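Roughly, I'm imagining something like the sketch below. The names here (`ready?`, `wait_for_initial_checkin`) are placeholders I made up for illustration, not actual Synapse methods:

```ruby
# Placeholder sketch: gate the first haproxy reconfigure on every watcher
# having reported at least once, with a timeout as a safety valve.
def wait_for_initial_checkin(watchers, timeout: 30, interval: 0.5)
  deadline = Time.now + timeout
  sleep(interval) until watchers.all?(&:ready?) || Time.now > deadline
end

# e.g. in the main startup path, before writing the initial config:
#   wait_for_initial_checkin(watchers)
#   ...then run the existing reconfigure step as usual
```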


jolynch commented Oct 3, 2015

@dcosson Yeah, this has been an annoying bug for us as well. We mitigated it slightly at Yelp by reducing the amount of zookeeper connecting we do and by using the state_file, so that as soon as the watcher gets information we can quickly enable the backend over the stat socket, but this just shortens the window of sad.

How are you thinking about implementing this? The watchers all live in separate threads, but perhaps we can communicate back to the main thread a la reconfigure?
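For example, something along these lines, where each watcher thread pushes onto a queue after its first successful discovery and the main thread waits before reconfiguring. This is a completely made-up structure just to show the handshake, not how Synapse is actually wired:

```ruby
require 'thread'

# Stand-in watcher purely for illustration; the real watchers look nothing
# like this.
FakeWatcher = Struct.new(:name) do
  def discover
    sleep(rand)   # pretend to talk to zookeeper / the EC2 API / etc.
  end
end

watchers    = %w[service_a service_b].map { |n| FakeWatcher.new(n) }
ready_queue = Queue.new

watchers.each do |watcher|
  Thread.new do
    watcher.discover                 # first check-in
    ready_queue.push(watcher.name)   # tell the main thread we have data
    # ...then keep watching as usual
  end
end

# Main thread: block the initial reconfigure until every watcher has reported
# once (a real version would probably want a timeout here too).
watchers.size.times { puts "#{ready_queue.pop} checked in" }
puts 'all watchers checked in, safe to do the initial reconfigure'
```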


jolynch commented Oct 11, 2015

Now that I'm looking at it, the zookeeper watchers do complete their setup before returning, because they don't use Thread.new in the start method (they rely on the zk-ruby gem's async callbacks to multithread themselves).

@dcosson I imagine you're using the ec2 tag watcher or some such?
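To make the difference concrete, here's a toy comparison (made-up classes, not the real watchers): a start that spawns a thread returns before the first discovery has happened, while one that discovers inline has data by the time start returns, which is effectively what the zookeeper watcher gets from zk-ruby:

```ruby
# Toy comparison only; not Synapse's watcher code.
class ThreadedWatcher
  attr_reader :backends

  def start
    # Returns immediately; @backends is still nil until the thread finishes
    # its first poll, which is the window where haproxy has no servers.
    Thread.new { @backends = discover }
  end

  def discover
    sleep 1                 # pretend to poll the EC2 API
    ['10.0.0.1:8080']
  end
end

class InlineWatcher
  attr_reader :backends

  def start
    @backends = discover    # first discovery completes before start returns
  end

  def discover
    ['10.0.0.2:8080']
  end
end
```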

jolynch closed this as completed Oct 11, 2015
jolynch reopened this Oct 11, 2015

jolynch commented Oct 11, 2015

Oops, didn't mean to close, sorry about that.
