
Allow toggling service registration on a network #722

Closed
wants to merge 2 commits into from

Conversation

aidanhs
Contributor

aidanhs commented Oct 29, 2015

The end goal is to allow you to run:

$ docker network modify newbridge com.docker.network.ignore_svcs=true
$ docker network modify newbridge com.docker.network.ignore_svcs=false

to permit toggling of services on a network (you can try it out by cloning and building https://github.com/aidanhs/docker/tree/aphs-service-discovery-toggle, which has the libnetwork changes as well as the Docker changes on the branch).

I want to focus on the libnetwork stuff before opening PRs on Docker.

The first commit is useful in itself, allowing you to create networks without service registration enabled. The second commit is the interesting one for me, allowing toggling.

One question: when 'disabling services', would you expect it to 1) disable all services on the network immediately, or 2) just not register any new services started after that point? Currently this PR implements option 2. If it should be changed to option 1, I'll probably rename ignore_svcs to disable_svcs.
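For concreteness, the option-2 semantics (only new registrations are skipped; existing ones stay) could be sketched roughly like this in Go. The `network` type, `join` method, and `ignoreSvcs` field are invented names for illustration, not libnetwork's actual API:

```go
// Minimal sketch of option 2: the flag is consulted only when a new
// endpoint joins, so services registered before the toggle remain
// discoverable. Hypothetical types, not libnetwork code.
package main

import "fmt"

type network struct {
	ignoreSvcs bool
	services   map[string]string // service name -> IP
}

func (n *network) join(name, ip string) {
	if n.ignoreSvcs {
		return // option 2: skip registration for new endpoints only
	}
	n.services[name] = ip
}

func main() {
	n := &network{services: map[string]string{}}
	n.join("web", "10.0.0.2") // registered while services are enabled
	n.ignoreSvcs = true
	n.join("db", "10.0.0.3") // skipped, but "web" stays registered
	fmt.Println(len(n.services)) // 1
}
```

Under option 1, disabling would additionally clear the existing `services` map at toggle time.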

Signed-off-by: Aidan Hobson Sayers <aidanhs@cantab.net>
@dave-tucker
Contributor

I'm 👎 on this.

Modifying options after a network is created is a bad idea IMO.
If we really have to, it should only be possible if there are no endpoints already attached.
Otherwise, you will have to make changes to every container that is already attached which is not only complex, but could also have some very weird side effects.

With the above in mind, it's much easier just to delete and re-create a network.

As for the label itself, I could go either way - from what I understand, those wanting to disable it do so because "we have our own solution". In that case, a plugin based on the work in #767 would be a better option so we don't start fragmenting the user experience.

In any case, why truncate services to svcs? It would be much easier to read if it were called disable_service_discovery.

@aidanhs
Contributor Author

aidanhs commented Nov 27, 2015

Thanks for the response!

If we really have to, it should only be possible if there are no endpoints already attached.
Otherwise, you will have to make changes to every container that is already attached which is not only complex, but could also have some very weird side effects.

Well, this already happens when service discovery is enabled, right? We already see strange side effects, but this won't make them any worse or create additional complexity. When DNS (eventually) gets implemented it becomes dead simple, because you just need to add/remove the records.
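To illustrate why the DNS-backed case is simple: disabling discovery for a network reduces to dropping that network's records, and enabling it to re-adding them from the endpoint list. A minimal sketch, assuming a hypothetical in-memory record store (the `dnsServer` type and `setEnabled` helper are invented, not libnetwork code):

```go
// Hypothetical sketch: toggling DNS-based service discovery is just
// adding or removing records for the network's endpoints.
package main

import "fmt"

type endpoint struct{ name, ip string }

type dnsServer struct {
	records map[string]string // service name -> IP
}

func (d *dnsServer) setEnabled(enabled bool, endpoints []endpoint) {
	if enabled {
		// re-add one record per attached endpoint
		for _, ep := range endpoints {
			d.records[ep.name] = ep.ip
		}
	} else {
		// drop every record for this network in one step
		d.records = map[string]string{}
	}
}

func main() {
	d := &dnsServer{records: map[string]string{}}
	eps := []endpoint{{"web", "10.0.0.2"}, {"db", "10.0.0.3"}}
	d.setEnabled(true, eps)
	fmt.Println(len(d.records)) // 2
	d.setEnabled(false, eps)
	fmt.Println(len(d.records)) // 0
}
```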

With the above in mind, it's much easier just to delete and re-create a network.

This is how containers work, which is now being reconsidered - moby/moby#15078. I think allowing for dynamic network reconfiguration (e.g. by allowing plugins to mark labels as 'dynamically configurable' and associate callbacks) allows much more interesting use-cases down the line for network plugins.

It's possible to agree with the above (that networks should be dynamically configurable in general) without agreeing that service discovery specifically should be dynamically configurable. However, to me it seems like an obvious one to provide by default, given the relative simplicity and nice ability to enable service discovery on the default bridge.

In which case, a plugin based on the work in #767 would be a better option so we don't start fragmenting the user experience.

Sorry, I'm a bit hazy on what you're suggesting here. Are you talking about service discovery plugins, in addition to network mode plugins? And are you hoping that external solutions will start hooking into the libnetwork plugins rather than going their own route?

In any case, why truncate services to svcs? It would be much easier to read if it were called disable_service_discovery.

I have no preference, I'll change it if this PR starts moving.

@dave-tucker
Contributor

Well, this already happens when service discovery is enabled, right? We already see strange side effects, but this won't make them any worse or create additional complexity.

With your PR as is, I create a network with ignore_svcs=true, spawn some containers, modify it to ignore_svcs=false, and spawn some more. Now I can discover the "new" containers from other "new" containers, while the older ones are unpublished. When I restart the old containers, what's the expected behaviour? Do they stay unpublished, or do they publish themselves?
Providing something that is able to lead to inconsistent behaviour like this is just bad design in my book.

This is how containers work, which is now being reconsidered - moby/moby#15078. I think allowing for dynamic network reconfiguration (e.g. by allowing plugins to mark labels as 'dynamically configurable' and associate callbacks) allows much more interesting use-cases down the line for network plugins.

The PR you mentioned is driven by the need to dynamically update CPU/memory allocations. While a parallel could be drawn to network QoS settings, we have no insight into whether this is something that users actually want at this time, or whether any plugins actually support such behaviour. As it's a pretty big change to the plugin protocol, I'd much rather be sure of what the requirements and use cases are before we venture down that path.

Sorry, I'm a bit hazy on what you're suggesting here. Are you talking about service discovery plugins, in addition to network mode plugins? And are you hoping that external solutions will start hooking into the libnetwork plugins rather than going their own route?

Yes. We already have IP Address Management plugins and we've discussed going the same way with service discovery.

@aidanhs
Contributor Author

aidanhs commented Nov 27, 2015

Providing something that is able to lead to inconsistent behaviour like this is just bad design in my book.

Ah, I see. Yes, I actually agree, which is why I'm now leaning towards option 1 outlined in the final paragraph of #722 (comment) - I've just not implemented it yet. So service discovery would either apply to every container on a network or to none.

As it's a pretty big change to the plugin protocol, I'd much rather be sure of what the requirements and use cases are before we venture down that path.

That's fair. To give a concrete use-case, aside from my desire to have service discovery toggleable, I was pondering a 'faultybridge' network plugin which would do something similar to https://github.com/tylertreat/comcast and https://github.com/dcm-oss/blockade/ - basically just an extension of your QoS thought.

Yes. We already have IP Address Management plugins and we've discussed going the same way with service discovery.

That seems neat, I'd be interested to read more about this - I guess the intention would be to replace docker events for all service discovery purposes?

Idly thinking about it, I realised that enabling/disabling service discovery for a network is just a special case of making service discovery swappable on a running network. I guess the general case would work by 1. registering all services on the new plugin, 2. swapping the plugin used for new requests, 3. deregistering services on the old plugin.
Of course, the prerequisite for this would be the ability to change options on a running network.
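The three-step swap described above could be sketched roughly as follows. The `sdPlugin` interface and everything around it are invented for illustration; this is not libnetwork's real plugin protocol:

```go
// Hypothetical sketch of swapping the service discovery backend on a
// live network: 1. register everything on the new plugin, 2. switch
// the plugin used for new requests, 3. deregister from the old one.
package main

import "fmt"

type sdPlugin interface {
	Register(name, ip string)
	Deregister(name string)
}

// mapPlugin is a toy in-memory backend used for both old and new plugins.
type mapPlugin struct{ recs map[string]string }

func (p *mapPlugin) Register(name, ip string) { p.recs[name] = ip }
func (p *mapPlugin) Deregister(name string)   { delete(p.recs, name) }

type network struct {
	active   sdPlugin
	services map[string]string // service name -> IP
}

func (n *network) swapPlugin(next sdPlugin) {
	// 1. register all existing services on the new plugin
	for name, ip := range n.services {
		next.Register(name, ip)
	}
	old := n.active
	// 2. swap the plugin used for new requests
	n.active = next
	// 3. deregister services from the old plugin
	for name := range n.services {
		old.Deregister(name)
	}
}

func main() {
	oldP := &mapPlugin{recs: map[string]string{}}
	newP := &mapPlugin{recs: map[string]string{}}
	n := &network{active: oldP, services: map[string]string{"web": "10.0.0.2"}}
	n.active.Register("web", "10.0.0.2")
	n.swapPlugin(newP)
	fmt.Println(len(oldP.recs), len(newP.recs)) // 0 1
}
```

In this framing, disabling service discovery is just swapping in a plugin whose `Register` is a no-op.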

Anyway, is there an issue and/or PR about service plugins? Or is it still at the speculation stage, intended to follow up the /etc/hosts -> DNS changeover?

@dave-tucker
Contributor

Anyway, is there an issue and/or PR about service plugins? Or is it still at the speculation stage, intended to follow up the /etc/hosts -> DNS changeover?

Too early at this stage. Once a design is locked down for the /etc/hosts to DNS migration, it should be possible to start the plugins work in parallel if someone is willing to pick up the torch.

@aidanhs
Contributor Author

aidanhs commented Dec 2, 2015

OK, it sounds like this has no chance of being merged in its current incarnation (pre-SD-plugins), so I'm closing.

aidanhs closed this Dec 2, 2015