
Allow publishing of additional information to IPAM #977

Open · wants to merge 1 commit into master
Conversation

johnbelamaric

Adds an IPAM driver capability that allows libnetwork
to send additional information during subnet and address
allocation.

Signed-off-by: John Belamaric jbelamaric@infoblox.com

@johnbelamaric
Author

This addresses the functionality mentioned in the comments of #864 by adding some additional information to be (optionally) sent to the IPAM driver. This enables the driver to provide more context and data to an external IPAM system.

ep.ipamOptions = make(map[string]string)
}
log.Debugf("Endpoint: %s", ep)
ep.ipamOptions[netlabel.EndpointName] = name
Contributor

I am not sure we are OK with providing remote IPAM drivers with the endpoint name and network name (change below).

Please take a look at @mrjana's explanation in #139 (comment)

Author

I read that comment and it makes sense. The difference, though, is that this is not a core part of the interface but rather an optional, capability-based extension.

Without this, it limits the flexibility and use cases around IPAM. For example, one use case for external IPAM systems is to log and maintain a record of which application/service/container/host had which IP address, and when. This can be necessary for reconstructing IP ownership in a forensic situation, where you are trying to figure out the extent and causes of a breach. To do this, we really need to know who the IP is for when it is allocated and released.

Similarly, in our IPAM solution we provide single-pane-of-glass management of all IP addresses throughout the network. Seeing an IP address alone and only knowing it belongs to "Docker" doesn't provide enough useful context for the user.

In fact, ideally we would know the container ID. If we had that, we could pull all the rest of the metadata out-of-band.

The next step for the IPAM extensions I am looking at would be to allow the user to pass arbitrary key-value pairs at the time of IP allocation (i.e., endpoint connect). Right now, this is possible at network creation. But if we do it at IP allocation time, then we can let the user supply additional metadata (for example, application name or service name) to be presented in our external system. Similarly, this can enable policy-based addressing; for example, if the user wants to place particular services at particular IP addresses within each subnet (gateways at .1, etc.).

In any case, all of this would be optional based on capabilities. I agree they shouldn't be "one big interface" as described in the comments. But by not allowing them as optional capabilities, you are limiting the end user to very narrow use cases.
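A minimal sketch of the key-value flow described above. All names here (the `requestAddress` entry point, the option keys) are illustrative stand-ins, not libnetwork's actual API; the point is only that a driver advertising the capability could forward user-supplied options to an external IPAM system as allocation metadata.

```go
package main

import "fmt"

// requestAddress stands in for a hypothetical IPAM driver entry point that
// receives an optional key-value map alongside the pool identifier. A driver
// that advertised the capability could forward opts to an external IPAM
// system as allocation metadata.
func requestAddress(poolID string, opts map[string]string) (string, map[string]string) {
	meta := map[string]string{}
	for k, v := range opts {
		meta[k] = v
	}
	// Fake allocation result, for the sketch only.
	return "10.0.0.42/24", meta
}

func main() {
	// User-supplied key-value pairs at connect time (hypothetical keys).
	opts := map[string]string{
		"com.example.service": "frontend",
		"com.example.app":     "shop",
	}
	ip, meta := requestAddress("pool-1", opts)
	fmt.Println(ip, meta["com.example.service"])
}
```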

Contributor

Thanks @johnbelamaric for the explanation.

If this extra information is mainly needed for auditing purposes, then I suggest passing only the endpoint and network IDs to the IPAM driver.

The IDs are the right thing to pass, given that they are the only unique identifiers of the network resources, while names are not.

With that information the remote driver would be able to fill in the auditing info for each assigned pool/IP address by querying the daemon.
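The ID-based auditing flow suggested here can be sketched as follows. The types and method names are illustrative assumptions: the driver records only the endpoint and network IDs at allocation time, and name/container detail is resolved later by querying the daemon.

```go
package main

import "fmt"

// allocation ties an assigned address to the unique IDs of the resources
// that received it; no names are stored at allocation time.
type allocation struct {
	IP         string
	EndpointID string
	NetworkID  string
}

type auditLog struct {
	byIP map[string]allocation
}

func newAuditLog() *auditLog { return &auditLog{byIP: map[string]allocation{}} }

// record stores the IDs alongside the assigned address.
func (a *auditLog) record(ip, epID, nwID string) {
	a.byIP[ip] = allocation{IP: ip, EndpointID: epID, NetworkID: nwID}
}

// lookup returns who held an address; the caller can then resolve the IDs
// to names via the daemon's API out-of-band.
func (a *auditLog) lookup(ip string) (allocation, bool) {
	al, ok := a.byIP[ip]
	return al, ok
}

func main() {
	log := newAuditLog()
	log.record("10.0.0.42", "ep-123", "nw-456")
	if al, ok := log.lookup("10.0.0.42"); ok {
		fmt.Println(al.EndpointID, al.NetworkID)
	}
}
```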

Author

Ok, that makes good sense. I'll rework the PR.

@johnbelamaric
Author

Ok, I updated this to only publish the IDs to the IPAM driver.

@aboch
Contributor

aboch commented Mar 9, 2016

@johnbelamaric

Thanks for the change.

I discussed your updated change with the other maintainers. Some are concerned with passing endpoint/network IDs to the IPAM driver, because it conflicts with the reason the IPAM extension point was added in the first place, which was to separate IP management from the network plumbing: #489

I understand that for now this extra information is needed so a driver can track and reconstruct the "this IP was given to which container" information. But, as I was reminded, the driver should already be able to do so given the container MAC address (today a driver can specify that it needs to see the container MAC address when it is asked for an address). Similarly, the "subnet <-> network" pairing information can be retrieved from the docker network commands.
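The MAC-based reconstruction described above can be sketched as follows. The option key and request shape are illustrative assumptions, not libnetwork's actual wire format; the idea is that a driver which declares (via its capability) that it needs the container MAC address receives it with each address request and can correlate IP and MAC without seeing endpoint or network names.

```go
package main

import "fmt"

// macOptionKey is a stand-in for the real option key under which libnetwork
// would pass the MAC address to a driver that requested it.
const macOptionKey = "com.example.macaddress"

// driver mimics an IPAM driver allocating from a tiny pool and remembering
// which MAC address received which IP.
type driver struct {
	next    int
	ipByMAC map[string]string
}

func (d *driver) requestAddress(opts map[string]string) string {
	d.next++
	ip := fmt.Sprintf("10.0.0.%d", d.next)
	if mac, ok := opts[macOptionKey]; ok {
		d.ipByMAC[mac] = ip
	}
	return ip
}

func main() {
	d := &driver{ipByMAC: map[string]string{}}
	ip := d.requestAddress(map[string]string{macOptionKey: "02:42:ac:11:00:02"})
	fmt.Println(ip, d.ipByMAC["02:42:ac:11:00:02"])
}
```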

Feel free to add more information/use-cases that justify this feature, in addition to what you already listed in #864.

I'll meet with the other maintainers again to discuss this PR in more detail in the coming days and we will contact you if we need more inputs.

@johnbelamaric
Author

@aboch thanks for the explanation. I see how the auditing could be done without this, though it gets more complex. However, the auditing was just one use case.

Here's another one. I've spoken to several providers where they have decided against overlays due to performance and management complexity. In these environments, a traditional L3 routing architecture is used. For example, you may assign each rack its own VLAN and /22. The individual subnets are routable and therefore have connectivity.

Now, consider that a cluster may be spread across multiple racks. So, we need a concept of a "network" that provides reachability to all containers that have an IP in that network. But containers on different hosts are on different racks and therefore different VLANs and subnets. So, they have L3 connectivity but not L2 connectivity.

So, how can libnetwork satisfy this use case? There is a close association between hosts and IP addressing. One way would be that something gets passed into the IPAM driver at network create time to identify the abstract "network" concept that IPAM knows about. That is, some sort of tag that maps this Docker "network" to the set of VLANs/subnets in the physical network. The external IPAM system would have to maintain a mapping of racks/hosts/VLANs/subnets.

Now, when an endpoint is created on that network, it is created on a particular host, which is in a particular rack, which has access to a particular VLAN. But how can the IPAM driver allocate an IP in the proper subnet if it does not know which host the endpoint is on?

Granted, you could give the same answer: "use the MAC to find the endpoint." But if we can do that, why not just pass the ID instead of requiring this indirect lookup?
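The rack/VLAN scenario above reduces to a simple mapping, sketched here under stated assumptions: the host names, rack mapping, and subnets are all invented for illustration, and the host identity is exactly the piece of information the IPAM driver currently cannot see.

```go
package main

import "fmt"

// rackOfHost and subnetOfRack stand in for the mapping an external IPAM
// system would maintain of racks, hosts, VLANs, and subnets.
var rackOfHost = map[string]string{
	"host-a1": "rack-1",
	"host-b1": "rack-2",
}

var subnetOfRack = map[string]string{
	"rack-1": "10.1.0.0/22",
	"rack-2": "10.2.0.0/22",
}

// subnetForHost returns the routable /22 an address should come from,
// given the host the endpoint is being created on. This is the lookup the
// IPAM driver cannot perform today, because it is never told the host.
func subnetForHost(host string) (string, bool) {
	rack, ok := rackOfHost[host]
	if !ok {
		return "", false
	}
	subnet, ok := subnetOfRack[rack]
	return subnet, ok
}

func main() {
	if s, ok := subnetForHost("host-b1"); ok {
		fmt.Println(s)
	}
}
```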

Another possible issue I see is with respect to the "pool" concept. In this case, what would the pools associated with the network be? I suppose at that point it would return all the subnets, though I am not sure why libnetwork really even needs to know them in this case. I see that here https://github.com/docker/libnetwork/blob/master/ipams/remote/remote.go#L86 the assumption is a pool is a CIDR - is that constraint really necessary?

Maybe what I am doing is better done with address spaces, but I don't see any way to pass that in either.

So, I guess my question is how to handle these sorts of architectures if there is a complete divorce of IPAM and network driver? While I think that is the best practice, it is not always possible. Is there an alternative solution to the one I describe above that fits better into the current libnetwork model?

Another thing I have to ask - how useful is external IPAM if it has no access to any data on which to make addressing decisions? If we can't pass anything meaningful into IPAM, it really doesn't do much to have the ability to integrate with external or alternate IPAM systems.

@wrouesnel

I absolutely need this functionality. As it stands it's simply not possible AFAIK to implement IPAM which supports overlapping network ranges.

In my specific case my network driver is creating isolated layer 2 networks for containers - overlapping IPs and MAC addresses should be allowed, but since docker provides no network information to IPAM I can't do any accounting of whether an IP is available on a specific network segment or not.

My only option is to just always allow overlapping pools (and hope docker is doing the right thing higher up).
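The accounting problem described here disappears once the allocation request carries some network identifier: the driver can key its "in use" set by (network, IP) instead of by IP alone. A sketch, with an invented request shape:

```go
package main

import "fmt"

// ipam tracks allocations per network, so overlapping CIDRs on isolated
// L2 segments no longer collide with each other.
type ipam struct {
	inUse map[string]map[string]bool // network ID -> set of allocated IPs
}

func newIPAM() *ipam { return &ipam{inUse: map[string]map[string]bool{}} }

// allocate marks ip as used on the given network; it fails only if that
// exact IP is already taken on that same network, so two isolated segments
// can both hand out 192.168.0.10.
func (p *ipam) allocate(networkID, ip string) bool {
	set, ok := p.inUse[networkID]
	if !ok {
		set = map[string]bool{}
		p.inUse[networkID] = set
	}
	if set[ip] {
		return false
	}
	set[ip] = true
	return true
}

func main() {
	p := newIPAM()
	fmt.Println(p.allocate("net-a", "192.168.0.10")) // first use on net-a
	fmt.Println(p.allocate("net-b", "192.168.0.10")) // same IP, different network
	fmt.Println(p.allocate("net-a", "192.168.0.10")) // duplicate on net-a
}
```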

@BSWANG

BSWANG commented Apr 28, 2017

This functionality is needed by the swarm mode cluster IPAM plugin. #1738

@bearice
Contributor

bearice commented Jun 14, 2017

@aboch For those drivers that do not support MAC address allocation (ipvlan, for example), having the sandbox ID would be very handy.

@greenpau

greenpau commented Feb 5, 2018

fyi, @johnbelamaric, the addition to the CLI referenced in #2066 (comment) would help pass IPAM options when invoking individual docker run commands.
