
Install default cni binaries when deploying the daemonset #207

Closed

Conversation

@micw (Contributor) commented Dec 8, 2018

Fixes #192

@micw force-pushed the feature/install-cni-binaries branch 2 times, most recently from fc1eecc to 7922201 on December 8, 2018
@coveralls commented Dec 8, 2018

Pull Request Test Coverage Report for Build 698

  • 0 of 0 changed or added relevant lines in 0 files are covered.
  • No unchanged relevant lines lost coverage.
  • Overall coverage remained the same at 52.114%

Totals:
  • Change from base Build 695: 0.0%
  • Covered Lines: 604
  • Relevant Lines: 1159

💛 - Coveralls

@dougbtv (Member) commented Jan 10, 2019

Mike, I like the idea for this generally -- let's continue the discussion. I'd like to have the option not to install these, and/or to place them non-destructively. I also might want to use another distribution of the CNI reference plugins so that we have more control over which versions we're including.

@rkamudhan (Member) commented:
@micw @dcbw @dougbtv Is there a standard containerized CNI binary image that installs all the CNI plugins? Or do we have to create one?

@rkamudhan (Member) left a review:

Only a few nitpicks on the standard CNI plugin Docker image. Overall, the PR is good.

@micw (Contributor, Author) commented Feb 28, 2019

Unfortunately, there is no official image. I created a PR a while ago (containernetworking/plugins#245), but it is stuck because of the general discussion in containernetworking/cni#284, and it's unlikely that there will be one in the near future.
If it's OK for you, I'd go with the Rancher image for now.

@micw (Contributor, Author) commented Mar 7, 2019

@rkamudhan Can we merge this?

@s1061123 (Member) commented Mar 8, 2019

@micw I believe that the multus daemonset should install just multus, because some users still want to use their current CNI plugins. In addition, your container's CNI plugins are slightly old, so I don't think it is a good choice for multus. Also, we don't currently maintain that container image, so we should keep it separate from a maintenance point of view as well (the 'multus-cni' repo's purpose is the multus CNI plugin itself, as far as I know).

I believe you would like to have some place that can hold several daemonset YAMLs for multus (not only with the CNI reference plugins in the future, I guess). How about having another repo for that (with some sample YAML to integrate multus) in the K8sNetworkPlumbingWG GitHub org, like 'multus-config-examples' or similar?

@dougbtv @dcbw @rkamudhan, do you have any thoughts?

@micw (Contributor, Author) commented Mar 8, 2019

@s1061123 Thank you for the feedback. I'm a bit torn between the following:
On the one hand, we need the default CNI plug-ins for multus to work properly, so installing them along with multus would make sense and give a good initial user experience.
On the other hand, the CNI plug-ins might be managed differently, e.g. installed and updated by OS packages, so we don't want to overwrite them.

If we had a helm chart rather than static yaml files, we could make this easily configurable. Unfortunately, helm works only after multus is installed ;-)
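
For illustration only, a minimal sketch of what such a chart could look like, assuming a hypothetical `installCNIPlugins` values key (no such chart exists in this repo):

```yaml
# Hypothetical Helm template fragment: render the plugin-install init
# container only when the user opts in via values.
{{- if .Values.installCNIPlugins }}
      initContainers:
        - name: install-cni-binaries
          image: rancher/coreos-flannel-cni:v0.3.0
{{- end }}
```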

I think the best option would be to move the yaml files into an "examples" folder where we provide well-documented, well-tested, working examples for different setups (so we could provide one example with just multus+flannel, one with an additional default CNI plugin installation, and some others).

@dougbtv (Member) commented Mar 8, 2019

Here's the direction I'd like to take... Firstly -- I think including the CNI plugins along with the Multus installation is a good idea -- it also ensures that our documentation examples work (for instance, we use macvlan as our primary first-time-through example). Definite thanks to @micw for the contribution; let's work to get this together.

What I'd like to do, however, is create a distribution of the reference CNI plugins that we, as a community, can build as we please, so that we have some control of what's there, as opposed to including the Rancher distribution. Rancher's goals for including the reference CNI plugins are likely different from our own. Partially, I'd like to see the ability to have some of the newer reference plugins (such as the static IPAM plugin). I think we could use this until we have an official build from the containernetworking plugins repo directly -- and once we have one there, I'd defer to that. (Thanks for kicking off the conversation with them, @micw.)

Additionally -- I'm going to test this PR to see what it does when plugins already exist on disk. I'd like it to essentially not overwrite anything. I think that's what it'd do now, but I'll double check.

In the meanwhile, I'll work on getting a docker image created that packages the reference CNI plugins.

As for the Helm charts... Certainly we could take a pull request that includes some helm charts. However, I think Helm charts are overkill for what we're trying to do, so I wouldn't recommend them as the default quick-start installation -- they create the kind of chicken-and-egg problems @micw mentioned. The bottom line is that a lot of Helm's functionality just isn't necessary here; we don't really have many cases where we need to templatize our daemonset, for example.

@micw (Contributor, Author) commented Mar 8, 2019

@dougbtv

> Additionally -- I'm going to test this PR to see what it does when plugins already exist on disk. I'd like it to essentially not overwrite anything. I think that's what it'd do now, but I'll double check.

No need to test this - it actually overwrites what's there. I had to make that decision: if I don't overwrite, we also get no updates when we use a newer image.

I'm thinking about how I could improve this. E.g., I could create a file in that folder that keeps track of the md5 sums of the plug-ins installed by this image, and overwrite a plug-in only if it was installed by that image.
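
As a rough sketch of that idea (hypothetical image name and layout; the plugins are assumed to ship in /cni-bin inside the image, and the host's CNI bin dir to be mounted at /host/opt/cni/bin):

```yaml
initContainers:
  - name: install-cni-binaries
    # Hypothetical image with the reference plugins shipped in /cni-bin
    image: example/cni-plugins:v0.7.4
    command:
      - sh
      - -c
      - |
        # Record the md5 sums of everything we install in a manifest file,
        # and overwrite a binary only if its current checksum appears in
        # that manifest (i.e. we installed it ourselves).
        MANIFEST=/host/opt/cni/bin/.installed-md5sums
        cd /cni-bin
        for f in *; do
          if [ -e "/host/opt/cni/bin/$f" ]; then
            cur=$(md5sum "/host/opt/cni/bin/$f" | cut -d' ' -f1)
            # An existing file we did not install: leave it untouched.
            grep -q "$cur" "$MANIFEST" 2>/dev/null || continue
          fi
          cp "$f" "/host/opt/cni/bin/$f"
          md5sum "/host/opt/cni/bin/$f" | cut -d' ' -f1 >> "$MANIFEST"
        done
    volumeMounts:
      - name: cnibin
        mountPath: /host/opt/cni/bin
```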

> In the meanwhile, I'll work on getting a docker image created that packages the reference CNI plugins.

I have a docker image somewhere that pulls, builds, and packages CNI from the official sources. I dropped that idea because I thought that, with my PR, we would soon have an official image. I'll see if I can find it.

@@ -117,6 +117,19 @@ spec:
       - operator: Exists
         effect: NoSchedule
       serviceAccountName: multus
+      initContainers:
+        - name: install-cni-binaries
+          image: rancher/coreos-flannel-cni:v0.3.0
A reviewer (Member) commented on the diff:
Change to nfvpe/containernetworking-plugins:latest

@micw (Contributor, Author) replied:

I'd prefer to use a version-tagged image; see my comments below.

@micw (Contributor, Author) commented Mar 8, 2019

@dougbtv @s1061123 @rkamudhan
I have created an automated build of the official CNI binaries, plus an installer script that keeps track of its own installed CNI plug-ins, so that it only overwrites its own plug-ins (in case of updates). A unit test of this installer is part of the Docker build, too.

https://github.com/micw/cni-plugin-installer -> https://hub.docker.com/r/micwy/cni-plugin-installer

I have added a build tag for the latest release (v0.7.4) and for the latest master commit (v0.7.5-SNAPSHOT-20190227-afd7391938547).

Please have a look at this. If you are fine with this solution, we can use it for multus.

@dougbtv (Member) commented Mar 8, 2019

Hey @micw -- my team has a containernetworking/plugins fork that is maintained, so I decided to keep a branch and a .travis.yml there in order to do a build -- the current build is based on master of containernetworking/plugins.

It's in the multus-distribution branch of https://github.com/redhat-nfvpe/plugins

Feel free to participate in the management of that branch in that repository too, absolutely.

> No need to test this - it actually overwrites what's there. I had to make that decision: if I don't overwrite, we also get no updates when we use a newer image.
> I'm thinking about how I could improve this. E.g., I could create a file in that folder that keeps track of the md5 sums of the plug-ins installed by this image, and overwrite a plug-in only if it was installed by that image.

Here's my thinking on it -- currently, there are a few things to consider...

  1. The main reason to have the daemonset-style install is a quick installation, following the model set forth by Flannel. It makes it easy to install for the first time and covers what you'd expect from a first-time experience. That's the most important to me.
  2. I'm not sure it's this daemonset's responsibility to lifecycle-manage the reference CNI plugins; that might fit better with Tomo's suggestion of having an external repository.
  3. I think the best default behavior would be to place the reference CNI plugins on disk only if they're not present, as that makes the examples in the documentation work (see the sketch below).
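
A minimal sketch of that default behavior, under the same assumptions as the earlier sketch (hypothetical image, plugins shipped in /cni-bin, host bin dir mounted at /host/opt/cni/bin):

```yaml
initContainers:
  - name: install-cni-binaries
    # Hypothetical image with the reference plugins shipped in /cni-bin
    image: example/cni-plugins:v0.7.4
    command:
      - sh
      - -c
      - |
        for f in /cni-bin/*; do
          # Never clobber a plugin that already exists on the host;
          # copy only when no file of that name is present.
          [ -e "/host/opt/cni/bin/${f##*/}" ] || cp "$f" /host/opt/cni/bin/
        done
    volumeMounts:
      - name: cnibin
        mountPath: /host/opt/cni/bin
```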

Beyond this, however, there's lots that we can contribute to the project. One thing we REALLY need is a kind of "I installed Multus, now what?" document. One of the first things it should cover is "How do I maintain my own daemonset for Multus, and how do I customize it?" -- because, like the Flannel daemonset-style installation, there are lots of options for Flannel that aren't in that daemonset. I think they expect you to package it up, distribute it, install it, and lifecycle-manage it in a way that works with your own environment; it's just a starting point.

I'd definitely like any contribution there, too.

@micw (Contributor, Author) commented Mar 8, 2019

@dougbtv this is great news. Please see if you can use my contribution in containernetworking/plugins#245 to build the image. Could you also create a tagged build (like my v0.7.5-SNAPSHOT-DATE-GITHASH) so that we have a reliable image?

Edit: I can also transfer the ownership of my "cni-plugin-installer" repo to you (or you can fork it) so that you can do your builds to nfvpe/containernetworking-plugins with it.

Edit2: Can you meet me in the "kubernetes" Slack?

@micw force-pushed the feature/install-cni-binaries branch from 7922201 to 575d093 on March 8, 2019
@dougbtv (Member) commented Mar 8, 2019

@micw -- sorry I'm back and forth from the keyboard, let's see if we can catch up next week in chat.

However -- that being said -- if you'd like to make a PR against the redhat-nfvpe/plugins multus-distribution branch for the changes you have against containernetworking/plugins -- that'd be awesome. Thanks!

@s1061123 force-pushed the master branch 2 times, most recently from 1b0b39d to c319f6b on March 22, 2019
@dougbtv closed this on Apr 30, 2020
Successfully merging this pull request may close these issues:
  • Install reference CNI plugins as part of Multus daemonset-style installation (#192)