
multus: add support for the Multus daemonset #54

Merged
2 commits merged into openshift:master on Jan 26, 2019

Conversation

dcbw
Contributor

@dcbw dcbw commented Jan 8, 2019

Multus will be installed by default but can be turned off by
setting the DisableMultiNetwork property of the Cluster Network Operator
object to 'true'.

[NOTE: we are waiting on https://github.com/openshift/release/pull/2570 to land, which will provide the official images for Multus. Once that lands, I will remove the "hack: no images for now" commit and update the image references.]
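The default-on gate described above can be sketched in a few lines. This is an illustrative stand-in, not the operator's actual types: `NetworkConfigSpec` here is trimmed to the one field under discussion, and `multusEnabled` is a hypothetical helper name.

```go
package main

import "fmt"

// NetworkConfigSpec is a trimmed, hypothetical stand-in for the operator's
// config type; only the field discussed in this PR is modeled.
type NetworkConfigSpec struct {
	// DisableMultiNetwork turns off the Multus deployment when set to true.
	DisableMultiNetwork *bool
}

// multusEnabled sketches the behavior described above: Multus is installed
// by default and only skipped when the flag is explicitly set to true.
func multusEnabled(conf *NetworkConfigSpec) bool {
	return conf.DisableMultiNetwork == nil || !*conf.DisableMultiNetwork
}

func main() {
	off := true
	fmt.Println(multusEnabled(&NetworkConfigSpec{}))                          // default: enabled
	fmt.Println(multusEnabled(&NetworkConfigSpec{DisableMultiNetwork: &off})) // explicitly disabled
}
```

A pointer-to-bool lets the operator distinguish "unset" (default on) from an explicit false.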

@openshift-ci-robot openshift-ci-robot added do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. size/L Denotes a PR that changes 100-499 lines, ignoring generated files. labels Jan 8, 2019
@openshift-ci-robot openshift-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Jan 8, 2019
@dcbw dcbw mentioned this pull request Jan 8, 2019
```yaml
serviceAccountName: multus
containers:
- name: kube-multus
  image: nfvpe/multus:latest
```
Contributor

You probably know this, but the image URL will need to be templated through like the others.
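Templating the image reference through could look roughly like the following. This is a sketch, not the operator's actual rendering code: `expandImages` and the `${MultusImage}` placeholder are illustrative assumptions, using the standard library's `os.Expand` for `${NAME}` substitution.

```go
package main

import (
	"fmt"
	"os"
)

// expandImages substitutes ${Name} placeholders in a manifest with values
// from a release image map, in the spirit of "templated through like the
// others". The placeholder name MultusImage is hypothetical.
func expandImages(manifest string, images map[string]string) string {
	return os.Expand(manifest, func(name string) string {
		return images[name]
	})
}

func main() {
	manifest := "image: ${MultusImage}"
	out := expandImages(manifest, map[string]string{
		"MultusImage": "registry.example/multus:v1", // illustrative image ref
	})
	fmt.Println(out) // image: registry.example/multus:v1
}
```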

Contributor

Do we want to use nfvpe/multus or pull from the official registry? One issue I hit was that occasionally the latest nfvpe/multus image doesn't support reading the kubelet device-plugin checkpoint file, which results in failures configuring sriov-cni. I'm not sure whether that's because nfvpe/multus:latest may be updated by builds from different branches.

Contributor Author

yep, hence the WIP :)

```diff
@@ -185,9 +213,9 @@ func RenderAdditionalNetworks(conf *netv1.NetworkConfigSpec, manifestDir string)
 	return nil, errors.Errorf("invalid Additional Network Configuration: %v", errs)
 }

-// render Multus when additional networks is provided
+// render the CRD when additional networks are provided
```
Contributor

As per our previous discussion, we need to render the CRD whenever Multus is enabled so administrators can create additional networks out-of-band. I suspect that Multus also needs the CRD to exist to prevent errors.

Contributor Author

@dcbw dcbw Jan 9, 2019

I don't think Multus cares. It only looks for the pod annotations, and only then does it try to fetch the networks via the CRD. So it's not going to block pods that don't use multiple networks. But yeah, the CRD should get rendered anyway.
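The behavior described above (Multus only consults the CRD for pods that ask for extra networks) can be sketched as a simple annotation check. The annotation key is Multus's `k8s.v1.cni.cncf.io/networks`; the helper name is hypothetical.

```go
package main

import (
	"fmt"
	"strings"
)

// networksAnnotation is the pod annotation Multus inspects for additional
// network requests.
const networksAnnotation = "k8s.v1.cni.cncf.io/networks"

// wantsAdditionalNetworks sketches the gate described above: only pods
// carrying a non-empty networks annotation cause a lookup of
// NetworkAttachmentDefinition objects; all other pods pass straight
// through to the default network.
func wantsAdditionalNetworks(annotations map[string]string) bool {
	return strings.TrimSpace(annotations[networksAnnotation]) != ""
}

func main() {
	fmt.Println(wantsAdditionalNetworks(map[string]string{}))                                  // false
	fmt.Println(wantsAdditionalNetworks(map[string]string{networksAnnotation: "sriov-net-a"})) // true
}
```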

(resolved review thread, outdated: bindata/network/multus/multus.yaml)
(resolved review thread: bindata/network/multus/multus.yaml)
```diff
 	if len(ans) > 0 {
-		objs, err := renderMultusConfig(manifestDir)
+		objs, err := renderAdditionalNetworksCRD(manifestDir)
```
Contributor

Rendering the CRD and a CR in the same run may fail; some details here.

Contributor Author

@zshi-redhat yeah, that's pretty odd and I think a bug somewhere in the Kube parts. If we create the CRD first, there's no reason that creating a subsequent CR should fail.

Contributor

@dcbw I still see this failure with the latest version of the PR; not sure if it's specific to how I run the test (running the cluster-network-operator binary on one of the master nodes). Did you ever hit this issue?

```diff
@@ -41,6 +48,7 @@ func Validate(conf *netv1.NetworkConfigSpec) error {
 	errs := []error{}

 	errs = append(errs, ValidateDefaultNetwork(conf)...)
+	errs = append(errs, ValidateMultus(conf)...)
```
Contributor

Shall we also validate additional networks here? If none are provided, the additional-network validation returns immediately.

Contributor Author

@zshi-redhat no, validation of additional networks would be handled by the additional-network rendering, not the Multus rendering. Multus and AdditionalNetworks are completely separate codepaths that do not depend on each other.

Contributor

Sorry, I added the comment on the wrong line. I meant adding validation of additional networks as a separate line so that the controller captures the failure immediately instead of reaching the render phase. Just a small optimization, since it will be validated before rendering anyway.
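The suggestion above, running all validators up front so configuration errors surface before any rendering, can be sketched as follows. The types and validator bodies here are hypothetical stand-ins; only the errs-append aggregation pattern mirrors the diff in this thread.

```go
package main

import "fmt"

// NetworkConfigSpec is a hypothetical trimmed config; the real type lives
// in the operator's API package.
type NetworkConfigSpec struct {
	AdditionalNetworks []string
}

// Each validator returns a slice of errors so Validate can report every
// problem at once, matching the errs-append pattern in the diff above.
func validateMultus(conf *NetworkConfigSpec) []error { return nil }

func validateAdditionalNetworks(conf *NetworkConfigSpec) []error {
	errs := []error{}
	for _, n := range conf.AdditionalNetworks {
		if n == "" {
			errs = append(errs, fmt.Errorf("additional network has empty name"))
		}
	}
	return errs
}

// Validate runs all validators before any rendering, so the controller can
// surface configuration errors immediately.
func Validate(conf *NetworkConfigSpec) error {
	errs := []error{}
	errs = append(errs, validateMultus(conf)...)
	errs = append(errs, validateAdditionalNetworks(conf)...)
	if len(errs) > 0 {
		return fmt.Errorf("invalid configuration: %v", errs)
	}
	return nil
}

func main() {
	fmt.Println(Validate(&NetworkConfigSpec{}))
	fmt.Println(Validate(&NetworkConfigSpec{AdditionalNetworks: []string{""}}) != nil)
}
```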

(resolved review thread, outdated: pkg/network/render.go)
@dcbw
Contributor Author

dcbw commented Jan 9, 2019

> We probably want to make the Multus conf file more configurable instead of using auto, since there are parameters a user may want to configure (for example namespaceIsolation, logLevel, logFile), and in the future, when enabling network migration with Multus, a user may want to specify the default network plugin, although we may support configuring the default network with a net-attach-def CR.

At this point, let's wait until we actually have use-cases for those things before we put them into the operator. I honestly don't think namespaceIsolation is worthwhile here; that's what the Admission Controller and RBAC are for. logLevel and logFile would be controlled by the network operator when increased debugging is called for, but shouldn't be a user-visible option. Almost everything that Multus allows configuration for should be handled automatically by the operator and not be a passthrough option to the user.

@zshi-redhat
Contributor

> We probably want to make Multus conf file more configurable instead of using auto, reason being that there are some parameters that user may want to configure, for example, the namespaceIsolation, logLevel, logFile etc and in the future when enabling network migration with Multus, user may want to specify default network plugin although we may support configure default network with net-attach-def CR.
>
> At this point, let's wait until we actually have use-cases for those things before we put them into the operator. I honestly don't think namespaceIsolation is worthwhile here, that's what the Admission Controller and RBAC is for. logLevel and logFile would be controlled by the Network operator when increased debugging is called for, but shouldn't be a user-visible option. Almost everything that multus allows configuration for, should be handled automatically by the operator and not be a passthrough option to the user.

OK, understood. The operator would be the only user-facing interface; if a user-configurable parameter for Multus is needed, it will be exposed at the operator level.

@dcbw
Contributor Author

dcbw commented Jan 9, 2019

> Ok, understood, Operator would be the only user-facing interface, if there needs to be an user-configurable parameter to multus, it will be exposed in the operator level.

@zshi-redhat yeah, at least for now. Let's keep things simple because once we add something, we can't easily remove it :)

@dcbw dcbw force-pushed the multus branch 2 times, most recently from 4f01da2 to 555d35f Compare January 10, 2019 06:17
@dcbw dcbw changed the title [WIP] multus: add support for the Multus daemonset multus: add support for the Multus daemonset Jan 10, 2019
@openshift-ci-robot openshift-ci-robot added size/XL Denotes a PR that changes 500-999 lines, ignoring generated files. and removed do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. size/L Denotes a PR that changes 100-499 lines, ignoring generated files. labels Jan 10, 2019
@dcbw
Contributor Author

dcbw commented Jan 10, 2019

/retest

flake is during install:

```
time="2019-01-10T07:40:24Z" level=error
time="2019-01-10T07:40:24Z" level=error msg="Error: Error applying plan:"
time="2019-01-10T07:40:24Z" level=error
time="2019-01-10T07:40:24Z" level=error msg="1 error occurred:"
time="2019-01-10T07:40:24Z" level=error msg="\t* module.vpc.aws_route.to_nat_gw[4]: 1 error occurred:"
time="2019-01-10T07:40:24Z" level=error msg="\t* aws_route.to_nat_gw.4: Error creating route: timeout while waiting for state to become 'success' (timeout: 2m0s)"
```

@dcbw dcbw force-pushed the multus branch 2 times, most recently from e634bea to 555d35f Compare January 10, 2019 19:59
@openshift-ci-robot openshift-ci-robot added size/L Denotes a PR that changes 100-499 lines, ignoring generated files. size/XL Denotes a PR that changes 500-999 lines, ignoring generated files. and removed size/XL Denotes a PR that changes 500-999 lines, ignoring generated files. size/L Denotes a PR that changes 100-499 lines, ignoring generated files. labels Jan 10, 2019
@dcbw
Contributor Author

dcbw commented Jan 10, 2019

/retest

2 similar comments
@dcbw
Contributor Author

dcbw commented Jan 10, 2019

/retest

@dcbw
Contributor Author

dcbw commented Jan 10, 2019

/retest

@squeed
Contributor

squeed commented Jan 16, 2019

/lgtm

@dcbw
Contributor Author

dcbw commented Jan 17, 2019

NOTE: we are waiting on openshift/release#2570 to land, which will provide the official images for Multus. Once that lands, I will remove the "hack: no images for now" commit and update the image references. But this PR demonstrates that Multus passes CI even without official images.

@openshift-ci-robot openshift-ci-robot removed the lgtm Indicates that a PR is ready to be merged. label Jan 24, 2019
@dcbw
Contributor Author

dcbw commented Jan 24, 2019

@danwinship real images added now, FWIW

@knobunc
Contributor

knobunc commented Jan 24, 2019

/retest

(resolved review threads: manifests/image-references (outdated), bindata/network/multus/000-ns.yaml, bindata/network/multus/multus.yaml (outdated, two threads))
Multus will be installed by default but can be turned off by
setting the DisableMultiNetwork property of the Cluster Network Operator
object to 'true'.
@dcbw
Contributor Author

dcbw commented Jan 25, 2019

/retest
One master failed to download images for many containers; ImageInspectError too, no idea what that means in practice.

```
kube-system                          etcd-member-ip-10-0-26-81.ec2.internal                    1/1       Running             0          17m       10.0.26.81   ip-10-0-26-81.ec2.internal   <none>
multus                               multus-w72wg                                              0/1       CrashLoopBackOff    9          22m       10.0.26.81   ip-10-0-26-81.ec2.internal   <none>
openshift-cluster-network-operator   cluster-network-operator-htwh8                            1/1       Running             0          22m       10.0.26.81   ip-10-0-26-81.ec2.internal   <none>
openshift-dns                        dns-default-968p5                                         0/2       ContainerCreating   0          21m       <none>       ip-10-0-26-81.ec2.internal   <none>
openshift-kube-apiserver             installer-1-ip-10-0-26-81.ec2.internal                    0/1       ContainerCreating   0          17m       <none>       ip-10-0-26-81.ec2.internal   <none>
openshift-sdn                        ovs-cpvss                                                 0/1       ImageInspectError   0          22m       10.0.26.81   ip-10-0-26-81.ec2.internal   <none>
openshift-sdn                        sdn-controller-4tx5t                                      1/1       Running             0          22m       10.0.26.81   ip-10-0-26-81.ec2.internal   <none>
openshift-sdn                        sdn-dz8n4                                                 0/1       ImageInspectError   0          22m       10.0.26.81   ip-10-0-26-81.ec2.internal   <none>
```

@dcbw
Contributor Author

dcbw commented Jan 25, 2019

/retest

@dcbw
Contributor Author

dcbw commented Jan 25, 2019

@danwinship look better now?

@danwinship
Contributor

/lgtm

@openshift-ci-robot openshift-ci-robot added the lgtm Indicates that a PR is ready to be merged. label Jan 25, 2019
@openshift-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: danwinship, dcbw, squeed

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:
  • OWNERS [danwinship,dcbw,squeed]

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@knobunc
Contributor

knobunc commented Jan 25, 2019

/test all

@knobunc
Contributor

knobunc commented Jan 25, 2019

/hold cancel

@openshift-ci-robot openshift-ci-robot removed the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Jan 25, 2019
@openshift-bot
Contributor

/retest

Please review the full test history for this PR and help us cut down flakes.

@dcbw
Contributor Author

dcbw commented Jan 26, 2019

@dcbw: The following test failed, say /retest to rerun them all:
Test: ci/prow/e2e-aws (commit fd9303c); rerun with /test e2e-aws

The following don't seem network related...

Failing tests:

[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should not deadlock when a pod's predecessor fails [Suite:openshift/conformance/parallel] [Suite:k8s]
[sig-storage] Volume limits should verify that all nodes have volume limits [Suite:openshift/conformance/parallel] [Suite:k8s]

@openshift-bot
Contributor

/retest

Please review the full test history for this PR and help us cut down flakes.

@openshift-merge-robot openshift-merge-robot merged commit cfcc522 into openshift:master Jan 26, 2019
@jeremyeder

+100
