Add an option for CNI shim to create and configure the pod interface #824
Conversation
@dcbw @danwinship PTAL

ovn-kubernetes has way too many modes. Is there any reason to not just always do it this way?

@danwinship it was implemented using a knob for two reasons.

@danwinship if we also ran OVS on the host, then we could use this mode too :( And this is mostly what we do in openshift-sdn, except that we do the OVS operation inside our container because that's where our ovs-vsctl lives. I guess we could switch to this mode if we map our ovs-vsctl binary and the vsctl socket onto the host's filesystem and call it from there?

@dcbw @danwinship PTAL. This doesn't use the cniShimConfig.json file anymore, so it is very simple now.

Hm... it seems like if we split

Except that the CNI plugin doesn't have any K8s credentials and OVN DB endpoint information. So,

ah... ok
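Only the long-running ovnkube-node holds the K8s credentials and the OVN DB endpoint, which is why the client/server split exists: the shim just relays the CNI request to the server over a local socket. A minimal sketch of that relay follows; the payload fields, URL, and socket endpoint are illustrative, not the PR's actual wire format.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// cniRequest is an illustrative payload: the shim forwards only what the
// kubelet handed it. K8s credentials and the OVN DB endpoint never leave
// the ovnkube-node server.
type cniRequest struct {
	Command     string `json:"command"` // ADD or DEL
	ContainerID string `json:"containerId"`
	Netns       string `json:"netns"`
	IfName      string `json:"ifName"`
}

// buildCNIRequest prepares the HTTP request the shim would send to
// ovnkube-node. The "unix" host is a placeholder; a real client would dial
// a unix-domain socket via a custom http.Transport DialContext.
func buildCNIRequest(req cniRequest) (*http.Request, error) {
	body, err := json.Marshal(req)
	if err != nil {
		return nil, err
	}
	httpReq, err := http.NewRequest(http.MethodPost, "http://unix/cni", bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	httpReq.Header.Set("Content-Type", "application/json")
	return httpReq, nil
}

func main() {
	r, err := buildCNIRequest(cniRequest{
		Command:     "ADD",
		ContainerID: "abc123",
		Netns:       "/proc/42/ns/net",
		IfName:      "eth0",
	})
	if err != nil {
		panic(err)
	}
	fmt.Println(r.Method, r.URL.Path)
}
```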
mostly looks good then
go-controller/pkg/config/config.go
Outdated
@@ -71,6 +71,9 @@ var (

	// NbctlDaemon enables ovn-nbctl to run in daemon mode
	NbctlDaemonMode bool

	// PrivilegedMode needs ovnkube-node container to run with SYS_ADMIN capability by default.
	PrivilegedMode = true
If this was UnprivilegedMode instead, then it would match the config flag, wouldn't need to be initialized here, and could be set directly from the cli.BoolFlag declaration below rather than needing separate code in ovnkube.go.

As for the comment, assuming you go with UnprivilegedMode:

// UnprivilegedMode allows ovnkube-node to run without SYS_ADMIN capability, by performing interface setup in the CNI plugin

(except don't you mean NET_ADMIN anyway?)
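The suggestion above is about binding the config variable at flag-declaration time so no separate copy step is needed. A minimal sketch of that pattern, using the stdlib flag package to stay self-contained (ovn-kubernetes actually uses urfave/cli, where a BoolFlag with a Destination pointer achieves the same binding):

```go
package main

import (
	"flag"
	"fmt"
)

// UnprivilegedMode follows the reviewer's suggestion: the zero value (false)
// is already the right default, so no explicit initialization is needed.
var UnprivilegedMode bool

// parseFlags binds the config variable directly in the flag declaration,
// eliminating the separate copy-into-config code the reviewer objected to.
func parseFlags(args []string) error {
	fs := flag.NewFlagSet("ovnkube", flag.ContinueOnError)
	fs.BoolVar(&UnprivilegedMode, "unprivileged-mode", false,
		"run ovnkube-node without SYS_ADMIN; interface setup moves to the CNI plugin")
	return fs.Parse(args)
}

func main() {
	if err := parseFlags([]string{"--unprivileged-mode"}); err != nil {
		panic(err)
	}
	fmt.Println(UnprivilegedMode)
}
```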
Thanks for reviewing it.
I have updated the code.
go-controller/pkg/cni/cni.go
Outdated
	IPAddress: ipAddress,
	GatewayIP: gatewayIP,
	Ingress:   ingress,
	Egress:    egress}
style nit: put the } on the next line
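For context, the nit touches a Go syntax detail: once the closing brace of a composite literal moves to its own line, the final field needs a trailing comma. A tiny sketch (field names mirror the quoted diff; the struct name is illustrative):

```go
package main

import "fmt"

// podAnnotation is a stand-in struct with the fields from the quoted diff.
type podAnnotation struct {
	IPAddress string
	GatewayIP string
	Ingress   int64
	Egress    int64
}

func main() {
	// With the closing brace on its own line, the last field must end in a
	// comma; gofmt then aligns the field values.
	pa := podAnnotation{
		IPAddress: "10.0.0.5/24",
		GatewayIP: "10.0.0.1",
		Ingress:   1000000,
		Egress:    2000000, // trailing comma is mandatory here
	}
	fmt.Println(pa.IPAddress)
}
```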
done.
go-controller/pkg/cni/cni.go
Outdated
		},
	}

	podIntfaceInfo := &PodIntfaceInfo{
"Intface" is weird... Usually people abbreviate "interface" to "iface", but you could also just not abbreviate it
done.
Currently we use a client/server design to create and configure the pod interface: the client is ovn-k8s-cni-overlay and the server is the ovnkube-node container. The ovnkube-node creates and configures the pod interface. This requires it to run with the SYS_ADMIN capability, which is undesirable.

To make ovnkube-node run with as few capabilities as possible, the idea is to create and configure the pod interface in the client itself (i.e., in ovn-k8s-cni-overlay running on the host). In this approach, the deployer explicitly asks for an unprivileged ovnkube-node using a CLI option. The server returns to the client all the pod interface information (IP, MAC, gateway, MTU, ingress/egress bandwidth, and so on). The client then creates and sets up the pod interface.

Signed-off-by: Zhen Wang <zhewang@nvidia.com>
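The commit message's "server returns to the client all the pod interface information" step can be sketched as a struct plus the sanity checks a client would run before touching the network namespace. Field names and validation are illustrative, not the PR's exact code; the real netlink/veth setup is omitted.

```go
package main

import (
	"fmt"
	"net"
)

// PodInterfaceInfo sketches the payload the ovnkube-node server hands back
// to the ovn-k8s-cni-overlay client in unprivileged mode; the exact field
// set in the PR may differ.
type PodInterfaceInfo struct {
	MAC       string
	IPAddress string // CIDR, e.g. "10.244.1.5/24"
	GatewayIP string
	MTU       int
	Ingress   int64 // bandwidth limits in bits/sec; 0 means unlimited
	Egress    int64
}

// validate performs the kind of checks the client would run before creating
// the veth pair and moving it into the pod's network namespace.
func validate(info PodInterfaceInfo) error {
	if _, err := net.ParseMAC(info.MAC); err != nil {
		return fmt.Errorf("bad MAC %q: %v", info.MAC, err)
	}
	if _, _, err := net.ParseCIDR(info.IPAddress); err != nil {
		return fmt.Errorf("bad IP %q: %v", info.IPAddress, err)
	}
	if net.ParseIP(info.GatewayIP) == nil {
		return fmt.Errorf("bad gateway %q", info.GatewayIP)
	}
	if info.MTU <= 0 {
		return fmt.Errorf("bad MTU %d", info.MTU)
	}
	return nil
}

func main() {
	info := PodInterfaceInfo{
		MAC:       "0a:58:0a:f4:01:05",
		IPAddress: "10.244.1.5/24",
		GatewayIP: "10.244.1.1",
		MTU:       1400,
	}
	fmt.Println(validate(info)) // <nil> for a well-formed payload
}
```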
@danwinship Please review my latest code, thanks!

lgtm