Introduce master host object; move vtep creation to netmaster #27
Conversation
    err = cfgNw.Read(epCfg.NetId)
    if err != nil {
        return err
    }
    ...
    pktTag = 0
If I understand correctly, it looks like we set pktTag=0 implicitly when intfName is provided? In an SR-IOV kind of deployment the pkt tag will need to come from the network, and intfName will also be specified. Why not pick the tag from CfgNetworkState as before?
Associating a pktTag with an endpoint is somewhat awkward; it is really a property of a network. Therefore, instead of associating a tag with an endpoint, we could introduce a network (aka underlay network) at the top level to which the various hosts' EPs belong.
I understand, but at the Driver level pktTag is already a property of OvsCfgNetwork, right? In this function we pass pktTag as part of CreateDeletePort() since it needs to be programmed in the OVSDB port table. I was wondering why we need to hardcode pktTag to 0 in the Driver code here when we can pick it up from the corresponding OvsCfgNetwork state.
Also, it looks like we are no longer creating OvsCfgNetwork state for the infra network, which makes endpoint handling in the Driver a bit inconsistent. At the Driver level we could instead model it so that each endpoint belongs to a network and always derives its network-specific properties (like pktTag in this case) from NetworkCfg; this would keep the handling in the Driver consistent for all endpoints.
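To illustrate the suggestion, here is a minimal, hypothetical sketch in Go. The types and method names (`stateStore`, `ReadNetwork`, `pktTagForEndpoint`) are stand-ins, not the actual netplugin API; the point is only that the driver always reads pktTag from the endpoint's network state instead of hardcoding 0 when an interface name is set:

```go
package main

import "fmt"

// Hypothetical, simplified stand-ins for the driver's config-state types.
type OvsCfgNetwork struct {
	ID     string
	PktTag int
}

type OvsCfgEndpoint struct {
	NetID    string
	IntfName string
}

// stateStore mimics reading network config state by ID.
type stateStore map[string]*OvsCfgNetwork

func (s stateStore) ReadNetwork(id string) (*OvsCfgNetwork, error) {
	nw, ok := s[id]
	if !ok {
		return nil, fmt.Errorf("network %q not found", id)
	}
	return nw, nil
}

// pktTagForEndpoint derives the tag from the endpoint's network in all cases,
// rather than hardcoding 0 when IntfName is provided. An infra network would
// simply carry PktTag == 0 in its own config state.
func pktTagForEndpoint(store stateStore, ep *OvsCfgEndpoint) (int, error) {
	nw, err := store.ReadNetwork(ep.NetID)
	if err != nil {
		return 0, err
	}
	return nw.PktTag, nil
}

func main() {
	store := stateStore{
		"infra":  {ID: "infra", PktTag: 0},
		"tenant": {ID: "tenant", PktTag: 12},
	}
	t1, _ := pktTagForEndpoint(store, &OvsCfgEndpoint{NetID: "tenant", IntfName: "eth2"})
	t2, _ := pktTagForEndpoint(store, &OvsCfgEndpoint{NetID: "infra", IntfName: "eth0"})
	fmt.Println(t1, t2)
}
```

With this shape, SR-IOV-style endpoints (where both a tag and an intfName are needed) fall out naturally, since the tag always comes from the network.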
Yes, of course there is no network for infra; that's what I was alluding to: it can be added at the top level, with hosts as its endpoints. It should have been clear from the JSON schema, I mean...
I think we are saying the same thing.
I am not asking that we put the pkt-tag in the JSON schema for hosts (which corresponds to the logical config, i.e. MasterHostCfg). What I am asking is that MasterHostCfg should result in the creation of concrete configuration, viz. an OvsCfgNetwork (for the infra network, with pktTag set to 0) and associated OvsCfgEndpoints (for the host interfaces, which can then pick up pktTag from OvsCfgNetwork like other endpoints).
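A rough sketch of that expansion, with entirely illustrative types (the real MasterHostCfg/OvsCfgNetwork/OvsCfgEndpoint structs are not shown in this thread): the logical host config fans out into one concrete infra network with pktTag 0 plus one endpoint per host interface, so drivers need no special case:

```go
package main

import "fmt"

// Hypothetical logical config posted per host; field names are illustrative.
type MasterHostCfg struct {
	Name   string
	Intf   string
	VtepIP string
}

// Hypothetical concrete config state the netmaster would materialize.
type OvsCfgNetwork struct {
	ID     string
	PktTag int
}

type OvsCfgEndpoint struct {
	ID    string
	NetID string
	Intf  string
}

// expandHostCfg creates the infra network (pktTag 0) once and one endpoint
// per host interface; drivers then read pktTag from the network as usual.
func expandHostCfg(hosts []MasterHostCfg) (OvsCfgNetwork, []OvsCfgEndpoint) {
	infra := OvsCfgNetwork{ID: "infra", PktTag: 0}
	eps := make([]OvsCfgEndpoint, 0, len(hosts))
	for _, h := range hosts {
		eps = append(eps, OvsCfgEndpoint{
			ID:    h.Name + "-" + h.Intf,
			NetID: infra.ID,
			Intf:  h.Intf,
		})
	}
	return infra, eps
}

func main() {
	nw, eps := expandHostCfg([]MasterHostCfg{
		{Name: "host1", Intf: "eth2", VtepIP: "10.0.0.1"},
		{Name: "host2", Intf: "eth2", VtepIP: "10.0.0.2"},
	})
	fmt.Println(nw.ID, nw.PktTag, len(eps))
}
```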
* merge contract groups change
* rebase to latest
* add versioning in api, other fixes
* add network oper properties
* avoid generating operlinks
* avoid operlinks for client code gen
* global oper state
* add version to external contracts object
* support for oper only objects
* add list of endpoints in network oper
These changes allow a user/orchestrator that is tied into host discovery to feed host creation as an event into the netmaster. Thus the EPs for the vtep and vlan-if are now created by the netmaster, not auto-created on the host.
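The flow in the description could look something like the following hypothetical sketch (all names are illustrative, not the PR's actual code): an orchestrator posts a host-discovery event, and the netmaster, rather than each host, provisions the vtep and vlan-if endpoints:

```go
package main

import "fmt"

// HostEvent is a hypothetical host-discovery event an orchestrator would
// post to the netmaster.
type HostEvent struct {
	HostName string
	VtepIP   string
	VlanIntf string
}

// Endpoint is a simplified stand-in for the EP state netmaster creates.
type Endpoint struct {
	ID   string
	Kind string // "vtep" or "vlan-if"
}

// handleHostCreate shows the netmaster-side handling: the vtep and vlan-if
// EPs are derived from the event instead of being auto-created on the host.
func handleHostCreate(ev HostEvent) []Endpoint {
	return []Endpoint{
		{ID: ev.HostName + "-vtep-" + ev.VtepIP, Kind: "vtep"},
		{ID: ev.HostName + "-vlanif-" + ev.VlanIntf, Kind: "vlan-if"},
	}
}

func main() {
	eps := handleHostCreate(HostEvent{HostName: "host1", VtepIP: "10.0.0.1", VlanIntf: "eth2"})
	for _, ep := range eps {
		fmt.Println(ep.Kind, ep.ID)
	}
}
```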