Plans for Node Group Management and Service Scope Management #3582
2022.1.24 Meeting Minutes:
proposal: #3574
2022.1.26 Meeting:
If I understand correctly, NodeGroup and ServiceScope are trying to achieve goals similar to OpenYurt's YurtAppManager and SuperEdge's ServiceGroup, but in different ways? Both OpenYurt YurtAppManager and SuperEdge ServiceGroup distribute workloads by creating sub-workloads, while KubeEdge tries to distribute the pods of a single Deployment across different NodeGroups. That might work well for a Deployment, but is it OK for a StatefulSet to spread the pods of a single StatefulSet across different edge locations? For example:
And where can I find the proposal for Service Scope? Thanks
@benjaminhuo Thanks for your feedback.
It's a problem we need to take into consideration. This is not the final design, and any other suggestions are welcome.
Do you mean Service Scope based on something like node group, where a pod can only reach endpoints in the same nodegroup? In my mind, it is still being designed.
Yep, I mean a multi-nodegroup deployment should have only one service, and access to this service from a pod should only find endpoints in the same nodegroup.
Well, because KubeEdge didn't have the concept of node group before this proposal, I think the proposal for service scope based on it hasn't been posted yet. You can watch the edgemesh repo, which maintains the networking solutions of KubeEdge; any new information will be posted there.
@benjaminhuo I personally lean toward wrapping the workload in a CRD and then distributing it to each node group.
Do you mean creating only one service for all the node groups? That may lead to pod IPs from different groups ending up in one EndpointSlice, which would be hard to separate by group.
@fisherxu My previous conclusion is based on the current proposal, because there is only one deployment for multiple node groups in it. The current design might work for Deployment, but not for StatefulSet, and it might have issues with Service.
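To make the CRD-wrapping idea above more concrete, here is a minimal sketch of what such a wrapper resource could look like. All type, field, and label names are hypothetical assumptions for illustration, not the actual KubeEdge API:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// WorkloadTemplate stands in for an embedded Deployment/StatefulSet spec
// (drastically simplified for the sketch).
type WorkloadTemplate struct {
	Kind  string `json:"kind"`  // e.g. "Deployment"
	Image string `json:"image"` // stand-in for a full pod template
}

// TargetGroup selects one node group and how many replicas it should run.
type TargetGroup struct {
	NodeGroup string `json:"nodeGroup"`
	Replicas  int32  `json:"replicas"`
}

// GroupedWorkloadSpec is the spec of the hypothetical wrapper CRD.
type GroupedWorkloadSpec struct {
	Template     WorkloadTemplate `json:"template"`
	TargetGroups []TargetGroup    `json:"targetGroups"`
}

func main() {
	spec := GroupedWorkloadSpec{
		Template: WorkloadTemplate{Kind: "Deployment", Image: "nginx:1.21"},
		TargetGroups: []TargetGroup{
			{NodeGroup: "hangzhou", Replicas: 2},
			{NodeGroup: "beijing", Replicas: 3},
		},
	}
	out, _ := json.MarshalIndent(spec, "", "  ")
	fmt.Println(string(out))
	// A controller reconciling this object would create one sub-workload
	// (e.g. one Deployment) per entry in targetGroups, each constrained
	// to that group's nodes.
}
```

The point of the wrapper is that the pod template is written once, while the controller fans it out into one sub-workload per group, which also keeps StatefulSet identity local to each group.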
As we discussed recently, for workloads like Deployment we decided to deploy a sub-workload to every group through a CRD, so every group will have its own workload (Deployment). But for the service, if we create only one for all the workloads, there are currently two features in Kubernetes related to topology (working with kube-proxy):
Another idea is to create one service for every nodegroup, which would solve the isolation problem, but there would be too many services, and we would also need a gateway/ingress for every node group. What do you think? @benjaminhuo @Congrool @vincentgoat @zc2638
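For comparison, here is a rough sketch of the "one service per nodegroup" alternative. The naming scheme and the nodegroup label key are assumptions made for illustration, not an existing API:

```go
package main

import "fmt"

// service is a simplified stand-in for a Kubernetes Service.
type service struct {
	Name     string
	Selector map[string]string
}

// perGroupServices derives one Service per node group for a given app.
func perGroupServices(app string, groups []string) []service {
	svcs := make([]service, 0, len(groups))
	for _, g := range groups {
		svcs = append(svcs, service{
			Name: fmt.Sprintf("%s-%s", app, g),
			Selector: map[string]string{
				"app":                  app,
				"example.io/nodegroup": g, // hypothetical group label carried by pods
			},
		})
	}
	return svcs
}

func main() {
	// Three groups already mean three Services (and, as noted above,
	// three gateways/ingresses as well).
	for _, s := range perGroupServices("nginx", []string{"hangzhou", "beijing", "shanghai"}) {
		fmt.Printf("%s -> %v\n", s.Name, s.Selector)
	}
}
```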
@fisherxu Is this implementable?
Yes, EdgeMesh is one of our networking solutions and will do this :)
I agree that EdgeMesh should be aware of the nodegroup. As for the kube-proxy approach, OpenYurt uses EndpointSliceProxying together with topologyKeys to use one service for all sub-deployments in all nodepools. https://github.com/openyurtio/openyurt/blob/master/docs/tutorial/service-topology.md But it requires:
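For context, topologyKeys are an ordered preference list: the first key whose value on the client's node matches any endpoint's node wins, and "*" is a catch-all. Below is a minimal sketch of that matching logic; it is illustrative only, using a hypothetical nodegroup label, and is not kube-proxy's or OpenYurt's actual implementation:

```go
package main

import "fmt"

// endpoint is a simplified stand-in for one entry of an EndpointSlice.
type endpoint struct {
	IP     string
	Labels map[string]string // topology labels of the node hosting the endpoint
}

// filterByTopology tries the topology keys in order and returns the endpoints
// matching the first key for which the client node has the same label value;
// "*" accepts everything.
func filterByTopology(clientLabels map[string]string, eps []endpoint, topologyKeys []string) []endpoint {
	for _, key := range topologyKeys {
		if key == "*" {
			return eps
		}
		var matched []endpoint
		for _, ep := range eps {
			if ep.Labels[key] != "" && ep.Labels[key] == clientLabels[key] {
				matched = append(matched, ep)
			}
		}
		if len(matched) > 0 {
			return matched
		}
	}
	return nil
}

func main() {
	client := map[string]string{"example.io/nodegroup": "hangzhou"} // hypothetical group label
	eps := []endpoint{
		{IP: "10.0.0.1", Labels: map[string]string{"example.io/nodegroup": "hangzhou"}},
		{IP: "10.0.0.2", Labels: map[string]string{"example.io/nodegroup": "beijing"}},
	}
	// Only the hangzhou endpoint is returned; "*" acts as the fallback.
	fmt.Println(filterByTopology(client, eps, []string{"example.io/nodegroup", "*"}))
}
```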
no authority :) |
Yeah, this doc is not public yet and cannot be accessed @vincentgoat
2022.3.2 meeting:
Please join the group kubeedge@googlegroups.com
2022.3.7 meeting:
Hi, in this design, have you considered certificate management for device access? For example, if the same application deployment is packaged into node group A and node group B, do node group A and node group B need to use the same set of certificates issued by the CA?
Hi @koulq, thanks for the feedback. We currently manage workloads and services through nodegroups; management of device certificates can be considered in future evolutions, and you are welcome to follow up on this process.
Hi everyone, we plan to deliver Pod scheduling among node groups in phase 1, and Service Scope within a node group in phase 2.
If you'd like to propose features or get involved in the development, please see the project plan below; you can add other todos here as you wish:
Phase 1: Node Group Management
Phase 2: Service Scope Management
NodeGroup-A will be deployed preferentially. After the specified number of instances in NodeGroup-A have been deployed, deployment to NodeGroup-B will start (see the sketch below).
/cc @Congrool @fisherxu
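Here is a small sketch of the ordered rollout described above, where deployment moves to the next group only after the previous one has reached its specified count. The names and types are hypothetical, not the actual controller logic:

```go
package main

import "fmt"

// groupPlan records, for one node group, how many instances are wanted
// and how many are currently ready.
type groupPlan struct {
	Name    string
	Desired int32
	Ready   int32
}

// nextGroupToDeploy walks the groups in priority order and returns the first
// one that has not yet reached its desired count; earlier groups block later ones.
func nextGroupToDeploy(plans []groupPlan) (string, bool) {
	for _, p := range plans {
		if p.Ready < p.Desired {
			return p.Name, true
		}
	}
	return "", false // every group has reached its desired count
}

func main() {
	plans := []groupPlan{
		{Name: "NodeGroup-A", Desired: 3, Ready: 3},
		{Name: "NodeGroup-B", Desired: 2, Ready: 0},
	}
	if g, ok := nextGroupToDeploy(plans); ok {
		fmt.Println("deploy next to:", g) // prints NodeGroup-B once A is complete
	}
}
```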