Support v1alpha2 version of PodGroup & Queue #314
Conversation
const (
	// GroupName is the group name used in this package.
	GroupName = "scheduling.sigs.dev"
I am not sure why the group name was changed; in v1alpha1 it is called "scheduling.incubator.k8s.io".
The group name has been changed for v1alpha2 only, as suggested by @k82cn.
v1alpha2 is the version; is there any consideration I missed for the group name change?
This was the issue kubernetes-retired/kube-batch#815 in kube-batch, in which the group name was specified by @k82cn, so I have followed the same here.
Let's use the new group name.
@k82cn @hzxuzhonghu Please suggest a new one.
The new one is kubernetes-retired/kube-batch#815 :)
@k82cn I think you mean scheduling.sigs.k8s.io, so I will make changes accordingly. Or do you want us to keep scheduling.sigs.dev itself?
FYI: we already have "batch.volcano.sh", "bus.volcano.sh", and "scheduling.incubator.k8s.io".
Personally, I prefer scheduling.volcano.sh if there is no other consideration.
OK, talked offline with @k82cn. We should keep it consistent with kube-batch.
We promised kubeflow that we would follow kube-batch's API, so let's just cherry-pick this from kube-batch :)
pkg/scheduler/cache/cache.go
Outdated
if err != nil {
	glog.Errorf("Error while updating podgroup with error: %v", err)
}
podGroupInfo, err := api.ConvertV1Alpha1ToPodGroupInfo(updated)
I am concerned about the performance, since there are 2x marshal and 2x unmarshal operations in one update.
/cc @TommyLike
@thandayuthapani would you mind providing some background for this change?
This change supports both versions of the PodGroup and Queue CRDs: users can create a PodGroup or Queue in either version, and both are handled the same way in code. @k82cn wanted this change in kube-batch, so the same was done in kube-batch, and since cherry-picking those commits was not possible, I am raising it as new commits. For issue: #305
thanks
/lgtm
@thandayuthapani please fix the CI
Force-pushed from e9ad08b to f9e6000
This PR is dependent on volcano-retired/charts#13; once that is merged, E2E has to be retriggered.
@hzxuzhonghu @k82cn @TommyLike I have fixed the CI. Please review.
Force-pushed from f9e6000 to 24328fa
Force-pushed from 24328fa to 6829fe7
@hzxuzhonghu Please have a look
/approve
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: k82cn, thandayuthapani. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
/lgtm
Add a v1alpha2 version for PodGroup and Queue and handle both versions in code, so users can create PodGroup/Queue objects in either version.
Dependent on volcano-retired/charts#11; CI will pass only once that PR is merged, since CRD creation is done there.
For issue: #305
Cherry-picking has a lot of conflicts, so raising as new commits.