
CAPV: adjust resource consumption for jobs on community cluster #32584

Merged

Conversation

@k8s-ci-robot added the cncf-cla: yes label (indicates the PR's author has signed the CNCF CLA) and the size/L label (denotes a PR that changes 100-499 lines, ignoring generated files) on May 8, 2024.
@k8s-ci-robot added the area/config label (issues or PRs related to code in /config), the area/jobs label, and the sig/testing label (categorizes an issue or PR as relevant to SIG Testing) on May 8, 2024.
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull request has been approved by: chrischdi

The full list of commands accepted by this bot can be found here.

The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot added the approved label (indicates a PR has been approved by an approver from all required OWNERS files) on May 8, 2024.
@@ -116,10 +116,10 @@ periodics:
          resources:
            requests:
              cpu: "4000m"
Member

Should we set limits on some more jobs? I think a bunch of them only have requests (including upgrade jobs).
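
For reference, in a Prow job config the limits block sits next to requests in the container spec. A minimal sketch of a periodic job with both set; the job name, image, command, and values below are illustrative, not taken from this PR:

    periodics:
    - name: periodic-capv-example-job       # hypothetical job name
      interval: 24h
      decorate: true
      spec:
        containers:
        - image: gcr.io/k8s-staging-test-infra/krte:latest   # illustrative image
          command:
          - runner.sh
          resources:
            requests:
              cpu: "4000m"                  # scheduler reserves this much CPU
              memory: "6Gi"                 # illustrative value
            limits:
              cpu: "4000m"                  # container is throttled above this
              memory: "6Gi"                 # container is OOM-killed above this

Setting limits equal to requests for every container gives the pod a Guaranteed QoS class, which makes its footprint on a shared community cluster more predictable.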

Member

AFAIK the jobs without limits are only the ones still running on GCP.
I would not invest time in optimizing those jobs, because we have to migrate them soonish and also because AFAIK we don't have the same monitoring data that we get for jobs running in EKS.

Member Author

ack, we have no monitoring data for these.

Member

Ah kk. Missed that we don't have data.

@fabriziopandini
Member

/lgtm
/hold for @sbueringer to take another look

@k8s-ci-robot added the do-not-merge/hold label (indicates that a PR should not merge because someone has issued a /hold command) on May 9, 2024.
@k8s-ci-robot added the lgtm label ("looks good to me"; indicates that a PR is ready to be merged) on May 9, 2024.
@sbueringer
Member

/hold cancel

@k8s-ci-robot removed the do-not-merge/hold label on May 10, 2024.
@k8s-ci-robot merged commit 7138b3a into kubernetes:master on May 10, 2024. 7 checks passed.
@k8s-ci-robot
Contributor

@chrischdi: Updated the job-config configmap in namespace default at cluster test-infra-trusted using the following files:

  • key cluster-api-provider-vsphere-main-periodics.yaml using file config/jobs/kubernetes-sigs/cluster-api-provider-vsphere/cluster-api-provider-vsphere-main-periodics.yaml
  • key cluster-api-provider-vsphere-main-presubmits.yaml using file config/jobs/kubernetes-sigs/cluster-api-provider-vsphere/cluster-api-provider-vsphere-main-presubmits.yaml
  • key cluster-api-provider-vsphere-release-1-10-periodics.yaml using file config/jobs/kubernetes-sigs/cluster-api-provider-vsphere/cluster-api-provider-vsphere-release-1-10-periodics.yaml
  • key cluster-api-provider-vsphere-release-1-10-presubmits.yaml using file config/jobs/kubernetes-sigs/cluster-api-provider-vsphere/cluster-api-provider-vsphere-release-1-10-presubmits.yaml
  • key cluster-api-provider-vsphere-release-1-7-periodics.yaml using file config/jobs/kubernetes-sigs/cluster-api-provider-vsphere/cluster-api-provider-vsphere-release-1-7-periodics.yaml
  • key cluster-api-provider-vsphere-release-1-7-presubmits.yaml using file config/jobs/kubernetes-sigs/cluster-api-provider-vsphere/cluster-api-provider-vsphere-release-1-7-presubmits.yaml
  • key cluster-api-provider-vsphere-release-1-8-periodics.yaml using file config/jobs/kubernetes-sigs/cluster-api-provider-vsphere/cluster-api-provider-vsphere-release-1-8-periodics.yaml
  • key cluster-api-provider-vsphere-release-1-8-presubmits.yaml using file config/jobs/kubernetes-sigs/cluster-api-provider-vsphere/cluster-api-provider-vsphere-release-1-8-presubmits.yaml
  • key cluster-api-provider-vsphere-release-1-9-periodics.yaml using file config/jobs/kubernetes-sigs/cluster-api-provider-vsphere/cluster-api-provider-vsphere-release-1-9-periodics.yaml
  • key cluster-api-provider-vsphere-release-1-9-presubmits.yaml using file config/jobs/kubernetes-sigs/cluster-api-provider-vsphere/cluster-api-provider-vsphere-release-1-9-presubmits.yaml
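
Laid out as a manifest, an update like this has roughly the following shape; a sketch assuming the standard ConfigMap layout, with file contents elided:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: job-config
      namespace: default
    data:
      cluster-api-provider-vsphere-main-periodics.yaml: |
        # contents of config/jobs/kubernetes-sigs/cluster-api-provider-vsphere/cluster-api-provider-vsphere-main-periodics.yaml
      cluster-api-provider-vsphere-main-presubmits.yaml: |
        # contents of the corresponding presubmits file
      # the release-1-7 through release-1-10 periodics/presubmits keys follow the same pattern

Prow components read job definitions from this configmap, so updating these keys is what makes the new requests and limits take effect on the cluster.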

In response to this:

Part of kubernetes-sigs/cluster-api-provider-vsphere#2978

xref: used sheet: https://docs.google.com/spreadsheets/d/18jQgOlhtMbOICJyMFr_gdI3WlhI0kYmuiEMJuKF61dg/edit#gid=1987539428

/assign @sbueringer

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
