
[Scheduler] Use allNodes when calculating nextStartNodeIndex #124933

Merged
merged 2 commits on May 21, 2024

Conversation

AxeZhan
Member

@AxeZhan AxeZhan commented May 17, 2024

What type of PR is this?

/kind bug

What this PR does / why we need it:

Which issue(s) this PR fixes:

Fixes #124930

Special notes for your reviewer:

Does this PR introduce a user-facing change?

Fixed a bug in the scheduler where it would crash when a PreFilter plugin returned a non-existent node.

Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.:
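
For context, here is a minimal, standalone sketch of the failure mode (not the scheduler's actual code; the nextStartIndex helper and node names are made up for illustration): advancing a round-robin start index modulo the length of a PreFilter-reduced node list panics when that list is empty, while wrapping around the full node list, as the PR title describes, cannot.

    package main

    import "fmt"

    // nextStartIndex mimics how a round-robin start index could be advanced
    // after a scheduling cycle: add the number of nodes processed and wrap
    // around the length of whichever node list the modulo is taken over.
    func nextStartIndex(current, processed int, nodes []string) int {
        return (current + processed) % len(nodes)
    }

    func main() {
        allNodes := []string{"node-a", "node-b", "node-c"}

        // Wrapping around the full node list is safe as long as the cluster
        // has at least one node.
        fmt.Println(nextStartIndex(1, 2, allNodes)) // prints 0

        // If the modulo is taken over a reduced list that ended up empty
        // (e.g. PreFilter named only a non-existent node), the division by
        // zero panics, which matches the crash reported in the issue.
        defer func() {
            if r := recover(); r != nil {
                fmt.Println("recovered:", r) // runtime error: integer divide by zero
            }
        }()
        fmt.Println(nextStartIndex(1, 2, []string{}))
    }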


@k8s-ci-robot k8s-ci-robot added release-note-none Denotes a PR that doesn't merit a release note. size/S Denotes a PR that changes 10-29 lines, ignoring generated files. kind/bug Categorizes issue or PR as related to a bug. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. do-not-merge/needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels May 17, 2024
@k8s-ci-robot
Contributor

This issue is currently awaiting triage.

If a SIG or subproject determines this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot k8s-ci-robot added the needs-priority Indicates a PR lacks a `priority/foo` label and requires one. label May 17, 2024
@k8s-ci-robot k8s-ci-robot added sig/scheduling Categorizes an issue or PR as relevant to SIG Scheduling. and removed do-not-merge/needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. labels May 17, 2024
@AxeZhan
Member Author

AxeZhan commented May 17, 2024

/cc @alculquicondor

@alculquicondor
Member

Please add a release note

Member

@alculquicondor alculquicondor left a comment

Let's also add an integration test using nodeAffinity, as reported in the issue.
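
For anyone following along, a rough sketch of the kind of pod spec involved (illustrative only, not the PR's actual integration test; the helper name and image are placeholders): a pod whose required node affinity matches metadata.name of a node that does not exist, which is the nodeAffinity scenario reported in the issue.

    package main

    import (
        "fmt"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // buildPodWithMissingNodeAffinity returns a Pod whose required node
    // affinity only matches a hostname that is not present in the cluster,
    // so the candidate node set can be narrowed to a node the scheduler's
    // snapshot does not contain.
    func buildPodWithMissingNodeAffinity() *v1.Pod {
        return &v1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "affinity-to-missing-node"},
            Spec: v1.PodSpec{
                Containers: []v1.Container{{Name: "pause", Image: "registry.k8s.io/pause:3.9"}},
                Affinity: &v1.Affinity{
                    NodeAffinity: &v1.NodeAffinity{
                        RequiredDuringSchedulingIgnoredDuringExecution: &v1.NodeSelector{
                            NodeSelectorTerms: []v1.NodeSelectorTerm{{
                                MatchFields: []v1.NodeSelectorRequirement{{
                                    Key:      "metadata.name",
                                    Operator: v1.NodeSelectorOpIn,
                                    Values:   []string{"no-such-node"},
                                }},
                            }},
                        },
                    },
                },
            },
        }
    }

    func main() {
        fmt.Println(buildPodWithMissingNodeAffinity().Name)
    }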

Member

@Huang-Wei Huang-Wei left a comment

/lgtm
/approve

Thanks @AxeZhan !

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label May 17, 2024
@k8s-ci-robot
Contributor

LGTM label has been added.

Git tree hash: 22c877b53f7130df6d99d567d7fa47cb9e506d3d

@k8s-ci-robot k8s-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label May 17, 2024
@Huang-Wei
Member

/retest

@Huang-Wei
Member

/hold for @alculquicondor to take a final look at the integration test.

@k8s-ci-robot k8s-ci-robot added do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. release-note Denotes a PR that will be considered when it comes time to generate release notes. and removed release-note-none Denotes a PR that doesn't merit a release note. lgtm "Looks good to me", indicates that a PR is ready to be merged. size/S Denotes a PR that changes 10-29 lines, ignoring generated files. labels May 17, 2024
@k8s-ci-robot k8s-ci-robot added this to the v1.31 milestone May 21, 2024
k8s-ci-robot added a commit that referenced this pull request May 23, 2024
…933-upstream-release-1.30

Automated cherry pick of #124933: base on allNodes when calculating nextStartNodeIndex
k8s-ci-robot added a commit that referenced this pull request May 23, 2024
…933-upstream-release-1.27

Automated cherry pick of #124933: base on allNodes when calculating nextStartNodeIndex
k8s-ci-robot added a commit that referenced this pull request May 23, 2024
…933-upstream-release-1.29

Automated cherry pick of #124933: base on allNodes when calculating nextStartNodeIndex
k8s-ci-robot added a commit that referenced this pull request May 23, 2024
…933-upstream-release-1.28

Automated cherry pick of #124933: base on allNodes when calculating nextStartNodeIndex
hswong3i added a commit to alvistack/kubernetes-kubernetes that referenced this pull request Jun 3, 2024
    git clean -xdf
    go mod download
    go mod vendor
    git checkout HEAD -- vendor
    tar zcvf ../kubernetes_1.30.1.orig.tar.gz --exclude=.git .
    debuild -uc -us
    cp kubernetes.spec ../kubernetes_1.30.1-1.spec
    cp ../kubernetes*1.30.1*.{gz,xz,spec,dsc} /osc/home\:alvistack/kubernetes-kubernetes-1.30.1/
    rm -rf ../kubernetes*1.30.1*.*

See kubernetes#124933
See kubernetes#124908

Signed-off-by: Wong Hoi Sing Edison <hswong3i@pantarei-design.com>
hswong3i added a commit to alvistack/kubernetes-kubernetes that referenced this pull request Jun 3, 2024
    git clean -xdf
    go mod download
    go mod vendor
    git checkout HEAD -- vendor
    tar zcvf ../kubernetes_1.28.10.orig.tar.gz --exclude=.git .
    debuild -uc -us
    cp kubernetes.spec ../kubernetes_1.28.10-1.spec
    cp ../kubernetes*1.28.10*.{gz,xz,spec,dsc} /osc/home\:alvistack/kubernetes-kubernetes-1.28.10/
    rm -rf ../kubernetes*1.28.10*.*

See kubernetes#124933
See kubernetes#124908

Signed-off-by: Wong Hoi Sing Edison <hswong3i@pantarei-design.com>
hswong3i added a commit to alvistack/kubernetes-kubernetes that referenced this pull request Jun 3, 2024
    git clean -xdf
    go mod download
    go mod vendor
    git checkout HEAD -- vendor
    tar zcvf ../kubernetes_1.29.5.orig.tar.gz --exclude=.git .
    debuild -uc -us
    cp kubernetes.spec ../kubernetes_1.29.5-1.spec
    cp ../kubernetes*1.29.5*.{gz,xz,spec,dsc} /osc/home\:alvistack/kubernetes-kubernetes-1.29.5/
    rm -rf ../kubernetes*1.29.5*.*

See kubernetes#124933
See kubernetes#124908

Signed-off-by: Wong Hoi Sing Edison <hswong3i@pantarei-design.com>
hswong3i added a commit to alvistack/kubernetes-kubernetes that referenced this pull request Jun 5, 2024
    git clean -xdf
    go mod download
    go mod vendor
    git checkout HEAD -- vendor
    tar zcvf ../kubernetes_1.28.10.orig.tar.gz --exclude=.git .
    debuild -uc -us
    cp kubernetes.spec ../kubernetes_1.28.10-1.spec
    cp ../kubernetes*1.28.10*.{gz,xz,spec,dsc} /osc/home\:alvistack/kubernetes-kubernetes-1.28.10/
    rm -rf ../kubernetes*1.28.10*.*

See kubernetes#124933
See kubernetes#124908

Signed-off-by: Wong Hoi Sing Edison <hswong3i@pantarei-design.com>
hswong3i added a commit to alvistack/kubernetes-kubernetes that referenced this pull request Jun 5, 2024
    git clean -xdf
    go mod download
    go mod vendor
    git checkout HEAD -- vendor
    tar zcvf ../kubernetes_1.29.5.orig.tar.gz --exclude=.git .
    debuild -uc -us
    cp kubernetes.spec ../kubernetes_1.29.5-1.spec
    cp ../kubernetes*1.29.5*.{gz,xz,spec,dsc} /osc/home\:alvistack/kubernetes-kubernetes-1.29.5/
    rm -rf ../kubernetes*1.29.5*.*

See kubernetes#124933
See kubernetes#124908

Signed-off-by: Wong Hoi Sing Edison <hswong3i@pantarei-design.com>
hswong3i added a commit to alvistack/kubernetes-kubernetes that referenced this pull request Jun 5, 2024
    git clean -xdf
    go mod download
    go mod vendor
    git checkout HEAD -- vendor
    tar zcvf ../kubernetes_1.30.1.orig.tar.gz --exclude=.git .
    debuild -uc -us
    cp kubernetes.spec ../kubernetes_1.30.1-1.spec
    cp ../kubernetes*1.30.1*.{gz,xz,spec,dsc} /osc/home\:alvistack/kubernetes-kubernetes-1.30.1/
    rm -rf ../kubernetes*1.30.1*.*

See kubernetes#124933
See kubernetes#124908

Signed-off-by: Wong Hoi Sing Edison <hswong3i@pantarei-design.com>
@debugger24

debugger24 commented Jun 22, 2024

I am using k3s v1.30.1+k3s1.

I am still facing this issue. Is it related, or is it a different issue?

I have been facing this issue since Kubernetes 1.29. I tried v1.29.5+k3s1, v1.30.0+k3s1, and v1.30.1+k3s1, and hit it in all of them.

Jun 23 01:44:11 pi2 k3s[95864]: Trace[842295123]: ["List(recursive=true) etcd3" audit-id:cc9a7ad2-7daf-4bb0-9bbe-beb706b0b98a,key:/apiextensions.k8s.io/customresourcedefinitions,resourceVersion:,resourceVersionMatch:,limit:500,continue: 6134ms (01:44:05.696)]
Jun 23 01:44:11 pi2 k3s[95864]: Trace[842295123]: ---"Writing http response done" count:113 3331ms (01:44:11.830)
Jun 23 01:44:11 pi2 k3s[95864]: Trace[842295123]: [6.134136376s] [6.134136376s] END
Jun 23 01:44:11 pi2 k3s[95864]: I0623 01:44:11.865246   95864 leaderelection.go:260] successfully acquired lease kube-system/kube-scheduler
Jun 23 01:44:11 pi2 k3s[95864]: E0623 01:44:11.876542   95864 runtime.go:79] Observed a panic: "integer divide by zero" (runtime error: integer divide by zero)
Jun 23 01:44:11 pi2 k3s[95864]: goroutine 37253 [running]:
Jun 23 01:44:11 pi2 k3s[95864]: k8s.io/apimachinery/pkg/util/runtime.logPanic({0x52e8f60, 0xa1f3860})
Jun 23 01:44:11 pi2 k3s[95864]:         /go/pkg/mod/github.com/k3s-io/kubernetes/staging/src/k8s.io/apimachinery@v1.30.1-k3s1/pkg/util/runtime/runtime.go:75 +0x7c
Jun 23 01:44:11 pi2 k3s[95864]: k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0x40266a7c00?})
Jun 23 01:44:11 pi2 k3s[95864]:         /go/pkg/mod/github.com/k3s-io/kubernetes/staging/src/k8s.io/apimachinery@v1.30.1-k3s1/pkg/util/runtime/runtime.go:49 +0x78
Jun 23 01:44:11 pi2 k3s[95864]: panic({0x52e8f60?, 0xa1f3860?})
Jun 23 01:44:11 pi2 k3s[95864]:         /usr/local/go/src/runtime/panic.go:770 +0x124
Jun 23 01:44:11 pi2 k3s[95864]: k8s.io/kubernetes/pkg/scheduler.(*Scheduler).findNodesThatFitPod(0x40311e7200, {0x6c65c50, 0x402b0881e0}, {0x6cd47f8, 0x4030c7dd48}, 0x403876d5c0, 0x4031ef0d88)
Jun 23 01:44:11 pi2 k3s[95864]:         /go/pkg/mod/github.com/k3s-io/kubernetes@v1.30.1-k3s1/pkg/scheduler/schedule_one.go:505 +0x8a0
Jun 23 01:44:11 pi2 k3s[95864]: k8s.io/kubernetes/pkg/scheduler.(*Scheduler).schedulePod(0x40311e7200, {0x6c65c50, 0x402b0881e0}, {0x6cd47f8, 0x4030c7dd48}, 0x403876d5c0, 0x4031ef0d88)
Jun 23 01:44:11 pi2 k3s[95864]:         /go/pkg/mod/github.com/k3s-io/kubernetes@v1.30.1-k3s1/pkg/scheduler/schedule_one.go:402 +0x25c
Jun 23 01:44:11 pi2 k3s[95864]: k8s.io/kubernetes/pkg/scheduler.(*Scheduler).schedulingCycle(0x40311e7200, {0x6c65c50, 0x402b0881e0}, 0x403876d5c0, {0x6cd47f8, 0x4030c7dd48}, 0x4031fdfd10, {0x2?, 0x4025beea88?, 0xa45d580?}, ...)
Jun 23 01:44:11 pi2 k3s[95864]:         /go/pkg/mod/github.com/k3s-io/kubernetes@v1.30.1-k3s1/pkg/scheduler/schedule_one.go:149 +0xb8
Jun 23 01:44:11 pi2 k3s[95864]: k8s.io/kubernetes/pkg/scheduler.(*Scheduler).ScheduleOne(0x40311e7200, {0x6c65c50, 0x402aced680})
Jun 23 01:44:11 pi2 k3s[95864]:         /go/pkg/mod/github.com/k3s-io/kubernetes@v1.30.1-k3s1/pkg/scheduler/schedule_one.go:111 +0x4c0
Jun 23 01:44:11 pi2 k3s[95864]: k8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext.func1()
Jun 23 01:44:11 pi2 k3s[95864]:         /go/pkg/mod/github.com/k3s-io/kubernetes/staging/src/k8s.io/apimachinery@v1.30.1-k3s1/pkg/util/wait/backoff.go:259 +0x2c
Jun 23 01:44:11 pi2 k3s[95864]: k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x403ab8fec8?)
Jun 23 01:44:11 pi2 k3s[95864]:         /go/pkg/mod/github.com/k3s-io/kubernetes/staging/src/k8s.io/apimachinery@v1.30.1-k3s1/pkg/util/wait/backoff.go:226 +0x40
Jun 23 01:44:11 pi2 k3s[95864]: k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x403ab8ff68, {0x6c0e880, 0x4035b5ef60}, 0x1, 0x40307c97a0)
Jun 23 01:44:11 pi2 k3s[95864]:         /go/pkg/mod/github.com/k3s-io/kubernetes/staging/src/k8s.io/apimachinery@v1.30.1-k3s1/pkg/util/wait/backoff.go:227 +0x90
Jun 23 01:44:11 pi2 k3s[95864]: k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x40225a2f68, 0x0, 0x0, 0x1, 0x40307c97a0)
Jun 23 01:44:11 pi2 k3s[95864]:         /go/pkg/mod/github.com/k3s-io/kubernetes/staging/src/k8s.io/apimachinery@v1.30.1-k3s1/pkg/util/wait/backoff.go:204 +0x80
Jun 23 01:44:11 pi2 k3s[95864]: k8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext({0x6c65c50, 0x402aced680}, 0x40175d4560, 0x0, 0x0, 0x1)
Jun 23 01:44:11 pi2 k3s[95864]:         /go/pkg/mod/github.com/k3s-io/kubernetes/staging/src/k8s.io/apimachinery@v1.30.1-k3s1/pkg/util/wait/backoff.go:259 +0x80
Jun 23 01:44:11 pi2 k3s[95864]: k8s.io/apimachinery/pkg/util/wait.UntilWithContext(...)
Jun 23 01:44:11 pi2 k3s[95864]:         /go/pkg/mod/github.com/k3s-io/kubernetes/staging/src/k8s.io/apimachinery@v1.30.1-k3s1/pkg/util/wait/backoff.go:170
Jun 23 01:44:11 pi2 k3s[95864]: created by k8s.io/kubernetes/pkg/scheduler.(*Scheduler).Run in goroutine 37071
Jun 23 01:44:11 pi2 k3s[95864]:         /go/pkg/mod/github.com/k3s-io/kubernetes@v1.30.1-k3s1/pkg/scheduler/scheduler.go:445 +0x104
Jun 23 01:44:11 pi2 k3s[95864]: panic: runtime error: integer divide by zero [recovered]
Jun 23 01:44:11 pi2 k3s[95864]:         panic: runtime error: integer divide by zero
Jun 23 01:44:11 pi2 k3s[95864]: goroutine 37253 [running]:
Jun 23 01:44:11 pi2 k3s[95864]: k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0x40266a7c00?})
Jun 23 01:44:11 pi2 k3s[95864]:         /go/pkg/mod/github.com/k3s-io/kubernetes/staging/src/k8s.io/apimachinery@v1.30.1-k3s1/pkg/util/runtime/runtime.go:56 +0xe0
Jun 23 01:44:11 pi2 k3s[95864]: panic({0x52e8f60?, 0xa1f3860?})
Jun 23 01:44:11 pi2 k3s[95864]:         /usr/local/go/src/runtime/panic.go:770 +0x124
Jun 23 01:44:11 pi2 k3s[95864]: k8s.io/kubernetes/pkg/scheduler.(*Scheduler).findNodesThatFitPod(0x40311e7200, {0x6c65c50, 0x402b0881e0}, {0x6cd47f8, 0x4030c7dd48}, 0x403876d5c0, 0x4031ef0d88)
Jun 23 01:44:11 pi2 k3s[95864]:         /go/pkg/mod/github.com/k3s-io/kubernetes@v1.30.1-k3s1/pkg/scheduler/schedule_one.go:505 +0x8a0
Jun 23 01:44:11 pi2 k3s[95864]: k8s.io/kubernetes/pkg/scheduler.(*Scheduler).schedulePod(0x40311e7200, {0x6c65c50, 0x402b0881e0}, {0x6cd47f8, 0x4030c7dd48}, 0x403876d5c0, 0x4031ef0d88)
Jun 23 01:44:11 pi2 k3s[95864]:         /go/pkg/mod/github.com/k3s-io/kubernetes@v1.30.1-k3s1/pkg/scheduler/schedule_one.go:402 +0x25c
Jun 23 01:44:11 pi2 k3s[95864]: k8s.io/kubernetes/pkg/scheduler.(*Scheduler).schedulingCycle(0x40311e7200, {0x6c65c50, 0x402b0881e0}, 0x403876d5c0, {0x6cd47f8, 0x4030c7dd48}, 0x4031fdfd10, {0x2?, 0x4025beea88?, 0xa45d580?}, ...)
Jun 23 01:44:11 pi2 k3s[95864]:         /go/pkg/mod/github.com/k3s-io/kubernetes@v1.30.1-k3s1/pkg/scheduler/schedule_one.go:149 +0xb8
Jun 23 01:44:11 pi2 k3s[95864]: k8s.io/kubernetes/pkg/scheduler.(*Scheduler).ScheduleOne(0x40311e7200, {0x6c65c50, 0x402aced680})
Jun 23 01:44:11 pi2 k3s[95864]:         /go/pkg/mod/github.com/k3s-io/kubernetes@v1.30.1-k3s1/pkg/scheduler/schedule_one.go:111 +0x4c0
Jun 23 01:44:11 pi2 k3s[95864]: k8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext.func1()
Jun 23 01:44:11 pi2 k3s[95864]:         /go/pkg/mod/github.com/k3s-io/kubernetes/staging/src/k8s.io/apimachinery@v1.30.1-k3s1/pkg/util/wait/backoff.go:259 +0x2c
Jun 23 01:44:11 pi2 k3s[95864]: k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x403ab8fec8?)
Jun 23 01:44:11 pi2 k3s[95864]:         /go/pkg/mod/github.com/k3s-io/kubernetes/staging/src/k8s.io/apimachinery@v1.30.1-k3s1/pkg/util/wait/backoff.go:226 +0x40
Jun 23 01:44:11 pi2 k3s[95864]: k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x403ab8ff68, {0x6c0e880, 0x4035b5ef60}, 0x1, 0x40307c97a0)
Jun 23 01:44:11 pi2 k3s[95864]:         /go/pkg/mod/github.com/k3s-io/kubernetes/staging/src/k8s.io/apimachinery@v1.30.1-k3s1/pkg/util/wait/backoff.go:227 +0x90
Jun 23 01:44:11 pi2 k3s[95864]: k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x40225a2f68, 0x0, 0x0, 0x1, 0x40307c97a0)
Jun 23 01:44:11 pi2 k3s[95864]:         /go/pkg/mod/github.com/k3s-io/kubernetes/staging/src/k8s.io/apimachinery@v1.30.1-k3s1/pkg/util/wait/backoff.go:204 +0x80
Jun 23 01:44:11 pi2 k3s[95864]: k8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext({0x6c65c50, 0x402aced680}, 0x40175d4560, 0x0, 0x0, 0x1)
Jun 23 01:44:11 pi2 k3s[95864]:         /go/pkg/mod/github.com/k3s-io/kubernetes/staging/src/k8s.io/apimachinery@v1.30.1-k3s1/pkg/util/wait/backoff.go:259 +0x80
Jun 23 01:44:11 pi2 k3s[95864]: k8s.io/apimachinery/pkg/util/wait.UntilWithContext(...)
Jun 23 01:44:11 pi2 k3s[95864]:         /go/pkg/mod/github.com/k3s-io/kubernetes/staging/src/k8s.io/apimachinery@v1.30.1-k3s1/pkg/util/wait/backoff.go:170
Jun 23 01:44:11 pi2 k3s[95864]: created by k8s.io/kubernetes/pkg/scheduler.(*Scheduler).Run in goroutine 37071
Jun 23 01:44:11 pi2 k3s[95864]:         /go/pkg/mod/github.com/k3s-io/kubernetes@v1.30.1-k3s1/pkg/scheduler/scheduler.go:445 +0x104
Jun 23 01:44:12 pi2 systemd[1]: k3s.service: Main process exited, code=exited, status=2/INVALIDARGUMENT

@AxeZhan
Member Author

AxeZhan commented Jun 23, 2024

@debugger24 I think this patch should be included in 1.30.2. You can check https://kubernetes.io/releases/patch-releases/#detailed-release-history-for-active-branches for the exact versions in each patch release. Can you update to 1.30.2 and see whether the scheduler still panics?

@debugger24

It will be included in 1.31. Since I am using k3s, it looks like I need to wait for the k3s team to release 1.31.

@AxeZhan
Member Author

AxeZhan commented Jun 23, 2024

I believe 1.30.2 is enough; maybe ask the k3s team to do a patch release for this.

@debugger24

Tried v1.30.2-rc3+k3s1, but it didn't work.

@AxeZhan
Member Author

AxeZhan commented Jun 24, 2024

That shouldn't happen. Do you have steps to reproduce? And did you try other clusters (for example, kind)?

hswong3i added a commit to alvistack/kubernetes-kubernetes that referenced this pull request Jul 3, 2024
    git clean -xdf
    go mod download
    go mod vendor
    git checkout HEAD -- vendor
    tar zcvf ../kubernetes_1.28.10.orig.tar.gz --exclude=.git .
    debuild -uc -us
    cp kubernetes.spec ../kubernetes_1.28.10-1.spec
    cp ../kubernetes*1.28.10*.{gz,xz,spec,dsc} /osc/home\:alvistack/kubernetes-kubernetes-1.28.10/
    rm -rf ../kubernetes*1.28.10*.*

See kubernetes#124933
See kubernetes#124908

Signed-off-by: Wong Hoi Sing Edison <hswong3i@pantarei-design.com>
hswong3i added a commit to alvistack/kubernetes-kubernetes that referenced this pull request Jul 3, 2024
    git clean -xdf
    go mod download
    go mod vendor
    git checkout HEAD -- vendor
    tar zcvf ../kubernetes_1.29.5.orig.tar.gz --exclude=.git .
    debuild -uc -us
    cp kubernetes.spec ../kubernetes_1.29.5-1.spec
    cp ../kubernetes*1.29.5*.{gz,xz,spec,dsc} /osc/home\:alvistack/kubernetes-kubernetes-1.29.5/
    rm -rf ../kubernetes*1.29.5*.*

See kubernetes#124933
See kubernetes#124908

Signed-off-by: Wong Hoi Sing Edison <hswong3i@pantarei-design.com>
hswong3i added a commit to alvistack/kubernetes-kubernetes that referenced this pull request Jul 3, 2024
    git clean -xdf
    go mod download
    go mod vendor
    git checkout HEAD -- vendor
    tar zcvf ../kubernetes_1.30.1.orig.tar.gz --exclude=.git .
    debuild -uc -us
    cp kubernetes.spec ../kubernetes_1.30.1-1.spec
    cp ../kubernetes*1.30.1*.{gz,xz,spec,dsc} /osc/home\:alvistack/kubernetes-kubernetes-1.30.1/
    rm -rf ../kubernetes*1.30.1*.*

See kubernetes#124933
See kubernetes#124908

Signed-off-by: Wong Hoi Sing Edison <hswong3i@pantarei-design.com>
Labels
approved Indicates a PR has been approved by an approver from all required OWNERS files. area/test cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. kind/bug Categorizes issue or PR as related to a bug. kind/regression Categorizes issue or PR as related to a regression from a prior release. lgtm "Looks good to me", indicates that a PR is ready to be merged. needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. priority/critical-urgent Highest priority. Must be actively worked on as someone's top priority right now. release-note Denotes a PR that will be considered when it comes time to generate release notes. sig/scheduling Categorizes an issue or PR as relevant to SIG Scheduling. sig/testing Categorizes an issue or PR as relevant to SIG Testing. size/M Denotes a PR that changes 30-99 lines, ignoring generated files.
Development

Successfully merging this pull request may close these issues.

v1.30: kube-scheduler crashes with: Observed a panic: "integer divide by zero"
6 participants