
TestErrConnKilledHTTP2 failed #108975

Closed
kkkkun opened this issue Mar 24, 2022 · 4 comments
Labels
  • kind/failing-test: Categorizes issue or PR as related to a consistently or frequently failing test.
  • needs-sig: Indicates an issue or PR lacks a `sig/foo` label and requires one.
  • needs-triage: Indicates an issue or PR lacks a `triage/foo` label and requires one.

Comments

kkkkun (Member) commented Mar 24, 2022

Which jobs are failing?

https://prow.k8s.io/view/gs/kubernetes-jenkins/pr-logs/pull/107724/pull-kubernetes-unit/1506951069819736064

Which tests are failing?

TestErrConnKilledHTTP2

Since when has it been failing?

https://prow.k8s.io/view/gs/kubernetes-jenkins/pr-logs/pull/107724/pull-kubernetes-unit/1506951069819736064

=== FAIL: pkg/kubelet/nodeshutdown Test_managerImpl_processShutdownEvent/kill_pod_func_take_too_long (0.00s)
    nodeshutdown_manager_linux_test.go:709: managerImpl.processShutdownEvent() should log Shutdown manager pod killing time out, got I0324 11:23:06.940142   59533 nodeshutdown_manager_linux.go:319] "Shutdown manager processing shutdown event"
        I0324 11:23:06.940426   59533 nodeshutdown_manager_linux.go:375] "Shutdown manager killing pod with gracePeriod" pod="normal-pod" gracePeriod=10
        I0324 11:23:06.940528   59533 nodeshutdown_manager_linux.go:387] "Shutdown manager finished killing pod" pod="normal-pod"
        I0324 11:23:06.940761   59533 nodeshutdown_manager_linux.go:375] "Shutdown manager killing pod with gracePeriod" pod="critical-pod" gracePeriod=20
        I0324 11:23:06.940818   59533 nodeshutdown_manager_linux.go:387] "Shutdown manager finished killing pod" pod="critical-pod"
        I0324 11:23:06.940965   59533 nodeshutdown_manager_linux.go:324] "Shutdown manager completed processing shutdown event, node will shutdown shortly"
    --- FAIL: Test_managerImpl_processShutdownEvent/kill_pod_func_take_too_long (0.00s)
=== FAIL: pkg/kubelet/nodeshutdown Test_managerImpl_processShutdownEvent (0.00s)
=== FAIL: vendor/k8s.io/apiserver/pkg/server/filters TestErrConnKilledHTTP2 (0.16s)
==================
WARNING: DATA RACE
Write at 0x000003879e00 by goroutine 375:
  k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.captureStdErr.func2()
      /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout_test.go:273 +0x3b
  runtime.deferreturn()
      /usr/local/go/src/runtime/panic.go:436 +0x32
  testing.tRunner()
      /usr/local/go/src/testing/testing.go:1439 +0x213
  testing.(*T).Run.func1()
      /usr/local/go/src/testing/testing.go:1486 +0x47
Previous read at 0x000003879e00 by goroutine 405:
  k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output()
      /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:773 +0x71e
  k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printDepth()
      /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:608 +0x314
  k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).print()
      /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:590 +0x286
  k8s.io/kubernetes/vendor/k8s.io/klog/v2.Error()
      /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1434 +0x225
  k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/util/flowcontrol.(*configController).processNextWorkItem.func1()
      /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/util/flowcontrol/apf_controller.go:350 +0x198
  k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/util/flowcontrol.(*configController).processNextWorkItem()
      /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/util/flowcontrol/apf_controller.go:357 +0x6c
  k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/util/flowcontrol.(*configController).runWorker()
      /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/util/flowcontrol/apf_controller.go:333 +0x2e
  k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/util/flowcontrol.(*configController).runWorker-fm()
      <autogenerated>:1 +0x39
  k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1()
      /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x48
  k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil()
      /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xce
  k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil()
      /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x104
  k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until()
      /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90 +0x48
  k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/util/flowcontrol.(*configController).Run.func3()
      /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/util/flowcontrol/apf_controller.go:322 +0x4c
Goroutine 375 (running) created at:
  testing.(*T).Run()
      /usr/local/go/src/testing/testing.go:1486 +0x724
  testing.runTests.func1()
      /usr/local/go/src/testing/testing.go:1839 +0x99
  testing.tRunner()
      /usr/local/go/src/testing/testing.go:1439 +0x213
  testing.runTests()
      /usr/local/go/src/testing/testing.go:1837 +0x7e4
  testing.(*M).Run()
      /usr/local/go/src/testing/testing.go:1719 +0xa71
  k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.TestMain()
      /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/priority-and-fairness_test.go:60 +0x35
  main.main()
      _testmain.go:107 +0x317
Goroutine 405 (finished) created at:
  k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/util/flowcontrol.(*configController).Run()
      /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/util/flowcontrol/apf_controller.go:322 +0x4b9
  k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.startAPFController.func1()
      /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/priority-and-fairness_test.go:1109 +0x63
==================
    testing.go:1312: race detected during execution of test
DONE 34304 tests, 20 skipped, 3 failures in 1.824s
+++ [0324 11:38:19] Saved JUnit XML test report to /logs/artifacts/junit_20220324-111217.xml
make: *** [Makefile:186: test] Error 1
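
The race report points at the test's captureStdErr helper restoring a shared, package-level variable in a deferred function while a goroutine started by startAPFController is still writing log lines through klog (or has already exited without any synchronization back to the test goroutine). Below is a minimal, self-contained sketch of that pattern; logSink and worker are hypothetical stand-ins for illustration, not the actual kubernetes or klog code:

```go
// A hypothetical reproduction of the pattern suggested by the trace above.
package main

import (
	"bytes"
	"fmt"
	"sync"
	"time"
)

// logSink plays the role of the package-level destination that both the test
// helper and the logging library touch.
var logSink = &bytes.Buffer{}

// worker stands in for the background controller goroutine that keeps logging
// through the global until it is told to stop.
func worker(stop <-chan struct{}, wg *sync.WaitGroup) {
	defer wg.Done()
	for {
		select {
		case <-stop:
			return
		default:
			fmt.Fprintln(logSink, "background log line") // reads the global
			time.Sleep(time.Millisecond)
		}
	}
}

func main() {
	// The test swaps the global so it can inspect what gets logged.
	captured := &bytes.Buffer{}
	old := logSink
	logSink = captured

	// Start a background goroutine that logs through the global, much like
	// startAPFController does in the failing test.
	stop := make(chan struct{})
	var wg sync.WaitGroup
	wg.Add(1)
	go worker(stop, &wg)

	time.Sleep(10 * time.Millisecond) // ... exercise the code under test ...

	// The race in the report corresponds to restoring the global while the
	// worker is still reading it (or exited without synchronization).
	// Stopping the goroutine and waiting for it first makes the restore safe:
	close(stop)
	wg.Wait()
	logSink = old

	fmt.Print(captured.String()) // what the test captured
}
```

If the restore (`logSink = old`) is moved above `close(stop)` / `wg.Wait()`, running this sketch with the race detector (`go run -race`) will typically flag the same kind of write-vs-read pair as in the log above.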
kkkkun added the kind/failing-test label on Mar 24, 2022
k8s-ci-robot (Contributor) commented:

@kkkkun: There are no sig labels on this issue. Please add an appropriate label by using one of the following commands:

  • /sig <group-name>
  • /wg <group-name>
  • /committee <group-name>

Please see the group list for a listing of the SIGs, working groups, and committees available.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot added the needs-sig label on Mar 24, 2022
k8s-ci-robot (Contributor) commented:

@kkkkun: This issue is currently awaiting triage.

If a SIG or subproject determines this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot added the needs-triage label on Mar 24, 2022
pohly (Contributor) commented Mar 24, 2022

/close

This is already tracked in #108043. A PR is pending.

k8s-ci-robot (Contributor) commented:

@pohly: Closing this issue.

In response to this:

/close

This is already tracked in #108043. A PR is pending.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
