Bug 1873043: Rebase to 1.19.2 #361
Conversation
…oupsForNLB Signed-off-by: Pedro Tôrres <t0rr3sp3dr0@gmail.com>
After PR kubernetes#92555, a number of GCE PD default-fs tests are skipped. The test pattern has SnapshotType set because some provisioning tests use snapshots, but for drivers such as the in-tree GCE PD driver the tests are skipped by the logic in skipUnsupportedTest: https://github.com/kubernetes/kubernetes/blob/master/test/e2e/storage/testsuites/base.go#L154 Since multiple drivers might test with the same pattern, I think we need to keep SnapshotType here. This PR instead removes that part of the logic in skipUnsupportedTest, which should be fine because all snapshot tests check whether a driver has the snapshot capability, as sketched below.
Signed-off-by: knight42 <anonymousknight96@gmail.com>
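The capability-based guard the snapshot suites rely on could look roughly like the sketch below. This is a hypothetical illustration; `DriverInfo`, `CapSnapshotDataSource`, and `skipIfNoSnapshotSupport` are stand-ins for the e2e framework types, not the actual test code:

```go
package main

import "fmt"

// Capability is a stand-in for the e2e framework's driver capability type.
type Capability string

const CapSnapshotDataSource Capability = "snapshotDataSource"

// DriverInfo is a simplified stand-in for the e2e storage DriverInfo struct.
type DriverInfo struct {
	Name         string
	Capabilities map[Capability]bool
}

// skipIfNoSnapshotSupport skips a snapshot test based on driver capability,
// not on the test pattern, so non-snapshot drivers can still share patterns.
func skipIfNoSnapshotSupport(info DriverInfo, skipf func(format string, args ...interface{})) {
	if !info.Capabilities[CapSnapshotDataSource] {
		skipf("Driver %q does not support snapshots -- skipping", info.Name)
	}
}

func main() {
	gcePD := DriverInfo{Name: "gce-pd", Capabilities: map[Capability]bool{}}
	skipIfNoSnapshotSupport(gcePD, func(format string, args ...interface{}) {
		fmt.Printf(format+"\n", args...)
	})
}
```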
…-pick-of-#93515-upstream-release-1.19 Automated cherry pick of kubernetes#93515: Use NLB Subnet CIDRs instead of VPC CIDRs in
Signed-off-by: knight42 <anonymousknight96@gmail.com>
Make the YAML valid.
If a pod has a configmap/secret volume, an annoying message shows up in the log approximately every 70 seconds. This happens because the desiredStateOfWorldPopulator sync loop always calls MarkRemountRequired. That function finds the volume plugin and checks whether the plugin requires a remount; the configmap and secret plugins always return true. Thus, the reconciler code of the volume manager remounts the volume every time. This commit decreases the log level of that message in the mount function from warning to V(4). Signed-off-by: José Guilherme Vanz <jguilhermevanz@suse.com>
This reverts commit ebece49936e635f151fdd8a64fa2b77fd183e817.
If a pod has a configmap/secret volume, an annoying message shows up in the log approximately every 70 seconds. This happens because the desiredStateOfWorldPopulator sync loop always calls MarkRemountRequired. That function finds the volume plugin and checks whether the plugin requires a remount; the configmap and secret plugins always return true. Thus, the reconciler code of the volume manager remounts the volume every time. This commit changes SetVolumeOwnership to print the warning only if the function does not finish within 30 seconds. Signed-off-by: José Guilherme Vanz <jguilhermevanz@suse.com>
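A minimal sketch of the timed-warning pattern this commit describes, in plain Go (the helper name `warnIfSlow` is illustrative, not the actual kubelet code):

```go
package main

import (
	"fmt"
	"time"
)

// warnIfSlow runs fn and emits the warning only if fn has not returned
// within timeout -- the pattern used so the volume-ownership message is
// logged only when the operation actually takes long (30s upstream).
func warnIfSlow(timeout time.Duration, warning string, fn func()) {
	timer := time.AfterFunc(timeout, func() {
		fmt.Println("WARNING:", warning)
	})
	defer timer.Stop() // cancels the warning if fn finishes in time
	fn()
}

func main() {
	// Fast path: finishes well under the timeout, so nothing is printed.
	warnIfSlow(30*time.Second, "volume ownership change is taking long", func() {})

	// Slow path (timeout shrunk for the demo): the warning fires once.
	warnIfSlow(50*time.Millisecond, "volume ownership change is taking long", func() {
		time.Sleep(200 * time.Millisecond)
	})
}
```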
Currently, if a group is specified for an impersonated user, 'system:authenticated' is not added to the 'Groups' list inside the request context. This causes priority and fairness matching to fail: the catch-all flow schema requires the user to be in either the 'system:authenticated' or the 'system:unauthenticated' group, and an impersonated user with a specified group is in neither. As a general rule, if an impersonated user has passed authorization checks, we should consider them authenticated.
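A minimal sketch of the group fix, assuming a simplified groups slice instead of the real `user.Info` plumbing (`ensureAuthenticatedGroup` is a hypothetical name; the group strings are the real ones from the message above):

```go
package main

import "fmt"

const (
	authenticatedGroup   = "system:authenticated"
	unauthenticatedGroup = "system:unauthenticated"
)

// ensureAuthenticatedGroup mirrors the fix: an impersonated user that passed
// authorization should carry system:authenticated, so the catch-all
// priority-and-fairness flow schema can match the request.
func ensureAuthenticatedGroup(groups []string) []string {
	for _, g := range groups {
		if g == authenticatedGroup || g == unauthenticatedGroup {
			return groups
		}
	}
	return append(groups, authenticatedGroup)
}

func main() {
	fmt.Println(ensureAuthenticatedGroup([]string{"developers"}))
	// Output: [developers system:authenticated]
}
```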
A bug was discovered in the `enforceRequirements` func for `upgrade plan`. If a command line argument that specifies the target Kubernetes version is supplied, the `ClusterConfiguration` returned by `enforceRequirements` will have its `KubernetesVersion` field set to the new version. If no version was specified, the returned `KubernetesVersion` points to the currently installed one. This remained undetected for a couple of reasons:
- It's only `upgrade plan` that allows the version command line argument to be optional (in `upgrade apply` it's mandatory).
- Prior to 1.19, the implementation of `upgrade plan` did not make use of the `KubernetesVersion` returned by `enforceRequirements`.
`upgrade plan` supports this optional command line argument to enable air-gapped setups (not specifying a version on the command line ends up looking for the latest version over the Internet). Hence, the only option is to make `enforceRequirements` consistent in the `upgrade plan` case and always return the currently installed version in the `KubernetesVersion` field. Signed-off-by: Rostislav M. Georgiev <rostislavg@vmware.com>
…rs with API servers 1.17.0-1.18.5
Pinning the kube-controller-manager and kube-scheduler kubeconfig files to point to the control-plane-endpoint can be problematic during immutable upgrades if one of these components ends up contacting an N-1 kube-apiserver: https://kubernetes.io/docs/setup/release/version-skew-policy/#kube-controller-manager-kube-scheduler-and-cloud-controller-manager For example, the components can send a request for a non-existing API version. Instead of using the CPE for these components, use the LocalAPIEndpoint. This guarantees that the components talk to the local kube-apiserver, which should be the same version, unless the user has explicitly patched the manifests.
…ick-of-#94398-origin-release-1.19 Automated cherry pick of kubernetes#94398: kubeadm: make the scheduler and KCM connect to local endpoint
…-pick-of-#94261-upstream-release-1.19 Add PR kubernetes#89069 Action Required to 1.19 release notes
…-of-#93646-origin-release-1.19 Automated cherry pick of kubernetes#93646: let panics propagate up when processLoop panic
Change-Id: I30005efec519840142bc7151aeaa543912a2f84b
…ck-of-#94246-upstream-release-1.19 Automated cherry pick of kubernetes#94246: Fix issue on skipTest in storage suits
…ck-of-#94316-upstream-release-1.19 Automated cherry pick of kubernetes#94316: Fixed reflector not recovering from "Too large resource
The isCoreDNSVersionSupported() check assumes that there is a running kubelet that manages the CoreDNS containers. If the containers are still being created, it is not possible to fetch their image digest. To work around that, isCoreDNSVersionSupported() could poll until the CoreDNS Pods are running; however, depending on timing and on a CNI yet to be installed, this can break the addon idempotency of "kubeadm init": if the CoreDNS Pods are waiting for another step, they will never get running. Remove the function isCoreDNSVersionSupported() and assume that the version is always supported. Rely on the Corefile migration library to error out if it must.
…-pick-of-#94294-upstream-release-1.19 Automated cherry pick of kubernetes#94294: Remove duplicate nodeSelector
…ck-of-#94306-upstream-release-1.19 Automated cherry pick of kubernetes#94306: fix(azure): check error returned by scaleSet.getVMSS
…ck-of-#93773-upstream-release-1.19 Automated cherry pick of kubernetes#93773: fix(kubelet): protect `containerCleanupInfos` from concurrent map writes
…of-#94421-upstream-release-1.19 Automated cherry pick of kubernetes#94421: kubeadm: Fix `upgrade plan` for air-gapped setups
…k-of-#94204-upstream-release-1.19 Automated cherry pick of kubernetes#94204: Add impersonated user to system:authenticated group
This needs to be maintained and committed in our fork so that it can be vendored by origin. UPSTREAM: <carry>: (squash) Stop ignoring generated openapi definitions openshift/origin needs to be able to vendor these definitions so they need to be committed. Should be squashed with UPSTREAM: <carry>: Stop ignoring test/e2e/generated/bindata.go
UPSTREAM: <carry>: Force releasing the lock on exit for KS
…ied resource Currently, the count includes keys from different resources if the specified resource/key is a prefix of them. Consider the following keys:
A: <storage-prefix>//foo.bar.io/machines
B: <storage-prefix>//foo.bar.io/machinesets
If we ask for the count of key A, the result will also include the keys under B, since A is a prefix of B. Append a separator to mark the end of the key; this excludes all keys from a different resource that merely share the specified key as a prefix.
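A minimal sketch of the separator fix, with a hypothetical `countKeyFor` helper standing in for the real etcd3 store code:

```go
package main

import (
	"fmt"
	"strings"
)

// countKeyFor appends the path separator, so a prefix count for
// "<storage-prefix>//foo.bar.io/machines" no longer also matches keys of
// "<storage-prefix>//foo.bar.io/machinesets".
func countKeyFor(key string) string {
	const sep = "/"
	if !strings.HasSuffix(key, sep) {
		key += sep
	}
	return key
}

func main() {
	prefix := countKeyFor("/registry//foo.bar.io/machines")
	fmt.Println(strings.HasPrefix("/registry//foo.bar.io/machinesets/ms1", prefix)) // false
	fmt.Println(strings.HasPrefix("/registry//foo.bar.io/machines/m1", prefix))     // true
}
```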
…lished OpenAPI kubectl falls over arrays without an item schema. Hence, we have to publish a less precise OpenAPI spec (similar to other pruning we already do for the same reason).
apiserver_request_duration_seconds does not take into account the time a request spends in the server filters. If a filter takes long, the latency it incurs will not be reflected in the apiserver latency metrics. For example, the amount of time a request spends in the priority and fairness machinery or in shuffle queues is not accounted for.
- Add a server filter that attaches a request-received timestamp to the request context very early in the handler chain (as soon as net/http hands over control to us).
- Use that received timestamp in the apiserver latency metric apiserver_request_duration_seconds.
- Use that received timestamp in the audit layer to set RequestReceivedTimestamp.
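A minimal sketch of such a filter using plain net/http (the context key and function names are illustrative, not the actual apiserver filter):

```go
package main

import (
	"context"
	"fmt"
	"net/http"
	"time"
)

// receivedTimestampKey is an unexported context key for the received time.
type receivedTimestampKey struct{}

// withRequestReceivedTimestamp stamps the request context as soon as
// net/http hands control to the chain, so latency metrics and the audit
// RequestReceivedTimestamp can include time spent in later filters.
func withRequestReceivedTimestamp(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		ctx := context.WithValue(r.Context(), receivedTimestampKey{}, time.Now())
		next.ServeHTTP(w, r.WithContext(ctx))
	})
}

// requestReceivedTimestampFrom retrieves the stamp for metrics/audit use.
func requestReceivedTimestampFrom(ctx context.Context) (time.Time, bool) {
	t, ok := ctx.Value(receivedTimestampKey{}).(time.Time)
	return t, ok
}

func main() {
	h := withRequestReceivedTimestamp(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if t, ok := requestReceivedTimestampFrom(r.Context()); ok {
			fmt.Fprintf(w, "observed latency so far: %v\n", time.Since(t))
		}
	}))
	_ = h // wire into a server, e.g. http.ListenAndServe(":8080", h)
}
```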
Testing with the default FS (ext4) is IMO enough; ext2/ext3 do not add much value, as they are handled by the same kernel module anyway. Leave ext2/ext3 only in GCE PD, which is tested regularly in kubernetes/kubernetes CI jobs to catch regressions.
drop the managed fields of the objects from the audit entries when we are logging request and response bodies.
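A minimal sketch of what dropping managed fields from a decoded body could look like, assuming a plain map representation rather than the real audit plumbing:

```go
package main

import "fmt"

// dropManagedFields removes metadata.managedFields from a decoded object
// body before it is attached to an audit entry, shrinking the logged payload.
func dropManagedFields(obj map[string]interface{}) {
	meta, ok := obj["metadata"].(map[string]interface{})
	if !ok {
		return
	}
	delete(meta, "managedFields")
}

func main() {
	obj := map[string]interface{}{
		"metadata": map[string]interface{}{
			"name":          "example",
			"managedFields": []interface{}{map[string]interface{}{"manager": "kubectl"}},
		},
	}
	dropManagedFields(obj)
	fmt.Println(obj) // map[metadata:map[name:example]]
}
```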
…n error This should stop leaking Cinder volumes in tests as a side-effect.
force-pushed from 80e50d2 to 512f733
fixed a typo https://github.com/openshift/kubernetes/compare/80e50d20c685504237bdc055f456aec5a1f5311c..512f733fcbf275af939217b1a566b831775ba86e otherwise it's ready
/retest
1 similar comment
/overwrite ci/prow/verify-commits
/override ci/prow/verify-commits
@soltysh: Overrode contexts on behalf of soltysh: ci/prow/verify-commits
@tnozicka: All pull requests linked via external trackers have merged:
Bugzilla bug 1873043 has been moved to the MODIFIED state.
@tnozicka: Some pull requests linked via external trackers have merged:
The following pull requests linked via external trackers have not merged:
These pull requests must merge or be unlinked from the Bugzilla bug in order for it to move to the next state. Bugzilla bug 1873043 has not been moved to the MODIFIED state.
1 similar comment
tracking sheet https://docs.google.com/spreadsheets/d/10KYptJkDB1z8_RYCQVBYDjdTlRfyoXILMa0Fg8tnNlY
TODO:
- openshift/api repository to v1.19.2 (Update to kubernetes v1.19.2 api#761)
- openshift/apiserver-library-go to v1.19.2 (Update to kubernetes v1.19.2 apiserver-library-go#37)
- openshift/client-go to v1.19.2 (Update to kubernetes v1.19.2 client-go#163)
- openshift/library-go to v1.19.2 (Update to kubernetes v1.19.2 library-go#921)
Followups:
/cc @marun @soltysh @sttts