Bump to kubernetes-v1.18.0-beta.2 #116
Conversation
…jq 'select(.Version!="v0.0.0")') > Godeps/Godeps.json
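The truncated pipeline above relies on jq's `select` to drop modules pinned to the placeholder version `v0.0.0` before writing `Godeps/Godeps.json`. A minimal, self-contained sketch of just that filter (the sample module records are made up for illustration):

```shell
# Feed two fake module records through the same jq filter as above;
# the record pinned to the placeholder version v0.0.0 is dropped.
printf '%s\n' \
  '{"Path":"k8s.io/api","Version":"v0.18.0-beta.2"}' \
  '{"Path":"k8s.io/kubernetes","Version":"v0.0.0"}' |
  jq -c 'select(.Version!="v0.0.0")'
# → {"Path":"k8s.io/api","Version":"v0.18.0-beta.2"}
```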
The tests need to be rewritten to be safe for concurrent use and to work in contended environments. This disables the worst offenders and fixes reuse issues around the tests here.
Origin-commit: b6281a54c84f20c2f0d35d6a44881e83b2e75227
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull request has been approved by: marun. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
A few comments for now:
together, it's all about exposing that journald through kubelet, so it's sufficient to have a single commit.
Done
Done. Are there any more kcm carries that should be squashed? i.e.
(force-pushed from e420c8c to 6992c9a)
Unit tests are passing locally. Commits past 7a4ff00 are new fixes rather than picks. Is …
staging/src/k8s.io/apiextensions-apiserver/pkg/apiserver/apiserver.go (outdated, resolved)
staging/src/k8s.io/legacy-cloud-providers/openstack/metadata.go (outdated, resolved)
staging/src/k8s.io/code-generator/cmd/client-gen/generators/client_generator.go (outdated, resolved)
staging/src/k8s.io/apiserver/pkg/admission/plugin/namespace/lifecycle/patch.go (outdated, resolved)
{Group: "authorization.openshift.io", Resource: "clusterroles"}:          {},
{Group: "authorization.openshift.io", Resource: "clusterrolebindings"}:   {},
{Group: "apiregistration.k8s.io", Resource: "apiservices"}:               {},
{Group: "apiextensions.k8s.io", Resource: "customresourcedefinitions"}:   {},
@deads2k why do we exclude apiservers and CRDs?
> @deads2k why do we exclude apiservers and CRDs?
I have no memory. They are cluster-scoped and not covered by quota. Should be removable.
Per a Slack conversation, we can drop cluster-scoped resources from this list.
But trimming the file is deferred until after this lands so as not to delay things further.
// Get volume
volume, err := os.getVolume(pv.Spec.Cinder.VolumeID)
// Get metadata
md, err := getMetadata(os.metadataOpts.SearchOrder)
@Fedosin why is this a carry and not upstreamed?
Given that @Fedosin appears to be on vacation, I think this concern can be deferred for now. I've prompted him on slack so hopefully he answers when he returns.
Hi! I did this because there was a requirement to read cloud credentials from a secret. The upstream version doesn't support that, as you know.
As I remember, there was also an issue with kube-apiserver getting volume info, so I implemented this workaround.
I believe we can revert it now and go back to the upstream implementation.
openshift-kube-apiserver/admission/network/externalipranger/externalip_admission_test.go (outdated, resolved)
… new kubelet delay in pod deletion
Origin-commit: a1d5a1201d5fb4616187cd009f126f5fe7fa0787
Origin-commit: beac12d815b4099cfd4f4d953da4b8789054be51
…ofile Origin-commit: 84ba7fc304870a30df7136da14bccb4d5232f075
Origin-commit: 4498bb4de03ff3a910fed10bed337ba2fcdf321d
…en checking custom columns (kubectl)
This line is not necessary for our test usage and should not be an issue in OpenShift (openshift-tests already verifies this correctly).
…y running test OpenShift uses these functions before any test is run and they cause an NPE
Origin-commit: 131dbb4770bb3bed0c07d2a6ca0cbe4cba2556bb
This line makes the upgrade log output unreadable and provides no value during the set of tests it's used in:

```
Jan 12 20:49:25.628: INFO: cluster upgrade is Progressing: Working towards registry.svc.ci.openshift.org/ci-op-jbtg7jjb/release@sha256:144e73d125cce620bdf099be9a85225ade489a95622a70075d264ea3ff79219c: downloading update
Jan 12 20:49:26.692: INFO: Poke("http://a74e3476115ce4d2d817a1e5ea608dad-802917831.us-east-1.elb.amazonaws.com:80/echo?msg=hello"): success
Jan 12 20:49:28.727: INFO: Poke("http://a74e3476115ce4d2d817a1e5ea608dad-802917831.us-east-1.elb.amazonaws.com:80/echo?msg=hello"): success
```

Origin-commit: 1cdf04c0e15b79fad3e3a6ba896ed2bb3df42b78
…o function Origin-commit: 0d7fb2d769d631054ec9ac0721aee623c96c1001
Origin-commit: cb0b340d0e68c9524fa7fd6277f571b6aa68bf86
The following packages have tests that exceed the default 120s timeout:
- k8s.io/kubernetes/pkg/kubelet/volumemanager/reconciler - the tests in this package collectively take longer than 120s.
- k8s.io/kubernetes/pkg/volume/csi - one of the unit tests has to wait 2 minutes for a timeout to validate its failure condition.
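The 120s figure is the per-package budget this CI passes to `go test`, not anything inside the packages themselves. A scaled-down shell sketch of the failure mode, with a 1s budget standing in for 120s and GNU coreutils `timeout` standing in for the test runner:

```shell
# A "test" that must wait out a 2s internal timeout can never finish
# inside a 1s overall budget; GNU timeout reports the kill with exit
# status 124, much as go test panics when its -timeout elapses.
timeout 1 sleep 2
echo "exit=$?"
# → exit=124
```

The usual remedy is to raise the budget for just the slow packages, e.g. `go test -timeout 300s k8s.io/kubernetes/pkg/volume/csi` (the 300s value here is a hypothetical choice, not taken from this PR).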
(force-pushed from a324756 to 2dceee3)
Updated to remove …
I'd like to help review this, but as someone who hasn't done a rebase in literally years, I don't really know where to begin. How does one evaluate whether the proposed tree is in sync with the upstream tag? What procedure would you recommend to someone looking at this "from the outside" to be of any assistance? Apologies if there's some guide out there I'm missing. docs/rebase.md didn't really help.
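One way to approach "is the tree in sync with the upstream tag?" is to bound the review to the commits past the tag: everything reachable from HEAD but not from the tag is, by construction, what the rebase carries. A toy-repo sketch of the idea (the repo, tag, and commit messages below are all made up; against the real branch you would substitute the actual upstream tag):

```shell
set -e
# Build a tiny repo: one "upstream" commit, a tag, and one carry on top.
dir=$(mktemp -d) && cd "$dir"
git init -q
git -c user.name=ci -c user.email=ci@example.com \
    commit -q --allow-empty -m "upstream work"
git tag kubernetes-v1.18.0-beta.2
git -c user.name=ci -c user.email=ci@example.com \
    commit -q --allow-empty -m "UPSTREAM: <carry>: openshift patch"

# Everything the branch adds on top of the upstream tag:
git log --oneline kubernetes-v1.18.0-beta.2..HEAD
```

On a real rebase branch, `git diff <tag>..HEAD --stat` then shows exactly which files the carries touch, which limits the review to the delta rather than the whole tree.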
@marun: The following test failed, say `/retest` to rerun all failed tests:
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
Closing in favor of #118, which picks the same commits against rc.1.
TODO:
https://docs.google.com/spreadsheets/d/10KYptJkDB1z8_RYCQVBYDjdTlRfyoXILMa0Fg8tnNlY/edit?ts=5e67286f#gid=646747504