
potential backport on okd 4.0 assuming K8s 1.12 is being targeted #520

Closed
DanyC97 opened this issue Mar 2, 2019 · 8 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@DanyC97
Contributor

DanyC97 commented Mar 2, 2019

You might already be aware of this issue, but in case you aren't, I thought I should give you a heads-up.

Assuming you are targeting 4.0 with 1.12 (per the latest OpenShift blog post published by @crawford), you should be aware of this issue, which is forcing a golang version bump before 1.14 enters freeze.

There is also a workaround, which I think you can apply in your kubelet config: kubernetes/kubernetes#74755
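For reference, the workaround discussed in kubernetes/kubernetes#74755 amounts to switching the kubelet's config-change detection back to the cache-based strategy rather than the watch-based one. A rough sketch of what that might look like in a KubeletConfiguration file (field name per the kubelet v1beta1 config API; verify against your kubelet version before relying on it):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Revert from the watch-based ConfigMap/Secret monitoring strategy
# to the older cache-based behavior that avoids this problem.
configMapAndSecretChangeDetectionStrategy: Cache
```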

/cc @wking @smarterclayton

Please close this issue once you ack; I couldn't find a better way to notify the right group.

@DanyC97 DanyC97 changed the title upstream 1.12 slow down behaviour - fyi potential backport on okd 4.0 assuming K8s 1.12 is being targeted Mar 2, 2019
@crawford
Contributor

crawford commented Mar 3, 2019

Thanks for the heads-up! I'm going to let @runcom close this to make sure he sees it.

@runcom
Member

runcom commented Mar 4, 2019

Is there anything we (the MCO) should do here? (I'm probably missing something obvious.) Is this a heads-up that we're just going to upgrade golang and make sure everything still works fine here?

@RobertKrawitz
Contributor

For now we're simply going to cherry-pick the point fix: revert the use of the watch-based monitoring strategy in favor of the previous caching code, which doesn't have this problem (I have a cherry-pick PR ready to go as soon as kubernetes/kubernetes#74842 merges). @rphillips and I were discussing whether the MCO should block any attempt by the user to change the default; that's what this issue is about.

@rphillips
Contributor

I added a PR to prevent the KubeletConfiguration variable from being changed by the user.
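The guard described above (rejecting user overrides of the change-detection strategy) could be sketched roughly as follows. This is a simplified, hypothetical illustration, not the actual MCO code: the struct and `validateKubeletConfig` function are stand-ins (the real type lives in k8s.io/kubelet/config/v1beta1, and the real validation is wired into the MCO's kubelet-config controller).

```go
package main

import (
	"errors"
	"fmt"
)

// KubeletConfiguration is a hypothetical, minimal stand-in holding only
// the field relevant to this issue.
type KubeletConfiguration struct {
	ConfigMapAndSecretChangeDetectionStrategy string
}

// validateKubeletConfig rejects any user override of the change-detection
// strategy other than the cache-based default, mirroring the kind of guard
// discussed in this thread.
func validateKubeletConfig(cfg KubeletConfiguration) error {
	s := cfg.ConfigMapAndSecretChangeDetectionStrategy
	if s != "" && s != "Cache" {
		return errors.New("configMapAndSecretChangeDetectionStrategy may not be changed by the user")
	}
	return nil
}

func main() {
	// An attempt to re-enable the watch-based strategy is rejected.
	fmt.Println(validateKubeletConfig(KubeletConfiguration{ConfigMapAndSecretChangeDetectionStrategy: "Watch"}))
	// Leaving the field unset is allowed.
	fmt.Println(validateKubeletConfig(KubeletConfiguration{}))
}
```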

@runcom
Member

runcom commented Mar 5, 2019

Awesome, thanks for the clarification

@openshift-bot
Contributor

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

@openshift-ci-robot openshift-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 3, 2020
@openshift-bot
Contributor

Stale issues rot after 30d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle rotten
/remove-lifecycle stale

@openshift-ci-robot openshift-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Oct 3, 2020
@kikisdeliveryservice
Contributor

Since this has been addressed, closing.
