v1.10 known issues / FAQ accumulator #59764

Closed
krzyzacy opened this Issue Feb 12, 2018 · 27 comments

Comments

@krzyzacy
Member

krzyzacy commented Feb 12, 2018

Please link to this issue or comment on this issue with known errata for the 1.10.x release. This follows what we have done for prior releases.

We will populate the "Known Issues" section of the 1.10.0 and later release notes based on this issue.

cc @jdumars @calebamiles @jberkus
cc @kubernetes/kubernetes-release-managers

@krzyzacy

Member

krzyzacy commented Feb 12, 2018

/sig release

@jdumars

Member

jdumars commented Feb 13, 2018

Is this part of the release process documented somewhere?

@krzyzacy

Member

krzyzacy commented Feb 13, 2018

cc @spiffxp
I was referencing #53004 and made one for 1.10

@jberkus

jberkus commented Feb 13, 2018

There are two CRI changes that break backwards compatibility for some users:

#59475

#58973

@jberkus

jberkus commented Feb 19, 2018

adding kind

/kind design

@jberkus

jberkus commented Mar 2, 2018

Adjusting labels to keep this tracker in the milestone.

/kind cleanup
/remove-kind design
/priority critical-urgent
/remove-priority important-soon

@krzyzacy

Member

krzyzacy commented Mar 4, 2018

(It seems this issue is not that useful and isn't documented as part of the release process, so I'll just close it. Feel free to reopen if necessary.)

@jberkus

jberkus commented Mar 13, 2018

#60764
Known issue: extra steps are required when downgrading with PVCs.

@Bradamant3

Member

Bradamant3 commented Mar 14, 2018

Clarification, please? Do y'all want a relnote for #60933?

@Bradamant3

Member

Bradamant3 commented Mar 14, 2018

Add #60764 plus related doc kubernetes/website#7731

@jberkus

jberkus commented Mar 14, 2018

Mount propagation manual steps: #61126

@jberkus

jberkus commented Mar 14, 2018

"Clarification, please? Do y'all want a relnote for #60933?"

My opinion: yes.

@jdumars

Member

jdumars commented Mar 19, 2018

ACK. In progress
ETA: 29/03/2018
Risks: The bot nagging will drive us slowly mad

@Bradamant3

Member

Bradamant3 commented Mar 20, 2018

@nickchase, in case it's helpful here: I think you don't need to worry about #61126 because the relnote content is in the PR (such as it is ...). BUT it might not end up in the right place in the generated relnotes. It's an action-required item.

@jberkus

jberkus commented Mar 21, 2018

We need to document downgrading and PVC protection. TL;DR: if you have PVCs and need to downgrade, you must downgrade to 1.9.6 (which will be released next Wednesday), not to an earlier 1.9 version.

@Bradamant3

Member

Bradamant3 commented Mar 21, 2018

Yup. @nickchase see also comment from liggitt in kubernetes/website#7731

@Bradamant3

Member

Bradamant3 commented Mar 22, 2018

#61446
#61456

@nickchase, let's coordinate with @msau42 about relnotes/docs. There's a Slack discussion as well.

@msau42

Member

msau42 commented Mar 22, 2018

We need to add those two known issues to the 1.9, 1.8, and 1.7 patch releases as well.

@msau42

Member

msau42 commented Mar 22, 2018

The workaround for #61446 and #61456 is not to use subPath with hostPath volumes; instead, specify the whole path directly in the hostPath volume.
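
For illustration, here's a minimal sketch of what that workaround looks like in a pod spec. The pod name, image, and paths below are hypothetical and not taken from the linked issues; the point is simply to drop subPath and put the full directory in the hostPath volume itself.

```yaml
# Hypothetical sketch of the workaround for #61446 / #61456.
#
# Affected pattern: a hostPath volume combined with subPath, e.g.
#   volumeMounts:
#   - name: host-vol
#     mountPath: /data
#     subPath: myapp          # subPath + hostPath is what triggers the issues
#
# Workaround: specify the whole path in the hostPath volume instead.
apiVersion: v1
kind: Pod
metadata:
  name: subpath-workaround    # hypothetical name
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: host-vol
      mountPath: /data        # mount the volume directly; no subPath
  volumes:
  - name: host-vol
    hostPath:
      path: /var/log/myapp    # full host path here instead of via subPath
```

The same substitution applies to pod templates in Deployments, DaemonSets, and other workloads that combine hostPath with subPath.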

@nickchase

Contributor

nickchase commented Mar 26, 2018

Adding:

  • Use of the subPath option with hostPath volumes can cause issues during reconstruction (#61446) and with containerized kubelets (#61456). The workaround for this issue is to specify the complete path in the hostPath volume.

@nickchase

Contributor

nickchase commented Mar 26, 2018

  • Flaky timeouts while waiting for RC pods to be running in density test -- Appears to be fixed?
  • Controller-manager sees higher memory usage when the load test runs before density -- Does this need a relnote?
  • HostPath mounts failing with "Path is not a shared or slave mount" -- Resolved with an addition to the "Before Upgrading" section
  • kubeadm: etcd certs missing in self-hosted deploy (will be fixed in a point release) -- Added

@nickchase

Contributor

nickchase commented Mar 26, 2018

Added:

* In large clusters (~2K nodes), scheduling logs can explode and crash the master node. (#60933)

@jberkus

jberkus commented Mar 26, 2018

@nickchase

Yes on Controller-manager. Suggested text:

"Some users, especially those with very large clusters, may see higher memory usage by the kube-controller-manager in 1.10."

@k8s-merge-robot

Contributor

k8s-merge-robot commented Apr 11, 2018

[MILESTONENOTIFIER] Milestone Issue: Up-to-date for process

@krzyzacy

Issue Labels
  • sig/release: Issue will be escalated to these SIGs if needed.
  • priority/critical-urgent: Never automatically move issue out of a release milestone; continually escalate to contributor and SIG through all available channels.
  • kind/cleanup: Adding tests, refactoring, fixing old bugs.
@fejta-bot

fejta-bot commented Jul 10, 2018

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@jberkus

jberkus commented Jul 11, 2018

/close
