fix vmss dirty cache issue #85158
Conversation
Note that this was already changed between 1.15 and 1.16.
Oh, never mind, I see it was changed in 1.15.5.
staging/src/k8s.io/legacy-cloud-providers/azure/azure_controller_vmss.go
add logging: 9cef486 to 01ea169
/lgtm
/approve
/approve
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: andyzhangx, feiskyer, khenidak. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
…5158-upstream-release-1.16 Automated cherry pick of #85158: fix vmss dirty cache issue
…5158-upstream-release-1.15 Automated cherry pick of #85158: fix vmss dirty cache issue
…5158-upstream-release-1.14 Automated cherry pick of #85158: fix vmss dirty cache issue
What type of PR is this?
/kind bug
What this PR does / why we need it:
fix vmss dirty cache issue
Cleaning the VMSS cache should happen after the disk attach/detach operation; currently it happens before those operations, which leads to a dirty cache.
Since the update operation may take 30s or more, any get-VMSS operation issued during that window would read back the old data disk list and repopulate the cache with it.
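The reordering described above can be sketched as follows. This is a minimal, self-contained illustration, not the actual `legacy-cloud-providers/azure` code: `vmssCache`, `attachDisk`, and `armUpdate` are hypothetical names standing in for the provider's cache and the long-running ARM update call. The key point is that the cache entry is invalidated only after the update succeeds, so a concurrent reader can no longer repopulate the cache from a pre-update disk list.

```go
package main

import (
	"fmt"
	"sync"
)

// vmssCache is a hypothetical stand-in for the Azure provider's VMSS cache.
type vmssCache struct {
	mu    sync.Mutex
	disks map[string][]string // node name -> attached data disks
}

// get returns a copy of the cached disk list for a node.
func (c *vmssCache) get(node string) []string {
	c.mu.Lock()
	defer c.mu.Unlock()
	return append([]string(nil), c.disks[node]...)
}

// delete invalidates the cache entry for a node.
func (c *vmssCache) delete(node string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	delete(c.disks, node)
}

// attachDisk sketches the corrected ordering: run the (potentially
// 30s-long) VMSS update first, then invalidate the cache entry.
// Invalidating before the update would let a concurrent get refill the
// cache with the stale disk list while the update is still in flight.
func attachDisk(cache *vmssCache, armUpdate func() error, node, disk string) error {
	if err := armUpdate(); err != nil { // long-running ARM call
		return err
	}
	cache.delete(node) // invalidate AFTER the update, not before
	return nil
}

func main() {
	cache := &vmssCache{disks: map[string][]string{"vmss-0": {"disk-a"}}}
	update := func() error { return nil } // stands in for the real VMSS PUT
	if err := attachDisk(cache, update, "vmss-0", "disk-b"); err != nil {
		fmt.Println("attach failed:", err)
		return
	}
	fmt.Println(len(cache.get("vmss-0"))) // entry invalidated: prints 0
}
```

With the old ordering, `cache.delete` ran before `armUpdate`, so any `get` issued during the update window would re-cache the pre-update disk list and keep serving it after the attach completed.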
Which issue(s) this PR fixes:
Fixes #85159
Special notes for your reviewer:
This dirty cache issue causes a variety of strange disk problems. It is present in k8s v1.13.12, v1.13.13, v1.14.8, v1.14.9, v1.15.5, v1.15.6, v1.16.2, and v1.16.3 on VMSS clusters that use data disks.
Does this PR introduce a user-facing change?:
Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.:
/kind bug
/assign @feiskyer
/priority important-soon
/sig cloud-provider
/area provider/azure