What happened: GetVolumeLimits on Azure Disk logs too verbosely, which floods the kubelet logs; the message repeats every 10 seconds:
Dec 12 08:02:00 k8s-agentpool-36010883-vmss000001 kubelet[4008]: I1212 08:02:00.046830 4008 azure_dd.go:190] got a matching size in getMaxDataDiskCount, Name: Standard_DS2_v2, MaxDataDiskCount: 8
Dec 12 08:02:10 k8s-agentpool-36010883-vmss000001 kubelet[4008]: I1212 08:02:10.062522 4008 azure_dd.go:190] got a matching size in getMaxDataDiskCount, Name: Standard_DS2_v2, MaxDataDiskCount: 8
Dec 12 08:02:20 k8s-agentpool-36010883-vmss000001 kubelet[4008]: I1212 08:02:20.075571 4008 azure_dd.go:190] got a matching size in getMaxDataDiskCount, Name: Standard_DS2_v2, MaxDataDiskCount: 8
Dec 12 08:02:30 k8s-agentpool-36010883-vmss000001 kubelet[4008]: I1212 08:02:30.089377 4008 azure_dd.go:190] got a matching size in getMaxDataDiskCount, Name: Standard_DS2_v2, MaxDataDiskCount: 8
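One plausible mitigation is to gate the per-call log line behind a higher verbosity level, so it only appears when the kubelet runs with an elevated `-v` flag. The sketch below is hypothetical, not the actual azure_dd.go code: the `vmSizeToMaxDisks` table and the `verbosity` constant stand in for the real Azure VM size lookup and klog's verbosity handling.

```go
package main

import "fmt"

// verbosity stands in for klog's -v flag; 2 is a typical kubelet default.
const verbosity = 2

// vmSizeToMaxDisks is a hypothetical lookup table standing in for the
// Azure VM size list that getMaxDataDiskCount searches.
var vmSizeToMaxDisks = map[string]int64{
	"Standard_DS2_v2": 8,
}

// getMaxDataDiskCount returns the data-disk limit for a VM size. The
// "got a matching size" message is gated behind verbosity level 6, so
// polling GetVolumeLimits every 10 seconds no longer floods the log at
// the default verbosity.
func getMaxDataDiskCount(instanceType string) int64 {
	count, ok := vmSizeToMaxDisks[instanceType]
	if !ok {
		return 0
	}
	if verbosity >= 6 {
		fmt.Printf("got a matching size in getMaxDataDiskCount, Name: %s, MaxDataDiskCount: %d\n",
			instanceType, count)
	}
	return count
}

func main() {
	fmt.Println(getMaxDataDiskCount("Standard_DS2_v2"))
}
```

With `verbosity` at the default of 2, the lookup still returns the limit but emits nothing, which matches the expectation that routine 10-second polls stay quiet.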
What you expected to happen:
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
Environment:
- Kubernetes version (use `kubectl version`):
- Kernel (e.g. `uname -a`):
/kind bug
/assign
/sig azure
andyzhangx