The resourceVersion for the provided list is too old #102160
Comments
Would you run /kind support |
Let me try to gather some more detail. In the meantime, could you share some more details on what this message means and why we are seeing it? |
related: kubernetes/kubectl#965. IIUC, this can happen in a case with a lot of pods and slow etcd response times. /sig cli api-machinery |
Hi @apratina. The resourceVersion field is how Kubernetes keeps track of the persisted version of an object. Periodically (every five minutes by default, via the kube-apiserver --etcd-compaction-interval flag), the API server asks etcd to compact its database, which drops old resourceVersions. You are getting the error because the resourceVersion you asked for has already been compacted away, and the API server returns HTTP status 410 (Gone). A very easy way to reproduce this:
Fetch any object and note its current resourceVersion, then request the list at that version, decreasing the resourceVersion by 100 each time; see the sketch below.
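A rough sketch of those steps through the raw REST API, assuming default compaction settings; the pod name, namespace, and resourceVersion values are placeholders, and resourceVersionMatch=Exact is just one way to force the API server to read etcd at exactly that revision rather than serving something newer from its watch cache:

```shell
# Grab a pod's current resourceVersion (the value below is made up):
kubectl get pod my-pod -o jsonpath='{.metadata.resourceVersion}'
# e.g. 146830

# Ask for the pod list at progressively older versions:
kubectl get --raw "/api/v1/namespaces/default/pods?resourceVersion=146730&resourceVersionMatch=Exact"
kubectl get --raw "/api/v1/namespaces/default/pods?resourceVersion=146630&resourceVersionMatch=Exact"

# Once the requested version predates the most recent etcd compaction,
# the request fails with HTTP 410 Gone:
#   "The resourceVersion for the provided list is too old."
```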
Eventually you will see the error message. Just like @neolit123 suggested, it might be because you have too many pods and slow etcd response times. |
/triage accepted |
I was thinking about #91073, though it is about 'too large RV', not 'too old RV'. |
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale |
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle rotten |
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /close |
@k8s-triage-robot: Closing this issue. |
/reopen |
@kundan2707: Reopened this issue. |
Reported again here. |
From what I could understand after reading @lauchokyip's comment and Efficient detection of changes, the first thought that came to my mind was: why don't we automatically start a new watch from the last returned resourceVersion, or fall back to a fresh get / list request, when a client watch is disconnected? IMHO a new watch from the last returned resourceVersion should be the preferred approach; see the sketch below.
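For what it's worth, a minimal sketch of what resuming from the last returned resourceVersion looks like at the API level; 146830 is a placeholder for the last RV the client actually observed:

```shell
# Resume a watch from the last resourceVersion the client saw:
kubectl get --raw "/api/v1/namespaces/default/pods?watch=1&resourceVersion=146830"

# If that version has already been compacted away, the stream begins with
# an ERROR event (reason "Expired", HTTP code 410) instead of real events,
# and the client must fall back to a fresh list and start a new watch from
# the resourceVersion that list returns.
```

As far as I know, that list-then-watch fallback is what client-go's reflector (and therefore informers) already does; the question here is whether kubectl's one-shot list should retry the same way. |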
We get the following error message when executing the kubectl get pod command. Error:
Here are the details of k8s version:
We would like to understand what this error means.