Scheduler should use pod UID instead of "namespace/name" in its cache #60966
Comments
@bsalamat I am wondering whether the pod UID is guaranteed to be non-empty.
@dixudx Yes, UID must be non-empty in real clusters, and if it is not, that is a bug in the API server: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids That said, most of our scheduler tests do not set the UID of their pods, so this proposed change requires updating those tests to ensure that UID is populated.
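A minimal sketch of what that test update implies, assuming the cache is keyed by UID. The `makeTestPod` helper is hypothetical, not an existing scheduler test utility:

```go
package cache_test

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
)

// makeTestPod builds a pod fixture with its UID populated. Real API
// servers always set the UID, but hand-built test objects do not, so
// a UID-keyed cache would fail lookups for them unless tests set it.
func makeTestPod(namespace, name, uid string) *v1.Pod {
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Namespace: namespace,
			Name:      name,
			UID:       types.UID(uid), // must be non-empty for UID-keyed caching
		},
	}
}
```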
Using UID sounds reasonable to me!
/assign
UID uniquely identifies a pod across lifecycles, while the same namespace/name could refer to two different pods across lifecycles. This could result in tricky scheduler bugs. Fixes kubernetes#60966
The key that the scheduler uses to store a pod in its "assume" cache is the pod's "namespace/name". As I explained in #56682 (comment), this can cause problems if a pod that is being scheduled is deleted and a replacement pod with the same name and namespace is created: the cache would mistake the replacement for the pod it had assumed. To avoid this, the scheduler should use the pod UID instead of "namespace/name" as the key in its "assume" cache (see the sketch below).
Is this a BUG REPORT or FEATURE REQUEST?:
/kind bug
/sig scheduling
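A minimal sketch of the proposed key change, assuming a key helper along these lines; this illustrates the idea rather than reproducing the scheduler's actual source:

```go
package cache

import (
	"errors"

	v1 "k8s.io/api/core/v1"
)

// Old behavior (for contrast): "namespace/name", which a
// deleted-and-recreated pod can collide with:
//   func getPodKey(pod *v1.Pod) string { return pod.Namespace + "/" + pod.Name }

// getPodKey returns the cache key for a pod based on its UID, which is
// unique across pod lifecycles, so a replacement pod can never be
// mistaken for the pod that was assumed.
func getPodKey(pod *v1.Pod) (string, error) {
	if len(pod.UID) == 0 {
		// An empty UID indicates an API server bug or an unpopulated
		// test fixture; refuse to cache rather than risk a collision.
		return "", errors.New("cannot get cache key of pod with empty UID")
	}
	return string(pod.UID), nil
}
```

Returning an error for an empty UID (rather than falling back to "namespace/name") keeps the failure loud, which matches the point above that an empty UID in a real cluster is an API server bug.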