initial heapster redeploy leaves ghosted replicasets #241
Comments
That's certainly interesting! It doesn't appear that the "ghosted heapster" replication controllers are listed in the CLI output. Can you try to kubectl describe one of the RCs that have no pods? I suspect what's happened here is that it rescheduled the addon service, and there is still an RC definition that can't be fulfilled because the pods are already running, occupying whatever bindings the other RCs are expecting, so the two of them are sitting in a not-error state but they aren't being helpful either. What's confusing to me is the presence of the RC in the dashboard but not in the CLI output. Highly suspect...
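Something along these lines should show whether those controllers really exist server-side (the namespace and the example name below are only placeholders, since the actual names aren't shown in this thread; heapster usually lands in kube-system, and depending on how it was deployed it may be a ReplicaSet rather than a ReplicationController):

```
# list heapster-related controllers in kube-system and spot any with 0 pods
kubectl get rc --namespace=kube-system
kubectl get rs --namespace=kube-system

# describe one of the empty ones to see its selector and events
# (the name here is a placeholder; substitute whatever the get output shows)
kubectl describe rs heapster-some-suffix --namespace=kube-system
```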
cm-graham commented Mar 23, 2017
Looks like it is definitely an issue with the dashboard and not the CLI:
cm-graham commented Mar 23, 2017
Spoke too soon:
cm-graham commented Mar 23, 2017
But you can't describe them.
It's Schrödinger's cat!
cm-graham commented Apr 5, 2017
OK, I believe this one can be closed. While testing a couple of in-house built deployments, I noticed they also have older replicasets that hang around. It appears that K8s keeps the last 3 revisions of a replicaset, and looking at the deployment.kubernetes.io/revision annotation, it looks like heapster is "deployed" 3 times during the initial CDK install. Keeping old revisions does make sense from a rollback perspective. This is a configurable setting in the deployment spec, revisionHistoryLimit, which defaults to 2 plus the current revision. Apparently at one point last year the default was to keep all replicasets.
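In case it's useful to anyone else who hits this, something like the following should show which revision each replicaset belongs to and what history limit the deployment is using (the deployment name and namespace are guesses based on a stock install, so adjust them to match what kubectl get deployments shows):

```
# show the old and current heapster replicasets and which revision each belongs to
kubectl get rs --namespace=kube-system
kubectl get rs --namespace=kube-system -o yaml | grep deployment.kubernetes.io/revision

# check how many old replicasets the deployment is configured to keep
kubectl get deployment heapster --namespace=kube-system -o yaml | grep revisionHistoryLimit

# optionally keep fewer old replicasets around
kubectl patch deployment heapster --namespace=kube-system -p '{"spec":{"revisionHistoryLimit":1}}'
```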
chuckbutler added the kind/question label Apr 21, 2017
Thanks for the feedback loop here cm-graham. Good investigative work. I'm going to close this for now. If it continues to be problematic for you, don't hesitate to reply to the bug and we'll re-open and evaluate for a fix.
cm-graham commented Mar 23, 2017 (edited once by marcoceppi, Mar 30, 2017)
After a clean cluster install, I came across what, for lack of a better term, look like 2 ghosted heapster replicasets that weren't running any pods. Here are screenshots from the dashboard when I first noticed:
(It is worth noting that there are no old replicasets listed on the deployment page either. The only one mentioned is noted in the screenshot.)
Output from CLI: