
Duplicate ReplicaSet under the managed Deployment in the cluster #77858

Open
w564791 opened this issue May 14, 2019 · 2 comments

@w564791 commented May 14, 2019

What happened:
A duplicate ReplicaSet exists under the managed Deployment in the cluster: two ReplicaSets each run one pod, even though the Deployment asks for only one replica.
What you expected to happen:
Only one active ReplicaSet under the Deployment. Is this related to kube-scheduler?
How to reproduce it (as minimally and precisely as possible):
I cannot reproduce it with a fixed set of steps.
Anything else we need to know?:

Environment:

  • Kubernetes version (1.11.5 and 1.13.4):
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.5", GitCommit:"753b2dbc622f5cc417845f0ff8a77f539a4213ea", GitTreeState:"clean", BuildDate:"2018-11-26T14:41:50Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.5", GitCommit:"753b2dbc622f5cc417845f0ff8a77f539a4213ea", GitTreeState:"clean", BuildDate:"2018-11-26T14:31:35Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}

  • OS (e.g: cat /etc/os-release):
NAME="Ubuntu"
VERSION="16.04.4 LTS (Xenial Xerus)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 16.04.4 LTS"
VERSION_ID="16.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
VERSION_CODENAME=xenial
UBUNTU_CODENAME=xenial

  • Kernel (e.g. uname -a):
Linux 128 4.4.0-116-generic #140-Ubuntu SMP Mon Feb 12 21:23:04 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
  • Network plugin and version (if this is a network-related bug):
    calico

The details:

There are two ReplicaSets each running one pod, even though the Deployment expects only one replica:

[root@dev-k8s-master:/var/log/k8s]# kubectl get rs -l run=memcached
NAME                   DESIRED   CURRENT   READY     AGE
memcached-59c89465c7   1         1         1         6d
memcached-67db9f5698   0         0         0         6d
memcached-9597d879     1         1         1         6d
[root@dev-k8s-master:/var/log/k8s]# kubectl get deploy memcached
NAME        DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
memcached   1         1         1            1           6d
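
Note that the query above lists ReplicaSets by label only, regardless of which controller owns them. To see which of the three ReplicaSets the Deployment actually owns, the ownerReferences field can be printed next to each name (a sketch using standard kubectl output options; custom-columns shows <none> where the field is absent):

# kubectl get rs -l run=memcached -o custom-columns=NAME:.metadata.name,OWNER:.metadata.ownerReferences[0].name,DESIRED:.spec.replicas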


# kubectl get rs memcached-9597d879 -o yaml
apiVersion: extensions/v1beta1
kind: ReplicaSet
metadata:
  annotations:
    deployment.kubernetes.io/desired-replicas: "1"
    deployment.kubernetes.io/max-replicas: "2"
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: 2019-05-08T08:54:25Z
  generation: 1
  labels:
    app: memcached
    pod-template-hash: "51538435"
    run: memcached
  name: memcached-9597d879
  namespace:  tianbao
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: Deployment
    name: memcached
    uid: 84944c7c-716e-11e9-a976-00163e2e35e9
  resourceVersion: "57594182"
  selfLink: /apis/extensions/v1beta1/namespaces/tianbao/replicasets/memcached-9597d879
  uid: e164be6b-716e-11e9-a976-00163e2e35e9
spec:
  replicas: 1
...
# kubectl get rs memcached-59c89465c7 -o yaml
apiVersion: extensions/v1beta1
kind: ReplicaSet
metadata:
  annotations:
    deployment.kubernetes.io/desired-replicas: "1"
    deployment.kubernetes.io/max-replicas: "2"
    deployment.kubernetes.io/revision: "2"
  creationTimestamp: 2019-05-08T08:52:58Z
  generation: 1
  labels:
    pod-template-hash: "1574502173"
    run: memcached
  name: memcached-59c89465c7
  namespace: tianbao
  resourceVersion: "56692898"
  selfLink: /apis/extensions/v1beta1/namespaces/tianbao/replicasets/memcached-59c89465c7
  uid: add065d4-716e-11e9-a976-00163e2e35e9
spec:
  replicas: 1
...
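
The visible difference between the two objects is that memcached-9597d879 carries both the app: memcached label and an ownerReferences entry pointing at the memcached Deployment, while memcached-59c89465c7 has neither, which suggests the Deployment no longer manages it even though it still runs a pod. A quick way to compare the Deployment's selector against the ReplicaSet labels (a sketch using standard kubectl flags):

# kubectl get deploy memcached -o jsonpath='{.spec.selector.matchLabels}'
# kubectl get rs -l run=memcached --show-labels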
@neolit123 (Member) commented May 14, 2019

/sig apps

@k8s-ci-robot added sig/apps and removed needs-sig labels May 14, 2019

@zq-david-wang commented May 15, 2019

You seem to have added a new selector ("app=") to your Deployment after it was created, which cut off the relationship between the Deployment and its existing ReplicaSets.
I am not sure what the expected behaviour should be in this situation, but I think the current behaviour is acceptable.
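
For reference, a minimal sketch of the scenario described above (hypothetical commands; widening the selector after creation is only accepted by the older extensions/v1beta1 / apps/v1beta1 Deployment APIs used here, since apps/v1 treats .spec.selector as immutable):

# create a Deployment labelled run=memcached (kubectl run still created a Deployment on these versions)
kubectl run memcached --image=memcached --replicas=1

# one ReplicaSet, owned by the Deployment
kubectl get rs --show-labels

# add "app: memcached" to both .spec.selector.matchLabels and .spec.template.metadata.labels
kubectl edit deploy memcached

# the old ReplicaSet no longer matches the new selector, so its ownerReference is released
# and it keeps running its pod, while a new ReplicaSet with the extra label is rolled out
kubectl get rs --show-labels

After the edit, the listing should resemble the state reported in this issue: the released ReplicaSet keeps its pod but has no ownerReferences, while the newly created one is owned by the Deployment.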
