
Services not removed when pods are restarted. #712

@jgaa

Description

I tried to install ArangoDB in production mode on a local bare-metal k8s cluster with the latest version of everything.

kubectl get nodes
NAME       STATUS                        ROLES                  AGE    VERSION
k8master   Ready                         control-plane,master   231d   v1.20.5
k8n0       Ready                         <none>                 231d   v1.20.2
k8n1       Ready                         <none>                 231d   v1.20.2
k8vmn0     Ready                         <none>                 231d   v1.20.5
k8vmn1     Ready                         <none>                 16d    v1.20.5
k8vmn2     Ready                         <none>                 15d    v1.20.5
k8vmn3     Ready                         <none>                 15d    v1.20.5

Whatever the disk provisioning operator is supposed to do, it did not do it, and the agencies and DB servers were not started. The coordinators seemed to time out and be restarted after a while. The services assigned to the removed pods were not deleted by the operator(s), but the new pods each got new services, so after a while there were a lot of services...

kubectl -n arango get services | wc -l
57
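
For reference, one way to spot the leftover services: a Service whose selector no longer matches any pod has an Endpoints object of the same name with no subsets. This is a hedged workaround sketch, not the operator's own cleanup logic, and the list should be reviewed before deleting anything, since a service whose pods are merely not ready yet also shows empty endpoints.

# List services in the arango namespace whose Endpoints have no subsets,
# i.e. whose selector currently matches no pod.
kubectl -n arango get endpoints -o go-template='{{range .items}}{{if not .subsets}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}'

# After reviewing the output, each orphan can be removed by hand:
# kubectl -n arango delete service <name>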
