
[BUG] Annotation field in zookeeper headless service does not update #474

Closed
hoyhbx opened this issue May 10, 2022 · 0 comments · Fixed by #492
hoyhbx commented May 10, 2022

What did you do?

We tried to update the field spec.domainName, which should update the external-dns.alpha.kubernetes.io/hostname annotation on the zookeeper-headless service. However, we found that spec.domainName only takes effect when the zookeeper-headless service is initially created; it does not update the external-dns.alpha.kubernetes.io/hostname annotation if the zookeeper-headless service already exists.

Steps to reproduce

  1. Install zookeeper-operator helm chart

  2. Deploy a basic zookeeper cluster using cr1.yaml
    kubectl apply -f cr1.yaml

    cr1.yaml
    apiVersion: zookeeper.pravega.io/v1beta1
    kind: ZookeeperCluster
    metadata:
      name: zookeeper
    spec:
      replicas: 3
  3. Assign some random domain name to the field spec.domainName by applying cr2.yaml
    kubectl apply -f cr2.yaml

    cr2.yaml
    apiVersion: zookeeper.pravega.io/v1beta1
    kind: ZookeeperCluster
    metadata:
      name: zookeeper
    spec:
      domainName: hgvkbmxwtg
      replicas: 3
  4. Observe that zookeeper-headless service is not updated with the expected annotation external-dns.alpha.kubernetes.io/hostname
    kubectl describe service zookeeper-headless

    Observed zookeeper-headless
      Name:              zookeeper-headless
      Namespace:         default
      Labels:            app=zookeeper
                         headless=true
                         owner-rv=1247
                         release=zookeeper
      Annotations:       <none>
      Selector:          app=zookeeper
      Type:              ClusterIP
      IP Family Policy:  SingleStack
      IP Families:       IPv4
      IP:                None
      IPs:               None
      Port:              tcp-client  2181/TCP
      TargetPort:        2181/TCP
      Endpoints:         172.17.0.3:2181,172.17.0.5:2181,172.17.0.6:2181
      Port:              tcp-quorum  2888/TCP
      TargetPort:        2888/TCP
      Endpoints:         172.17.0.3:2888,172.17.0.5:2888,172.17.0.6:2888
      Port:              tcp-leader-election  3888/TCP
      TargetPort:        3888/TCP
      Endpoints:         172.17.0.3:3888,172.17.0.5:3888,172.17.0.6:3888
      Port:              tcp-metrics  7000/TCP
      TargetPort:        7000/TCP
      Endpoints:         172.17.0.3:7000,172.17.0.5:7000,172.17.0.6:7000
      Port:              tcp-admin-server  8080/TCP
      TargetPort:        8080/TCP
      Endpoints:         172.17.0.3:8080,172.17.0.5:8080,172.17.0.6:8080
      Session Affinity:  None
      Events:            <none>
    Expected zookeeper-headless
    Name:              zookeeper-headless
    Namespace:         default
    Labels:            app=zookeeper
                       headless=true
                       release=zookeeper
    Annotations:       external-dns.alpha.kubernetes.io/hostname: zookeeper-headless.hgvkbmxwtg.
    Selector:          app=zookeeper
    Type:              ClusterIP
    IP Family Policy:  SingleStack
    IP Families:       IPv4
    IP:                None
    IPs:               None
    Port:              tcp-client  2181/TCP
    TargetPort:        2181/TCP
    Endpoints:         172.17.0.3:2181,172.17.0.5:2181,172.17.0.6:2181
    Port:              tcp-quorum  2888/TCP
    TargetPort:        2888/TCP
    Endpoints:         172.17.0.3:2888,172.17.0.5:2888,172.17.0.6:2888
    Port:              tcp-leader-election  3888/TCP
    TargetPort:        3888/TCP
    Endpoints:         172.17.0.3:3888,172.17.0.5:3888,172.17.0.6:3888
    Port:              tcp-metrics  7000/TCP
    TargetPort:        7000/TCP
    Endpoints:         172.17.0.3:7000,172.17.0.5:7000,172.17.0.6:7000
    Port:              tcp-admin-server  8080/TCP
    TargetPort:        8080/TCP
    Endpoints:         172.17.0.3:8080,172.17.0.5:8080,172.17.0.6:8080
    Session Affinity:  None
    Events:            <none>

Note that the expected zookeeper-headless service has the additional annotation key-value pair external-dns.alpha.kubernetes.io/hostname: zookeeper-headless.hgvkbmxwtg. compared with the observed state in the buggy run.
The expected zookeeper-headless state is reached correctly if we apply cr2.yaml directly after deploying the operator.

Did you expect to see something different?

We expected zookeeper-operator to update the zookeeper-headless service's annotations when the field spec.domainName is updated. Currently, zookeeper-operator sets the annotation correctly when creating the zookeeper-headless service, but it cannot correctly update an existing one.

Environment

  • Operator version (Image):

    pravega/zookeeper-operator:0.2.13

  • Kubernetes version information:

    Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.1", GitCommit:"5e58841cce77d4bc13713ad2b91fa0d961e69192", GitTreeState:"clean", BuildDate:"2021-05-12T14:11:29Z", GoVersion:"go1.16.3", Compiler:"gc", Platform:"darwin/amd64"}
    Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.0", GitCommit:"cb303e613a121a29364f75cc67d3d580833a7479", GitTreeState:"clean", BuildDate:"2021-04-08T16:25:06Z", GoVersion:"go1.16.1", Compiler:"gc", Platform:"linux/amd64"}
    
  • Kubernetes cluster kind:

    minikube start --vm-driver=docker --cpus 4 --memory 4096 --kubernetes-version v1.21.0
    

Root Cause

cr2.yaml updates the field spec.domainName. This field is used in the function makeService (Link) to populate the annotation map in the zookeeper-headless service specification. The new service specification is then assigned to the variable svc in zookeepercluster_controller.go#L398.

In zookeepercluster_controller.go#L407, the operator checks whether the zookeeper-headless service already exists. If it does not, the operator creates a new zookeeper-headless service from the new specification svc in zookeepercluster_controller.go#L411, which is why creation works correctly.

On the other hand, if zookeeper-headless already exists, the operator calls zk.SyncService(foundSvc, svc) to merge the desired specification svc into the existing specification foundSvc, and then uses foundSvc to update the existing service. However, SyncService only copies Spec.Ports and Spec.Type, so annotation changes are silently dropped and the operator cannot update the zookeeper-headless service correctly.

To fix this bug, the function SyncService should also propagate the annotations, for example:

// SyncService synchronizes a service with an updated spec and validates it
func SyncService(curr *v1.Service, next *v1.Service) {
	curr.Spec.Ports = next.Spec.Ports
	curr.Spec.Type = next.Spec.Type
	// Fix: also copy the Annotations field from `next` to `curr`
	curr.Annotations = next.Annotations
}

Additional notes

This bug affects multiple annotation fields in the CR, including spec.adminServerService.annotations, spec.clientService.annotations, and spec.headlessService.annotations.

hoyhbx added a commit to hoyhbx/zookeeper-operator that referenced this issue Aug 1, 2022
anishakj pushed a commit that referenced this issue Aug 12, 2022
mmoscher pushed a commit to mmoscher/zookeeper-operator that referenced this issue Oct 12, 2022