
Pods of broker/proxy/recovery init failed when enabled tls #25

Closed
tabalt opened this issue Jun 24, 2020 · 2 comments · Fixed by #27

Comments


tabalt commented Jun 24, 2020

Describe the bug
Pods of broker/proxy/recovery init failed when enabled tls

To Reproduce
Install commands:

git clone https://github.com/apache/pulsar-helm-chart.git ./
cd pulsar-helm-chart/

./scripts/cert-manager/install-cert-manager.sh
./scripts/pulsar/prepare_helm_release.sh -c -n pulsar -k pulsar

helm upgrade --install pulsar charts/pulsar \
    --set namespace=pulsar --set volumes.local_storage=true --set certs.internal_issuer.enabled=true \
    --set tls.enabled=true --set tls.proxy.enabled=true  --set tls.broker.enabled=true  --set tls.bookie.enabled=true \
    --set tls.zookeeper.enabled=true  --set tls.autorecovery.enabled=true  --set tls.toolset.enabled=true \
    --set auth.authentication.enabled=true --set auth.authorization.enabled=true -n pulsar

Unexpected behavior

Pods of broker/proxy/recovery are stuck in the Init status:

kubectl get pods -n pulsar
NAME                                     READY   STATUS      RESTARTS   AGE
pulsar-bookie-0                          1/1     Running     0          46m
pulsar-bookie-1                          1/1     Running     0          46m
pulsar-bookie-2                          1/1     Running     0          46m
pulsar-bookie-3                          1/1     Running     0          46m
pulsar-bookie-init-l9zdv                 0/1     Completed   0          46m
pulsar-broker-0                          0/1     Init:0/2    0          46m
pulsar-broker-1                          0/1     Init:0/2    0          46m
pulsar-broker-2                          0/1     Init:0/2    0          46m
pulsar-grafana-5ffd75b49d-g658b          1/1     Running     0          46m
pulsar-prometheus-5f957bf77-6mj2z        1/1     Running     0          46m
pulsar-proxy-0                           0/1     Init:1/2    0          46m
pulsar-proxy-1                           0/1     Init:1/2    0          46m
pulsar-proxy-2                           0/1     Init:1/2    0          46m
pulsar-pulsar-init-mqsvt                 1/1     Running     0          46m
pulsar-pulsar-manager-767d5f5766-khpr4   1/1     Running     0          46m
pulsar-recovery-0                        0/1     Init:0/1    0          46m
pulsar-toolset-0                         1/1     Running     0          46m
pulsar-zookeeper-0                       1/1     Running     0          46m
pulsar-zookeeper-1                       1/1     Running     0          46m
pulsar-zookeeper-2                       1/1     Running     0          45m

The check of the file /pulsar/certs/broker/tls.crt failed when the init container started:

kubectl logs pulsar-broker-0 -c wait-zookeeper-ready -n pulsar | head -8
processing /pulsar/certs/broker/tls.crt : len = 0
/pulsar/certs/broker/tls.crt is empty
JMX enabled by default
Connecting to pulsar-zookeeper:2281
...

When I checked, the TLS files had been generated:

kubectl exec -it  pulsar-broker-0 -c wait-zookeeper-ready -n pulsar /bin/bash
ls -al /pulsar/certs/broker/tls.crt
lrwxrwxrwx 1 root root 14 Jun 24 10:06 /pulsar/certs/broker/tls.crt -> ..data/tls.crt
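The `..data` symlink is how Kubernetes mounts Secret volumes: files are written into a timestamped directory and swapped in atomically, so the symlink can exist while its target is still empty if the Secret was mounted before cert-manager populated it. A minimal check (a hypothetical helper for illustration, not part of the chart) that treats the certificate as ready only when the link resolves to non-empty content:

```shell
# cert_ready FILE: succeed only when FILE resolves (through the ..data
# symlink) to non-empty content. Hypothetical helper, not the chart's code.
cert_ready() {
  # test -s follows symlinks and checks existence plus size > 0
  [ -s "$1" ]
}

# Example usage (path taken from the issue):
# cert_ready /pulsar/certs/broker/tls.crt && echo "certificate populated"
```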

If I re-run the following command:

/pulsar/keytool/keytool.sh broker ${HOSTNAME}.pulsar-broker.pulsar.svc.cluster.local true;

the init container exits successfully and the pod starts running:

kubectl get pods -n pulsar | grep 'pulsar-broker-0'
pulsar-broker-0                          1/1     Running     0          71m
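Since re-running keytool.sh by hand succeeds once the Secret is populated, one way to avoid the race is to poll until the mounted certificate is non-empty before running it. A sketch of that retry logic (the helper name, retry budget, and one-second interval are assumptions, not necessarily what the eventual fix in #27 implements):

```shell
# wait_for_nonempty FILE RETRIES: poll once per second until FILE is
# non-empty, failing after RETRIES attempts. Illustrative sketch only.
wait_for_nonempty() {
  file=$1
  retries=$2
  i=0
  while [ "$i" -lt "$retries" ]; do
    if [ -s "$file" ]; then
      return 0
    fi
    echo "waiting for $file ... attempt $((i + 1))"
    sleep 1
    i=$((i + 1))
  done
  return 1
}

# The broker init step could then become (paths from the issue):
# wait_for_nonempty /pulsar/certs/broker/tls.crt 30 &&
#   /pulsar/keytool/keytool.sh broker "${HOSTNAME}.pulsar-broker.pulsar.svc.cluster.local" true
```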

sijie commented Jun 24, 2020

@tabalt Did you install cert-manager before installing the Pulsar helm chart? The helm chart uses cert-manager to issue self-signed certificates.

See: http://pulsar.apache.org/docs/en/helm-deploy/#install-cert-manager


tabalt commented Jun 25, 2020

@sijie Yes, I have installed cert-manager before installing the Pulsar helm chart.
The certificate files were generated successfully when I checked, but they had not yet been generated when the pods of broker/proxy/recovery started. So the following init script failed:

/pulsar/keytool/keytool.sh broker ${HOSTNAME}.pulsar-broker.pulsar.svc.cluster.local true;

And the following check script fails every time:

until bin/bookkeeper org.apache.zookeeper.ZooKeeperMain -server pulsar-zookeeper:2281 get /admin/clusters/pulsar; do   echo "pulsar cluster pulsar isn't initialized yet ... check in 3 seconds ..." && sleep 3; done;

tabalt mentioned this issue Jun 25, 2020
sijie closed this as completed in #27 Jun 26, 2020
sijie pushed a commit that referenced this issue Jun 26, 2020
sijie pushed a commit to streamnative/charts that referenced this issue Jun 26, 2020
Joshhw pushed a commit to Joshhw/pulsar-helm-chart that referenced this issue Mar 10, 2021
pgier pushed a commit to pgier/pulsar-helm-chart that referenced this issue Apr 22, 2022
rdhabalia pushed a commit to rdhabalia/pulsar-helm-chart that referenced this issue Feb 2, 2023