# PulsarSQL worker node readiness and liveness probes fail #87
## Comments
@robparrott - thanks for the detailed issue report. I am looking into it now.

Update: I've reproduced the issue in my local environment using minikube, the latest released version of the Helm chart (2.0.1), and a slightly modified version of

@robparrott - chart version 2.0.7 includes the fix for this issue. Please let us know if you encounter any other issues with the chart. Thanks!
Add PSP and add/modify RBAC. I'm open to discussion on all of this.

### Motivation

On clusters that use PodSecurityPolicies (PSPs) with a restrictive default policy, Pulsar cannot be installed, because it runs as the root user and requires a writable container root directory. Additionally, the default RBAC for the broker is, in my opinion, too permissive (it uses a ClusterRoleBinding).

### Modifications

- Add a PSP and RBAC for bookkeeper and autorecovery so that startup succeeds even in a secured environment where containers cannot write to the root filesystem by default.
- Add an option to limit the broker's ClusterRoleBinding to a single namespace by replacing it with a RoleBinding.

### Verifying this change

- [x] Make sure that the change passes the CI checks.
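As a sketch of the namespace-scoped alternative described above, replacing the broker's ClusterRoleBinding with a RoleBinding looks roughly like the following; the resource names and namespace here are placeholders for illustration, not the chart's actual template output:

```yaml
# Hypothetical RoleBinding limiting the broker's permissions to a single
# namespace; names and the namespace are placeholders, not chart values.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pulsar-broker
  namespace: pulsar
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pulsar-broker
subjects:
  - kind: ServiceAccount
    name: pulsar-broker
    namespace: pulsar
```

Unlike a ClusterRoleBinding, a RoleBinding grants the referenced permissions only within its own namespace, which is the restriction the modification aims for.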
The readiness and liveness probes fail for the PulsarSQL worker nodes.
For example, from the pod in question:
Calling the status endpoint as localhost fails:
As does calling the status endpoint via the pod IP:
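For context, Pulsar SQL workers are Presto processes, and probes of this kind typically hit the server's HTTP status endpoint. A sketch of what such a probe configuration looks like follows; the container name, port (8081), path (/v1/status), and timing values are assumptions for illustration, so check the chart's actual StatefulSet template for the real values:

```yaml
# Hypothetical probe configuration for a Pulsar SQL (Presto) worker pod.
# Container name, port, path, and delays are illustrative assumptions.
containers:
  - name: pulsar-sql-worker
    ports:
      - containerPort: 8081
        name: http
    readinessProbe:
      httpGet:
        path: /v1/status
        port: 8081
      initialDelaySeconds: 30
      periodSeconds: 10
    livenessProbe:
      httpGet:
        path: /v1/status
        port: 8081
      initialDelaySeconds: 60
      periodSeconds: 30
```

The same endpoint can be checked manually from inside the pod with something like `curl -f http://localhost:8081/v1/status`, which mirrors the failing checks described above.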