
confluent-schema-registry deployment and service are not created when creating StrimziSchemaRegistry #77

Closed
murthy95 opened this issue Sep 26, 2022 · 5 comments

Comments

@murthy95

Hi, I am running a local Minikube setup. Here are the steps I followed to run the Schema Registry on Kubernetes (Minikube).

  • Installed the Strimzi Kafka operator using the Strimzi 0.31.1 release.
  • Started a Kafka cluster using minikube/setupkafka.sh.
  • Created a Kafka user and a Kafka topic.
  • Deployed the schema registry operator using minikube/deploysro.sh.

Up to this point, everything works fine.

I then ran minikube/deployregistry.sh and it fails because the confluent-schema-registry deployment is not found. It fails at this step:
kubectl wait deployment confluent-schema-registry --for condition=Available=True --timeout=600s -n default
This is the error.
Error from server (NotFound): deployments.apps "confluent-schema-registry" not found

When I checked the deployments and services in the default namespace, I didn't find confluent-schema-registry, whereas the secret was created. How do I debug this issue?

kubectl get deployments
NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
test-cluster-entity-operator   1/1     1            1           97m

kubectl get svc

NAME                            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                               AGE
kubernetes                      ClusterIP   10.96.0.1       <none>        443/TCP                               100m
test-cluster-kafka-bootstrap    ClusterIP   10.96.120.130   <none>        9091/TCP,9092/TCP,9093/TCP            97m
test-cluster-kafka-brokers      ClusterIP   None            <none>        9090/TCP,9091/TCP,9092/TCP,9093/TCP   97m
test-cluster-zookeeper-client   ClusterIP   10.96.38.74     <none>        2181/TCP                              98m
test-cluster-zookeeper-nodes    ClusterIP   None            <none>        2181/TCP,2888/TCP,3888/TCP            98m

kubectl get secrets

NAME                                       TYPE     DATA   AGE
confluent-schema-registry                  Opaque   5      82m
test-cluster-clients-ca                    Opaque   1      99m
test-cluster-clients-ca-cert               Opaque   3      99m
test-cluster-cluster-ca                    Opaque   1      99m
test-cluster-cluster-ca-cert               Opaque   3      99m
test-cluster-cluster-operator-certs        Opaque   4      99m
test-cluster-entity-topic-operator-certs   Opaque   4      97m
test-cluster-entity-user-operator-certs    Opaque   4      97m
test-cluster-kafka-brokers                 Opaque   4      97m
test-cluster-zookeeper-nodes               Opaque   12     99m
@Guberlo

Guberlo commented Oct 18, 2022

Hi there,

I had the same problem with the confluent-schema-registry secret not being created correctly.
In my case it was because the authorization for the KafkaUser was of type simple, and the Kafka cluster has this disabled by default.

You can check if you got the same error I had by doing this:

Get KafkaUsers:
kubectl get kafkausers -o wide -w -n <namespace>
Which will output something like this:
[screenshot of the kubectl get kafkausers output]

Now, if your user is not ready (there is no True under the READY column), then you might have run into the same problem as I did.
Just check the status of the KafkaUser by typing:
kubectl get kafkausers <username> -o yaml
and look for the status attribute.
If you got an authorization problem like this:
[screenshot of the KafkaUser status showing an authorization error]

then you will have to modify your Kafka cluster resource to enable simple authorization, as below:
[screenshot of the Kafka spec with simple authorization enabled]
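
Since the screenshots above did not survive, here is a minimal sketch of the kind of change being described, assuming a Strimzi Kafka resource named test-cluster (as in this issue) and the kafka.strimzi.io/v1beta2 API:

# Sketch only, not the original screenshot: enable simple authorization on the
# Strimzi-managed Kafka cluster so that a KafkaUser with
# spec.authorization.type: simple can be reconciled.
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: test-cluster
spec:
  kafka:
    authorization:
      type: simple    # the piece that is absent (disabled) by default
    # ...rest of the existing kafka/zookeeper/entityOperator spec unchanged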

@murthy95
Author

@Guberlo Thanks, I was able to fix this.

@applike-ss

I see that here (https://github.com/lsst-sqre/strimzi-registry-operator/blob/main/strimziregistryoperator/handlers/createregistry.py#L120) we read the cluster name from the official Strimzi labels that are put on the KafkaUser resource.

Is there a specific reason for that? I wonder if we even have to call the k8s api for that.
My suggestion is to require the user to put the proper label on the StrimziSchemaRegistry resource in the first place (like you have to on the Strimzi-provided CRs) and read it from the meta variable directly.
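
To make this concrete, a hypothetical sketch of what a labeled StrimziSchemaRegistry could look like; the apiVersion, kind, and spec fields here are assumptions based on the strimzi-registry-operator project, not something confirmed in this thread, while strimzi.io/cluster is the official Strimzi label already used on KafkaUser and KafkaTopic resources:

# Hypothetical sketch of the proposal: the StrimziSchemaRegistry itself carries
# the official Strimzi cluster label, so the operator can read the cluster name
# from its own metadata instead of looking it up on the KafkaUser via the
# Kubernetes API.
apiVersion: roundtable.lsst.codes/v1beta1   # assumed group/version
kind: StrimziSchemaRegistry
metadata:
  name: confluent-schema-registry
  labels:
    strimzi.io/cluster: test-cluster        # cluster name read directly from this label
spec:
  listener: tls                             # assumed spec field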

What do you think about this @jonathansick ?

@jonathansick
Member

Excellent idea @applike-ss, we'll implement that.

Separately, one of the goals that's emerged from our use at Rubin Observatory is supporting multiple Strimzi-deployed clusters (that are operating in separate namespaces). So annotating StrimziSchemaRegistry with the cluster name also helps us with that.

Also, thank you @Guberlo for pitching in on community support. 🍻 Really appreciate it.

@applike-ss

> Excellent idea @applike-ss, we'll implement that.
>
> Separately, one of the goals that's emerged from our use at Rubin Observatory is supporting multiple Strimzi-deployed clusters (that are operating in separate namespaces). So annotating StrimziSchemaRegistry with the cluster name also helps us with that.
>
> Also, thank you @Guberlo for pitching in on community support. 🍻 Really appreciate it.


The current limitation of supporting only one schema registry and one cluster was actually bothering us as well. Great to hear that you want to fix that too!
