
Missing required configuration "bootstrap.servers" which has no default value #193

Closed
hagay-david-devops opened this issue Jan 20, 2020 · 31 comments
Labels
question Further information is requested

Comments

@hagay-david-devops

Hi,
Even though bootstrap.servers is configured correctly, the log reports the following:
Caused by: org.apache.kafka.common.config.ConfigException: Missing required configuration "bootstrap.servers" which has no default value.

Any idea?
Thanks.

@tchiotludo tchiotludo added the question Further information is requested label Jan 20, 2020
@tchiotludo
Owner

I need your configuration files to help you!

@hagay-david-devops
Author

hagay-david-devops commented Jan 20, 2020

Sure.
Assume the bootstrap servers below are the real ones.

image:
repository: tchiotludo/kafkahq
tag: latest
annotations: {}
#prometheus.io/scrape: 'true'
#prometheus.io/port: '8080'
#prometheus.io/path: '/metrics'
extraEnv: []

configuration: |
kafkahq:
server:
access-log:
enabled: false
name: org.kafkahq.log.access
connections:
my-cluster-plain-text:
properties:
bootstrap.servers: "kafkacluster01:9092,kafkacluster02:9092,kafkacluster03:9092"

secrets: |
kafkahq:
connections:
my-cluster-plain-text:
properties:
#bootstrap.servers: "kafka:9092"
bootstrap.servers: "kafkacluster01:9092,kafkacluster02:9092,kafkacluster03:9092"

extraVolumes: []

extraVolumeMounts: []

service:
enabled: true
type: LoadBalancer
port: 8090
annotations:

ingress:
enabled: false
annotations: {}

paths: []
hosts:
- kafkahq.demo.com
tls: []

resources: {}

nodeSelector: {}

tolerations: []

affinity: {}

@hagay-david-devops
Author

Are any other configs needed besides values.yaml?
The configmap.yaml is the original one.

@tchiotludo
Owner

Can you send it with formatting? Understanding YAML without indentation is "impossible" 😄

@hagay-david-devops
Author

image:
  repository: tchiotludo/kafkahq
  tag: latest
  annotations: {}
    #prometheus.io/scrape: 'true'
    #prometheus.io/port: '8080'
    #prometheus.io/path: '/metrics'
  extraEnv: []
  ## You can put directly your configuration here...
  # - name: KAFKAHQ_CONFIGURATION
  #   value: |
  #       kafkahq:
  #         secrets:
  #           docker-kafka-server:
  #             properties:
  #               bootstrap.servers: "kafka:9092"


## Or you can also use configmap for the configuration...
configuration: |
  kafkahq:
    server:
      access-log: 
        enabled: false 
        name: org.kafkahq.log.access
    connections:
      my-cluster-plain-text:
        properties:
          bootstrap.servers: "kafkacluster01:9092,kafkacluster02:9092,kafkacluster03:9092"

##... and secret for connection information
secrets: |
  kafkahq:
    connections:
      my-cluster-plain-text:
        properties:
          #bootstrap.servers: "kafka:9092"
          bootstrap.servers: "kafkacluster01:9092,kafkacluster02:9092,kafkacluster03:9092" 
        #schema-registry:
        #  url: "http://schema-registry:8085"
        #  basic-auth-username: basic-auth-user
        #  basic-auth-password: basic-auth-pass
        #connect:
        #  url: "http://connect:8083"
        #  basic-auth-username: basic-auth-user
        #  basic-auth-password: basic-auth-pass

# Any extra volumes to define for the pod (like keystore/truststore)
extraVolumes: []
# Any extra volume mounts to define for the kafkaHQ container
extraVolumeMounts: []

service:
  enabled: true
  #type: ClusterIP
  type: LoadBalancer
  port: 8090
  annotations:
    # cloud.google.com/load-balancer-type: "Internal"

ingress:
  enabled: false
  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  paths: []
  hosts:
    - kafkahq.demo.com
  tls: []
  #  - secretName: kafkahq-tls
  #    hosts:
  #      - kafkahq.demo.com

resources: {}
  # limits:
  #  cpu: 100m
  #  memory: 128Mi
  # requests:
  #  cpu: 100m
  #  memory: 128Mi

nodeSelector: {}

tolerations: []

affinity: {}

@tchiotludo
Owner

As far as I can see, you must provide the cluster in either configuration or secrets, not both.

Change it to this and it should work:

configuration: |
  kafkahq:
    connections:
      my-cluster-plain-text:
        properties:
          bootstrap.servers: "kafkacluster01:9092,kafkacluster02:9092,kafkacluster03:9092"

@hagay-david-devops
Author

hagay-david-devops commented Jan 20, 2020

Hi,
I'm afraid that's not the case here.
I commented out the whole secrets section and reinstalled the chart, but I'm getting the same error.
I can post the whole Java log if needed.

@tchiotludo
Owner

Can you connect inside the pod, look at /app/application.yml and /app/application-secrets.yml, and send me the files, please?

@hagay-david-devops
Author

kafkahq:
  server:
    access-log: 
      enabled: false 
      name: org.kafkahq.log.access
  connections:
    my-cluster-plain-text:
      properties:
        bootstrap.servers: "kafkacluster01:9092,kafkacluster02:9092,kafkacluster03:9092"

@hagay-david-devops
Author

hagay-david-devops commented Jan 20, 2020

/app/application-secrets.yml does not exist, as per your request to use only one method.

@tchiotludo
Owner

OK, all seems good...
Really weird.

Can you look at this one: #184

In particular, add this to your configuration:

endpoints:
    env:
        enabled: true
        sensitive: true

Then, inside your pod, run curl http://localhost:8080/env to see whether the configuration is applied.
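For reference, the endpoints block is server-level (Micronaut) configuration and sits at the same level as the kafkahq key, not under it. A minimal sketch of the merged chart value, reusing the cluster name from this thread (the exact placement is assumed from the snippets in this thread, not verified against the chart):

```yaml
configuration: |
  endpoints:
    env:
      enabled: true
      sensitive: true
  kafkahq:
    connections:
      my-cluster-plain-text:
        properties:
          bootstrap.servers: "kafkacluster01:9092,kafkacluster02:9092,kafkacluster03:9092"
```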

@hagay-david-devops
Author

hagay-david-devops commented Jan 20, 2020

from Host

DV-99224:deploy hagayd$ curl -v http://LB-IP:8090
*   Trying 192.168.50.38...
* TCP_NODELAY set
* Connected to 192.168.50.38 (192.168.50.38) port 8090 (#0)
> GET / HTTP/1.1
> Host: 192.168.50.38:8090
> User-Agent: curl/7.64.1
> Accept: */*
> 
< HTTP/1.1 301 Moved Permanently
< Location: /my-cluster-plain-text/topic
< Date: Mon, 20 Jan 2020 13:14:56 GMT
< transfer-encoding: chunked
< connection: close
< 
* Closing connection 0

@hagay-david-devops
Author

hagay-david-devops commented Jan 20, 2020

Inside the POD

root@kafkahq-67c97cccf4-hqk4j:/app# curl -v http://localhost:8080/
* Expire in 0 ms for 6 (transfer 0x559d284badc0)
* [... repeated "Expire in ..." lines omitted ...]
*   Trying 127.0.0.1...
* TCP_NODELAY set
* Expire in 150000 ms for 3 (transfer 0x559d284badc0)
* Expire in 200 ms for 4 (transfer 0x559d284badc0)
* Connected to localhost (127.0.0.1) port 8080 (#0)
> GET / HTTP/1.1
> Host: localhost:8080
> User-Agent: curl/7.64.0
> Accept: */*
> 
< HTTP/1.1 301 Moved Permanently
< Location: /my-cluster-plain-text/topic
< Date: Mon, 20 Jan 2020 13:17:45 GMT
< transfer-encoding: chunked
< connection: close
< 
* Closing connection 0

@tchiotludo
Owner

curl -v http://localhost:8080/env please!
"/" alone will just redirect to your cluster.

@hagay-david-devops
Author

hagay-david-devops commented Jan 20, 2020

DV-99224:deploy hagayd$ curl -v http://LB-IP:8090/env
*   Trying LB-IP...
* TCP_NODELAY set
* Connected to LB-IP (LB-IP) port 8090 (#0)
> GET /env HTTP/1.1
> Host: LB-IP:8090
> User-Agent: curl/7.64.1
> Accept: */*
> 
< HTTP/1.1 401 Unauthorized
< Date: Mon, 20 Jan 2020 13:24:27 GMT
< transfer-encoding: chunked
< connection: close
< 
* Closing connection 0
DV-99224:deploy hagayd$ 

@tchiotludo
Owner

Have you added this to your configuration files?

endpoints:
    env:
        enabled: true
        sensitive: true

If yes, your configuration is not being picked up by Micronaut, which would be really weird...

Maybe replace the bootstrap server list with a single entry to see if that helps?
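For completeness, the single-server variant suggested here would look like this (same cluster and host names as above, kept purely as an example):

```yaml
configuration: |
  kafkahq:
    connections:
      my-cluster-plain-text:
        properties:
          bootstrap.servers: "kafkacluster01:9092"
```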

@hagay-david-devops
Author

Yes, I have added the above section to the configuration.
I also tried using only one bootstrap server.
When hitting curl -v http://LB-IP:8090/env, I land on the login page.
Is that OK?

@hagay-david-devops
Author

hagay-david-devops commented Jan 20, 2020

Is there a default username and password?
Or only when enabling basic-auth?

@tchiotludo
Owner

No, it's not OK...
Being redirected to the login page means the endpoints configuration is not being taken into account, so whatever you set up will not work, since the configuration is not being used, for some really weird reason...
To be honest, I'm completely blind here and can't help you further...

@unixunion

unixunion commented Jan 22, 2020

I am getting this same error trying to spin up the dev compose: even though I have kafka->properties->bootstrap.servers set, the Kafka client throws an exception when accessing KafkaHQ.

Caused by: org.apache.kafka.common.config.ConfigException: Missing required configuration "bootstrap.servers" which has no default value. at org.apache.kafka.common.config.ConfigDef.parseValue(ConfigDef.java:476)

@tchiotludo
Owner

@unixunion : you are just trying with the docker-compose-dev.yml ? right ?

@unixunion

unixunion commented Jan 22, 2020

Correct, and I have put an application.yml in the mounted root, and I had to add MICRONAUT_CONFIG_FILES to the compose too. I tried the latest tag, and the master and dev branches. Same issue.

I added this to the env for the kafkahq service in the compose file:

MICRONAUT_CONFIG_FILES: /app/application.yml

I can't seem to get the entire YAML formatted in here. Pastebin: https://pastebin.com/cgKL85ep

    kafkahq_1          | 	at java.base/java.lang.Thread.run(Thread.java:834)
    kafkahq_1          | Caused by: org.apache.kafka.common.config.ConfigException: Missing required configuration "bootstrap.servers" which has no default value.
    kafkahq_1          | 	at org.apache.kafka.common.config.ConfigDef.parseValue(ConfigDef.java:476)
    kafkahq_1          | 	at org.apache.kafka.common.config.ConfigDef.parse(ConfigDef.java:466)
    kafkahq_1          | 	at org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:108)
    kafkahq_1          | 	at org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:142)
    kafkahq_1          | 	at org.apache.kafka.clients.admin.AdminClientConfig.<init>(AdminClientConfig.java:196)
    kafkahq_1          | 	at org.apache.kafka.clients.admin.AdminClient.create(AdminClient.java:55)
    kafkahq_1          | 	at org.kafkahq.modules.KafkaModule.getAdminClient(KafkaModule.java:97)
    kafkahq_1          | 	at org.kafkahq.modules.AbstractKafkaWrapper.lambda$listTopics$1(AbstractKafkaWrapper.java:51)
    kafkahq_1          | 	at org.kafkahq.utils.Logger.call(Logger.java:19)
    kafkahq_1          | 	... 145 common frames omitted

@unixunion

unixunion commented Jan 22, 2020

In docker-compose-dev.yml I commented out the kafkahq and webpack services, then started KafkaHQ via IntelliJ. I put a breakpoint on line 97 of KafkaModule.java, and the clusterId passed in is docker-kafka-server, which is not the cluster I have in my application.yml. So I don't know where it is getting this clusterId from.

EDIT: I see it got the clusterId from a stale URL call. Actually, it seems I am being redirected to the non-existent cluster: I hit http://localhost:8080/ and get redirected to http://localhost:8080/docker-kafka-server/topic.

EDIT: if I hit http://localhost:8080/my-cluster-plain-text/topic directly, I get to a new error regarding my network setup. Will post results soon.

@unixunion

I think I figured the issue out: the browser cache is the culprit here. Because I had a KafkaHQ deployed in Docker in another experiment, the cache got contaminated, so when I access localhost:8080 I get redirected to the clusterId for docker-kafka-server. Nuking the cache seems to resolve it.

@tchiotludo
Owner

I've had this issue too!
I didn't think of it, but you are right: the browser caches the redirect and will try to go to the last URL, and if you change the cluster name, this can lead to this error...

@unixunion

So this is not really a bug, but perhaps a no-cache header should be added to the response. I guess @hagay-david-devops has the same cache problem.
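For illustration, the kind of response that would avoid the stale redirect is a non-cacheable temporary redirect instead of the 301 seen in the curl traces above (a sketch in plain HTTP, not taken from KafkaHQ's actual code):

```http
HTTP/1.1 307 Temporary Redirect
Location: /my-cluster-plain-text/topic
Cache-Control: no-store
```

Browsers may cache a 301 indefinitely, while a 307 with Cache-Control: no-store would be re-resolved on every visit, so a renamed cluster would not keep redirecting to the old clusterId.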

@hagay-david-devops
Author

Hi,
I will check from my side as well and update if it's related to the browser cache.
Thanks.

@tchiotludo
Owner

I don't really want to add a no-cache header; I'd rather let the browser do its job.
And this situation occurs during initial configuration only.
Closing for now.

@Pixelshifter

Pixelshifter commented Jul 1, 2020

I'm running into this issue AFTER initial configuration. I've been running AKHQ for two weeks now; it connects to three 3-node clusters. My users are saying that AKHQ becomes really slow after running for a while. I've just restarted the container after one week of running with no issues. Looking in the log, I see the exact same error being logged. Only a restart of the containers clears it.

Could this be browser related, and if so, what kind of browser tools could I use to help find the root cause?

@tchiotludo
Owner

I don't think it's a browser issue.
It must be a server-side issue instead.

Please create a new issue with more logs.
Since you say the issue depends on uptime, take a snapshot of the log when the users complain, and another snapshot just after a restart while browsing most of the pages.

@adacaccia

In my experience (app 0.20 from Helm chart 0.27), the /env endpoint only gets deployed on the "management" port, 28081 by default in the Helm chart. 8080 is the default port for the standard service and will never serve the /env endpoint!
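Following that observation, a quick check would be to hit the management port from inside the pod. The port number is taken from the comment above, and the pod name is a placeholder:

```
kubectl exec -it <kafkahq-pod> -- curl -s http://localhost:28081/env
```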
