
Couldn't find any clusters on your configuration file #184

Closed
eilonk opened this issue Jan 14, 2020 · 18 comments

Comments

@eilonk

eilonk commented Jan 14, 2020

Hi, I'm trying to configure KafkaHQ for the first time.
I am using Helm to deploy it on Kubernetes, but when I try to access it, I get the following error:

{"message":"Internal Server Error: Couldn't find any clusters on your configuration file, please ensure that the configuration file is loaded correctly","logref":{"empty":true,"present":false},"path":{"empty":true,"present":false}}

Kafka itself is working correctly, and I only changed "cluster-name" from my original one.
When I deploy it I get no errors, but when I try to log in through 127.0.0.1:8080 I get the error message above.

my configuration file looks like this:

image:
  repository: tchiotludo/kafkahq
  tag: latest
  annotations: {}
  extraEnv: {}

secrets: |
  kafkahq:
    secrets:
      cluster-name:
        properties:
          bootstrap.servers: "strimzi-kafka-control-kafka-bootstrap.cluster-name:9092"
        connect:
          url: ""

kafkahq:
  connections:
    cluster-name:
      properties:
        bootstrap.servers: "strimzi-kafka-control-kafka-bootstrap.cluster-name:9092"

service:
  enabled: true
  type: ClusterIP
  port: 80
  annotations: {}

resources: {}

nodeSelector: {}

tolerations: []

affinity: {}

I'm not really sure why it has a problem locating my cluster, and would love some insight:

  1. Bootstrap servers - should I write my main bootstrap address only? Two ports are open on this address, and I saw some conflicting opinions online about whether I should add the kafka-control servers as well.
  2. Is anything else missing or wrong?

Thanks in advance for any help :)

@tchiotludo
Owner

Hi,

Can you edit your issue with formatted YAML?
It's impossible to help you without the indentation.

Thanks

@eilonk
Author

eilonk commented Jan 14, 2020

> Can you edit your issue with formatted yaml ?
> It's impossible to help you without indentation

Hi, I changed it - it's my first issue, sorry!

@tchiotludo
Owner

I have a better view now.

There was a bug in the helm chart, which I've corrected now.

Here is a minimal working example:

secrets: |
  kafkahq:
    connections:
      cluster-name:
        properties:
          bootstrap.servers: "strimzi-kafka-control-kafka-bootstrap.cluster-name:9092"
        connect:
          url: ""

There is no need to duplicate bootstrap.servers in both configuration & secrets.
bootstrap.servers needs to contain at least one server from your cluster (it can be many, separated by `,`).
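As a minimal sketch, a connection pointing at several brokers could look like this (the hostnames here are placeholders, not from the issue):

```yaml
kafkahq:
  connections:
    cluster-name:
      properties:
        # Any subset of the cluster's brokers works; the client discovers the rest.
        bootstrap.servers: "broker-0.example:9092,broker-1.example:9092,broker-2.example:9092"
```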

@eilonk
Author

eilonk commented Jan 14, 2020

I still get the error.
I tried editing bootstrap.servers like you said and changed it to the FQDN
(strimzi-kafka-control-kafka-bootstrap.cluster-name.svc.cluster.local),
and that doesn't seem to work either.
Also - if it doesn't need to be duplicated, can I delete it from the secret and keep it in the configuration?

@tchiotludo
Owner

Can you connect to the pod and check whether there is a file at /app/application.yml or /app/application-secrets.yml with the configuration you provided?
You can put it in configuration or secrets, it has no impact - both are loaded - but configuration is a simple ConfigMap, so credentials there are in clear text.
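As a sketch of the two placements (the connection values follow the issue's own example; the helm keys are assumptions based on the chart discussed in this thread):

```yaml
# Plain configuration - rendered as a ConfigMap, readable in clear text:
kafkahq:
  connections:
    cluster-name:
      properties:
        bootstrap.servers: "strimzi-kafka-control-kafka-bootstrap.cluster-name:9092"

# Or secrets - a literal YAML string (note the "|") stored in a Kubernetes Secret:
secrets: |
  kafkahq:
    connections:
      cluster-name:
        properties:
          bootstrap.servers: "strimzi-kafka-control-kafka-bootstrap.cluster-name:9092"
```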

@eilonk
Author

eilonk commented Jan 14, 2020

Yes, the file exists and has the configuration inside.

@tchiotludo
Owner

Can you go to the /env URL please? It will output the configuration that KafkaHQ sees.

@eilonk
Author

eilonk commented Jan 14, 2020

What am I looking for here?
There isn't a direct /env path in the pod (no such file or directory).
When looking at env (variables), I do see some set for KafkaHQ
(I erased the address, but it appears where it says x.x.x.x - the same one for all of them):

KAFKAHQ_0_1_0_PORT=tcp://x.x.x.x:80
KAFKAHQ_0_1_0_SERVICE_PORT_HTTP=80
KAFKAHQ_0_1_0_SERVICE_PORT=80
KAFKAHQ_0_1_0_PORT_80_TCP_PORT=80
KAFKAHQ_0_1_0_PORT_80_TCP_ADDR=x.x.x.x
KAFKAHQ_0_1_0_PORT_80_TCP=tcp://x.x.x.x:80
KAFKAHQ_0_1_0_PORT_80_TCP_PROTO=tcp
KAFKAHQ_0_1_0_SERVICE_HOST=x.x.x.x

@tchiotludo
Owner

tchiotludo commented Jan 14, 2020

Sorry, I meant over HTTP, like http://pods:8080/env

@eilonk
Author

eilonk commented Jan 14, 2020

Oh, no worries.
With port forwarding I get the same error output.

When I try to access it directly it doesn't even load anything, so I checked whether the pod is OK
and got this warning:

Readiness probe failed: HTTP probe failed with statuscode: 500

Is there any network configuration I left out, or anything else needed?

@tchiotludo
Owner

Really strange...
The readiness check always works, no matter the configuration.

Don't the kubectl logs give you any information?

@eilonk
Author

eilonk commented Jan 14, 2020

Super weird to me too... everything seems fine other than that.
The logs only show the same error as before, nothing new.
Are there any requirements my server might be exceeding?

@tchiotludo
Owner

I don't think so...
It's really hard for me to help blindly...

The things I really need to debug your case are the files under /app/*.yml
and the output of curl http://localhost:8080/env.

Without those two, I can't help you... sorry

@eilonk
Author

eilonk commented Jan 14, 2020

If there's anything else, tell me - thanks for helping out.

Event from pod logs:
WARN pGroup-1-2 org.kafkahq.log.access [Date: 2020-01-14T14:17:59.258649Z] [Duration: 3 ms] [Url: GET /health HTTP/1.1] [Status: 500] [Ip: x.x.x.x] [Length: 231] [Port: 8080]

curl output (same as before):
{"message":"Internal Server Error: Couldn't find any clusters on your configuration file, please ensure that the configuration file is loaded correctly","logref":{"empty":true,"present":false},"path":{"empty":true,"present":false}}

Content of application-secrets.yml (the only yml file under /app):

  cluster-name:
    properties:
      bootstrap.servers: "strimzi-kafka-control-kafka-bootstrap.cluster-name.svc.cluster.local:9092"
    connect:
      url: "http://x.x.x.x:9094"

**Events from pod description (events come from kubelet):**

Liveness probe failed: dial tcp x.x.x.x:8080: connect: connection refused
Readiness probe failed: HTTP probe failed with statuscode: 500

@eilonk
Author

eilonk commented Jan 14, 2020

Will other logs help? A Java stack trace, for example?

2020-01-14 14:35:59,259 ERROR pGroup-1-3 .s.n.RoutingInBoundHandler Unexpected error occurred: Couldn't find any clusters on your configuration file, please ensure that the configuration file is loaded correctly
java.lang.IllegalArgumentException: Couldn't find any clusters on your configuration file, please ensure that the configuration file is loaded correctly
        at org.kafkahq.middlewares.KafkaWrapperFilter.doFilter(KafkaWrapperFilter.java:25)
        at io.micronaut.http.filter.HttpServerFilter.doFilter(HttpServerFilter.java:48)
        at io.micronaut.http.server.netty.RoutingInBoundHandler.filterPublisher(RoutingInBoundHandler.java:1491)
        at io.micronaut.http.server.netty.RoutingInBoundHandler.exceptionCaughtInternal(RoutingInBoundHandler.java:330)
        at io.micronaut.http.server.netty.RoutingInBoundHandler.exceptionCaught(RoutingInBoundHandler.java:246)
        at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:297)
        at io.netty.channel.AbstractChannelHandlerContext.notifyHandlerException(AbstractChannelHandlerContext.java:831)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:376)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352)
        at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:108)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352)
        at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352)
        at io.micronaut.http.netty.stream.HttpStreamsHandler.channelRead(HttpStreamsHandler.java:191)
        at io.micronaut.http.netty.stream.HttpStreamsServerHandler.channelRead(HttpStreamsServerHandler.java:121)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352)
        at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352)
        at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
        at io.netty.handler.codec.MessageToMessageCodec.channelRead(MessageToMessageCodec.java:111)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352)
        at io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:93)
        at io.netty.handler.codec.http.HttpServerKeepAliveHandler.channelRead(HttpServerKeepAliveHandler.java:64)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352)
        at io.netty.handler.flow.FlowControlHandler.dequeue(FlowControlHandler.java:186)
        at io.netty.handler.flow.FlowControlHandler.channelRead(FlowControlHandler.java:152)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352)
        at io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:93)
        at org.kafkahq.middlewares.HttpServerAccessLogHandler.channelRead(HttpServerAccessLogHandler.java:95)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352)
        at io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.fireChannelRead(CombinedChannelDuplexHandler.java:438)
        at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:328)
        at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:302)
        at io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:253)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352)
        at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:287)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352)
        at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1422)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360)
        at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:931)
        at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163)
        at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:700)
        at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:635)
        at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:552)
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:514)
        at io.netty.util.concurrent.SingleThreadEventExecutor$6.run(SingleThreadEventExecutor.java:1044)
        at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
        at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
        at java.base/java.lang.Thread.run(Thread.java:834)

@tchiotludo
Owner

Things I see:
Your yaml is wrong; the kafkahq: root is missing:

kafkahq:
  cluster-name:
    properties:
      bootstrap.servers: "strimzi-kafka-control-kafka-bootstrap.cluster-name.svc.cluster.local:9092"
    connect:
      url: "http://x.x.x.x:9094" 

The curl http://localhost:8080/env can't display this error, since this endpoint doesn't try to connect to Kafka. It should give this kind of output:

{
  "activeEnvironments": [
    "dev"
  ],
  "packages": [
    "org.kafkahq"
  ],
  "propertySources": [
    {
      "........................."
    } 
  ]
}

I forgot that this endpoint is disabled by default; you need to add this to your configuration files:

endpoints:
    env:
        enabled: true
        sensitive: true

Hope it helps.
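Putting the thread's two fixes together, the helm secrets value would presumably look like the sketch below. The `connections:` level comes from the minimal example earlier in the thread; whether the `endpoints:` block belongs in the same file or in the plain configuration value is an assumption about how the chart merges config.

```yaml
secrets: |
  kafkahq:
    connections:
      cluster-name:
        properties:
          bootstrap.servers: "strimzi-kafka-control-kafka-bootstrap.cluster-name.svc.cluster.local:9092"
        connect:
          url: "http://x.x.x.x:9094"
  # Assumed placement: enables the /env debugging endpoint discussed above.
  endpoints:
    env:
      enabled: true
      sensitive: true
```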

@eilonk
Author

eilonk commented Jan 15, 2020

Hi, thanks for the extra help.
I added everything you listed, but I still get the errors in the logs (can't find my cluster).
I looked at the application.yml file to see if it gets the values after deployment, but the file doesn't look affected. Hope this can shed some light on why it isn't working:

micronaut:
  application:
    name: kafkahq
  io:
    watch:
      paths: src/main
      restart: false # enabled dev server with env vars MICRONAUT_IO_WATCH_RESTART=true
  router:
    static-resources:
      static:
        paths: classpath:static
        mapping: "${kafkahq.server.base-path:}/static/**"
  security:
    enabled: true
    endpoints:
      login:
        enabled: true
        path: "${kafkahq.server.base-path:}/login"
      logout:
        enabled: true
        path: "${kafkahq.server.base-path:}/logout"
        get-allowed: true
    session:
      enabled: true
      login-success-target-url: "${kafkahq.server.base-path:}/"
      logout-target-url: "${kafkahq.server.base-path:}/"
      forbidden-target-url: "${kafkahq.server.base-path:}/login/forbidden"
      unauthorized-target-url: "${kafkahq.server.base-path:}/login/unauthorized"
      login-failure-target-url: "${kafkahq.server.base-path:}/login/failed"
    intercept-url-map:
      - pattern: "${kafkahq.server.base-path:}/static/**"
        access: "isAnonymous()"
  caches:
    kafka-wrapper:
      record-stats: true
      expire-after-write: 0s

endpoints:
  all:
    path: "${kafkahq.server.base-path:}"
  health:
    enabled: true
    sensitive: false
    details-visible: anonymous
  info:
    enabled: true
    sensitive: false
  metrics:
    enabled: true
    sensitive: false
    export:
      prometheus:
        enabled: true
        step: PT1M
        descriptions: true
  prometheus:
    enabled: true
    sensitive: false
  caches:
    enabled: true
    sensitive: false

jackson:
  module-scan: false

kafkahq:
  server:
    base-path: ""
    access-log:
      enabled: true
      name: org.kafkahq.log.access
      format: "[Date: {}] [Duration: {} ms] [Url: {} {} {}] [Status: {}] [Ip: {}] [Length: {}] [Port: {}]"

  clients-defaults:
    consumer:
      properties:
        max.poll.records: 50
        isolation.level: read_committed
        group.id: KafkaHQ
        enable.auto.commit: "false"
        default.api.timeout.ms: 15000

  pagination:
    page-size: 25
    threads: 16

  topic:
    default-view: HIDE_INTERNAL
    replication: 1
    retention: 86400000
    partition: 1
    internal-regexps:
      - "^_.*$"
      - "^.*_schemas$"
      - "^.*connect-config$"
      - "^.*connect-offsets$1"
      - "^.*connect-status$"
    stream-regexps:
      - "^.*-changelog$"
      - "^.*-repartition$"
      - "^.*-rekey$"
    skip-consumer-groups: false

  topic-data:
    sort: OLDEST
    size: 50
    poll-timeout: 1000

  security:
    default-groups:
      - admin
    groups:
      admin:
        roles:
        - topic/read
        - topic/insert
        - topic/delete
        - topic/config/update
        - node/read
        - node/config/update
        - topic/data/read
        - topic/data/insert
        - topic/data/delete
        - group/read
        - group/delete
        - group/offsets/update
        - registry/read
        - registry/insert
        - registry/update
        - registry/delete
        - registry/version/delete
        - acls/read
        - connect/read
        - connect/insert
        - connect/update
        - connect/delete
        - connect/state/update
      reader:
        roles:
          - topic/read
          - node/read
          - topic/data/read
          - group/read
          - registry/read
          - acls/read
          - connect/read

When I try the curl it still displays the same internal server error.
I think the problem is only that it can't find the cluster, but I don't really understand why.

@eilonk
Author

eilonk commented Jan 15, 2020

I wrote the configuration file again and it worked.

I think the problem was the double use of configuration and secrets, which both required similar values.
If someone is looking at this with the same problem - make sure the configuration can accept strings (use |) and that your YAML is accurate.
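The `|` mentioned above is YAML's literal block scalar: it turns everything indented beneath it into one multi-line string instead of nested mappings, which is what the chart expects for these values. A minimal sketch, reusing the connection from this thread:

```yaml
# "secrets" here is a single literal string (note the "|"), not nested YAML maps;
# the chart writes this string out as the application's config file.
secrets: |
  kafkahq:
    connections:
      cluster-name:
        properties:
          bootstrap.servers: "strimzi-kafka-control-kafka-bootstrap.cluster-name:9092"
```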

Thanks for the help, @tchiotludo!

@eilonk eilonk closed this as completed Jan 15, 2020
ghost pushed a commit that referenced this issue Jul 17, 2020
* fixed bug

* urls in topic page are dynamic

* urls in connect are dynamic

* urls in node page are dynamic

* url in schema page are dynamic

* dropdown redirecting to error page when dropdown value is empty fixed in connect create

* searchbar responsive again

* search bar responsive in big screens