
[0.2.0] consumers page is throwing error 500 #856

Closed
giom-l opened this issue Sep 6, 2021 · 4 comments · Fixed by #857
Labels
scope/backend type/bug Something isn't working
Milestone

Comments

giom-l commented Sep 6, 2021

Describe the bug
When I try to get all consumer groups from a cluster (which I assume has a large number of consumer groups), the server always responds with an HTTP 500 error.

[screenshot: consumer groups page showing the error message]

⚠️ Also note the typo in the error message: "Consumer Gropups"

When I get consumer groups for a single topic, it works perfectly.
It also works on a cluster that has only a few consumer groups (I have it working with a cluster that has 3).
Here is the stack trace generated by the server:

19:52:07.634 [kafka-admin-client-thread | adminclient-2] ERROR org.springframework.boot.autoconfigure.web.reactive.error.AbstractErrorWebExceptionHandler - [762b7b6a]  500 Server Error for HTTP GET "/api/clusters/kafka%20Studio%20Dev/consumer-groups"
java.lang.NullPointerException: null
	at java.util.Objects.requireNonNull(Unknown Source) ~[?:?]
	Suppressed: reactor.core.publisher.FluxOnAssembly$OnAssemblyException: 
Error has been observed at the following site(s):
	|_ checkpoint ⇢ Handler com.provectus.kafka.ui.controller.ConsumerGroupsController#getConsumerGroups(String, ServerWebExchange) [DispatcherHandler]
	|_ checkpoint ⇢ com.provectus.kafka.ui.config.ReadOnlyModeFilter [DefaultWebFilterChain]
	|_ checkpoint ⇢ com.provectus.kafka.ui.config.CustomWebFilter [DefaultWebFilterChain]
	|_ checkpoint ⇢ org.springframework.security.web.server.authorization.AuthorizationWebFilter [DefaultWebFilterChain]
	|_ checkpoint ⇢ org.springframework.security.web.server.authorization.ExceptionTranslationWebFilter [DefaultWebFilterChain]
	|_ checkpoint ⇢ org.springframework.security.web.server.authentication.logout.LogoutWebFilter [DefaultWebFilterChain]
	|_ checkpoint ⇢ org.springframework.security.web.server.savedrequest.ServerRequestCacheWebFilter [DefaultWebFilterChain]
	|_ checkpoint ⇢ org.springframework.security.web.server.context.SecurityContextServerWebExchangeWebFilter [DefaultWebFilterChain]
	|_ checkpoint ⇢ org.springframework.security.web.server.context.ReactorContextWebFilter [DefaultWebFilterChain]
	|_ checkpoint ⇢ org.springframework.security.web.server.header.HttpHeaderWriterWebFilter [DefaultWebFilterChain]
	|_ checkpoint ⇢ org.springframework.security.config.web.server.ServerHttpSecurity$ServerWebExchangeReactorContextWebFilter [DefaultWebFilterChain]
	|_ checkpoint ⇢ org.springframework.security.web.server.WebFilterChainProxy [DefaultWebFilterChain]
	|_ checkpoint ⇢ HTTP GET "/api/clusters/kafka%20Studio%20Dev/consumer-groups" [ExceptionHandlingWebHandler]
Stack trace:
		at java.util.Objects.requireNonNull(Unknown Source) ~[?:?]
		at java.util.stream.Collectors.lambda$uniqKeysMapAccumulator$1(Unknown Source) ~[?:?]
		at java.util.stream.ReduceOps$3ReducingSink.accept(Unknown Source) ~[?:?]
		at java.util.stream.ReferencePipeline$2$1.accept(Unknown Source) ~[?:?]
		at java.util.HashMap$EntrySpliterator.forEachRemaining(Unknown Source) ~[?:?]
		at java.util.stream.AbstractPipeline.copyInto(Unknown Source) ~[?:?]
		at java.util.stream.AbstractPipeline.wrapAndCopyInto(Unknown Source) ~[?:?]
		at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(Unknown Source) ~[?:?]
		at java.util.stream.AbstractPipeline.evaluate(Unknown Source) ~[?:?]
		at java.util.stream.ReferencePipeline.collect(Unknown Source) ~[?:?]
		at com.provectus.kafka.ui.util.ClusterUtil.filterConsumerGroupTopic(ClusterUtil.java:390) ~[classes!/:?]
		at com.provectus.kafka.ui.service.KafkaService.lambda$getConsumerGroups$40(KafkaService.java:381) ~[classes!/:?]
		at java.util.stream.ReferencePipeline$3$1.accept(Unknown Source) ~[?:?]
		at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(Unknown Source) ~[?:?]
		at java.util.stream.AbstractPipeline.copyInto(Unknown Source) ~[?:?]
		at java.util.stream.AbstractPipeline.wrapAndCopyInto(Unknown Source) ~[?:?]
		at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(Unknown Source) ~[?:?]
		at java.util.stream.AbstractPipeline.evaluate(Unknown Source) ~[?:?]
		at java.util.stream.ReferencePipeline.collect(Unknown Source) ~[?:?]
		at com.provectus.kafka.ui.service.KafkaService.lambda$getConsumerGroups$42(KafkaService.java:389) ~[classes!/:?]
		at reactor.core.publisher.FluxMapFuseable$MapFuseableSubscriber.onNext(FluxMapFuseable.java:107) ~[reactor-core-3.3.2.RELEASE.jar!/:3.3.2.RELEASE]
		at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1637) ~[reactor-core-3.3.2.RELEASE.jar!/:3.3.2.RELEASE]
		at reactor.core.publisher.MonoFlatMap$FlatMapInner.onNext(MonoFlatMap.java:241) ~[reactor-core-3.3.2.RELEASE.jar!/:3.3.2.RELEASE]
		at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1637) ~[reactor-core-3.3.2.RELEASE.jar!/:3.3.2.RELEASE]
		at reactor.core.publisher.MonoFlatMap$FlatMapInner.onNext(MonoFlatMap.java:241) ~[reactor-core-3.3.2.RELEASE.jar!/:3.3.2.RELEASE]
		at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1637) ~[reactor-core-3.3.2.RELEASE.jar!/:3.3.2.RELEASE]
		at reactor.core.publisher.MonoFlatMap$FlatMapInner.onNext(MonoFlatMap.java:241) ~[reactor-core-3.3.2.RELEASE.jar!/:3.3.2.RELEASE]
		at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1637) ~[reactor-core-3.3.2.RELEASE.jar!/:3.3.2.RELEASE]
		at reactor.core.publisher.MonoCollectList$MonoCollectListSubscriber.onComplete(MonoCollectList.java:121) ~[reactor-core-3.3.2.RELEASE.jar!/:3.3.2.RELEASE]
		at reactor.core.publisher.ParallelMergeSequential$MergeSequentialMain.drainLoop(ParallelMergeSequential.java:286) ~[reactor-core-3.3.2.RELEASE.jar!/:3.3.2.RELEASE]
		at reactor.core.publisher.ParallelMergeSequential$MergeSequentialMain.drain(ParallelMergeSequential.java:234) ~[reactor-core-3.3.2.RELEASE.jar!/:3.3.2.RELEASE]
		at reactor.core.publisher.ParallelMergeSequential$MergeSequentialMain.onComplete(ParallelMergeSequential.java:226) ~[reactor-core-3.3.2.RELEASE.jar!/:3.3.2.RELEASE]
		at reactor.core.publisher.ParallelMergeSequential$MergeSequentialInner.onComplete(ParallelMergeSequential.java:407) ~[reactor-core-3.3.2.RELEASE.jar!/:3.3.2.RELEASE]
		at reactor.core.publisher.FluxFlatMap$FlatMapMain.checkTerminated(FluxFlatMap.java:823) ~[reactor-core-3.3.2.RELEASE.jar!/:3.3.2.RELEASE]
		at reactor.core.publisher.FluxFlatMap$FlatMapMain.drainLoop(FluxFlatMap.java:589) ~[reactor-core-3.3.2.RELEASE.jar!/:3.3.2.RELEASE]
		at reactor.core.publisher.FluxFlatMap$FlatMapMain.innerComplete(FluxFlatMap.java:892) ~[reactor-core-3.3.2.RELEASE.jar!/:3.3.2.RELEASE]
		at reactor.core.publisher.FluxFlatMap$FlatMapInner.onComplete(FluxFlatMap.java:986) ~[reactor-core-3.3.2.RELEASE.jar!/:3.3.2.RELEASE]
		at reactor.core.publisher.FluxMapFuseable$MapFuseableSubscriber.onComplete(FluxMapFuseable.java:144) ~[reactor-core-3.3.2.RELEASE.jar!/:3.3.2.RELEASE]
		at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1638) ~[reactor-core-3.3.2.RELEASE.jar!/:3.3.2.RELEASE]
		at reactor.core.publisher.MonoFlatMap$FlatMapInner.onNext(MonoFlatMap.java:241) ~[reactor-core-3.3.2.RELEASE.jar!/:3.3.2.RELEASE]
		at reactor.core.publisher.MonoCreate$DefaultMonoSink.success(MonoCreate.java:156) ~[reactor-core-3.3.2.RELEASE.jar!/:3.3.2.RELEASE]
		at com.provectus.kafka.ui.util.ClusterUtil.lambda$toMono$0(ClusterUtil.java:64) ~[classes!/:?]
		at org.apache.kafka.common.internals.KafkaFutureImpl$WhenCompleteBiConsumer.accept(KafkaFutureImpl.java:177) [kafka-clients-2.8.0.jar!/:?]
		at org.apache.kafka.common.internals.KafkaFutureImpl$WhenCompleteBiConsumer.accept(KafkaFutureImpl.java:162) [kafka-clients-2.8.0.jar!/:?]
		at org.apache.kafka.common.internals.KafkaFutureImpl.complete(KafkaFutureImpl.java:221) [kafka-clients-2.8.0.jar!/:?]
		at org.apache.kafka.clients.admin.KafkaAdminClient$25.handleResponse(KafkaAdminClient.java:3362) [kafka-clients-2.8.0.jar!/:?]
		at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.handleResponses(KafkaAdminClient.java:1189) [kafka-clients-2.8.0.jar!/:?]
		at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.processRequests(KafkaAdminClient.java:1341) [kafka-clients-2.8.0.jar!/:?]
		at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1264) [kafka-clients-2.8.0.jar!/:?]
		at java.lang.Thread.run(Unknown Source) [?:?]
19:52:07.637 [kafka-admin-client-thread | adminclient-2] DEBUG org.springframework.http.codec.json.Jackson2JsonEncoder - [762b7b6a] Encoding [class ErrorResponse {
    code: 5000
    message: Unexpected internal error
    timestamp: 163095792 (truncated)...]
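The stack trace points at `Collectors.lambda$uniqKeysMapAccumulator$1`, which is the accumulator `Collectors.toMap` uses on JDK 9+; it calls `Objects.requireNonNull` on every mapped value, so a single consumer group that maps to a null value aborts the whole collect with exactly this NPE. A minimal stdlib-only sketch of that behavior (the map contents are made up for illustration):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.stream.Collectors;

public class ToMapNpeDemo {
    public static void main(String[] args) {
        // Hypothetical consumer-group data where one entry resolved to null,
        // e.g. a group whose topic/offset info could not be looked up.
        Map<String, String> groups = new HashMap<>();
        groups.put("group-1", "topic-a");
        groups.put("group-2", null); // the problematic entry

        try {
            // Collectors.toMap calls Objects.requireNonNull on each value,
            // so one null value fails the whole collect with an NPE.
            groups.entrySet().stream()
                  .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));
        } catch (NullPointerException e) {
            System.out.println("Collectors.toMap rejected a null value");
        }
    }
}
```

This would also explain why only the large cluster triggers it: the more groups, the more likely at least one of them carries unresolved (null) data.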


@giom-l giom-l added the type/bug Something isn't working label Sep 6, 2021
@germanosin germanosin added this to the 0.2.1 milestone Sep 7, 2021
@germanosin (Contributor)

Hi, @giom-l. Thanks for creating this issue. We'll fix it in the next minor version.

@germanosin germanosin linked a pull request Sep 7, 2021 that will close this issue
@germanosin (Contributor)

@giom-l I fixed a possible cause of the NPE. It should be available under the master tag in a few minutes. Could you please check it?
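The actual fix lives in PR #857; purely as an illustration, one common guard against this failure mode is to drop null-valued entries before collecting, so `Collectors.toMap` never sees a null (`withoutNullValues` is a hypothetical helper, not the project's API):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;
import java.util.stream.Collectors;

public class NullSafeCollect {
    // Hypothetical helper: filters out entries with null values before
    // collecting, so Collectors.toMap cannot hit its null check.
    static <K, V> Map<K, V> withoutNullValues(Map<K, V> in) {
        return in.entrySet().stream()
                 .filter(e -> Objects.nonNull(e.getValue()))
                 .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));
    }

    public static void main(String[] args) {
        Map<String, String> groups = new HashMap<>();
        groups.put("group-1", "topic-a");
        groups.put("group-2", null);
        // Only group-1 survives the filter; no NPE is thrown.
        System.out.println(withoutNullValues(groups));
    }
}
```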

@germanosin germanosin reopened this Sep 7, 2021

giom-l commented Sep 7, 2021

Hi @germanosin,
I tested the master branch, and it works :)
It took 58s to fetch all my consumer groups (582 consumer groups on this cluster, and no error).
However, the JSON response is not that big: is the fetch time mostly due to the response time from Kafka?

Besides this issue (which is solved for me), I noticed a call to the GitHub API. Is that expected?

Here is a screenshot showing both the response time and the call to the GitHub API.
[screenshot: browser network panel]

@germanosin (Contributor)

Hi @giom-l. Nice to hear that it works now.
Wow, we didn't expect numbers like that for the consumer groups page. It is probably slow because we query offsets for each consumer group, which adds 582 extra queries. They run in parallel, but that can still take a lot of time. Would you mind creating another issue for consumer group pagination?
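Not the project's actual code, but the fan-out described above can be sketched with the JDK alone: one offset lookup per group, with a fixed pool bounding how many run at once (`queryOffsets` is a stand-in for the real AdminClient round trip):

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class BoundedFanOut {
    // Stand-in for a per-group offset lookup; the real call goes over the
    // network to Kafka, so each one costs a round trip.
    static long queryOffsets(String group) {
        return group.hashCode(); // placeholder result
    }

    public static void main(String[] args) throws Exception {
        List<String> groups = IntStream.range(0, 582)
                .mapToObj(i -> "group-" + i)
                .collect(Collectors.toList());

        // Bound the fan-out: at most 20 offset queries in flight at once.
        ExecutorService pool = Executors.newFixedThreadPool(20);
        try {
            Map<String, Long> offsets = pool.invokeAll(
                    groups.stream()
                          .map(g -> (Callable<Map.Entry<String, Long>>) () ->
                                  Map.entry(g, queryOffsets(g)))
                          .collect(Collectors.toList()))
                .stream()
                .map(f -> {
                    try { return f.get(); }
                    catch (Exception e) { throw new RuntimeException(e); }
                })
                .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));
            System.out.println(offsets.size()); // one result per group
        } finally {
            pool.shutdown();
        }
    }
}
```

Even fully parallel, 582 round trips through a bounded pool take multiple batches, which is consistent with the long fetch time reported above.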

The GitHub API call is sent from the frontend to check the latest version and to suggest an update if you are not on it.
Thanks again for your questions and contribution.
