
[Bug]: HTTP Error: grpc stream error: rpc error: code = ResourceExhausted desc = grpc: received message larger than max (7833328 vs. 4194304) #4350

Closed
SOF3 opened this issue Mar 29, 2023 · 3 comments

SOF3 commented Mar 29, 2023

What happened?

As a user with large traces, I want to configure the gRPC plugin's max receive message length so that Jaeger can display large traces (e.g. with more than 10,000 spans).

Steps to reproduce

  1. Send a large trace response from a gRPC storage plugin
  2. jaeger-query returns the error shown below

Expected behavior

The query service should allow increasing the maximum gRPC message size, in parity with the collector.
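
Jaeger does not expose such a knob for the query-side storage gRPC client today. As a minimal sketch (not an existing Jaeger flag), the grpc-go option that a new configuration setting would presumably map to looks like this; the address and size are placeholders chosen for illustration:

```go
package main

import (
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	// Raise the default 4 MiB receive limit to 16 MiB (illustrative value).
	const maxRecvSize = 16 * 1024 * 1024

	conn, err := grpc.Dial(
		"remote-storage:17271", // placeholder address of the storage plugin
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		grpc.WithDefaultCallOptions(grpc.MaxCallRecvMsgSize(maxRecvSize)),
	)
	if err != nil {
		log.Fatalf("failed to dial remote storage: %v", err)
	}
	defer conn.Close()

	_ = conn // jaeger-query would build its SpanReader client on top of this connection
}
```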

Relevant log output

{
  "level": "error",
  "ts": 1680074638.9349482,
  "caller": "app/http_handler.go:495",
  "msg": "HTTP handler, Internal Server Error",
  "error": "grpc stream error: rpc error: code = ResourceExhausted desc = grpc: received message larger than max (7833328 vs. 4194304)",
  "stacktrace": "github.com/jaegertracing/jaeger/cmd/query/app.(*APIHandler).handleError\n\tgithub.com/jaegertracing/jaeger/cmd/query/app/http_handler.go:495\ngithub.com/jaegertracing/jaeger/cmd/query/app.(*APIHandler).getTrace\n\tgithub.com/jaegertracing/jaeger/cmd/query/app/http_handler.go:437\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2110\ngithub.com/opentracing-contrib/go-stdlib/nethttp.MiddlewareFunc.func5\n\tgithub.com/opentracing-contrib/go-stdlib@v1.0.0/nethttp/server.go:154\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2110\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2110\ngithub.com/gorilla/mux.(*Router).ServeHTTP\n\tgithub.com/gorilla/mux@v1.8.0/mux.go:210\ngithub.com/jaegertracing/jaeger/cmd/query/app.additionalHeadersHandler.func1\n\tgithub.com/jaegertracing/jaeger/cmd/query/app/additional_headers_handler.go:28\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2110\ngithub.com/gorilla/handlers.CompressHandlerLevel.func1\n\tgithub.com/gorilla/handlers@v1.5.1/compress.go:100\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2110\ngithub.com/gorilla/handlers.recoveryHandler.ServeHTTP\n\tgithub.com/gorilla/handlers@v1.5.1/recovery.go:78\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2951\nnet/http.(*conn).serve\n\tnet/http/server.go:1992"
}

Screenshot


Additional context

No response

Jaeger backend version

v1.42

SDK

custom storage plugin

Pipeline

In a nutshell, a custom storage plugin that mutates the trace.

See https://github.com/kubewharf/kelemetry/blob/14752b56543603fe3c84bc817cb437ede51fe1d6/docs/DEPLOY.md for full pipeline, but it is mostly irrelevant.

Storage backend

custom storage backend

Operating system

Linux

Deployment model

Kubernetes

Deployment configs

N/A
yurishkuro (Member) commented

Which storage implementation are you using? The message limit is already 4 MB for the gRPC client running in jaeger-query. So while it is technically possible to increase its max message size, the client already uses gRPC streaming to allow larger volumes of data to be passed. In other words, your remote storage that implements the Jaeger remote storage API could be configured to return results in smaller chunks (< 4 MB), as sketched below.

But having said that, I don't object to having a configuration option in Jaeger too.
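
To make the chunking workaround concrete, here is a rough sketch of a remote-storage server splitting a large trace across multiple messages on the GetTrace stream so that each message stays under the 4 MB default. The storage_v1 type names and the loadSpans helper are assumptions about the plugin API, not verified signatures:

```go
package storageplugin

import (
	"github.com/jaegertracing/jaeger/model"
	"github.com/jaegertracing/jaeger/proto-gen/storage_v1"
)

// chunkSize is chosen so that each streamed message stays well under 4 MB;
// tune it to your typical span sizes.
const chunkSize = 500

// spanReaderServer sketches only GetTrace; the other SpanReaderPlugin
// methods are omitted for brevity.
type spanReaderServer struct{}

func (s *spanReaderServer) GetTrace(req *storage_v1.GetTraceRequest, stream storage_v1.SpanReaderPlugin_GetTraceServer) error {
	// loadSpans is a hypothetical lookup in the plugin's backing store.
	spans := loadSpans(req.TraceID)

	for start := 0; start < len(spans); start += chunkSize {
		end := start + chunkSize
		if end > len(spans) {
			end = len(spans)
		}
		// Each Send carries only a slice of the trace, keeping every message small.
		if err := stream.Send(&storage_v1.SpansResponseChunk{Spans: spans[start:end]}); err != nil {
			return err
		}
	}
	return nil
}

// loadSpans is a stand-in for the plugin's real span lookup.
func loadSpans(traceID model.TraceID) []model.Span { return nil }
```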


SOF3 commented Apr 18, 2023

I am using a custom storage plugin that doesn't collect data through jaeger-collector. I am just using Jaeger as a display frontend.

jkowall (Contributor) commented Jun 8, 2024

This should be fixed in Jaeger v2, once we have moved to OpenTelemetry; feel free to re-open if not.

jkowall closed this as not planned on Jun 8, 2024.