Request Entity Too Large when connecting to AWS ElasticSearch #2192
Comments
The docker command you referenced uses lower bulk settings than the default - that is a good way to debug this. Could you please confirm that this configuration works and then suddenly stops? After what time duration does Jaeger start failing?
Yes - it works and then suddenly starts failing, with this lower configuration too.
I can try to lower it further. And thank you for your fast response!
I can confirm the same happens even with ES_BULK_ACTIONS=1 and ES_BULK_SIZE=1000.
Hi. I was sending the spans in batches of 1000. I reduced it to 100. Same result. It stops working after a while.
Could you please do a test against upstream Elasticsearch (https://www.docker.elastic.co/)? Just run it as a docker container and configure Jaeger to use it.
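For reference, a minimal sketch of running upstream Elasticsearch 6.8 locally for such a test — the container name and port mapping here are arbitrary choices, not part of the original report:

```shell
# Run a single-node Elasticsearch 6.8 locally for testing.
# single-node discovery skips the production bootstrap checks.
docker run -d --name es-test \
  -p 9200:9200 \
  -e "discovery.type=single-node" \
  docker.elastic.co/elasticsearch/elasticsearch:6.8.0

# Verify it is up before pointing Jaeger at it.
curl -s http://localhost:9200
```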
Done, working for now (as before). I will report if it fails.
With a direct connection to upstream Elasticsearch it works perfectly. It is even the same version (6.8); however, something in AWS ElasticSearch makes it fail after a while. Any ideas?
We are not using it; maybe somebody from @jaegertracing/elasticsearch has ideas? You could also raise it with AWS support.
You might be hitting an AWS ES request size limit. As for why a container restart is needed, I would think it's because the error returned by AWS ESS causes Jaeger to re-attempt the send. But obviously, if the data was too big the first time, it will only grow larger as more spans arrive and the batch is re-sent later.
Ok, but the question then is why the ES_BULK_SIZE option does not work. I set it to a value (1K) much lower than the AWS limit (-e ES_BULK_SIZE=1000). Maybe it is not the correct format?
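For illustration, a hedged sketch of how these settings are typically passed to the all-in-one container. Note that ES_BULK_SIZE is a byte limit while ES_BULK_ACTIONS is a span-count limit, so both may need lowering. The hostname, ports, and values shown are assumptions for debugging, not recommendations:

```shell
# ES_BULK_SIZE caps the bulk request payload in bytes;
# ES_BULK_ACTIONS caps the number of spans per bulk request;
# COLLECTOR_ZIPKIN_HTTP_PORT enables the Zipkin HTTP endpoint used here.
docker run -d --name jaeger \
  -e SPAN_STORAGE_TYPE=elasticsearch \
  -e ES_SERVER_URLS=http://elasticsearch:9200 \
  -e COLLECTOR_ZIPKIN_HTTP_PORT=9411 \
  -e ES_BULK_SIZE=1000 \
  -e ES_BULK_ACTIONS=1 \
  -p 9411:9411 -p 16686:16686 \
  jaegertracing/all-in-one:1.17
```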
I'm not entirely sure, but maybe one of the applications is buffering a lot of spans, which causes that behavior?
I'm facing the same issue using Open Distro 7.6.2 (unfortunately this is the only version available on our cloud provider). Is there a way to limit the number of spans sent in one bulk request using the jaeger-operator helm chart? That would definitely be the easiest solution to this problem.
Requirement
Sending traces from a client to an Elasticsearch backend (as a service in AWS), using the Zipkin protocol over HTTP.
Problem
It works perfectly at first, but after a while Jaeger starts skipping all traces and sends nothing further to Elasticsearch; a restart of the container is needed for it to work again.
I am using stand-alone product, version 1.17.0.
These messages appear in the log for each request that is discarded:
I tried the ES_BULK_SIZE and ES_BULK_ACTIONS parameters without success. This is how the docker container is started:
Thank you!