
Kibana becomes unavailable because of "Data too large". #56500

Open
avarf opened this issue Jan 31, 2020 · 2 comments
Labels
Team:Core (Core services & architecture: plugins, logging, config, saved objects, http, ES client, i18n, etc), triage_needed

Comments


avarf commented Jan 31, 2020

Kibana version:
6.7.0

Elasticsearch version:
7.3.0

Server OS version:
Ubuntu 18.04

Browser version:
Different browsers with different versions

Browser OS version:
Ubuntu 18.04

Original install method (e.g. download page, yum, from source, etc.):
Helm chart

Describe the bug:
I have deployed an ELK stack and after some weeks today Kibana stopped working and whenever I wanted to access it (just opening the UI) I was getting error 500 and before each error 500 I see Data too large:

{"type":"error","@timestamp":"2020-01-31T13:34:08Z","tags":[],"pid":1,"level":"error","error":{"message":"[parent] Data too large, data for [<http_request>] would be [1063054000/1013.8mb], which is larger than the limit of [1011774259/964.9mb], real usage: [1063054000/1013.8mb], new bytes reserved: [0/0b], usages [request=0/0b, fielddata=0/0b, in_flight_requests=0/0b, accounting=772545528/736.7mb]: [circuit_breaking_exception] [parent] Data too large, data for [<http_request>] would be [1063054000/1013.8mb], which is larger than the limit of [1011774259/964.9mb], real usage: [1063054000/1013.8mb], new bytes reserved: [0/0b], usages [request=0/0b, fielddata=0/0b, in_flight_requests=0/0b, accounting=772545528/736.7mb], with { bytes_wanted=1063054000 & bytes_limit=1011774259 & durability=\"PERMANENT\" }","name":"Error","stack":"[circuit_breaking_exception] [parent] Data too large, data for [<http_request>] would be [1063054000/1013.8mb], which is larger than the limit of [1011774259/964.9mb], real usage: [1063054000/1013.8mb], new bytes reserved: [0/0b], usages [request=0/0b, fielddata=0/0b, in_flight_requests=0/0b, accounting=772545528/736.7mb], with { bytes_wanted=1063054000 & bytes_limit=1011774259 & durability=\"PERMANENT\" } :: {\"path\":\"/.kibana/doc/config%3A6.7.0\",\"query\":{},\"statusCode\":429,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[{\\\"type\\\":\\\"circuit_breaking_exception\\\",\\\"reason\\\":\\\"[parent] Data too large, data for [<http_request>] would be [1063054000/1013.8mb], which is larger than the limit of [1011774259/964.9mb], real usage: [1063054000/1013.8mb], new bytes reserved: [0/0b], usages [request=0/0b, fielddata=0/0b, in_flight_requests=0/0b, accounting=772545528/736.7mb]\\\",\\\"bytes_wanted\\\":1063054000,\\\"bytes_limit\\\":1011774259,\\\"durability\\\":\\\"PERMANENT\\\"}],\\\"type\\\":\\\"circuit_breaking_exception\\\",\\\"reason\\\":\\\"[parent] Data too large, data for [<http_request>] would be [1063054000/1013.8mb], which is larger than the limit of [1011774259/964.9mb], real usage: [1063054000/1013.8mb], new bytes reserved: [0/0b], usages [request=0/0b, fielddata=0/0b, in_flight_requests=0/0b, accounting=772545528/736.7mb]\\\",\\\"bytes_wanted\\\":1063054000,\\\"bytes_limit\\\":1011774259,\\\"durability\\\":\\\"PERMANENT\\\"},\\\"status\\\":429}\"}\n    at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n    at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n    at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)\n    at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)\n    at IncomingMessage.emit (events.js:194:15)\n    at endReadableNT (_stream_readable.js:1103:12)\n    at process._tickCallback (internal/process/next_tick.js:63:19)"},"url":{"protocol":null,"slashes":null,"auth":null,"host":null,"port":null,"hostname":null,"hash":null,"search":null,"query":{},"pathname":"/app/kibana","path":"/app/kibana","href":"/app/kibana"},"message":"[parent] Data too large, data for [<http_request>] would be [1063054000/1013.8mb], which is larger than the limit of [1011774259/964.9mb], real usage: [1063054000/1013.8mb], new bytes reserved: [0/0b], usages [request=0/0b, fielddata=0/0b, in_flight_requests=0/0b, accounting=772545528/736.7mb]: [circuit_breaking_exception] [parent] Data too large, data for [<http_request>] would be [1063054000/1013.8mb], which is 
larger than the limit of [1011774259/964.9mb], real usage: [1063054000/1013.8mb], new bytes reserved: [0/0b], usages [request=0/0b, fielddata=0/0b, in_flight_requests=0/0b, accounting=772545528/736.7mb], with { bytes_wanted=1063054000 & bytes_limit=1011774259 & durability=\"PERMANENT\" }"}
{"type":"response","@timestamp":"2020-01-31T13:34:08Z","tags":[],"pid":1,"method":"get","statusCode":500,"req":{"url":"/app/kibana","method":"get","headers":{"host":"10.203.20.160:31630","connection":"keep-alive","upgrade-insecure-requests":"1","user-agent":"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.88 Safari/537.36","accept":"text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9","accept-encoding":"gzip, deflate","accept-language":"en-US,en;q=0.9"},"remoteAddress":"10.4.74.0","userAgent":"10.4.74.0"},"res":{"statusCode":500,"responseTime":6,"contentLength":9},"message":"GET /app/kibana 500 6ms - 9.0B"}

After some searching I found that this is related to the Elasticsearch heap size and nothing is actually wrong with Kibana itself; after increasing the Elasticsearch heap size, Kibana started working properly again.
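For anyone hitting the same thing with the Elastic Helm chart, the heap can be raised through the chart values. A minimal sketch, assuming the elastic/elasticsearch chart and a release named elasticsearch; the heap and memory numbers are assumptions and should be sized for your nodes, with -Xms and -Xmx equal and well below the container memory limit:

```sh
# Sketch only: bump the Elasticsearch JVM heap and container memory via Helm values.
helm upgrade elasticsearch elastic/elasticsearch \
  --set esJavaOpts="-Xms2g -Xmx2g" \
  --set resources.requests.memory=4Gi \
  --set resources.limits.memory=4Gi
```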

Expected behavior:
Instead of making Kibana unavailable because Elasticsearch is under heap pressure, I expect the Kibana dashboard to load and then show an error message explaining that the data is too large for the Elasticsearch heap.

@legrego added the Team:Core (Core services & architecture: plugins, logging, config, saved objects, http, ES client, i18n, etc) and triage_needed labels Jan 31, 2020
@elasticmachine (Contributor) commented:

Pinging @elastic/kibana-platform (Team:Platform)

@adampankow commented:

I can say the error message is confusing. It gives the impression the error is reported from Kibana, not Elasticsearch. The only indicator might be the fact that it shows a memory maximum of ~1 GB, versus Kibana's default of 1.4 GB.
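A quick way to confirm which side the limit belongs to is to compare the ~964.9mb figure in the error against the Elasticsearch nodes' heap. A sketch, with host/port and authentication as assumptions:

```sh
# Sketch only: if heap.max here lines up with the ~964.9mb limit in the error,
# the circuit breaker is on the Elasticsearch side, not Kibana's Node.js heap.
curl -s 'http://localhost:9200/_cat/nodes?v&h=name,heap.current,heap.percent,heap.max'
```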
