Elasticsearch version: 5.0.0
Plugins installed: []
JVM version: jre1.8.0_111
OS version: Windows Server 2016 Datacenter
Description of the problem including expected versus actual behavior:
When executing a query with a cardinality aggregation, all data nodes ran out of memory and died. I would expect the circuit breaker to kick in and stop the query.
Steps to reproduce:
- A user created a Kibana visualization that did a unique count of docvalue_1 split by docvalue_2, which per the stack trace below amounts to a cardinality aggregation nested under a terms aggregation (a sketch of the equivalent request follows this list).
- Kibana timed the query out after 30s, but ES did not fare so well.
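For clarity, this is roughly the shape of the request such a visualization would issue. The index name is hypothetical and Kibana's time filter is omitted, so treat it as a minimal sketch of the reproducing aggregation rather than the exact query:

```
POST /my-index/_search
{
  "size": 0,
  "aggs": {
    "split_by_docvalue_2": {
      "terms": { "field": "docvalue_2" },
      "aggs": {
        "unique_docvalue_1": {
          "cardinality": { "field": "docvalue_1" }
        }
      }
    }
  }
}
```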
The four data nodes each have 28GB of physical memory, with half of it dedicated to the ES heap:
-Xss1m
-Xms14g
-Xmx14g
bootstrap.memory_lock: true
The index has 12 shards and is about 400GB overall, evenly distributed across the shards. Regardless of the setup, my hope would be that running a dangerous query trips the circuit breaker rather than killing the nodes.
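For reference, breaker usage and the effective limits on the nodes can be inspected with the standard APIs below; this is included only as a diagnostic sketch, since the output was not captured at the time of the failure:

```
GET /_nodes/stats/breaker

GET /_cluster/settings?include_defaults=true
```

The trace that follows shows the OOM happening while HyperLogLogPlusPlus.ensureCapacity grows a BigArrays byte array, i.e. the kind of per-request aggregation memory the request circuit breaker is meant to cover.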
Provide logs (if relevant):
[2016-12-02T01:19:54,982][ERROR][o.e.b.ElasticsearchUncaughtExceptionHandler] [AI-DATA-01] fatal error in thread [elasticsearch[AI-DATA-01][search][T#4]], exiting
java.lang.OutOfMemoryError: Java heap space
at org.elasticsearch.common.util.PageCacheRecycler$1.newInstance(PageCacheRecycler.java:99) ~[elasticsearch-5.0.0.jar:5.0.0]
at org.elasticsearch.common.util.PageCacheRecycler$1.newInstance(PageCacheRecycler.java:96) ~[elasticsearch-5.0.0.jar:5.0.0]
at org.elasticsearch.common.recycler.DequeRecycler.obtain(DequeRecycler.java:53) ~[elasticsearch-5.0.0.jar:5.0.0]
at org.elasticsearch.common.recycler.AbstractRecycler.obtain(AbstractRecycler.java:33) ~[elasticsearch-5.0.0.jar:5.0.0]
at org.elasticsearch.common.recycler.DequeRecycler.obtain(DequeRecycler.java:28) ~[elasticsearch-5.0.0.jar:5.0.0]
at org.elasticsearch.common.recycler.FilterRecycler.obtain(FilterRecycler.java:39) ~[elasticsearch-5.0.0.jar:5.0.0]
at org.elasticsearch.common.recycler.Recyclers$3.obtain(Recyclers.java:119) ~[elasticsearch-5.0.0.jar:5.0.0]
at org.elasticsearch.common.recycler.FilterRecycler.obtain(FilterRecycler.java:39) ~[elasticsearch-5.0.0.jar:5.0.0]
at org.elasticsearch.common.util.PageCacheRecycler.bytePage(PageCacheRecycler.java:147) ~[elasticsearch-5.0.0.jar:5.0.0]
at org.elasticsearch.common.util.AbstractBigArray.newBytePage(AbstractBigArray.java:112) ~[elasticsearch-5.0.0.jar:5.0.0]
at org.elasticsearch.common.util.BigByteArray.<init>(BigByteArray.java:44) ~[elasticsearch-5.0.0.jar:5.0.0]
at org.elasticsearch.common.util.BigArrays.newByteArray(BigArrays.java:464) ~[elasticsearch-5.0.0.jar:5.0.0]
at org.elasticsearch.common.util.BigArrays.resize(BigArrays.java:488) ~[elasticsearch-5.0.0.jar:5.0.0]
at org.elasticsearch.common.util.BigArrays.grow(BigArrays.java:502) ~[elasticsearch-5.0.0.jar:5.0.0]
at org.elasticsearch.search.aggregations.metrics.cardinality.HyperLogLogPlusPlus.ensureCapacity(HyperLogLogPlusPlus.java:197) ~[elasticsearch-5.0.0.jar:5.0.0]
at org.elasticsearch.search.aggregations.metrics.cardinality.HyperLogLogPlusPlus.collect(HyperLogLogPlusPlus.java:232) ~[elasticsearch-5.0.0.jar:5.0.0]
at org.elasticsearch.search.aggregations.metrics.cardinality.CardinalityAggregator$DirectCollector.collect(CardinalityAggregator.java:198) ~[elasticsearch-5.0.0.jar:5.0.0]
at org.elasticsearch.search.aggregations.bucket.BucketsAggregator.collectExistingBucket(BucketsAggregator.java:80) ~[elasticsearch-5.0.0.jar:5.0.0]
at org.elasticsearch.search.aggregations.bucket.terms.GlobalOrdinalsStringTermsAggregator$2.collect(GlobalOrdinalsStringTermsAggregator.java:128) ~[elasticsearch-5.0.0.jar:5.0.0]
at org.elasticsearch.search.aggregations.LeafBucketCollector.collect(LeafBucketCollector.java:82) ~[elasticsearch-5.0.0.jar:5.0.0]
at org.apache.lucene.search.MultiCollector$MultiLeafCollector.collect(MultiCollector.java:174) ~[lucene-core-6.2.0.jar:6.2.0 764d0f19151dbff6f5fcd9fc4b2682cf934590c5 - mike - 2016-08-20 05:39:36]
at org.apache.lucene.search.Weight$DefaultBulkScorer.scoreAll(Weight.java:221) ~[lucene-core-6.2.0.jar:6.2.0 764d0f19151dbff6f5fcd9fc4b2682cf934590c5 - mike - 2016-08-20 05:39:36]
at org.apache.lucene.search.Weight$DefaultBulkScorer.score(Weight.java:172) ~[lucene-core-6.2.0.jar:6.2.0 764d0f19151dbff6f5fcd9fc4b2682cf934590c5 - mike - 2016-08-20 05:39:36]
at org.apache.lucene.search.BulkScorer.score(BulkScorer.java:39) ~[lucene-core-6.2.0.jar:6.2.0 764d0f19151dbff6f5fcd9fc4b2682cf934590c5 - mike - 2016-08-20 05:39:36]
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:669) ~[lucene-core-6.2.0.jar:6.2.0 764d0f19151dbff6f5fcd9fc4b2682cf934590c5 - mike - 2016-08-20 05:39:36]
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:473) ~[lucene-core-6.2.0.jar:6.2.0 764d0f19151dbff6f5fcd9fc4b2682cf934590c5 - mike - 2016-08-20 05:39:36]
at org.elasticsearch.search.query.QueryPhase.execute(QueryPhase.java:370) ~[elasticsearch-5.0.0.jar:5.0.0]
at org.elasticsearch.search.query.QueryPhase.execute(QueryPhase.java:106) ~[elasticsearch-5.0.0.jar:5.0.0]
at org.elasticsearch.indices.IndicesService.lambda$loadIntoContext$17(IndicesService.java:1109) ~[elasticsearch-5.0.0.jar:5.0.0]
at org.elasticsearch.indices.IndicesService$$Lambda$1467/1254300496.load(Unknown Source) ~[?:?]
at org.elasticsearch.indices.AbstractIndexShardCacheEntity.loadValue(AbstractIndexShardCacheEntity.java:73) ~[elasticsearch-5.0.0.jar:5.0.0]
at org.elasticsearch.indices.IndicesRequestCache$Loader.load(IndicesRequestCache.java:148) ~[elasticsearch-5.0.0.jar:5.0.0]