OOM on percolators #6553
Comments
Does the mapping of the documents being percolated contain nested object types?
Yes, they do.
Do you rely on the nested query/filter in your percolator queries? Before 1.1.x these queries silently failed and just didn't match anything; since 1.1.x, nested support has been added to the percolator. Each nested object has its own memory index that later on gets wrapped by a composite reader to simulate a single index; this is what supports the nested query/filter in the percolate API. Roughly how many nested objects does each document being percolated have? Just to verify: if you add a new mapping with the type set to …
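Roughly, the mechanism looks like this (a minimal Lucene 4.x sketch with placeholder field names and analyzer, not the actual Elasticsearch code):

```java
import java.util.ArrayList;
import java.util.List;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.MultiReader;
import org.apache.lucene.index.memory.MemoryIndex;
import org.apache.lucene.util.Version;

public class NestedPercolateSketch {
    public static void main(String[] args) throws Exception {
        Analyzer analyzer = new StandardAnalyzer(Version.LUCENE_47);

        // One in-memory index per document: the root document plus each nested object.
        String[] docs = { "root document text", "nested object one", "nested object two" };
        List<IndexReader> subReaders = new ArrayList<IndexReader>();
        for (String text : docs) {
            MemoryIndex mi = new MemoryIndex();
            mi.addField("body", text, analyzer);
            subReaders.add(mi.createSearcher().getIndexReader());
        }

        // Wrap the per-document readers in a composite reader so a query
        // (e.g. a nested query/filter) sees them as a single tiny index.
        MultiReader composite = new MultiReader(subReaders.toArray(new IndexReader[0]));
        try {
            System.out.println("maxDoc = " + composite.maxDoc()); // 3
        } finally {
            composite.close(); // this varargs constructor closes the sub-readers too
        }
    }
}
```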
Every document has around 2–20 nested objects, and we have around 200,000 docs. Btw, we upgraded from 0.90.1x. I can test the object filtering later if you want me to.
No need for that, I found that this issue is relatively easy to reproduce.
The issue here is that the filter cache doesn't clean its cache immediately after the in-memory index has been destroyed. It keeps a reference to this index, since the index is the cache key, and only cleans it up after 60 seconds. Can you try setting the filter cache clean interval to a low value? For the percolator, nothing should be cached at all (filters, fielddata); that should be the right fix.
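Schematically, the pinning looks like this (a hypothetical Guava-based sketch; the key and value types are placeholders, not the real filter cache code):

```java
import java.util.concurrent.TimeUnit;

import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;

public class FilterCachePinningSketch {
    // Hypothetical stand-in for the filter cache: keys and values are
    // plain Objects here; in reality the key is derived from the reader.
    private final Cache<Object, Object> filterCache = CacheBuilder.newBuilder()
            .expireAfterWrite(60, TimeUnit.SECONDS) // analogue of the 60s clean interval
            .build();

    public void cacheFilter(Object readerKey, Object cachedDocIdSet) {
        // The cache holds a strong reference to the key. If the key is (or
        // references) the in-memory percolator index, that index stays
        // reachable -- and its heap stays allocated -- until the entry is
        // evicted, even though the index itself was already "destroyed".
        filterCache.put(readerKey, cachedDocIdSet);
    }
}
```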
Sorry, no need to check this. That alone won't fix it. The percolator doesn't close the sub memory readers properly, which is the primary reason why the issue you reported arises.
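Schematically (a sketch assuming Lucene's MultiReader; not the actual MultiDocumentPercolatorIndex code):

```java
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.MultiReader;

public class CloseSubReadersSketch {
    // If the composite reader is created with closeSubReaders=false,
    // closing it does NOT close the per-document memory-index readers,
    // so their buffers stay on the heap. They must be closed explicitly.
    static void percolateAndClose(IndexReader[] subReaders) throws Exception {
        MultiReader composite = new MultiReader(subReaders, false);
        try {
            // ... run the percolator query against `composite` ...
        } finally {
            composite.close(); // leaves sub-readers open
            for (IndexReader reader : subReaders) {
                reader.close(); // release each memory index
            }
        }
    }
}
```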
Hi Martijn! I'm a co-worker of Julian. You are right, the workaround doesn't fix the issue. Even 1ms doesn't make any difference.
@tiran @julianhille If you like, you can try out the PR that addresses this bug: #6578
Currently testing.
First feedback: it seems to work; I don't see the RAM usage rising that fast anymore. I will now run some more tests to check whether we see the same behavior as before.
Haven't found any other issue so far.
…percolator query parsing part. Closes elastic#6553
Conflicts:
	src/main/java/org/elasticsearch/index/query/HasChildFilterParser.java
	src/main/java/org/elasticsearch/index/query/HasChildQueryParser.java
	src/main/java/org/elasticsearch/index/query/functionscore/DecayFunctionParser.java
…the field data caches. Percolator: Never cache filters and field data in percolator for the percolator query parsing part. Closes #6553
@julianhille @tiran Thanks for test driving the PR #6578!
Steps to reproduce:
Facts:
[2014-06-18 16:06:32,277][DEBUG][action.percolate ] [Black Widow] [searching_06be3f3d970eb86180aee114ea873838][0], node[KxkxW_-5S4Sj36x8144e9w], [P], s[STARTED]: failed to executed [org.elasticsearch.action.percolate.PercolateRequest@1ad4f508]
org.elasticsearch.percolator.PercolateException: failed to percolate
at org.elasticsearch.action.percolate.TransportPercolateAction.shardOperation(TransportPercolateAction.java:198)
at org.elasticsearch.action.percolate.TransportPercolateAction.shardOperation(TransportPercolateAction.java:55)
at org.elasticsearch.action.support.broadcast.TransportBroadcastOperationAction$AsyncBroadcastAction$1.run(TransportBroadcastOperationAction.java:170)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.OutOfMemoryError: Java heap space
at org.apache.lucene.util.RecyclingIntBlockAllocator.getIntBlock(RecyclingIntBlockAllocator.java:82)
at org.apache.lucene.util.IntBlockPool.nextBuffer(IntBlockPool.java:155)
at org.apache.lucene.util.IntBlockPool.newSlice(IntBlockPool.java:168)
at org.apache.lucene.util.IntBlockPool.access$200(IntBlockPool.java:26)
at org.apache.lucene.util.IntBlockPool$SliceWriter.startNewSlice(IntBlockPool.java:274)
at org.apache.lucene.index.memory.MemoryIndex.addField(MemoryIndex.java:471)
at org.apache.lucene.index.memory.MemoryIndex.addField(MemoryIndex.java:395)
at org.apache.lucene.index.memory.MemoryIndex.addField(MemoryIndex.java:370)
at org.elasticsearch.percolator.MultiDocumentPercolatorIndex.indexDoc(MultiDocumentPercolatorIndex.java:88)
at org.elasticsearch.percolator.MultiDocumentPercolatorIndex.prepare(MultiDocumentPercolatorIndex.java:68)
at org.elasticsearch.percolator.PercolatorService.percolate(PercolatorService.java:232)
at org.elasticsearch.action.percolate.TransportPercolateAction.shardOperation(TransportPercolateAction.java:194)
... 5 more
[2014-06-18 16:07:57,493][DEBUG][action.percolate ] [Black Widow] [searching_06be3f3d970eb86180aee114ea873838][0], node[KxkxW_-5S4Sj36x8144e9w], [P], s[STARTED]: failed to executed [org.elasticsearch.action.percolate.PercolateRequest@4c2ffcd2]
org.elasticsearch.percolator.PercolateException: failed to percolate
at org.elasticsearch.action.percolate.TransportPercolateAction.shardOperation(TransportPercolateAction.java:198)
at org.elasticsearch.action.percolate.TransportPercolateAction.shardOperation(TransportPercolateAction.java:55)
at org.elasticsearch.action.support.broadcast.TransportBroadcastOperationAction$AsyncBroadcastAction$1.run(TransportBroadcastOperationAction.java:170)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.OutOfMemoryError: Java heap space
[2014-06-18 16:08:31,930][DEBUG][action.percolate ] [Black Widow] [searching_06be3f3d970eb86180aee114ea873838][0], node[KxkxW_-5S4Sj36x8144e9w], [P], s[STARTED]: failed to executed [org.elasticsearch.action.percolate.PercolateRequest@3efcd38f]
org.elasticsearch.percolator.PercolateException: failed to percolate
at org.elasticsearch.action.percolate.TransportPercolateAction.shardOperation(TransportPercolateAction.java:198)
at org.elasticsearch.action.percolate.TransportPercolateAction.shardOperation(TransportPercolateAction.java:55)
at org.elasticsearch.action.support.broadcast.TransportBroadcastOperationAction$AsyncBroadcastAction$1.run(TransportBroadcastOperationAction.java:170)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.OutOfMemoryError: Java heap space