
OOM on percolators #6553

Closed · julianhille opened this issue Jun 18, 2014 · 13 comments

@julianhille

Steps to reproduce:

  • start 1.2.1 without any index
  • create an index and add percolators
  • start percolating (see the sketch below)
  • see RAM usage rising and rising
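
A minimal sketch of those steps against the 1.x REST API (index, field, and query names are hypothetical; in 1.x, percolator queries are registered under the reserved .percolator type):

```sh
# 1. create an index whose mapping contains a nested object type
curl -XPUT 'localhost:9200/test-index' -d '{
  "mappings": {
    "doc": {
      "properties": {
        "comments": {
          "type": "nested",
          "properties": { "text": { "type": "string" } }
        }
      }
    }
  }
}'

# 2. register a percolator query that relies on the nested query
curl -XPUT 'localhost:9200/test-index/.percolator/1' -d '{
  "query": {
    "nested": {
      "path": "comments",
      "query": { "match": { "comments.text": "elasticsearch" } }
    }
  }
}'

# 3. percolate documents in a loop and watch heap usage climb
for i in $(seq 1 200000); do
  curl -s -XGET 'localhost:9200/test-index/doc/_percolate' -d '{
    "doc": { "comments": [ { "text": "elasticsearch rocks" } ] }
  }' > /dev/null
done
```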

Facts:

  • We only have around 143 percolators in this test environment.
  • We percolate around 200,000 items against them.

[2014-06-18 16:06:32,277][DEBUG][action.percolate ] [Black Widow] [searching_06be3f3d970eb86180aee114ea873838][0], node[KxkxW_-5S4Sj36x8144e9w], [P], s[STARTED]: failed to executed [org.elasticsearch.action.percolate.PercolateRequest@1ad4f508]
org.elasticsearch.percolator.PercolateException: failed to percolate
at org.elasticsearch.action.percolate.TransportPercolateAction.shardOperation(TransportPercolateAction.java:198)
at org.elasticsearch.action.percolate.TransportPercolateAction.shardOperation(TransportPercolateAction.java:55)
at org.elasticsearch.action.support.broadcast.TransportBroadcastOperationAction$AsyncBroadcastAction$1.run(TransportBroadcastOperationAction.java:170)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.OutOfMemoryError: Java heap space
at org.apache.lucene.util.RecyclingIntBlockAllocator.getIntBlock(RecyclingIntBlockAllocator.java:82)
at org.apache.lucene.util.IntBlockPool.nextBuffer(IntBlockPool.java:155)
at org.apache.lucene.util.IntBlockPool.newSlice(IntBlockPool.java:168)
at org.apache.lucene.util.IntBlockPool.access$200(IntBlockPool.java:26)
at org.apache.lucene.util.IntBlockPool$SliceWriter.startNewSlice(IntBlockPool.java:274)
at org.apache.lucene.index.memory.MemoryIndex.addField(MemoryIndex.java:471)
at org.apache.lucene.index.memory.MemoryIndex.addField(MemoryIndex.java:395)
at org.apache.lucene.index.memory.MemoryIndex.addField(MemoryIndex.java:370)
at org.elasticsearch.percolator.MultiDocumentPercolatorIndex.indexDoc(MultiDocumentPercolatorIndex.java:88)
at org.elasticsearch.percolator.MultiDocumentPercolatorIndex.prepare(MultiDocumentPercolatorIndex.java:68)
at org.elasticsearch.percolator.PercolatorService.percolate(PercolatorService.java:232)
at org.elasticsearch.action.percolate.TransportPercolateAction.shardOperation(TransportPercolateAction.java:194)
... 5 more

[2014-06-18 16:07:57,493][DEBUG][action.percolate ] [Black Widow] [searching_06be3f3d970eb86180aee114ea873838][0], node[KxkxW_-5S4Sj36x8144e9w], [P], s[STARTED]: failed to executed [org.elasticsearch.action.percolate.PercolateRequest@4c2ffcd2]
org.elasticsearch.percolator.PercolateException: failed to percolate
at org.elasticsearch.action.percolate.TransportPercolateAction.shardOperation(TransportPercolateAction.java:198)
at org.elasticsearch.action.percolate.TransportPercolateAction.shardOperation(TransportPercolateAction.java:55)
at org.elasticsearch.action.support.broadcast.TransportBroadcastOperationAction$AsyncBroadcastAction$1.run(TransportBroadcastOperationAction.java:170)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.OutOfMemoryError: Java heap space
[2014-06-18 16:08:31,930][DEBUG][action.percolate ] [Black Widow] [searching_06be3f3d970eb86180aee114ea873838][0], node[KxkxW_-5S4Sj36x8144e9w], [P], s[STARTED]: failed to executed [org.elasticsearch.action.percolate.PercolateRequest@3efcd38f]
org.elasticsearch.percolator.PercolateException: failed to percolate
at org.elasticsearch.action.percolate.TransportPercolateAction.shardOperation(TransportPercolateAction.java:198)
at org.elasticsearch.action.percolate.TransportPercolateAction.shardOperation(TransportPercolateAction.java:55)
at org.elasticsearch.action.support.broadcast.TransportBroadcastOperationAction$AsyncBroadcastAction$1.run(TransportBroadcastOperationAction.java:170)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.OutOfMemoryError: Java heap space

@martijnvg
Member

Does the mapping of the documents being percolated contain nested object types?

@julianhille
Author

Yes, they do.

@martijnvg
Member

Do you rely on the nested query/filter in your percolator queries? Before 1.1.x these queries silently failed and just didn't match anything; since 1.1.x, nested support has been added to the percolator.

Each nested object has its own memory index that later gets wrapped by a composite reader to simulate a single index; this is what supports the nested query/filter in the percolate API. Roughly how many nested objects does each document being percolated have?

Just to verify: if you add a new mapping with the type set to object instead of nested on all your object fields, does the OOM still happen?
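
To illustrate the mechanism, here is a minimal standalone sketch using the Lucene 4.x classes named in the stack trace (field names, analyzer, and class name are hypothetical; this is not the actual Elasticsearch code):

```java
import java.util.ArrayList;
import java.util.List;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.MultiReader;
import org.apache.lucene.index.memory.MemoryIndex;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.util.Version;

public class NestedPercolateSketch {
    public static void main(String[] args) throws Exception {
        Analyzer analyzer = new StandardAnalyzer(Version.LUCENE_48);
        // Pretend each string is the text of one nested object of the document.
        String[] nestedObjects = { "first nested comment", "second nested comment" };

        // One in-memory index per nested object ...
        List<IndexReader> subReaders = new ArrayList<IndexReader>();
        for (String text : nestedObjects) {
            MemoryIndex memoryIndex = new MemoryIndex();
            memoryIndex.addField("comments.text", text, analyzer);
            subReaders.add(memoryIndex.createSearcher().getIndexReader());
        }

        // ... wrapped by a composite reader to simulate a single index.
        MultiReader composite = new MultiReader(subReaders.toArray(new IndexReader[0]));
        IndexSearcher searcher = new IndexSearcher(composite);
        // A percolator would now run each registered query against `searcher`.
        composite.close();
    }
}
```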

@julianhille
Author

Every document has around 2-20 nested objects, and we have around 200,000 docs.
We rely on that completely. It does not seem to have failed before, since we relied on it then too; it may have returned wrong values/results.

Btw. we upgraded from 0.90.1x

I can test the object mapping later if you want me to.

@martijnvg
Member

No need for that; I found that this issue is relatively easy to reproduce.

@martijnvg
Member

The issue here is that the filter cache doesn't clear its entries immediately after the in-memory index has been destroyed. It keeps a reference to this index, as it is the cache key, and only cleans it up after 60 seconds.

Can you try setting indices.cache.filter.clean_interval to 1s in the elasticsearch.yml file to verify whether this prevents the OOM in your app as well?
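
For reference, that is a one-line change to the node configuration:

```yaml
# elasticsearch.yml: shorten the filter cache clean interval (default 60s)
indices.cache.filter.clean_interval: 1s
```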

For the percolator, nothing (filters, field data) should be cached at all; that should be the right fix.

@martijnvg
Member

> Can you try setting indices.cache.filter.clean_interval to 1s in the elasticsearch.yml file to verify whether this prevents the OOM in your app as well?

Sorry, no need to check this. This alone won't fix it. The percolator doesn't close the sub memory readers properly, which is the first reason why the issue you reported arises.
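
The kind of cleanup that avoids this can be sketched with the two-argument MultiReader constructor, so that closing the composite reader also releases every per-nested-object memory reader (a hypothetical helper, not the actual change from the fix):

```java
import java.io.IOException;

import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.MultiReader;
import org.apache.lucene.search.IndexSearcher;

public class PercolateCleanupSketch {
    // Hypothetical helper: percolate, then always release the sub memory readers.
    static void percolateAndClose(IndexReader[] subReaders) throws IOException {
        // closeSubReaders=true: closing the composite also closes each sub reader.
        MultiReader composite = new MultiReader(subReaders, true);
        try {
            IndexSearcher searcher = new IndexSearcher(composite);
            // ... run the percolator queries against `searcher` here ...
        } finally {
            composite.close();
        }
    }
}
```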

@tiran

tiran commented Jun 19, 2014

Hi Martijn!

I'm a co-worker of Julian. You are right, the workaround doesn't fix the issue. Even 1ms doesn't make any difference.

@martijnvg
Member

@tiran @julianhille If you like you can try out the PR that addresses this bug: #6578

@julianhille
Author

Currently testing.

@julianhille
Author

julianhille commented Jun 20, 2014

First feedback: it seems to work; I don't see the RAM usage rising that fast anymore. Will now run some more tests to check whether we see the same behavior as before.

@julianhille
Author

Haven't found any other issues so far.

tiran pushed a commit to tiran/elasticsearch that referenced this issue Jun 21, 2014
…percolator query parsing part.

Closes elastic#6553

Conflicts:
	src/main/java/org/elasticsearch/index/query/HasChildFilterParser.java
	src/main/java/org/elasticsearch/index/query/HasChildQueryParser.java
	src/main/java/org/elasticsearch/index/query/functionscore/DecayFunctionParser.java
tiran pushed a commit to tiran/elasticsearch that referenced this issue Jun 23, 2014
…percolator query parsing part.

Closes elastic#6553

Conflicts:
	src/main/java/org/elasticsearch/index/query/HasChildFilterParser.java
	src/main/java/org/elasticsearch/index/query/HasChildQueryParser.java
	src/main/java/org/elasticsearch/index/query/functionscore/DecayFunctionParser.java
martijnvg added a commit that referenced this issue Jul 1, 2014
…the field data caches.

Percolator: Never cache filters and field data in percolator for the percolator query parsing part.

Closes #6553
@martijnvg
Member

@julianhille @tiran Thanks for test driving the PR #6578!

@martijnvg removed the bug label Jul 2, 2014
mute pushed a commit to mute/elasticsearch that referenced this issue Jul 29, 2015
…the field data caches.

Percolator: Never cache filters and field data in percolator for the percolator query parsing part.

Closes elastic#6553