Please post all questions and issues on janusgraph-users
before opening a GitHub issue. Your questions will reach a wider audience there,
and if we confirm that there is a bug, then you can open a new issue.
Please include configurations and logs if available.
For confirmed bugs, please report:
Query a mixedIndex backed by Elasticsearch with a limit that is less than the batch limit. In this case the scroll API is not actually used, but an ElasticSearchScroll is still created for the results. The ElasticSearchScroll then immediately tries to close the scroll, and an error is raised because no scroll exists (the scroll id is null).
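The faulty condition can be sketched as follows. This is a minimal, hypothetical illustration (the names useScroll, limit, and batchSize are assumptions, not JanusGraph's actual identifiers): scrolling is only needed when the requested limit can exceed a single batch, so a query whose limit is below the batch limit should never create, and therefore never delete, a scroll.

```java
public class ScrollGuardSketch {

    // Hypothetical guard: a scroll is only required when the requested
    // limit is larger than one batch of results. Otherwise a single
    // search request returns everything, and no scroll id exists.
    static boolean useScroll(int limit, int batchSize) {
        return limit > batchSize;
    }

    public static void main(String[] args) {
        // limit below the batch limit: no scroll should be created,
        // so no DELETE /_search/scroll/null request can be issued
        System.out.println(useScroll(10, 100));

        // limit above the batch limit: scrolling is genuinely needed,
        // and closing the scroll afterwards is valid
        System.out.println(useScroll(500, 100));
    }
}
```

The bug reported here is that the result iterator wraps every query in an ElasticSearchScroll regardless of this condition, so the cleanup path issues DELETE /_search/scroll/null and Elasticsearch rejects it with "Cannot parse scroll id".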
Stack Trace (if you have one)
[gremlin-server-exec-37] WARN org.apache.tinkerpop.gremlin.server.op.traversal.TraversalOpProcessor - Exception processing a Traversal on iteration for request [d8f41122-74a4-4bfc-9c81-58b53011545d].
org.janusgraph.core.JanusGraphException: Could not call index
at org.janusgraph.graphdb.util.SubqueryIterator.<init>(SubqueryIterator.java:68)
at org.janusgraph.graphdb.transaction.StandardJanusGraphTx$3.execute(StandardJanusGraphTx.java:1341)
at org.janusgraph.graphdb.transaction.StandardJanusGraphTx$3.execute(StandardJanusGraphTx.java:1233)
at org.janusgraph.graphdb.query.QueryProcessor$LimitAdjustingIterator.getNewIterator(QueryProcessor.java:194)
at org.janusgraph.graphdb.query.LimitAdjustingIterator.hasNext(LimitAdjustingIterator.java:68)
at com.google.common.collect.Iterators$7.computeNext(Iterators.java:650)
at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
at org.janusgraph.graphdb.query.ResultSetIterator.nextInternal(ResultSetIterator.java:54)
at org.janusgraph.graphdb.query.ResultSetIterator.<init>(ResultSetIterator.java:44)
at org.janusgraph.graphdb.query.QueryProcessor.iterator(QueryProcessor.java:66)
at com.google.common.collect.Iterables$7.iterator(Iterables.java:613)
at org.janusgraph.graphdb.tinkerpop.optimize.JanusGraphStep.executeGraphCentricQuery(JanusGraphStep.java:155)
at org.janusgraph.graphdb.tinkerpop.optimize.JanusGraphStep.lambda$null$1(JanusGraphStep.java:94)
at java.lang.Iterable.forEach(Iterable.java:75)
at org.janusgraph.graphdb.tinkerpop.optimize.JanusGraphStep.lambda$new$2(JanusGraphStep.java:94)
at org.apache.tinkerpop.gremlin.process.traversal.step.map.GraphStep.processNextStart(GraphStep.java:157)
at org.apache.tinkerpop.gremlin.process.traversal.step.util.AbstractStep.hasNext(AbstractStep.java:143)
at org.apache.tinkerpop.gremlin.process.traversal.step.util.ExpandableStepIterator.next(ExpandableStepIterator.java:50)
at org.apache.tinkerpop.gremlin.process.traversal.step.filter.FilterStep.processNextStart(FilterStep.java:37)
at org.apache.tinkerpop.gremlin.process.traversal.step.util.AbstractStep.hasNext(AbstractStep.java:143)
at org.apache.tinkerpop.gremlin.process.traversal.step.util.ExpandableStepIterator.hasNext(ExpandableStepIterator.java:42)
at org.apache.tinkerpop.gremlin.process.traversal.step.util.ReducingBarrierStep.processAllStarts(ReducingBarrierStep.java:82)
at org.apache.tinkerpop.gremlin.process.traversal.step.util.ReducingBarrierStep.processNextStart(ReducingBarrierStep.java:112)
at org.apache.tinkerpop.gremlin.process.traversal.step.util.AbstractStep.hasNext(AbstractStep.java:143)
at org.apache.tinkerpop.gremlin.process.traversal.step.util.ExpandableStepIterator.next(ExpandableStepIterator.java:50)
at org.apache.tinkerpop.gremlin.process.traversal.step.map.FlatMapStep.processNextStart(FlatMapStep.java:48)
at org.apache.tinkerpop.gremlin.process.traversal.step.util.AbstractStep.hasNext(AbstractStep.java:143)
at org.apache.tinkerpop.gremlin.process.traversal.step.util.ExpandableStepIterator.next(ExpandableStepIterator.java:50)
at org.apache.tinkerpop.gremlin.process.traversal.step.map.MapStep.processNextStart(MapStep.java:36)
at org.apache.tinkerpop.gremlin.process.traversal.step.util.AbstractStep.hasNext(AbstractStep.java:143)
at org.apache.tinkerpop.gremlin.process.traversal.util.DefaultTraversal.hasNext(DefaultTraversal.java:197)
at org.apache.tinkerpop.gremlin.server.util.TraverserIterator.fillBulker(TraverserIterator.java:69)
at org.apache.tinkerpop.gremlin.server.util.TraverserIterator.hasNext(TraverserIterator.java:56)
at org.apache.tinkerpop.gremlin.server.op.traversal.TraversalOpProcessor.handleIterator(TraversalOpProcessor.java:512)
at org.apache.tinkerpop.gremlin.server.op.traversal.TraversalOpProcessor.lambda$iterateBytecodeTraversal$4(TraversalOpProcessor.java:411)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.janusgraph.diskstorage.PermanentBackendException: Permanent failure in storage backend
at org.janusgraph.diskstorage.es.ElasticSearchIndex.query(ElasticSearchIndex.java:1120)
at org.janusgraph.diskstorage.indexing.IndexTransaction.queryStream(IndexTransaction.java:108)
at org.janusgraph.diskstorage.BackendTransaction$6.call(BackendTransaction.java:416)
at org.janusgraph.diskstorage.BackendTransaction$6.call(BackendTransaction.java:413)
at org.janusgraph.diskstorage.util.BackendOperation.executeDirect(BackendOperation.java:68)
at org.janusgraph.diskstorage.util.BackendOperation.execute(BackendOperation.java:54)
at org.janusgraph.diskstorage.BackendTransaction.executeRead(BackendTransaction.java:469)
at org.janusgraph.diskstorage.BackendTransaction.indexQuery(BackendTransaction.java:413)
at org.janusgraph.graphdb.database.IndexSerializer.query(IndexSerializer.java:533)
at org.janusgraph.graphdb.util.SubqueryIterator.<init>(SubqueryIterator.java:66)
... 41 more
Caused by: java.io.UncheckedIOException: method [DELETE], host [http://11.182.230.151:9200], URI [/_search/scroll/null], status line [HTTP/1.1 400 Bad Request]
{"error":{"root_cause":[{"type":"illegal_argument_exception","reason":"Cannot parse scroll id"}],"type":"illegal_argument_exception","reason":"Cannot parse scroll id","caused_by":{"type":"array_index_out_of_bounds_exception","reason":null}},"status":400}
at org.janusgraph.diskstorage.es.ElasticSearchScroll.update(ElasticSearchScroll.java:52)
at org.janusgraph.diskstorage.es.ElasticSearchScroll.<init>(ElasticSearchScroll.java:43)
at org.janusgraph.diskstorage.es.ElasticSearchIndex.query(ElasticSearchIndex.java:1115)
... 50 more
Caused by: org.elasticsearch.client.ResponseException: method [DELETE], host [http://11.182.230.151:9200], URI [/_search/scroll/null], status line [HTTP/1.1 400 Bad Request]
{"error":{"root_cause":[{"type":"illegal_argument_exception","reason":"Cannot parse scroll id"}],"type":"illegal_argument_exception","reason":"Cannot parse scroll id","caused_by":{"type":"array_index_out_of_bounds_exception","reason":null}},"status":400}
at org.elasticsearch.client.RestClient.convertResponse(RestClient.java:283)
at org.elasticsearch.client.RestClient.performRequest(RestClient.java:261)
at org.elasticsearch.client.RestClient.performRequest(RestClient.java:235)
at org.janusgraph.diskstorage.es.rest.RestElasticSearchClient.deleteScroll(RestElasticSearchClient.java:450)
at org.janusgraph.diskstorage.es.ElasticSearchScroll.update(ElasticSearchScroll.java:50)
... 52 more