Read time out while fetching all edges of a SuperNode #1467
Comments
@Krittam, what happens when you run the following query?
@porunov, the query you suggested produced this exception:
I am using JanusGraph 0.2.0. I also tried using Spark to run the OLAP query, but I submit Spark jobs to my Hadoop cluster through YARN, and there is not enough documentation available for that setup.
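For context, OLAP traversals in JanusGraph are driven by a HadoopGraph properties file handed to SparkGraphComputer. The fragment below is only a hedged sketch for a Thrift-backed graph on YARN; the reader class and hostname are assumptions and should be checked against the JanusGraph docs for your exact version:

```properties
# Sketch of an OLAP properties file for SparkGraphComputer on YARN.
# Class names and values are assumptions; verify for your JanusGraph version.
gremlin.graph=org.apache.tinkerpop.gremlin.hadoop.structure.HadoopGraph
gremlin.hadoop.graphReader=org.janusgraph.hadoop.formats.cassandra.CassandraInputFormat
gremlin.hadoop.graphWriter=org.apache.hadoop.mapreduce.lib.output.NullOutputFormat
# The input format reads JanusGraph's own storage settings via this prefix:
janusgraphmr.ioformat.conf.storage.backend=cassandrathrift
janusgraphmr.ioformat.conf.storage.hostname=127.0.0.1
# Spark-on-YARN settings:
spark.master=yarn
spark.serializer=org.apache.spark.serializer.KryoSerializer
```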
Can you reproduce this issue with JanusGraph 0.3.1?
On JanusGraph 0.3.1 the query fails with this exception:
On increasing the
Confirming the performance issue in
So far I have not found a good solution for counting edges. Vertex-centric indexes don't help in
Changing the storage backend to cql, along with the other CQL-related properties, solved the issue for me.
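For anyone trying the same fix: the backend switch amounts to a properties change of roughly this shape. This is a minimal sketch with placeholder values, not the commenter's actual configuration:

```properties
# Hedged sketch: minimal CQL backend settings; hostname and keyspace
# are placeholders, not the settings used in this thread.
storage.backend=cql
storage.hostname=127.0.0.1
storage.cql.keyspace=janusgraph
```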
I have a graph in which a few nodes have many incoming edges (supernodes). All of the edges have the same label. There is a query in which I need to report the total number of incoming edges.
I'm using cassandrathrift as the storage backend.
g.V().has('vid','qwerty').inE().count().next()
This fails with:
However
g.V().has('vid','qwerty').inE().limit(10000).count().next()
gives ==> 10000
Now, if I wanted to filter the edges on some condition I would use a vertex-centric index, but I simply want all of the incoming edges.
The said vertex is expected to have millions of such edges.
Please help.
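Since limit(10000) succeeds while the full count times out, a common workaround is to page through the edges with range(lo, hi) and sum the per-page counts, so no single fetch runs long enough to hit the read timeout. The sketch below shows only that control flow; count_page is a hypothetical stand-in for the Gremlin call g.V().has('vid','qwerty').inE().range(lo, hi).count().next(), and the page size is an arbitrary choice:

```python
def count_in_pages(count_page, page_size=100_000):
    """Sum edge counts page by page.

    count_page(lo, hi) is a hypothetical callback standing in for
    g.V().has('vid','qwerty').inE().range(lo, hi).count().next().
    """
    total = 0
    while True:
        # Ask the backend for the size of the next page only.
        n = count_page(total, total + page_size)
        total += n
        # A short page means we have walked past the last edge.
        if n < page_size:
            return total

# Simulated backend with 250,000 edges, to exercise the loop:
edges = 250_000
print(count_in_pages(lambda lo, hi: max(0, min(hi, edges) - lo)))  # → 250000
```

Whether paging actually avoids the timeout depends on how the storage adapter slices the vertex's edge row, so treat this as an experiment, not a guaranteed fix.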