Experience Report
Note: Feature requests are judged based on user experience and modeled on Go Experience Reports. These reports should focus on the problems: they should not focus on and need not propose solutions.
What you wanted to do
I want to run queries where individual traversals can be set with a timeout. This timeout can potentially trim the results of the query instead of returning all the data, but that's OK. The use case has strict latency requirements and the query shouldn't run longer than say 50ms.
What you actually did
Dgraph currently supports query timeouts and cancellation. But on timeout it only returns a message saying the query was cancelled; it does not return any of the data it was able to process within the timeout.
Why that wasn't great, with examples
One of the queries I want to run is a recursive query. It tries to go deep enough into the graph to return results. I want to be able to run a query where the query should stop after finding N connected nodes or if it hits a timeout.
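The desired semantics can be illustrated client-side: a traversal that stops after finding N nodes or when a deadline passes, and returns whatever it has found so far instead of an error. A minimal sketch, where the toy adjacency list and the limit parameters are all hypothetical:

```python
import time
from collections import deque

def bounded_traversal(graph, start, max_nodes, budget_s):
    """BFS that stops once max_nodes are found or the time budget expires.

    Returns the (possibly trimmed) set of reached nodes -- the behavior
    this report asks Dgraph to offer server-side.
    """
    deadline = time.monotonic() + budget_s
    seen = {start}
    queue = deque([start])
    while queue:
        if len(seen) >= max_nodes or time.monotonic() > deadline:
            break  # trim: return partial results instead of an error
        node = queue.popleft()
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# Toy adjacency list standing in for the real graph.
g = {"a": ["b", "c"], "b": ["d"], "c": ["d", "e"], "d": [], "e": []}
print(sorted(bounded_traversal(g, "a", max_nodes=3, budget_s=0.05)))
# -> ['a', 'b', 'c']
```

The key point is the break condition: hitting either limit trims the result set rather than discarding it.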
One thing I can do today is run the recursive query multiple times with an increasing depth argument. But there's no way to set a time limit while also getting a result if the time limit is reached.
{
  q(func: eq(id, "...")) @recurse(loop: false, depth: 2) {
    id
    connects
    ~connects
  }
}
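The workaround above can be sketched in client code: rerun the query with an increasing depth until a wall-clock budget is spent, keeping the last completed result. This is a hedged illustration, not a real client; `query_at_depth` is a stub standing in for sending the DQL above with a given depth:

```python
import time

def query_at_depth(depth):
    """Stub for a @recurse query at a fixed depth; a real client would
    send the DQL query with this depth and return the matched nodes."""
    universe = ["n1", "n2", "n3", "n4", "n5"]
    return universe[: min(depth, len(universe))]

def deepening_query(budget_s, max_depth=10):
    """Rerun the query with increasing depth until the budget expires.

    Unlike a server-side timeout, a depth level that overruns the budget
    is wasted work: only the last *completed* depth is returned.
    """
    deadline = time.monotonic() + budget_s
    result = []
    for depth in range(1, max_depth + 1):
        if time.monotonic() > deadline:
            break
        result = query_at_depth(depth)
    return result

print(deepening_query(budget_s=0.05))
```

This shows why the workaround is unsatisfying: the budget is enforced only between re-runs, so a single deep query can still blow past the 50ms requirement.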
Any external references to support your case
Gremlin has .timeLimit() and .limit() steps that can be used for this kind of trimming; they have been key to implementing traversal timeouts with result limits in other systems.
Example:
// Assumed imports (TinkerPop 3.x):
// import static org.apache.tinkerpop.gremlin.process.traversal.AnonymousTraversalSource.traversal;
// import static org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.__.*;
// client, identifiers, queryTimeout and maxResults are defined elsewhere.
GraphTraversal t = traversal()
    .withRemote(DriverRemoteConnection.using(client))
    .V(identifiers)
    .repeat(timeLimit(queryTimeout)  // bound the time spent in each repeat pass
        .out()
        .hasLabel("connection")
        .in()
        .dedup()
        .by(id())
        .simplePath()
        .timeLimit(25))
    .until(hasLabel("id"))
    .limit(maxResults)               // bound the number of results
    .id();