I'm using Pathom in ClojureScript to do very basic joins from a client-side db (atom). I have a simple resolver that takes an id and returns the scalar values of that entity (via a key lookup in the hash-map db).
I then have a global resolver that returns all entity ids in the db, and resolves each one via the single entity resolver defined before.
Querying the list (only 500 records) takes nearly 700 ms on my laptop, even though each resolver simply does a key lookup in an in-memory hash map and returns a few scalar values.
My custom ClojureScript code that was doing the lookups and joins manually via hash maps returns in 11 ms, fully realized.
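For reference, the hand-rolled version described above amounts to something like this (a minimal sketch with hypothetical entity data; the actual db and keys aren't shown in the issue): join the list of ids against the in-memory map directly, with no graph engine in between.

```clojure
;; Hypothetical stand-in for the client-side db atom's contents.
(def db {1 {:person/id 1 :person/name "Ada"}
         2 {:person/id 2 :person/name "Grace"}})

(defn all-people []
  ;; A map is a function of its keys, so this is one lookup per id;
  ;; mapv fully realizes the result vector.
  (mapv db (keys db)))
```

This does the same work as the single-entity resolver plus the list resolver, which is why the gap between 11 ms and 700 ms points at planner/processing overhead rather than the lookups themselves.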
I can post the code, but it's dead simple, pasted from your doc examples.
Am I using the library correctly? Is pathom meant to always start from a specific entity in the graph and isn't meant to query many entities like this?
I've tried changing to a batch resolver, but it makes no difference, seeing as this is an entirely in-memory db.
Any insights are appreciated.
Hello, thanks for reaching out. First question: which parser are you using? The parallel-parser is known to be quite slow at processing long sequences; if that's what you're using, try parser or async-parser instead, they will be faster.
The current Pathom implementation wasn't optimized for dealing with large collections; most usages were in the range of data visible on the screen (so 100 records or fewer per list).
There are a few ways to work around this limitation when it's really needed. One of them is to pre-compute the list yourself and mark the result as final; this way the sequence is not processed by Pathom. Example:
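A minimal sketch of the approach (entity names are hypothetical; the essential part is the metadata). In Pathom 2, `::p/final` is the keyword `:com.wsscode.pathom.core/final`, written out in full here so the snippet runs without the library on the classpath:

```clojure
;; Hypothetical stand-in for the client-side db atom.
(def db (atom {1 {:person/id 1 :person/name "Ada"}
               2 {:person/id 2 :person/name "Grace"}}))

;; Pre-compute the full list ourselves and tag it with ::p/final
;; metadata, so Pathom returns the value as-is instead of walking
;; every entity in the sequence through the resolver machinery.
(defn all-people-resolver [_env _input]
  {:all-people (with-meta (vec (vals @db))
                 {:com.wsscode.pathom.core/final true})})
```

In a real setup this body would live inside a `pc/defresolver` with an appropriate `::pc/output`; the metadata on the returned collection is what tells Pathom to skip further processing.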
When you add ::p/final, you are telling Pathom not to process the list, so the value is returned as-is (but you won't get any further resolution on its items).
I'm currently working on the next version of Pathom, and this is one of the pain points that will be addressed; for now, these are the available options.