
Major Performance Issues #170

Closed
scott-kl opened this issue Aug 5, 2020 · 2 comments

Comments

scott-kl commented Aug 5, 2020

I'm using Pathom in ClojureScript to do very basic joins from a client-side db (atom). I have a simple resolver that takes an id and returns the scalar values of that entity (via a key lookup in the hash-map db).

I then have a global resolver that returns all entity ids in the db, and resolves each one via the single entity resolver defined before.
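
For concreteness, here is a rough sketch of the setup being described. The actual code isn't posted in this issue, so the namespaces, db shape, and attribute names below are assumptions, not the reporter's code:

(ns example.client-db
  (:require [com.wsscode.pathom.core :as p]       ; p alias used by later snippets (::p/final)
            [com.wsscode.pathom.connect :as pc]))

;; client-side db: an atom mapping id -> entity map (shape assumed)
(def db (atom {1 {:person/id 1 :person/name "Ada" :person/age 36}
               2 {:person/id 2 :person/name "Bob" :person/age 41}}))

;; single-entity resolver: a key lookup in the in-memory db
(pc/defresolver person-by-id [_ {:person/keys [id]}]
  {::pc/input  #{:person/id}
   ::pc/output [:person/name :person/age]}
  (get @db id))

;; global resolver: returns every entity id; each id is then resolved
;; through person-by-id to get its scalar values
(pc/defresolver all-people [_ _]
  {::pc/output [{:all-people [:person/id]}]}
  {:all-people (mapv (fn [id] {:person/id id}) (keys @db))})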

Querying the list (only 500 records) takes nearly 700 ms on my laptop, even though each resolver is simply doing a key lookup in an in-memory hash map and returning a few scalar values.

My custom ClojureScript code that did the lookups and joins manually via hash maps returns in 11 ms, fully realized.

I can post the code, but it's dead simple, pasted from your doc examples.

Am I using the library correctly? Is Pathom meant to always start from a specific entity in the graph, rather than querying many entities like this?

I've tried changing to a batch resolver, but it makes no difference, since this is an entirely in-memory db.
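
(For reference, the batch variant being referred to would look roughly like this in Pathom 2 Connect; with ::pc/batch? true the resolver may receive a vector of input maps instead of a single one. The names reuse the assumed sketch above:)

;; hypothetical batch version of the single-entity resolver
(pc/defresolver person-by-id-batch [_ input]
  {::pc/input  #{:person/id}
   ::pc/output [:person/name :person/age]
   ::pc/batch? true}
  (if (sequential? input)
    ;; batched call: return results in the same order as the inputs
    (mapv #(get @db (:person/id %)) input)
    ;; single (non-batched) call
    (get @db (:person/id input))))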

Any insights are appreciated.

@wilkerlucio
Owner

Hello, thanks for reaching out. First question: which parser are you using? The parallel-parser is known to be quite slow at processing long sequences; if that's the case, you can try the serial parser or the async-parser, which will be faster.
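
For reference, a minimal sketch of a Pathom 2 Connect setup using the serial parser; my-resolvers is an assumed name for the resolver registry, and the p/pc aliases are the ones from the earlier sketch:

(def parser
  (p/parser
    {::p/env     {::p/reader [p/map-reader
                              pc/reader2
                              pc/open-ident-reader]}
     ::p/plugins [(pc/connect-plugin {::pc/register my-resolvers})
                  p/error-handler-plugin
                  p/trace-plugin]}))

;; usage: run an EQL query against the in-memory resolvers
(comment
  (parser {} [{:all-people [:person/name]}]))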

The current Pathom implementation wasn't optimized for dealing with large collections; most usages are in the range of the data visible on screen (so 100 records or fewer per list).

There are a few ways to work around the limitation when it's really needed. One of them is to pre-compute the items in the resolver that returns the list and mark the result as final; this way the sequence is not processed item by item. Example:

(pc/defresolver long-sequence [env {:keys []}]
  {::pc/output [{:many-items [:foo :bar]}]}
  {:many-items
   ;; pre-computed items, marked as final so Pathom skips per-item processing
   (with-meta [{:foo "1" :bar 2} {:foo "3" :bar 4}] ; ... more items
     {::p/final true})})

When you add ::p/final, you are telling Pathom not to process the list, so the value is returned as-is (but you won't get any further resolution on its items).
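
Applied to the kind of setup described in the issue (reusing the assumed names from the earlier sketch), the list resolver could pre-join the scalar values itself and mark the collection as final:

;; return fully realized entities and mark the vector as final,
;; so Pathom does not walk each of the 500 items
(pc/defresolver all-people-final [_ _]
  {::pc/output [{:all-people [:person/id :person/name :person/age]}]}
  {:all-people
   (with-meta (vec (vals @db))
     {::p/final true})})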

I'm currently working on the next version of Pathom, and this is one of the pain points that will be addressed, but for now these are the available options.

@wilkerlucio
Owner

Closing for now
