
Find Performance Bottleneck #68

Open
2 tasks
Skn0tt opened this issue Oct 1, 2020 · 4 comments

Comments

@Skn0tt
Member

Skn0tt commented Oct 1, 2020

SuperJSON is somewhat slow - at least compared to not using SuperJSON. Our code is currently written for readability first, not for optimal performance.

  • Analyse the code for its bottlenecks
  • Remove them
@tomhooijenga
Contributor

This project might be suffering from "death by a thousand cuts": a whole bunch of statements that aren't very costly by themselves, but add up.
Some quick examples:

  • array functions vs a plain old for loop
  • lodash vs native (not always though!)
  • isUndefined() vs ===
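
A minimal illustration of the kind of micro-overhead these bullets describe (the function names here are illustrative, not from the SuperJSON codebase):

```typescript
// Two equivalent ways to double every element of an array.
// The callback-based version allocates a closure and makes one call per
// element; the plain loop avoids both, which can add up on hot paths.
function doubleWithMap(values: number[]): number[] {
  return values.map((v) => v * 2);
}

function doubleWithLoop(values: number[]): number[] {
  const result = new Array<number>(values.length);
  for (let i = 0; i < values.length; i++) {
    result[i] = values[i] * 2;
  }
  return result;
}

// Likewise, a helper like isUndefined(x) costs a function call
// where a direct comparison does not:
const isUndefined = (value: unknown): value is undefined =>
  typeof value === "undefined";

const x: unknown = undefined;
const viaHelper = isUndefined(x); // extra call frame
const viaCompare = x === undefined; // inlined comparison
```

Whether any single swap matters depends on how hot the code path is; the point of the comment above is that these costs compound across a large codebase.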

@KATT
Contributor

KATT commented May 30, 2021

We've addressed this in tRPC by allowing different data transformers for upstream and downstream. Upstream (client to server) payloads are usually small and parsed by the server, which needs to be secure; downstream payloads are usually bigger and parsed by the client.
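
The split described here could be wired up along these lines. The `DataTransformer` shape and the `input`/`output` naming are assumptions for illustration, not tRPC's exact API:

```typescript
// Two independent transformers, one per direction.
// Hypothetical shape for illustration only.
interface DataTransformer {
  serialize(value: unknown): string;
  deserialize(raw: string): unknown;
}

// Upstream (client -> server): small, untrusted payloads,
// so keep the parser minimal and strict.
const upstream: DataTransformer = {
  serialize: (value) => JSON.stringify(value),
  deserialize: (raw) => JSON.parse(raw),
};

// Downstream (server -> client): larger payloads, where a richer (and
// costlier) transformer like SuperJSON could be plugged in instead.
// Plain JSON is shown here as a stand-in.
const downstream: DataTransformer = {
  serialize: (value) => JSON.stringify(value),
  deserialize: (raw) => JSON.parse(raw),
};

// A combined config selecting a transformer per direction.
const transformer = { input: upstream, output: downstream };

const echoed = transformer.output.deserialize(
  transformer.output.serialize({ ok: true })
);
```

The design benefit is that the expensive transformer only runs on the direction that actually needs rich types.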

@Skn0tt
Member Author

Skn0tt commented May 30, 2021

That's a good idea! Still, I'm pretty sure there's a way of speeding up SuperJSON to better compete with devalue. I just haven't found it yet 🤷‍♂️

@KATT
Contributor

KATT commented May 31, 2021

Some ideas - algorithms aren't really my speciality, so these are mostly novice common sense:

  • You might not need to deep clone the object first (or have an option where this is skipped)
  • Every setDeep() seems to traverse the whole object
    • Potentially a faster strategy:
      • traverse the whole object once and keep a reference to each path that needs updating
      • use those references to detect "dead ends" and break early when there's nothing further down to update
  • Make "slow" features optional
  • Rather than traversing yourself when encoding/serializing, you could try using JSON.stringify replacer & JSON.parse reviver.
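
The single-pass idea from the list above might look something like this. This is a hypothetical sketch, not SuperJSON's implementation; the `walk`/`Annotation` names are made up:

```typescript
// One recursive walk that both converts values and records the paths of
// special values, instead of re-traversing from the root for every path.
type Annotation = { path: string[]; type: "Date" };

function walk(
  value: unknown,
  path: string[],
  annotations: Annotation[]
): unknown {
  if (value instanceof Date) {
    // Record where the Date lived so it can be revived later.
    annotations.push({ path, type: "Date" });
    return value.toISOString();
  }
  if (Array.isArray(value)) {
    return value.map((item, i) =>
      walk(item, [...path, String(i)], annotations)
    );
  }
  if (value !== null && typeof value === "object") {
    const out: Record<string, unknown> = {};
    for (const [key, child] of Object.entries(value)) {
      out[key] = walk(child, [...path, key], annotations); // single pass
    }
    return out;
  }
  return value; // primitive: a "dead end", nothing further down to update
}

const annotations: Annotation[] = [];
const json = walk({ a: { when: new Date(0) }, b: 1 }, [], annotations);
```

Collecting paths during the one walk avoids the repeated setDeep()-style traversals the comment is worried about.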

For inspiration, you can look at https://github.com/yahoo/serialize-javascript, where the isJSON option makes it "over 3x faster".
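
The replacer/reviver suggestion can be sketched as follows. The `$date` tag is an illustrative convention, not SuperJSON's wire format:

```typescript
// Encode Dates during JSON.stringify itself (no separate deep-clone pass)
// and revive them during JSON.parse.
function serialize(value: unknown): string {
  return JSON.stringify(value, function (key, v) {
    // Caveat: Date.prototype.toJSON runs before the replacer, so `v` is
    // already an ISO string here; read the untouched value off `this`.
    const raw = (this as Record<string, unknown>)[key];
    return raw instanceof Date ? { $date: raw.toISOString() } : v;
  });
}

function deserialize(raw: string): unknown {
  return JSON.parse(raw, (_key, v) =>
    v !== null && typeof v === "object" && typeof v.$date === "string"
      ? new Date(v.$date)
      : v
  );
}

const input = { createdAt: new Date(0), nested: { n: 1 } };
const roundTripped = deserialize(serialize(input)) as typeof input;
```

Letting JSON.stringify drive the traversal means the engine's native walk replaces a hand-rolled one, which is the performance angle of the suggestion.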
