
Performance issues with readBody in HttpClient.ts #321

Closed
Axosoft-Ashley opened this issue Apr 16, 2022 · 3 comments · Fixed by #322

Comments

@Axosoft-Ashley

Axosoft-Ashley commented Apr 16, 2022

Environment

Node version: 14.18.3
Npm version: 6.14.5
OS and version: Mac 10.15.7
typed-rest-client version: ^1.8.4

Issue Description

Buffer.concat being called repeatedly inside readBody in HttpClient.ts is causing the Electron UI to freeze.
NOTE: we have an incoming PR on the way. We collect the chunks into an array and build the Buffer with a single concat in the 'end' phase of readBody(). This has been tested and shown to significantly improve performance!
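The fix described above can be sketched as follows (a minimal standalone illustration of the chunk-collecting approach, not the library's exact code; the stream handling is simplified):

```typescript
import { Readable } from 'stream';

// Sketch of the proposed readBody fix: instead of calling Buffer.concat on
// every 'data' event (which copies the entire accumulated body each time),
// collect chunks in an array and join them with a single concat on 'end'.
function readBody(stream: Readable): Promise<string> {
  return new Promise((resolve, reject) => {
    const chunks: Buffer[] = [];
    stream.on('data', (chunk: Buffer) => {
      chunks.push(chunk); // no copying here, just an array push
    });
    stream.on('end', () => {
      resolve(Buffer.concat(chunks).toString()); // single concat, each byte copied once
    });
    stream.on('error', reject);
  });
}
```

With per-chunk concatenation the total bytes copied grow quadratically with the number of chunks; with a single final concat they grow linearly.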

Expected behaviour

The Electron client should remain responsive.

Actual behaviour

The Electron client freezes with high CPU usage while calling an API in a loop.

Steps to reproduce

  1. Have Electron (version shouldn't matter).
  2. Use the npm package azure-devops-node-api to call GitApi getRepositories (it uses typed-rest-client to call its REST API).
  3. We found the issue by having 1k+ projects on an Azure account, then looping through getRepositories to fetch them all:
     ```typescript
     return Promise.all(
       fp.map((project) => {
         return _gitApi.getRepositories(project.id, includeLinks, includeAllUrls);
       }, projects)
     );
     ```
  4. We found it to be significantly inefficient and dug deeper to find that Buffer.concat was killing performance.

Logs

profiler logs: [profiler screenshot]
profiler logs zoomed in: [profiler screenshot]

@AndreyIvanov42
Contributor

Hi @Axosoft-Ashley, thanks for reporting! We'll look into what we can do.

@Mr-Wallet
Contributor

Hi, I'm a colleague of Axosoft-Ashley. What did you think of the solution she described about gathering up chunks and doing a single Buffer.concat? We would be happy to open a pull request ourselves with our solution.

The only downside I can see is that it doesn't allow the garbage collector to free up the chunks while the stream is still going, but in practice that doesn't seem like a very large amount of additional RAM for very long (at least for our use case, i.e. the Azure DevOps REST API). We might be able to do even better with a buffer that doubles in size as it runs out, but that's a lot more complicated (either in implementation or by bringing in a package like concat-stream). We didn't want to over-engineer for our particular issue. Given the lack of other issues in this repo's history, this solution is probably sufficient. Please let us know your thoughts.
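The copying-cost difference being discussed can be illustrated in isolation (a standalone sketch; the function names and chunk sizes are arbitrary, not from the library):

```typescript
// Per-chunk concatenation: every Buffer.concat re-copies the entire
// accumulated body, so total bytes copied grow quadratically with the
// number of chunks.
function concatPerChunk(chunks: Buffer[]): Buffer {
  let body = Buffer.alloc(0);
  for (const chunk of chunks) {
    body = Buffer.concat([body, chunk]); // copies all previous bytes again
  }
  return body;
}

// Single concatenation: one allocation at the end, each byte copied once.
function concatOnce(chunks: Buffer[]): Buffer {
  return Buffer.concat(chunks);
}
```

Both produce identical output; only the amount of intermediate copying differs, which is what shows up in the profiler traces above.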

@AndreyIvanov42
Contributor

Thanks all for the help!
