
[perf] lib/compiler.js makes too many stat() calls #167

Closed

bnoordhuis opened this issue Jan 8, 2016 · 8 comments

bnoordhuis (Member) commented Jan 8, 2016

Profiling loopback-sample-app start-up time, about 2000 of the ~6500 stat() system calls originate from fixFileExtension() and, to a lesser extent, tryResolveAppPath() in lib/compiler.js (the fs.existsSync() call - why are you using that?)

Relatedly, in nodejs/node#4575 I managed to bring the number of calls from node.js core down from ~4700 to ~2800 by caching more aggressively. Perhaps a similar strategy is possible in lib/compiler.js?
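A minimal sketch of what such a cache could look like (cachedExists and statCache are hypothetical names, not existing loopback-boot code; it assumes the paths probed during a single boot do not change on disk):

var fs = require('fs');

// Cache the result of existence checks so that repeated probes of the same
// path during a single boot hit memory instead of issuing another stat().
var statCache = Object.create(null);

function cachedExists(filepath) {
  if (filepath in statCache) return statCache[filepath];
  var exists;
  try {
    fs.statSync(filepath);
    exists = true;
  } catch (err) {
    exists = false;
  }
  statCache[filepath] = exists;
  return exists;
}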

bnoordhuis (Member Author) commented:

If you want to see for yourself:

$ perf trace record -s -g -e \
    syscalls:sys_enter_newstat,syscalls:sys_enter_newfstat,syscalls:sys_enter_newlstat \
    node --perf_basic_prof --nouse_inlining server/server.js

# later
$ perf report --stdio -F overhead_sys

The --perf_basic_prof flag is available in node v4.x and v5.x; it records JS stack frames, making the profiler's output a lot more useful.

You may have to remount /sys/kernel/debug and /sys/kernel/debug/tracing with mount -o remount,mode=755. Add -m64 if perf complains about not having permission to mmap memory.

If you just want to count the number of system calls:

$ perf trace -e stat,fstat,lstat -s node server/server.js

bajtos (Member) commented Jan 4, 2017

This problem is slowing down the yo loopback (and the upcoming lb) command, as loopback-workspace takes about a second to load. The problem gets even worse in the lb tests (https://github.com/strongloop/loopback-cli, see https://github.com/strongloop-internal/scrum-loopback/issues/1028), where each test invokes lb in a subprocess and pays this cost, so the test suite takes quite a long time to complete.

@ritch I would like to bump up the priority of this issue and fix the performance in both loopback-boot@2 (which is used by the current loopback-workspace version) and loopback-boot@3 (which will eventually be used instead).

bnoordhuis (Member Author) commented:

I can spend some time on this in the week of January 16. LMK if I should.

bajtos (Member) commented Jan 6, 2017

@bnoordhuis thank you for the offer, any help is welcome!

Specifically in loopback-boot, I think there is another place where we are repeating the same filesystem query - see

function resolveAppScriptPath(rootDir, relativePath, resolveOptions) {
  var resolvedPath = resolveAppPath(rootDir, relativePath, resolveOptions);
  if (!resolvedPath) {
    return false;
  }
  var sourceDir = path.dirname(resolvedPath);
  var files = tryReadDir(sourceDir);
  var fixedFile = fixFileExtension(resolvedPath, files, false);
  return (fixedFile === undefined ? resolvedPath : fixedFile);
}

Whenever we try to resolve an extension-less name to a real require-able file name, we read the contents of the directory. I don't have profiling data to prove this adds significant overhead; it's just my guess.

Other places calling tryReadDir may be worth optimising too.
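One possible direction, sketched here with hypothetical names (dirCache, cachedReadDir), would be to cache directory listings for the duration of a single boot, so that resolving several names in the same directory triggers only one readdir():

var fs = require('fs');

// Cache directory listings per boot; repeated lookups in the same directory
// reuse the first readdir() result instead of hitting the filesystem again.
var dirCache = Object.create(null);

function cachedReadDir(dir) {
  if (!(dir in dirCache)) {
    try {
      dirCache[dir] = fs.readdirSync(dir);
    } catch (err) {
      // Assume missing or unreadable directories should resolve to an empty
      // list, which is what the existing tryReadDir helper appears to do.
      dirCache[dir] = [];
    }
  }
  return dirCache[dir];
}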

bajtos (Member) commented Jan 25, 2017

I created a simple breakdown of how long the different parts of loopback-workspace's server/server.js take to finish (Node v6.9.4 using console.time/console.timeEnd functions):

require('strong-globalize'): 400.989ms
SG.SetRootDir: 71.832ms
require('loopback'): 441.484ms
new app: 1.951ms
require('method-override'): 0.046ms
require('loopback-boot'): 38.179ms
require('cookie-parser'): 0.066ms
require('strong-error-handler'): 8.004ms
app.dataSource(): 1.908ms
boot(app): 146.120ms
rest: 139.110ms
Startup time: 1272ms
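For reference, the numbers above can be collected with a pattern like this (an illustrative sketch, not the actual loopback-workspace server.js):

// Wrap each step of server.js in console.time/console.timeEnd pairs.
console.time("require('loopback')");
var loopback = require('loopback');
console.timeEnd("require('loopback')");

console.time("require('loopback-boot')");
var boot = require('loopback-boot');
console.timeEnd("require('loopback-boot')");
// ...and so on for each step in the list above.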

This breakdown shows that the execution of loopback-boot takes only about 10% of the startup time. Most of the time is spent loading (require()-ing) the following two modules: strong-globalize and loopback.

Given this data, I think optimising filesystem access in loopback-boot is not worth our effort right now, because the effect on real applications will be minimal.

@bnoordhuis thoughts? Can you advise how to analyse which (sub)dependencies of loopback and strong-globalize are contributing most to the unusually high load time?
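One way to collect that kind of data (a sketch only; Module._load is an internal Node API and the 10ms threshold is arbitrary) is to time every require() call, keeping in mind that each reported time includes the module's own nested requires:

var Module = require('module');
var originalLoad = Module._load;

// Intercept module loading and log how long each require() takes.
Module._load = function(request, parent, isMain) {
  var start = Date.now();
  var result = originalLoad.apply(this, arguments);
  var elapsed = Date.now() - start;
  // Report slow modules only; the time includes nested require() calls.
  if (elapsed > 10)
    console.log('require(%s): %dms', request, elapsed);
  return result;
};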

BTW when I had loopback-boot npm-linked into loopback-workspace, the load time went from ~38ms to ~429ms. I think this suggests loopback-boot shares quite a few dependencies with loopback (npm@3 uses a flat layout in node_modules; when loopback-boot is installed locally, it can reuse the same require()d instances of dependency modules as loopback does).

bnoordhuis (Member Author) commented:

I'll take a look; I have time today. What loopback-boot version did you benchmark? Master?

bajtos (Member) commented Jan 25, 2017

loopback-boot branch 2.x, loopback-workspace branch master.

bajtos (Member) commented Nov 2, 2018

Since there was no activity here for almost two years, and our focus is on LB4+ now, I am closing this issue as stale/abandoned.

bajtos closed this as completed Nov 2, 2018