Jest performance is at best 2x slower than Jasmine, in our case 7x slower #6694
Comments
@EvHaus thanks for the detailed report. We have similar issues under Semantic-Org/Semantic-UI-React#2971, where Jest's suite is about 5x slower.
Am I right in saying the problem is that Jasmine loads all specs into one process and runs them, whereas Jest creates a new mini-environment per test suite?
I think it's a fair assumption to say it's the module resolution that's taking time. Another difference is that Jest executes your code inside the jsdom vm, while with Jasmine you've just copied all the globals onto the Node runtime (https://github.com/jsdom/jsdom/wiki/Don't-stuff-jsdom-globals-onto-the-Node-global), which will always be quicker as you skip an entire abstraction layer (https://nodejs.org/api/vm.html).

That said, I agree it's really not ideal (to put it mildly) that Jest is about twice as slow as Jasmine. I'm not really sure what we can do, though. We could try to cache the resolution (although we'd still have to run through the entire tree in case there's been any module mocking), which might allow us to not resolve modules by looking around, but again the FS should be in memory, so I doubt it'd have much impact.

@cpojer @mjesun @aaronabramov @rickhanlonii do you think there's anything clever we can do here? Or any awesome ways of profiling what we spend our time on? Also, thank you so much for setting up a great reproduction case @EvHaus!
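(For readers less familiar with the distinction being drawn here, below is a minimal sketch, using Node's public `vm` API rather than Jest's or Jasmine's actual code, of the two approaches: evaluating code inside its own vm context versus evaluating it directly against the Node global.)

```js
const vm = require('vm');

const code = 'globalThis.answer = 6 * 7;';

// "jsdom inside a vm" style: the code gets its own sandboxed global object,
// paying for the extra abstraction layer but staying isolated per test file.
const sandbox = vm.createContext({});
vm.runInContext(code, sandbox);
console.log(sandbox.answer);      // 42, visible only inside the sandbox

// "globals copied onto Node" style: the code runs straight in the host realm,
// which is faster but shares state with everything else in the process.
vm.runInThisContext(code);
console.log(globalThis.answer);   // 42, now global to the whole process
```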
I did some profiling of the Node processes while running Jest on my projects, and it seemed like requiring was one of the most time-consuming tasks. At least that was the case on Windows (not WSL), which I found to be substantially slower than Linux, especially in watch mode. Granted, I'm not particularly confident in my understanding of the Node profiler's output, but that's what it looked like. I saw the same thing with this reproduction.
I haven't looked at the code, so I can't be totally sure -- but this sure "feels" like what's happening. When running through Jasmine there's a very long delay before anything is printed to the console (likely Jasmine resolving/executing ALL deps), and then the tests run through super quick. Whereas with Jest, it immediately starts running tests without any initial lag, but each test is significantly slower to run.
Any chance switching from worker processes to the node …
Jest already does that.
I believe you, but then what are these …
@rickhanlonii do you have the Jest architecture chart somewhere?
If I had to guess, you use the workers for multi-core, but the VM as well for isolation, even with --runInBand? Maybe a --runVeryInBand that shares a VM?
Looks like …
I tried to use cachedData for an experiment about two years back. There is even a PR (sorry, on mobile so can't find the link). There was no difference in perf that I observed. Cached code is much larger and I assume reading and validating that is equal to the parse time overhead that is saved. I'd be curious to see results of somebody re-running that experiment. Changing the script transformer and running some perf tests should give us some data.
That sounds likely, it could well be that the delayed … There's also …
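(For context, the cachedData experiment described above maps onto Node's public `vm.Script` API roughly as sketched below. This is only an illustration, not the actual PR; the file names are hypothetical.)

```js
const vm = require('vm');
const fs = require('fs');

// Hypothetical transformed module source we want to avoid re-parsing on every run.
const source = fs.readFileSync('./transformed-module.js', 'utf8');

// First run: compile the script and persist V8's code cache.
const script = new vm.Script(source, { filename: 'transformed-module.js' });
fs.writeFileSync('./transformed-module.cache', script.createCachedData());

// Later runs: hand the cache back to V8. If it no longer matches the source,
// V8 silently falls back to a full parse and flags it via `cachedDataRejected`.
const cachedScript = new vm.Script(source, {
  filename: 'transformed-module.js',
  cachedData: fs.readFileSync('./transformed-module.cache'),
});
console.log('cache rejected?', cachedScript.cachedDataRejected);
```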
Things we've done to increase the performance of jest in our setup: …
I was intrigued by the 2.5x speed increase mentioned from using a dot reporter, so I gave it a go. Added …
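(The name of the reporter added above was lost from this transcript. For anyone wanting to try the same experiment, a minimal dot-style custom reporter, wired up through Jest's `reporters` config option, looks roughly like the sketch below; the file name is just an example, not the package referenced above.)

```js
// dot-reporter.js -- an illustrative minimal reporter.
// Enable it with: reporters: ['<rootDir>/dot-reporter.js'] in the Jest config.
class DotReporter {
  onTestResult(test, testResult) {
    // One character per test file instead of the default per-test output.
    process.stdout.write(testResult.numFailingTests > 0 ? 'F' : '.');
  }

  onRunComplete(contexts, results) {
    process.stdout.write(
      `\n${results.numPassedTests} passed, ${results.numFailedTests} failed\n`
    );
  }
}

module.exports = DotReporter;
```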
That was a Windows bash shell on Windows 8. It wouldn't surprise me if shells differed greatly, and I've previously seen a large slowdown from console output.
Is it just console output which is slow, or is it colored terminal output? If it is the latter, perhaps somebody could try switching from …
Perhaps somewhat affected, but the Windows console (= terminal) just renders very slowly in general, seemingly linear in the number of characters on screen -- you can clearly see the speed increase as you make the window smaller. It's still using the ancient GDI API to render each span of text of the same color, so if there's a lot of switching at the character level that might have some effect. (They have reported they are working on the console rendering recently, but no exact dates.) The results in the original OP's test repo show similar differences on a MacBook, so I doubt this is the real difference here.
Also interesting: watch mode is three times slower than non-watch mode, even with the same number of workers (35s vs 11s). Tracked it down to the passing of … to …
@leiyangyou that was just changed in #6960 (not released yet), maybe it helps? Not sure about the easiest way for you to test it beyond following the steps in the contributing guide on how to use a local version of Jest. /cc @rubennorte
@SimenB that didn't improve watch mode as the haste map still has to be transferred to the worker processes (it's not persisted in watch mode). It might make that transfer a bit slower because we have to serialize the map as a JSON-serializable array.
@SimenB thanks. What I don't quite understand is this: according to logging, the sent raw map is pretty much empty.
@leiyangyou the map is only empty when not in watch mode, because the worker is going to read it from disk. In watch mode it's the updated haste map, with any changes in the watched files already applied.
@rubennorte so I've added a log inside runTestInWorker in jest-runner/index.js:

```js
const runTestInWorker = function(test) {
return mutex(
_asyncToGenerator(function*() {
if (watcher.isInterrupted()) {
return Promise.reject();
}
yield onStart(test);
console.log(test.context.moduleMap.getRawModuleMap())
return worker.worker({
config: test.context.config,
globalConfig: _this3._globalConfig,
path: test.path,
rawModuleMap: (false && watcher.isWatchMode())
? test.context.moduleMap.getRawModuleMap()
: null
});
})
);
};
```

In watch mode, on the initial run, … Maybe another bug somewhere? Note that I've disabled the actual sending of the module map. This is on the Jest 23.6.0 release.
@SimenB I tried the latest version of the haste map in non-watch mode; running through my test suite takes about 12s (comparable to before). What is the initial motivation for dispatching module maps to workers?
I was trying to do a migration from mocha to jest... and... mocha is finishing all tests before jest starts the first one. I think there is an issue somewhere with resolving/reading files -> my project contains ~70k files, and I'm running ~19k tests. After some digging it looks like jest is trying to import all files from all folders before it starts the tests, even though I'm providing an explicit match for test files. I was able to run the tests by adding to jest.config: …
But it's still 11m... as opposed to mocha's ~1m, and without a test framework (try/catch + assert) ~40-50s. Turning off transformation helped too: …
So far my configuration looks like this: …
It's still slow, ~4min now. I'm looking for a way to turn off prettier; I don't care about formatting errors...
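(The actual config snippets from the comments above didn't survive in this thread. As a rough illustration of the kind of settings being described, narrowing what Jest crawls and skipping transformation, a config might look like the sketch below; every path and pattern here is a placeholder, not the original poster's values.)

```js
// jest.config.js -- illustrative only; paths and patterns are placeholders.
module.exports = {
  // Only crawl directories that actually contain source and tests.
  roots: ['<rootDir>/src'],
  // Match test files explicitly so Jest doesn't have to consider every file.
  testMatch: ['<rootDir>/src/**/*.test.js'],
  // Keep the haste map from indexing large generated folders.
  modulePathIgnorePatterns: ['<rootDir>/dist/', '<rootDir>/build/'],
  // If the code runs natively on Node, an empty transform skips Babel entirely.
  transform: {},
};
```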
Same issue here on 25.2.2; file resolution takes too long. Is there any plan to speed it up?
I think it's interesting to revisit …
I've succeeded in speeding up Jest in our project. In the Jest config:

```js
resolver: require.resolve('./cached-jest-resolver'),
moduleLoader: require.resolve('./jest-runtime'),
```

cached-jest-resolver:

```js
const cache = new Map();
module.exports = (request, options) => {
const cacheKey = `${request}!!${options.basedir}`;
let resolved = cache.get(cacheKey);
if (!resolved) {
resolved = options.defaultResolver(request, options);
cache.set(cacheKey, resolved);
}
return resolved
}
```

jest-runtime:

```js
const JestRuntime = require('jest-runtime');
const vm = require('vm');
const {handlePotentialSyntaxError} = require('@jest/transform');
const v8 = require('v8')
//TODO SAFER BUFFER! request/request inherits stream
const PROXY_WHITE_LIST = new Set(['process', 'module',
// 'buffer', 'stream',
// 'constants',
'fs'
]);
v8.setFlagsFromString('--expose-gc');
//TODO freeze console????????????
const gcClean = vm.runInNewContext('gc')
let RUN_COUNT_FOR_GC = 1
const CLEAN_EVERY_TIME = 1;
const detectLeaks = (() => {
const weak = require('weak-napi');
let references = 0;
return (obj) => {
references += 1;
console.log('references count ++', references)
weak(obj, () => {
references -= 1;
console.log('references count --', references)
})
}
})()
function makeReadonlyProxy(obj) {
if (
!((typeof obj === 'object' && obj !== null) || typeof obj === 'function')
) {
return obj;
}
return new Proxy(obj, {
get: (target, prop, receiver) => {
return makeReadonlyProxy(Reflect.get(target, prop, receiver), );
},
set: (target, property, value, receiver) => {
if (typeof value !== 'function') {
return Reflect.set(target, property, value, receiver);
}
// console.log(`trying to set! ${path.join(', ')} ${property}, ${typeof value}`);
// throw new Error(`trying to set! ${filename}, ${property as any}, ${typeof value}`);
return true;
},
});
}
const __scriptCache = new Map();
const __transformCache = new Map();
module.exports = class MyJestRuntime extends JestRuntime {
constructor(...args) {
super(...args);
this.__coreModulesCache = new Map();
// Object.freeze(this._environment.global.console);
if (++RUN_COUNT_FOR_GC % CLEAN_EVERY_TIME === 0) {
console.log('running gc')
gcClean();
}
detectLeaks(this)
console.log('memory: ', Math.floor(process.memoryUsage().heapUsed/1000/1000));
}
transformFile(filename, options) {
//TODO IS WATCH
let result = __transformCache.get(filename);
if (!result) {
result = super.transformFile(filename, options);
__transformCache.set(filename, result); //DO NOT COMMIT IT
}
return result
}
_requireCoreModule(moduleName) {
let mod = this.__coreModulesCache.get(moduleName);
if (!mod) {
mod = super._requireCoreModule(moduleName);
if (!PROXY_WHITE_LIST.has(moduleName)) { //TODO!!!!!
mod = makeReadonlyProxy(mod)
}
this.__coreModulesCache.set(moduleName, mod)
}
return mod
}
createScriptFromCode(scriptSource, filename) {
const scriptFromCache = __scriptCache.get(filename);
if (scriptFromCache) {
return scriptFromCache
}
try {
const scriptFilename = this._resolver.isCoreModule(filename)
? `jest-nodejs-core-${filename}`
: filename;
const script = new vm.Script(this.wrapCodeInModuleWrapper(scriptSource), {
displayErrors: true,
filename: scriptFilename,
//is leaking
// @ts-expect-error: Experimental ESM API
// importModuleDynamically: async (specifier) => {
// invariant(
// runtimeSupportsVmModules,
// 'You need to run with a version of node that supports ES Modules in the VM API. See https://jestjs.io/docs/en/ecmascript-modules',
// );
// const context = this._environment.getVmContext?.();
// invariant(context, 'Test environment has been torn down');
// const module = await this.resolveModule(
// specifier,
// scriptFilename,
// context,
// );
// return this.linkAndEvaluateModule(module);
// },
});
__scriptCache.set(filename, script); //TODO is cache
return script
} catch (e) {
throw handlePotentialSyntaxError(e);
}
}
}
```

You can play with this. Maybe you'll need to add more deps to PROXY_WHITE_LIST.
@goloveychuk Interesting idea, but your solution didn't seem to make a significant difference in my benchmark. 😢 I've added it to https://github.com/EvHaus/jest-vs-jasmine/.

[Benchmark results for native Jest, the cached-resolver approach, and Jasmine (for comparison) were attached here.]

I've updated my repo with the latest benchmarks, latest version of Jest, latest version of Node and a more reproducible benchmarking tool (via …). FYI: I'm not complaining. Just want to ensure those subscribed to the thread know that no significant advancements have been made here yet in the latest versions.
@EvHaus yea, I think it won't make a difference in this benchmark.
In this setup I have 531s by default, and 226s with the above optimisations. So, answering your comment: those optimisations can help on real-world heavy projects, but they cannot make Jest the same speed as Jasmine, since Jest has an expensive runtime (think of all those features/overheads: mocks, transformers, reporters, error formatting, test isolation, caching, etc.).
Just updated my benchmarks with a new player in town -- Vitest. I have good news. It has an API compatible with Jest's, and in my benchmarks it ran 2x faster than Jest and even outperformed Jasmine. 😮 I'm going to try migrating a larger real-world codebase to it early in the new year and report back on the experience for those curious.
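(For anyone curious what the switch involves, a minimal Vitest configuration for a Jest-style jsdom suite is roughly the following; the values shown are illustrative, not the benchmark repo's actual config.)

```js
// vitest.config.js -- an illustrative minimal setup, not the jest-vs-jasmine repo's config.
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    // Expose describe/it/expect as globals, as Jest does by default.
    globals: true,
    // Use jsdom so DOM-centric tests keep working.
    environment: 'jsdom',
  },
});
```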
Exciting, @EvHaus!
Hey folks, I've done an investigation run on my own with a no-op test and a lot of imports (…). Ignoring …
So, a couple of recommendations for things to look into next: …
If I have time, I may be able to dig into some of these potential perf-gain options in the next few months, but no guarantees. I wanted to brain-dump here in case any other Jesters and Fools got inspired :) (note that I do have some low-hanging-fruit PRs that I'll be upstreaming, but none of them address the remaining code hotspots mentioned above).
To "fix" imports overhead, I've written custom test runner. |
for those of you struggling with memory leaks kulshekhar/ts-jest#1967 (comment) |
I have simular performance issues, our tests are running at least 5x slower. |
I am having the same issue, and it got 2x worse after upgrading Node from 12 to 18.
Probably explains my issues.
Any news on this?
I made a hacked-together runtime and test env that doesn't isolate the test suites, which improved our frontend tests by 8x with a cold cache. It also has a few caveats, so depending on your project setup it may or may not help. https://github.com/m-abboud/hella-fast-jest-runtime.
🐛 Bug Report
We've been using Jest alongside Jasmine for the same test suite for about a year now. We love Jest because its developer experience is superb; however, on our very large monorepo with ~7000+ test specs, Jest runs about 7 times slower than Jasmine. This problem has been getting worse as the test suite grows, and as a result we always run our test suite via Jasmine and only use Jest for development --watch mode.
We would ♥ to use Jest as our only test runner, but its poor performance is preventing us from doing so. Having to run both Jest and Jasmine runners requires painful CI setup and constant upkeep of the Jasmine environment setup (which is much more complex than Jest's).
I'd like to better understand why the performance difference is so significant and if there's anything that can be done to optimize it.
To Reproduce
I've created a very detailed project to reproduce and profile both Jest and Jasmine on the same test suite in this project: https://github.com/EvHaus/jest-vs-jasmine
The environment is the same. The configurations are very similar. Both use JSDom. Both use the same Babel setup. Additional instructions are contained therein.
Expected behavior
Running tests through Jest should ideally be as fast as running them through Jasmine.
Link to repl or repo (highly encouraged)
https://github.com/EvHaus/jest-vs-jasmine
Run
npx envinfo --preset jest
Tested on a few different platforms. See the https://github.com/EvHaus/jest-vs-jasmine README for more info.