
Per-worker setup/teardown #8708

Open
jrr opened this issue Jul 17, 2019 · 17 comments

Comments

@jrr

jrr commented Jul 17, 2019

🚀 Feature Proposal

Jest provides run-once global setup and teardown (globalSetup/globalTeardown) and per-file setup and teardown, but conspicuously absent is per-worker setup and teardown.

Motivation

The set of tests assigned to one worker runs serially, making a worker ideally suited to reusing an exclusive resource like a database or Redis connection.

Example

Sue has a test suite that uses a database. She can create multiple databases, but there is a time cost associated with spinning one up before it can be used.

From her test suite she'd like to get maximum concurrency with minimum database setup time.

Currently possible approaches:

  1. One database; run all the tests serially.
    • This is unacceptably slow. Concurrency is one of Jest's killer features and it's sad to lose it.
  2. Multiple databases, created on the fly at runtime, per-file (e.g., in beforeAll()).
    • This pays the database setup cost many more times than necessary, slowing down the test suite.
  3. Multiple databases, created upfront (e.g., make 20 databases, once, in globalSetup; sketched just below).
    • This requires you to choose a constant (e.g. 20) according to how many workers you want to run.
    • Choose the wrong number for -w (or get the wrong number inferred from your CPU core count), and it will either do too much setup work or run out of databases.
    • It's also painful to wait on a large upfront cost when you're running, say, just one test.
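
For concreteness, a minimal sketch of approach 3 as it exists today (createDatabase and its module are assumed helpers, not Jest APIs):

// global-setup.js, wired in via the globalSetup config option.
// The pool size has to be guessed to match -w; that guess is the problem.
const { createDatabase } = require('./db-helpers'); // assumed helper module

const WORKER_COUNT = 20; // hope this matches (or exceeds) -w

module.exports = async function globalSetup() {
  for (let i = 1; i <= WORKER_COUNT; i++) {
    await createDatabase(`test_db_${i}`);
  }
};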

Ideally, instead:

  1. Multiple databases, created once per worker.
    • This is the best case for minimizing database setup and maximizing reuse, and it doesn't require you to decide the number of databases up front.
    • You can use whatever -w you want (or let Jest infer it), and you'd get exactly the right number of databases.
    • When you run just one test, you'll get just one database.

Pitch

Why does this feature belong in the Jest core platform?

I'm not sure where the boundary of the core platform is, but it stands to reason that beforeWorker and afterWorker should be provided by the same system that provides beforeAll, beforeEach, and globalSetup.
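
For illustration only, one hypothetical shape such hooks could take; nothing below exists in Jest today:

// jest.config.js -- hypothetical options, analogous to globalSetup/globalTeardown.
// Each file would run once per worker process, bracketing that worker's share of the run.
module.exports = {
  workerSetup: '<rootDir>/before-worker.js',   // the proposed "beforeWorker"
  workerTeardown: '<rootDir>/after-worker.js', // the proposed "afterWorker"
};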

Alternatives

I googled around but couldn't come up with another way to achieve this. Is there another (perhaps undocumented) way to hook into worker setup/teardown?

@merlinnot

https://jestjs.io/docs/en/configuration#setupfilesafterenv-array

But AFAIK there's no option to run something before the worker is closed. I'd find such a feature useful, especially for generating database namespaces to run tests in isolation, like the Redis case you mentioned.
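
A sketch of that namespacing idea with the existing option (assuming a node-redis v4 client); note that a setupFilesAfterEnv file runs once per test file, not once per worker, so the connection is still opened and closed per file:

// setup-after-env.js, wired in via setupFilesAfterEnv.
const { createClient } = require('redis');

// JEST_WORKER_ID is stable within a worker, so parallel workers never collide.
const namespace = `test:${process.env.JEST_WORKER_ID}`;
const client = createClient();

global.redisKey = (key) => `${namespace}:${key}`; // helper for tests to prefix keys

beforeAll(() => client.connect());
afterAll(() => client.quit());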

@jeysal
Contributor

jeysal commented Jul 17, 2019

For more complex use cases, a custom environment is probably better suited. We do this, e.g., for setting up and tearing down a database connection.
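
A minimal sketch of such an environment (connectToDatabase and its module are assumed helpers; note that setup/teardown run once per test file, since Jest creates one environment instance per file):

// db-environment.js, used via testEnvironment: './db-environment.js'.
// (In Jest >= 28 the class is exported as `.TestEnvironment` instead.)
const NodeEnvironment = require('jest-environment-node');
const { connectToDatabase } = require('./db-helpers'); // assumed helper module

class DatabaseEnvironment extends NodeEnvironment {
  async setup() {
    await super.setup();
    // The worker id keeps parallel workers on separate databases.
    this.global.db = await connectToDatabase(
      `test_db_${process.env.JEST_WORKER_ID}`,
    );
  }

  async teardown() {
    await this.global.db.close();
    await super.teardown();
  }
}

module.exports = DatabaseEnvironment;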

@jrr
Author

jrr commented Jul 25, 2019

@jeysal by "custom environment", you mean the thing specified via testEnvironment, right? Does that give you an opportunity for setup/teardown per-worker, or only per-file?

@jeysal
Contributor

jeysal commented Jul 25, 2019

Well, not exactly per-worker. You can use JEST_WORKER_ID to distinguish between workers and give each its own database, but I can see how that might not be enough in some use cases, performance-wise.
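
Concretely (a sketch; connect is an assumed helper, and this still runs once per file, which is the performance concern):

// In any test or setup file: JEST_WORKER_ID is "1", "2", ... up to -w.
const { connect } = require('./db-helpers'); // assumed helper module

const dbName = `test_db_${process.env.JEST_WORKER_ID}`;

beforeAll(async () => {
  global.db = await connect(dbName);
});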

@d4vidi
Contributor

d4vidi commented Oct 14, 2020

A big +1 for this 👍🏻

I'm from the core team maintaining Detox (which has already contributed enhancements to Jest in the past), and we would also really appreciate this kind of support. Our use case is effectively identical, except that instead of databases, the resource in question is Android/iOS emulators. We already have the means to bring up emulators on a per-worker basis, but we lack the ability to clean up efficiently (i.e. immediately, once a worker is no longer required). The problem becomes even more pronounced when these emulators are rented from external SaaS providers such as Genymotion Cloud: you literally pay for rental time, and hence must painstakingly optimize it.

@airhorns
Contributor

Have any of you folks found a way to emulate this feature using the existing setup / teardown primitives?

@d4vidi
Contributor

d4vidi commented Nov 18, 2020

@airhorns I've tried doing so by registering a process.on('beforeExit', () => {...}) callback in a worker's context (reference).
Unfortunately, it empirically seems that Jest keeps all workers alive right until the last one is done, so there's no added value compared to subscribing a global-teardown listener (which is the obvious alternative).

In addition, it seems Jest doesn't tolerate asynchronous work done past the time it expects everything to be torn down -- you get the famous warning saying Jest is still alive 1 second after it should have ended, and you can't distinguish this cause from a real problem with your tests/code. That is what happens at best, actually; at worst, your callback is force-killed prematurely along with the worker itself.
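
Roughly what was attempted (a sketch, not a recommendation; releaseEmulator and its module are assumed helpers):

// Registered from a worker's context, e.g. a setup file.
const { releaseEmulator } = require('./emulator-helpers'); // assumed helper module

process.on('beforeExit', () => {
  // Empirically this fires only once Jest finally lets the worker exit,
  // i.e. after all workers are done; async work started here risks the
  // "did not exit one second after" warning, or being killed with the worker.
  releaseEmulator();
});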

@airhorns
Contributor

@d4vidi that makes sense, thanks for the tip! Are you running without Jest's module resetting so that you keep a handle on your per-worker shared resource throughout the run? I've been trying to get the same thing going, but I'm struggling to keep any kind of persistent state between suites because of the module reset, which seems desirable for other reasons.
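
One trick that comes up for exactly this (heavily hedged; verify on your Jest version): the custom environment module is loaded by the worker's own require, outside the per-file sandbox, so its module scope can survive across test files in the same worker:

// db-environment.js -- module-level state here is not reset per file.
const NodeEnvironment = require('jest-environment-node');
const { connectToDatabase } = require('./db-helpers'); // assumed helper module

let sharedDb = null; // can live for the life of the worker process

class WorkerScopedEnvironment extends NodeEnvironment {
  async setup() {
    await super.setup();
    if (!sharedDb) sharedDb = await connectToDatabase();
    this.global.db = sharedDb; // same handle for every file in this worker
  }
  // There is still no reliable hook to close sharedDb when the worker
  // exits, which is exactly what this issue is asking for.
}

module.exports = WorkerScopedEnvironment;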

@Dfenniak

Dfenniak commented Jun 2, 2021

Seems like it would help this person out a lot... and anyone who got here trying to figure out how to set up a DB per worker: #10552 is a possible, if inelegant, workaround.

@schmod

schmod commented Jun 22, 2021

We already have the means to bring up emulators on a per-worker basis

@d4vidi – I'm curious, how did you implement this?

@sahilrajput03

sahilrajput03 commented May 12, 2022

I'm a fairly new JS developer, but I made something that solves* this issue. Please don't criticize; I just mean to present a solution for curious people with this comment.

* solves it with my own test runner, not with the Jest runner.

flash is a test runner I created a few days back. To try this, you need the setup below:

git clone https://github.com/sahilrajput03/flash
cd flash && npm link
cd ..

git clone https://github.com/sahilrajput03/learning-monogo-and-mongoosejs
cd learning-monogo-and-mongoosejs/mongoosejs-with-hot-flash
npm i && npm link flash
npm start

Now if you edit code in code.js, your database connection won't be thrown away but reused, giving you a lightning-fast TDD experience with no connection loss when running tests in watch mode.

@github-actions

This issue is stale because it has been open for 1 year with no activity. Remove stale label or comment or this will be closed in 30 days.

@github-actions github-actions bot added the Stale label May 12, 2023
@darkbasic

Not stale.

@github-actions github-actions bot removed the Stale label May 13, 2023
@rattrayalex
Contributor

@SimenB or other maintainers – would a PR implementing this proposal be welcome?

@jedwards1211

jedwards1211 commented Jun 19, 2023

In our case we need to share a (non-serializable) ts-morph Project between test files, because the TypeScript API does several seconds of synchronous parsing whenever we use it.

Having one instance per worker would completely solve the performance problems we face. And it's truly the only way we could get both good performance and a sensible file structure; TS API objects aren't serializable, and fully isolated test suites aren't giving us any worthwhile safety guarantees in this case.
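
For illustration, what per-worker sharing could look like here (a sketch; it assumes the module holding the cache is evaluated once per worker, e.g. via the environment-module trick above, rather than inside the per-file module registry):

// shared-project.js -- pay ts-morph's seconds of synchronous parsing
// once per worker instead of once per test file.
const { Project } = require('ts-morph');

let project = null;
function getSharedProject() {
  if (!project) project = new Project({ tsConfigFilePath: 'tsconfig.json' });
  return project;
}

module.exports = { getSharedProject };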

@github-actions

This issue is stale because it has been open for 1 year with no activity. Remove stale label or comment or this will be closed in 30 days.

@darkbasic

Not stale.

@github-actions github-actions bot removed the Stale label Jun 20, 2024