feat: forceWorkers param to force workers to be used when they otherwise wouldn't be #13168

Closed
Wants to merge 9 commits

Conversation

@phawxby (Contributor) commented Aug 24, 2022

#11956 (comment)

Fairly sure I've done a better job of testing that the functionality behaves as expected, via e2e tests.

@phawxby marked this pull request as ready for review August 24, 2022 17:48
docs/CLI.md (outdated)
@@ -275,7 +281,7 @@ Lists all test files that Jest will run given the arguments, and exits.

 ### `--logHeapUsage`

-Logs the heap usage after every test. Useful to debug memory leaks. Use together with `--runInBand` and `--expose-gc` in node.
+Logs the heap usage after every test. Useful to debug memory leaks. Can be used together with `--runInBand` and `--expose-gc` in node to diagnose memory leaks, etc.
Contributor

Perhaps 'or'? Reading the user's comment, it sounded like 'and' was the misleading part; at least that is how my eye reads this 'and'.

It would also be good to make this change in the versioned docs; they live in website/versioned_docs.

Suggested change
Logs the heap usage after every test. Useful to debug memory leaks. Can be used together with `--runInBand` and `--expose-gc` in node to diagnose memory leaks, etc.
Logs the heap usage after every test. Useful to debug memory leaks. Can be used together with flags like `--runInBand` or `--expose-gc` to diagnose memory leaks, etc.
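For context, combining these flags typically looks something like this (a sketch; how `--expose-gc` is passed to Node and the exact Jest binary path depend on the project setup):

```bash
# Run Jest in a single process, with V8 GC exposed, logging heap usage after every test
node --expose-gc ./node_modules/.bin/jest --runInBand --logHeapUsage
```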

@phawxby (Contributor, Author) commented Aug 24, 2022

@nover this should do what you need

phawxby and others added 3 commits August 24, 2022 18:49
Co-authored-by: Tom Mrazauskas <tom@mrazauskas.de>
Co-authored-by: Tom Mrazauskas <tom@mrazauskas.de>
@phawxby (Contributor, Author) commented Aug 24, 2022

Sorted.

@mrazauskas (Contributor)
Thanks. These were just tiny details. You should add a Changelog entry as well (;

@mrazauskas (Contributor)
Out of curiosity: why did you go for a new flag instead of tweaking the shouldRunInBand logic as mentioned in #11956 (comment)? On the surface it sounds logical that --maxWorkers 1 would still use a worker to run tests (and only --maxWorkers 0 would disable workers). Is there some catch?

@phawxby (Contributor, Author) commented Aug 24, 2022

Out of curiosity: why did you go for a new flag instead of tweaking the shouldRunInBand logic as mentioned in #11956 (comment)? On the surface it sounds logical that --maxWorkers 1 would still use a worker to run tests (and only --maxWorkers 0 would disable workers). Is there some catch?

Primarily because shouldRunInBand has other situations where it might run in band, such as based on the speed of the tests being run, and a separate flag avoids a breaking change to the way the existing arguments work.

@mrazauskas (Contributor)
I see. There are most probably other limitations.

Just an idea: what if having workerIdleMemoryLimit in the config simply forced workers? It does not work otherwise, and the shouldRunInBand logic might be a source of flakiness, no? The docs also now state that this option is 'primarily a work-around', so I was thinking it is fine to force workers and skip the shouldRunInBand logic for this use case, to make the work-around do its job.

It might also make sense to throw (or at least print a warning) if --run-in-band is passed together with workerIdleMemoryLimit.
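For illustration, the combination that would warrant a warning or error is roughly this (a sketch; flag spellings follow the comments here, and whether the limit is accepted on the command line or only in the config is an open question):

```bash
# workerIdleMemoryLimit can never take effect when tests run in-band (no workers),
# so the suggestion above is to warn about, or reject, this combination
jest --runInBand --workerIdleMemoryLimit=500MB
```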

@phawxby (Contributor, Author) commented Aug 25, 2022

I see. There are most probably other limitations.

Just an idea: what if having workerIdleMemoryLimit in the config simply forced workers? It does not work otherwise, and the shouldRunInBand logic might be a source of flakiness, no? The docs also now state that this option is 'primarily a work-around', so I was thinking it is fine to force workers and skip the shouldRunInBand logic for this use case, to make the work-around do its job.

It might also make sense to throw (or at least print a warning) if --run-in-band is passed together with workerIdleMemoryLimit.

That is a fair point: the memory limit is totally useless without workers, and there is nothing at runtime to indicate that is the case. I'm happy to do a second PR to make that change, and then I guess you can pick your preferred route.

Based on that, it would be --maxWorkers 1 --workerIdleMemoryLimit 500MB to get linear test runs with workers.
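Concretely, that would make the invocation for linear runs with a recyclable worker look roughly like this (a sketch; whether the memory limit is passed as a CLI flag or set in the Jest config depends on which change lands):

```bash
# One worker at a time; the worker is restarted once its idle memory exceeds 500MB
jest --maxWorkers=1 --workerIdleMemoryLimit=500MB
```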

@phawxby (Contributor, Author) commented Aug 25, 2022

While I was digging around the code I also had the thought that the way we pass around the number of workers isn't ideal. We pass 1 as an indicator of runInBand, but logically 0 should mean runInBand, i.e. no workers please; 1 should be a valid number of actual workers to use.

@nover commented Aug 25, 2022
@phawxby
Wow, thanks. I've been testing this out in our problematic repository and I can confirm that replacing --runInBand with --forceWorkers --maxWorkers 1 does indeed work. Setting the idle memory limit to 2GB resulted in two "recycles" of the worker, so the suite runs to completion 🎉
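As a sketch, the replacement described here (--forceWorkers is the flag proposed in this PR; the idle memory limit may equally be set in the Jest config rather than on the command line):

```bash
# before: one process, no workers, memory accumulates until the suite dies
jest --runInBand

# after: a single worker that is recycled whenever its idle memory exceeds the limit
jest --forceWorkers --maxWorkers=1 --workerIdleMemoryLimit=2GB
```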

With the risk of sounding greedy: when do you expect to have this available in an installable alpha? :)

@phawxby (Contributor, Author) commented Aug 25, 2022

@nover I'll see if I can find some time to implement @mrazauskas's suggestion above later today, and then I'll let them decide which approach they want to take.

@phawxby (Contributor, Author) commented Aug 25, 2022

@SimenB @mrazauskas pick your poison, #13168 or #13171; both do roughly the same thing in different ways.

@github-actions (bot)
This PR is stale because it has been open 90 days with no activity. Remove stale label or comment or this will be closed in 30 days.

@github-actions (bot) added the Stale label Nov 29, 2022
@github-actions (bot)
This PR was closed because it has been stalled for 30 days with no activity. Please open a new PR if the issue is still relevant, linking to this one.

@github-actions (bot)
This pull request has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.
Please note this issue tracker is not a help forum. We recommend using StackOverflow or our discord channel for questions.

@github-actions (bot) locked as resolved and limited conversation to collaborators Jan 29, 2023