perl syntax checking in parallel, but limit the number of processes. #4328
Conversation
Adapted from the refresh_taxonomies.js script. The syntax checking is slow (5-6 mins on GitHub), but checking over 100 scripts simultaneously is also unworkable: openfoodfacts#3368. The limit is configurable, in case the environment running the checks ever has more than the 2 cores it currently does. Increasing the limit on 2 cores works (at least with 3 and 4), but is no faster.
scripts/check_perl.js
running = Number.MAX_SAFE_INTEGER;
while (running > 0) {
  running = spawns.reduce((pv, cv) => {
    if (cv.exitCode === null) {
You might want to check cv.signalCode here as well. If the process gets killed for an out-of-memory reason, the exitCode will stay null and CI won't fail until the timeout is reached.
This is an issue with refresh-taxonomies.js too, as we saw recently with the Travis failures.
From the exit event documentation. This refers to the function arguments, but it also applies to the properties:
The 'exit' event is emitted after the child process ends. If the process exited, code is the final exit code of the process, otherwise null. If the process terminated due to receipt of a signal, signal is the string name of the signal, otherwise null. One of the two will always be non-null.
so, if (cv.exitCode === null && cv.signalCode === null) { ?
@svensven Yeah, I think that's good enough for our purposes. Technically signalCode can be some non-null signal that didn't lead to exit (like SIGUSR2 or whatever), but that seems pretty unlikely to happen.
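The fix being discussed could look like the sketch below. The function name `countRunning` and the standalone shape are illustrative, not the actual check_perl.js code; only the condition on `exitCode` and `signalCode` comes from the review thread.

```javascript
// Count the children in `spawns` that are still running. Per the Node.js
// 'exit' event docs, exactly one of exitCode/signalCode becomes non-null
// when a child ends, so a process is alive only while both are null.
function countRunning(spawns) {
  return spawns.reduce((pv, cv) => {
    // A killed child (e.g. OOM) has exitCode === null but a non-null
    // signalCode, so testing exitCode alone would count it as running.
    if (cv.exitCode === null && cv.signalCode === null) {
      return pv + 1;
    }
    return pv;
  }, 0);
}
```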
…are no longer running. See #4328 (comment)
to match refresh_taxonomies.js. openfoodfacts#4329 (comment)
If it's just syntax checking, maybe we could try a way to run the tests only when the scripts have changed, like we do for the taxonomies. It doesn't make much sense to test 100 scripts every time. I don't know whether we can find out which files have changed in a PR when we run the tests, though.
That would be nice. Someone has written this: https://github.com/jitterbit/get-changed-files but, looking at its issues, GitHub doesn't expose enough information to handle force-pushes etc., so it may be difficult to achieve.
This one seems to use a different technique, which maybe doesn't have the same issue:
also, some whitespace, because it took me far too long to realise the run block wasn't all one backslash-continued command.
they were never checked directly, but checking all scripts checked them by inheritance.
Kudos, SonarCloud Quality Gate passed! 0 Bugs. No Coverage information.
Should shave another 2-3 mins off the pull request workflow.
Update: with the changed-files workflow plugin, the whole 5-6 mins is removed.