Add color.h test #524
Conversation
I added some print statements to [...]. It fails to find the variable it is trying to redefine in [...] for [...]. So, why is [...]?
Thanks for the investigation. I will look into the ParFORM issue. (You know, in programming, the person you were a month ago is a stranger. Then, the person you were more than 10 years ago is...) By the way, maybe this (the code that gives the Valgrind error) should be broken up into small unit tests.
It seems the problem with valgrind and [...] is [...]. With [...], [...]. Edit: it seems to be some kind of race condition: if I add a MesPrint in [...], [...]. The easiest solution is to just disable valgrind for this test...
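(For reference, the mechanism the test suite already has for this, quoted later in this thread, is a `#pend_if valgrind?` directive in the test file, which skips the test when it runs under valgrind.)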
Don't run this test with tform and less than 4 threads, it goes extremely slowly. See discussion in form-dev#524 .
Once #525 is merged we can rebase this on top, and the parform tests will run successfully also.
Here is a simpler way to reproduce the load-balancing issue under valgrind. I cut the expression out of the color test. This script does basically nothing at all, but on my system [...]
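The script itself was lost from this page. As a stand-in, here is a hypothetical minimal reproducer in the same spirit; the file name, expression, and binary paths are invented for illustration and are not the original:

```sh
# Hypothetical stand-in for the lost script (NOT the original): a trivial
# expression whose sorting still exercises tform's worker load balancing.
cat > repro.frm <<'EOF'
Symbol x;
Local F = (1+x)^100;
.sort
Multiply (1+x);
.end
EOF

# Binary names follow the valgrind builds discussed in this thread; per the
# thread, -w2 under valgrind can hang or run very slowly, while -w4 is OK.
valgrind ./tvorm -w2 repro.frm
valgrind ./tvorm -w4 repro.frm
```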
Don't run this test with tform and less than 4 threads, it goes extremely slowly. See discussion in form-dev#524 .
Do you eventually want to put [...]? The former does not comply with the rule in README (modulo [...]). Perhaps the rule could be reconsidered and rewritten. Maybe the [...].
If we want to run the CI tests for [...]. For running Valgrind/coverage jobs for [...].
What do you mean by forcer not being compatible with valgrind?
Forcer always hits memory leaks in FORM, if my memory is correct.
OK, I will try to test that. Ideally both the color and forcer tests can pass valgrind too.
Here is the issue that is problematic for Forcer with Valgrind: #252
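For anyone retrying this, a sketch of the kind of invocation involved; the test file name and binary path are placeholders, not the repository's actual test setup:

```sh
# Illustrative only: run a forcer-based test under valgrind with full leak
# checking; --error-exitcode makes the run fail visibly on any valgrind error.
valgrind --leak-check=full --error-exitcode=1 ./vorm forcer_test.frm
```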
Currently the forcer test (and the color test, if it moves to the same status) is not part of the coveralls statistics. Perhaps they should also go into that run. At least on my machine, the forcer test running with [...]
Do you think it makes sense to keep things like forcer, color etc. all in separate directories, or would it be better to create a single directory (e.g. "extra") with such tests in it? Then the CI can run everything in the extra dir (but not for the valgrind run).
I think collecting extra test cases into the "extra" directory is a good idea.
@tueda how does this seem? Now "extra" runs both the forcer and color tests, and I added them to the coverage run. Further tests which require external libraries, or which otherwise don't fit in the default tests, could also go into "extra".
Oh, now I see what you meant, but just to clarify: (1) you don't want to collect [...]? By the way, in the future, duplicated code in workflows may be rewritten with reusable workflows.
In principle [...]. If they take a long time, can't we just split the runs up with the "groups" mechanism that the valgrind tests already use? In principle it would be nice to add valgrind tests for [...]. Is there a particular reason why the coverage check should only be for the tests which run under valgrind? This isn't actually quite the case at the moment anyway, since a few tests are skipped for the valgrind run but run anyway with lcov.
OK, placing all the extra tests in the "extra" directory is fine.
It depends on how we define the scope of test coverage. Measuring coverage for Valgrind-validated tests ensures that our tests include verification of the absence of memory errors, at least those detectable by Valgrind (though currently there are [...]).
Right, this does make sense; it is good to know the % of code which is both covered and valgrind-error-free. In this case, is there a way to fake the valgrind flag, so that the tests which are [...]? Then all test runs could "just run everything", and rely on the [...].
I think I can add a command option for this.
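A sketch of what such an option could look like; both the flag name and the calling convention here are invented for illustration, not the actual implementation:

```sh
# Hypothetical flag: make the test runner behave as if the binaries were
# running under valgrind, so that tests guarded by '#pend_if valgrind?' are
# skipped even in a plain (non-valgrind) run.
./check/check.rb --pretend-valgrind tform
```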
More generally, is it possible to have a "build" job which caches the executables, which are then used by all of the checks? Currently all 35 valgrind jobs compile FORM, as I understand it. Maybe this will improve performance a bit, though I expect some of the harder valgrind tests will still be the bottleneck.
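(One way to do this on GitHub Actions, assuming nothing about the current workflow layout: a first job uploads the built binaries with `actions/upload-artifact`, and the test jobs retrieve them with `actions/download-artifact` before running their groups.)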
See discussion in form-dev#524.
This might be worth a try. But there could be some time overhead for uploading/downloading executables, and hardware or OS versions may not be guaranteed to fully match across different jobs(?). (I've had a bad experience with a Google Colab VM, though that is a different vendor.)
OK, then in that case maybe it is fine as it is. At least, the tests only build the binaries that they use, so the build is fairly fast.
Don't run this test with tform and less than 4 threads, it goes extremely slowly. See discussion in form-dev#524 .
Run the color.h example as a test. Fetch color.h and put it in formlib. Reorganise the "forcer", "checkpoint" and "multithreaded" tests into the "extra" dir, along with the "color" test. Run the extra tests in the valgrind and coverage runs. Some tests are disabled under valgrind and thus also under coverage. Fix the patches settings in the multithreaded test such that it runs under vorm, tvorm and tvorm -w2 (now that warnings are printed if these settings are implicitly adjusted).
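For concreteness, the fetch step presumably looks something like the following; the exact URL on the Nikhef server is assumed here and may differ from what the CI actually uses:

```sh
# Assumed download location for color.h; adjust if the packages move.
mkdir -p formlib
wget -q https://www.nikhef.nl/~form/maindir/packages/color/color.h \
  -O formlib/color.h
```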
This is rebased and cleaned up. Now all the extra tests live in the "extra" dir, and these are run also for the valgrind and code coverage runs. The usual #pend_if valgrind? skips the forcer tests under those. The CI should pass in principle, but it looks like jobs are failing to fetch color.h from nikhef.nl. I am not sure if there are just temporary issues there, or there is some rate limiting, in which case we need to fetch the file from elsewhere...
You are allowed to put the color.h in the examples. There is nothing proprietary about it.
This is one option indeed. Another could be to host a mirror of the packages on the Nikhef website in a new repository in the form-dev organisation.
I think this is a "temporary" issue, because 6 other jobs successfully accessed the Nikhef server and downloaded color.h.
Which jobs are fetching color.h unnecessarily? It is all conditional on running the "extra" tests, isn't it?
Oh, sorry, I was wrong. I saw that [...]
I also made tests for mincer2 and minceex. The latter is not clean under valgrind though; I'll try to make a simple reproducing example.
This test seems trickier than expected. Currently:

- `mpirun -np {1,2} parform`: OK
- `mpirun -np {3,4} parform`: hang?
- `mpirun -np {5,6} parform`: crash
- `valgrind vorm`: OK
- `valgrind tvorm -w2`: mostly hangs; takes 100-200s, sometimes finishes in 2s
- `valgrind tvorm -w4`: OK, finishes in 2s

The CI sees valgrind errors in `vorm` and `tvorm` that I can't reproduce locally. Edit: I can reproduce them on Ubuntu 20.04 (which the runners are using) but not on 22.04.