[test] New OpenACC/CUDA/C++ test from MCH #342
Conversation
@ajocksch Please associate this PR with the issue it solves.

@ajocksch Are the failures on Kesch expected?
vkarak left a comment
Apart from my specific comments, I have the following more generic ones:
- The test provided by MCH also supports two test cases (MPI and no MPI) without OpenACC. Should we generate tests for these as well? The corresponding issues do not require that, though.
- Are the failures on Kesch expected?
@rfm.parameterized_test(['mpi'], ['nompi'])
class OpenaccCudaMpiCppstd(rfm.RegressionTest):
The class name should not have the Mpi inside it because you are also generating a test that is not using MPI. This is confusing.
done
@rfm.parameterized_test(['mpi'], ['nompi'])
class OpenaccCudaMpiCppstd(rfm.RegressionTest):
    def __init__(self, withmpi):
The withmpi should be a boolean, not a string.
done
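For illustration, a minimal sketch of what the boolean variant could look like, using the same decorator that appears in this diff (the class name here is just a placeholder, following the rename discussed further down):

```python
import reframe as rfm


# Parameterize over a boolean instead of the strings 'mpi' / 'nompi'
@rfm.parameterized_test([True], [False])
class OpenaccCudaCpp(rfm.RegressionTest):
    def __init__(self, withmpi):
        super().__init__()
        # withmpi is a plain boolean, so no string comparison is needed
        if withmpi:
            self.mpiflag = ' -DUSEMPI'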
self.valid_prog_environs = ['PrgEnv-cray', 'PrgEnv-pgi']
if self.current_system.name in ['daint', 'dom']:
    self.modules = ['craype-accel-nvidia60']
    self._pgi_flags = '-O2 -acc -ta=tesla:cc60 -Mnorpath -lstdc++'
Why is the -lstdc++ needed here?
It throws the link error std_cpp_call.o:(.eh_frame+0x12): undefined reference to __gxx_personality_v0 if the option is not there.
This is for some reason required:
std_cpp_call.o:(.eh_frame+0x12): undefined reference to `__gxx_personality_v0'
I could also try to move it into the makefile.
It's fine here. No problem.
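For context, the relevant line from the snippet above with the reason for the extra flag spelled out (a sketch only; keeping the flag in the Makefile instead, as mentioned, should work equally well):

```python
# -lstdc++ pulls in the C++ runtime library, so that the C++ object
# (std_cpp_call.o) resolves __gxx_personality_v0 when the PGI driver
# performs the final link.
self._pgi_flags = '-O2 -acc -ta=tesla:cc60 -Mnorpath -lstdc++'
```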
self._nvidia_sm = '37'

if withmpi == 'mpi':
    self.mpiflag = ' -DUSEMPI'
You could also make this "private" as with the _nvidia_sm. Also the flag should be USE_MPI, not USEMPI.
Thank you!
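A quick sketch of both suggestions applied together, assuming withmpi has already been turned into a boolean as discussed above:

```python
if withmpi:
    # 'private' attribute, consistent with self._nvidia_sm, and the
    # conventional USE_MPI spelling for the preprocessor macro
    self._mpiflag = ' -DUSE_MPI'
```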
teojgo left a comment
lgtm apart from the changes proposed by @vkarak
Currently the check with pgi+mpi fails; the reason should be the wrong PrgEnv modules. This should be fixed when merging into master.

About 1.: Do you mean to compile with -hnoacc? I think this is not possible, since a CUDA kernel is called; we would need to provide a CPU version of that part of the code as well.

@jenkins-cscs retry all

lgtm. @ajocksch Are the Kesch failures expected?
…cks/mch_openacc_cuda_mpi_cppstd
#348 needs to be merged into master first.
vkarak left a comment
Please change the name of the class, and I will approve.
@rfm.parameterized_test([True], [False])
class OpenaccCudaMpiNoMPICppstd(rfm.RegressionTest):
The name is weird! Why not just OpenaccCudaCpp, which is in accordance with the issue as well?
@teojgo Can you change the name of this class? Other than that, this is ready to be merged.
I don't think the automatically generated name will be very meaningful for this one: OpenaccCudaCpp_False and OpenaccCudaCpp_True. So it should be set manually.
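One possible way to set it manually (a sketch; the exact naming scheme, and the assumption that self.name may simply be overridden in __init__, are mine):

```python
@rfm.parameterized_test([True], [False])
class OpenaccCudaCpp(rfm.RegressionTest):
    def __init__(self, withmpi):
        super().__init__()
        # override the auto-generated OpenaccCudaCpp_True / _False names
        self.name = 'OpenaccCudaCpp' + ('Mpi' if withmpi else 'NoMpi')
```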
…scs/reframe into checks/mch_openacc_cuda_mpi_cppstd
Codecov Report
Coverage Diff (master vs. #342):
  Coverage  91.12% -> 91.09%  (-0.03%)
  Files     68     -> 68
  Lines     8244   -> 8244
  Hits      7512   -> 7510    (-2)
  Misses    732    -> 734     (+2)
Continue to review the full report at Codecov.
@ajocksch Can you check if the failures on Kesch are expected? IMO, perhaps they are not. If they are, though, I can merge this immediately.
Conflicts:
  config/cscs.py
  cscs-checks/mch/automatic_arrays.py
@jenkins-cscs retry dom
This PR also provides a small correction of automatic_arrays.py.
Closes #248
Closes #249