The current MPI details scheme might not be flexible enough for all scenarios. Here's one scenario that it does not handle well. It's not an urgent problem, but it might be good to make MPI details flexible enough to handle this kind of scenario:
 * cluster of 32 4-way SMPs
 * want to test several BTLs, including "sm"
 * but "sm" cannot be tested by itself except when we are running on one node
For example, the following MPI details definition will not work when a job spans multiple nodes, because multi-node jobs will be launched with "--mca btl self,sm":
{{{
[MPI Details: Open MPI]
exec = mpirun -np &test_np() --prefix &test_prefix() --mca btl self,@btl@ &test_executable() &test_argv()
btl = &enumerate("tcp", "sm")
}}}

Instead, it seems like we want to make the value of @btl@ a bit more conditional -- in this case, we want it to depend on how many nodes (''not'' the value of np!) the job will run across.
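One purely illustrative sketch of what such a conditional could look like -- note that &if() and &num_nodes() are hypothetical funclet names invented for this example, not existing MTT funclets:

{{{
[MPI Details: Open MPI]
exec = mpirun -np &test_np() --prefix &test_prefix() --mca btl self,@btl@ &test_executable() &test_argv()
# hypothetical: include "sm" in the enumeration only when the job
# fits on a single node; multi-node jobs fall back to "tcp" alone
btl = &if(&num_nodes() == 1, &enumerate("tcp", "sm"), "tcp")
}}}

Evaluating @btl@ per invocation against the node count (rather than against np) would let single-node runs still exercise "sm" while multi-node runs avoid launching with an unusable BTL list.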