Custom command to run "make test" with MPI #296

Open
galexv opened this issue Mar 7, 2017 · 0 comments · May be fixed by #613

Comments

@galexv
Collaborator

galexv commented Mar 7, 2017

As noted by Hiroshi (@shinaoka), on some systems `mpiexec -np ...` is not the correct way to run MPI jobs. Some way to specify a user-defined launch command must be provided (other than re-running CMake with `-DMPIEXEC=... -DMPIEXEC_NUMPROC_FLAG=...`, etc.)

Cf. issue #211.

@galexv galexv added this to the Short-term fixes to 1.0.0 milestone Mar 7, 2017
@galexv galexv self-assigned this Mar 7, 2017
galexv added a commit that referenced this issue Mar 29, 2019
This commit introduces the following environment variables that affect the
`make test` (or `ctest`) behavior of ALPSCore:

| Variable                      | Default                   | Usual value    | Meaning                                                                                        |
|-------------------------------|---------------------------|----------------|------------------------------------------------------------------------------------------------|
| `ALPS_TEST_MPIEXEC`           | `${MPIEXEC}`              | `mpiexec`      | MPI launcher                                                                                   |
| `ALPS_TEST_MPI_NPROC_FLAG`    | `${MPIEXEC_NUMPROC_FLAG}` | `-n`           | Flag to specify the number of MPI processes                                                    |
| `ALPS_TEST_MPI_NPROC`         | `1`                       | `1`            | How many MPI processes to launch in MPI-enabled tests                                          |
| `ALPS_TEST_MPIEXEC_PREFLAGS`  | `${MPIEXEC_PREFLAGS}`     | (empty string) | MPI launcher arguments preceding the executable name                                           |
| `ALPS_TEST_MPIEXEC_POSTFLAGS` | `${MPIEXEC_POSTFLAGS}`    | (empty string) | MPI launcher arguments placed after the executable name, before the executable's own arguments |

The `${...}` above are CMake variables, normally set by the `FindMPI` module.
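
For concreteness, here is a minimal shell sketch of how a test driver could expand these variables into a launch command. The variable names come from the table above, but the fallback logic and the expansion order are assumptions for illustration, not ALPSCore's actual implementation:

```sh
# Hypothetical driver logic (a sketch, not the real ALPSCore test driver):
# fall back to the configure-time CMake values when a variable is unset or empty.
: "${ALPS_TEST_MPIEXEC:=mpiexec}"       # default: ${MPIEXEC}
: "${ALPS_TEST_MPI_NPROC_FLAG:=-n}"     # default: ${MPIEXEC_NUMPROC_FLAG}
: "${ALPS_TEST_MPI_NPROC:=1}"
# Unquoted expansion on purpose: multi-word values (see Case 3 below) must
# split into separate arguments, and space-only values (see Case 2 below)
# must disappear entirely after word-splitting.
${ALPS_TEST_MPIEXEC} ${ALPS_TEST_MPI_NPROC_FLAG} ${ALPS_TEST_MPI_NPROC} \
    ${ALPS_TEST_MPIEXEC_PREFLAGS} ./some_test ${ALPS_TEST_MPIEXEC_POSTFLAGS}
```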

Related: issue #211.

This should close #296.

Intended use:

**Case 1: Vanilla MPI-enabled environment.**

The command to run an MPI program using 2 processes: `mpiexec -n 2 some_test`

Setting the variables to run each MPI-enabled test on 2 processes: `ALPS_TEST_MPI_NPROC=2 make test`

**Case 2: NERSC Cori**

(*Disclaimer:* not tested with an actual Cori run)

Users are not supposed to run `mpiexec` directly; one has to allocate interactive nodes first.

Allocating 2 Haswell nodes for 30 minutes: `salloc -N 2 -C haswell -q interactive -t 0:30:00`

Command to run on the allocated nodes: `srun some_test`

Setting the variables to run each MPI-enabled test on the allocated nodes:
`ALPS_TEST_MPIEXEC=srun ALPS_TEST_MPI_NPROC=' ' ALPS_TEST_MPI_NPROC_FLAG=' ' make test`

(note the variables are assigned spaces, not empty strings!)
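
A quick shell demonstration of why a space works where an empty string would not (assuming the driver treats an empty variable as "unset, use the default", as in the sketch above): a lone space is non-empty, so it overrides the configure-time default, yet it vanishes under unquoted word-splitting:

```sh
NPROC_FLAG=' '   # non-empty: suppresses the default, but passes no flag
NPROC=' '
set -- srun ${NPROC_FLAG} ${NPROC} some_test   # unquoted: spaces split away
echo "$@"                                      # prints: srun some_test
```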

**Case 3: Blue Waters**

(*Disclaimer:* not tested on an actual Blue Waters machine)

The `aprun` command is supposed to be used to launch parallel processes from an interactive node (see https://bluewaters.ncsa.illinois.edu/using-aprun).

Command to run on 16 cores, using 8 cores per node (that is, 2 nodes), placing the processes on adjacent cores:
`aprun -N 8 -d 1 -n 16 some_test`

Setting the variables to run each MPI-enabled test with this configuration:
`ALPS_TEST_MPIEXEC=aprun ALPS_TEST_MPI_NPROC=16 ALPS_TEST_MPI_NPROC_FLAG='-N 8 -d 1 -n' make test`
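
As a sanity check (under the same assumed expansion order as the sketch earlier in this message), these settings reproduce the intended `aprun` command line:

```sh
ALPS_TEST_MPIEXEC=aprun
ALPS_TEST_MPI_NPROC_FLAG='-N 8 -d 1 -n'
ALPS_TEST_MPI_NPROC=16
echo ${ALPS_TEST_MPIEXEC} ${ALPS_TEST_MPI_NPROC_FLAG} ${ALPS_TEST_MPI_NPROC} some_test
# -> aprun -N 8 -d 1 -n 16 some_test
```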
@galexv galexv linked a pull request Mar 29, 2019 that will close this issue