Re-implement branch mpi-collective #8

Closed · 3 of 4 tasks
rubenhorn opened this issue Oct 12, 2023 · 5 comments · Fixed by #31

@rubenhorn (Contributor) commented Oct 12, 2023

The branch mpi-collective was never merged and is now far behind master (see master...mpi-collective).
The features on this branch therefore need to be re-implemented based on the current version of the master branch.

To-Do:

  • Re-implement changes from branch mpi-collective
  • Test correctness of the code using simple static couette flow scenario
  • Test correctness of the code using dynamic scenario (adding/removing MD instances depending on error threshold)
  • Delete stale branch mpi-collective
@rubenhorn rubenhorn added the enhancement New feature or request label Oct 12, 2023
@amartyads (Member) commented:

Some clarification: re-implementing was my idea, since this branch predates the clang-tidy reformat of the whole codebase. As a result, automatic merging (which I tried) does not work properly; I ran into the same issues as back then. For example, git does not detect the large function signatures as identical and often ends up with two copies of the same function in one file. So I thought it would be easier to grab the needed code from the old branch and bring it up to date on a new branch.

Unless anyone else has other ideas?

@rubenhorn (Contributor, Author) commented Nov 20, 2023

Starting work on the new branch.

The difference between the original branch and the re-implementation is tracked here (only considers changes on one branch),
and the difference between master and the re-implementation here.

@rubenhorn (Contributor, Author) commented Nov 28, 2023

Manual test of correctness using 4 MD instances:

MPI-collective:
run with couette.xml.template (md5: 6a406ca3e2c1958eb875409d9d94c981)

Sequential:
run with couette.xml.template (md5: 6a406ca3e2c1958eb875409d9d94c981)

Comparison:

md5sum *.csv *.vtk | awk '{print $1}' > output.hashes # Ignore different filenames
diff output.hashes /path/to/sequential/version/output.hashes # Finds no difference

@rubenhorn (Contributor, Author) commented Dec 20, 2023

Manual test of correctness using dynamic MD ensemble:

MPI-collective:
run with a configuration based on couette.xml.template (md5: 3254eb45201b39cf95bcf0aaba6a27c6),
with fix-seed="yes" and

number-md-simulations="dynamic"
min-number-md="4"
error-start="0.1"
error-end="0.05"

Sequential:
run with a configuration based on couette.xml.template (md5: 3254eb45201b39cf95bcf0aaba6a27c6)
(same change as for MPI-collective)

Comparison:

./compare_sim_outputs.py . /path/to/sequential/version/
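
For context, a minimal sketch of what such a comparison script could look like is shown below. This is not the actual compare_sim_outputs.py; the file patterns, the token-wise parsing, and the reporting are assumptions.

#!/usr/bin/env python3
# Sketch of a numeric comparison of two output directories (not the actual
# compare_sim_outputs.py): every token that parses as a float is compared
# numerically, everything else has to match exactly.
import re
import sys
from pathlib import Path

def tokens(path):
    # Split on whitespace, commas and semicolons so that both the CSV and
    # the VTK files decompose into comparable tokens.
    return [t for t in re.split(r"[,;\s]+", path.read_text()) if t]

def compare_file(a, b):
    ta, tb = tokens(a), tokens(b)
    if len(ta) != len(tb):
        print(f"{a.name}: different number of tokens ({len(ta)} vs {len(tb)})")
    max_diff = 0.0
    for x, y in zip(ta, tb):
        try:
            max_diff = max(max_diff, abs(float(x) - float(y)))
        except ValueError:
            if x != y:
                print(f"{a.name}: token mismatch {x!r} vs {y!r}")
    return max_diff

def main(dir_a, dir_b):
    worst = 0.0
    for f in sorted(Path(dir_a).glob("*.csv")) + sorted(Path(dir_a).glob("*.vtk")):
        other = Path(dir_b) / f.name
        if not other.exists():
            print(f"missing in {dir_b}: {f.name}")
            continue
        diff = compare_file(f, other)
        worst = max(worst, diff)
        print(f"{f.name}: max abs diff = {diff:g}")
    print(f"overall max abs diff = {worst:g}")

if __name__ == "__main__":
    main(sys.argv[1], sys.argv[2])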

The values in the output start to diverge after 36 coupling cycles.[1]
The maximum difference in the CSV files is 0.9154 (7%) for the macroscopic mass.
The maximum difference in the VTK files is 0.0027 (5%) for the Y-velocity.

The values in the output are identical if 2-way coupling is disabled.

Footnotes

  1. The error may result from the change in how the sum of values across instances is obtained, since floating-point addition is not associative (see IEEE 754, Section 3.2, or the corresponding Wikipedia article).
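
A quick way to see this effect in isolation (plain Python, independent of the codebase):

# Floating-point addition is not associative: the same three values summed
# in a different order give different rounded results.
a, b, c = 1e16, -1e16, 1.0
print((a + b) + c)  # 1.0
print(a + (b + c))  # 0.0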

@rubenhorn (Contributor, Author) commented Dec 28, 2023

My suspicion is that the difference for dynamic MD ensembles is simply attributable to the use of the MPI reduce operation. While Balaji and Kimpe (2013) mention that implementers are encouraged to take care not to introduce non-determinism between runs, this guarantee does not extend to a collective vs. a sequential implementation.
I have been testing with OpenMPI, which assumes that the reduce operations are associative (see the section Predefined Reduce Operations in the documentation); this does not hold for the addition of floating-point numbers.
When the collective MPI methods are used only for broadcasting from macro to MD, there is no divergence, which supports this hypothesis.
The coupling cycle in which the results first diverge (by 1e-7 for one cell and one value) is also slightly influenced by the value of the attribute reorganise-memory-every-timestep of the node simulation-configuration in the configuration (offset of one coupling cycle for a value of 10000000000000 instead of 20).
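
To make the suspected mechanism concrete, here is a small sketch (using mpi4py, which is an assumption; the project itself calls MPI from C++). The same per-rank contributions summed via a collective reduce and via a fixed-order loop over gathered values need not agree bitwise, because the reduction tree implies a different addition order; whether they actually differ depends on the MPI implementation and the number of ranks.

# Sketch (assumes mpi4py and NumPy are available); run e.g. with:
#   mpirun -n 8 python3 reduce_order_sketch.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Rank-dependent value standing in for a per-instance MD quantity.
local = np.random.default_rng(seed=rank).random(1)

# Collective reduction: the addition order is chosen by the MPI implementation.
collective_sum = np.zeros(1)
comm.Reduce(local, collective_sum, op=MPI.SUM, root=0)

# "Sequential" reference: gather all contributions and add them in rank order.
gathered = comm.gather(local[0], root=0)
if rank == 0:
    sequential_sum = 0.0
    for value in gathered:
        sequential_sum += value
    print(f"collective: {collective_sum[0]:.17g}")
    print(f"sequential: {sequential_sum:.17g}")
    print("bitwise identical:", collective_sum[0] == sequential_sum)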

A visual inspection of the two versions after 50 coupling cycles using ParaView shows no discernible difference, although that is quite a weak test:
[Image: LB_collective_mpi_48]
[Image: LB_master_48]
The difference between both images is indicated below in pink and is completely attributable to resizing artifacts.
[Image: difference]

@rubenhorn rubenhorn linked a pull request Dec 29, 2023 that will close this issue