MPI Parallelization #138
Conversation
…o be passed around
- no more use of cout
- set loggers on other ranks to only print errors
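For illustration, rank-dependent log levels with spdlog could look like the following minimal sketch (the particular level choice and setup are assumptions, not the PR's exact code):

```cpp
#include <mpi.h>

#include <spdlog/spdlog.h>

int main(int argc, char **argv) {
  MPI_Init(&argc, &argv);
  int rank{};
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  // Rank 0 keeps the normal log level; every other rank only reports errors,
  // so regular output is not duplicated once per rank.
  spdlog::set_level(rank == 0 ? spdlog::level::info : spdlog::level::err);
  spdlog::info("printed on rank 0 only");
  spdlog::error("printed on every rank");
  MPI_Finalize();
}
```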
@gomezzz Regarding the runtime summary at the end: what do we want there for each entry?
…g on the decomposition
Co-authored-by: FG-TUM <FG-TUM@users.noreply.github.com>
- Fixing tests
Smarter domain decomposition
@FG-TUM As discussed, I tried out constellations by just uncommenting the respective line in the default config. Currently segfaults on this branch. EDIT: Also happens without MPI, so constellations are currently broken, I guess?
Also, in the CI we are getting the same error I mentioned once before, which seems to occur occasionally depending on the number of fragments generated in the breakup 🤔
@FG-TUM Please add a warning that constellations are broken :)
Description
Adds support for MPI parallelization. The decomposition is a simple regular grid decomposition.
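The PR text does not spell out the decomposition code here; purely as a sketch of the idea, a regular grid of ranks can be derived from MPI's Cartesian topology helpers (the function name and structure below are hypothetical, not taken from this PR):

```cpp
#include <mpi.h>

#include <array>

// Builds a regular grid of ranks over a 3D domain using MPI's built-in
// Cartesian topology helpers. MPI_Dims_create fills dims with a
// factorization of the total rank count that is as balanced as possible.
MPI_Comm makeGridComm(MPI_Comm comm) {
  int numRanks{};
  MPI_Comm_size(comm, &numRanks);
  std::array<int, 3> dims{0, 0, 0};     // 0 == let MPI choose this dimension
  MPI_Dims_create(numRanks, 3, dims.data());
  std::array<int, 3> periods{0, 0, 0};  // non-periodic boundaries
  MPI_Comm gridComm{};
  MPI_Cart_create(comm, 3, dims.data(), periods.data(), /*reorder=*/1, &gridComm);
  return gridComm;
}
```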
- Sending / receiving of halo particles (see comment; a minimal sketch follows this list)
- New ID system (ideas below)
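A minimal sketch of what exchanging halo particles with one neighbor rank could look like (exchangeHalos and the flattened x,y,z buffer layout are illustrative assumptions, not this PR's actual implementation):

```cpp
#include <mpi.h>

#include <vector>

// Exchanges halo particle positions (flattened x,y,z triples) with one
// neighbor rank. Counts are exchanged first because the number of halo
// particles generally differs between the two sides.
std::vector<double> exchangeHalos(const std::vector<double> &send, int neighbor, MPI_Comm comm) {
  int sendCount = static_cast<int>(send.size());
  int recvCount{};
  MPI_Sendrecv(&sendCount, 1, MPI_INT, neighbor, /*tag=*/0,
               &recvCount, 1, MPI_INT, neighbor, /*tag=*/0, comm, MPI_STATUS_IGNORE);
  std::vector<double> recv(recvCount);
  MPI_Sendrecv(send.data(), sendCount, MPI_DOUBLE, neighbor, /*tag=*/1,
               recv.data(), recvCount, MPI_DOUBLE, neighbor, /*tag=*/1, comm, MPI_STATUS_IGNORE);
  return recv;
}
```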
Ideas:
- Partition the 64-bit ID space evenly across ranks: 2^64 / 10^6 ≈ 1.8 * 10^13 IDs per rank (sketch below).
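One way to read that idea as code, assuming a budget of at most 10^6 ranks (a sketch only; maxRanks, idsPerRank, and nextId are made-up names):

```cpp
#include <cstdint>

// Deterministic, collision-free ID generation: each rank owns a contiguous
// slice of the 64-bit ID space. With up to 10^6 ranks that leaves
// 2^64 / 10^6 ~= 1.8 * 10^13 IDs per rank.
constexpr uint64_t maxRanks = 1'000'000;
constexpr uint64_t idsPerRank = UINT64_MAX / maxRanks;

uint64_t nextId(int rank) {
  // Per-process counter; not thread-safe, which is fine for a sketch.
  static uint64_t localCounter = 0;
  return static_cast<uint64_t>(rank) * idsPerRank + localCounter++;
}
```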
Optional TODOs
If these are not addressed in this PR, create issues for them.
- Change std::ofstream to spdlog::basic_logger_mt<spdlog::async_factory> (see the sketch below).
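A sketch of what that change could look like (the file name, pattern, and queue sizing are assumptions; the factory-based basic_logger_mt overload and init_thread_pool are spdlog's actual API):

```cpp
#include <spdlog/async.h>
#include <spdlog/sinks/basic_file_sink.h>

void writeOutput() {
  // Instead of writing via std::ofstream, create an asynchronous file logger:
  // log calls only enqueue the message, a background thread does the I/O.
  spdlog::init_thread_pool(8192, 1);  // queue size, number of worker threads
  auto logger = spdlog::basic_logger_mt<spdlog::async_factory>("output", "output.csv");
  logger->set_pattern("%v");          // raw payload, no timestamp/level prefix
  logger->info("id,x,y,z");
  // ... one logger->info(...) per record instead of ofstream << ...
}
```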
Related Pull Requests
Resolved Issues
How Has This Been Tested?