
mpi: Always generate MPIComm with MPI enabled #1905

Merged: 1 commit merged into master from patch-mpi-init on Apr 15, 2022
Conversation

FabioLuporini (Contributor) commented:
Fixes #1728 (cc @nmbader)
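For reference, the linked issue (#1728) arises when running one MPI rank per GPU with Devito's domain decomposition. The snippet below is a minimal sketch of that usage pattern, not code from this PR; the environment settings shown in the comments are assumptions based on Devito's standard MPI/GPU configuration.

```python
# Minimal sketch of the multi-GPU usage pattern from issue #1728 (assumed setup,
# not code from this PR). One MPI rank per GPU, e.g.:
#   DEVITO_MPI=1 DEVITO_PLATFORM=nvidiaX DEVITO_LANGUAGE=openacc \
#       mpirun -n 2 python this_script.py
from devito import Grid, TimeFunction, Eq, Operator

grid = Grid(shape=(256, 256, 256))              # domain-decomposed across MPI ranks
u = TimeFunction(name='u', grid=grid, space_order=2)

# A trivial time-stepping update; with this PR the generated code always carries
# the MPI communicator when MPI is enabled, so each rank can target its own GPU
# instead of all ranks allocating on the same device.
op = Operator(Eq(u.forward, u + 1))
op.apply(time_M=10)
```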

FabioLuporini added the MPI (mpi-related) label on Apr 14, 2022
codecov bot commented on Apr 14, 2022

Codecov Report

Merging #1905 (bcd8c1f) into master (e1b1692) will decrease coverage by 0.00%.
The diff coverage is 88.88%.

@@            Coverage Diff             @@
##           master    #1905      +/-   ##
==========================================
- Coverage   89.64%   89.63%   -0.01%     
==========================================
  Files         210      210              
  Lines       35523    35530       +7     
  Branches     5361     5363       +2     
==========================================
+ Hits        31843    31849       +6     
  Misses       3183     3183              
- Partials      497      498       +1     
Impacted Files | Coverage Δ
devito/passes/iet/mpi.py | 98.85% <88.88%> (-0.56%) ⬇️

Δ = absolute <relative> (impact), ø = not affected, ? = missing data

georgebisbas (Contributor) left a comment:

Unless no tests are required for this, all good

FabioLuporini merged commit b23a82f into master on Apr 15, 2022
FabioLuporini deleted the patch-mpi-init branch on Apr 15, 2022 at 07:21
Labels: MPI (mpi-related)
Projects: None yet

Development
Successfully merging this pull request may close these issues.

Operators allocate memory blocks on the same GPU when using multiple GPUs with domain decomposition
3 participants