
Check the role of MPI in MPI+X #11

Closed
ouankou opened this issue Jul 22, 2020 · 2 comments

ouankou (Owner) commented Jul 22, 2020

In our research, only the OpenMP part is studied. We need to make sure the proposal can be applied to an MPI+X architecture without much code change, and that it does not conflict with the parallelism conducted by the MPI layer.

@ouankou ouankou self-assigned this Jul 22, 2020
@ouankou ouankou added this to In progress in OpenMP 5.0 in ROSE Jul 22, 2020
@ouankou ouankou added this to In progress in Metadirective Jul 23, 2020
@ouankou ouankou moved this from In progress to To do in Metadirective Jul 23, 2020
@ouankou ouankou removed this from In progress in OpenMP 5.0 in ROSE Jul 23, 2020
ouankou (Owner, Author) commented Jul 24, 2020

In the AMR library based on Charm, MPI or any other load-balancing technique is independent of the computation inside a cell.
Each cell has its own local data for computing, and all cells hold the same amount of data. Refinement or coarsening increases or reduces the number of cells rather than changing the amount of data inside a cell.
Load balancing distributes the cells across nodes for better parallelism, but does not directly affect the computing inside a cell.

Our goal is to use metadirective to speed up the computing in all cases. For AMR in Charm, if the cell size in an application is very large, meaning each cell holds more data, the computing should be offloaded to the GPU. This is irrelevant to how many cells the application has; load balancing is responsible for that.

@ouankou ouankou moved this from To do to In progress in Metadirective Jul 24, 2020
ouankou (Owner, Author) commented Aug 19, 2020

Added to Overleaf.
MPI is used for load balancing at the outer level of parallelism, such as distributing part of the AMR mesh to a compute node.
After that, within an MPI rank, the computing can be performed sequentially or in parallel with OpenMP, OpenACC, or CUDA. This is the inner parallelism.

Metadirective targets the inner parallelism and doesn't affect the load balancing.

@ouankou ouankou closed this as completed Aug 19, 2020
Metadirective automation moved this from In progress to Done Aug 19, 2020