
Convert device-aware MPI implementation for LinearAlgebra::distributed::Vector #14571

Merged

Conversation

@masterleinad (Member) commented Dec 12, 2022

Built on top of #14537.

The following tests are known to fail:

- mpi/parallel_partitioner_device_06.mpirun=4.debug
- mpi/parallel_partitioner_device_07.mpirun=4.debug

This also still needs work for running on GPUs.

@masterleinad (Member Author) commented:

/rebuild

@masterleinad force-pushed the kokkos_la_d_vector_device_aware_mpi branch from 49c65ac to 340ec56 on December 13, 2022 03:24
@masterleinad force-pushed the kokkos_la_d_vector_device_aware_mpi branch from 1a53a05 to 10e4a84 on December 29, 2022 15:58
@masterleinad changed the title from "[WIP] Convert device-aware MPI implementation for LinearAlgebra::distributed::Vector" to "Convert device-aware MPI implementation for LinearAlgebra::distributed::Vector" on Dec 30, 2022
@masterleinad marked this pull request as ready for review on December 30, 2022 19:11
@drwells (Member) commented Jan 25, 2023

Can you rebase?

@masterleinad (Member Author) replied:

> Can you rebase?

What would rebasing change?

@drwells (Member) commented Jan 25, 2023

I should have been more specific - my intention is that we should squash some commits.

@masterleinad force-pushed the kokkos_la_d_vector_device_aware_mpi branch from d74fa6e to ac60c6e on January 25, 2023 15:17
@masterleinad (Member Author) replied:

> I should have been more specific - my intention is that we should squash some commits.

Ah, sure, here you go!

@drwells (Member) left a review:

I have two quick questions on CMake variables. We should also add a changelog entry.

@@ -65,8 +65,12 @@ macro(feature_mpi_configure_external)
# (in Modules/FindMPI.cmake) at some point. For the time being this is an
# advanced configuration option.
#
option(DEAL_II_MPI_WITH_CUDA_SUPPORT "Enable MPI Cuda support" OFF)
mark_as_advanced(DEAL_II_MPI_WITH_CUDA_SUPPORT)
if(DEAL_II_MPI_WITH_CUDA_SUPPORT)
@drwells (Member) commented on this hunk:

Where does DEAL_II_MPI_WITH_CUDA_SUPPORT get set? AFAICT, based on the changes to cuda.html, we would not expect this to be set by a user any more.

@masterleinad (Member Author) replied:
Only for backward compatibility.
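
A minimal sketch of how such a backward-compatibility mapping can be expressed in CMake; the option names are taken from the hunk above, but the deprecation message and the exact place where the mapping happens are illustrative assumptions, not the PR's actual configure logic:

# Sketch only: keep the deprecated user-facing option working by mapping
# it onto the new, backend-agnostic option.
option(DEAL_II_MPI_WITH_CUDA_SUPPORT "Enable MPI CUDA support (deprecated)" OFF)
mark_as_advanced(DEAL_II_MPI_WITH_CUDA_SUPPORT)

if(DEAL_II_MPI_WITH_CUDA_SUPPORT)
  message(DEPRECATION
    "DEAL_II_MPI_WITH_CUDA_SUPPORT is deprecated; use DEAL_II_MPI_WITH_DEVICE_SUPPORT instead.")
  set(DEAL_II_MPI_WITH_DEVICE_SUPPORT ON)
endif()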

@@ -448,7 +448,10 @@
# define DEAL_II_MPI_VERSION_GTE(major,minor) false
#endif

#cmakedefine DEAL_II_MPI_WITH_DEVICE_SUPPORT
#ifdef DEAL_II_MPI_WITH_DEVICE_SUPPORT
#cmakedefine DEAL_II_MPI_WITH_CUDA_SUPPORT
@drwells (Member) commented on this hunk:

Is this now just for backwards compatibility? I don't see anywhere in the library where it is used any more.

@masterleinad (Member Author) replied:

Only for backward compatibility.
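
A minimal sketch of how downstream C++ code might key off these macros, assuming the new backend-agnostic DEAL_II_MPI_WITH_DEVICE_SUPPORT macro is the one to check and the CUDA-specific macro survives only so that existing checks keep compiling; the function below is hypothetical, not deal.II API:

#include <deal.II/base/config.h>

// Sketch only, not deal.II's actual code: prefer the new, backend-agnostic
// macro; DEAL_II_MPI_WITH_CUDA_SUPPORT remains defined solely for old
// downstream checks.
inline bool use_device_aware_mpi()
{
#if defined(DEAL_II_MPI_WITH_DEVICE_SUPPORT)
  // MPI can be handed device (GPU) buffers directly.
  return true;
#else
  // Data has to be staged through host buffers before MPI calls.
  return false;
#endif
}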

@drwells added the GPU label on Jan 26, 2023
@Rombur (Member) commented Jan 27, 2023

Anyone who wanted to look at this PR has had enough time to do it. So I am merging it.

@Rombur merged commit cd731cf into dealii:master on Jan 27, 2023