
Memory allocation/deallocation is not safe with MVAPICH #1703

Open
ye-luo opened this issue Jul 5, 2019 · 8 comments

Comments

@ye-luo
Contributor

commented Jul 5, 2019

After investigating a 'segmentation fault' on Cooley at ALCF, I found that Mallocator was causing the problem, but I think this is not our fault but MVAPICH's.

Mallocator uses aligned_alloc whenever available; if aligned_alloc is not available, posix_memalign is used. On Cooley, aligned_alloc is available from the OS.
MVAPICH's libmpi.so provides definitions of both posix_memalign and free, but not aligned_alloc. At link time, aligned_alloc is resolved from the OS libc while free is resolved from libmpi.so. That free cannot safely release memory allocated by libc's aligned_alloc, causing the 'segmentation fault'.

Solution 1.
Add an "#if defined(MVAPICH2_VERSION)" guard. This doesn't work when mpi.h is not included.

Solution 2.
Users add -D QMC_EXTRA_LIBS=-lc when configuring CMake, so -lc is added explicitly before the libraries supplied by the MPI compiler wrapper.
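As typed by the user at configure time, the workaround would look something like this (source-directory path hypothetical):

```shell
# Force -lc ahead of the MPI wrapper's libraries so free() resolves to
# libc, matching the libc aligned_alloc().
cmake -D QMC_EXTRA_LIBS=-lc ../qmcpack
```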

Solution 3.
Just don't use aligned_alloc and stick to posix_memalign. The downside: posix_memalign is POSIX-specific, while aligned_alloc is part of the C11 language standard.

I think the fault lies with MVAPICH: it overrides the system (libc/glibc) malloc/posix_memalign/free but missed aligned_alloc.
I verified that MPICH (Intel/Cray) and OpenMPI do not override these routines.

I lean toward solution 2 and pushing MVAPICH to fix the problem. Any thoughts?

@PDoakORNL

Contributor

commented Jul 5, 2019

I think that is best, they should not break libc.

@ye-luo

Contributor Author

commented Jul 5, 2019

Regarding solution 2: we don't add -D QMC_EXTRA_LIBS=-lc ourselves. Users of MVAPICH need to apply this workaround when they invoke cmake.

@prckent

Contributor

commented Jul 7, 2019

I vote for none of the above, although we should inform the mvapich2 project regardless.

Solution: detect mvapich2 during our cmake configuration and add the link-time workaround.

@prckent prckent added the bug label Jul 9, 2019

@harisubramoni


commented Jul 9, 2019

Dear Ye,

Thank you for bringing this to our attention. We appreciate it. So far, we have not received any reports of users wanting to use aligned_alloc. We will see how to handle it in our code and get back to you.

Some history about this feature is given below.

As you may know, (barring a few exceptions) any buffer that an InfiniBand HCA can act upon must be registered with it ahead of time. Since registration for InfiniBand is very expensive, we attempt to cache these registrations, so if the same buffer is reused for communication it will already be registered (speeding up the application). The reason MVAPICH2 (and several other MPI libraries, such as Open MPI; please refer to https://www.open-mpi.org/faq/?category=openfabrics#large-message-leave-pinned and https://www.open-mpi.org/papers/euro-pvmmpi-2006-hpc-protocols/euro-pvmmpi-2006-hpc-protocols.pdf) intercepts malloc and free routines is to preserve correctness while caching these InfiniBand memory registrations, since the MPI library needs to know when the memory is being freed.

Whether disabling registration cache will have a negative effect on application performance depends entirely on the communication pattern of the application. If the application uses mostly small to medium sized messages (approximately less than 16 KB), then disabling registration cache will mostly have no impact on the performance of the application.

The following section of the userguide has more information about the impact of disabling memory registration cache on application performance.

http://mvapich.cse.ohio-state.edu/static/media/mvapich/mvapich2-2.3.1-userguide.html#x1-1340009.1.3

This feature can be disabled at runtime by setting “MV2_USE_LAZY_MEM_UNREGISTER=0”. The following section of the userguide has more information about this parameter.

http://mvapich.cse.ohio-state.edu/static/media/mvapich/mvapich2-2.3.1-userguide.html#x1-26100011.81
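For reference, the runtime form of that switch would be something like (application name and process count hypothetical):

```shell
# Disable lazy memory unregistration for this run only.
MV2_USE_LAZY_MEM_UNREGISTER=0 mpirun -np 2 ./a.out
```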

This feature can be disabled at configuration time using the "--disable-registration-cache" parameter. The following section of the userguide has more information about this parameter.

http://mvapich.cse.ohio-state.edu/static/media/mvapich/mvapich2-2.3.1-userguide.html#x1-140004.4

Best,
Hari.

@ye-luo

Contributor Author

commented Jul 9, 2019

@harisubramoni I tried MV2_USE_LAZY_MEM_UNREGISTER=0 and it doesn't work around the current segmentation fault. As the OpenMPI documentation says, OpenMPI doesn't turn on memory registration caching by default, which is why I didn't hit the problem there.

@harisubramoni


commented Jul 9, 2019

@ye-luo - can you please try the configuration option I mentioned?

@ye-luo

Contributor Author

commented Jul 9, 2019

@harisubramoni I will try when I have free cycles. Here is a minimal reproducer.

#include <stdlib.h>

int main(void)
{
  /* aligned_alloc resolves to libc, but free may resolve to libmpi.so */
  float *a = aligned_alloc(64, 512);
  float *b = aligned_alloc(64, 512000);
  free(a);
  free(b);
  return 0;
}

This is standard-conforming C code (aligned_alloc is part of C11).

@ye-luo

Contributor Author

commented Jul 9, 2019

@harisubramoni I can confirm that after building MVAPICH with --disable-registration-cache, malloc, posix_memalign, and free are no longer intercepted by MVAPICH, and thus the test code passes. However, this solution is not useful in practice: we can't ask users to build their own MPI library, and administrators may not want to disable this feature for performance reasons. Do you know of any macro defined by the MVAPICH compiler wrapper itself, even without including any MPI header files?
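One way to answer that question empirically is to dump the compiler's predefined macros without including any header. Here cc stands in for the mpicc wrapper, which I don't have at hand; with MVAPICH one would pipe mpicc's output through the same grep.

```shell
# -dM -E prints every macro the compiler predefines before any #include.
# With MVAPICH, run this through mpicc instead of cc; an empty grep result
# means the wrapper defines no identifying macro outside mpi.h.
cc -dM -E -x c /dev/null | grep -i mvapich || echo "no wrapper-level macro found"
```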
