
Releases: tskisner/pshmem

Switch to using Python SharedMemory

17 Mar 21:54
02cd641

In order to resolve portability challenges with the use of shared memory, switch to using multiprocessing.shared_memory.SharedMemory from the standard library. This fixes the tests on macOS. One workaround is still needed to manually disable resource tracking, but that monkey patch will no longer be needed after Python 3.13.0. One possible concern with this approach is that the resource_tracker code used by this Python package spawns a helper process, and historically some MPI implementations were very unhappy with that. However, I tested this with OpenMPI on Linux and macOS, and also with Cray MPICH running across multiple nodes, and no problems have been observed so far.
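For reference, the resource tracking workaround is roughly of the following form. This is a minimal sketch of the commonly used monkey patch for disabling SharedMemory resource tracking, not necessarily the exact code in this release; on Python 3.13.0 and later, passing track=False to SharedMemory makes it unnecessary.

```python
from multiprocessing import resource_tracker


def remove_shm_from_resource_tracker():
    """Monkey patch: stop resource_tracker from tracking SharedMemory segments.

    Without this, the tracker's helper process may unlink segments that are
    still in use by other processes on the same node.
    """

    def fix_register(name, rtype):
        if rtype == "shared_memory":
            return None
        return resource_tracker._resource_tracker.register(name, rtype)

    def fix_unregister(name, rtype):
        if rtype == "shared_memory":
            return None
        return resource_tracker._resource_tracker.unregister(name, rtype)

    resource_tracker.register = fix_register
    resource_tracker.unregister = fix_unregister
    if "shared_memory" in resource_tracker._CLEANUP_FUNCS:
        del resource_tracker._CLEANUP_FUNCS["shared_memory"]
```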

Fix catch of unit test failures

14 Mar 05:48
2ca8e38
  • Unit test failures were not triggering a non-zero return to the calling shell. Now fixed (see the sketch after this list).
  • Moved the pre-deletion of the shared memory segment to after all processes have wrapped the buffer in a numpy array, ensuring that the buffer is not released too early. This fixes a failure on macOS.
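A minimal sketch of the kind of test entry point implied by the first item, assuming a unittest-based runner (the test directory here is illustrative, not necessarily the package's actual layout):

```python
import sys
import unittest

# Discover and run the test suite, then exit non-zero if anything failed,
# so that CI jobs and calling shells see the failure.
suite = unittest.defaultTestLoader.discover("pshmem/test")
result = unittest.TextTestRunner(verbosity=2).run(suite)
sys.exit(0 if result.wasSuccessful() else 1)
```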

Move to sysv_ipc

31 Jan 22:52
7dbb4ed

The posix_ipc module is not available on conda-forge. This release moves the codebase to using sysv_ipc, which is available there.

Fix python version in deploy action

30 Jan 21:29

Small updates to tests and CI workflows

30 Jan 21:25
b1d60d5
  • Bump Python versions for tests and requirements.
  • Fix use of dtype np.int_ in tests.
  • Use concurrency in GitHub Actions rather than the cancel-workflow action.
  • Update versioneer for Python 3.12 compatibility.

Use Numpy Arrays for Single Process per Node

15 Dec 17:58

Even in the MPI case, if there is only one process in the node shared communicator, use a simple numpy array. This reduces the total number of shared memory segments, which the kernel limits through the maximum number of open files (since shared memory segments appear as files to userspace).
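A minimal sketch of this fallback logic, with hypothetical names (the actual MPIShared internals differ):

```python
import numpy as np


def allocate_node_buffer(shape, dtype, nodecomm=None):
    # "nodecomm" is the per-node shared communicator, or None without MPI.
    nodeprocs = 1 if nodecomm is None else nodecomm.size
    if nodeprocs == 1:
        # Only one process on this node:  a private numpy array is enough
        # and consumes no kernel shared memory segments (no open files).
        return np.zeros(shape, dtype=dtype)
    # Otherwise, allocate a real shared memory segment (not shown here).
    raise NotImplementedError("shared segment allocation omitted from sketch")
```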

Use POSIX shared memory instead of MPI shared windows

09 Dec 23:45

This converts the MPIShared class to use POSIX shared memory underneath. We had been using MPI shared windows for this, even though we manage write access to this memory ourselves. Most MPI implementations have a limit on the number of global shared memory buffers that can be allocated, and this is insufficient for many use cases. The new code is limited only by the number of shared memory segments per node supported by the kernel. The unit tests were run successfully with up to 8192 processes.
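For reference, a minimal sketch of a node-local POSIX shared memory allocation with the posix_ipc module; the segment name and size are illustrative, and the create / attach / synchronization logic of the real class is omitted:

```python
import mmap

import numpy as np
import posix_ipc

n = 1024
nbytes = n * np.dtype(np.float64).itemsize

# One process per node creates the segment; the others open it by name.
shm = posix_ipc.SharedMemory("/pshmem_example", flags=posix_ipc.O_CREX, size=nbytes)
buf = mmap.mmap(shm.fd, nbytes)
shm.close_fd()

# Wrap the mapped buffer in a numpy array for normal read / write access.
data = np.ndarray((n,), dtype=np.float64, buffer=buf)
data[:] = 0.0

# Cleanup, once every process on the node is done with the buffer.
del data
buf.close()
posix_ipc.unlink_shared_memory("/pshmem_example")
```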

Fix the array interface

25 Oct 00:17

This just adds pass-through methods to the underlying data view.
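For example, pass-through methods of the following kind delegate standard container behavior to the wrapped numpy view. This is a hypothetical illustration; the real class forwards more operations than shown here:

```python
class MPISharedLike:
    """Toy stand-in that forwards read-only container operations to a view."""

    def __init__(self, data):
        self._flat = data  # the underlying numpy array view

    def __getitem__(self, key):
        return self._flat[key]

    def __len__(self):
        return len(self._flat)

    @property
    def shape(self):
        return self._flat.shape
```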

Add array interface to MPIShared

24 Oct 18:39
ef01d9d

This adds the __array__() method to the MPIShared class, which exposes the data member when the object is wrapped in a numpy array.
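A minimal sketch of what the __array__() protocol provides, using a toy wrapper; the attribute name data follows this note, but other details of MPIShared are not shown:

```python
import numpy as np


class Wrapper:
    def __init__(self, arr):
        self.data = arr

    def __array__(self, dtype=None):
        # Returning the internal array lets numpy functions accept the
        # wrapper object directly (np.asarray, np.sum, ...).
        if dtype is None:
            return self.data
        return self.data.astype(dtype)


w = Wrapper(np.arange(5))
print(np.asarray(w))  # [0 1 2 3 4]
print(np.sum(w))      # 10
```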

Fix shared memory allocation on some MPI implementations

27 Jul 10:05
f13f38a
Compare
Choose a tag to compare

On some MPI implementations, allocating zero bytes of shared memory on some processes produces an error. Allocate a small dummy buffer on those processes as a workaround.
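A minimal sketch of this workaround using mpi4py shared windows, which this version of the package was built on; the variable names are illustrative:

```python
from mpi4py import MPI
import numpy as np

nodecomm = MPI.COMM_WORLD.Split_type(MPI.COMM_TYPE_SHARED)

dtype = np.dtype(np.float64)
n_total = 1000
# Only the first process on each node backs the actual data...
nlocal = n_total if nodecomm.rank == 0 else 0
# ...but request at least one element everywhere, since some MPI
# implementations error out on zero-byte shared allocations.
nalloc = max(1, nlocal)

win = MPI.Win.Allocate_shared(nalloc * dtype.itemsize, dtype.itemsize, comm=nodecomm)
buf, itemsize = win.Shared_query(0)
data = np.ndarray(buffer=buf, dtype=dtype, shape=(n_total,))
```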