refactor: refactor storage module and add support for NumPy >= 2.0 #1559

Closed
wants to merge 8 commits
Conversation

@egparedes (Contributor) commented Jun 24, 2024

Refactor the storage module to encapsulate the functionality required from array libraries to allocate memory buffers in a new concept, which also simplifies the design by removing one layer of indirection.

Add:

  • New MemoryResourceHandler concept for low-level memory buffer handling, with two implementations (NumPy and CuPy) that support the interface changes introduced in NumPy 2.0 (a sketch of the concept appears after this change summary)
  • Basic unit tests for the storage module and a TODO note to migrate the old unit tests from cartesian

Change:

  • Rename type aliases for DeviceType literal values in _core.definitions
  • Unify the CuPy-dependent GPU allocation logic in storage.allocators instead of duplicating it in the cartesian and next allocators

Delete:

  • The typing-only protocol ValidNumPyLikeAllocationNS from storage.allocators, which is no longer needed
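
For orientation, a minimal sketch of what the new concept could look like, assuming a Protocol with a malloc entry point and one handler per array library; only the name MemoryResourceHandler is taken from this PR, the method names and bodies here are illustrative assumptions. The try/except import mirrors the unified "is CuPy installed" check described in the change summary above:

from typing import Protocol

import numpy as np

try:
    import cupy as cp  # optional dependency: only present on systems with a GPU stack
except ImportError:
    cp = None


class MemoryResourceHandler(Protocol):
    """Encapsulates the array-library calls needed to allocate raw buffers."""

    @staticmethod
    def malloc(byte_size: int) -> object:
        """Allocate a contiguous 1-D byte buffer of the given size."""
        ...


class NumPyMemoryResourceHandler:
    @staticmethod
    def malloc(byte_size: int) -> np.ndarray:
        return np.empty((byte_size,), dtype=np.uint8)


if cp is not None:

    class CuPyMemoryResourceHandler:
        @staticmethod
        def malloc(byte_size: int) -> "cp.ndarray":
            return cp.empty((byte_size,), dtype=cp.uint8)
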

@egparedes changed the title from refactor: refactor storage module to support NumPy >= 2.0 to refactor: refactor storage module and add support for NumPy >= 2.0 Jun 24, 2024
@egparedes requested a review from havogt June 25, 2024 05:35
@egparedes marked this pull request as ready for review June 25, 2024 07:29
TensorShape: TypeAlias = Sequence[int]  # TODO(egparedes): figure out if PositiveIntegral can be made to work
BufferStrides: TypeAlias = Sequence[IntScalar]
Contributor:
not used?

@dataclasses.dataclass(frozen=True, init=False)
class _BaseNDArrayBufferAllocator(abc.ABC, Generic[core_defs.DeviceTypeT]):
    """Base class for buffer allocators using NumPy-like modules."""

class MemoryResourceHandler(Protocol[core_defs.DeviceTypeLiteralT]):
Contributor:
Maybe just

Suggested change
class MemoryResourceHandler(Protocol[core_defs.DeviceTypeLiteralT]):
class Allocator(Protocol[core_defs.DeviceTypeLiteralT]):

or even _Allocator?

Contributor:
or BufferAllocator if you think Allocator is too generic.

Contributor:
After reading more of this file, my suggestion doesn't make sense for what this is intended to do, but I am not sure I like this grouping into a class with classmethods.

The trigger for this change was byte_bounds; the following solution looks cleaner to me:

  • malloc returns the buffer and its address, because that's the only place we use it (if I didn't miss anything)
  • address_of as a single-dispatch function, registering the cupy and numpy buffer types (sketched below)

tensorize is duplicated; it seems cleaner to use a free function and pass the namespace.
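
A minimal sketch of that suggestion, assuming the helper is named address_of as in the comment above; the implementations are assumptions. The NumPy branch reads the start address from the array interface, since np.byte_bounds is no longer in the top-level namespace in NumPy 2.0 (it moved to np.lib.array_utils):

import functools

import numpy as np


@functools.singledispatch
def address_of(buffer) -> int:
    raise TypeError(f"unsupported buffer type: {type(buffer)}")


@address_of.register
def _(buffer: np.ndarray) -> int:
    # Start address of the underlying memory block.
    return buffer.__array_interface__["data"][0]


try:
    import cupy as cp

    @address_of.register
    def _(buffer: cp.ndarray) -> int:
        return buffer.data.ptr  # raw device pointer of the CuPy buffer

except ImportError:
    pass  # no CuPy: only the NumPy overload is registered
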

Contributor:
While looking at this, I am trying to understand at which abstraction level we would add a low-level pool allocator.

havogt added a commit that referenced this pull request Jun 27, 2024
requires gridtools_cpp >= 2.3.4 for nanobind 2.x

Additional changes: freezes numpy to 1.x until #1559
@egparedes (Contributor, Author) commented:
Closing in favor of #1563.

@egparedes closed this Jun 27, 2024