
Helper for numpy/cupy based on amr.Config #216

Closed
ax3l opened this issue Oct 24, 2023 · 1 comment · Fixed by #289
Labels: backend: cuda (Specific to CUDA execution (GPUs)), enhancement (New feature or request)

Comments

ax3l commented Oct 24, 2023

Let's add a helper for this pattern:

field_list = mfab.to_cupy() if Config.have_gpu else mfab.to_numpy()

for pti in pc.iterator(pc, level=0):
    soa = pti.soa().to_cupy() if Config.have_gpu else pti.soa().to_numpy()

or this pattern from the HeatEquation guided tutorial:
https://github.com/AMReX-Codes/amrex-tutorials/blob/58fff188ab8963702db208ac6ff0bd47188ceeaa/GuidedTutorials/HeatEquation/Source/main.py#L14-L33

To simplify writing host-device agnostic Python code like this:
https://docs.cupy.dev/en/stable/user_guide/basic.html#how-to-write-cpu-gpu-agnostic-code
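
For reference, a minimal sketch of that CuPy-style dispatch (assuming cupy is installed; cupy.get_array_module returns numpy for host arrays and cupy for device arrays):

import numpy as np
import cupy as cp

def scale(arr, factor):
    # pick the array module that matches where `arr` lives
    xp = cp.get_array_module(arr)
    return xp.multiply(arr, factor)

scale(np.arange(4), 2.0)  # dispatches to numpy on the host
scale(cp.arange(4), 2.0)  # dispatches to cupy on the device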

Maybe we can add:

field_list = mfab.to_xp()

for pti in pc.iterator(pc, level=0):
    soa = pti.soa().to_xp()
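
A minimal sketch of what such a helper could do, assuming it simply dispatches on Config.have_gpu (illustration only; the module import and method signatures are assumptions):

import amrex.space3d as amr  # assumption: 3D build of pyAMReX

def to_xp(self):
    # return cupy views on GPU builds, numpy views otherwise
    return self.to_cupy() if amr.Config.have_gpu else self.to_numpy()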
ax3l added the enhancement (New feature or request) and backend: cuda (Specific to CUDA execution (GPUs)) labels on Oct 24, 2023
ax3l commented Apr 4, 2024

The feature will land via #289.

ax3l self-assigned this on Apr 4, 2024
ax3l closed this as completed in #289 on May 2, 2024