Support NVIDIA's CUDA Python bindings #7461

Merged: 77 commits, Nov 24, 2021

Commits (77)
ccfce50
CUDA: Start of support for CUDA Python bindings
gmarkall Sep 7, 2021
c51a9d1
CUDA testsuite runs with CUDA Python bindings
gmarkall Sep 7, 2021
8db1185
Implement memory allocation for CUDA Python
gmarkall Sep 7, 2021
1244c4e
Some fixes for views with CUDA Python
gmarkall Sep 7, 2021
2ba0e4b
CUDA: Add framework for two separate linker implementations
gmarkall Sep 7, 2021
cd57e37
CUDA: Implement load_module_image with CUDA Python bindings
gmarkall Sep 7, 2021
cb3d680
Implement modules and functions for CUDA Python
gmarkall Sep 8, 2021
b4e93c6
Correct argument preparation with CUDA Python
gmarkall Sep 8, 2021
e4db9a5
Kernel launches now starting with CUDA Python
gmarkall Sep 8, 2021
bd539e8
Skip record test with CUDA Python
gmarkall Sep 8, 2021
0670f2d
Fix block size and occupancy functions and kernel launch for CUDA Python
gmarkall Sep 8, 2021
5e69273
CUDA: Only handle device pointers in launch_kernel
gmarkall Sep 9, 2021
84ce354
Fix CUDA Python stream creation and skip some tests
gmarkall Sep 9, 2021
d25ee6e
Revert changes to context stack
gmarkall Sep 9, 2021
03dd908
CUDA Python fixes for IPC, streams, and CAI
gmarkall Sep 9, 2021
c033420
CUDA Python IPC and context fixes
gmarkall Sep 9, 2021
d1c1eb9
CUDA Python host allocation fixes
gmarkall Sep 9, 2021
96f617a
CUDA Python host allocation fixes
gmarkall Sep 9, 2021
e1dfe1e
Fix managed allocation with CUDA Python
gmarkall Sep 9, 2021
afa0d1b
Fix views with CUDA Python
gmarkall Sep 9, 2021
0b974c7
Some CUDA Python IPC fixes
gmarkall Sep 9, 2021
991b58f
Fix test_cuda_memory with CUDA Python
gmarkall Sep 9, 2021
92e053d
Fix record argument passing with CUDA Python
gmarkall Sep 9, 2021
24d7b48
Unskip remaining skipped CUDA Python tests
gmarkall Sep 9, 2021
00d568c
Fix CUDA driver tests with CUDA Python
gmarkall Sep 10, 2021
5d757be
Fix a couple of CAI tests with CUDA Python
gmarkall Sep 10, 2021
55161ec
Fix CUDA Array Interface tests with CUDA Python
gmarkall Sep 16, 2021
b1ef00f
Mark PTDS as unsupported with CUDA Python
gmarkall Sep 16, 2021
58fd927
Fix context stack tests with CUDA Python
gmarkall Sep 16, 2021
eb959b2
Add file extension map for CUDA Python
gmarkall Sep 28, 2021
be49f23
Fix async callbacks for CUDA Python
gmarkall Sep 28, 2021
217b658
Fix event recording for CUDA Python
gmarkall Sep 28, 2021
3d07299
Fix device_memory_size for CUDA Python
gmarkall Sep 28, 2021
cca4e4e
Fix test_managed_alloc for CUDA Python
gmarkall Sep 28, 2021
7c9b3c3
Fix a few more CUDA Python fails
gmarkall Sep 28, 2021
93cc0f1
Fix remaining CUDA Python test fails
gmarkall Sep 29, 2021
08182d0
Fix import when CUDA Python not available
gmarkall Sep 29, 2021
25eb71c
Merge remote-tracking branch 'numba/master' into cuda-python
gmarkall Sep 29, 2021
0dd03dd
Small comment and whitespace change undo
gmarkall Sep 29, 2021
7cc1f53
Merge remote-tracking branch 'numba/master' into cuda-python
gmarkall Oct 6, 2021
3b0a363
Reuse alloc_key for allocations key in memhostalloc
gmarkall Oct 6, 2021
6404dfb
Simplify getting pointers for ctypes functions
gmarkall Oct 6, 2021
0c6ed5b
Don't use CUDA Python by default
gmarkall Oct 6, 2021
c2d4d8d
Remove some dead code
gmarkall Oct 6, 2021
12effed
Document CUDA Python environment variable
gmarkall Oct 6, 2021
379ac22
Merge remote-tracking branch 'numba/master' into cuda-python
gmarkall Nov 1, 2021
7847d51
driver.py: rename cuda_driver to binding (PR #7461 feedback)
gmarkall Nov 1, 2021
4beb7dd
Rename CUDA_USE_CUDA_PYTHON to CUDA_USE_NV_BINDING
gmarkall Nov 1, 2021
caf34c0
Update docs for CUDA_USE_NVIDIA_BINDING
gmarkall Nov 1, 2021
004d74a
CUDA driver: Use defined values instead of magic numbers for streams
gmarkall Nov 1, 2021
c3e7fdb
CUDA driver error checking: factor out fork detection
gmarkall Nov 1, 2021
2eef758
Use CU_STREAM_DEFAULT in Stream.__repr__
gmarkall Nov 1, 2021
af14cc8
Fix spelling of CU_JIT_INPUT_FATBINARY
gmarkall Nov 1, 2021
d110d1c
Add docstring to add_file_guess_ext
gmarkall Nov 1, 2021
3d964cd
CUDA: Remove a needless del from the Ctypes linker
gmarkall Nov 1, 2021
84d46d4
Some small fixups from PR #7461 feedback
gmarkall Nov 1, 2021
d4d5176
CUDA: Fix simulator by adding missing USE_NV_BINDING to simulator
gmarkall Nov 1, 2021
96776f7
CUDA: Use helper function in test_derived_pointer
gmarkall Nov 1, 2021
0a7a8d8
Re-enable profiler with CUDA Python
gmarkall Nov 1, 2021
0617911
Update documentation for NVIDIA bindings
gmarkall Nov 3, 2021
6eb1924
Merge remote-tracking branch 'numba/master' into cuda-python
gmarkall Nov 4, 2021
43f3ae7
PR #7461 feedback on deprecation wording
gmarkall Nov 8, 2021
57413cf
Merge remote-tracking branch 'numba/master' into cuda-python
gmarkall Nov 22, 2021
37ef39b
CUDA: Add function to get driver version
gmarkall Nov 22, 2021
771bc38
Report CUDA binding availability and use in Numba sysinfo
gmarkall Nov 22, 2021
81809f7
Merge remote-tracking branch 'gmarkall/cuda-python' into cuda-python
gmarkall Nov 22, 2021
5699b91
CUDA: Attempt to test with NVIDIA binding on CUDA 11.4
gmarkall Nov 22, 2021
dd57c0c
CUDA: Add docs for NVIDIA binding support
gmarkall Nov 22, 2021
b86d4fa
Correct spelling of NUMBA_CUDA_USE_NVIDIA_BINDING
gmarkall Nov 22, 2021
29b3ea8
Revert "Re-enable profiler with CUDA Python"
gmarkall Nov 22, 2021
22a4b74
CUDA docs: Note that profiler not supported with NV bindings
gmarkall Nov 22, 2021
1b59892
Correct mis-spelled env var in docs
gmarkall Nov 23, 2021
60321d5
Update CUDA docs based on PR #7461 feedback
gmarkall Nov 23, 2021
1ca9acb
Warn when NVIDIA bindings requested but not found
gmarkall Nov 23, 2021
4a50d0c
Mention env var in NVIDIA bindings warning
gmarkall Nov 23, 2021
8ab1535
Update NV binding env var docs
gmarkall Nov 23, 2021
f93c602
Update numba/core/config.py
gmarkall Nov 23, 2021
Files changed (changes from 2 of the 77 commits):

docs/source/cuda/bindings.rst (6 additions, 3 deletions)
@@ -4,9 +4,12 @@ CUDA Bindings
 Numba supports two bindings to the CUDA Driver APIs: its own internal bindings
 based on ctypes, and the official `NVIDIA CUDA Python bindings
 <https://nvidia.github.io/cuda-python/>`_. Functionality is equivalent between
-the two bindings, with one exception: the NVIDIA bindings presently do not
-support Per-Thread Default Streams (PTDS), and an exception will be raised on
-import if PTDS is enabled along with the NVIDIA bindings.
+the two bindings, with two exceptions:
+
+* the NVIDIA bindings presently do not support Per-Thread Default Streams
+  (PTDS), and an exception will be raised on import if PTDS is enabled along
+  with the NVIDIA bindings.
+* The profiling APIs are not available with the NVIDIA bindings.
 
 The internal bindings are used by default. If the NVIDIA bindings are installed,
 then they can be used by setting the environment variable
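As a usage illustration (not part of the diff above): a minimal Python sketch of opting into the NVIDIA binding as these docs describe. The environment variable must be set before Numba's CUDA subsystem is imported, since the binding is selected at import time; the sketch follows the documented behaviour rather than this PR's internals.

    import os

    # Request the official NVIDIA binding; the internal ctypes-based binding
    # remains the default when this variable is unset or "0".
    os.environ["NUMBA_CUDA_USE_NVIDIA_BINDING"] = "1"

    # Per the note above, importing with PTDS enabled alongside the NVIDIA
    # binding raises an exception, so leave PTDS off.
    from numba import cuda
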
docs/source/cuda/overview.rst (5 additions, 7 deletions)
@@ -65,9 +65,9 @@ CUDA Bindings
 Numba supports interacting with the CUDA Driver API via the `NVIDIA CUDA Python
 bindings <https://nvidia.github.io/cuda-python/>`_ and its own ctypes-based
 binding. The ctypes-based binding is presently the default as Per-Thread
-Default Streams and profiler APIs are not supported with the NVIDIA bindings,
-but otherwise functionality is equivalent between the two. You can install the
-NVIDIA bindings with::
+Default Streams and the profiler APIs are not supported with the NVIDIA
+bindings, but otherwise functionality is equivalent between the two. You can
+install the NVIDIA bindings with::
 
    $ conda install nvidia::cuda-python
 
@@ -77,10 +77,8 @@ if you are using Conda, or::
 
 if you are using pip.
 
-The use of NVIDIA bindings is enabled by setting the environment variable
-``NUMBA_CUDA_USE_NVIDIA_BINDING`` to ``"1"``. See
-:ref:`GPU Support Environment Variables <numba-envvars-gpu-support>` for more
-information.
+The use of the NVIDIA bindings is enabled by setting the environment variable
+:envvar:`NUMBA_CUDA_USE_NVIDIA_BINDING` to ``"1"``.
 
 .. _cudatoolkit-lookup:
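A hedged sketch of guarding the opt-in described above: only request the NVIDIA binding when the cuda-python package is importable (its top-level module name, "cuda", is assumed here). Per the "Warn when NVIDIA bindings requested but not found" commit in this PR, Numba itself also warns and falls back to the ctypes binding if the request cannot be honoured.

    import importlib.util
    import os

    # Enable the NVIDIA binding only if the cuda-python package is present;
    # Numba would otherwise warn and fall back to its ctypes binding.
    if importlib.util.find_spec("cuda") is not None:
        os.environ["NUMBA_CUDA_USE_NVIDIA_BINDING"] = "1"

    from numba import cuda
    print(cuda.is_available())  # True when a CUDA driver and device are usable
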
docs/source/reference/envvars.rst (2 additions, 2 deletions)
@@ -516,12 +516,12 @@ GPU support
 heuristic needs to check the number of SMs available on the device in the
 current context.
 
-.. envvar:: CUDA_WARN_ON_IMPLICIT_COPY
+.. envvar:: NUMBA_CUDA_WARN_ON_IMPLICIT_COPY

    Enable warnings if a kernel is launched with host memory which forces a copy to and
    from the device. This option is on by default (default value is 1).
 
-.. envvar:: CUDA_USE_NVIDIA_BINDING
+.. envvar:: NUMBA_CUDA_USE_NVIDIA_BINDING
 
    When set to 1, Numba will use the `NVIDIA CUDA Python binding
    <https://nvidia.github.io/cuda-python/>`_ to make calls to the driver API
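To illustrate what NUMBA_CUDA_WARN_ON_IMPLICIT_COPY (on by default) guards against, here is a small, hypothetical kernel: launching it on a host NumPy array forces implicit host-to-device and device-to-host copies and triggers the warning, whereas passing a device array does not.

    import numpy as np
    from numba import cuda

    @cuda.jit
    def double(x):
        i = cuda.grid(1)
        if i < x.size:
            x[i] *= 2

    host_arr = np.arange(16, dtype=np.float64)
    double[1, 16](host_arr)   # host array: implicit copies, warns by default

    dev_arr = cuda.to_device(host_arr)
    double[1, 16](dev_arr)    # device array: no implicit copy, no warning
    cuda.synchronize()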