Reapply "refactor(python): drop support for 3.9, document 3.14 support (#1069)" (#1109) #1191
base: main
Changes from all commits: f27711e, 78c1449, 80e646e, 0c44cc6, 62cdc43, 0f00a6e, 52090b2, 69d4b59
CI workflow (test matrix):

@@ -28,7 +28,6 @@ jobs:
       fail-fast: false
       matrix:
         python-version:
-          - "3.9"
           - "3.10"
           - "3.11"
           - "3.12"
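The motivation for dropping the 3.9 entry shows up in the code diffs below: annotations move to PEP 604 union syntax (`X | None`), which is only a valid runtime expression on Python 3.10+. A minimal sketch (not code from this PR) of the failure mode on 3.9:

```python
# Sketch (not from this PR): why the code base can no longer import on Python 3.9
# once PEP 604 unions appear in annotations evaluated at class-definition time.
from dataclasses import dataclass


@dataclass
class Config:
    # On Python 3.10+ the expression `int | None` evaluates to a types.UnionType.
    # On Python 3.9 it raises
    #   TypeError: unsupported operand type(s) for |: 'type' and 'NoneType'
    # unless the module opts in with `from __future__ import annotations`.
    retries: int | None = None


print(Config(retries=3))  # prints Config(retries=3) on 3.10+
```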
Device module:

@@ -10,7 +10,7 @@ from cuda.bindings cimport cydriver
 from cuda.core.experimental._utils.cuda_utils cimport HANDLE_RETURN

 import threading
-from typing import Optional, Union
+from typing import Union
Review thread on the typing import change:
- Any reason to modernize …? Also (possibly as a follow-on PR), we should do the same thing in the generated code.
- Reply: I can't remember exactly why, but I think it might've been because ruff took care of some of the …
- Reply: I've already started on it, so I'll just push up a commit in this PR.

 from cuda.core.experimental._context import Context, ContextOptions
 from cuda.core.experimental._event import Event, EventOptions

@@ -951,7 +951,7 @@ class Device:
     """
     __slots__ = ("_id", "_mr", "_has_inited", "_properties")

-    def __new__(cls, device_id: Optional[int] = None):
+    def __new__(cls, device_id: int | None = None):
         global _is_cuInit
         if _is_cuInit is False:
             with _lock, nogil:

@@ -1223,7 +1223,7 @@ class Device:
         """
         raise NotImplementedError("WIP: https://github.com/NVIDIA/cuda-python/issues/189")

-    def create_stream(self, obj: Optional[IsStreamT] = None, options: Optional[StreamOptions] = None) -> Stream:
+    def create_stream(self, obj: IsStreamT | None = None, options: StreamOptions | None = None) -> Stream:
         """Create a Stream object.

         New stream objects can be created in two different ways:

@@ -1254,7 +1254,7 @@ class Device:
         self._check_context_initialized()
         return Stream._init(obj=obj, options=options, device_id=self._id)

-    def create_event(self, options: Optional[EventOptions] = None) -> Event:
+    def create_event(self, options: EventOptions | None = None) -> Event:
         """Create an Event object without recording it to a Stream.

         Note

@@ -1276,7 +1276,7 @@ class Device:
         ctx = self._get_current_context()
         return Event._init(self._id, ctx, options, True)

-    def allocate(self, size, stream: Optional[Stream] = None) -> Buffer:
+    def allocate(self, size, stream: Stream | None = None) -> Buffer:
         """Allocate device memory from a specified stream.

         Allocates device memory of `size` bytes on the specified `stream`
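The retyped signatures above do not change behavior; every optional parameter still defaults to `None`. A usage sketch, assuming a CUDA-capable environment and the usual `Device().set_current()` setup step (which is not part of this diff):

```python
# Usage sketch for the retyped Device methods; requires a CUDA-capable machine.
from cuda.core.experimental import Device

dev = Device()          # device_id: int | None = None -> default device
dev.set_current()       # assumed setup step, not shown in this diff

stream = dev.create_stream()                # obj and options still default to None
event = dev.create_event()                  # options: EventOptions | None = None
buffer = dev.allocate(1024, stream=stream)  # stream: Stream | None = None
```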
LaunchConfig module:

@@ -3,7 +3,6 @@
 # SPDX-License-Identifier: Apache-2.0

 from dataclasses import dataclass
-from typing import Optional, Union

 from cuda.core.experimental._device import Device
 from cuda.core.experimental._utils.cuda_utils import (
@@ -45,15 +44,15 @@ class LaunchConfig:

     Attributes
     ----------
-    grid : Union[tuple, int]
+    grid : tuple | int
         Collection of threads that will execute a kernel function. When cluster
         is not specified, this represents the number of blocks, otherwise
         this represents the number of clusters.
-    cluster : Union[tuple, int]
+    cluster : tuple | int
         Group of blocks (Thread Block Cluster) that will execute on the same
         GPU Processing Cluster (GPC). Blocks within a cluster have access to
         distributed shared memory and can be explicitly synchronized.
-    block : Union[tuple, int]
+    block : tuple | int
         Group of threads (Thread Block) that will execute on the same
         streaming multiprocessor (SM). Threads within a thread blocks have
         access to shared memory and can be explicitly synchronized.
@@ -65,11 +64,11 @@ class LaunchConfig:
     """

     # TODO: expand LaunchConfig to include other attributes
-    grid: Union[tuple, int] = None
-    cluster: Union[tuple, int] = None
-    block: Union[tuple, int] = None
-    shmem_size: Optional[int] = None
-    cooperative_launch: Optional[bool] = False
+    grid: tuple | int = None
Review thread on the `grid` default:
- greptile-apps: logic: default …
- Reviewer: @greptile-apps Are you sure it's causing a runtime error? Give me a self-contained example that you think fails here.
- Reviewer: I guess it didn't provide an example? :(
- greptile-apps: Ah, you're absolutely right! Looking at the …

      if isinstance(cfg, int):
          cfg = (cfg,)
      else:
          common = "must be an int, or a tuple with up to 3 ints"
          if not isinstance(cfg, tuple):
              raise ValueError(f"{label} {common} (got {type(cfg)})")

  So when … Here's the actual failing example:

      from cuda.core.experimental import LaunchConfig

      # This WILL fail:
      config = LaunchConfig()
      # Raises: ValueError: LaunchConfig.grid must be an int, or a tuple with up to 3 ints (got <class 'NoneType'>)

  So yes, my original comment was correct - the default … My apologies for not providing the example initially!
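A self-contained sketch of the pattern being debated, with illustrative names rather than the real `LaunchConfig` internals: the `None` default is only a sentinel that `__post_init__` rejects, so `grid` and `block` are required in practice even though the annotations read `tuple | int`.

```python
from dataclasses import dataclass


def _cast_to_3_tuple(label: str, cfg) -> tuple:
    # Mirrors the validation quoted above: accept an int or a tuple of up to 3 ints.
    # Name and padding logic are illustrative, not the real helper.
    if isinstance(cfg, int):
        cfg = (cfg,)
    elif not isinstance(cfg, tuple):
        raise ValueError(f"{label} must be an int, or a tuple with up to 3 ints (got {type(cfg)})")
    return cfg + (1,) * (3 - len(cfg))  # pad to 3 dimensions


@dataclass
class MiniLaunchConfig:
    grid: tuple | int = None    # sentinel default; rejected below if left unset
    block: tuple | int = None

    def __post_init__(self):
        self.grid = _cast_to_3_tuple("MiniLaunchConfig.grid", self.grid)
        self.block = _cast_to_3_tuple("MiniLaunchConfig.block", self.block)


print(MiniLaunchConfig(grid=4, block=(8, 8)))  # grid=(4, 1, 1), block=(8, 8, 1)
# MiniLaunchConfig() raises ValueError, matching the bot's failing example.
```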
+    cluster: tuple | int = None
+    block: tuple | int = None
Review thread on lines +68 to +69:
- logic: inconsistent defaults: …
- Reply: Probably not, because that's all happening in …
+    shmem_size: int | None = None
+    cooperative_launch: bool | None = False
Review thread on lines +70 to +71:
- style: …
- Reply: Meh, seems distracting. I'll do it in a follow-up.

     def __post_init__(self):
         _lazy_init()
Review thread:
- Good catch. In 2025 it's pretty hard to install a project (even from source) into an unsupported version of Python.
- Reply: This was actually caught by ruff.
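For context on that last reply: ruff's pyupgrade-derived `UP` rules flag the pre-3.10 typing spellings once a project targets Python 3.10+, and `ruff check --select UP --fix` can rewrite them; the exact rule selection used by this repository is an assumption here, not something stated in the PR. A before/after sketch:

```python
from typing import Optional, Union


def old_style(x: Optional[int] = None, y: Union[tuple, int] = 1) -> Optional[str]:
    # Flagged by ruff's UP (pyupgrade) rules when the minimum Python is 3.10+.
    return None if x is None else str(x)


def new_style(x: int | None = None, y: tuple | int = 1) -> str | None:
    # The rewritten form; once nothing uses Optional/Union, the typing import
    # above becomes unused and is reported separately (F401).
    return None if x is None else str(x)
```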