
TYP: Add type stub for StringDType #26528

Status: Open. Wants to merge 1 commit into main.
Conversation

ngoldbaum (Member)

mypy tests pass locally. I don't know much about Python typing, so I'm not sure whether we should add types for anything else NEP 55 related.

rgommers (Member) left a comment

> I'm not sure if we should add types for anything else NEP 55 related?

There probably are a few, but not many, and they're hard to find. One that came to mind is the character codes; they're treated as literals in a few places. Perhaps not all of them are implemented for StringDType, though; e.g., np.typename('T') doesn't work. So I wouldn't worry too much about anything beyond adding the dtype itself to dtypes.pyi.

@@ -38,6 +38,7 @@ CLongDoubleDType = np.dtype[np.clongdouble]
ObjectDType = np.dtype[np.object_]
BytesDType = np.dtype[np.bytes_]
StrDType = np.dtype[np.str_]
StringDType = np.dtype[str]
rgommers (Member)

Is this correct? It looks to me like the default dtype is the old-style one:

>>> np.dtype(str)
dtype('<U')
>>> np.array(['str', 'str2'])
array(['str', 'str2'], dtype='<U4')
>>> np.array(['str', 'str2'], dtype=np.dtypes.StringDType)
array(['str', 'str2'], dtype=StringDType())
>>> np.array(['str', 'str2'], dtype=np.dtypes.StringDType).dtype.kind
'T'

I think it should be np.dtype[np.dtypes.StringDType()].

ngoldbaum (Member, Author)

Currently all the type stubs for dtypes are parametrized in terms of a scalar that subclasses np.generic, so np.dtypes.StringDType doesn't work. Neither does str, because that's not a numpy scalar type either. The assumption that dtypes are parameterized in terms of the scalar type is baked in pretty deeply. For example, the type stub for np.dtype looks like:

_DTypeScalar_co = TypeVar("_DTypeScalar_co", covariant=True, bound=generic)
@final
class dtype(Generic[_DTypeScalar_co]):
    ...

So I guess I'd need to have an alternate definition for dtype that isn't parametrized, specifically for StringDType? Like I said in the PR description, I don't really know what I'm doing with the type stubs...

Unfortunately I didn't realize this was an issue until pretty late in the game.
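The constraint being described can be reproduced in a minimal, self-contained sketch (stand-in names, no NumPy required): a Generic whose type variable is bound to a base class rejects any type argument outside that hierarchy, which is exactly why dtype[...] cannot accept StringDType or str.

```python
from typing import Generic, TypeVar

class Scalar: ...            # stands in for np.generic
class Float64(Scalar): ...   # a scalar subclass, like np.float64
class StringDType: ...       # not a Scalar subclass, like np.dtypes.StringDType

_T_co = TypeVar("_T_co", covariant=True, bound=Scalar)

class DType(Generic[_T_co]):  # stands in for the np.dtype stub
    ...

ok: "DType[Float64]" = DType()  # fine: Float64 is a Scalar subclass
# not_ok: "DType[StringDType]" = DType()
# mypy: Type argument "StringDType" of "DType" must be a subtype of "Scalar"
```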

rgommers (Member)

It looks like this is a pretty fundamental issue.

I played around a bit trying to understand the situation.

>>> s = np.array(['str', 'str2'], dtype=np.dtypes.StringDType)
>>> s0 = s[0]
>>> type(s0)
<class 'str'>
>>> dt = np.dtypes.StringDType('abc')
...
TypeError: StringDType() takes no positional arguments

>>> dt = np.dtypes.StringDType()
>>> isinstance(dt, np.dtype)
True

And checking whether np.dtype can be used as a base:

>>> isinstance(np.dtypes.StringDType(), np.dtype)
True
>>> isinstance(np.float64(1.5), np.dtype)
False
>>> isinstance(np.float64, np.dtype)
False

The class hierarchy for scalars (see https://numpy.org/neps/nep-0040-legacy-datatype-impl.html#numpy-scalars-and-type-hierarchy) is pretty deeply ingrained. I don't think there is any comparable hierarchy with np.dtype as the root.

It's unclear to me what the desired solution here is, even leaving aside backwards compatibility. I am also unsure how much it matters: how many functions would we actually want to annotate as taking any numerical array as well as arrays with a StringDType dtype?
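Spelled out, the asymmetry between the two hierarchies looks like this (a sketch; the np.generic scalar hierarchy is the one from NEP 40):

```python
import numpy as np

# Scalars form a class hierarchy rooted at np.generic (NEP 40);
# this is what the type stubs parameterize dtype over.
assert issubclass(np.float64, np.generic)
assert issubclass(np.str_, np.generic)

# dtype *instances* are all instances of np.dtype...
assert isinstance(np.dtype(np.float64), np.dtype)
# ...but scalar instances and scalar types are not dtypes:
assert not isinstance(np.float64(1.5), np.dtype)
assert not issubclass(np.float64, np.dtype)
```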

ngoldbaum (Member, Author)

> It's unclear to me what the desired solution here is, even leaving aside backwards compatibility. I am also unsure how much it matters: how many functions would we actually want to annotate as taking any numerical array as well as arrays with a StringDType dtype?

I'd like to at least have type stubs for np.strings that say the functions in that namespace accept StringDType arrays. That doesn't work right now because fundamentally the type stubs assume all dtypes are parameterized by a scalar type.

If you're curious about where the trouble is coming from, the details follow:

This issue arises from trying to add new overloads for all the functions in np.strings that take string arrays. Right now, the overloads look like this:

@overload
def expandtabs(a: U_co, tabsize: i_co = ...) -> NDArray[str_]: ...
@overload
def expandtabs(a: S_co, tabsize: i_co = ...) -> NDArray[bytes_]: ...

Where U_co is an alias for _ArrayLikeStr_co:

_ArrayLikeStr_co = _DualArrayLike[
    dtype[str_],
    str,
]

I'd like to make it so we have an overload like:

@overload
def expandtabs(a: T_co, tabsize: i_co = ...) -> NDArray[StringDType()]: ...

Where T_co is an alias for this, which I was hoping could work:

# StringDType currently doesn't have a scalar type, so it can't use
# _DualArrayLike
_ArrayLikeStringDType_co = Union[
    _SupportsArray[np.dtypes.StringDType],
    _NestedSequence[_SupportsArray[np.dtypes.StringDType]],
]

But mypy complains about the use of _SupportsArray:

numpy/_typing/_array_like.py:153: error: Type argument "StringDType" of "dtype" must be a subtype of "generic"  [type-var]

_SupportsArray is defined in terms of np.dtype and the type stub for np.dtype is defined to only accept a scalar type:

@final
class dtype(Generic[_DTypeScalar_co]):
    ...

where

_DTypeScalar_co = TypeVar("_DTypeScalar_co", covariant=True, bound=generic)

So fundamentally numpy's type stubs assume that all dtypes have a scalar and it'll take a lot of surgery to break that assumption.
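For context, a simplified sketch of how _SupportsArray propagates that constraint (approximate shape only, not NumPy's exact stub; runtime_checkable is added here just so the protocol can be demonstrated with isinstance):

```python
from typing import Any, Protocol, TypeVar, runtime_checkable
import numpy as np

# Approximate shape of numpy._typing._SupportsArray: a protocol
# parameterized by the dtype of the array that __array__ returns.
_DType_co = TypeVar("_DType_co", covariant=True, bound=np.dtype)

@runtime_checkable
class SupportsArray(Protocol[_DType_co]):
    def __array__(self) -> "np.ndarray[Any, _DType_co]": ...

# Because the type argument must itself be a valid dtype[...] (i.e.
# parameterized by a np.generic subclass), SupportsArray[np.dtype[np.str_]]
# type-checks, while anything involving StringDType is rejected by mypy.
print(isinstance(np.arange(3), SupportsArray))  # ndarray has __array__
```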

All that said, I think maybe the best way forward is to just bite the bullet and add a scalar type that interns a reference to a packed string entry. That should fix this problem in a way that doesn't require major surgery on the type stubs. I also suspect it will improve performance for operations that are sensitive to creating a scalar.

Unfortunately this may just have to be a known issue that we ship with.
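To illustrate the scalar-type proposal (purely hypothetical, stand-in names, not NumPy API): once StringDType gains a scalar that joins the np.generic hierarchy, the existing parameterization works unchanged.

```python
from typing import Generic, TypeVar

class Scalar: ...  # stands in for np.generic

# Hypothetical scalar for StringDType: it joins the scalar hierarchy
# and holds a reference to the packed string entry (modeled here as a
# plain Python str; the real thing would intern an arena reference).
class StringScalar(Scalar):
    def __init__(self, value: str) -> None:
        self._value = value

_T_co = TypeVar("_T_co", covariant=True, bound=Scalar)

class DType(Generic[_T_co]):  # stands in for the np.dtype stub
    ...

# With a scalar in the hierarchy, the usual parameterization just works:
string_dtype: "DType[StringScalar]" = DType()
```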

charris (Member) commented May 26, 2024

close/reopen
