Conversation

@juntyr (Contributor) commented Sep 7, 2025

Fixes #144

@juntyr (Contributor, Author) commented Sep 7, 2025

@SwayamInSync Does the version of C++ we use already have float16 support, and what does NumPy use, so that I could also add casting support for float16?
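For context, this is the float16 (half-precision) behavior in stock NumPy that a new cast would need to mirror; a minimal sketch of its precision limits:

```python
import numpy as np

# float16 has 11 bits of significand precision (10 stored + 1 implicit),
# so small integers and simple fractions round-trip exactly.
x = np.float16(1.5)
assert np.float32(x) == np.float32(1.5)

# Above 2048 the spacing between representable halves is 2.0, so 2049
# cannot be represented and rounds (ties-to-even) down to 2048.
y = np.float16(2049.0)
assert float(y) == 2048.0
```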

@ngoldbaum (Member):
Here's where it happens in the stringdtype prototype:

STRING_TO_FLOAT_RESOLVE_DESCRIPTORS(float16, HALF)

You need to link against libnpymath to get the necessary C API:

https://numpy.org/doc/stable/reference/c-api/coremath.html#half-precision-functions

Here's where I handled that in the meson config:

npymath_path = incdir_numpy / '..' / 'lib'
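Putting those pieces together, a hedged sketch of what such a meson.build fragment might look like (the target and source names here are illustrative assumptions, not the actual numpy-quaddtype build file):

```meson
# Locate numpy's include dir via the Python interpreter, then derive the
# directory that ships the static npymath library alongside it.
incdir_numpy = run_command(py,
  ['-c', 'import numpy; print(numpy.get_include())'],
  check: true).stdout().strip()
npymath_path = incdir_numpy / '..' / 'lib'

# Link the extension against libnpymath to get the half-precision
# C API (npy_half_to_double(), npy_double_to_half(), ...).
npymath_lib = cc.find_library('npymath', dirs: npymath_path)

py.extension_module('_quaddtype_main',   # hypothetical target name
  'quaddtype_main.cpp',                  # hypothetical source file
  dependencies: [py_dep, npymath_lib],
  include_directories: include_directories(incdir_numpy))
```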

@juntyr (Contributor, Author) commented Sep 7, 2025

The other problem, and probably why ubyte wasn't added earlier, is that ubyte and bool are aliases of the same underlying C type but need different code paths. I'm not sure how to fix that ...

@juntyr changed the title from "Implement cast support for ubyte" to "Implement cast support for ubyte and half" on Sep 8, 2025
@juntyr (Contributor, Author) commented Sep 8, 2025

@SwayamInSync @ngoldbaum I think everything should work now; what do you think?

@SwayamInSync (Member) left a review:


LGTM

Commits:
- Implement cast support for ubyte
- Use template magic to distinguish npy_bool and npy_half
- Implement cast support for half
@ngoldbaum (Member):
Thanks for doing this @juntyr!

@ngoldbaum merged commit 77d4406 into numpy:main on Sep 9, 2025
7 checks passed
@SwayamInSync (Member):

BTW, if needed: in SwayamInSync#13 I wrote a workaround that raises a "not implemented" error instead of crashing with a segfault.

@ngoldbaum (Member):

Ah, I missed that; can you send it in as a follow-up PR?

Merging this pull request may close: QuadDtype is missing casts for np.uint8 and np.float16
3 participants