ENH: fftpack: use float32 routines for float16 inputs. #6909

Merged: pv merged 2 commits into scipy:master from grlee77:fftpack_float16 on Jan 15, 2017

Conversation

@grlee77 (Contributor) commented on Dec 31, 2016:

Prior to this commit, an error was raised for float16 input:

import numpy as np
import scipy.fftpack

scipy.fftpack.fft(np.ones(8).astype(np.float16))
Traceback (most recent call last):

  File "<ipython-input-5-3b66b4d7b5f7>", line 1, in <module>
    scipy.fftpack.fft(np.ones(8).astype(np.float16))

  File "/Users/lee8rx/anaconda/lib/python3.5/site-packages/scipy/fftpack/basic.py", line 257, in fft
    raise ValueError("type %s is not supported" % tmp.dtype)

ValueError: type float16 is not supported
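
For comparison, a minimal sketch of the behaviour after this change; the complex64 output dtype is inferred from the single-precision routines being used (and from the test added in this PR), not quoted from the patch:

import numpy as np
import scipy.fftpack

# With this change, float16 input is promoted to float32 internally,
# so the call succeeds and the result comes back as complex64.
y = scipy.fftpack.fft(np.ones(8, dtype=np.float16))
print(y.dtype)  # complex64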

@@ -128,12 +134,17 @@ def _asfarray(x):
        # 'dtype' attribute does not ensure that the
        # object is an ndarray (e.g. Series class
        # from the pandas library)
        if x.dtype.char is float16_code:
A reviewer (Member) commented:
strings must not be compared with is
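
A minimal sketch of the pitfall being flagged; the array and variable here are illustrative, not taken from the patch:

import numpy as np

x = np.ones(4, dtype=np.float16)
float16_code = np.dtype('float16').char  # 'e'

# == compares string values and is always well defined.
print(x.dtype.char == float16_code)  # True

# `is` compares object identity; it can appear to work only because CPython
# happens to intern short strings, which is an implementation detail.
x.dtype.char is float16_code  # unreliable, avoid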

@@ -20,6 +20,12 @@
atexit.register(_fftpack.destroy_rfft_cache)
del atexit

try:
    # no half-precision routines in fftpack. run float16 in single precision
    float16_code = numpy.dtype('float16').char
A reviewer (Member) commented:
half-float exists since numpy 1.6.0. You can assume it is always available.
You don't need to declare float16_code, just write dtype == np.half below.
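
A sketch of the simplification being suggested, paraphrasing the relevant branch of _asfarray rather than quoting the merged code:

import numpy as np

def _asfarray_sketch(x):
    # np.half has existed since NumPy 1.6.0, so the dtype can be compared
    # directly; fftpack has no half-precision routines, so promote to float32.
    x = np.asarray(x)
    if x.dtype == np.half:
        return np.asarray(x, dtype=np.float32)
    return x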

class TestFloat16FFT(TestCase):
    def setUp(self):
        self.cdt = np.complex64
        self.rdt = np.float16
A reviewer (Member) commented:
If you don't inherit from _TestFFTBase, there's no need for this.
Do you need to inherit from it?

@grlee77 (Contributor, author) replied:
I did originally, but changed it later and didn't remove the setUp. Will remove it.
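
For reference, a standalone float16 test along these lines needs no setUp; this is an illustrative sketch, not the merged test:

import unittest
import numpy as np
import scipy.fftpack

class TestFloat16FFT(unittest.TestCase):
    def test_1_argument_real(self):
        # float16 input is promoted to single precision, so the result
        # should be complex64 and match the float32 computation.
        x = np.ones(8, dtype=np.float16)
        y = scipy.fftpack.fft(x)
        self.assertEqual(y.dtype, np.complex64)
        np.testing.assert_allclose(y, scipy.fftpack.fft(x.astype(np.float32)))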

@pv added the needs-work label (items that are pending response from the author) on Jan 1, 2017
return numpy.asarray(x, dtype=x.dtype)
else:
# We cannot use asfarray directly because it converts sequences of
# complex to sequence of real
ret = numpy.asarray(x)
if ret.dtype.char not in numpy.typecodes["AllFloat"]:
if ret.dtype.char is float16_code:
return numpy.asarray(ret, dtype=numpy.float32)
A reviewer (Member) commented:
Needs Fortran-order?

@grlee77 (Contributor, author) replied:
I don't think so. The other cases in _asfarray seem to be C order.

Commit: "prior to this commit an error was raised for float16 input"
@grlee77 (Contributor, author) commented on Jan 2, 2017:

I made the suggested changes and added an fftn-based test case to make sure there was not a C/F order issue.
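
A sketch of the kind of order check an fftn-based test can provide; shapes and names here are illustrative:

import numpy as np
import scipy.fftpack

# C- and Fortran-ordered float16 inputs should give the same result once
# they are promoted to float32 internally.
x_c = np.random.rand(4, 5).astype(np.float16)
x_f = np.asfortranarray(x_c)

y_c = scipy.fftpack.fftn(x_c)
y_f = scipy.fftpack.fftn(x_f)

assert y_c.dtype == np.complex64
np.testing.assert_allclose(y_f, y_c)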

@larsoner (Member) commented on Jan 3, 2017:

Can you add a comment in the affected fftpack functions that half-precision inputs will be cast to single precision? Otherwise LGTM

@grlee77 (Contributor, author) commented on Jan 3, 2017:

@Eric89GXL: do you mean a docstring note for users or a code comment for devs (or both)?

@grlee77 (Contributor, author) commented on Jan 3, 2017:

Also related: float128 or complex256 inputs will still raise an error, but I have currently left that as-is. There is no way to avoid losing precision in that case, and the user should probably use another FFT library, such as pyFFTW, that has long-double support. An alternative would be to convert to double precision and raise a warning about the precision loss.

@grlee77 (Contributor, author) commented on Jan 3, 2017:

I just checked that the routines in numpy.fft appear to always use double precision regardless of the input precision. Let me know if you would prefer to also just silently run in double precision for long-double inputs in a similar manner.

@larsoner (Member) commented on Jan 3, 2017:

I mean a docstring note. I would say leave the long double cases out for now, and just raise an error. If people really need those they can use another library as you suggest or cast down manually, which is more explicit anyway.
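
A hedged sketch of what such a docstring note might look like; the wording is illustrative and not the text that was merged:

# Illustrative stub of scipy.fftpack.fft; only the Notes entry matters here.
def fft(x, n=None, axis=-1, overwrite_x=False):
    """Return the discrete Fourier transform of a real or complex sequence.

    Notes
    -----
    Half-precision (float16) inputs are converted to single precision
    before the transform is computed. Long-double inputs are not
    supported and raise an error.
    """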

@larsoner removed the needs-work label on Jan 4, 2017
@larsoner (Member) commented on Jan 4, 2017:

LGTM. @pv feel free to have another look to see if you're happy with this new version.

@rgommers added the enhancement (A new feature or improvement) and scipy.fftpack labels on Jan 6, 2017
@pv merged commit d994731 into scipy:master on Jan 15, 2017
@pv added this to the 0.19.0 milestone on Jan 15, 2017
@grlee77 deleted the fftpack_float16 branch on August 19, 2020
Labels: enhancement (A new feature or improvement), scipy.fftpack
Projects: none yet
Participants: 4