
Implement n-dimensional hermitian FFTs #63890

Closed
wants to merge 6 commits

Conversation

@peterbell10 (Collaborator) commented Aug 24, 2021

@facebook-github-bot (Contributor) commented Aug 24, 2021

🔗 Helpful links

💊 CI failures summary and remediations

As of commit 7a01f5b (more details on the Dr. CI page):


  • 1/1 failures introduced in this PR

1 failure not recognized by patterns:

Job | Step | Action
GitHub Actions Lint / clang-format | Run clang-format | 🔁 rerun

This comment was automatically generated by Dr. CI.

@heitorschueroff added the triaged label (this issue has been looked at by a team member, and triaged and prioritized into an appropriate module) Aug 24, 2021
@mruberry (Collaborator)

This is really exciting, but it looks like the test failures are real.

@peterbell10 peterbell10 force-pushed the hermitian-fftn branch 2 times, most recently from c04a026 to 16cda64 Compare August 26, 2021 11:48

codecov bot commented Aug 26, 2021

Codecov Report

Merging #63890 (16cda64) into master (b0782f0) will decrease coverage by 0.26%.
The diff coverage is 100.00%.

❗ Current head 16cda64 differs from pull request most recent head f10778f. Consider uploading reports for the commit f10778f to get more accurate results

@@            Coverage Diff             @@
##           master   #63890      +/-   ##
==========================================
- Coverage   67.03%   66.76%   -0.27%     
==========================================
  Files         695      695              
  Lines       90643    90749     +106     
==========================================
- Hits        60760    60588     -172     
- Misses      29883    30161     +278     

@peterbell10 (Collaborator, Author)

@mruberry PTAL, I've fixed the tests.

@@ -10135,6 +10135,26 @@
python_module: fft
variants: function

- func: fft_hfft2(Tensor self, int[1]? s=None, int[1] dim=[-2,-1], str? norm=None) -> Tensor

Nice following of SciPy's hfft2 signature

python_module: fft
variants: function

- func: fft_ihfft2(Tensor self, int[1]? s=None, int[1] dim=[-2,-1], str? norm=None) -> Tensor

ihfft2 signature looks great, too

@@ -10167,6 +10187,26 @@
python_module: fft
variants: function

- func: fft_hfftn(Tensor self, int[1]? s=None, int[1]? dim=None, str? norm=None) -> Tensor

hfftn signature looks great

python_module: fft
variants: function

- func: fft_ihfftn(Tensor self, int[1]? s=None, int[1]? dim=None, str? norm=None) -> Tensor

Awesome

@@ -25,6 +25,20 @@
if TEST_LIBROSA:
import librosa

has_scipy_fft = False

Cool

except ModuleNotFoundError:
pass

LooseVersion = distutils.version.LooseVersion

Nice versioning

@@ -6272,6 +6278,18 @@ def gradcheck_wrapper_triangular_input(op, input, *args, upper=False, **kwargs):
dtypes=all_types_and_complex_and(torch.bool),
default_test_dtypes=floating_and_complex_types(),
check_batched_gradgrad=False),
SpectralFuncInfo('fft.hfftn',

For completeness we should add hfft2 and ihfft2, too (this can be done in a follow-up PR)

hfftn = _add_docstr(_fft.fft_hfftn, r"""
hfftn(input, s=None, dim=None, norm=None, *, out=None) -> Tensor

Computes the n-dimensional discrete Fourier transform of a Hermitian symmetric

Great first sentence

Computes the n-dimensional discrete Fourier transform of a Hermitian symmetric
:attr:`input` signal.

:attr:`input` is interpreted as a one-sided Hermitian signal in the time

Cool

a real signal in the time-domain and gives Hermitian symmetry in the
frequency-domain. The Hermitian FFT is the opposite; Hermitian symmetric in
the time-domain and real-valued in the frequency-domain. For this reason,
special care needs to be taken with the shape argument :attr:`s`, in the

This last sentence is a little vague but good enough for now

the original data, as given by :attr:`s`. This is because each input shape
could correspond to either an odd or even length signal. By default, the
signal is assumed to be even length and odd signals will not round-trip
properly. So, it is recommended to always pass the signal shape :attr:`s`.
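The odd/even ambiguity the docstring describes can be illustrated with a few lines of plain Python (a stdlib sketch, not PyTorch code): integer division maps both an even length and the following odd length to the same one-sided size, so the half-spectrum shape alone cannot determine the original signal length.

```python
# Minimal arithmetic sketch (plain Python, not PyTorch): the one-sided
# Hermitian representation of a length-n signal stores n // 2 + 1 entries,
# so an even length and the next odd length collapse to the same size.
def half_spectrum_len(n):
    return n // 2 + 1

# Both a length-10 (even) and a length-11 (odd) signal compress to 6 entries,
# which is why the transform cannot recover the original length without `s`.
print(half_spectrum_len(10))  # 6
print(half_spectrum_len(11))  # 6
```

This is the round-trip hazard: without `s`, the even length is assumed, and an odd-length original will come back one sample short.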

"So," can be removed and the last sentence can start with "It is..."

dim (Tuple[int], optional): Dimensions to be transformed.
The last dimension must be the half-Hermitian compressed dimension.
Default: all dimensions, or the last ``len(s)`` dimensions if :attr:`s` is given.
norm (str, optional): Normalization mode. For the forward transform

Great norm description


Example:

>>> t = torch.rand(10, 10)

This example is cool but doesn't actually call ihfftn anywhere

ihfft2 = _add_docstr(_fft.fft_ihfft2, r"""
ihfft2(input, s=None, dim=(-2, -1), norm=None, *, out=None) -> Tensor

Computes the 2-dimensional inverse discrete Fourier transform of real

Great description

torch.Size([10, 6])

Compared against the full output from :func:`~torch.fft.ifft2`, the
hermitian time-space signal takes up only half the space.

Hermitian


>>> two_ffts = torch.fft.ifft(torch.fft.ihfft(t, dim=1), dim=0)
>>> torch.allclose(t, two_ffts)


True
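The one-sided representation works because the DFT of a real signal is Hermitian symmetric. A self-contained sketch in plain Python (no torch, using a naive O(n²) DFT) checking X[k] == conj(X[n-k]) for real input:

```python
import cmath

# Self-contained illustration (plain Python, not PyTorch): the DFT of a real
# signal is Hermitian symmetric, X[k] == conj(X[n - k]), which is why the
# ihfft family only needs to store the one-sided half of the spectrum.
def dft(x):
    n = len(x)
    return [sum(x[j] * cmath.exp(-2j * cmath.pi * k * j / n) for j in range(n))
            for k in range(n)]

signal = [0.5, 1.0, -2.0, 3.25, 0.0, -1.5]   # arbitrary real samples
spectrum = dft(signal)
symmetric = all(
    abs(spectrum[k] - spectrum[-k % len(signal)].conjugate()) < 1e-9
    for k in range(len(signal))
)
print(symmetric)  # True
```

Because the second half of the spectrum is redundant, storing entries 0 through n // 2 loses no information.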

@mruberry (Collaborator) left a comment:

Hey @peterbell10! This looks great overall. I made a few doc comments for your review. Nothing major but I think one example is still a placeholder. Just ping me when this is ready to go!

@peterbell10 (Collaborator, Author)

@mruberry PTAL

desc.shape.back() = last_dim_size / 2 + 1;
auto ld = last_dim_size / 2 + 1;
desc.shape.back() = ld;
TORCH_CHECK(ld >= 1, "Invalid number of data points (", last_dim_size, ") specified");

fname should probably sneak in here, too
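As a sketch of that suggestion, here is a hypothetical Python analog of the C++ check with `fname` threaded into the error message (the function name and signature are illustrative, not the actual PyTorch implementation):

```python
# Hypothetical Python analog of the C++ check above (names are illustrative,
# not the actual PyTorch code). The reviewer's point: the error message
# should identify the calling op via fname.
def checked_half_dim_size(fname, last_dim_size):
    ld = last_dim_size // 2 + 1
    if ld < 1:
        raise ValueError(
            f"{fname}: Invalid number of data points ({last_dim_size}) specified")
    return ld

print(checked_half_dim_size("hfftn", 10))  # 6
```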

@mruberry mruberry self-requested a review September 6, 2021 06:11
@mruberry (Collaborator) left a comment:

Awesome! Thanks @peterbell10!

Did you see that JAX added SciPy's dct and dctn? Might be time to revisit #49016.

Looking at uses within Facebook, I can see there are some uses of dct and dctn internally, and these ops seem reasonably popular on GitHub, too.

@facebook-github-bot (Contributor)

@mruberry has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

# input_ndim, dim
transform_desc = [
*product(range(2, 5), (None, (0,), (0, -1))),
*product(range(2, 5), (None,)),

nit: why is this line needed? (2,None) etc that it generates is already generated by the previous line.
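The redundancy can be confirmed with a small stdlib check: every pair produced by the second `product` call is already generated by the first.

```python
from itertools import product

# Stdlib sketch of the reviewer's point: the pairs from the second line
# ((2, None), (3, None), (4, None)) are a subset of the first line's output,
# so the second line only contributes duplicate test cases.
first = set(product(range(2, 5), (None, (0,), (0, -1))))
second = set(product(range(2, 5), (None,)))
print(second <= first)  # True
```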

(3, (0, -1)),
(3, (1,)),
(1, (0,)),
(4, None),

same here

*product(range(2, 5), (None,)),
(6, None),
(5, (1, 3, 4)),
(3, (0, -1)),

same here

hfft2 = _add_docstr(_fft.fft_hfft2, r"""
hfft2(input, s=None, dim=(-2, -1), norm=None, *, out=None) -> Tensor

Computes the 2-dimensional discrete Fourier transform of a Herimitian symmetric

typo: Herimitian -> Hermitian

Computes the 2-dimensional discrete Fourier transform of a Herimitian symmetric
:attr:`input` signal. Equivalent to :func:`~torch.fft.hfftn` but only
transforms the last two dimensions by default.


I think the note from hfftn that input is interpreted as a one-sided Hermitian signal in the time domain should also be repeated here, otherwise it's unclear what if input itself is required to have some symmetric properties.

@mruberry (Collaborator)

@peterbell10 just ping me when @ngimel's comments are addressed (thanks for taking a look, @ngimel!)

@pytorch-probot

pytorch-probot bot commented Sep 20, 2021

CI Flow Status

⚛️ CI Flow

Ruleset - Version: v1
Ruleset - File: https://github.com/peterbell10/pytorch/blob/7a01f5b06c92854c290d328833f5ce905df09ab6/.github/generated-ciflow-ruleset.json
PR ciflow labels: ciflow/default

Workflows Labels (bold enabled) Status
Triggered Workflows
linux-bionic-py3.6-clang9 ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/noarch, ciflow/xla ✅ triggered
linux-bionic-py3.8-gcc9-coverage ciflow/all, ciflow/coverage, ciflow/cpu, ciflow/default, ciflow/linux ✅ triggered
linux-xenial-cuda11.3-py3.6-gcc7 ciflow/all, ciflow/cuda, ciflow/default, ciflow/linux ✅ triggered
linux-xenial-py3.6-gcc5.4 ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux ✅ triggered
linux-xenial-py3.6-gcc7-bazel-test ciflow/all, ciflow/bazel, ciflow/cpu, ciflow/default, ciflow/linux ✅ triggered
win-vs2019-cpu-py3 ciflow/all, ciflow/cpu, ciflow/default, ciflow/win ✅ triggered
win-vs2019-cuda11.3-py3 ciflow/all, ciflow/cuda, ciflow/default, ciflow/win ✅ triggered
Skipped Workflows
libtorch-linux-xenial-cuda10.2-py3.6-gcc7 ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux 🚫 skipped
libtorch-linux-xenial-cuda11.3-py3.6-gcc7 ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux 🚫 skipped
linux-bionic-cuda10.2-py3.9-gcc7 ciflow/all, ciflow/cuda, ciflow/linux, ciflow/slow 🚫 skipped
linux-xenial-cuda10.2-py3.6-gcc7 ciflow/all, ciflow/cuda, ciflow/linux, ciflow/slow 🚫 skipped
parallelnative-linux-xenial-py3.6-gcc5.4 ciflow/all, ciflow/cpu, ciflow/linux 🚫 skipped
periodic-libtorch-linux-xenial-cuda11.1-py3.6-gcc7 ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux, ciflow/scheduled 🚫 skipped
periodic-linux-xenial-cuda11.1-py3.6-gcc7 ciflow/all, ciflow/cuda, ciflow/linux, ciflow/scheduled 🚫 skipped
periodic-win-vs2019-cuda11.1-py3 ciflow/all, ciflow/cuda, ciflow/scheduled, ciflow/win 🚫 skipped
puretorch-linux-xenial-py3.6-gcc5.4 ciflow/all, ciflow/cpu, ciflow/linux 🚫 skipped
win-vs2019-cuda10.2-py3 ciflow/all, ciflow/cuda, ciflow/win 🚫 skipped

You can add a comment to the PR and tag @pytorchbot with the following commands:
# ciflow rerun, "ciflow/default" will always be added automatically
@pytorchbot ciflow rerun

# ciflow rerun with additional labels "-l <ciflow/label_name>", which is equivalent to adding these labels manually and trigger the rerun
@pytorchbot ciflow rerun -l ciflow/scheduled -l ciflow/slow

For more information, please take a look at the CI Flow Wiki.

@peterbell10 (Collaborator, Author)

@mruberry PTAL.

@facebook-github-bot (Contributor)

@mruberry has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@facebook-github-bot (Contributor)

@mruberry has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@peterbell10 (Collaborator, Author)

@mruberry any update on this?

@mruberry (Collaborator)

It's just been hitting some internal, unrelated land failures. I'm jedi-landing it now to bypass them.

Labels
cla signed, module: fft, open source, triaged (this issue has been looked at by a team member, and triaged and prioritized into an appropriate module)
Projects
None yet
Development

Successfully merging this pull request may close these issues.

Hermitian symmetric Fast Fourier Transform
5 participants