
Autograd tests for Transforms #1414

Closed · 15 tasks done
mthrok opened this issue Apr 1, 2021 · 10 comments

mthrok (Collaborator) commented Apr 1, 2021

Until recently, we had assumed that the ops provided in torchaudio support autograd simply because they are implemented with PyTorch. However, this assumption does not always hold. For example, in #704, it was pointed out that lfilter does not support autograd, and this was resolved in #1310 with proper unit tests, thanks to a community contribution. Similarly, as a part of #1337, I added autograd tests to some transforms in #1340. We would like to extend the autograd testing to as many functionals and transforms as possible, and we would like to ask for your help.

Steps

  1. Pick a transform from the list below and check whether someone is already working on it. If not, leave a comment in this thread saying that you will be working on it.
  2. Add a test in AutogradTestMixin similar to the existing tests (see the sketch after this list).
    NOTE: Please first try adding the test without providing nondet_tol to see whether the transform's backward pass is deterministic. If you see a Backward is not reentrant error message, that means it is not deterministic. Please report it back here so that we can discuss how to handle it.
  3. Run the test with (cd test && pytest torchaudio_unittest/transforms/autograd_*_test.py).
  4. Make a PR and add cc @mthrok in the description (or a comment).
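
Concretely, such a test wraps torch.autograd.gradcheck. The sketch below is a rough illustration of the pattern only; the assert_grad helper and its exact signature are assumptions here, so consult the existing tests under torchaudio_unittest/transforms for the real structure.

```python
# Illustrative sketch, not the actual torchaudio test code.
import torch
from torch.autograd import gradcheck
import torchaudio.transforms as T


class AutogradTestMixin:
    def assert_grad(self, transform, inputs, *, nondet_tol=0.0):
        # gradcheck compares analytical and numerical Jacobians,
        # so run everything in float64 for numerical stability.
        transform = transform.to(torch.float64)
        inputs = [
            i.to(torch.float64).requires_grad_(True) for i in inputs
        ]
        # If the backward pass is non-deterministic, gradcheck fails
        # with "Backward is not reentrant" unless nondet_tol is given.
        assert gradcheck(transform, tuple(inputs), nondet_tol=nondet_tol)

    def test_spectrogram(self):
        transform = T.Spectrogram(power=1.0)
        waveform = torch.rand(2, 400)  # (channel, time)
        self.assert_grad(transform, [waveform])
```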

Note: We are not sure whether all the transforms actually support autograd. If you find a transform that does not support autograd, please report back in this thread.

For instructions on setting up the development environment, please refer to CONTRIBUTING.md.

Transforms

yoyololicon (Collaborator) commented Apr 1, 2021

@mthrok
I have some experience with spectrogram inversion, so I would like to take on GriffinLim.
Also ComputeDeltas.

mthrok (Collaborator, Author) commented Apr 1, 2021

> @mthrok
> I have some experience with spectrogram inversion, so I would like to take on GriffinLim.

@yoyololicon

Thanks. Let me know if you need help there.

krishnakalyan3 (Contributor)

I will be working on MFCC.

krishnakalyan3 (Contributor)

Working on Resample, SpectralCentroid, and Fade.

yoyololicon (Collaborator)

I don't think MuLaw*coding needs autograd, since it quantizes values to integers, right?

mthrok (Collaborator, Author) commented Apr 7, 2021

> I don't think MuLaw*coding needs autograd, since it quantizes values to integers, right?

Hi @yoyololicon
Yes, you are right. Scratching them out. Thanks for the pointer.
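
For context, here is a quick sketch (an editorial illustration, not from the original thread) of why autograd does not apply to the MuLaw transforms: the encoding maps a continuous waveform to discrete integer bins, so the op is piecewise constant with zero gradient almost everywhere.

```python
import torch
import torchaudio.transforms as T

waveform = torch.rand(1, 100) * 2 - 1  # values in [-1, 1]
encoded = T.MuLawEncoding(quantization_channels=256)(waveform)
# Integer-valued output: there is no meaningful gradient to test.
print(encoded.dtype)  # torch.int64
```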

dhthompson (Contributor)

I'm taking a look at Vol

pavithranrao (Contributor) commented Apr 30, 2021

I will try SlidingWindowCmn. Thanks!

kiri11 (Contributor) commented May 8, 2021

I'm going to take a stab at FrequencyMasking/TimeMasking, the only ones left!

mthrok (Collaborator, Author) commented May 12, 2021

Now all the transforms with autograd support are properly tested. Thanks for the help!

@mthrok mthrok closed this as completed May 12, 2021