Conversation

yoyolicoris (Contributor):

Adopt the autograd test model from transforms/autograd_test_impl.py.
Also add more combinations of testing parameters.
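For readers unfamiliar with that test model, here is a minimal sketch of the pattern, assuming `torch.autograd.gradcheck` run in double precision against `torchaudio.functional.bandpass_biquad`; the helper name `assert_grad` and the concrete shapes and parameter values are illustrative assumptions, not the code merged in this PR.

```python
# Hedged sketch of the autograd test pattern discussed in this PR.
# `assert_grad`, the input shape, and the parameter values below are
# assumptions for illustration, not the merged implementation.
import torch
import torchaudio.functional as F
from torch.autograd import gradcheck


def assert_grad(transform, inputs):
    # gradcheck needs float64 leaf tensors with requires_grad enabled.
    inputs = tuple(
        i.to(torch.float64).clone().requires_grad_(True) for i in inputs
    )
    assert gradcheck(transform, inputs)


def test_bandpass_biquad_grad():
    sample_rate = 8000
    waveform = torch.rand(2, 400)
    central_freq, Q = 800.0, 0.707
    assert_grad(
        lambda w: F.bandpass_biquad(w, sample_rate, central_freq, Q),
        (waveform,),
    )
```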

@mthrok (Contributor) left a comment:

Thanks for the PR.

Although I like the thorough testing, it's hitting the CI job timeout limit.
Can we somehow reduce the time the test requires?
Autograd tests are expensive by nature.

I think it's good enough to use only one value each for central_freq and Q.
I could be wrong, but it feels like 2, 100, 800, 4000, and 8000 are all in a safe range for gradient computation.
If a value were around 0, I would be more cautious (but again, this is my intuition and not backed by any hard fact, so if you disagree, please let me know).

Things like const_skirt_gain should be parameterized because it changes the code execution path, and we want to cover all the paths; see the sketch below.
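A minimal sketch of what that reduction might look like, assuming the `parameterized` package used elsewhere in torchaudio's test suite; the class and test names and the "safe" values below are illustrative, not the code from this PR:

```python
# Hedged sketch: one fixed "safe" value per continuous parameter, while
# both const_skirt_gain branches are covered so each code path is tested.
import unittest

import torch
from parameterized import parameterized
from torch.autograd import gradcheck
import torchaudio.functional as F


class TestBandpassBiquadAutograd(unittest.TestCase):
    @parameterized.expand([(True,), (False,)])
    def test_bandpass_biquad(self, const_skirt_gain):
        sample_rate = 8000
        central_freq, Q = 800.0, 0.707  # single values keep CI time down
        x = torch.rand(2, 400, dtype=torch.float64, requires_grad=True)
        self.assertTrue(
            gradcheck(
                lambda w: F.bandpass_biquad(
                    w, sample_rate, central_freq, Q, const_skirt_gain
                ),
                (x,),
            )
        )
```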

@yoyolicoris (Contributor, Author) replied:

> I think it's good enough to use only one value each for central_freq and Q.
> I could be wrong, but it feels like 2, 100, 800, 4000, and 8000 are all in a safe range for gradient computation.
> If a value were around 0, I would be more cautious (but again, this is my intuition and not backed by any hard fact, so if you disagree, please let me know).

Sounds reasonable, I agree.

@mthrok mthrok merged commit c1ef2ed into pytorch:master Apr 7, 2021
@mthrok (Contributor) commented Apr 7, 2021:

Thanks!

@yoyolicoris yoyolicoris deleted the refactor-lfilter-autograd-test branch April 8, 2021 00:34
carolineechen pushed a commit to carolineechen/audio that referenced this pull request Apr 30, 2021
mthrok pushed a commit to mthrok/audio that referenced this pull request Dec 13, 2022