
Add contrast to functional #551

Merged: 3 commits merged into pytorch:master on Apr 16, 2020

Conversation

bhargavkathivarapu (Contributor)

Hi,

Regarding issue #260 (reducing the dependency on SoX), I have implemented the function below in functional.py:

  • contrast

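For reference, here is a minimal sketch of the SoX-style contrast enhancement, assuming a waveform normalized to [-1, 1]. This is an illustrative reconstruction of the SoX contrast formula, not necessarily the exact code merged in this PR, and the parameter name enhancement_amount is an assumption:

```python
import math

import torch


def contrast(waveform: torch.Tensor, enhancement_amount: float = 75.0) -> torch.Tensor:
    # Sketch of the SoX "contrast" effect: a sinusoidal waveshaper that boosts
    # perceived loudness. Assumes the input waveform lies in [-1, 1].
    if not 0 <= enhancement_amount <= 100:
        raise ValueError("enhancement_amount must be in [0, 100]")
    enhancement_zoom = enhancement_amount / 750.0
    scaled = waveform * (math.pi / 2)
    return torch.sin(scaled + enhancement_zoom * torch.sin(scaled * 4))
```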
Added a test case to test_sox_compatibility.py

All tests in test_sox_compatibility.py passed on CPU.

................gain: can't reclaim headroom
.
----------------------------------------------------------------------
Ran 17 tests in 501.774s

OK

@vincentqb could you please review the changes?

mthrok requested a review from vincentqb, April 15, 2020 13:54
torchaudio/functional.py: resolved review thread (outdated)
vincentqb (Contributor) left a comment

Thanks for working on this! Can you add the following?

torchaudio/functional.py: resolved review thread
mthrok (Collaborator) commented Apr 15, 2020

@bhargavkathivarapu If you need help adding the tests, let me know.
For the batch test, you can add test_contrast under TestFunctional and call the _test_batch function.
For the torchscript test, you can add test_contrast under _FunctionalTestMixin and define an inner function that takes the input Tensor as its only argument (other parameters should be embedded in the function), then call self._assert_consistency with the input Tensor.
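A rough sketch of what these two tests might look like, following the pattern described above. The class and helper names (TestFunctional, _test_batch, _FunctionalTestMixin, _assert_consistency) are taken from this comment and assumed to exist in the test suite; the exact signatures may differ:

```python
import unittest

import torch

import torchaudio.functional as F


class TestFunctional(unittest.TestCase):  # assumed existing batch-consistency test class
    def test_contrast(self):
        # Batch consistency: the result should not depend on batching.
        waveform = torch.rand(2, 100) - 0.5
        _test_batch(F.contrast, waveform)


class _FunctionalTestMixin:  # assumed existing TorchScript-consistency mixin
    def test_contrast(self):
        # TorchScript consistency: the inner function takes only the input Tensor,
        # with other parameters embedded inside it.
        def func(tensor):
            return F.contrast(tensor, 75.0)

        tensor = torch.rand(2, 100) - 0.5
        self._assert_consistency(func, tensor)
```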

bhargavkathivarapu (Contributor, Author)

@vincentqb

  • Added batching test for contrast
  • Added torchscript test for contrast
  • Updated functional.rst for contrast and other missing functions

@mthrok, @vincentqb
I think we should have a CONTRIBUTING.md just like the pytorch repo, detailing the organization of the torchaudio code base, with checklists indicating which tests to add and where to add them when new features are added to functional, transforms, or datasets.

test/test_sox_compatibility.py: resolved review thread
@@ -65,6 +65,18 @@ def test_istft(self):
])
_test_batch(F.istft, stft, n_fft=4, length=4)

def test_contrast(self):
mthrok (Collaborator)


Can you use a simple random tensor in the value range [-1.0, 1.0], i.e. torch.rand(2, 100) - 0.5?

The reason is that we do not want this unit test to rely on another library function, torchaudio.load.
That way, when torchaudio.load is broken, this test is not affected.
I know that test_detect_pitch_frequency does this, but that is only because of recent refactoring, and in my opinion it should not.

bhargavkathivarapu (Contributor, Author)

@mthrok I have modified this.
Are torchaudio functionals conditioned on normalization?

  • In some functions, SoX internally clips the values according to the max and min of the input.
  • I assumed a normalized waveform when writing the contrast function.
  • In the tests, I also did not see normalization=False anywhere.
  • If an unnormalized waveform is passed, the outputs will differ between the existing SoxEffects and the functional contrast.

mthrok (Collaborator)

@bhargavkathivarapu

I am not sure if there is/was a design principle for audio normalization, but since torchaudio.load has normalization=True by default and all the tests are written with normalization=True, I think that is the de facto standard.

Talking specifically about the batch consistency test here, its purpose is to ensure that the function returns the same result regardless of batching, not to ensure that the result matches SoX. So it is okay to use any tensor, but to be closer to the actual situation, I thought torch.rand - 0.5 would be slightly better than torch.rand alone (which produces values in [0, 1]). You can normalize it further to be even closer if you would like.
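For example, one hypothetical way to construct such an input (the variable name is illustrative, not from the PR):

```python
import torch

# Random values roughly centered on zero, then peak-normalized towards [-1, 1].
waveform = torch.rand(2, 100) - 0.5
waveform = waveform / waveform.abs().max()
```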

vincentqb (Contributor)

> I am not sure if there is/was a design principle for audio normalization, but since torchaudio.load has normalization=True by default and all the tests are written with normalization=True, I think that is the de facto standard.

Yes, we assume that waveforms are in the range [-1, 1] in torchaudio.

We should consider documenting this convention in the readme, and guaranteeing it in some form. A caveat is that, when a transformation pushes the waveform outside that range (e.g. gain), there may be more than one way to bring the signal back into [-1, 1].
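Two hedged examples of such choices (illustrative only, not necessarily what torchaudio does): hard clipping versus rescaling by the peak amplitude.

```python
import torch

waveform = torch.rand(2, 100) - 0.5       # toy input roughly in [-0.5, 0.5)
boosted = waveform * 3.0                  # e.g. a gain step that leaves [-1, 1]

clipped = boosted.clamp(-1.0, 1.0)        # option 1: hard-clip out-of-range samples
rescaled = boosted / boosted.abs().max()  # option 2: rescale so the peak sits at 1.0
```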

mthrok (Collaborator) commented Apr 16, 2020

> I think we should have a CONTRIBUTING.md just like the pytorch repo, detailing the organization of the torchaudio code base, with checklists indicating which tests to add and where to add them when new features are added to functional, transforms, or datasets.

I agree. Recently I also had a difficult time figuring out which tests needed to be added, so I reorganized most of the tests, and I was planning to add an overview of the test structure. Thanks for bringing this up.

mthrok (Collaborator) commented Apr 16, 2020

Tests look good to me. Thanks! @bhargavkathivarapu

vincentqb (Contributor) left a comment

LGTM, thanks!

vincentqb requested a review from mthrok, April 16, 2020 19:47
vincentqb merged commit 8a742e0 into pytorch:master, April 16, 2020
bhargavkathivarapu deleted the add-contrast branch, April 17, 2020 03:17