
Add flanger to functional #629

Closed

Conversation

bhargavkathivarapu
Contributor

Hi,
Regarding the issue (Reducing dependency on SoX - #260), I have implemented a flanger function in functional.py.

  • functional.rst - updated with the new function

  • test_sox_compatibility.py - added four tests for flanger (tested that all variations work; we could remove some of these, since they add to the test run time)

  • test_batch_consistency.py - added a test for flanger

  • test_torchscript_consistency.py - added a test for flanger

All CPU tests passed.

@vincentqb or @mthrok, could you please review these changes?
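
For reference, a minimal sketch of how the new function is invoked, mirroring the call shape in the TorchScript test quoted later in this thread (the input and parameter values here are illustrative, not defaults):

```python
import torch
import torchaudio.functional as F

# Illustrative input and parameter values; the call shape follows the
# TorchScript test quoted later in this thread.
waveform = torch.rand(2, 44100) * 2 - 1   # 2 channels, 1 s at 44.1 kHz
sample_rate = 44100
delay, depth, regen, width, speed, phase = 0.8, 0.88, 3.0, 0.23, 1.3, 60.0

flanged = F.flanger(waveform, sample_rate, delay, depth, regen, width, speed,
                    phase, sinusoidal=True, linear_interpolation=True)
```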

Signed-off-by: Bhargav Kathivarapu <bhargavkathivarapu31@gmail.com>

codecov bot commented May 12, 2020

Codecov Report

Merging #629 into master will increase coverage by 0.30%.
The diff coverage is 98.21%.


@@            Coverage Diff             @@
##           master     #629      +/-   ##
==========================================
+ Coverage   89.00%   89.31%   +0.30%     
==========================================
  Files          21       21              
  Lines        2255     2311      +56     
==========================================
+ Hits         2007     2064      +57     
+ Misses        248      247       -1     
Impacted Files             Coverage Δ
torchaudio/functional.py   95.94% <98.21%> (+0.43%) ⬆️

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data


delay_buf_pos = (delay_buf_pos + delay_buf_length - 1) % delay_buf_length

for c in range(0, n_channels):
mthrok
Collaborator

Correct me if I am wrong, but it seems that the same operation, with different parameters, is applied to each channel individually. If the operations on different channels do not interleave, can we vectorize them?

Looking at the test results, the test for this function takes the longest:
https://app.circleci.com/pipelines/github/pytorch/audio/1768/workflows/4a2bae91-b123-4fcd-af13-a0a24c929c1e/jobs/44621
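
A minimal sketch of the kind of vectorization being suggested, not the actual flanger code (all names and shapes here are illustrative):

```python
import torch

n_channels, delay_buf_length = 2, 16
delay_buf = torch.randn(n_channels, delay_buf_length)  # per-channel ring buffer
delay_buf_pos = 5
channel_offset = torch.arange(n_channels)  # per-channel phase offsets

# Loop version of the read, one channel at a time:
#   for c in range(n_channels):
#       out[c] = delay_buf[c, (delay_buf_pos + channel_offset[c]) % delay_buf_length]

# Vectorized version: compute all channel indices at once and gather in one shot.
idx = (delay_buf_pos + channel_offset) % delay_buf_length   # shape: (n_channels,)
out = delay_buf[torch.arange(n_channels), idx]              # no Python loop
```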

bhargavkathivarapu
Contributor Author

@mthrok, I have vectorized the channel loop, but I am facing an issue in the TorchScript test. I am getting the error below:

RuntimeError: 
E           Arguments for call are not valid.
E           The following variants are available:
E             
E             aten::frac(Tensor self) -> (Tensor):
E             Expected a value of type 'Tensor' for argument 'self' but instead found type 'float'.
E             
E             aten::frac.out(Tensor self, *, Tensor(a!) out) -> (Tensor(a!)):
E             Expected a value of type 'Tensor' for argument 'self' but instead found type 'float'.
E           
E           The original call is:
E             File "/Users/bhargav/audio/torchaudio/functional.py", line 1542
E                   cur_channel_phase = (torch.arange(0, n_channels) * lfo_length * channel_phase + .5).to(torch.int64)
E                   delay = lfo[(lfo_pos + cur_channel_phase) % lfo_length]
E                   frac_delay = torch.frac(delay)
E                                ~~~~~~~~~~ <--- HERE
E                   delay = torch.floor(delay)
E           'flanger' is being compiled since it was called from 'func'
E             File "/Users/bhargav/audio/test/test_torchscript_consistency.py", line 542
E                       phase = 60.
E                       sample_rate = 44100
E                       return F.flanger(tensor, sample_rate, delay, depth, regen, width, speed,
E                              ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~...  <--- HERE
E                                        phase, sinusoidal=True, linear_interpolation=True)

The delay variable is a tensor, but TorchScript is saying it got a float instead of a tensor.

mthrok
Collaborator

delay is used in the function signature and annotated as float, so the JIT compiler considers it a float, but you are also re-defining the variable delay as a Tensor.

Change the variable name to something like delay_tensor = lfo[(lfo_pos + cur_channel_phase) % lfo_length], and it should work.
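
A minimal repro of the shadowing issue (flanger_like is a simplified stand-in, not the real signature): the annotation pins delay to float, so re-binding it to a Tensor fails TorchScript compilation, while binding the Tensor to a new name scripts fine.

```python
import torch

# Broken variant (shown as a comment because it fails to compile):
#
#     @torch.jit.script
#     def flanger_like(waveform: torch.Tensor, delay: float) -> torch.Tensor:
#         delay = waveform + delay          # error: delay re-defined as Tensor
#         return torch.frac(delay)

@torch.jit.script
def flanger_like(waveform: torch.Tensor, delay: float) -> torch.Tensor:
    delay_tensor = waveform + delay        # new name gets its own Tensor type
    return torch.frac(delay_tensor)

print(flanger_like(torch.rand(4), 0.5))
```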

bhargavkathivarapu
Contributor Author

@mthrok, fixed it; the TorchScript test passed.

mthrok
Collaborator

@bhargavkathivarapu
Thanks. Can you rebase? In #644, I moved the test file.

Also, in the TorchScript test and the batch consistency test, instead of loading sample values from a file, can you generate the input values with a fixed random seed?
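
One way to do this (the seed value and input shape are illustrative):

```python
import torch

torch.manual_seed(0)                       # fixed seed for reproducibility
waveform = torch.rand(2, 44100) * 2 - 1    # 2 channels, 1 s at 44.1 kHz, in [-1, 1)
```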

bhargavkathivarapu
Contributor Author

@mthrok, I have messed up the rebase, so I am closing this pull request. I made a new pull request with the suggested changes (#651); please check that one.

bhargavkathivarapu and others added 21 commits May 16, 2020 14:27
Signed-off-by: Bhargav Kathivarapu <bhargavkathivarapu31@gmail.com>
* Make fbank support cuda

* Reduce rtol for kaldi

* fix test

* fix flake8
* Enable CUDA tests for Windows

* Add back branch filters
* backend change instructions in readme.

* link to installation instructions.
* add slaney normalization.

* add torchscript.

* convert to string for torchscript compatibility.

* flake8.

* use string as default.
This PR
 - Changes the location of the C++ code
     - from `torchaudio/torch_sox.cpp` to `torchaudio/csrc/sox.cpp`
     - from `torchaudio/torch_sox.h` to `torchaudio/csrc/sox.h`
 - Changes the location where the resulting library is placed
     - from `_torch_sox.so` (outside of `torchaudio` module)
        to `torchaudio/_torchaudio.so`.
Currently there is no CUDA device on regular FB infra, where `torchaudio`'s tests are run.
Skipping these tests consistently raises a flag, so we decided to split the tests into runnable and not-runnable at the file level.
* Add windows binary jobs

* Make wheel package platform-specific and python-version-specific
1. The Docker image used for the smoke test is old (no Python 3.8). Update it to the latest one; build [here](https://app.circleci.com/pipelines/github/pytorch/audio/1763/workflows/56c846a5-acaa-41a7-92f5-46ec66186c61/jobs/44484).
2. Add a `latest` tag to the newly built Docker image so that later we can always use the `latest` version and do not have to update the Docker image tag manually.
Signed-off-by: Bhargav Kathivarapu <bhargavkathivarapu31@gmail.com>
mthrok pushed a commit to mthrok/audio that referenced this pull request Feb 26, 2021
mpc001 pushed a commit to mpc001/audio that referenced this pull request Aug 4, 2023
This change is necessary for the --resume and --evaluate options when --gpu is specified. For example, if the model was trained on GPU 3 and you then try to evaluate it on a machine with a single GPU (GPU 0), you get this error:

RuntimeError: Attempting to deserialize object on CUDA device 3 but torch.cuda.device_count() is 3. Please use torch.load with map_location to map your storages to an existing device.

This change allows the --evaluate and --resume functionality to work correctly when a single GPU is used.
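
A minimal sketch of the fix being described (the checkpoint path is illustrative): remap tensors saved on an unavailable CUDA device onto a device that exists on the current machine.

```python
import torch

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
checkpoint = torch.load("checkpoint.pth", map_location=device)  # remap storages
```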