A Law #930
Conversation
Thanks for working on this :) The main thing we need is a way of knowing that the algorithm is correct (now and in the future). Do you have a reference you used, e.g. a paper, a book, or a piece of software? Looking at the Wikipedia page you pointed to, I see two graphs that could be used for testing. A first test would consist of a python/numpy implementation of the graph "Plot of F(x) for A-law with A = 87.6", which would then be compared against the pytorch implementation. Maybe the other graph can also be turned into a second test. The first test would be enough, along with a link to Wikipedia, which does provide the equation.
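A minimal numpy sketch of such a reference test, assuming the input is normalized to [-1, 1] and using the A-law compression formula from Wikipedia with A = 87.6 (the function name here is illustrative, not part of torchaudio):

```python
import numpy as np

def a_law_compress(x, A=87.6):
    """A-law compression F(x) per the Wikipedia formula.

    Assumes x is normalized to [-1, 1]; A = 87.6 is the European
    standard value. This is a reference sketch, not torchaudio code.
    """
    abs_x = np.clip(np.abs(x), 1e-12, 1.0)  # avoid log(0) in the unused branch
    denom = 1.0 + np.log(A)
    y = np.where(abs_x < 1.0 / A,
                 A * abs_x / denom,
                 (1.0 + np.log(A * abs_x)) / denom)
    return np.sign(x) * y

x = np.linspace(-1.0, 1.0, 1001)
y = a_law_compress(x)
# The curve is odd, continuous at |x| = 1/A, and maps 1 -> 1.
assert abs(a_law_compress(np.array([1.0]))[0] - 1.0) < 1e-9
```

Evaluating this on a grid and comparing element-wise against the torchaudio output (within a tolerance) would reproduce the "Plot of F(x)" graph as a test.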
As mentioned here, let me clarify the things that need to be done:
Does the readme for tests give you enough clarity on what each test line above means?
For the implementation, I used the Wikipedia formula as a reference. I have since found that Python has A-law support in its stdlib (the audioop module). Should I just run a sample wave through both and compare? How can I test "for torchscriptability on CPU and GPU" and that "the operation is batchable"? Regarding the documentation: I added an A-Law entry in the docs.
Thanks again for working on this :)
Good catch! Yes, that is a great way to test for correctness. :)
For torchscript-ability, all you have to do is add a test like this one. The test will then be run automatically on CPU and GPU in CircleCI :) It checks that the python version and the jitted version of a transform give the same result. For batch-ability, all you need to do is add a test like this one. It simply checks that running the same transformation 3 times gives the same results as running it once on a batch tensor containing 3 copies.
Adding an entry for the functional in here and for the transform in here, as you point out.
Added documentation, and tested for torchscriptability and batchability, yet the tests seem to be failing. Is there a way to inspect why? The only thing left to do is the compliance test. Sadly
There should be a "Details" button next to the unittest section in the checks list (I believe this is public :)). On the following page, you can select a system and click through; this should lead here, for instance. Does that help? The tests can also be run locally.
btw the errors below are fixed on master (#934), so rebasing will fix them :)
I'm sure you've noticed :) but for reference: there's some lint errors
and the new test is failing
Yes, I noticed. I just wanted to push some changes so that I can work on another computer. Regarding the tests,
No, this was not against audioop. In fact, if you notice some difference, could you open an issue with a code snippet so we can look into it? Thanks!
Hi @dvisockas! Thank you for your pull request and welcome to our community.

Action Required

In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process

In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (eg your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged with the CLA-signed label.

If you have received this in error or have any questions, please contact us at cla@fb.com. Thanks!
Thank you for signing our Contributor License Agreement. We can now accept your code for this (and any) Facebook open source project. Thanks!
I have also noticed this discrepancy. Here's a test:

```python
import audioop  # NOTE: will be removed from the Python stdlib in 3.13!

import torchaudio

# Load as normalized fp32.
waveform, _ = torchaudio.load("https://mod9.io/hi.wav", normalize=True)
ulaw_encoded_torchaudio = torchaudio.functional.mu_law_encoding(waveform, quantization_channels=256)

# Load as int16.
waveform, _ = torchaudio.load("https://mod9.io/hi.wav", normalize=False)
ulaw_encoded_audioop = audioop.lin2ulaw(waveform.numpy().tobytes(), 2)

print("torchaudio", ulaw_encoded_torchaudio[0, :16].tolist())
print("audioop   ", list(ulaw_encoded_audioop)[:16])
```

This prints:
I think the difference might be because torchaudio is using the Wikipedia definition of the mu-law algorithm, while other audio tools tend to use the reference C implementation from G.711, e.g.:
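For reference, the continuous (Wikipedia) formulation that torchaudio appears to follow can be sketched in numpy as below. G.711 codecs instead use a piecewise-linear 8-segment approximation of this curve and complement the output bits, so byte-for-byte differences against audioop are expected. The helper name is illustrative:

```python
import numpy as np

def mu_law_encode_continuous(x, quantization_channels=256):
    """Continuous mu-law companding per the Wikipedia formula.

    Assumes x is normalized to [-1, 1]; returns integer codes in
    [0, quantization_channels - 1]. Sketch of torchaudio's approach,
    not a G.711 codec.
    """
    mu = quantization_channels - 1
    x = np.clip(x, -1.0, 1.0)
    compressed = np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)
    # Shift/scale from [-1, 1] to integer codes.
    return ((compressed + 1) / 2 * mu + 0.5).astype(np.int64)

print(mu_law_encode_continuous(np.array([-1.0, 0.0, 1.0])))  # codes 0, 128, 255
```

A compliance test could quantify the per-sample deviation between this formulation and a G.711 encoder rather than expect exact equality.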
TODO: