
License #1

Closed
JuanFMontesinos opened this issue Mar 28, 2019 · 6 comments

Comments

@JuanFMontesinos (Contributor)

Hi,
Do you plan to add open-source license? It would be super cool.

@seungwonpark (Owner)

Hi, thanks for your interest.
Actually, this code is not entirely my own; this repo originates from Keunwoo Choi's work: keunwoochoi/torchaudio-contrib#27
If you plan to use his work (istft_irfft.py), I recommend asking him about the license over there. Or are you planning to use my implementation, istft_deconv.py?

@JuanFMontesinos (Contributor, Author) commented Mar 28, 2019

Hi, it's the cleanest one I've seen: it follows PyTorch semantics and so on. I was thinking it would be good to derive a functional from this function, plus an nn.Module that registers the convolution kernel as a buffer (so it isn't recomputed every time you apply an ISTFT).
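The buffer idea can be sketched roughly like this (a hypothetical example, not the repo's actual istft_deconv.py; class name, sizes, and kernel construction are illustrative): the synthesis kernel is built once in `__init__` and registered as a buffer, so it lives in the state_dict, follows `.to(device)`, and is never recomputed per call.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ISTFTDeconv(nn.Module):
    """Hypothetical sketch: ISTFT as a transposed 1-D convolution whose
    kernel is precomputed once and registered as a buffer."""

    def __init__(self, filter_length=16, hop_length=4):
        super().__init__()
        cutoff = filter_length // 2 + 1
        # Fourier basis rows: real parts first, then imaginary parts.
        fourier = torch.fft.fft(torch.eye(filter_length))
        basis = torch.cat([fourier.real[:cutoff], fourier.imag[:cutoff]])
        # register_buffer: stored in state_dict and moved with .to(device),
        # but excluded from parameters(), so it is never trained or rebuilt.
        self.register_buffer("inverse_basis",
                             (basis / filter_length).unsqueeze(1))
        self.hop_length = hop_length

    def forward(self, spec):
        # spec: (batch, 2 * cutoff, frames) with real/imag channels stacked.
        return F.conv_transpose1d(spec, self.inverse_basis,
                                  stride=self.hop_length)
```

A functional variant would simply take the precomputed kernel as an argument instead of owning it.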

I have also seen the FFT version, but I don't think it is faster than the convolutional version (did you compare them?).

Anyway, even though the code comes from keunwoochoi, I saw you've applied several changes, and copyright "only" covers use of exactly the same code. Since you rewrote his code, in practice it belongs to you.

Edit:
I ran some reconstruction tests, using torch.stft as the STFT generator, and the error is usually below 1e-3. So yes, I would like to use it, if you don't mind.
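The kind of round-trip check described above can be sketched as follows (a hedged example using torch's built-in stft/istft pair as the reference, not the repo's implementation; sizes are arbitrary):

```python
import torch

# Round-trip reconstruction check: STFT a signal, invert it,
# and measure the worst-case absolute error.
n_fft, hop = 512, 128
x = torch.randn(4096)
window = torch.hann_window(n_fft)

spec = torch.stft(x, n_fft, hop_length=hop, window=window,
                  return_complex=True)
x_rec = torch.istft(spec, n_fft, hop_length=hop, window=window,
                    length=x.shape[-1])

# With a Hann window at 75% overlap, this is typically well below 1e-3.
max_err = (x - x_rec).abs().max().item()
print(f"max reconstruction error: {max_err:.2e}")
```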

@seungwonpark (Owner)

Wow, thank you for saying so.

I was thinking it would be good to derive a functional from this function, plus an nn.Module that registers the convolution kernel as a buffer (so it isn't recomputed every time you apply an ISTFT)

-> Yes, I've already done that privately and it works seamlessly. The only reason I didn't publish that work here is that I wanted to compare my code's speed against keunwoochoi's implementation, and since his code doesn't provide an nn.Module, I left mine out as well.

I have also seen the FFT version, but I don't think it is faster than the convolutional version (did you compare them?).

-> Yes, I experimented with that in inspection.ipynb and found that my implementation is 2-5x slower than keunwoochoi's. If speed matters to you, you'd better use istft_irfft.py. If code semantics matter, you'll prefer istft_deconv.py, which is mine ;)

Since you rewrote his code, in practice it belongs to you.

-> I'm glad to hear that. I'll be right back with a commit that adds a license.

@seungwonpark (Owner)

I've added a license. da9a824
Feel free to reopen this issue if you need further discussion about the license.

@JuanFMontesinos (Contributor, Author)

Hi, just to be sure: when timing, did you take into account that CUDA is asynchronous?
I'm not sure the comparison was fair. (Sorry, the inspection file wasn't loading for me before, and it isn't loading now either.)
You may want to have a look at https://discuss.pytorch.org/t/how-to-measure-time-in-pytorch/26964
It would also be more precise to allocate the CUDA tensors outside the function being timed.
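The concern above stems from CUDA kernels launching asynchronously: a host-side timer can stop before the GPU work has actually finished, so a fair measurement needs a synchronize before and after the timed region. A hedged helper sketch (`timed` is a name invented here, not from the repo):

```python
import time
import torch

def timed(fn, *args, warmup=3, iters=10):
    """Hypothetical sketch: average wall-clock time of fn(*args),
    syncing with the GPU so asynchronous CUDA launches are counted."""
    # Warm up to exclude one-time costs (allocations, kernel compilation).
    for _ in range(warmup):
        fn(*args)
    if torch.cuda.is_available():
        torch.cuda.synchronize()  # wait for pending GPU work before timing
    t0 = time.perf_counter()
    for _ in range(iters):
        fn(*args)
    if torch.cuda.is_available():
        torch.cuda.synchronize()  # make sure all timed work has finished
    return (time.perf_counter() - t0) / iters
```

Note the inputs are created outside the timed function, in line with the allocation point above.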

@seungwonpark (Owner)

Seems like GitHub can't load IPython notebooks these days. :(
I've exported inspection.ipynb to inspection.py here.

I just ran the %timeit command in IPython. As you pointed out, that may not have been a fair comparison; I have almost no knowledge of PyTorch time profiling.

If you create a pull request that adds code for a fair comparison or improved precision, I would love to merge it!
