Any possibility for a TensorFlow implementation? #1

Closed
TensorHusker opened this issue Nov 16, 2021 · 3 comments
Labels: enhancement (New feature or request)

@TensorHusker

Hello. I have a few different projects that could potentially benefit from these algorithms. However, these projects and classes all use TensorFlow as their main framework. While rewriting these layers would probably not be too difficult on my end, are there any plans for a TensorFlow port? If not, would it be possible for one to be written?

Also, just to get an idea, how difficult is it generally to port PyTorch layers to TensorFlow? How much debugging and messing around would one be looking at?

@eleGAN23
Owner

Dear TensorHusker,
thanks for opening this issue!
We have not yet planned a TensorFlow port of our method, but it should not be painful. To make our approach work in TF, you only need to port the file layers/ph_layers.py and then include the classes in your pre-existing models.
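For anyone attempting the port, here is a minimal sketch of what a TF/Keras version of one of these classes might look like. The class name `PHConv2D`, its arguments, and the plain-convolution body are illustrative assumptions, not the repository's actual API; the real PH logic from layers/ph_layers.py would replace the placeholder parts.

```python
# Hypothetical skeleton for a TF port of a class from layers/ph_layers.py.
# "PHConv2D" and its arguments are assumptions for illustration; the
# placeholder kernel below stands in for the real PH parameterization.
import tensorflow as tf

class PHConv2D(tf.keras.layers.Layer):
    def __init__(self, filters, kernel_size, **kwargs):
        super().__init__(**kwargs)
        self.filters = filters
        self.kernel_size = kernel_size

    def build(self, input_shape):
        # The ported code would create the layer's trainable PH parameters
        # here; a plain convolution kernel is used as a stand-in.
        self.kernel = self.add_weight(
            name="kernel",
            shape=(self.kernel_size, self.kernel_size,
                   int(input_shape[-1]), self.filters),
            initializer="glorot_uniform",
            trainable=True,
        )

    def call(self, inputs):
        # The ported forward pass would assemble the full weight from the
        # PH parameters before convolving.
        return tf.nn.conv2d(inputs, self.kernel, strides=1, padding="SAME")

# Dropping the ported layer into a pre-existing Keras model:
inputs = tf.keras.Input(shape=(64, 64, 4))
x = PHConv2D(filters=16, kernel_size=3)(inputs)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(10)(x)
model = tf.keras.Model(inputs, outputs)
```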

If you would like to do the port and run some tests, we can merge a pull request and add your implementation to our repository. That would be great!

Let us know!

@eleGAN23 added the enhancement (New feature or request) label on Nov 16, 2021
@TensorHusker
Author

It looks like a relatively straightforward port. Layers should be easier than things like optimizers. Is there anything Torch-specific you used that I should be aware of, or should everything have an equivalent in TensorFlow?

Also, has this been tested with autoencoders, by any chance? I saw in the README that there is an autoencoder repository from another paper. One of the things I'm trying out involves downsampling and upsampling on images with dimensions between 512^2 and 1024^2. I was wondering whether this mainly works well only for image classification, or whether it can be co-opted as a generic replacement for convolutional (and dense) layers. I bring up the dimensions because, if I am not mistaken, the paper says the inputs don't have to be restricted as in prior work.

@eleGAN23
Owner

Yes, it should be relatively straightforward. I think the key components are the Tensor.unsqueeze operation, which in TF should correspond to tf.expand_dims (but you'd better double-check), and F.conv2d, which allows the user to pass custom weights to the convolution. I expect similar functions already exist in TF.
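A quick sanity-check sketch of those two mappings, in case it helps. The shapes here are made up for illustration; the main thing to watch is the layout convention (PyTorch defaults to NCHW inputs with (out, in, kh, kw) weights, TF to NHWC inputs with (kh, kw, in, out) filters):

```python
# Illustrative sketch of the two PyTorch -> TensorFlow mappings above;
# shapes are arbitrary examples.
import numpy as np
import tensorflow as tf
import torch
import torch.nn.functional as F

# 1) Tensor.unsqueeze(dim) <-> tf.expand_dims(tensor, axis)
a = torch.randn(5, 7)
assert tuple(a.unsqueeze(0).shape) == (1, 5, 7)
b = tf.expand_dims(tf.random.normal([5, 7]), axis=0)   # shape (1, 5, 7)

# 2) F.conv2d(input, weight) <-> tf.nn.conv2d(input, filters, ...)
x = torch.randn(1, 3, 8, 8)            # NCHW input
w = torch.randn(4, 3, 3, 3)            # custom (out, in, kh, kw) weights
y_torch = F.conv2d(x, w, padding=1)    # "same" padding for a 3x3 kernel

x_tf = tf.constant(np.transpose(x.numpy(), (0, 2, 3, 1)))  # NCHW -> NHWC
w_tf = tf.constant(np.transpose(w.numpy(), (2, 3, 1, 0)))  # -> (kh, kw, in, out)
y_tf = tf.nn.conv2d(x_tf, w_tf, strides=1, padding="SAME")

# The two results agree up to float tolerance once layouts are aligned.
np.testing.assert_allclose(
    y_torch.numpy(),
    np.transpose(y_tf.numpy(), (0, 3, 1, 2)),
    atol=1e-4,
)
```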

We have not tested these layers with autoencoders yet. However, we have also performed experiments on sound event detection, and the approach works well! We are testing it in other applications too, with good results. It can be treated as a generic convolutional layer; however, it works especially well for multidimensional inputs, since it captures correlations among the input dimensions.
