
little bug in tensor_regression_layer_pytorch.ipynb #13

Closed

segalinc opened this issue Jan 26, 2021 · 3 comments

Comments

@segalinc

Hi Jean,

just wanted to point out that in the example tensor_regression_layer_pytorch.ipynb, in the TRL layer's forward pass, this line
regression_weights = tl.tucker_to_tensor(self.core, self.factors)
should instead be
regression_weights = tl.tucker_to_tensor((self.core, self.factors))
otherwise you get an error. This happened to me while working on my own code using the latest version.

Let me know if you get it too when running the example.
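
For reference, a minimal sketch of the corrected forward pass (attribute names like self.core, self.factors, self.bias and the inner-product step follow the notebook's TRL and should be treated as assumptions):

```python
import tensorly as tl
tl.set_backend('pytorch')

def forward(self, x):
    # Recent TensorLy versions expect a single Tucker tensor,
    # i.e. a (core, factors) tuple, rather than two positional arguments.
    regression_weights = tl.tucker_to_tensor((self.core, self.factors))
    # Inner product of the input with the regression weights over all
    # non-batch modes, plus a bias (as in the notebook's TRL).
    return tl.tenalg.inner(x, regression_weights, n_modes=tl.ndim(x) - 1) + self.bias
```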

@JeanKossaifi
Owner

JeanKossaifi commented Jan 27, 2021

Hi Christina,

Thanks for reporting, you are completely right! The issue should now be fixed in 6af351c.

As a side note, we now provide well-tested PyTorch Tensor Regression Layers in TensorLy-Torch.

These also support tensor hooks such as tensor dropout or rank regularization (lasso).
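
For example, a minimal usage sketch (assuming the tltorch API, where TRL takes an input shape, an output shape, a factorization, and a rank; the shapes below are just illustrative):

```python
import torch
import tltorch

# A tensor regression layer mapping activations of shape (batch, 3, 4, 5)
# to outputs of shape (batch, 10) through a Tucker-factorized weight tensor.
trl = tltorch.TRL(input_shape=(3, 4, 5), output_shape=(10,),
                  factorization='tucker', rank='same')

x = torch.randn(8, 3, 4, 5)  # batch of 8 samples
y = trl(x)                   # y has shape (8, 10)
```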

Feel free to re-open if you still have the issue!

@segalinc
Author

segalinc commented Jan 27, 2021 via email

@JeanKossaifi
Owner

JeanKossaifi commented Jan 27, 2021

Based on the tuple you show for rank, I'm assuming this is for Tucker. I would say rank='same' is a good start. Alternatively, rank=kernel.shape should always work well.
Ideally you want to reduce the rank to benefit from the low-rank regularization (e.g. rank=0.75).

Depending on the problem, with fine-tuning/retraining, you should be able to reach rank=0.5 without loss of performance (the low-rank regularization may even help generalize better with longer fine-tuning/retraining).
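
For illustration, a rough sketch of those rank options applied to a hypothetical kernel (assuming tltorch's FactorizedTensor.from_tensor and its rank conventions, where a float is a fraction of the full parameter count):

```python
import torch
import tltorch

kernel = torch.randn(16, 16, 3, 3)  # hypothetical conv kernel to factorize

# rank='same': keep roughly as many parameters as the original kernel
same = tltorch.FactorizedTensor.from_tensor(kernel, rank='same',
                                            factorization='tucker')

# rank=kernel.shape: full Tucker rank along every mode, always a safe choice
full = tltorch.FactorizedTensor.from_tensor(kernel, rank=kernel.shape,
                                            factorization='tucker')

# rank=0.75: compress to ~75% to benefit from low-rank regularization
low = tltorch.FactorizedTensor.from_tensor(kernel, rank=0.75,
                                           factorization='tucker')
```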
Let me know of any feedback you may have! :)
