Convert kernel_mask into a constant tensor #74

Open

Larst0 wants to merge 1 commit into master
Conversation

@Larst0 commented Aug 17, 2021

When I use the given implementation for training, I always get NaN values at the output. Sometimes this happens after a few training steps and sometimes after a few epochs (depending on the training data used).

While debugging, I noticed that the kernel_mask was being updated during training. I think this is because K.ones(shape=...) instantiates a Keras variable (rather than a constant tensor) when given a concrete shape, and that variable can then receive updates. In the original PyTorch implementation, the kernel_mask is initialized with weight_maskUpdater = torch.ones(...), which by default creates a non-trainable tensor (requires_grad=False).

After replacing K.ones(...) with K.constant(...), the NaN values no longer occur.
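
For reference, here is a minimal sketch of the change, assuming the mask kernel is created inside a custom partial-convolution layer's build() method. The class name, shape handling, and attribute names below are illustrative and may differ from the repository's actual code; only the relevant part of the layer is shown.

```python
import tensorflow as tf
from tensorflow.keras import backend as K


class PConv2D(tf.keras.layers.Layer):
    """Sketch of a partial-convolution layer; only the mask-kernel setup is shown."""

    def __init__(self, filters, kernel_size, **kwargs):
        super().__init__(**kwargs)
        self.filters = filters
        # kernel_size is assumed to be a (height, width) tuple.
        self.kernel_size = kernel_size

    def build(self, input_shape):
        # Assumed input layout: input_shape = [image_shape, mask_shape], channels last.
        channels = input_shape[0][-1]
        kernel_shape = (*self.kernel_size, channels, self.filters)

        # Problematic version: K.ones(...) instantiates a Keras *variable*,
        # which can be picked up and updated during training, so the
        # all-ones mask kernel drifts away from ones.
        # self.kernel_mask = K.ones(shape=kernel_shape)

        # Fixed version: K.constant(...) yields a plain constant tensor that
        # never receives updates, mirroring torch.ones(...) in the PyTorch
        # implementation (requires_grad=False by default).
        self.kernel_mask = K.constant(1.0, shape=kernel_shape)

        super().build(input_shape)
```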
