
Extra loss functions in loss list are ignored #123

Closed

arvoelke opened this issue Dec 20, 2019 · 1 comment

Comments

arvoelke (Contributor) commented Dec 20, 2019

import nengo
import nengo_dl
import numpy as np

# this loss raises ZeroDivisionError if it is ever actually called
ignored_loss = lambda y_true, y_pred: 0 / 0

with nengo.Network() as net:
    p = nengo.Probe(nengo.Node(1))

with nengo_dl.Simulator(net) as sim:
    # the second element of the loss list is silently ignored
    sim.compile(loss=[nengo_dl.losses.Regularize(), ignored_loss])
    sim.fit(n_steps=1, y=np.zeros((1, 1, 1)), epochs=1)

Expected behaviour: this should raise an error or warning saying that the extra elements in the loss list are not being used. We can tell that ignored_loss is being ignored because no ZeroDivisionError is raised; if we make it the first element of the loss list, then we do get the ZeroDivisionError.
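
For example, reordering the list in the snippet above so that ignored_loss comes first surfaces the error immediately, which shows that only the first element is being applied:

sim.compile(loss=[ignored_loss, nengo_dl.losses.Regularize()])
sim.fit(n_steps=1, y=np.zeros((1, 1, 1)), epochs=1)  # now raises ZeroDivisionError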

Context: I was trying this in the hope that it might somehow weight the two loss functions together (in Keras one can normally attach extra loss functions, such as regularization, to other parts of the network quite easily). For reference, the correct way to do this is the dictionary pattern used in this unit test:

def test_regularize_train(Simulator, mode, seed):
    with nengo.Network(seed=seed) as net:
        a = nengo.Node([1])
        b = nengo.Ensemble(
            30,
            1,
            neuron_type=nengo.Sigmoid(tau_ref=1),
            gain=nengo.dists.Choice([1]),
            bias=nengo.dists.Choice([0]),
        )
        c = nengo.Connection(
            a, b.neurons, synapse=None, transform=nengo.dists.Uniform(-0.1, 0.1)
        )

        if mode == "weights":
            p = nengo.Probe(c, "weights")
        else:
            p = nengo.Probe(b.neurons)

        # default output required so that there is a defined gradient for all
        # parameters
        default_p = nengo.Probe(b)

    with Simulator(net) as sim:
        sim.compile(
            tf.optimizers.RMSprop(0.01 if mode == "weights" else 0.1),
            loss={p: losses.Regularize(), default_p: lambda y_true, y_pred: 0 * y_pred},
        )
        sim.fit(
            n_steps=5,
            y={
                p: np.zeros((1, 5, p.size_in)),
                default_p: np.zeros((1, 5, default_p.size_in)),
            },
            epochs=100,
        )

        sim.step()

    assert np.allclose(sim.data[p], 0, atol=1e-2)
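
Applied to the single-probe reproduction above, here is a minimal sketch of one way to weight two losses together on the same probe, by combining them in a single callable (the custom_loss term and the 0.1 weight are illustrative assumptions, not from the issue):

import nengo
import nengo_dl
import numpy as np
import tensorflow as tf

with nengo.Network() as net:
    p = nengo.Probe(nengo.Node(1))

def custom_loss(y_true, y_pred):
    # hypothetical second loss term, for illustration only
    return tf.reduce_mean(tf.square(y_pred))

regularize = nengo_dl.losses.Regularize()

def combined_loss(y_true, y_pred):
    # weight the two terms together explicitly; the 0.1 weight is illustrative
    return regularize(y_true, y_pred) + 0.1 * custom_loss(y_true, y_pred)

with nengo_dl.Simulator(net) as sim:
    sim.compile(loss={p: combined_loss})
    sim.fit(n_steps=1, y={p: np.zeros((1, 1, 1))}, epochs=1)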

drasmuss (Member) commented Aug 3, 2020

A warning was added in #139

drasmuss closed this as completed Aug 3, 2020