Hi,
When I try to train DEPICT, I receive the following error at seemingly random epochs; I have hit it at epochs 165, 400, and 2000. The error is:
TypeError: ('An update must have the same type as the original shared variable (shared_var=<TensorType(float32, matrix)>, shared_var.type=TensorType(float32, matrix), update_val=Elemwise{add,no_inplace}.0, update_val.type=TensorType(float64, matrix)).', 'If the difference is related to the broadcast pattern, you can call the tensor.unbroadcast(var, axis_to_unbroadcast[, ...]) function to remove broadcastable dimensions.')
It occurs in this block of code:
updates = lasagne.updates.adam(
    loss, params2, learning_rate=learning_rate)
train_fn = theano.function([input_var, target_var],
                           [loss, loss_recons, loss_clus], updates=updates)
Did you face this issue? Any ideas on how I can work around it? The versions of Theano and Lasagne are as instructed in the guidelines.
The model shouldn't really need to run to epoch 400 or 2000. What data are you using for this? It looks like a dtype problem involving float64: did you make sure all float values are float32 before passing them to the network?
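The error message says a float32 shared variable is receiving a float64 update, which usually means float64 leaked into the graph through the input data or a Python-float hyperparameter. A minimal sketch of the kind of casting being suggested (the helper name `to_float32` and the toy shapes are made up for illustration; Theano's actual default dtype is controlled by `floatX = float32` in `.theanorc`):

```python
import numpy as np

def to_float32(*arrays):
    """Cast every input array to float32 so no float64 leaks
    into the float32 Theano graph and its Adam updates."""
    return [np.asarray(a, dtype=np.float32) for a in arrays]

# Scalar hyperparameters matter too: a plain Python float is float64,
# and mixing it into a float32 expression upcasts the result.
learning_rate = np.float32(1e-3)

# Toy batch, cast before it is fed to train_fn.
X_batch, y_batch = to_float32(np.random.rand(8, 4), np.random.rand(8, 2))
print(X_batch.dtype, y_batch.dtype, learning_rate.dtype)
```

Alternatively, building `train_fn` with `theano.function(..., allow_input_downcast=True)` lets Theano cast float64 inputs down automatically, though explicit casting keeps the intent clearer.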