Currently we are using custom NumPy logic for RMSprop. Keras already provides optimizers that we should leverage. It is also much better to run the updates on the GPU instead of transitioning back and forth between tensors and NumPy.
This has bigger implications: everything written in NumPy needs to be converted to use tensors, e.g. the modifiers module.
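For reference, the custom NumPy RMSprop logic being replaced amounts to something like the following sketch (a generic RMSprop update, not this project's exact implementation; the function name and hyperparameter defaults are assumptions):

```python
import numpy as np

def rmsprop_update(x, grad, cache, lr=0.001, rho=0.9, eps=1e-8):
    """One RMSprop step on a NumPy array.

    Every call like this forces a device->host transfer of `grad` and a
    host->device transfer of the updated `x`, which is the overhead a
    tensor-based Keras optimizer would avoid.
    """
    # Decaying average of squared gradients.
    cache = rho * cache + (1 - rho) * grad ** 2
    # Step scaled by the root of the accumulated squared gradients.
    x = x - lr * grad / (np.sqrt(cache) + eps)
    return x, cache
```

A Keras optimizer keeps both the parameters and this `cache` slot as tensors on the device, so the whole update runs inside the graph.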
It is not possible to use a Keras optimizer with an input placeholder: optimizers apply gradient updates to variables, and we cannot compute the gradient of the loss with respect to a placeholder through that API. The only alternative is to modify the model graph, which is pretty hairy. Closing this as we won't go this route.