Hi,
I am trying to reproduce the method proposed in "Gradient-enhanced physics-informed neural networks for forward and inverse PDE problems", for example using an H^1 (Sobolev) norm instead of an L^2 norm.
Generally, the losses take (y_true, y_pred) as inputs.
Is there a simple way to incorporate x into the losses in order to evaluate gradient-based losses/norms?
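To make this concrete, here is a rough sketch of the kind of loss I have in mind, written in TensorFlow; sobolev_h1_loss and all of its argument names are placeholders, and dy_true_dx stands for the reference gradient of the target:

```python
import tensorflow as tf

def sobolev_h1_loss(model, x, y_true, dy_true_dx):
    # H^1-style loss: L^2 misfit on the values plus L^2 misfit on the
    # gradients. x has to be passed in explicitly, since dy_pred/dx
    # cannot be recovered from (y_true, y_pred) alone.
    x = tf.convert_to_tensor(x)
    with tf.GradientTape() as tape:
        tape.watch(x)
        y_pred = model(x)
    dy_pred_dx = tape.gradient(y_pred, x)
    return (tf.reduce_mean(tf.square(y_true - y_pred))
            + tf.reduce_mean(tf.square(dy_true_dx - dy_pred_dx)))
```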
In parallel, it could be interesting to enable "custom losses" for model.compile(), i.e. something like
model.compile(loss_fn = custom_loss)
with a previously defined
custom_loss(y_true, y_pred)
or even
custom_loss(y_true, y_pred, x)
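In the meantime, the workaround I am considering is binding x in a closure so the loss still matches the standard (y_true, y_pred) signature. A minimal sketch, reusing sobolev_h1_loss from above (make_custom_loss is a placeholder name, and this assumes full-batch training so that the closed-over x corresponds to the batch behind y_true):

```python
def make_custom_loss(model, x, dy_true_dx):
    # Placeholder factory: x and the reference gradient are captured in
    # the closure, so the returned callable only takes (y_true, y_pred).
    def custom_loss(y_true, y_pred):
        # y_pred is ignored here; the model is re-evaluated inside
        # sobolev_h1_loss so that dy_pred/dx can be taken w.r.t. x.
        return sobolev_h1_loss(model, x, y_true, dy_true_dx)
    return custom_loss

# Assuming a Keras-style compile() that accepts a callable via its
# `loss` argument:
# model.compile(optimizer="adam",
#               loss=make_custom_loss(model, x_train, dy_train_dx))
```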
Thank you for your help!
Paul