Now that wrappers are mostly outside of the API (apart from exceptional cases), having `@cost` unwrap a tensor and then add it as a cost seems unnecessary. The wrapper could be removed in favour of `storch.add_cost`, or alternatively `tensor.backward()` could implement what is currently implemented in `storch.backward` to make it more "PyTorchy".
I'm still not a big fan of that last idea, as it does not allow computing the extra cost nodes and adding them together into one big loss, which is much more efficient. So I think `storch.add_cost` should be the API to settle on.
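The single-accumulation approach can be sketched in plain Python. This is a hypothetical illustration of the pattern only, not storch's actual implementation; the class and method names below (`CostAccumulator`, `add_cost`, `backward`) are stand-ins:

```python
class CostAccumulator:
    """Collects cost terms so they can be summed into one big loss
    and reduced in a single backward-style pass, rather than each
    cost node being reduced on its own."""

    def __init__(self):
        self._costs = []  # list of (name, value) pairs

    def add_cost(self, value, name):
        # Register a cost node instead of reducing it immediately.
        self._costs.append((name, value))
        return value

    def backward(self):
        # Sum all registered costs into one loss, then clear the
        # registry for the next training iteration.
        total = sum(value for _, value in self._costs)
        self._costs.clear()
        return total


accumulator = CostAccumulator()
accumulator.add_cost(0.5, "reconstruction")
accumulator.add_cost(0.1, "regularizer")
loss = accumulator.backward()  # single combined loss
```

The point of the sketch is the design choice under discussion: costs are registered explicitly via an `add_cost`-style call and only combined once, which is what a plain `tensor.backward()` per cost node would lose.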
- Removed the cost wrapper, as it no longer had much of a logical use. See #32.
- Replaced `DeterministicTensor` with `CostTensor`, as its only remaining functionality was to accommodate positive `is_cost` checks.
- Added an experimental `reduce` wrapper that can reduce a batched dim without raising an error. Might remove it, as the use case did not actually need it.
- Reworked `storch.nn.b_binary_cross_entropy` to better accommodate the current API.