SquaredL2Loss should inherit from WeightedSquaredL2Loss #94
Comments
I don't think it would break anything. If jitted, there would be no performance change.
I expect that the performance change would be small in any event, but are you really confident that the jit analysis would be able to identify a multiplication by a variable that happens to have been assigned as a unit matrix? Independent of this, is it possible to rearrange the representation a bit so that we can set W = 1.0 (i.e., a scalar) in the case of no weighting?
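A minimal sketch of the scalar-weighting idea, assuming a hypothetical helper weighted_sq_l2 (this is not scico's API): when W is the scalar 1.0, the weighting multiply is a trivial scalar op rather than a matrix-vector product, and the result matches explicit identity-matrix weighting.

```python
import numpy as np

def weighted_sq_l2(y, Ax, W=1.0):
    """Evaluate (1/2) * ||Ax - y||_W^2, where W may be a scalar or a matrix.

    With the scalar default W = 1.0 (no weighting), the multiply is a
    cheap scalar operation instead of a matrix-vector product.
    """
    r = Ax - y
    if np.isscalar(W):
        return 0.5 * W * np.vdot(r, r).real
    return 0.5 * np.vdot(r, W @ r).real

y = np.array([1.0, 2.0, 3.0])
Ax = np.array([1.5, 2.0, 2.0])
# scalar weighting agrees with explicit identity-matrix weighting
assert np.isclose(weighted_sq_l2(y, Ax), weighted_sq_l2(y, Ax, np.eye(3)))
```

The branch on np.isscalar is one possible way to keep a single class while avoiding the identity multiply entirely.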
IIRC the default isn't an IdentityMatrix, but rather the Identity LinOp; its eval is a no-op.
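To illustrate the point about the no-op eval, here is a minimal stand-in for such an identity operator (a sketch, not scico's Identity LinOp):

```python
import numpy as np

class Identity:
    """Minimal stand-in for an identity linear operator.

    Its eval is a no-op that returns the input unchanged; no matrix
    (and no diagonal of ones) is ever materialized.
    """

    def __call__(self, x):
        return x  # no-op: nothing computed, nothing stored

x = np.arange(5.0)
I = Identity()
assert I(x) is x  # the very same array comes back; not even a copy is made
```

Whether a jit tracer can eliminate such a call entirely is exactly the open question above; for an operator written this way there is nothing to eliminate in the first place.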
cf. lines 214 to 215 in 4684909.
Only concern would be that the diagonal is explicitly set (see line 211 in 4684909). But can this be a scalar? Then it could be that the shape of …
The shape of … As for identity: yes, it is true that the .diagonal never gets used (see lines 213 to 217 in 4684909). In earlier times, …
What then is the reasoning to inherit from Diagonal? It seems like in other contexts (other than the issue at hand) it may also be wasteful to store … In other words, if …
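The wastefulness concern can be made concrete: storing an explicit diagonal of ones for an unweighted problem costs O(n) memory, while a scalar weight costs O(1), even though the two produce identical results. A small sketch (diag_ones stands in for what a Diagonal-of-ones operator would hold; the name is hypothetical):

```python
import numpy as np

n = 4
diag_ones = np.ones(n)   # what a diagonal-of-ones operator would store: O(n)
x = np.arange(float(n))

# identical results, very different storage
assert np.array_equal(diag_ones * x, 1.0 * x)
assert diag_ones.nbytes == n * 8  # n float64 entries kept alive for no reason
```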
Also, this means that …
I suppose you could relax |
SquaredL2Loss
=WeightedSquaredL2Loss
withW=None
, however, internallyW
will still become the and Identity operator.Hence, should we have the following inheritance?
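The proposed inheritance can be sketched as follows. The class names come from the issue; every internal detail (the scalar-1.0 default, the __call__ body) is an assumption for illustration, not scico's actual implementation:

```python
import numpy as np

class WeightedSquaredL2Loss:
    """Sketch of (alpha/2) * ||A(x) - y||_W^2 with weighting W."""

    def __init__(self, y, A, W=None, alpha=1.0):
        self.y, self.A, self.alpha = y, A, alpha
        # Hypothetical internal default: no weighting becomes the
        # scalar 1.0 rather than an Identity operator.
        self.W = 1.0 if W is None else W

    def __call__(self, x):
        r = self.A(x) - self.y
        Wr = self.W * r if np.isscalar(self.W) else self.W @ r
        return 0.5 * self.alpha * np.vdot(r, Wr).real

class SquaredL2Loss(WeightedSquaredL2Loss):
    """Unweighted special case: inherits everything, pins W to None."""

    def __init__(self, y, A, alpha=1.0):
        super().__init__(y, A, W=None, alpha=alpha)

A = lambda x: 2.0 * x
y = np.array([1.0, 2.0])
x = np.array([1.0, 1.0])
loss = SquaredL2Loss(y, A)
# r = A(x) - y = [1, 0], so the loss is 0.5 * 1 = 0.5
assert np.isclose(loss(x), 0.5)
```

With this arrangement the subclass adds no state of its own, which is the direction the issue title proposes.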
Pros and cons: the extra cost for SquaredL2Loss is trivially small. Someone with more understanding of the inner workings of scico (@lukepfister) can perhaps comment?
Otherwise, the performance concerns can also easily be tested by benchmarking the performance of the eval and prox methods.
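A benchmark along those lines could look like the sketch below, comparing scalar weighting against an explicit identity matrix (plain numpy and timeit stand in here for scico's jitted eval; the function names are made up for the comparison):

```python
import timeit
import numpy as np

n = 1000
rng = np.random.default_rng(0)
r = rng.standard_normal(n)   # residual A(x) - y
W_mat = np.eye(n)            # explicit identity-matrix weighting

def eval_scalar():
    # W = 1.0: scalar multiply only
    return 0.5 * np.vdot(r, 1.0 * r).real

def eval_matrix():
    # W = I as a dense matrix: full matrix-vector product
    return 0.5 * np.vdot(r, W_mat @ r).real

t_scalar = timeit.timeit(eval_scalar, number=100)
t_matrix = timeit.timeit(eval_matrix, number=100)

# The two agree numerically; only the cost of the weighting multiply differs.
assert np.isclose(eval_scalar(), eval_matrix())
print(f"scalar: {t_scalar:.4f}s  matrix: {t_matrix:.4f}s")
```

The same harness could be pointed at the actual eval and prox methods, with and without jit, to settle the question empirically.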