SIRT objective #1790
base: master
Conversation
lgtm
Signed-off-by: Margaret Duff <43645617+MargaretDuff@users.noreply.github.com>
Just a couple of minor things.
@@ -46,6 +46,8 @@ class SIRT(Algorithm):
    :math:`\mathrm{prox}_{C}` is the projection over a set :math:`C`,
    and :math:`\omega` is the relaxation parameter.

    Note that SIRT is equivalent to gradient descent on a weighted least squares objective (weighted by :math:`M`) preconditioned by the sensitivity matrix (:math:`D`).
    Thus the objective calculation for SIRT is the weighted least squares objective :math:`\frac{1}{2}\|A x - b\|^{2}_{M}`.
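A minimal NumPy sketch of the objective the docstring describes, assuming the standard SIRT choice of M as the inverse row sums of A (the matrix, data and iterate values below are made up for illustration; this is not CIL code):

```python
import numpy as np

# Hypothetical example system: A is the operator, b the data, x the iterate.
A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
b = np.array([1.0, 2.0, 3.0])
x = np.array([0.5, 0.5])

M = 1.0 / A.sum(axis=1)        # diagonal of M: inverse row sums of A
r = A @ x - b                  # residual Ax - b
objective = 0.5 * r @ (M * r)  # 0.5 * ||Ax - b||^2_M = 0.5 * r^T M r
```

Here the M-norm is :math:`\|y\|^{2}_{M} = y^{T} M y`, which is what the reviewer below asks to have defined in the docstring.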
Please add a line to define the formula for the M-norm.
In the docstring.
Done
y = self.operator.direct(self.x)
y.subtract(self.data, out=y)
wy = self.M.multiply(y)
self.objective.append(0.5 * y.dot(wy))
Do we want the 0.5? It could be omitted for a (slight) speed-up, with the documented objective function rewritten to match. We should think about consistency with the defaults in related operations, including LeastSquares and CGLS, though.
There isn't a 0.5 in CGLS (it uses \min || A x - b ||^2_2) and the default for LeastSquares is 1.0. So in some ways, it would be odd for SIRT to include the 0.5. Mathematically, it is optimising the same thing. Shall we remove it?
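The point under discussion can be sketched quickly in NumPy: scaling the objective by a positive constant changes the reported values but not the minimiser (the system below is made up for illustration):

```python
import numpy as np

# Hypothetical overdetermined system.
A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
b = np.array([1.0, 2.0, 3.0])

# Minimiser of ||Ax - b||^2 via NumPy's least-squares solver.
x_star, *_ = np.linalg.lstsq(A, b, rcond=None)

def f(x):            # objective without the 0.5 (CGLS-style)
    return float(np.sum((A @ x - b) ** 2))

def g(x):            # objective with the 0.5
    return 0.5 * f(x)
# f and g differ by a constant factor, so x_star minimises both.
```

So the choice of 0.5 is purely a convention/consistency question, as the comment says.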
y = self.operator.direct(self.x)
y.subtract(self.data, out=y)
wy = self.M.multiply(y)
self.objective.append(0.5 * y.dot(wy))
This is adding an additional 2 copies of the data into memory, so we need to try to improve it. wy is the same as the residual r in update, and that is still in memory. So instead I think we can square r (in place) and then weight it with M (again in place). As r is not reused by the algorithm, it's OK to use its memory for this: the first step of the algorithm recomputes the new residuals into r.
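The suggestion above can be sketched in NumPy using in-place operations on the residual buffer, so no extra arrays are allocated (the weight and residual values are made up; this is not the CIL implementation):

```python
import numpy as np

# Hypothetical SIRT weights (diagonal of M) and residual r = Ax - b,
# both assumed to already be held in memory by the algorithm.
M = np.array([1 / 3, 1 / 7, 1 / 11])
r = np.array([0.5, 1.5, 2.5])

np.multiply(r, r, out=r)   # r <- r**2, in place (r is not reused afterwards)
np.multiply(r, M, out=r)   # r <- M * r**2, in place
objective = 0.5 * r.sum()  # 0.5 * ||Ax - b||^2_M with no extra copies
```

Note this destroys r, which is fine only because, as the comment says, the next iteration recomputes the residuals into r anyway.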
I tried this but got numerical errors (effectively from dividing and multiplying by M). I will try to debug this again.
Force-pushed from a910556 to f8b513b
Changes
Updated objective_function to match the mathematics - it is now a weighted least squares calculation. We might need to consider the efficiency of the objective calculation.
Testing you performed
Related issues/links
Closes #1787
Checklist
Contribution Notes
Please read and adhere to the developer guide and local patterns and conventions.