DM-14740: Stop using ndarray::EigenView indirectly in C++ code #59
Conversation
Some style comments, otherwise looks good.
    0.5 * (ndarray::asEigenMatrix(result.likelihood->getUnweightedData()).cast<Scalar>() -
           ndarray::asEigenMatrix(modelMatrix).cast<Scalar>() *
           ndarray::asEigenMatrix(data.amplitudes))
              .squaredNorm();
Ugh... maybe factor this expression into several local variables?
@kfindeisen is probably right that this needs more drastic surgery to be readable, but for simpler cases, at least, you might be able to smooth things out just by omitting the ndarray:: from any call to asEigenMatrix, as argument-dependent lookup will automatically search that namespace.
It would be nice, but I worry about temporary memory going away. I think it's safer to leave it as is.
In practice the arrays returned by Likelihood getters are documented to stick around, but that will not be obvious to anyone reading the code. I would rather that the memory safety be self-evident.
Call it chi, I think.
@TallJimbo I assume I would have to save the result in a matrix (e.g. copy it). That copy is unnecessary in the original code. Is this a concern?
Eigen::MatrixXd chi =
ndarray::asEigenMatrix(result.likelihood->getUnweightedData()).cast<Scalar>() -
ndarray::asEigenMatrix(modelMatrix).cast<Scalar>() * ndarray::asEigenMatrix(data.amplitudes);
result.objective = 0.5 * chi.squaredNorm();
For expediency I'm going to leave the code as it was. I agree it could use clarification but want to be sure to do it in a way that is safe and performant.
src/CModel.cc
Outdated
@@ -904,7 +907,7 @@ class CModelAlgorithm::Impl {
     // on the two components, which means the actual uncertainty is neither Gaussian nor symmetric,
     // which is a lot harder to compute and a lot harder to use.
     ndarray::Array<Pixel,1,1> model = ndarray::allocate(likelihood.getDataDim());
-    model.asEigen() = modelMatrix.asEigen() * amplitudes.cast<Pixel>();
+    ndarray::asEigenMatrix(model) = ndarray::asEigenMatrix(modelMatrix) * amplitudes.cast<Pixel>();
     WeightSums sums(model, likelihood.getUnweightedData(), likelihood.getVariance());
Replace with unweightedData?
     ndarray::Array<Pixel,2,-1> modelMatrix = makeModelMatrix(likelihood, nonlinear);
-    Vector gradient = -(modelMatrix.asEigen().adjoint() *
-                        likelihood.getUnweightedData().asEigen()).cast<Scalar>();
+    auto unweightedData = likelihood.getUnweightedData();
Consider making this variable be ndarray::asEigenMatrix(likelihood.getUnweightedData()) (though it would invalidate my comment about the definition of WeightSums).
I would want to keep the ndarray around as well as the Eigen::Map view of it (to make it obvious that the memory is safely held). I'm not very happy keeping track of two variables that contain different views of the same data, especially when the ndarray version's name already ends with "Matrix".
src/GeneralPsfFitter.cc
Outdated
    ndarray::asEigenMatrix(nonlinearHessian) =
            ndarray::asEigenMatrix(nonlinearHessian).selfadjointView<Eigen::Lower>();
    ndarray::asEigenMatrix(nonlinearGradient) *= p;
    ndarray::asEigenMatrix(nonlinearHessian) *= p;
Consider factoring these identical asEigenMatrix calls into local variables; I think that would make this code a bit more readable.
     } else {
-        ix->asEigen() = component._mu
+        ndarray::asEigenMatrix(*ix) = component._mu
You could not only get rid of the *ix, but also simplify the rest of the method with a foreach loop.
Based on @TallJimbo's comment on the shapelet pull request I'll hold off on this suggestion, attractive though it is.
src/optimizer.cc
Outdated
@@ -310,10 +311,11 @@ void OptimizerHistoryRecorder::fillObjectiveModelGrid(
     Vector gradient(parameters.getSize());
     Matrix hessian(parameters.getSize(), parameters.getSize());
     Vector s(parameters.getSize());
-    Vector current = record.get(parameters).asEigen();
+    auto currentNdArray = record.get(parameters);
+    Vector current = ndarray::asEigenMatrix(currentNdArray);
Consider adding a comment that currentNdArray must be a local variable.
thus stop using ndarray::EigenView in C++ code (though it is still indirectly used in pybind11 wrappers)