
DM-14740: Stop using ndarray::EigenView indirectly in C++ code #59

Merged
merged 1 commit into from Jun 18, 2018

Conversation

@r-owen (Contributor) commented Jun 11, 2018

No description provided.

@r-owen r-owen force-pushed the tickets/DM-14740 branch 2 times, most recently from fc538a7 to 9777421 Compare June 13, 2018 13:43
@kfindeisen (Member) left a comment

Some style comments, otherwise looks good.

0.5 * (ndarray::asEigenMatrix(result.likelihood->getUnweightedData()).cast<Scalar>() -
ndarray::asEigenMatrix(modelMatrix).cast<Scalar>() *
ndarray::asEigenMatrix(data.amplitudes))
.squaredNorm();
kfindeisen (Member):

Ugh... maybe factor this expression into several local variables?

TallJimbo (Member):

@kfindeisen is probably right that this needs more drastic surgery to be readable, but for simpler cases, at least, you might be able to smooth things out just by omitting the ndarray:: from any call to asEigenMatrix, as argument-dependent lookup will automatically search that namespace.

@r-owen (Author) Jun 18, 2018:

It would be nice, but I worry about temporary memory going away. I think it's safer to leave it as is.

In practice I think the arrays returned by Likelihood getters are documented to stick around, but that will not be obvious to anyone reading the code. I would rather the memory safety be self-evident.

Member:

Call it chi, I think.

@r-owen (Author):

@TallJimbo I assume I would have to save the result in a matrix (e.g. copy it). That copy is unnecessary in the original code. Is this a concern?

Eigen::MatrixXd chi =
        ndarray::asEigenMatrix(result.likelihood->getUnweightedData()).cast<Scalar>() -
        ndarray::asEigenMatrix(modelMatrix).cast<Scalar>() * ndarray::asEigenMatrix(data.amplitudes);
result.objective = 0.5 * chi.squaredNorm();

@r-owen (Author):

For expediency I'm going to leave the code as it was. I agree it could use clarification but want to be sure to do it in a way that is safe and performant.

src/CModel.cc Outdated
@@ -904,7 +907,7 @@ class CModelAlgorithm::Impl {
// on the two components, which means the actual uncertainty is neither Gaussian nor symmetric,
// which is a lot harder to compute and a lot harder to use.
ndarray::Array<Pixel,1,1> model = ndarray::allocate(likelihood.getDataDim());
model.asEigen() = modelMatrix.asEigen() * amplitudes.cast<Pixel>();
ndarray::asEigenMatrix(model) = ndarray::asEigenMatrix(modelMatrix) * amplitudes.cast<Pixel>();
WeightSums sums(model, likelihood.getUnweightedData(), likelihood.getVariance());
Member:

Replace with unweightedData?

ndarray::Array<Pixel,2,-1> modelMatrix = makeModelMatrix(likelihood, nonlinear);
Vector gradient = -(modelMatrix.asEigen().adjoint() *
likelihood.getUnweightedData().asEigen()).cast<Scalar>();
auto unweightedData = likelihood.getUnweightedData();
Member:
Consider making this variable be ndarray::asEigenMatrix(likelihood.getUnweightedData()) (though it would invalidate my comment about the definition of WeightSums).

@r-owen (Author) Jun 18, 2018:

I would want to keep the ndarray around as well as the Eigen::Map view of it (to make it obvious that the memory is safely held). I'm not very happy keeping track of two variables that contain different views of the same data, especially when the ndarray version's name already ends with "Matrix".

ndarray::asEigenMatrix(nonlinearHessian) =
ndarray::asEigenMatrix(nonlinearHessian).selfadjointView<Eigen::Lower>();
ndarray::asEigenMatrix(nonlinearGradient) *= p;
ndarray::asEigenMatrix(nonlinearHessian) *= p;
Member:
Consider factoring these identical asEigenMatrix calls into local variables; I think that would make this code a bit more readable.

} else {
ix->asEigen() = component._mu
ndarray::asEigenMatrix(*ix) = component._mu
Member:

You could not only get rid of the *ix, but also simplify the rest of the method with a foreach loop.

@r-owen (Author) Jun 18, 2018:

Based on @TallJimbo's comment on the shapelet pull request I'll hold off on this suggestion, attractive though it is.

src/optimizer.cc Outdated
@@ -310,10 +311,11 @@ void OptimizerHistoryRecorder::fillObjectiveModelGrid(
Vector gradient(parameters.getSize());
Matrix hessian(parameters.getSize(), parameters.getSize());
Vector s(parameters.getSize());
Vector current = record.get(parameters).asEigen();
auto currentNdArray = record.get(parameters);
Vector current = ndarray::asEigenMatrix(currentNdArray);
Member:

Consider adding a comment that currentNdArray must be a local variable.

thus stop using ndarray::EigenView in C++ code
(though it is still indirectly used in pybind11 wrappers)
@r-owen r-owen merged commit dc19177 into master Jun 18, 2018
@ktlim ktlim deleted the tickets/DM-14740 branch August 25, 2018 05:49