
Fix typo in jacobian comments #191

Merged

Conversation

efernandez
Collaborator

I simply saw this typo while looking at this code, trying to figure out how hard it'd be to have analytic Jacobians for the absolute and relative 2D pose cost constraints, i.e. the normal prior and normal delta specializations for 2D poses, which use:

```cpp
template <typename T>
bool NormalPriorPose2DCostFunctor::operator()(const T* const position, const T* const orientation,
                                              T* residual) const
{
  Eigen::Matrix<T, 3, 1> full_residuals_vector;
  full_residuals_vector(0) = position[0] - T(b_(0));
  full_residuals_vector(1) = position[1] - T(b_(1));
  full_residuals_vector(2) = fuse_core::wrapAngle2D(orientation[0] - T(b_(2)));

  // Scale the residuals by the square root information matrix to account for
  // the measurement uncertainty.
  Eigen::Map<Eigen::Matrix<T, Eigen::Dynamic, 1>> residuals_vector(residual, A_.rows());
  residuals_vector = A_.template cast<T>() * full_residuals_vector;

  return true;
}
```

```cpp
template <typename T>
bool NormalDeltaPose2DCostFunctor::operator()(
  const T* const position1,
  const T* const orientation1,
  const T* const position2,
  const T* const orientation2,
  T* residual) const
{
  Eigen::Map<const Eigen::Matrix<T, 2, 1>> position1_vector(position1);
  Eigen::Map<const Eigen::Matrix<T, 2, 1>> position2_vector(position2);

  Eigen::Matrix<T, 3, 1> full_residuals_vector;
  full_residuals_vector.template head<2>() =
    fuse_core::rotationMatrix2D(orientation1[0]).transpose() * (position2_vector - position1_vector) -
    b_.head<2>().template cast<T>();
  full_residuals_vector(2) = fuse_core::wrapAngle2D(orientation2[0] - orientation1[0] - T(b_(2)));

  // Scale the residuals by the square root information matrix to account for
  // the measurement uncertainty.
  Eigen::Map<Eigen::Matrix<T, Eigen::Dynamic, 1>> residuals_vector(residual, A_.rows());
  residuals_vector = A_.template cast<T>() * full_residuals_vector;

  return true;
}
```

This is a bit off-topic for this PR, but it looks like the main reason they don't use analytic Jacobians is that those functors don't have access to the specific pose components/indices being used, i.e. x, y and/or yaw. I wonder:

  1. whether it'd be possible and relatively clean to pass the indices to these cost functors, or rather to the corresponding cost function that would evaluate the Jacobians analytically;
  2. what the benefit of analytic Jacobians over automatic differentiation would be for each of these two types of constraints.

Sorry I'm hijacking this PR to open this topic, but I'd like to know your thoughts on this @svwilliams 😄

I'd be happy to give this a try. The analytic Jacobians are indeed trivial for the absolute constraint (the normal prior) case: the Jacobian is just the matrix A_ for the full pose, and simply some blocks of it depending on the indices used.
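For the full-state case, something like this minimal sketch is what I have in mind (illustrative only, not code from this repo; it assumes a fixed 3x3 square root information matrix A_ and mean b_, and that fuse_core::wrapAngle2D lives in fuse_core/util.h):

```cpp
#include <ceres/sized_cost_function.h>
#include <fuse_core/util.h>  // assumed location of fuse_core::wrapAngle2D

#include <Eigen/Core>

// Illustrative analytic cost function for the full-state 2D pose prior.
// Parameter blocks match the functor above: position (x, y) and orientation (yaw).
class NormalPriorPose2DAnalytic : public ceres::SizedCostFunction<3, 2, 1>
{
public:
  NormalPriorPose2DAnalytic(const Eigen::Matrix3d& A, const Eigen::Vector3d& b) : A_(A), b_(b) {}

  bool Evaluate(double const* const* parameters, double* residuals, double** jacobians) const override
  {
    const double* position = parameters[0];
    const double* orientation = parameters[1];

    Eigen::Vector3d full_residuals;
    full_residuals(0) = position[0] - b_(0);
    full_residuals(1) = position[1] - b_(1);
    full_residuals(2) = fuse_core::wrapAngle2D(orientation[0] - b_(2));

    // Scale the residuals by the square root information matrix.
    Eigen::Map<Eigen::Vector3d>(residuals) = A_ * full_residuals;

    // The residual is linear in the pose, so the Jacobians are constant:
    // just blocks of A_, independent of the current parameter values.
    if (jacobians)
    {
      if (jacobians[0])  // d(residual) / d(position): first two columns of A_
      {
        Eigen::Map<Eigen::Matrix<double, 3, 2, Eigen::RowMajor>>(jacobians[0]) = A_.leftCols<2>();
      }
      if (jacobians[1])  // d(residual) / d(orientation): last column of A_
      {
        Eigen::Map<Eigen::Vector3d>(jacobians[1]) = A_.col(2);
      }
    }

    return true;
  }

private:
  Eigen::Matrix3d A_;
  Eigen::Vector3d b_;
};
```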

@efernandez efernandez self-assigned this May 7, 2020
@efernandez efernandez added the question Further information is requested label May 7, 2020
@svwilliams svwilliams merged commit 3448361 into locusrobotics:devel May 18, 2020
@svwilliams
Contributor

The reason there are no analytic jacobians is pure laziness on my part. The pose component order is implied by the implemented function; I don't think you even need to pass the indices into the function to compute the derivatives. If you want to implement some derivative math, that would be great.

@efernandez efernandez deleted the fix-jacobian-comment-typo branch May 20, 2020 15:57
@efernandez
Collaborator Author

> The reason there are no analytic jacobians is pure laziness on my part. The pose component order is implied by the implemented function; I don't think you even need to pass the indices into the function to compute the derivatives. If you want to implement some derivative math, that would be great.

Yep, there's no need to pass any indices. I've already done this for the normal prior and I should send a PR soon. I'm working on testing and benchmarking things for all possible combinations of indices atm. For the full state (all indices) the speed-up for the cost function is approximately 4x, which is great, although I don't expect it to have a huge impact overall.

Then I'd do the normal delta, which is just a bit more complex, but still simple. That one I only have on paper atm.
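For reference, a sketch of what those derivatives look like for the full-state case, derived from the functor above (worth double-checking; the whitened Jacobians would be these blocks left-multiplied by A_):

```latex
r = \begin{bmatrix} R(\theta_1)^\top (p_2 - p_1) - b_{1:2} \\ \theta_2 - \theta_1 - b_3 \end{bmatrix}

\frac{\partial r}{\partial p_1} = \begin{bmatrix} -R(\theta_1)^\top \\ 0_{1 \times 2} \end{bmatrix} \qquad
\frac{\partial r}{\partial \theta_1} = \begin{bmatrix} \frac{\partial R(\theta_1)^\top}{\partial \theta_1} (p_2 - p_1) \\ -1 \end{bmatrix} \qquad
\frac{\partial r}{\partial p_2} = \begin{bmatrix} R(\theta_1)^\top \\ 0_{1 \times 2} \end{bmatrix} \qquad
\frac{\partial r}{\partial \theta_2} = \begin{bmatrix} 0_{2 \times 1} \\ 1 \end{bmatrix}
```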
