The size of Jacobians when using local parameterization and analytical derivatives #303
Comments
The local parameterization also has to implement a Plus operation, which must be compatible with the Jacobian. Therefore, the solution you are using only makes sense if you can ignore the last n_xg - n_xl coordinates; and if you can, why use that as the parameter size at all? All that said, the big question is: how much of the time you are trying to optimize is actually being spent applying the local parameterization? Generally speaking, that time is quite small.
Thanks for your reply. As a simple example, we use 9 elements (a 3 * 3 matrix stored as a 9 * 1 vector) to represent a rotation matrix, and we use the exponential map Exp( ) as the oplus operation on all 9 elements. We think this saves time compared to a quaternion or axis-angle representation, because the 9 elements can be used directly in `Evaluate` without first converting the state vector into a rotation matrix. To our understanding, we have to provide an n_f * 9 Jacobian (for the orientation, in the cost function) and a 9 * 3 Jacobian in the local parameterization. However, the Jacobian actually needed to build the Hessian matrix only has the local size of the parameter, not the global size.
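The oplus described above (updating a 3 * 3 rotation matrix stored as 9 doubles by right-multiplying with Exp of a 3-vector) can be sketched as plain C++. This is a minimal, self-contained illustration of the update itself, assuming row-major storage; `ExpPlus` is a hypothetical name, not the Ceres `LocalParameterization::Plus` signature, though it mirrors its `(x, delta, x_plus_delta)` shape.

```cpp
#include <cmath>

// Hypothetical sketch: x is a 3x3 rotation matrix stored row-major as 9
// doubles, delta is a 3-vector in so(3). Computes x_plus_delta = x * Exp(delta)
// using Rodrigues' formula:
//   Exp(w) = I + (sin|w|/|w|) [w]x + ((1 - cos|w|)/|w|^2) [w]x^2
void ExpPlus(const double* x, const double* delta, double* x_plus_delta) {
  const double theta = std::sqrt(delta[0] * delta[0] + delta[1] * delta[1] +
                                 delta[2] * delta[2]);
  double E[9] = {1, 0, 0, 0, 1, 0, 0, 0, 1};  // Exp(delta), starts as identity.
  if (theta > 1e-12) {
    const double a = std::sin(theta) / theta;
    const double b = (1.0 - std::cos(theta)) / (theta * theta);
    // Skew-symmetric matrix [delta]x.
    const double wx[9] = {0, -delta[2], delta[1],
                          delta[2], 0, -delta[0],
                          -delta[1], delta[0], 0};
    double wx2[9];  // [delta]x squared.
    for (int i = 0; i < 3; ++i)
      for (int j = 0; j < 3; ++j) {
        double s = 0;
        for (int k = 0; k < 3; ++k) s += wx[3 * i + k] * wx[3 * k + j];
        wx2[3 * i + j] = s;
      }
    for (int i = 0; i < 9; ++i) E[i] += a * wx[i] + b * wx2[i];
  }
  // x_plus_delta = x * E.
  for (int i = 0; i < 3; ++i)
    for (int j = 0; j < 3; ++j) {
      double s = 0;
      for (int k = 0; k < 3; ++k) s += x[3 * i + k] * E[3 * k + j];
      x_plus_delta[3 * i + j] = s;
    }
}
```

With a zero delta the state is unchanged; a delta of (0, 0, pi/2) applied to the identity yields the expected 90-degree rotation about z.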
@RomaTeng I understand that. My question to you is: how does this work with the Plus operation for your local parameterization?
Now our solution works.
Hi @RomaTeng, how do you implement this Jacobian trick? Does the Plus operation need any special treatment? Thanks in advance, Jon
In the residual block, I provide
Hi @RomaTeng, in the LocalParameterization, Ceres gives us the ability to override the MultiplyByJacobian function, which appears to be how Ceres maps the globally parameterized Jacobians into the locally parameterized ones. By modifying this function, we should be able to skip that unnecessary multiplication. I have not tried this yet, but if it works, then only a little time is wasted reassigning variables, which is much better than multiplying by an identity matrix. I will be trying this out in the coming weeks.
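The column-copy shortcut suggested above can be sketched as follows. When the Plus Jacobian is [I; 0], the product global_matrix * [I; 0] is simply the first local_size columns of global_matrix, so the dense multiplication can be replaced by a copy. This is a standalone illustration of that arithmetic, not the actual Ceres `LocalParameterization::MultiplyByJacobian` override (whose real signature takes `x`, `num_rows`, `global_matrix`, `local_matrix`); the function name here is hypothetical.

```cpp
// Sketch: assuming the Plus Jacobian is [I; 0] (identity on top, zeros below),
// local_matrix = global_matrix * [I; 0] reduces to copying the first
// local_size columns of global_matrix. Matrices are row-major.
void MultiplyByIdentityTopJacobian(int num_rows, int global_size,
                                   int local_size,
                                   const double* global_matrix,  // num_rows x global_size
                                   double* local_matrix) {       // num_rows x local_size
  for (int r = 0; r < num_rows; ++r)
    for (int c = 0; c < local_size; ++c)
      local_matrix[r * local_size + c] = global_matrix[r * global_size + c];
}
```

Compared with a dense product against [I; 0], this does num_rows * local_size assignments instead of num_rows * local_size * global_size multiply-adds.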
Thanks. If your method works, could you let me know?
Hello,
I am trying Ceres for solving nonlinear least squares.
There is a small problem (though I am not sure):
For the cost function f(x), the residual has dimension n_f, the global size of x is n_xg,
and the local size of x is n_xl (n_xl << n_xg).
For efficiency, we are trying to use an analytical cost function and provide the Jacobians ourselves.
However, in Ceres we have to provide the n_f * n_xg Jacobian (in the cost function) and the n_xg * n_xl
Jacobian (in the local parameterization).
Our current solution is to fill the matrices with zeros:
we use [n_f * n_xl 0] as the Jacobian in the cost function and [I; 0] as the Jacobian in the local parameterization. From our experiment, this solution works.
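The zero-padding scheme described above can be checked numerically: if the cost function reports J_global = [J_local | 0] and the local parameterization reports the Plus Jacobian [I; 0], their product, which is what the solver effectively uses, is exactly J_local. A minimal sketch with illustrative sizes (n_f = 2, n_xl = 2, n_xg = 4; the numbers and function names are not from the thread):

```cpp
#include <cmath>

// Generic row-major matrix product: C (m x p) = A (m x n) * B (n x p).
void MatMul(int m, int n, int p, const double* A, const double* B, double* C) {
  for (int i = 0; i < m; ++i)
    for (int j = 0; j < p; ++j) {
      double s = 0;
      for (int k = 0; k < n; ++k) s += A[i * n + k] * B[k * p + j];
      C[i * p + j] = s;
    }
}

// Returns true if J_global * [I; 0] reproduces J_local for the example sizes
// n_f = 2, n_xl = 2, n_xg = 4 (illustrative values).
bool ZeroPaddingReproducesLocalJacobian() {
  const double J_local[4] = {1, 2,
                             3, 4};
  const double J_global[8] = {1, 2, 0, 0,   // [J_local | 0], 2 x 4
                              3, 4, 0, 0};
  const double PlusJac[8] = {1, 0,          // [I; 0], 4 x 2
                             0, 1,
                             0, 0,
                             0, 0};
  double effective[4];
  MatMul(2, 4, 2, J_global, PlusJac, effective);
  for (int i = 0; i < 4; ++i)
    if (std::fabs(effective[i] - J_local[i]) > 1e-12) return false;
  return true;
}
```

This confirms the padding is mathematically harmless; the cost is the wasted multiply-adds against the zero block, which is what the rest of the thread is trying to avoid.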
If we just want to directly provide the n_f * n_xl Jacobian (especially for the case n_xg >> n_xl), how should we do it?
Dear developers, I think you can understand my problem.
Regards,
Teng