Confusion about the algorithms #5
Hi @xuhao1, the function you should look at is:
This function is called from: https://github.com/CogRob/distributed-mapper/blob/master/distributed_mapper_core/cpp/scripts/runDistributedMapper.cpp#L265 As specified here:
Depending on the gamma value and the update type, the algorithm can be switched between Jacobi Over-Relaxation (JOR), Successive Over-Relaxation (SOR), Jacobi, and Gauss-Seidel. By default, Distributed Gauss-Seidel is used.
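To make the relationship between these variants concrete, here is a small hedged sketch (not the repo's actual API; `relaxedUpdate` is a hypothetical name) of the gamma-weighted update that distinguishes them:

```cpp
#include <cstddef>
#include <vector>

// Hypothetical sketch of the relaxed update
//   y_new = (1 - gamma) * y_old + gamma * y_candidate.
// With gamma = 1 and robots immediately using neighbors' freshly updated
// values within a sweep, this is (distributed) Gauss-Seidel; gamma != 1
// gives SOR. If every candidate is computed from the previous iterate
// only, gamma = 1 gives Jacobi and gamma != 1 gives JOR.
std::vector<double> relaxedUpdate(const std::vector<double>& yOld,
                                  const std::vector<double>& yCandidate,
                                  double gamma) {
  std::vector<double> yNew(yOld.size());
  for (std::size_t i = 0; i < yOld.size(); ++i)
    yNew[i] = (1.0 - gamma) * yOld[i] + gamma * yCandidate[i];
  return yNew;
}
```

So the choice of variant is really just the choice of gamma plus whether the candidate values come from the current sweep or the previous one.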
Hope this helps.
Hi @itzsid, thanks for your reply!
Equation 19 estimates either rotation or translation, depending on which stage of the two-stage approach is being optimized. Rotation optimization is called here:
and implemented here:
The equation inside the brackets corresponds to all the communication from the neighboring robots. Some of those robots have already been updated in the current iteration (y^(k+1)) and others have not (y^k). We take the latest estimate of all those robots, along with the measurement constraints given by H, to optimize each robot. So each robot's optimization, given updated constraints from neighboring robots, solves the full Equation 19 and not the summation separately. The slides here might help: https://itzsid.github.io/publications/web/icra16/presentation.pdf
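The mixing of y^(k+1) and y^k described above can be illustrated with a minimal scalar sketch (hypothetical code, not the repo's implementation; the real code works with pose blocks rather than scalars):

```cpp
#include <array>

// Scalar sketch of eq. (19): each "robot" a solves its own row
//   H_aa * y_a = g_a - sum_{b != a} H_ab * y_b
// using the latest available neighbor estimates. Robots updated earlier
// in the current sweep contribute y^(k+1); the rest contribute y^k.
std::array<double, 2> gaussSeidel(const std::array<std::array<double, 2>, 2>& H,
                                  const std::array<double, 2>& g,
                                  int sweeps) {
  std::array<double, 2> y{0.0, 0.0};
  for (int k = 0; k < sweeps; ++k) {
    // Robot 0 only has robot 1's value from the previous sweep (y^k).
    y[0] = (g[0] - H[0][1] * y[1]) / H[0][0];
    // Robot 1 uses robot 0's value just updated in this sweep (y^(k+1)).
    y[1] = (g[1] - H[1][0] * y[0]) / H[1][1];
  }
  return y;
}
```

Each per-robot solve uses the full right-hand side of Equation 19 (measurements plus all neighbor contributions), which is why the summation is never solved separately.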
@itzsid
@xuhao1, I did it mostly for the convenience of the factor graph framework (it's easier to use a GaussianFactorGraph than to solve the linear system myself). If you can invert H_{\alpha\alpha}, it can be cached and stored, since it won't change throughout the optimization. Only the estimates from the other robots (y_\beta) change as the optimization progresses. Although I haven't tested it, I think it should work.
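The caching idea suggested above can be sketched as follows (hypothetical code with scalar blocks; `CachedLocalSolver` is an illustrative name, not part of the repo):

```cpp
// H_aa is fixed over the optimization, so its inverse (here a scalar
// reciprocal; in general a cached factorization) can be computed once
// and reused while the neighbor estimate y_b keeps changing.
struct CachedLocalSolver {
  double invHaa;  // cached inverse of the (scalar) local block H_aa
  double Hab;     // coupling to the neighboring robot
  double ga;      // local right-hand side

  CachedLocalSolver(double Haa, double Hab_, double ga_)
      : invHaa(1.0 / Haa), Hab(Hab_), ga(ga_) {}

  // Solve H_aa * y_a = g_a - H_ab * y_b for the current neighbor estimate.
  double solve(double yb) const { return invHaa * (ga - Hab * yb); }
};
```

The per-iteration cost then reduces to a matrix-vector product against the cached inverse, rather than a fresh elimination each sweep.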
@itzsid Actually, I have tested directly using Equation (19) on a 4-DoF pose graph estimation in my implementation. It works when the initial error is not too big, but it still has convergence issues on large-scale problems (that is why I am reading your code and trying to figure out the difference).
Hello, the paper "Distributed Trajectory Estimation with Privacy and Communication Constraints: a Two-Stage Distributed Gauss-Seidel Approach" describes a two-stage distributed pose graph optimization based on Distributed Gauss-Seidel (or JOR/SOR); the core iteration of DGS (SOR/JOR) for solving the linear system is Equations (18) and (19) on page 8 of the paper.
However, what I found in the repo is that, at line
, a GaussianFactorGraph is used to solve the local linear system with variable elimination. Then the local poses are updated by
y^(k+1) = (1 - gamma) y^k + gamma y^(k+1)
with code
at line
My question is: why is this code equivalent to Equations (18)-(19)?
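For what it's worth, the equivalence the question asks about can be illustrated in a minimal scalar case (hypothetical sketch; `solveByNormalEquations` is an illustrative name, not a repo or GTSAM function): eliminating a Gaussian least-squares problem solves its normal equations, which for the local subproblem is exactly the H y = g solve of Equations (18)-(19).

```cpp
#include <cstddef>
#include <vector>

// For scalar factors ||a_i * y - b_i||^2, elimination of the Gaussian
// factor graph solves the normal equations
//   (sum a_i^2) * y = (sum a_i * b_i),  i.e.  H y = g,
// which matches the closed-form local solve in eq. (19).
double solveByNormalEquations(const std::vector<double>& a,
                              const std::vector<double>& b) {
  double H = 0.0, g = 0.0;
  for (std::size_t i = 0; i < a.size(); ++i) {
    H += a[i] * a[i];  // information (Hessian) term
    g += a[i] * b[i];  // right-hand side
  }
  return g / H;  // y = H^{-1} g
}
```

In other words, the GaussianFactorGraph elimination and the explicit linear solve are two routes to the same minimizer; the gamma update is then applied on top, as in the relaxation equation above.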