
Confusing about the algorithms #5

Closed
xuhao1 opened this issue May 5, 2021 · 6 comments

xuhao1 commented May 5, 2021

Hello, the paper "Distributed Trajectory Estimation with Privacy and Communication Constraints: a Two-Stage Distributed Gauss-Seidel Approach" describes a two-stage distributed pose graph optimization based on the Distributed Gauss-Seidel method (or JOR/SOR), whose core iteration for solving the linear system is given by equations (18) and (19) on page 8 of the paper.
However, what I found in the repo is that, at the line

  newLinearizedPoses_ = distGFG.optimize();

, a GaussianFactorGraph is used to solve the local linear system via variable elimination. The local poses are then updated by

y^(k+1) = (1 - gamma) * y^(k) + gamma * y_new^(k+1)

with the code

          linearizedPoses_.at(key) = (1-gamma_)*linearizedPoses_.at(key) + gamma_*newLinearizedPoses_.at(key);

My question is: why is this code equivalent to equations (18) and (19)?
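For concreteness, here is what that damped update amounts to as a standalone loop (a minimal sketch using plain Eigen containers, not code from the repo; all names are illustrative):

    // Damped (over-relaxation) update of the linearized poses.
    // gamma == 1 keeps only the new solution (Jacobi / Gauss-Seidel);
    // gamma != 1 blends old and new estimates (JOR / SOR).
    #include <Eigen/Dense>
    #include <map>

    using PoseVector = std::map<size_t, Eigen::VectorXd>;  // key -> linearized pose

    void overRelaxationUpdate(PoseVector& linearizedPoses,
                              const PoseVector& newLinearizedPoses,
                              double gamma) {
      for (auto& [key, pose] : linearizedPoses) {
        // y^(k+1) = (1 - gamma) * y^(k) + gamma * y_new
        pose = (1.0 - gamma) * pose + gamma * newLinearizedPoses.at(key);
      }
    }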


itzsid commented May 8, 2021

Hi @xuhao1,

The function you should look at is:

distributedOptimizer(std::vector< boost::shared_ptr<DistributedMapper> > distMappers,

This function is called from: https://github.com/CogRob/distributed-mapper/blob/master/distributed_mapper_core/cpp/scripts/runDistributedMapper.cpp#L265

As specified here:

if(distMappers[robot]->updateType_ == DistributedMapper::incUpdate){

Depending on the gamma value and the update type, the algorithm can be switched between Jacobi Over-relaxation, Successive Over-relaxation, Jacobi, and Gauss-Seidel. By default, Distributed Gauss-Seidel is used.

        /*  Distributed Jacobi: updateType_ = postUpdate, gamma = 1
         *  Gauss Seidel: updateType_ = incUpdate, gamma = 1
         *  Jacobi Overrelax: updateType_ = postUpdate, gamma != 1
         *  Succ Overrelax: updateType_ = incUpdate, gamma != 1
         */
Hope this helps.


xuhao1 commented May 10, 2021

Hi @itzsid, thanks for your reply!
I noticed these functions and understand how the gamma value and update type switch between Jacobi Over-relaxation, Successive Over-relaxation, Jacobi, and Gauss-Seidel. My question is where exactly the bracketed part of equation (19) is performed, i.e., where I can find the implementation of that summation.


itzsid commented May 10, 2021

Equation (19) is equivalent to estimating either rotation or translation, depending on which stage of the two-stage approach is being optimized. Rotation optimization is called here:

distMappers[robot]->estimateRotation(); // optimization

and implemented here:

DistributedMapper::estimateRotation(){

The term inside the bracket corresponds to all the communication from the neighboring robots. Some of those robots have already been updated in the current iteration (y^(k+1)) and others have not (y^k). We take the latest estimate of all those robots, along with the measurement constraints given by H, to optimize each robot. So each robot's optimization, given the updated constraints from its neighbors, solves the full equation (19); the summation is never computed separately.
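To make the equivalence concrete, this is the direct form of the per-robot step that the GaussianFactorGraph optimization effectively computes (a minimal Eigen-only illustration of the standard block update, not code from distributed-mapper; all names are placeholders):

    // One block update for robot alpha: the bracketed summation collects the
    // coupling terms from the neighbors' latest estimates, then the local
    // system is solved with that right-hand side.
    #include <Eigen/Dense>
    #include <map>

    Eigen::VectorXd blockUpdate(
        const Eigen::MatrixXd& H_aa,                  // H_{alpha alpha}
        const Eigen::VectorXd& g_a,                   // g_alpha
        const std::map<int, Eigen::MatrixXd>& H_ab,   // H_{alpha beta} per neighbor
        const std::map<int, Eigen::VectorXd>& y_b) {  // latest neighbor estimates
      // rhs = g_alpha - sum_beta H_{alpha beta} * y_beta   (the bracketed term)
      Eigen::VectorXd rhs = g_a;
      for (const auto& [beta, H] : H_ab) {
        rhs -= H * y_b.at(beta);
      }
      // y_alpha^{(k+1)} = H_{alpha alpha}^{-1} * rhs
      return H_aa.ldlt().solve(rhs);
    }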

The slides here might help: https://itzsid.github.io/publications/web/icra16/presentation.pdf


xuhao1 commented May 11, 2021

@itzsid
Thanks again for your reply! I watched your slides and understand the equivalence now.
Is the reason for using optimization, instead of directly computing the summation and dividing by the H_{\alpha\alpha} matrix, to avoid the case where H_{\alpha\alpha} is not invertible?


itzsid commented May 11, 2021

@xuhao1, I did it mostly for the convenience of the factor graph framework (it's easier to use GaussianFactorGraph than to solve the linear system myself). If you can invert H_{\alpha\alpha}, it can be cached and stored, since it won't change throughout the optimization. Only the estimates from the other robots (y_\beta) change as the optimization progresses. Although I haven't tested it, I think it should work.
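A minimal sketch of that caching idea (my own Eigen-only illustration, assuming H_{\alpha\alpha} stays fixed across iterations as described above): factor H_{\alpha\alpha} once and reuse the factorization every time the neighbor estimates y_\beta change.

    #include <Eigen/Dense>

    // Factor H_{alpha alpha} once; reuse the factorization for every new
    // right-hand side (g_alpha minus the neighbor coupling terms).
    class CachedLocalSolver {
     public:
      explicit CachedLocalSolver(const Eigen::MatrixXd& H_aa) : ldlt_(H_aa) {}

      Eigen::VectorXd solve(const Eigen::VectorXd& rhs) const {
        return ldlt_.solve(rhs);
      }

     private:
      Eigen::LDLT<Eigen::MatrixXd> ldlt_;
    };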


xuhao1 commented May 12, 2021

@itzsid Actually, I have tested directly using equation (19) on a 4-DoF pose graph estimation in my own implementation. It works when the initial error is not too big, but it still has some convergence issues on large-scale problems (which is why I am reading your code and trying to figure out the difference).

itzsid closed this as completed Jan 17, 2022