Integrate sensor probabilities across resampling iterations #149
Conversation
Force-pushed a5dcc7a to dd1eb21
@glpuga This is great! Thanks for the documentation fixes and minor refactors.
Force-pushed b67902f to 1311a4f
Ready for another review. I did not update the tolerances since I could not get the system_test to run. I opened a ticket about that, since the error seems to be in
});
// TODO(glpuga): Investigate why AMCL and QuickMCL use this formula for the weight.
// See https://github.com/ekumenlabs/beluga/issues/153
return pz * pz * pz;
This is not exactly what AMCL does. I think you need to cube the result of:
index < likelihood_field_.size() ? likelihood_field_[index] : unknown_space_occupancy_prob
And then accumulate the results by adding them, not multiplying them.
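The suggested correction can be sketched as follows. This is a hedged illustration of the accumulation described in the comment above, not Beluga's actual implementation; the names `likelihood_field`, `beam_indices`, and `unknown_space_occupancy_prob` are assumptions taken from the snippet under review.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Per-sample weight in the style of AMCL's likelihood field model:
// each beam's probability (pz) is cubed and *added* into the
// accumulator, which AMCL starts at 1.0 (see the discussion below).
double sample_weight(const std::vector<double>& likelihood_field,
                     const std::vector<std::size_t>& beam_indices,
                     double unknown_space_occupancy_prob) {
  double p = 1.0;  // AMCL's starting value; 0.0 would also seem natural.
  for (const auto index : beam_indices) {
    // Out-of-map beams fall back to the unknown-space probability.
    const double pz = index < likelihood_field.size()
                          ? likelihood_field[index]
                          : unknown_space_occupancy_prob;
    assert(pz >= 0.0 && pz <= 1.0);
    p += pz * pz * pz;  // cube first, then accumulate by addition
  }
  return p;
}
```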
Yes, you're right. My mistake. I'll fix it.
The longer you look at it, the weirder it becomes. The accumulation starts from 1.0, not from 0.0.
// Compute the sample weights
for (j = 0; j < set->sample_count; j++) {
// ...
p = 1.0;
// ...
for (i = 0; i < data->range_count; i += step) {
// ...
pz = 0.0;
// ...
pz += ...;
pz += ...;
assert(pz <= 1.0);
assert(pz >= 0.0);
p += pz * pz * pz;
}
sample->weight *= p;
total_weight += sample->weight;
}
Also, the Beam Model does the same. #99
The accumulation starts from 1.0, not from 0.0.
That looks like they started by multiplying and then at some point switched to adding and forgot to change the starting value...
But starting with a weight of 1 also ensures that every particle gets some weight, so whether that term is included in the sum makes a big difference (my assumption is that all the other terms of the sum are less than 1, which is why I consider the starting value of 1 significant).
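The point above can be shown with a toy example. The beam probabilities here are made up for illustration; the helper is hypothetical and only isolates the effect of the starting value.

```cpp
#include <vector>

// Toy illustration: accumulate cubed beam probabilities starting from a
// given value. With start = 1.0 every particle's per-update factor is
// floored above 1, so weights multiplied across resampling iterations
// cannot collapse to zero just because a few beams scored poorly.
double accumulate(const std::vector<double>& beam_probs, double start) {
  double p = start;
  for (const double pz : beam_probs) {
    p += pz * pz * pz;  // each term is < 1 when pz < 1
  }
  return p;
}
```

With two weak beams (pz = 0.1 each), starting at 1.0 yields roughly 1.002 while starting at 0.0 yields roughly 0.002, a factor of about 500 between the two particles' weights.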
Force-pushed 516a0ac to 82ccc15
LGTM!
@nahueespinosa I have not yet run system_test and checked if the tolerances need adjustments. Should I
|
@ivanpauno Could you run a benchmark on this branch? I have to rebuild the dockers to do it myself.

@glpuga I have run the system test locally and it consistently passes, so merging this as-is is also a good option in my opinion.
Ok! I'll wait for @ivanpauno, just in case.
Sure, I think I can do it by tomorrow.
LGTM!
Signed-off-by: Gerardo Puga <glpuga@ekumenlabs.com>
Force-pushed 82ccc15 to 953aaa6
Related to #57.