Grid accumulator race condition #45

Closed
wannesvanloock opened this issue Jan 6, 2020 · 4 comments

@wannesvanloock (Contributor)

I'm running into another issue with the parallel GridAccumulator in combination with the IndexAccumulatorProxy. Consider the example below:

#include <iostream>

#include <cilantro/core.hpp>

namespace cilantro {
template <typename ScalarT, ptrdiff_t EigenDim,
          typename GridPointScalarT = ptrdiff_t>
class IndexGridDownsampler
    : public GridAccumulator<ScalarT, EigenDim, IndexAccumulatorProxy<>,
                             GridPointScalarT> {
 public:
  EIGEN_MAKE_ALIGNED_OPERATOR_NEW

  IndexGridDownsampler(const ConstVectorSetMatrixMap<ScalarT, EigenDim> &points,
                       ScalarT bin_size, bool parallel = true)
      : GridAccumulator<ScalarT, EigenDim, IndexAccumulatorProxy<>,
                        GridPointScalarT>(
            points, bin_size, IndexAccumulatorProxy<>(), parallel) {}

  // For each bin containing at least min_points_in_bin points, emit the
  // average of the points in that bin.
  const IndexGridDownsampler &getDownsampledPoints(
      VectorSet<ScalarT, EigenDim> &ds_points,
      size_t min_points_in_bin = 1) const {
    ds_points.resize(this->data_map_.rows(), this->grid_lookup_table_.size());

    ScalarT scale;
    size_t ind = 0;
    for (size_t k = 0; k < this->bin_iterators_.size(); k++) {
      if (this->bin_iterators_[k]->second.indices.size() < min_points_in_bin) {
        continue;
      } else {
        scale = (ScalarT)(1.0) / this->bin_iterators_[k]->second.indices.size();
        for (auto i : this->bin_iterators_[k]->second.indices) {
          ds_points.col(ind) += scale * this->data_map_.col(i);
        }
        ind++;
      }
    }

    ds_points.conservativeResize(Eigen::NoChange, ind);
    return *this;
  }

  inline VectorSet<ScalarT, EigenDim> getDownsampledPoints(
      size_t min_points_in_bin = 1) const {
    VectorSet<ScalarT, EigenDim> ds_points;
    getDownsampledPoints(ds_points, min_points_in_bin);
    return ds_points;
  }
};
}  // namespace cilantro

int main(int argc, char ** argv) {
    cilantro::VectorSet3f points(3, 16);
    points << 0.1, 0.1, 0.1, 0.1, 0.9, 0.9, 0.9, 0.9, 1.1, 1.1, 1.1, 1.1, 1.9, 1.9, 1.9, 1.9,
              0.1, 0.9, 1.1, 1.9, 0.1, 0.9, 1.1, 1.9, 0.1, 0.9, 1.1, 1.9, 0.1, 0.9, 1.1, 1.9,
              0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0;

    cilantro::IndexGridDownsampler<float, 3> downsampler(points, 2. / 3, false);
    std::cout << "parallel=false" << std::endl;
    std::cout << downsampler.getDownsampledPoints() << std::endl;

    cilantro::IndexGridDownsampler<float, 3> downsampler_p(points, 2. / 3, true);
    std::cout << "parallel=true" << std::endl;
    std::cout << downsampler_p.getDownsampledPoints() << std::endl;
    return 0;
}

On my machine, this yields:

parallel=false
0.1 0.1 0.1   1   1   1 1.9 1.9 1.9
0.1   1 1.9 0.1   1 1.9 0.1   1 1.9
  0   0   0   0   0   0   0   0   0
parallel=true
0.1 0.2 0.2   2   2   2 3.8 3.8 3.8
0.1   2 3.8 0.2   2 3.8 0.2   2 3.8
  0   0   0   0   0   0   0   0   0

The serial result matches the expected bin averages (0.1, 1.0, and 1.9 along each axis), but the result of the parallel implementation is clearly incorrect. It seems to combine the wrong indices.

@kzampog (Owner) commented Jan 6, 2020

Hi! Happy new year! :D

It seems that the accumulation is done on an uninitialized matrix; adding ds_points.col(ind).setZero(); in the else clause seems to give consistent results. I believe there is a macro (EIGEN_INITIALIZE_MATRICES_BY_ZERO) that tells Eigen to initialize matrices to zero, but I haven't been using it.
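
Concretely, applied to the snippet above, the else clause would become:

      } else {
        scale = (ScalarT)(1.0) / this->bin_iterators_[k]->second.indices.size();
        ds_points.col(ind).setZero();  // zero the output column before accumulating
        for (auto i : this->bin_iterators_[k]->second.indices) {
          ds_points.col(ind) += scale * this->data_map_.col(i);
        }
        ind++;
      }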

I haven't extensively tested this on other data -- please let me know if there might still be something suspicious in there :D

@wannesvanloock (Contributor, Author)

Quick debugging! Thanks.
It's probably a good idea to set ds_points to zero in all implementations. I'll see if I can create a PR later today.
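
For concreteness, a minimal sketch of what I mean (assuming VectorSet exposes Eigen's setZero(rows, cols) overload, which the existing resize/col usage suggests):

    // Allocate and zero-initialize the output in one step, instead of
    // resize() followed by accumulation into uninitialized memory:
    ds_points.setZero(this->data_map_.rows(), this->grid_lookup_table_.size());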

@wannesvanloock (Contributor, Author)

And a happy new year to you too.

@wannesvanloock (Contributor, Author)

Calling setZero() globally on ds_points is dangerous when ds_points points to the same memory as the accumulator's data_map_, so I decided not to change the implementation.
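
For illustration, a hypothetical in-place call where an unconditional setZero() would corrupt the result (whether the buffers actually alias like this depends on how the caller wires them up):

    // Hypothetical usage: the input buffer is reused as the output buffer.
    cilantro::VectorSet3f pts(3, 16);
    // ... fill pts ...
    cilantro::IndexGridDownsampler<float, 3> ds(pts, 2.f / 3);
    // data_map_ maps pts' memory; zeroing ds_points (== pts) up front would
    // erase the source points before the accumulation loop reads them.
    ds.getDownsampledPoints(pts);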
