This repository has been archived by the owner on Nov 17, 2023. It is now read-only.

fix a bug in sparse batch loader when batch size is extremely large #8922

Closed
wants to merge 3 commits

Conversation

eric-haibin-lin
Member

Description

The previous implementation of the sparse batch loader waited until the number of data instances reached batch_size before allocating the copy buffer (unlike the dense batch loader, it cannot know the buffer size in advance, since the number of non-zeros varies per instance). However, this delayed allocation is buggy when the batch size is extremely large, because the DataInst returned by base_ is only a reference to the parser's data: by the time the copy happens, the referenced data may already have been overwritten by the parser.
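To illustrate the failure mode, here is a minimal, self-contained sketch (not MXNet code; ToyParser and every name in it are hypothetical) of what goes wrong when references into a parser's reusable buffer are cached instead of being copied immediately:

#include <vector>

// Toy parser that reuses one internal buffer for every record it yields,
// mimicking how the real parser hands out views rather than copies.
struct ToyParser {
  std::vector<float> buf;
  const std::vector<float>& Next(float v) {
    buf.assign(4, v);   // overwrite the shared buffer in place
    return buf;         // the caller receives a reference, not a copy
  }
};

int main() {
  ToyParser parser;
  std::vector<const std::vector<float>*> cached;  // deferred-copy pattern (buggy)
  for (int i = 0; i < 3; ++i) {
    cached.push_back(&parser.Next(static_cast<float>(i)));
  }
  // All three cached pointers alias the same storage, so every "instance" now
  // holds the values of the last record: the stale-data bug described above.
  return 0;
}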

This PR modifies the sparse batch loader so that it works almost the same way as the dense batch loader, except that:

  • the data buffer is allocated based on an estimate and grown when its capacity turns out to be insufficient (see the sketch after this list)
  • before the output is returned, the tensor shapes are adjusted to reflect the actual number of elements stored in each tensor.
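
Below is a minimal sketch of that estimate-and-grow copy step, assuming a plain float value buffer; AppendValues and its parameters are hypothetical and not the PR's actual code:

#include <algorithm>
#include <cstddef>
#include <cstring>
#include <vector>

// Copy n values into a buffer sized from an estimate: grow the buffer when the
// estimate is too small, and track how many elements were actually written so
// the reported tensor shape can be corrected before the batch is returned.
void AppendValues(std::vector<float>* buffer, std::size_t* num_written,
                  const float* src, std::size_t n) {
  if (*num_written + n > buffer->size()) {
    buffer->resize(std::max(buffer->size() * 2, *num_written + n));
  }
  std::memcpy(buffer->data() + *num_written, src, n * sizeof(float));
  *num_written += n;  // the final shape uses *num_written, not buffer->size()
}

The batch loader would call something like this once per value/index array of each instance, then shrink each output tensor's shape to the number of elements actually written.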

@cjolivier01 @reminisce @anirudh2290 @ZiyueHuang

Checklist

Essentials

  • Passed code style checking (make lint)
  • Changes are complete (i.e. I finished coding on this PR)
  • All changes have test coverage
  • For user-facing API changes, API doc string has been updated. For new C++ functions in header files, their functionalities and arguments are well-documented.
  • To my best knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change

Changes

  • Feature1, tests, (and when applicable, API doc)
  • Feature2, tests, (and when applicable, API doc)

Comments

  • If this change is a backward incompatible change, why must this change be made.
  • Interesting edge cases to note here

// tensor buffer sizes
std::vector<size_t> buff_sizes(total_size, 0);
dtypes_.resize(total_size);
out_.data.resize(total_size);
Member

How big can total_size get?

Member Author

at most 6

int64_t unit_size = 0;
out_.inst_index[top] = d.index;
for (size_t i = 0; i < d.data.size(); ++i) {
  if (!IsIndPtr(i)) {
Member

How big can d.data.size() get?

Member Author

usually at most 6

@eric-haibin-lin
Member Author

Closing it for now until the test is fixed
