
Add local cache of double buffer reader #9535

Merged: 9 commits into PaddlePaddle:develop, Apr 2, 2018

Conversation

@reyoung (Collaborator) commented Mar 30, 2018

No description provided.

@reyoung requested a review from JiayiFeng on March 30, 2018 at 09:46
@@ -149,21 +146,30 @@ void DoubleBufferReader::ReInit() {
void DoubleBufferReader::PrefetchThreadFunc() {
VLOG(5) << "A new prefetch thread starts.";
size_t gpu_ctx_offset = 0;
std::vector<std::vector<framework::LoDTensor>> cpu_tensor_cache(4);

Collaborator commented:

Why is the size of the outer vector 4? It seems empirical. Should we have

const int kEmpiricalCacheSize = 4;
std::vector<std::vector<framework::LoDTensor>> gpu_tensor_cache(kEmpiricalCacheSize);
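
For illustration only, here is a minimal sketch of how a named constant might replace the repeated literal 4 in both caches and in the slot index; the constant name and loop body below are assumptions, not the code that was merged:

// Hypothetical sketch, not the merged implementation.
constexpr size_t kCacheSize = 4;  // empirically chosen number of in-flight batches

std::vector<std::vector<framework::LoDTensor>> cpu_tensor_cache(kCacheSize);
std::vector<std::vector<framework::LoDTensor>> gpu_tensor_cache(kCacheSize);

size_t tensor_cache_id = 0;
while (reader_->HasNext()) {
  tensor_cache_id = (tensor_cache_id + 1) % kCacheSize;  // reuse slots in a ring
  auto& cpu_batch = cpu_tensor_cache[tensor_cache_id];
  auto& gpu_batch = gpu_tensor_cache[tensor_cache_id];
  // ... read into cpu_batch, then copy it into gpu_batch asynchronously ...
}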

while (reader_->HasNext()) {
Item batch;
reader_->ReadNext(&batch.payloads_);
if (platform::is_gpu_place(place_)) {
std::vector<framework::LoDTensor> gpu_batch;
tensor_cache_id %= 4;

Collaborator commented:

It seems that we need 4 here because of the 4 above.

tensor_cache_id %= 4;
auto& gpu_batch = gpu_tensor_cache[tensor_cache_id];
auto& cpu_batch = cpu_tensor_cache[tensor_cache_id];
cpu_batch = batch.payloads_;

Collaborator commented:

I am a little lost here -- it seems that L159 and L160 can be merged into a single line:

auto& cpu_batch = batch.payloads_;

Am I wrong?
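
One possible reading, sketched here as an assumption rather than as a reply from the PR authors: the assignment copies the batch into a long-lived cache slot so the host tensors stay valid while the asynchronous copy to the GPU is still in flight, which a reference bound directly to batch.payloads_ would not guarantee. A minimal illustration with hypothetical names:

// Hypothetical illustration of the lifetime difference; not the merged code.
while (reader_->HasNext()) {
  Item batch;
  reader_->ReadNext(&batch.payloads_);

  // (a) Reference only: cpu_batch would alias a local that is reused or
  //     destroyed at the end of this iteration.
  // auto& cpu_batch = batch.payloads_;

  // (b) Copy into a cache slot: the data survives this iteration and is only
  //     overwritten when the slot is reused kCacheSize iterations later.
  auto& cpu_batch = cpu_tensor_cache[tensor_cache_id];
  cpu_batch = batch.payloads_;
}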

@JiayiFeng (Collaborator) commented Mar 31, 2018

I have just updated this PR and did some code cleanup. Maybe you could take a look at it. Thanks! @wangkuiyi

@@ -123,70 +142,70 @@ class CreateDoubleBufferReaderOpMaker : public DecoratedReaderMakerBase {
}
};

bool DoubleBufferReader::HasNext() const {
while (!channel_->IsClosed() && !channel_->CanReceive()) {

Contributor commented:

Maybe it would be better to use a semaphore here, or at least add a TODO.
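
As an illustration of the suggestion (not the framework::Channel API, whose interface is not shown in this diff), a poll loop like the one above can usually be replaced by blocking on a condition variable or semaphore. A minimal self-contained sketch with hypothetical names:

// Hypothetical sketch of waiting without spinning; every name here is assumed.
#include <condition_variable>
#include <mutex>
#include <queue>

struct BoundedQueue {
  std::mutex mu;
  std::condition_variable cv;
  std::queue<int> items;
  bool closed = false;

  void Push(int v) {
    { std::lock_guard<std::mutex> lock(mu); items.push(v); }
    cv.notify_one();
  }

  void Close() {
    { std::lock_guard<std::mutex> lock(mu); closed = true; }
    cv.notify_all();
  }

  // Blocks until an item is available or the queue is closed, instead of
  // repeatedly polling a CanReceive()-style predicate.
  bool WaitForItem() {
    std::unique_lock<std::mutex> lock(mu);
    cv.wait(lock, [this] { return closed || !items.empty(); });
    return !items.empty();
  }
};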

@JiayiFeng merged commit 899827f into PaddlePaddle:develop on Apr 2, 2018
@JiayiFeng deleted the feature/fix_double_buffer branch on April 2, 2018 at 10:55

4 participants