Feature/rnn to array to lod tensor #5411

Merged: QiJune merged 54 commits into PaddlePaddle:develop from reyoung:feature/rnn_to_array_to_lod_tensor on Nov 8, 2017.
Commits (54, changes from all commits shown)
eaa0e64 Add LoDRankTable (reyoung)
1d843f8 Add skeleton for array_to_lod_tensor and lod_tensor_to_array (reyoung)
955faa5 Add VarType::LoDTensorArray (reyoung)
7d1c63b Add PyBind of LoDTensorArray (reyoung)
c1a091d Add InferVarType (reyoung)
6d0f0e2 Merge branch 'feature/lod_rank_table' into feature/rnn_to_array_to_lo… (reyoung)
a204207 Merge branch 'develop' of github.com:baidu/Paddle into feature/rnn_to… (reyoung)
3df735f Add first unittest (reyoung)
c5ff3b5 Add ut (reyoung)
0b043dc Add unittest (reyoung)
48a207e Add unittest (reyoung)
ebbde26 Add unittests (reyoung)
ad254d1 update (JiayiFeng)
4468702 init (QiJune)
d1c4886 Merge branch 'feature/rnn_to_array_to_lod_tensor' of https://github.c… (JiayiFeng)
14a2ecd Merge branch 'feature/rnn_to_array_to_lod_tensor' of https://github.c… (QiJune)
d297aa4 add infershape for lod_tensor_to_array_op (QiJune)
3f1ffc0 merge baidu/develop (QiJune)
b962258 Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into… (JiayiFeng)
f9c79f0 compelete array_to_lod_tensor_op (JiayiFeng)
6169cf1 copy data (QiJune)
1fa5abc Merge branch 'feature/rnn_to_array_to_lod_tensor' of https://github.c… (JiayiFeng)
38f36e9 clean code (QiJune)
21ca973 Merge branch 'feature/rnn_to_array_to_lod_tensor' of https://github.c… (QiJune)
eb4f9b3 clean code (QiJune)
9bccc4e Fix unittest data (reyoung)
4564c66 fix bugs (JiayiFeng)
1082af0 fix compile error (QiJune)
6cbc5ce Merge branch 'feature/rnn_to_array_to_lod_tensor' of https://github.c… (QiJune)
7b199ae Merge branch 'feature/rnn_to_array_to_lod_tensor' of https://github.c… (JiayiFeng)
52bd5bc Merge branch 'develop' of github.com:baidu/Paddle into feature/rnn_to… (reyoung)
4d0bac5 Merge branch 'feature/rnn_to_array_to_lod_tensor' of github.com:reyou… (reyoung)
e3709d9 Refine TensorToArrayOp (reyoung)
fd41612 Merge branch 'feature/rnn_to_array_to_lod_tensor' of https://github.c… (JiayiFeng)
412d4e9 refactor array_to_lod_tensor (JiayiFeng)
f48dd7f Unittest (reyoung)
18fc054 fix bugs (JiayiFeng)
ad6b1a0 Fix unittest (reyoung)
abc1e2b Fix unittest (reyoung)
34ce1fd debug (JiayiFeng)
b141311 Debug (JiayiFeng)
42b1de0 Fix unittest (reyoung)
c60c697 Merge branch 'feature/rnn_to_array_to_lod_tensor' of github.com:reyou… (reyoung)
9ae7184 clean code (QiJune)
27afda0 refactor
11f5642 use ostream
700d9e1 update test
aaba078 Merge remote-tracking branch 'upstream/develop' into lod_tensor_array
febcac3 fix gpu build error (QiJune)
51dd1dd make gpu test pass
f548e6e Merge remote-tracking branch 'pr/5411' into 5411
7c62017 Merge pull request #7 from tonyyang-svail/rnn_to_array_to_lod_tensor (reyoung)
be1be76 Merge remote-tracking branch 'baidu/develop' into feature/rnn_to_arra… (QiJune)
8f1c9bc Merge branch 'feature/rnn_to_array_to_lod_tensor' of https://github.c… (QiJune)
@@ -71,7 +71,7 @@ struct DDim {

   DDim operator*(DDim d) const;

-  int64_t size() const;
+  int size() const;
 };

 /**
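As a minimal sketch of the semantics implied by this hunk (a hypothetical `MiniDDim`, not the real Paddle class): `DDim::size()` reports the number of dimensions (the rank), a small count, which is why a plain `int` return type suffices.

```cpp
// Hypothetical stand-in for paddle::framework::DDim, illustrating only that
// size() returns the rank (dimension count), not the number of elements.
#include <cassert>
#include <initializer_list>
#include <vector>

struct MiniDDim {
  std::vector<long long> dims;  // extent of each dimension
  MiniDDim(std::initializer_list<long long> d) : dims(d) {}
  // Rank fits comfortably in int even though individual extents may be large.
  int size() const { return static_cast<int>(dims.size()); }
};
```

Under this reading, a `{2, 3, 4}` shape has `size() == 3` regardless of how many elements the tensor holds.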
@@ -45,7 +45,8 @@ void VarDescBind::SetLoDLevel(int32_t lod_level) {
       desc_.mutable_tensor_array()->set_lod_level(lod_level);
       break;
     default:
-      PADDLE_THROW("Tensor type=%d does not support LoDLevel", desc_.type());
+      PADDLE_THROW("Tensor type=%d does not support LoDLevel",
+                   desc_.tensor_array().lod_level());
   }
 }

@@ -56,7 +57,8 @@ int32_t VarDescBind::GetLodLevel() const {
     case VarDesc::LOD_TENSOR_ARRAY:
       return desc_.tensor_array().lod_level();
     default:
-      PADDLE_THROW("Tensor type=%d does not support LoDLevel", desc_.type());
+      PADDLE_THROW("Tensor type=%d does not support LoDLevel",
+                   desc_.tensor_array().lod_level());
   }
 }

Review comment on the second hunk: why substitute desc_.tensor_array().lod_level() into type=%d?
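A contrived sketch of the point the reviewer raises (hypothetical names, not the Paddle API): the format string still says "type=%d", but the changed code passes a lod_level as the argument, so the thrown message reports the wrong quantity. `FormatLoDLevelError` stands in for the PADDLE_THROW call.

```cpp
// Demonstrates the format-string/argument mismatch under review: whatever int
// is passed gets printed under the "type=" label.
#include <cassert>
#include <cstdio>
#include <string>

enum VarType { LOD_TENSOR = 0, LOD_TENSOR_ARRAY = 1, FETCH_LIST = 2 };

std::string FormatLoDLevelError(int arg) {
  char buf[64];
  std::snprintf(buf, sizeof(buf),
                "Tensor type=%d does not support LoDLevel", arg);
  return std::string(buf);
}
```

With the old argument (the var type), `FormatLoDLevelError(FETCH_LIST)` names the offending type; with the new argument, a lod_level of, say, 7 yields the misleading "Tensor type=7 …".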
@@ -0,0 +1,152 @@ (new file)
/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. */
#include <numeric>
#include "paddle/framework/lod_rank_table.h"
#include "paddle/framework/lod_tensor_array.h"
#include "paddle/framework/op_registry.h"
#include "paddle/memory/memcpy.h"

namespace paddle {
namespace operators {

using LoD = framework::LoD;

class ArrayToLoDTensorOp : public framework::OperatorBase {
 public:
  ArrayToLoDTensorOp(const std::string &type,
                     const framework::VariableNameMap &inputs,
                     const framework::VariableNameMap &outputs,
                     const framework::AttributeMap &attrs)
      : OperatorBase(type, inputs, outputs, attrs) {}
  void Run(const framework::Scope &scope,
           const platform::DeviceContext &dev_ctx) const override {
    auto &x = scope.FindVar(Input("X"))->Get<framework::LoDTensorArray>();
    auto &rank_table =
        scope.FindVar(Input("RankTable"))->Get<framework::LoDRankTable>();
    auto *out =
        scope.FindVar(Output("Out"))->GetMutable<framework::LoDTensor>();

    // Check dims, place and data type of input's elements and infer output's
    // dim
    PADDLE_ENFORCE(!x.empty(), "There's no element in the input array.");
    int rank = x[0].dims().size();
    platform::Place place = x[0].place();
    std::type_index data_type = x[0].type();
    framework::DDim ins_dims = framework::slice_ddim(x[0].dims(), 1, rank);
    int64_t batch_size = x[0].dims()[0];
    for (size_t i = 1; i < x.size(); ++i) {
      PADDLE_ENFORCE_EQ(framework::slice_ddim(x[i].dims(), 1, rank), ins_dims,
                        "The dimension of the %zu'th element in LoDTensorArray "
                        "differs from previous ones.",
                        i);
      PADDLE_ENFORCE(platform::places_are_same_class(x[i].place(), place),
                     "The place class of the %zu'th element in LoDTensorArray "
                     "differs from previous ones.",
                     i);
      PADDLE_ENFORCE(x[i].type() == data_type,
                     "The data type of the %zu'th element in LoDTensorArray "
                     "differs from previous ones.",
                     i);
      batch_size += x[i].dims()[0];
    }
    auto ins_dim_vec = framework::vectorize(ins_dims);
    ins_dim_vec.insert(ins_dim_vec.begin(), batch_size);
    framework::DDim out_dims = framework::make_ddim(ins_dim_vec);
    out->Resize(out_dims);
    out->mutable_data(place, data_type);

    auto &table_items = rank_table.items();
    std::vector<size_t> table_item_idx(table_items.size());
    // table_item_idx = range(table_items.size())
    std::iota(table_item_idx.begin(), table_item_idx.end(), 0);
    std::sort(table_item_idx.begin(), table_item_idx.end(),
              [&](size_t a, size_t b) {
                return table_items[a].index < table_items[b].index;
              });

    // Build LoDTensor `out`
    framework::LoD *out_lod = out->mutable_lod();
    out_lod->clear();
    size_t out_offset = 0;
    auto prefix_lod = rank_table.coarse_lod();
    prefix_lod.emplace_back();
    auto &cur_level_lod = prefix_lod.back();
    cur_level_lod.push_back(0);
    for (size_t idx : table_item_idx) {
      cur_level_lod.push_back(cur_level_lod.back() + table_items[idx].length);
      for (size_t x_idx = 0; x_idx < table_items[idx].length; ++x_idx) {
        auto lod_and_offset = framework::GetSubLoDAndAbsoluteOffset(
            x[x_idx].lod(), idx, idx + 1, 0);

        auto &lod_length = lod_and_offset.first;
        framework::AppendLoD(out_lod, lod_length);

        size_t start_offset = lod_and_offset.second.first;
        size_t end_offset = lod_and_offset.second.second;
        VLOG(10) << "idx=" << idx << " x_idx=" << x_idx << " [" << start_offset
                 << ", " << end_offset << "]";
        // Copy data
        PADDLE_ENFORCE_GE(end_offset, start_offset);
        size_t len = end_offset - start_offset;
        if (len == 0) {
          continue;
        }
        out->Slice(out_offset, out_offset + len)
            .CopyFrom(x[x_idx].Slice(start_offset, end_offset), place, dev_ctx);
        out_offset += len;
      }
    }
    out_lod->insert(out_lod->begin(), prefix_lod.begin(), prefix_lod.end());
  }
};

class ArrayToLoDTensorOpProtoMaker : public framework::OpProtoAndCheckerMaker {
 public:
  ArrayToLoDTensorOpProtoMaker(framework::OpProto *proto,
                               framework::OpAttrChecker *op_checker)
      : OpProtoAndCheckerMaker(proto, op_checker) {
    AddInput("X",
             "(std::vector<LoDTensor>) A vector of tensors that is going to "
             "be cast to a big LoDTensor.");
    AddInput("RankTable",
             "(LoDRankTable) RankTable provides the coarse lod information to "
             "build the output LoDTensor. See "
             "'paddle/framework/lod_rank_table.h' for more details.");
    AddOutput("Out", "(LoDTensor) The LoDTensor formed by the input tensor array.");
    AddComment(
        R"DOC(This Op builds a big LoDTensor from a std::vector<LoDTensor>
and a LoDRankTable. It is supposed to be used to get a dynamic RNN's
outputs back into a normal LoDTensor. The std::vector<LoDTensor>
would be the output of the RNN Op and the LoDRankTable would be built
from the RNN's input.)DOC");
  }
};

class ArrayToLoDTensorInferShape : public framework::InferShapeBase {
 public:
  void operator()(framework::InferShapeContext *context) const override {
    PADDLE_ENFORCE(context->HasInput("X"),
                   "ArrayToLoDTensorOp must have input X.");
    PADDLE_ENFORCE(context->HasInput("RankTable"),
                   "ArrayToLoDTensorOp must have input RankTable.");
  }
};

}  // namespace operators
}  // namespace paddle

namespace ops = paddle::operators;
REGISTER_OPERATOR(array_to_lod_tensor, ops::ArrayToLoDTensorOp,
                  ops::ArrayToLoDTensorOpProtoMaker,
                  ops::ArrayToLoDTensorInferShape);