CPU argument input #1423
Conversation
!build
* Change type of existing support operators
* Remove support device from tests
* Review fixes

Author: Rafal Banas <Banas.Rafal97@gmail.com>
Signed-off-by: Rafal Banas <Banas.Rafal97@gmail.com>
… ArgumentInputs of GPU operators. Signed-off-by: Michal Zientkiewicz <michalz@nvidia.com>
Signed-off-by: Michal Zientkiewicz <michalz@nvidia.com>
- Update TensorVectors in ArgumentWorkspace when created from TensorLists.
- Make TensorVector `UpdateViews` a public function.

Signed-off-by: Michal Zientkiewicz <michalz@nvidia.com>
CI MESSAGE: [961040]: BUILD FAILED
CI MESSAGE: [961155]: BUILD STARTED
Signed-off-by: Michal Zientkiewicz <michalz@nvidia.com>
dali/pipeline/data/tensor_test.cc (Outdated)

@@ -368,11 +368,11 @@ TYPED_TEST(TensorTest, TestShareData) {
 }

 TYPED_TEST(TensorTest, TestCopyToTensorList) {
-  std::vector<Tensor<TypeParam>> tensors(16);
+  TensorVector<CPUBackend> tensors(16);
Why change from the parametrized backend to only CPUBackend here?
Also, since you changed the std::vector to TensorVector, it would be good to test both modes of TensorVector: contiguous (backed by a TensorList) and noncontiguous (separate Tensors).
Reverting to TypeParam.
Testing the contiguous copy seems a bit excessive when there's no special code path for that case. Once one is added, it will be useful, though.
…ces to "support" device. Restore backend parameter in TestCopyToTensorList. Signed-off-by: Michal Zientkiewicz <michalz@nvidia.com>
CI MESSAGE: [961194]: BUILD STARTED
Signed-off-by: Michal Zientkiewicz <michalz@nvidia.com>
CI MESSAGE: [961232]: BUILD STARTED
bool is_valid_shape = shape.tensor_shape(0) == TensorShape<1>{batch_size_};

DALI_ENFORCE(is_valid_shape,
             make_string("`", argument_name, "` must be a 1xN or Nx1 (N = ", batch_size_,
Nitpick: make_string adds spaces between its arguments automatically, so you will get `(N =  10)` etc.
If that's the only thing, then I'll do it in another PR.
auto tensor = queue[idxs[OpType::SUPPORT]];
ws.AddArgumentInput(tensor, arg_pair.first);
auto &parent_node = graph.Node(graph.Tensor(tid).producer.node);
auto parent_op_type = parent_node.op_type;
Suggested change:
- auto parent_op_type = parent_node.op_type;
+ auto parent_op_idx = idxs[parent_node.op_type];
"Argument Inputs must be stored in CPU memory");

auto add_arg_input = [&](auto &queue) {
  auto tensor = queue[idxs[parent_op_type]];
Suggested change:
- auto tensor = queue[idxs[parent_op_type]];
+ auto tensor = queue[parent_op_idx];
}

protected:
struct ArgumentInputDesc {
  shared_ptr<TensorVector<CPUBackend>> tvec;
  bool should_update = false;
Can you add a comment describing what should be updated, and why?
CI MESSAGE: [961232]: BUILD PASSED
* Make input arguments accept TensorVector
* Change type of existing support operators
* Remove support device from tests
* Remove SupportWorkspace and SupportBackend. Insert MakeContiguous for ArgumentInputs of GPU operators.
* Make TensorVector track changes in underlying TensorList.
* Update TensorVectors in ArgumentWorkspace when created from TensorLists.
* Make TensorVector `UpdateViews` a public function.
* Deprecate "support" device.

Co-authored-by: Rafal Banas <Banas.Rafal97@gmail.com>
Signed-off-by: Rafal Banas <Banas.Rafal97@gmail.com>
Signed-off-by: Michal Zientkiewicz <michalz@nvidia.com>
Why do we need this PR?
To make DALI simpler and more flexible, the current Support operators can be reworked into regular CPU operators. To make this work, argument inputs had to be reworked to accept CPU operator outputs.
What happened in this PR?
Argument inputs are now a `TensorVector`, not a `TensorList`. This change was made to be consistent with the output type of CPU operators. The existing support operators (`CoinFlip`, `Uniform`) were changed to CPU operators.
There will be a follow-up PR that removes any notion of support operators from DALI code and docs.
JIRA TASK: [DALI-687]