
Add Shapes operator returning sample shapes. #1223

Merged
merged 3 commits into NVIDIA:master from ShapesOp, Sep 4, 2019

Conversation


@mzient commented Aug 30, 2019

Signed-off-by: Michal Zientkiewicz michalz@nvidia.com

Why do we need this PR?

Pick one

  • It adds an operator that will be required to build meaningful warp matrices

What happened in this PR?

  • Explain the solution to the problem or the new feature added.
    • Added a Shapes operator that returns the shapes of the input batch as a list of 1D tensors, one per sample, each containing that sample's extents.
  • Was this PR tested? How?
    • Ran in Jupyter.
  • Were docs and examples updated, if necessary?
    • No

JIRA TASK: [DALI-XXXX]


mzient commented Aug 30, 2019

!build

@dali-automaton

CI MESSAGE: [880753]: BUILD STARTED

@dali-automaton

CI MESSAGE: [880753]: BUILD FAILED

@dali-automaton

CI MESSAGE: [880753]: BUILD PASSED

@awolant left a comment

Maybe add some tests for this?


mzient commented Sep 2, 2019

!build

@mzient requested a review from @awolant, September 2, 2019 13:58

mzient commented Sep 2, 2019

> Maybe add some tests for this?

Done

@dali-automaton

CI MESSAGE: [882689]: BUILD STARTED

output_desc.resize(1);
output_desc[0].type = TypeTable::GetTypeInfo(output_type_);
decltype(auto) shape = GetInputShape(ws);
output_desc[0].shape = ShapeShape(shape);;
Contributor


Maybe one less ; ?

Contributor Author


I guess when CI is in better shape...

assert(out.shape().num_samples() == shape.num_samples());
for (int i = 0; i < shape.num_samples(); i++) {
type *data = out.mutable_tensor<type>(i);
auto tshape = shape.tensor_shape_span(i);
Contributor


Maybe t_shape or tensor_shape for a bit more clarity?

@dali-automaton

CI MESSAGE: [882689]: BUILD FAILED

@dali-automaton

CI MESSAGE: [882689]: BUILD PASSED

}
}

static kernels::TensorListShape<1> ShapeShape(const kernels::TensorListShape<> &shape) {

auto &output_tensor = output[i];
auto *dest = output_tensor.raw_mutable_data();
auto *src = tmp_.raw_mutable_tensor(i);
std::memcpy(dest, src, output_tensor.nbytes());
Contributor


Why do the batch_size memcpys instead of writing directly to output in case of CPU op?

@mzient commented Sep 4, 2019

Because TensorVector is not really interchangeable with TensorList and I didn't want to write the conversion twice.

void ConvertShape(TensorList<CPUBackend> &out, const kernels::TensorListShape<> &shape) {
assert(out.shape().num_samples() == shape.num_samples());
for (int i = 0; i < shape.num_samples(); i++) {
type *data = out.mutable_tensor<type>(i);
Contributor


In case of CPU you will do the allocation at this point implicitly.

Contributor Author


No?

Contributor Author


Irrelevant now.

dali/pipeline/operators/geometric/shapes.h
Signed-off-by: Michal Zientkiewicz <michalz@nvidia.com>
Signed-off-by: Michal Zientkiewicz <michalz@nvidia.com>
Signed-off-by: Michal Zientkiewicz <michalz@nvidia.com>

mzient commented Sep 4, 2019

!build

@dali-automaton

CI MESSAGE: [885167]: BUILD STARTED

@dali-automaton

CI MESSAGE: [885167]: BUILD PASSED

@mzient merged commit 76891f2 into NVIDIA:master, Sep 4, 2019
@mzient deleted the ShapesOp branch, September 4, 2019 12:44