Add Shapes operator returning sample shapes. #1223
Conversation
!build
CI MESSAGE: [880753]: BUILD STARTED
CI MESSAGE: [880753]: BUILD FAILED
CI MESSAGE: [880753]: BUILD PASSED
Maybe add some tests for this?
!build
Done
CI MESSAGE: [882689]: BUILD STARTED
```cpp
output_desc.resize(1);
output_desc[0].type = TypeTable::GetTypeInfo(output_type_);
decltype(auto) shape = GetInputShape(ws);
output_desc[0].shape = ShapeShape(shape);;
```
Maybe one less `;`?
I guess when CI is in better shape...
```cpp
assert(out.shape().num_samples() == shape.num_samples());
for (int i = 0; i < shape.num_samples(); i++) {
  type *data = out.mutable_tensor<type>(i);
  auto tshape = shape.tensor_shape_span(i);
```
Maybe `t_shape` or `tensor_shape` for a bit more clarity?
CI MESSAGE: [882689]: BUILD FAILED
CI MESSAGE: [882689]: BUILD PASSED
```cpp
static kernels::TensorListShape<1> ShapeShape(const kernels::TensorListShape<> &shape) {
```
```cpp
auto &output_tensor = output[i];
auto *dest = output_tensor.raw_mutable_data();
auto *src = tmp_.raw_mutable_tensor(i);
std::memcpy(dest, src, output_tensor.nbytes());
```
Why do `batch_size` memcpys instead of writing directly to the output in the case of a CPU op?
Because TensorVector is not really interchangeable with TensorList, and I didn't want to write the conversion twice.
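The conversion being discussed boils down to one raw copy per sample from a staging batch into the output batch. A stand-alone sketch of that idea (plain per-sample byte buffers standing in for `TensorVector`/`TensorList`; `Sample` and `CopyBatch` are hypothetical names):

```cpp
#include <cstring>
#include <vector>

// Hypothetical per-sample buffer standing in for one tensor in a batch.
struct Sample { std::vector<char> bytes; };

// Copy each sample's payload from a staging batch into the output batch,
// mirroring the batch_size memcpys discussed above. Assumes `out` is
// already resized and each out[i].bytes has its final size.
void CopyBatch(std::vector<Sample> &out, const std::vector<Sample> &tmp) {
  for (size_t i = 0; i < out.size(); i++)
    std::memcpy(out[i].bytes.data(), tmp[i].bytes.data(), out[i].bytes.size());
}
```

Writing into a temporary and copying once lets the same fill code serve both output container types, at the cost of `batch_size` small memcpys.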
```cpp
void ConvertShape(TensorList<CPUBackend> &out, const kernels::TensorListShape<> &shape) {
  assert(out.shape().num_samples() == shape.num_samples());
  for (int i = 0; i < shape.num_samples(); i++) {
    type *data = out.mutable_tensor<type>(i);
```
In the case of the CPU backend, this call will implicitly perform the allocation at this point.
No?
Irrelevant now.
Signed-off-by: Michal Zientkiewicz <michalz@nvidia.com>
!build
CI MESSAGE: [885167]: BUILD STARTED
CI MESSAGE: [885167]: BUILD PASSED