Add Pad operator #1180
Conversation
Force-pushed from c4f5e71 to 61167d3
CI MESSAGE: [880809]: BUILD STARTED
CI MESSAGE: [880809]: BUILD FAILED
CI MESSAGE: [880898]: BUILD STARTED
CI MESSAGE: [880898]: BUILD FAILED
CI MESSAGE: [883624]: BUILD STARTED
CI MESSAGE: [883624]: BUILD PASSED
CI MESSAGE: [947658]: BUILD PASSED
CI MESSAGE: [949417]: BUILD FAILED
Signed-off-by: Serge Panev <spanev@nvidia.com>
CI MESSAGE: [950879]: BUILD STARTED
const auto &input = ws.Input<GPUBackend>(0);
auto &output = ws.Output<GPUBackend>(0);
std::size_t number_of_axes = input.shape().sample_dim();
DALI_TYPE_SWITCH_WITH_FP16(input.type().id(), DataType,
Not for now, but it may be worth switching only on the type's size, copying the pad value into an opaque buffer, and running the function on a type-erased buffer. This would reduce the number of switch cases to just four: 1, 2, 4, and 8 bytes.
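A minimal standalone sketch of that idea (illustrative names, not the actual DALI kernel API): the pad value is bit-copied into a word of the same width, so every type of a given size shares one instantiation and the switch has only four cases.

#include <cstdint>
#include <cstring>
#include <iostream>
#include <stdexcept>
#include <vector>

// Fill the tail [n_valid, n_total) of a buffer with a pad value, treating the
// data as opaque words of sizeof(StorageT) bytes; the concrete element type
// no longer matters once the value is bit-copied into a same-size word.
template <typename StorageT>
void PadTail(void *data, size_t n_valid, size_t n_total, const void *fill_value) {
  auto *ptr = static_cast<StorageT *>(data);
  StorageT fill;
  std::memcpy(&fill, fill_value, sizeof(StorageT));
  for (size_t i = n_valid; i < n_total; i++) ptr[i] = fill;
}

// Dispatch on the element size only: 4 cases instead of one per data type.
inline void PadTailBySize(size_t element_size, void *data, size_t n_valid,
                          size_t n_total, const void *fill_value) {
  switch (element_size) {
    case 1: PadTail<uint8_t>(data, n_valid, n_total, fill_value); break;
    case 2: PadTail<uint16_t>(data, n_valid, n_total, fill_value); break;
    case 4: PadTail<uint32_t>(data, n_valid, n_total, fill_value); break;
    case 8: PadTail<uint64_t>(data, n_valid, n_total, fill_value); break;
    default: throw std::invalid_argument("unsupported element size");
  }
}

int main() {
  std::vector<float> buf = {1.f, 2.f, 3.f, 0.f, 0.f};
  float fill = -1.0f;
  // Pad elements 3..4 with -1.0f through the type-erased 4-byte path.
  PadTailBySize(sizeof(float), buf.data(), 3, buf.size(), &fill);
  for (float v : buf) std::cout << v << " ";  // prints: 1 2 3 -1 -1
  std::cout << "\n";
}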
std::vector<Arguments> basic_args = {{{"fill_value", -1.0f}, {"axes", std::vector<int>{0}}}};
std::vector<Arguments> two_d_args = {{{"fill_value", 42.0f}, {"axes", std::vector<int>{1}}}}; |
At least manually run a test that pads multiple, but not all, axes.
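Following the Arguments style of the snippets above, such a case might look like this (the axes and fill value are illustrative, assuming an input of rank 3 or higher):

std::vector<Arguments> multi_axes_args = {{{"fill_value", 0.0f}, {"axes", std::vector<int>{0, 2}}}};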
Test added
CI MESSAGE: [950879]: BUILD PASSED
Signed-off-by: Serge Panev <spanev@nvidia.com>
!build
CI MESSAGE: [952483]: BUILD STARTED
CI MESSAGE: [952476]: BUILD PASSED
CI MESSAGE: [952483]: BUILD PASSED
Why do we need this PR?
We need an operator to pad batches whose samples have non-uniform shapes.
A use case is padding the bounding-box batch in detection models (such as YOLOv3), where the number of bboxes per sample varies.
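To illustrate the use case, here is a hand-rolled sketch (not the DALI implementation): every sample's flattened bbox list is padded to the length of the largest sample in the batch, using a fill value along the padded axis.

#include <algorithm>
#include <cstdio>
#include <vector>

int main() {
  // A batch with a varying number of bboxes per sample (4 coords per box),
  // so the per-sample shapes are non-uniform: 2 boxes vs. 3 boxes.
  std::vector<std::vector<float>> batch = {
      {0.1f, 0.2f, 0.3f, 0.4f, 0.5f, 0.6f, 0.7f, 0.8f},             // 2 boxes
      {0.1f, 0.1f, 0.2f, 0.2f, 0.3f, 0.3f, 0.4f, 0.4f, 0.5f, 0.5f,
       0.6f, 0.6f}                                                   // 3 boxes
  };
  const float fill_value = -1.0f;

  // Pad every sample to the largest sample's length, as the Pad operator
  // would do along the padded axis, making the batch uniform.
  size_t max_len = 0;
  for (const auto &s : batch) max_len = std::max(max_len, s.size());
  for (auto &s : batch) s.resize(max_len, fill_value);

  for (const auto &s : batch) {
    for (float v : s) std::printf("%g ", v);
    std::printf("\n");  // first sample ends with four -1 pad values
  }
}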
What happened in this PR?
We can reuse the SliceFlipNormalizePermutePad CUDA kernel and provide an easy interface to it through the Pad operator for the GPU (CPU to be implemented). This PR adds the Pad DALI kernel, the Pad operator, and Pad unit tests.
The Pad kernel implementation is tested with unit tests in dali/pipeline/operators/util/pad_test.cc.
Yes, the docstring for the operator is added.