
Add Pad operator #1180

Merged: 3 commits into NVIDIA:master from pad_operator, Oct 22, 2019
Conversation

@Kh4L (Contributor) commented Aug 19, 2019

Why do we need this PR?

We need an operator to pad batches of elements with non-uniform shapes.
One use case is padding the batch of bounding boxes in detection models (such as YOLOv3), where the number of bboxes per sample varies.

What happened in this PR?

  • Explain the solution to the problem, new feature added.
    We can reuse the SliceFlipNormalizePermutePad CUDA kernel and provide an easy-to-use interface to it through the Pad operator for the GPU (CPU to be implemented).
  • What was changed, added, removed?
    Added the Pad DALI kernel, the Pad operator, and Pad unit tests.
  • What is the most important part that reviewers should focus on?
    The Pad kernel implementation.
  • Was this PR tested? How?
    It is tested with unit tests in dali/pipeline/operators/util/pad_test.cc.
  • Were docs and examples updated, if necessary?
    Yes, a docstring for the operator was added.
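To illustrate the padding semantics described above, here is a minimal host-side sketch (not DALI code, and `PadBatch` is a hypothetical name): each sample in a batch of 1-D samples is extended to the length of the longest sample, with the tail filled by a fill value. The real operator performs this on the GPU via the SliceFlipNormalizePermutePad kernel.

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// Illustrative only: pad every sample in a batch of 1-D samples to the
// length of the longest sample, filling the tail with `fill_value`.
template <typename T>
std::vector<std::vector<T>> PadBatch(const std::vector<std::vector<T>> &batch,
                                     T fill_value) {
  std::size_t max_len = 0;
  for (const auto &sample : batch)
    max_len = std::max(max_len, sample.size());
  std::vector<std::vector<T>> out = batch;
  for (auto &sample : out)
    sample.resize(max_len, fill_value);  // extend with the fill value
  return out;
}
```

For the bounding-box use case, each sample would be the flattened bbox list of one image, and a negative fill value (e.g. -1) can mark padded entries.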

Resolved review threads: dali/kernels/common/pad.h, dali/pipeline/operators/util/pad.cu, dali/pipeline/operators/util/pad.h, dali/pipeline/operators/util/pad_test.cc, dali/pipeline/proto/dali_proto_intern.cc
@JanuszL (Contributor) commented Aug 20, 2019

You can also work toward Operator::Setup, as in #1140.

@Kh4L (Contributor, Author) commented Aug 27, 2019

> You can also work toward the Operator::Setup as #1140

Done

@Kh4L Kh4L changed the title Add Pad operator [WIP] Add Pad operator Aug 27, 2019
@Kh4L Kh4L requested a review from jantonguirao August 28, 2019 14:56
@Kh4L Kh4L force-pushed the pad_operator branch 2 times, most recently from c4f5e71 to 61167d3 on August 30, 2019 15:34
@Kh4L Kh4L changed the title [WIP] Add Pad operator Add Pad operator Aug 30, 2019
@Kh4L Kh4L requested a review from JanuszL August 30, 2019 15:39
@dali-automaton (Collaborator): CI MESSAGE: [880809]: BUILD STARTED
@dali-automaton (Collaborator): CI MESSAGE: [880809]: BUILD FAILED
@dali-automaton (Collaborator): CI MESSAGE: [880898]: BUILD STARTED
@dali-automaton (Collaborator): CI MESSAGE: [880898]: BUILD FAILED
@dali-automaton (Collaborator): CI MESSAGE: [883624]: BUILD STARTED
@dali-automaton (Collaborator): CI MESSAGE: [883624]: BUILD PASSED

Resolved review threads: dali/kernels/common/pad.h, dali/pipeline/proto/dali_proto_intern.cc
@dali-automaton (Collaborator): CI MESSAGE: [947658]: BUILD PASSED
@dali-automaton (Collaborator): CI MESSAGE: [949417]: BUILD FAILED

Resolved review threads: dali/operators/util/pad.cu
Signed-off-by: Serge Panev <spanev@nvidia.com>
Signed-off-by: Serge Panev <spanev@nvidia.com>
@dali-automaton (Collaborator): CI MESSAGE: [950879]: BUILD STARTED

@Kh4L Kh4L requested a review from mzient October 18, 2019 16:01
const auto &input = ws.Input<GPUBackend>(0);
auto &output = ws.Output<GPUBackend>(0);
std::size_t number_of_axes = input.shape().sample_dim();
DALI_TYPE_SWITCH_WITH_FP16(input.type().id(), DataType,
Contributor commented:

Not for now, but it may be worth switching only over the type's size: copy the pad value to an opaque buffer and run the function on a type-erased buffer. This would reduce the number of switch cases to just four: 1, 2, 4, and 8 bytes.
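The size-based dispatch suggested above could be sketched like this (hypothetical names, not DALI code): the fill value is copied into an opaque buffer and the fill loop is instantiated once per element size, so all 1/2/4/8-byte element types share four code paths instead of one per type.

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <type_traits>

// Fill `count` elements of `size_bytes`-sized type with a type-erased value.
template <int size_bytes>
void FillTypeErased(void *dst, const void *fill_value, int count) {
  // Pick an unsigned integer type with the requested size; the bit
  // pattern of the real element type is preserved by memcpy.
  using U = typename std::conditional<size_bytes == 1, uint8_t,
            typename std::conditional<size_bytes == 2, uint16_t,
            typename std::conditional<size_bytes == 4, uint32_t,
                                      uint64_t>::type>::type>::type;
  U v;
  std::memcpy(&v, fill_value, sizeof(U));
  U *out = static_cast<U *>(dst);
  for (int i = 0; i < count; i++) out[i] = v;
}

// Runtime dispatch: only four switch cases regardless of element type.
inline void Fill(void *dst, const void *fill_value, int count, int size_bytes) {
  switch (size_bytes) {
    case 1: FillTypeErased<1>(dst, fill_value, count); break;
    case 2: FillTypeErased<2>(dst, fill_value, count); break;
    case 4: FillTypeErased<4>(dst, fill_value, count); break;
    case 8: FillTypeErased<8>(dst, fill_value, count); break;
    default: break;  // unsupported element size
  }
}
```

float, int32_t, and uint32_t would all hit the 4-byte case here, which is the reduction in switch cases the comment is after.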


std::vector<Arguments> basic_args = {{{"fill_value", -1.0f}, {"axes", std::vector<int>{0}}}};

std::vector<Arguments> two_d_args = {{{"fill_value", 42.0f}, {"axes", std::vector<int>{1}}}};
Contributor commented:

At least manually run a test that covers multiple, but not all, axes.

Contributor (Author) replied:

Test added
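The multi-axis case discussed above can be reasoned about with a small shape computation, sketched here with a hypothetical helper (not the actual DALI test code): axes listed in `axes` grow to the batch-wide maximum extent, while the remaining axes keep their per-sample extents.

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Hypothetical helper: given per-sample shapes and the axes to pad,
// compute each sample's output shape. Padded axes grow to the
// batch-wide maximum; other axes are left untouched.
std::vector<std::vector<int>> PaddedShapes(
    const std::vector<std::vector<int>> &shapes,
    const std::vector<int> &axes) {
  std::vector<std::vector<int>> out = shapes;
  for (int axis : axes) {
    int max_extent = 0;
    for (const auto &s : shapes)
      max_extent = std::max(max_extent, s[axis]);
    for (auto &s : out)
      s[axis] = max_extent;
  }
  return out;
}
```

For example, padding axes {0, 2} of a batch with shapes {2,3,5} and {4,1,2} yields {4,3,5} and {4,1,5}: axes 0 and 2 are unified, axis 1 is not.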

@dali-automaton (Collaborator): CI MESSAGE: [950879]: BUILD PASSED

Signed-off-by: Serge Panev <spanev@nvidia.com>
@Kh4L (Contributor, Author) commented Oct 19, 2019

!build

@dali-automaton (Collaborator): CI MESSAGE: [952483]: BUILD STARTED
@dali-automaton (Collaborator): CI MESSAGE: [952476]: BUILD PASSED
@dali-automaton (Collaborator): CI MESSAGE: [952483]: BUILD PASSED

@jantonguirao jantonguirao merged commit 9db25b8 into NVIDIA:master Oct 22, 2019

6 participants