Pad operator: Add support for per-sample shape and alignment requirements #2432
Conversation
@@ -31,6 +31,8 @@ bool Pad<GPUBackend>::SetupImpl(std::vector<OutputDesc> &output_desc,
  int ndim = in_shape.sample_dim();
  int nsamples = in_shape.num_samples();

  this->ReadArguments(spec_, ws);
It gave me an error without this->.
Force-pushed from ee671f3 to 8e04c95
…ents Signed-off-by: Joaquin Anton <janton@nvidia.com>
Force-pushed from 8e04c95 to 2427804
dali/operators/generic/pad.h
Outdated
      if (remainder > 0)
        extent += alignment - remainder;
      return extent;
    void ReadShapeListArg(std::vector<TensorShape<>> &out, const std::string &arg_name,
Can't you use GetShapeArgument from operator/common.h?
dali/operators/random/uniform.h
Outdated
    output_desc[0].type = TypeTable::GetTypeInfo(DALI_FLOAT);
    auto& sh = output_desc[0].shape;
    if (spec_.HasTensorArgument("shape")) {
Likewise - use GetShapeArgument?
Signed-off-by: Joaquin Anton <janton@nvidia.com>
dali/pipeline/operator/common.h
Outdated
    out_tls.resize(out_tls.shape().num_samples / ndim, ndim);
    ndim = GetShapeLikeArgument<ArgumentType>(out_tls.shapes, spec, argument_name,
                                              ws, ndim, batch_size);
    out_tls.resize(ndim == 0 ? 0 : out_tls.shapes.size() / ndim, ndim);
This is wrong (and it's my fault). Please change semantics of GetShapeLikeArgument to return number of samples, not dimensions (or both, perhaps). The return value is not used anywhere except tests, so it won't break anything.
dali/operators/random/uniform.cc
Outdated
@@ -75,7 +75,7 @@ This argument is mutually exclusive with ``values``.
This argument is mutually exclusive with ``range``.
)code", std::vector<float>({}))
    .AddOptionalArg("shape",
-       R"code(Shape of the samples.)code", std::vector<int>{1});
+       R"code(Shape of the samples.)code", std::vector<int>{1}, true);
Suggested change:
-       R"code(Shape of the samples.)code", std::vector<int>{1}, true);
+       R"code(Shape of the samples.)code", std::vector<int>{}, true);
The default has been changed to generate scalars, not 1-element 1D tensors. It should stay like this!
Force-pushed from fc34aa9 to ec781ee
include/dali/core/tensor_shape.h
Outdated
    template <typename It,
              typename = std::enable_if<is_integer_iterator<It>::value>>
    TensorShape(It first, It last) {
      int sz = last - first;
Suggested change:
-      int sz = last - first;
+      std::ptrdiff_t sz = last - first;
although I don't know if we need to require contiguity at all
dali/pipeline/operator/common.h
Outdated
    std::pair<int, int> GetShapeArgument(TensorListShape<out_ndim> &out_tls, const OpSpec &spec,
                                         const std::string &argument_name, const ArgumentWorkspace &ws,
Why the formatting change? This grouping seems random.
CTRL+F did that. I can restore it.
dali/pipeline/operator/common.h
Outdated
    std::pair<int, int> GetShapeLikeArgument(std::vector<ExtentType> &out_shape, const OpSpec &spec,
                                             const std::string &argument_name,
                                             const ArgumentWorkspace &ws, int ndim = -1,
                                             int batch_size = -1 /* -1 = get from "batch_size" arg */) {
Random grouping of arguments.
dali/operators/random/uniform.h
Outdated
    if (spec_.ArgumentDefined("shape")) {
      GetShapeArgument(output_desc[0].shape, spec_, "shape", ws, -1, batch_size_);
    } else {
      output_desc[0].shape = uniform_list_shape(batch_size_, TensorShape<>{1});
Suggested change:
-      output_desc[0].shape = uniform_list_shape(batch_size_, TensorShape<>{1});
+      output_desc[0].shape = uniform_list_shape(batch_size_, TensorShape<0>());
This is a bug. Please fix!
True.
Force-pushed from ec781ee to 5da2d20
include/dali/core/tensor_shape.h
Outdated
    template <typename It,
              typename = std::enable_if<is_integer_iterator<It>::value>>
    TensorShape(It first, It last) : Base() {
Missing:
DALI_NO_EXEC_CHECK
DALI_HOST_DEV
Also, the explicit invocation of the base's default constructor is unnecessary.
include/dali/core/tensor_shape.h
Outdated
    template <typename Collection,
              typename = std::enable_if_t<is_integer_collection<Collection>::value>>
    TensorShape(const Collection &c)  // NOLINT(runtime/explicit)
        : TensorShape(dali::begin(c), dali::end(c)) {}
DALI_NO_EXEC_CHECK
DALI_HOST_DEV
Force-pushed from 3560aa2 to 17287f5
Signed-off-by: Joaquin Anton <janton@nvidia.com>
Force-pushed from 17287f5 to 939d164
!build
CI MESSAGE: [1762284]: BUILD STARTED
CI MESSAGE: [1762284]: BUILD FAILED
@@ -197,19 +198,20 @@ int GetShapeLikeArgument(std::vector<ExtentType> &out_shape,
      out_shape[i * ndim + d] = to_extent(tsvec[d]);
    }

-  return ndim;
+  return {batch_size, ndim};
Do we need to return a pair? The shape to fill is accepted as an argument, so we could accept batch_size and ndim the same way. ret.first, ret.second is not very self-descriptive.
The issue is that there are already input ndim and batch_size arguments that can be omitted or, in the case of ndim, inferred from the type (TensorShape<ndim>). IMHO it would be too messy to use both of those as optional inputs and also as outputs; that's why I am using the return value.
I just wonder if this is not too pythonic. But sure.
!build
CI MESSAGE: [1765042]: BUILD STARTED
CI MESSAGE: [1765042]: BUILD PASSED
Signed-off-by: Joaquin Anton janton@nvidia.com
Why do we need this PR?
- Pad: shape/align requirements per sample

What happened in this PR?
- Pad operator: shape/align changed to support argument inputs
- (Bonus, needed for testing) Uniform operator: shape changed to support argument inputs
- Affected: Pad, Uniform
- N/A
- Tests added
- N/A

JIRA TASK: [DALI-1669]