Add DLPack input support to the ExternalSource operator #2023
Conversation
!build
It goes after #1997
CI MESSAGE: [1395708]: BUILD STARTED
CI MESSAGE: [1395708]: BUILD FAILED
Force-pushed from 184b06e to 7461691 (compare)
!build
CI MESSAGE: [1403482]: BUILD STARTED
CI MESSAGE: [1403482]: BUILD FAILED
!build
CI MESSAGE: [1405230]: BUILD STARTED
CI MESSAGE: [1405230]: BUILD FAILED
!build
CI MESSAGE: [1405462]: BUILD STARTED
CI MESSAGE: [1405462]: BUILD PASSED
dali/python/backend_impl.cc
Outdated

 template <typename TensorType>
-TensorShape<> FillTensorData(const py::object object, TensorType *t, int device_id, string layout) {
+void FillTensorCudaArrayInterfaceData(const py::object object, TensorType *batch,
+    int device_id, string layout) {

Nitpick: align the continued parameter list.

Done
dali/python/backend_impl.cc
Outdated

auto dlm_tensor_ptr = DLMTensorRawPtrFromCapsule(capsule, false);
const auto &dl_tensor = dlm_tensor_ptr->dl_tensor;
list.append(dl_tensor.ctx.device_type == kDLGPU);
if (dl_tensor.ctx.device_type != kDLGPU && dl_tensor.ctx.device_type != kDLCPU) {

How about:

list.append(dl_tensor.ctx.device_type == kDLGPU || dl_tensor.ctx.device_type == kDLCPU);
list.append(dl_tensor.ctx.device_type == kDLGPU);

Done
dali/python/backend_impl.cc
Outdated

@@ -292,10 +399,26 @@ void ExposeTensor(py::module &m) {
      )code");

  py::class_<Tensor<GPUBackend>>(m, "TensorGPU")
    .def(py::init([](py::capsule &capsule, string layout = "") {
      auto t = new Tensor<GPUBackend>;
      FillTensorDlpackInterfaceData(capsule, t, layout);

I suggest we use a shorter name: FillTensorFromDlPack.

Done
dali/python/backend_impl.cc
Outdated

 .def(py::init([](const py::object object, string layout = "", int device_id = -1) {
   auto t = new Tensor<GPUBackend>;
-  auto shape = FillTensorData(object, t, device_id, layout);
-  t->Resize(shape);
+  FillTensorCudaArrayInterfaceData(object, t, device_id, layout);

FillTensorFromCudaArray?

Done
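For context, a minimal sketch of what this binding enables on the Python side: constructing a TensorGPU directly from an object exposing __cuda_array_interface__. This is illustrative only; CuPy is an assumed example and the exact import path may differ between DALI versions.

import cupy as cp
from nvidia.dali.backend import TensorGPU

# Hedged sketch, not part of this PR's diff: any object exposing
# __cuda_array_interface__ can back a DALI TensorGPU.
arr = cp.random.rand(100, 100, 3).astype(cp.float32)
# The layout string is optional; device_id defaults to -1 per the binding above.
t = TensorGPU(arr, "HWC")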
tensor_list = TensorListCPU(to_dlpack(arr), "NHWC")
dali_torch_tensor = convert_to_torch(tensor_list, device=arr.device, dtype=arr.dtype)
assert(torch.all(arr.eq(dali_torch_tensor)))
test_dlpack_tensor_list_cpu_direct_creation()

Remove this stray top-level call?

Done
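A self-contained variant of the fragment above, as a sketch: convert_to_torch is a DALI test helper, so the round trip back to torch is omitted, and the import path is an assumption that may vary between DALI versions.

import torch
from torch.utils.dlpack import to_dlpack
from nvidia.dali.backend import TensorListCPU

arr = torch.rand(4, 100, 100, 3)
# Wrap the dense batch in a DALI TensorList directly from a DLPack capsule,
# attaching an "NHWC" layout hint; this is the constructor the PR adds.
tensor_list = TensorListCPU(to_dlpack(arr), "NHWC")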
dali/pipeline/data/tensor_list.h
Outdated

 * persist while it is in use by the Tensor.
 */
DLL_PUBLIC inline void ShareData(void *ptr, size_t bytes) {
  ShareData(shared_ptr<void>(ptr, [](void *) {}), bytes, uniform_list_shape(1, {0}));

Why this particular shape? What's wrong with TensorListShape<>{}?

Done
dali/pipeline/data/tensor_list.h
Outdated

@@ -219,14 +217,14 @@ class DLL_PUBLIC TensorList : public Buffer<Backend> {
 * the user to manage the lifetime of the allocation such that it
 * persist while it is in use by the Tensor.
 */
-DLL_PUBLIC inline void ShareData(void *ptr, size_t bytes) {
+inline void ShareData(const shared_ptr<void> &ptr, size_t bytes, const TensorListShape<> &shape) {

For completeness, maybe there should also be a type?

Suggested change:
-inline void ShareData(const shared_ptr<void> &ptr, size_t bytes, const TensorListShape<> &shape) {
+inline void ShareData(const shared_ptr<void> &ptr, size_t bytes, const TensorListShape<> &shape, const TypeInfo &type = {}) {

...this would be in line with the new Resize.

I wanted to align the API with the Tensor API. It doesn't accept a type in the ShareData function.

Done
!build
CI MESSAGE: [1409136]: BUILD STARTED
CI MESSAGE: [1409136]: BUILD PASSED

Force-pushed from 51ecddf to 363846d (compare)

!build
CI MESSAGE: [1423002]: BUILD STARTED
CI MESSAGE: [1423002]: BUILD FAILED
sample_dim, layout_str, pipe_handle->copy_stream, true,
is_pinned);

Nitpick: indentation.
!build
CI MESSAGE: [1423041]: BUILD STARTED

Commit message:
- adds the ability to pass DLPack objects to the ExternalSource operator
- sorts CPU work in ExternalSource by size

Signed-off-by: Janusz Lisiecki <jlisiecki@nvidia.com>

CI MESSAGE: [1423041]: BUILD FAILED
dali/pipeline/data/tensor_list.h
Outdated

@@ -219,14 +217,15 @@ class DLL_PUBLIC TensorList : public Buffer<Backend> {
 * the user to manage the lifetime of the allocation such that it
 * persist while it is in use by the Tensor.
 */
-DLL_PUBLIC inline void ShareData(void *ptr, size_t bytes) {
+inline void ShareData(const shared_ptr<void> &ptr, size_t bytes, const TensorListShape<> &shape,
+                      const TypeInfo &type = TypeInfo::Create<NoType>()) {

Isn't that sufficient?

Suggested change:
-const TypeInfo &type = TypeInfo::Create<NoType>()) {
+const TypeInfo &type = {}) {

Done
@@ -57,6 +57,10 @@ class CachingList {
    return full_data_.empty();
  }

  T &PeakFront() {

Suggested change:
-T &PeakFront() {
+T &PeekFront() {

Done
output_desc[0].shape = tl_data_.PeakFront()->shape();
output_desc[0].type = tl_data_.PeakFront()->type();

Suggested change:
-output_desc[0].shape = tl_data_.PeakFront()->shape();
-output_desc[0].type = tl_data_.PeakFront()->type();
+output_desc[0].shape = tl_data_.PeekFront()->shape();
+output_desc[0].type = tl_data_.PeekFront()->type();

Done
dali/python/backend_impl.cc
Outdated

void CheckStrides(TStrides &strides, TShape &shape, size_t type_size,
                  size_t strides_size, size_t shape_size) {

Suggested change:
-void CheckStrides(TStrides &strides, TShape &shape, size_t type_size,
-                  size_t strides_size, size_t shape_size) {
+void CheckContiguousTensor(const TStrides &strides, size_t num_strides, const TShape &shape,
+                           size_t num_extents, size_t element_size) {

...and add an overload:

void CheckContiguousTensor(const TStrides &strides, const TShape &shape, size_t element_size) {
  CheckContiguousTensor(strides, dali::size(strides), shape, dali::size(shape), element_size);
}

Done
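To make the intent of the rename concrete, here is the dense-layout condition such a helper checks, sketched in Python. Names are illustrative, not DALI's, and strides are assumed to be in bytes, as in pybind11's buffer_info.

# A tensor is contiguous (densely packed, row-major) when each stride equals
# the element size times the product of all faster-varying extents.
def check_contiguous_tensor(strides, shape, element_size):
    expected = element_size
    for extent, stride in zip(reversed(shape), reversed(strides)):
        assert stride == expected, \
            f"stride {stride} != dense stride {expected}"
        expected *= extent

# float32 HWC image of shape [100, 100, 3]: byte strides [1200, 12, 4]
check_contiguous_tensor([1200, 12, 4], [100, 100, 3], element_size=4)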
dali/python/backend_impl.cc
Outdated

std::vector<Index> tensor_shape(shape.size()-1);
for (int i = 1; i < shape.size(); ++i) {
  tensor_shape[i-1] = shape[i];
}
return uniform_list_shape(shape[0], tensor_shape);

Suggested change:
-std::vector<Index> tensor_shape(shape.size()-1);
-for (int i = 1; i < shape.size(); ++i) {
-  tensor_shape[i-1] = shape[i];
-}
-return uniform_list_shape(shape[0], tensor_shape);
+return uniform_list_shape(shape[0], shape.last(shape.size()-1));

Done
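For readers unfamiliar with uniform_list_shape: it expands a dense batch shape [N, ...sample_shape] into N identical per-sample shapes. A rough Python analogue of the C++ call above:

# Hedged sketch; DALI's real uniform_list_shape returns a TensorListShape<>.
def uniform_list_shape(num_samples, sample_shape):
    return [list(sample_shape) for _ in range(num_samples)]

batch_shape = [8, 100, 100, 3]
# shape.last(shape.size()-1) in the C++ corresponds to batch_shape[1:] here.
print(uniform_list_shape(batch_shape[0], batch_shape[1:]))  # 8 x [100, 100, 3]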
dali/python/backend_impl.cc
Outdated

}

template<typename SrcBackend>
TensorShape<> CreateShape(TensorShape<> &shape, Tensor<SrcBackend>*) {

Suggested change:
-TensorShape<> CreateShape(TensorShape<> &shape, Tensor<SrcBackend>*) {
+const TensorShape<> &ConvertShape(const TensorShape<> &shape, Tensor<SrcBackend>*) {

Done
dali/python/backend_impl.cc
Outdated

CheckStrides(info.strides, info.shape, info.itemsize, info.strides.size(),
             info.shape.size());

Suggested change:
-CheckStrides(info.strides, info.shape, info.itemsize, info.strides.size(),
-             info.shape.size());
+CheckStrides(info.strides, info.shape, info.itemsize);

The function should take care of obtaining the sizes.

Done
dali/python/backend_impl.cc
Outdated

CheckStrides(info.strides, info.shape, info.itemsize, info.strides.size(),
             info.shape.size());

Suggested change:
-CheckStrides(info.strides, info.shape, info.itemsize, info.strides.size(),
-             info.shape.size());
+CheckStrides(info.strides, info.shape, info.itemsize);

Done
dali/python/backend_impl.cc
Outdated

    " whereas densely packed data of this shape would have a stride ", stride_from_shape));
  stride_from_shape *= shape[i];
}
CheckStrides(strides, shape, type.size(), strides.size(), shape.size());

Suggested change:
-CheckStrides(strides, shape, type.size(), strides.size(), shape.size());
+CheckStrides(strides, shape, type.size());

Done
dali/python/backend_impl.cc
Outdated

It returns a two element tuple, if this is a valid DLPack object, and if data
resides on the GPU.

Suggested change:
-It returns a two element tuple, if this is a valid DLPack object, and if data
-resides on the GPU.
+It returns a tuple of two boolean values: one indicating if this is a valid
+DLPack object, and the other if the data resides on the GPU.

Done
!build
@@ -504,17 +523,13 @@ def define_graph(self):

 def iter_setup(self):
   if use_list:
-    batch_data = [random_array([100, 100, 3]) for _ in range(self.batch_size)]
+    batch_data = [cast_to(random_array([100, 100, 3]), datapy.uint8) for _ in range(self.batch_size)]
   else:
-    batch_data = random_array([self.batch_size, 100, 100, 3])
+    batch_data = cast_to(random_array([self.batch_size, 100, 100, 3]), datapy.uint8)
   self.feed_input(self.batch, batch_data)

Suggested change (scale before casting, so the uint8 data is not all zeros):
-batch_data = [cast_to(random_array([100, 100, 3]), datapy.uint8) for _ in range(self.batch_size)]
+batch_data = [cast_to(random_array([100, 100, 3])*255, datapy.uint8) for _ in range(self.batch_size)]

...and likewise for the dense-batch branch:
-batch_data = cast_to(random_array([self.batch_size, 100, 100, 3]), datapy.uint8)
+batch_data = cast_to(random_array([self.batch_size, 100, 100, 3])*256, datapy.uint8)
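The rationale behind the scaling, assuming random_array yields floats in [0, 1) as the suggestion implies, in a standalone numpy sketch:

import numpy as np

arr = np.random.random((2, 3))       # floats in [0, 1)
print(arr.astype(np.uint8))          # truncation makes every value 0
print((arr * 255).astype(np.uint8))  # keeps the variation, values in 0..254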
CI MESSAGE: [1423418]: BUILD STARTED
CI MESSAGE: [1423418]: BUILD FAILED
Signed-off-by: Janusz Lisiecki <jlisiecki@nvidia.com>
CI MESSAGE: [1424173]: BUILD STARTED
CI MESSAGE: [1424173]: BUILD PASSED
Signed-off-by: Janusz Lisiecki <jlisiecki@nvidia.com>
CI MESSAGE: [1425810]: BUILD STARTED
CI MESSAGE: [1425810]: BUILD PASSED
Signed-off-by: Janusz Lisiecki <jlisiecki@nvidia.com>

Why we need this PR?
It adds DLPack input support to the ExternalSource operator.

What happened in this PR?
- Affected modules and functionalities: backend, pipeline, external source.
- Key points relevant for the review: NA
- Validation and testing: new tests added.
- Documentation: docs updated.

JIRA TASK: DALI-1465
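A hedged end-to-end sketch of the feature: feeding a DLPack capsule to ExternalSource through feed_input. The exact accepted input types follow the updated ExternalSource documentation; the class-based pipeline style below matches DALI's API of that era, and the shapes are arbitrary.

import torch
from torch.utils.dlpack import to_dlpack
from nvidia.dali.pipeline import Pipeline
import nvidia.dali.ops as ops

class DLPackPipeline(Pipeline):
    def __init__(self, batch_size, num_threads, device_id):
        super(DLPackPipeline, self).__init__(batch_size, num_threads, device_id)
        self.source = ops.ExternalSource()

    def define_graph(self):
        self.batch = self.source()
        return self.batch

    def iter_setup(self):
        data = torch.rand(self.batch_size, 100, 100, 3)
        # The whole dense batch is fed as a single DLPack capsule.
        self.feed_input(self.batch, to_dlpack(data))

pipe = DLPackPipeline(batch_size=4, num_threads=2, device_id=0)
pipe.build()
out, = pipe.run()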