
Introduce owning IODescriptor #16

Merged: 11 commits, Jan 26, 2021

Conversation

@szalpal (Member) commented Jan 15, 2021

This PR introduces changes to fix #14: correcting the TritonBackend API usage and introducing an owning IODescriptor.

The solution is to add an owning IODescriptor. The data buffer obtained from the Triton API arrives in chunks, which need to be stitched together before the input can be passed to DALI, so an owning data structure is required. In the future, the DALI API may change to allow skipping this one extra copy.
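For illustration, here is a minimal sketch of what such an owning descriptor can look like; all names and fields below are assumptions made for this example, not the PR's actual definitions:

```cpp
#include <cstddef>
#include <cstdint>
#include <string>
#include <vector>

// Hypothetical sketch of an owning IO descriptor. Unlike a non-owning
// view (raw pointer + size), it holds its own contiguous buffer, so the
// chunks returned by the Triton backend API can be stitched together
// before the whole input is handed to DALI.
struct OwningIODescriptor {
  std::string name;            // tensor name from the model config
  std::vector<int64_t> shape;  // tensor shape
  std::vector<char> buffer;    // owned, contiguous data

  // Append one chunk in the order Triton delivers them.
  void AppendChunk(const void* chunk, size_t nbytes) {
    const char* src = static_cast<const char*>(chunk);
    buffer.insert(buffer.end(), src, src + nbytes);
  }
};
```

The point of ownership is that `buffer` outlives the Triton-managed chunks, so DALI can read one contiguous allocation regardless of how the data arrived.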

Signed-off-by: szalpal <mszolucha@nvidia.com> (4 commits)
@deadeyegoodwin (Contributor) left a comment:

Is the CI testing updated to cover this failure case?

@szalpal (Member, Author) commented Jan 20, 2021

@deadeyegoodwin,
Yes, this PR includes an example (docs/examples/multi_input/model_repository/dali_multi_input/config.pbtxt), which is a multi-input, multi-device test for this case.
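For context, a multi-input configuration for the DALI backend generally has the following shape. This is only an illustrative sketch; the tensor names, types, and dims are assumptions, not the contents of the file referenced above:

```
# Illustrative sketch of a multi-input DALI model configuration;
# names, types, and dims are assumptions for this example.
name: "dali_multi_input"
backend: "dali"
max_batch_size: 256
input [
  {
    name: "DALI_INPUT_0"
    data_type: TYPE_FP32
    dims: [ 3 ]
  },
  {
    name: "DALI_INPUT_1"
    data_type: TYPE_FP32
    dims: [ 3 ]
  }
]
output [
  {
    name: "DALI_OUTPUT_0"
    data_type: TYPE_FP32
    dims: [ 3 ]
  }
]
```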

@deadeyegoodwin (Contributor) left a comment:

Is the new test running in CI? Don't you also need to make some changes here: https://github.com/triton-inference-server/server/tree/master/qa/L0_backend_dali

Dockerfile (review thread resolved)
Signed-off-by: szalpal <mszolucha@nvidia.com> (5 commits)
@szalpal marked this pull request as ready for review on January 25, 2021 at 10:27
@szalpal (Member, Author) commented Jan 25, 2021

> Is the new test running in CI? Don't you also need to make some changes here: https://github.com/triton-inference-server/server/tree/master/qa/L0_backend_dali

Yes, I do. I was going to do that after marking this PR "Ready for review" :)

@szalpal changed the title from "Fix for multi-input bug. Enable GPU input" to "Fix for multi-input bug. Introduce owning IODescriptor" on Jan 25, 2021
@szalpal changed the title from "Fix for multi-input bug. Introduce owning IODescriptor" to "Introduce owning IODescriptor" on Jan 25, 2021
Signed-off-by: szalpal <mszolucha@nvidia.com>
@@ -0,0 +1,18 @@
# Multi input model for DALI Backend

This is a multi input model, for DALI preprocessing.
A Collaborator suggested a change:
- This is a multi input model, for DALI preprocessing.
+ This is a multi input model for DALI preprocessing.

@banasraf previously approved these changes on Jan 25, 2021
Signed-off-by: szalpal <mszolucha@nvidia.com>
Successfully merging this pull request may close these issues:
Multi input model crashes TRITON server without errors

3 participants