[coreml] delegate multiple outputs #88345
Conversation
This pull request was exported from Phabricator. Differential Revision: D40328684 |
```diff
@@ -90,6 +90,11 @@ GenericList pack_outputs(const std::vector<TensorSpec>& output_specs, id<MLFeatu
                  count * sizeof(float));
     outputs.push_back(tensor);
   }
+  if(output_specs.size() > 1){
+    c10::List<c10::List<torch::Tensor>> output_res;
+    ouptut_res.push_back(outputs);
```
Suggested change (fixes the `ouptut_res` typo):

```diff
-ouptut_res.push_back(outputs);
+output_res.push_back(outputs);
```
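The intent of the added branch can be sketched in Python, with plain lists standing in for `c10::List` and tensors (`pack_outputs` here is a hypothetical stand-in for the C++ function, not the real API):

```python
def pack_outputs(outputs):
    """Sketch of the delegate's output packing.

    Lists stand in for c10::List and tensors; names are illustrative.
    """
    if len(outputs) > 1:
        # Wrap multiple outputs in one extra list so the TorchScript
        # wrapper's single-item unpack (`$unpack, = ...`) still succeeds.
        return [outputs]
    return outputs

print(pack_outputs(["out1", "out2"]))  # [['out1', 'out2']]
print(pack_outputs(["out1"]))          # ['out1']
```

A single output is returned unwrapped, so the existing single-output path is unchanged; only the multi-output case gains the extra nesting level.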
Summary: Pull Request resolved: pytorch#88345

https://www.internalfb.com/code/fbsource/[c0e4da0b5c7fff3b4e31e4611033c30cabdc6aef]/fbcode/caffe2/torch/csrc/jit/backends/backend_detail.cpp?lines=268-276

It seems the TorchScript wrapper generated in `backend_detail.cpp` emits `$unpack, = self.__backend.execute( ...`; the trailing comma after `unpack` forces the result of `execute` to contain exactly one item. With this fix, when the number of outputs is greater than 1, `execute` returns a nested list of outputs (the outputs are placed in an inner list before being added to the list we return):

```
[[output1, output2, output3, ...]]
```

instead of

```
[output1, output2, output3, ...]
```

Do we want to fix this in `backend_detail`, or should we make the change in our delegate to accommodate the TorchScript? Proposing the question here; requesting cccclai and kimishpatel for approval.

Test Plan: unblocked models for chengxiangyin; models in pytorch playground all pass unit tests.

Reviewed By: kimishpatel, cccclai

Differential Revision: D40328684

fbshipit-source-id: 19e4a87c5570df56a6db05942d3729cfcc86009b
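The trailing comma described above is ordinary Python (and TorchScript) iterable unpacking: `x, = seq` succeeds only when `seq` has exactly one element. A small sketch of why the extra nesting fixes the multi-output case:

```python
outputs = ["out1", "out2", "out3"]

# `x, = seq` is single-element unpacking: it raises if len(seq) != 1.
try:
    result, = outputs  # three elements -> ValueError
except ValueError as e:
    print(e)  # too many values to unpack (expected 1)

# Nesting the outputs in one extra list makes the unpack succeed,
# and `result` is then the full list of outputs.
result, = [outputs]
print(result)  # ['out1', 'out2', 'out3']
```

This is why wrapping the delegate's outputs in one more list, rather than changing the generated `$unpack, = ...` line, is sufficient to unblock multi-output models.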
One other nit
Also this one.
Looks good. This piece of code could definitely be optimized a lot with more `std::move` and `reserve()` calls for all the `push_back`, `toList`, etc.
```cpp
    return c10::impl::toList(output_res);
  }
  return c10::impl::toList(outputs);
```
Should also move here actually.
Suggested change:

```diff
-    return c10::impl::toList(output_res);
-  }
-  return c10::impl::toList(outputs);
+    return c10::impl::toList(std::move(output_res));
+  }
+  return c10::impl::toList(std::move(outputs));
```
```diff
@@ -90,6 +90,11 @@ GenericList pack_outputs(const std::vector<TensorSpec>& output_specs, id<MLFeatu
                  count * sizeof(float));
     outputs.push_back(tensor);
```
Suggested change:

```diff
-outputs.push_back(tensor);
+outputs.push_back(std::move(tensor));
```
Also see the `std::vector` above and properly call `.reserve()` on it, etc.
In the changed function, the `std::vector` `output_shape` should also allocate its space using `.reserve()` before the for loop. Otherwise this code is looking much better and more performant.
@pytorchbot merge (Initiating merge automatically since Phabricator Diff has merged)

Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.