
Conversation

shunting314 (Contributor) commented Oct 23, 2021

Stack from ghstack:

This diff demos torch::deploy unity, which builds the model, its dependencies, and the runtime into a single unit!

The end user only needs to use the build_unity rule in place of the python_binary rule to define the Python application. Under the hood, we build the Python application (an XAR file), build the torch::deploy runtime, and then embed the Python application (the XAR file) into the torch::deploy runtime.

When the torch::deploy runtime starts, the XAR is written to the filesystem and extracted. We add the extracted path to Python's sys.path so that the model files and all of the Python dependencies can be found!
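The extract-then-import flow described above can be sketched in plain Python. This is a hedged illustration, not the actual torch::deploy implementation: a zip archive stands in for the XAR format, and the function name and temp-directory layout are invented for the example.

```python
import sys
import tempfile
import zipfile


def load_embedded_app(archive_path: str) -> str:
    """Extract an embedded application archive and make it importable.

    Illustration only: torch::deploy unity embeds and extracts a XAR;
    a plain zip file stands in for it here.
    """
    extract_dir = tempfile.mkdtemp(prefix="python_app_root_")
    with zipfile.ZipFile(archive_path) as archive:
        archive.extractall(extract_dir)
    # Prepend so modules shipped with the app win over any system copies.
    sys.path.insert(0, extract_dir)
    return extract_dir
```

After this call, importing a module bundled in the archive resolves against the extracted tree, which is how the bundled model files and dependencies become visible to the interpreter.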

As a demo, the model here is just a simple Python program using numpy and scipy, but in principle it can be as complex as we want.
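The demo program itself is not shown in this description. As a rough stand-in (the function, weights, and numbers below are invented for illustration), any small script that exercises bundled third-party imports would do, e.g. a toy numpy computation:

```python
import numpy as np


def predict(x):
    """Toy 'model': one fixed linear layer followed by ReLU.

    Stand-in for the demo program; the real point is only that it pulls in
    third-party dependencies (numpy here; the PR's demo also uses scipy).
    """
    weights = np.array([[1.0, -1.0], [0.5, 0.5]])
    out = weights @ np.asarray(x, dtype=float)
    return np.maximum(out, 0.0)  # ReLU


if __name__ == "__main__":
    print(predict([2.0, 1.0]))
```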

I'll check how bento_kernel works; maybe we can learn from it to simplify things a bit.

Differential Revision: [D31816526](https://our.internmc.facebook.com/intern/diff/D31816526/)

**NOTE FOR REVIEWERS**: This PR has internal Facebook-specific changes or comments; please review them on [Phabricator](https://our.internmc.facebook.com/intern/diff/D31816526/)!

facebook-github-bot (Contributor) commented Oct 23, 2021


💊 CI failures summary and remediations

As of commit 306841f (more details on the Dr. CI page):


💚 💚 Looks good so far! There are no failures yet. 💚 💚


This comment was automatically generated by Dr. CI.

Please report bugs/suggestions to the (internal) Dr. CI Users group.


pytorch-probot bot commented Oct 23, 2021

CI Flow Status

⚛️ CI Flow

Ruleset - Version: v1
Ruleset - File: https://github.com/pytorch/pytorch/blob/306841f8cd577855c3589d7ac9d3fbee478120e3/.github/generated-ciflow-ruleset.json
PR ciflow labels: ciflow/default

Triggered Workflows

| Workflow | Labels | Status |
| --- | --- | --- |
| linux-bionic-py3.6-clang9 | ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/noarch, ciflow/xla | ✅ triggered |
| linux-vulkan-bionic-py3.6-clang9 | ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/vulkan | ✅ triggered |
| linux-xenial-cuda11.3-py3.6-gcc7 | ciflow/all, ciflow/cuda, ciflow/default, ciflow/linux | ✅ triggered |
| linux-xenial-py3-clang5-mobile-build | ciflow/all, ciflow/default, ciflow/linux, ciflow/mobile | ✅ triggered |
| linux-xenial-py3-clang5-mobile-custom-build-dynamic | ciflow/all, ciflow/default, ciflow/linux, ciflow/mobile | ✅ triggered |
| linux-xenial-py3-clang5-mobile-custom-build-static | ciflow/all, ciflow/default, ciflow/linux, ciflow/mobile | ✅ triggered |
| linux-xenial-py3.6-clang7-asan | ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/sanitizers | ✅ triggered |
| linux-xenial-py3.6-clang7-onnx | ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/onnx | ✅ triggered |
| linux-xenial-py3.6-gcc5.4 | ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux | ✅ triggered |
| linux-xenial-py3.6-gcc7 | ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux | ✅ triggered |
| linux-xenial-py3.6-gcc7-bazel-test | ciflow/all, ciflow/bazel, ciflow/cpu, ciflow/default, ciflow/linux | ✅ triggered |
| win-vs2019-cpu-py3 | ciflow/all, ciflow/cpu, ciflow/default, ciflow/win | ✅ triggered |
| win-vs2019-cuda11.3-py3 | ciflow/all, ciflow/cuda, ciflow/default, ciflow/win | ✅ triggered |

Skipped Workflows

| Workflow | Labels | Status |
| --- | --- | --- |
| caffe2-linux-xenial-py3.6-gcc5.4 | ciflow/all, ciflow/cpu, ciflow/linux | 🚫 skipped |
| docker-builds | ciflow/all | 🚫 skipped |
| libtorch-linux-xenial-cuda10.2-py3.6-gcc7 | ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux | 🚫 skipped |
| libtorch-linux-xenial-cuda11.3-py3.6-gcc7 | ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux | 🚫 skipped |
| linux-bionic-cuda10.2-py3.9-gcc7 | ciflow/all, ciflow/cuda, ciflow/linux, ciflow/slow | 🚫 skipped |
| linux-xenial-py3-clang5-mobile-code-analysis | ciflow/all, ciflow/linux, ciflow/mobile | 🚫 skipped |
| parallelnative-linux-xenial-py3.6-gcc5.4 | ciflow/all, ciflow/cpu, ciflow/linux | 🚫 skipped |
| periodic-libtorch-linux-xenial-cuda11.1-py3.6-gcc7 | ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux, ciflow/scheduled | 🚫 skipped |
| periodic-linux-xenial-cuda10.2-py3-gcc7-slow-gradcheck | ciflow/all, ciflow/cuda, ciflow/linux, ciflow/scheduled, ciflow/slow, ciflow/slow-gradcheck | 🚫 skipped |
| periodic-linux-xenial-cuda11.1-py3.6-gcc7 | ciflow/all, ciflow/cuda, ciflow/linux, ciflow/scheduled | 🚫 skipped |
| periodic-win-vs2019-cuda11.1-py3 | ciflow/all, ciflow/cuda, ciflow/scheduled, ciflow/win | 🚫 skipped |

You can add a comment to the PR and tag @pytorchbot with the following commands:
# ciflow rerun, "ciflow/default" will always be added automatically
@pytorchbot ciflow rerun

# ciflow rerun with additional labels "-l <ciflow/label_name>", which is equivalent to adding these labels manually and triggering the rerun
@pytorchbot ciflow rerun -l ciflow/scheduled -l ciflow/slow

For more information, please take a look at the CI Flow Wiki.

shunting314 added a commit that referenced this pull request Oct 23, 2021
ghstack-source-id: 141389280
Pull Request resolved: #67134
shunting314 requested a review from suo on October 23, 2021 01:50
shunting314 added a commit that referenced this pull request Oct 26, 2021
Pull Request resolved: #67134

ghstack-source-id: 141581647
shunting314 added a commit that referenced this pull request Oct 28, 2021
Pull Request resolved: #67134

ghstack-source-id: 141766837
shunting314 added a commit that referenced this pull request Oct 29, 2021
Pull Request resolved: #67134

ghstack-source-id: 141946773
shunting314 (Contributor, Author) commented
The error message for the failed check linux-xenial-cuda11.3-py3.6-gcc7 looks as follows:

ERROR 2021-10-29T19:30:13Z: sccache::server: Compilation failed: Output { status: ExitStatus(ExitStatus(256)), stdout: "", stderr: "conftest.c:29:1: error: function declaration isn\'t a prototype [-Werror=strict-prototypes]\n main ()\n ^~~~\ncc1: all warnings being treated as errors\n" }

Looks unrelated to this PR.

shunting314 added a commit that referenced this pull request Nov 1, 2021
Pull Request resolved: #67134

ghstack-source-id: 142074917
suo (Member) commented Nov 1, 2021

hmm, if you read back in the logs, there are some suspicious lines:

CMakeFiles/interactive_embedded_interpreter.dir/interactive_embedded_interpreter.cpp.o: In function `std::__shared_ptr<torch::deploy::PathEnvironment, (__gnu_cxx::_Lock_policy)2>::__shared_ptr<std::allocator<torch::deploy::PathEnvironment>, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >&>(std::_Sp_make_shared_tag, std::allocator<torch::deploy::PathEnvironment> const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >&)':
interactive_embedded_interpreter.cpp:(.text._ZNSt12__shared_ptrIN5torch6deploy15PathEnvironmentELN9__gnu_cxx12_Lock_policyE2EEC2ISaIS2_EJRNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEEEEESt19_Sp_make_shared_tagRKT_DpOT0_[_ZNSt12__shared_ptrIN5torch6deploy15PathEnvironmentELN9__gnu_cxx12_Lock_policyE2EEC5ISaIS2_EJRNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEEEEESt19_Sp_make_shared_tagRKT_DpOT0_]+0xb0): undefined reference to `vtable for torch::deploy::PathEnvironment'
collect2: error: ld returned 1 exit status
torch/csrc/deploy/CMakeFiles/interactive_embedded_interpreter.dir/build.make:113: recipe for target 'bin/interactive_embedded_interpreter' failed

shunting314 (Contributor, Author) commented
@suo thanks for pointing that out. I was mainly focusing on the error lines rendered in red in the log and didn't pay attention to the white ones; there are just so many log lines. Good catch! I'll fix that.

shunting314 added a commit that referenced this pull request Nov 1, 2021
Pull Request resolved: #67134

ghstack-source-id: 142085742
suo (Member) commented Nov 1, 2021

yeah, the signal is very hard to interpret… we have #65431 to clean it up

facebook-github-bot pushed a commit that referenced this pull request Nov 2, 2021
Summary:
Pull Request resolved: #67134

ghstack-source-id: 142085742

Test Plan:
```
# build
buck build mode/opt unity:unity

# make sure the path exists before we start torch::deploy runtime
# Otherwise the dynamic loader will just skip this non-existing path
# even though we create it after the runtime starts.
mkdir -p /tmp/torch_deploy_python_app/python_app_root

# run
LD_LIBRARY_PATH=/tmp/torch_deploy_python_app/python_app_root ~/fbcode/buck-out/gen/caffe2/torch/csrc/deploy/unity/unity
```

Reviewed By: suo

Differential Revision: D31816526

fbshipit-source-id: 8eba97952aad10dcf1c86779fb3f7e500773d7ee
facebook-github-bot deleted the gh/shunting314/7/head branch, December 2, 2021 15:15