
add dockerfile.bitModel to download bit models #115

Closed
vli11 opened this issue Jun 15, 2023 · 0 comments
Assignees: vli11
Labels: 1.5.0, enhancement (New feature or request)

Comments

@vli11
Contributor

vli11 commented Jun 15, 2023

Version

What release version was the issue found in? 1.5.0

Hardware configuration

List Hardware components used.
11th Gen Intel® Core™ i7 with Mesa Intel® Xe Graphics

Describe the bug

Need to automate downloading the bit models for the OVMS pipeline.

To Reproduce

Steps to reproduce the behavior:
The dockerfile and the download script for the bit models are missing.
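A minimal sketch of what such an automated download could look like, assuming the bit models are fetched over HTTP into the directory layout OVMS expects; the URL, file names, and paths below are placeholders, and the dockerfile.bitModel added for this issue may differ:

```bash
#!/bin/bash
# downloadBitModels.sh - hypothetical helper a dockerfile.bitModel could invoke.
# MODEL_DIR and MODEL_URL are placeholders; substitute the real model source
# and the directory layout expected by the OVMS config.
set -euo pipefail

MODEL_DIR="${MODEL_DIR:-./configs/opencv-ovms/models/bit/1}"
MODEL_URL="${MODEL_URL:-https://example.com/path/to/bit_model.tar.gz}"

mkdir -p "$MODEL_DIR"
if [ -z "$(ls -A "$MODEL_DIR")" ]; then
    echo "Downloading bit model into $MODEL_DIR ..."
    # Archive format is assumed; adjust extraction to match the real artifact.
    wget -qO- "$MODEL_URL" | tar -xz -C "$MODEL_DIR"
else
    echo "bit model already present in $MODEL_DIR, skipping download."
fi
```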

@vli11 vli11 added enhancement New feature or request 1.5.0 labels Jun 15, 2023
@vli11 vli11 self-assigned this Jun 15, 2023
vli11 added a commit that referenced this issue Jun 20, 2023
* feat: Added dockerfile.bitModel to download bit models

closes: #115
Signed-off-by: Valina Li <valina.li@intel.com>
@vli11 vli11 closed this as completed Jun 20, 2023
jim-wang-intel pushed a commit to jim-wang-intel/vision-self-checkout that referenced this issue Jun 30, 2023

* feat: Added dockerfile.bitModel to download bit models

closes: intel-retail#115
Signed-off-by: Valina Li <valina.li@intel.com>
jim-wang-intel added a commit that referenced this issue Jun 30, 2023
* Ovms integration (#59)

* feat: Renamed to Automated self checkout (#62)

Closes: issue #61

Signed-off-by: Valina Li <valina.li@intel.com>

* Benchmark docker (#26)

* feat: Dockerfile for benchmark scripts

* Updated to support --workload option

Signed-off-by: Jubilee Steinbrink <jubilee.steinbrink@intel.com>

* Initial commit for supporting opencv-ovms workload

Signed-off-by: Jubilee Steinbrink <jubilee.steinbrink@intel.com>

* Integration for ovms

Signed-off-by: Jubilee Steinbrink <jubilee.steinbrink@intel.com>

* Updated docker image to openvino/model_server:2022.3.0.1-gpu

Signed-off-by: Jubilee Steinbrink <jubilee.steinbrink@intel.com>

* Update configs/opencv-ovms/scripts/grpc_infer_binary_bit.py

Co-authored-by: Jim Wang @ Intel <yutsung.jim.wang@intel.com>

* updated client image

Signed-off-by: Jubilee Steinbrink <jubilee.steinbrink@intel.com>

* Add script for downloading the sample image

Signed-off-by: Jubilee Steinbrink <jubilee.steinbrink@intel.com>

* Name fix for container

Signed-off-by: Jubilee Steinbrink <jubilee.steinbrink@intel.com>

* Updated copyright and exit code

Signed-off-by: Jubilee Steinbrink <jubilee.steinbrink@intel.com>

---------

Signed-off-by: Valina Li <valina.li@intel.com>
Signed-off-by: Jubilee Steinbrink <jubilee.steinbrink@intel.com>
Co-authored-by: Valina Li <valina.li@intel.com>
Co-authored-by: brian-intel <brian.mcginn@intel.com>
Co-authored-by: Jim Wang @ Intel <yutsung.jim.wang@intel.com>

* feat: Enable pipeline failure log (#56)

* feat: Enable pipeline failure log

Closes: issue #41
Signed-off-by: Valina Li <valina.li@intel.com>

* feat: enhance stream density function (#54)

* feat: enhance stream density function
  The pipeline-count increment can now be larger than 1 when the system is high powered and performing well:
   the increment is taken from the user's input or adjusted dynamically based on the actual average FPS per stream and the target FPS.
   The search policy is greedy: it first ramps up toward the maximum number of pipelines, then decrements by 1 at a time
   to find the maximum number of pipelines that still reaches the target FPS (see the sketch after this entry).

  Closes: #51

* feat: update docker-run-dlstreamer.sh for stream density increments env

* feat: update the increment logic and add cleanup of the child processes of the pipeline parent process

* fix: resolve the hang caused by --rm; exited containers should be removed before checking

---------

Signed-off-by: Jim Wang <yutsung.jim.wang@intel.com>
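A hedged sketch of the greedy search described in this entry; the helper functions and default values below are stand-ins, not the actual stream_density.sh implementation:

```bash
#!/bin/bash
# Illustrative greedy stream-density search: ramp up by INCREMENTS while the
# average FPS per stream meets TARGET_FPS, then back off one pipeline at a time.
TARGET_FPS="${TARGET_FPS:-15}"     # placeholder target
INCREMENTS="${INCREMENTS:-1}"      # from user input or adjusted dynamically

run_pipelines()   { echo "starting $1 pipeline(s)"; }             # stand-in stub
measure_avg_fps() { echo "scale=2; 60 / $1" | bc -l; }            # stand-in stub: fake avg FPS

num=1
# Greedy ramp-up: keep adding pipelines while the target FPS is still met.
while true; do
    run_pipelines "$num"
    fps=$(measure_avg_fps "$num")
    if (( $(echo "$fps >= $TARGET_FPS" | bc -l) )); then
        num=$((num + INCREMENTS))
    else
        break
    fi
done

# Decrement by 1 at a time to find the largest count that still meets the target.
while (( num > 1 )); do
    num=$((num - 1))
    run_pipelines "$num"
    fps=$(measure_avg_fps "$num")
    if (( $(echo "$fps >= $TARGET_FPS" | bc -l) )); then
        break
    fi
done
echo "maximum pipelines meeting ${TARGET_FPS} FPS: $num"
```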

* feat: Add github actions (#70)

Signed-off-by: Marc-Philippe Fuller <marc-philippe.fuller@intel.com>

* feat: enhance the benchmark script to handle --workload option in any argument position (#76)

The --workload option now works whether it is placed before, between, or after the pipeline options (see the sketch after this entry).

  Closes: #67

Signed-off-by: Jim Wang <yutsung.jim.wang@intel.com>
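For illustration, a minimal pattern for accepting --workload in any argument position (a generic sketch, not the actual benchmark.sh parser):

```bash
#!/bin/bash
# Scan all arguments for --workload wherever it appears and keep the rest
# as pipeline options. Purely illustrative.
WORKLOAD=""
PIPELINE_ARGS=()

while [ $# -gt 0 ]; do
    case "$1" in
        --workload)
            WORKLOAD="$2"
            shift 2
            ;;
        *)
            PIPELINE_ARGS+=("$1")
            shift
            ;;
    esac
done

echo "workload=$WORKLOAD pipeline options: ${PIPELINE_ARGS[*]}"
```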

* fix: Update shellcheck minimum severity to warning (#82)

Signed-off-by: Marc-Philippe Fuller <marc-philippe.fuller@intel.com>

* fix: Remove actions/checkout from reviewdog (#83)

Signed-off-by: Marc-Philippe Fuller <marc-philippe.fuller@intel.com>

* fix: fix shellcheck script issues (#81)

* fix: fix shellcheck SC2167
* fix (wip): use mapfile to better handle multi-line output from command lines
* fix: fix shellcheck issue for benchmark.sh
* fix: fix shellcheck issues for camera-simulator.sh and cleanup_gpu_metrics.sh
* fix: fix shellcheck issues for copy-platform-metrics.sh
* fix: fix shellcheck issues for format_avc_mp4.sh
* fix: fix shellcheck issues for get-gpu-info.sh
* fix: remove unused run.sh and run_server.sh under benchmark-scripts
* fix: fix shellcheck issues for benchmark-scripts/stop_server.sh
* fix: fix shellcheck issues for benchmark-scripts/stream_density.sh
* fix: fix shellcheck issues for benchmark-scripts/test_format_avc_mp4.sh
* fix: fix shellcheck issues for camera-simulator/camera-simulator.sh
* fix: fix a boundary case where log_time_monitor_pid may no longer exist
* fix: resolve the stop_server.sh issue where output must be re-split for docker rm to work
* fix: use force kill to clean up its processes, as they sometimes get stuck with pkill

---------

Signed-off-by: Jim Wang <yutsung.jim.wang@intel.com>

* fix: fixed docker-run.sh showing the wrong message when no --workload option is given (#87)

* fix: fixed docker-run.sh showing the wrong message when no --workload option is given
Signed-off-by: Valina Li <valina.li@intel.com>

* feat: Add model download top level script and add unit test for ovms … (#86)

* feat: Add model download top level script and add unit test for ovms model download
Signed-off-by: Valina Li <valina.li@intel.com>

* Update reviewdog.yaml

added new env token

* Update reviewdog.yaml

* Update reviewdog.yaml

* Update reviewdog.yaml

* Update reviewdog.yaml

* Update reviewdog.yaml

* Update reviewdog.yaml

* Update reviewdog.yaml

* Update reviewdog.yaml

* Feat: adding core smoke test (#93)

* Fix: update collect platform metrics to handle multiple gpu devices (#94)

* Update reviewdog.yaml

* Update .reviewdog.yml

* Fix: invert the format avc mp4 script tag logic to match our expected tag (#95)

* Docs: updates for release 1.0.1 (#97)

* fix: resolve the issue of the current model downloader (#98)

* fix: resolve the issue of the current model downloader
  The current getModel.sh script and the other model-download scripts all use relative paths like . or .., which do not work when the scripts are
  run from different directories.
  This PR resolves the relative-path issue by determining the execution path at runtime and using absolute paths (see the sketch after this entry).

  Fixes: #96

* fix: remove check point model files and update the model downloader

Signed-off-by: Jim Wang <yutsung.jim.wang@intel.com>

---------

Signed-off-by: Jim Wang <yutsung.jim.wang@intel.com>
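The change described above amounts to the standard shell idiom for resolving a script's own directory at runtime; a generic sketch (the paths below are examples, not the exact getModel.sh code):

```bash
#!/bin/bash
# Resolve the directory this script lives in, so resources can be referenced
# from any working directory instead of relying on . or .. .
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"

# Example only: build an absolute path relative to the script location.
MODELS_DIR="$SCRIPT_DIR/../configs/opencv-ovms/models"
echo "Using models directory: $MODELS_DIR"
```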

* feat: add trivy code scan (#102)

Signed-off-by: Marc-Philippe Fuller <marc-philippe.fuller@intel.com>

* fix: Remove old linting workflows as reviewdog runs them (#103)

Signed-off-by: Marc-Philippe Fuller <marc-philippe.fuller@intel.com>

* Update reviewdog.yaml

fix golangci-lint install

* Update reviewdog.yaml

fix: create new directory for golangci-lint install

* Update reviewdog.yaml

* Update reviewdog.yaml

* Update reviewdog.yaml

* Update reviewdog.yaml

* Update reviewdog.yaml

* fix reviewdog golangcit

* Update reviewdog.yaml

* Update reviewdog.yaml

* Update reviewdog.yaml

* Update reviewdog.yaml

* Update reviewdog.yaml

* Update reviewdog.yaml

* Update reviewdog.yaml

* Update reviewdog.yaml

* Update reviewdog.yaml

* Update .reviewdog.yml

fix: Add shellcheck to reviewdog and fixed hadolint cmd call

* feat: Add cleaning ovms containers to makefile (#101)

feat: Add cleaning ovms containers to makefile
closes: #100 
Signed-off-by: Valina Li <valina.li@intel.com>
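For context, a make target like this presumably wraps a docker cleanup along these lines (the name filter is an assumption, not the repo's exact rule):

```bash
# Illustrative cleanup: force-remove any containers whose names contain "ovms".
docker ps -aq --filter "name=ovms" | xargs -r docker rm -f
```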

* Update .reviewdog.yml

* Create .golangci.yml

feat: Add golangci config to .github/

* Update reviewdog.yaml

* Update reviewdog.yaml

* Update reviewdog.yaml

* Update reviewdog.yaml

* Update .reviewdog.yml

* feat: Made CPU the default device for the ovms pipeline (#114)

closes: #112

Signed-off-by: Valina Li <valina.li@intel.com>
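A hedged one-liner showing the kind of default this entry describes (the variable name is illustrative; the actual run script may differ):

```bash
# Fall back to CPU when the caller does not specify an inference device.
DEVICE="${DEVICE:-CPU}"
```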

* Fix: igt path pointing to the incorrect directory causing the igt log to not be written (#110)

Signed-off-by: Brian McGinn <brian.mcginn@intel.com>

* Update gotest.yaml

* Update reviewdog.yaml

* fix: move workflow config files to .github directory (#121)

Signed-off-by: Marc-Philippe Fuller <marc-philippe.fuller@intel.com>

* Update codeql.yaml

* fix: fixed an incorrect link to models.list.yml in the pipelinesetup doc (#120)

closes: #119

Signed-off-by: Valina Li <valina.li@intel.com>

* feat: Added dockerfile.bitModel to download bit models (#116)

* feat: Added dockerfile.bitModel to download bit models

closes: #115
Signed-off-by: Valina Li <valina.li@intel.com>

* Fix: pcm directory inconsistent between dockerfile and run script (#107)

* Merge doc changes (#118)

* Docs: merge from the 1.0.1 changes

* only add new commands

* feat: added OVMS sample image download into run script (#125)

Closes: #124

Signed-off-by: Valina Li <valina.li@intel.com>

* docs: Updated doc to include OVMS info (#126)

* docs: Updated doc to include OVMS info

Closes: #65
Signed-off-by: Valina Li <valina.li@intel.com>

* Update codeql.yaml

fix: there are no cpp files in the project

* fix: update license to apache-2.0 (#130)

* fix: update license to apache-2.0

Closes: #129 

Signed-off-by: Jim Wang <yutsung.jim.wang@intel.com>

---------

Signed-off-by: Jim Wang <yutsung.jim.wang@intel.com>

* docs: Updated mkdoc to navigate to OVMS docs (#132)

closes: #131

Signed-off-by: Valina Li <valina.li@intel.com>

* feat: clean up checked-in dlstreamer models (#133)

The model downloader can download the dlstreamer models, so those model files no longer need to be checked in.

 Closes: #99

Signed-off-by: Jim Wang <yutsung.jim.wang@intel.com>

* feat: add docker image build to workflows (#137)

Signed-off-by: Marc-Philippe Fuller <marc-philippe.fuller@intel.com>

* Update build.yaml

fix: syntax error

* Update build.yaml

fix: Remove OVMS server build

* feat: add platform and CPU_ONLY parameters to enable smoke testing on all platform types (#136)

* fix: add platform and CPU_ONLY parameters to enable smoke testing on all platform types

---------

Signed-off-by: Brian McGinn <brian.mcginn@intel.com>
Co-authored-by: Jim Wang @ Intel <yutsung.jim.wang@intel.com>

* fix: resolve hadolint issues (#140)

Signed-off-by: Marc-Philippe Fuller <marc-philippe.fuller@intel.com>

* Update reviewdog.yaml

fix:  Create directory for storing results for artifact

* Update reviewdog.yaml

fix: use absolute path

* Update reviewdog.yaml

* Update reviewdog.yaml

* fix: Save scan results to artifact (#141)

Signed-off-by: Marc-Philippe Fuller <marc-philippe.fuller@intel.com>

* fix: resolve the make build-ovms-server rebuild issue (#143)

Fixes: #142

Signed-off-by: Jim Wang <yutsung.jim.wang@intel.com>

* fix: move sample output label to correct location and fix json format (#146)

Signed-off-by: Brian McGinn <brian.mcginn@intel.com>

* docs: point model list to github model file (#148)

Signed-off-by: Brian McGinn <brian.mcginn@intel.com>

* fix: update the model list URL in pipelinesetup.md

Signed-off-by: Jim Wang <yutsung.jim.wang@intel.com>

---------

Signed-off-by: Valina Li <valina.li@intel.com>
Signed-off-by: Jubilee Steinbrink <jubilee.steinbrink@intel.com>
Signed-off-by: Jim Wang <yutsung.jim.wang@intel.com>
Signed-off-by: Marc-Philippe Fuller <marc-philippe.fuller@intel.com>
Signed-off-by: Brian McGinn <brian.mcginn@intel.com>
Co-authored-by: Jubilee Steinbrink <jubilee.steinbrink@intel.com>
Co-authored-by: Valina Li <valina.li@intel.com>
Co-authored-by: brian-intel <brian.mcginn@intel.com>
Co-authored-by: Marc-Philippe Fuller <marc-philippe.fuller@intel.com>
brian-intel added a commit that referenced this issue Jul 12, 2023
* feat: temp pythonStream.py example

Signed-off-by: Brian McGinn <brian.mcginn@intel.com>

* feat: Add grpc python files

* fix: change to rtsp stream

* save to jpg then read

* encode instead of readfile

* move code to grpc python file

Signed-off-by: Brian McGinn <brian.mcginn@intel.com>

* fix: unit tests

Signed-off-by: Brian McGinn <brian.mcginn@intel.com>

* feat: separate postprocess from the main grpc python

* fix: output to file

* feat: handle multiple servers and pass through inputsrc

Signed-off-by: Brian McGinn <brian.mcginn@intel.com>

* merge updates

* fix: PR merge

* fix: PR updates

Signed-off-by: Brian McGinn <brian.mcginn@intel.com>

* fix: Go env

* fix: Add try loop around frame read. Ignore exceptions for now

Signed-off-by: Brian McGinn <brian.mcginn@intel.com>

* fix: Stream Density for OVMS

Signed-off-by: Brian McGinn <brian.mcginn@intel.com>

* fix: Bash command

* Update configs/opencv-ovms/scripts/grpc_python.py

---------

Signed-off-by: Valina Li <valina.li@intel.com>
Signed-off-by: Jubilee Steinbrink <jubilee.steinbrink@intel.com>
Signed-off-by: Jim Wang <yutsung.jim.wang@intel.com>
Signed-off-by: Marc-Philippe Fuller <marc-philippe.fuller@intel.com>
Signed-off-by: Brian McGinn <brian.mcginn@intel.com>
Co-authored-by: Jubilee Steinbrink <jubilee.steinbrink@intel.com>
Co-authored-by: Valina Li <valina.li@intel.com>
Co-authored-by: Jim Wang @ Intel <yutsung.jim.wang@intel.com>
Co-authored-by: Marc-Philippe Fuller <marc-philippe.fuller@intel.com>