Merge branch 'master' into wheels_cmake_fixes
mryzhov committed Jun 24, 2024
2 parents 1bf6741 + 2878545 commit b90c807
Showing 47 changed files with 1,491 additions and 88 deletions.
3 changes: 2 additions & 1 deletion .github/CODEOWNERS
@@ -88,6 +88,7 @@
/src/frontends/tensorflow_common/ @openvinotoolkit/openvino-tf-frontend-maintainers
/src/frontends/tensorflow_lite/ @openvinotoolkit/openvino-tf-frontend-maintainers
/src/frontends/pytorch/ @openvinotoolkit/openvino-pytorch-frontend-maintainers
+/src/frontends/jax/ @openvinotoolkit/openvino-jax-frontend-maintainers

# OpenVINO ONNX Frontend:
/src/frontends/onnx/ @openvinotoolkit/openvino-onnx-frontend-maintainers
@@ -99,7 +100,7 @@
/tests/layer_tests/ @openvinotoolkit/openvino-tests-maintainers @openvinotoolkit/openvino-mo-maintainers
/tests/layer_tests/pytorch_tests/ @openvinotoolkit/openvino-pytorch-frontend-maintainers
/tests/layer_tests/tensorflow_tests @openvinotoolkit/openvino-tf-frontend-maintainers
-/tests/layer_tests/jax_tests @openvinotoolkit/openvino-tf-frontend-maintainers
+/tests/layer_tests/jax_tests @openvinotoolkit/openvino-tf-frontend-maintainers @openvinotoolkit/openvino-jax-frontend-maintainers
/tests/model_hub_tests @openvinotoolkit/openvino-tf-frontend-maintainers
/tests/model_hub_tests/pytorch @openvinotoolkit/openvino-pytorch-frontend-maintainers

10 changes: 10 additions & 0 deletions .github/components.yml
@@ -13,6 +13,7 @@ LP_transformations:
- TFL_FE
- ONNX_FE
- PDPD_FE
+- JAX_FE

preprocessing:
revalidate:
@@ -32,6 +33,7 @@ CPU:
- PyTorch_FE
- TF_FE
- ONNX_FE
+- JAX_FE
build:
- AUTO
- HETERO
@@ -141,6 +143,13 @@ PyTorch_FE:
- Python_API
- TOKENIZERS

+JAX_FE:
+ revalidate:
+ - MO
+ build:
+ - CPU
+ - Python_API

C_API:
build:
- CPU
@@ -167,6 +176,7 @@ Python_API:
- TF_FE
- TFL_FE
- PyTorch_FE
+- JAX_FE

JS_API:
build:
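The `components.yml` entries above map each component (such as the new `JAX_FE`) to the components that must be rebuilt or revalidated when it changes. As a rough illustration of how CI might consume such a mapping — the dictionary literal and function below are a sketch mirroring the hunk above, not the actual workflow code:

```python
# Illustrative subset of .github/components.yml, expressed as a dict.
# The real CI parses the YAML file; this literal only mirrors the hunk above.
components = {
    "JAX_FE": {"revalidate": ["MO"], "build": ["CPU", "Python_API"]},
}


def impacted(changed_component):
    """Return which components to revalidate and build for a given change."""
    scope = components.get(changed_component, {})
    return {
        "revalidate": scope.get("revalidate", []),
        "build": scope.get("build", []),
    }


print(impacted("JAX_FE"))
```

So a change touching `JAX_FE` would trigger revalidation of `MO` and builds of `CPU` and `Python_API`; a component with no entry triggers nothing extra.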
4 changes: 4 additions & 0 deletions .github/labeler.yml
@@ -161,6 +161,10 @@
- any: ['tests/model_hub_tests/**',
'!tests/model_hub_tests/tensorflow/**/*']

+'category: JAX FE':
+- 'src/frontends/jax/**/*'
+- 'tests/layer_tests/jax_tests/**/*'

'category: tools':
- any: ['tools/**',
'!tools/mo/**/*',
2 changes: 1 addition & 1 deletion .github/workflows/code_style.yml
@@ -60,7 +60,7 @@ jobs:
# always provide suggestions even for skipped scripts in ov_shellcheck target
- name: ShellCheck action
if: always()
-uses: reviewdog/action-shellcheck@3546242c869924d13293e38e6289e00a26468e02 # v1.22.0
+uses: reviewdog/action-shellcheck@52f34f737a16c65b8caa8c51ae1b23036afe5685 # v1.23.0
with:
level: style
reporter: github-pr-review
@@ -61,15 +61,14 @@ How AUTO Works
##############

To put it simply, when loading the model to the first device on the list fails, AUTO will try to load it to the next device in line, until one of them succeeds.
-What is important, **AUTO starts inference with the CPU of the system by default**, as it provides very low latency and can start inference with no additional delays.
+What is important, **AUTO starts inference with the CPU of the system by default, unless there is a model cached for the best-suited device**, as it provides very low latency and can start inference with no additional delays.
While the CPU is performing inference, AUTO continues to load the model to the device best suited for the purpose and transfers the task to it when ready.
This way, the devices which are much slower in compiling models, GPU being the best example, do not impact inference at its initial stages.
For example, if you use a CPU and a GPU, the first-inference latency of AUTO will be better than that of using GPU alone.

Note that if you choose to exclude CPU from the priority list or disable the initial
CPU acceleration feature via ``ov::intel_auto::enable_startup_fallback``, it will be
-unable to support the initial model compilation stage. The models with dynamic
-input/output or :doc:`stateful operations <../stateful-models>`
+unable to support the initial model compilation stage. The models with :doc:`stateful operations <../stateful-models>`
will be loaded to the CPU if it is in the candidate list. Otherwise,
these models will follow the normal flow and be loaded to the device based on priority.
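The fallback behaviour this documentation describes — try each candidate device in priority order until one succeeds at loading the model — can be sketched in plain Python. This is an illustrative sketch of the selection logic only, not the AUTO plugin's actual code, and the device names and loader are made up; real code would go through the OpenVINO runtime API:

```python
def load_with_fallback(model, device_priority, try_load):
    """Try each device in priority order; return (device, compiled_model)
    for the first device that loads the model successfully.

    Illustrative sketch of the AUTO fallback order, not the plugin itself.
    """
    for device in device_priority:
        try:
            return device, try_load(model, device)
        except RuntimeError:
            continue  # this device failed; fall through to the next candidate
    raise RuntimeError("no device in the candidate list could load the model")


def fake_loader(model, device):
    # Hypothetical loader: pretend GPU compilation fails for this model.
    if device == "GPU":
        raise RuntimeError("compilation failed")
    return f"{model}-compiled-for-{device}"


device, compiled = load_with_fallback("net", ["GPU", "CPU"], fake_loader)
print(device)  # CPU
```

Disabling `ov::intel_auto::enable_startup_fallback` affects a separate mechanism (the temporary CPU inference while the preferred device compiles); the priority-order fallback above still applies.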

2 changes: 1 addition & 1 deletion docs/dev/build_windows.md
@@ -62,7 +62,7 @@ Supported configurations:
3. After the build process finishes, export the newly built Python libraries to the user environment variables:
```
set PYTHONPATH=<openvino_repo>/bin/<arch>/Release/python;%PYTHONPATH%
-set OPENVINO_LIB_PATHS=<openvino_repo>/bin/<arch>/Release;%OPENVINO_LIB_PATH%
+set OPENVINO_LIB_PATHS=<openvino_repo>/bin/<arch>/Release;<openvino_repo>/temp/tbb/bin
```
or install the wheel with pip:
```
Expand Down