[CPU][DOCS] Remove recommendation to use partially defined shapes (#1…
maxnick authored Dec 19, 2022
1 parent b2feb56 commit 3111e23
Showing 3 changed files with 0 additions and 36 deletions.
21 changes: 0 additions & 21 deletions docs/OV_Runtime_UG/supported_plugins/CPU.md
@@ -136,27 +136,6 @@ CPU provides full functional support for models with dynamic shapes in terms of

> **NOTE**: The CPU plugin does not support tensors with dynamically changing rank. An attempt to infer a model with such tensors will raise an exception.
Dynamic shapes support introduces additional overhead on memory management and may limit internal runtime optimizations.
The more degrees of freedom are used, the more difficult it is to achieve the best performance.
The most flexible configuration, and the most convenient approach, is the fully undefined shape, which means that no constraints on the shape dimensions are applied.
However, reducing the level of uncertainty results in performance gains.
You can reduce memory consumption through memory reuse, achieving better cache locality and increasing inference performance. To do so, set dynamic shapes explicitly, with defined upper bounds.

@sphinxtabset

@sphinxtab{C++}
@snippet docs/snippets/cpu/dynamic_shape.cpp defined_upper_bound
@endsphinxtab

@sphinxtab{Python}
@snippet docs/snippets/cpu/dynamic_shape.py defined_upper_bound
@endsphinxtab

@endsphinxtabset

> **NOTE**: Using fully undefined shapes may result in significantly higher memory consumption compared to inferring the same model with static shapes.
> If memory consumption is unacceptable but dynamic shapes are still required, the model can be reshaped using shapes with defined upper bounds to reduce memory footprint.
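The memory-reuse argument above can be illustrated without OpenVINO: each bounded dynamic dimension is a (min, max) interval, so a worst-case buffer can be sized once up front and reused across inference calls. A toy sketch, using the same bounds as the removed snippet (the helper names are hypothetical, not OpenVINO API):

```python
def shape_within_bounds(shape, bounds):
    """Check that every concrete dimension falls inside its (min, max) bound."""
    return len(shape) == len(bounds) and all(
        lo <= d <= hi for d, (lo, hi) in zip(shape, bounds)
    )

def max_element_count(bounds):
    """Upper bound on tensor size, usable for one-time buffer preallocation."""
    n = 1
    for _, hi in bounds:
        n *= hi
    return n

# Same (min, max) bounds per dimension as in the removed defined_upper_bound snippet.
bounds = [(1, 10), (1, 20), (1, 30), (1, 40)]
```

With fully undefined shapes there is no such upper bound, so buffers may have to grow on demand, which is what drives the higher memory consumption noted above.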
Some runtime optimizations work better if the model shapes are known in advance.
Therefore, if the input data shape does not change between inference calls, it is recommended to use a model with static shapes, or to reshape the existing model to the static input shape, to get the best performance.
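Why static shapes help can be sketched in a few lines: when the shape never changes, per-shape work (buffer sizing, kernel selection) can be computed once and then reused on every call. A toy illustration of shape-keyed caching (hypothetical, not OpenVINO internals):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def plan_for_shape(shape):
    # Stand-in for expensive per-shape work (buffer sizing, kernel choice).
    elements = 1
    for d in shape:
        elements *= d
    return {"shape": shape, "elements": elements}

# With a static shape, the plan is computed once and then reused.
first = plan_for_shape((1, 3, 224, 224))
again = plan_for_shape((1, 3, 224, 224))
assert again is first  # cache hit: the same plan object is reused
```

With dynamic shapes, each new concrete shape would miss this cache and pay the planning cost again, which is the overhead the paragraph above describes.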

9 changes: 0 additions & 9 deletions docs/snippets/cpu/dynamic_shape.cpp
@@ -2,15 +2,6 @@


int main() {
{
//! [defined_upper_bound]
ov::Core core;
auto model = core.read_model("model.xml");

model->reshape({{ov::Dimension(1, 10), ov::Dimension(1, 20), ov::Dimension(1, 30), ov::Dimension(1, 40)}});
//! [defined_upper_bound]
}

{
//! [static_shape]
ov::Core core;
6 changes: 0 additions & 6 deletions docs/snippets/cpu/dynamic_shape.py
@@ -4,12 +4,6 @@

from openvino.runtime import Core

#! [defined_upper_bound]
core = Core()
model = core.read_model("model.xml")
model.reshape([(1, 10), (1, 20), (1, 30), (1, 40)])
#! [defined_upper_bound]

#! [static_shape]
core = Core()
model = core.read_model("model.xml")
Expand Down
