Fixing broken links (#2406)
sgolebiewski-intel committed May 23, 2024
1 parent df5829d commit c8e0243
Showing 7 changed files with 96 additions and 95 deletions.
14 changes: 7 additions & 7 deletions demos/real_time_stream_analysis/python/README.md
@@ -2,10 +2,10 @@
## Overview

This demo shows how to write an application that runs AI analysis using OpenVINO Model Server.
In video analysis we can deal with various forms of source content. Here, you will see how to
take the video source from a local USB camera, a saved encoded video file, or an encoded video stream.

The client application is expected to read the video source and send every frame for analysis to the OpenVINO Model Server via a gRPC connection. The analysis can be fully delegated to the model server endpoint, with the
complete processing pipeline arranged via a [MediaPipe graph](../../../docs/mediapipe.md) or [DAG](../../../docs/dag_scheduler.md). The remote analysis can also be reduced to just inference execution, but in that case the video frame preprocessing and the postprocessing of the results must be implemented on the client side.

In this demo, reading the video content from a local USB camera or an encoded video file is straightforward using the OpenCV library, as sketched below. The use case with an encoded network stream might require more explanation.
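For reference, a minimal OpenCV capture sketch (not part of the original demo code); the device index and file name are placeholders:

```python
import cv2

# Open a local USB camera (index 0) or an encoded video file given by path.
source = 0  # e.g. "video.mp4" for a file
capture = cv2.VideoCapture(source)

while capture.isOpened():
    success, frame = capture.read()  # frame is a BGR numpy array
    if not success:
        break
    # ... send the frame for inference here ...

capture.release()
```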
@@ -29,12 +29,12 @@ In the demo will be used two gRPC communication patterns which might be advantag

## gRPC streaming with MediaPipe graphs

A gRPC stream connection is allowed for served [MediaPipe graphs](../../../docs/mediapipe.md). It allows sending asynchronous calls to the endpoint, all linked in a single session context. Responses are sent back via a stream and processed in a callback function.
The helper class [StreamClient](https://github.com/openvinotoolkit/model_server/blob/releases/2024/0/demos/common/stream_client/stream_client.py) provides a mechanism for flow control and for tracking the sequence of requests and responses. In the StreamClient initialization, the streaming mode is set via the parameter `streaming_api=True`.
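A hedged sketch of how this flow might be wired on the client side is shown below; the constructor arguments, the `send` method, and the callback signature are illustrative assumptions — check the StreamClient source linked above for the actual interface.

```python
import cv2
from stream_client import StreamClient  # helper from demos/common/stream_client (assumed import path)

def on_response(response):
    # Called from the response stream; postprocess or display the result here.
    print("received a response for one frame")

# streaming_api=True enables the gRPC streaming mode mentioned above;
# the remaining arguments are placeholders, not the verified signature.
client = StreamClient(grpc_address="localhost:9000", callback=on_response, streaming_api=True)

capture = cv2.VideoCapture("rtsp://localhost:8080/channel1")
while capture.isOpened():
    success, frame = capture.read()
    if not success:
        break
    client.send(frame)  # hypothetical method: asynchronous call, responses arrive in on_response

capture.release()
```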

Using the streaming API has the following advantages:
- good performance thanks to asynchronous calls and sharing the graph execution for multiple calls
- support for stateful pipelines, like object tracking, where the response depends on the sequence of requests


### Preparing the model server for gRPC streaming with a Holistic graph
@@ -66,7 +66,7 @@ For the use case with the RTSP client, also install the FFMPEG component on the host.
Alternatively, build a Docker image with the client using the following command:
```bash
docker build ../../common/stream_client/ -t rtsp_client
```

Client parameters:
```bash
@@ -136,7 +136,7 @@ ffmpeg -stream_loop -1 -i ./video.mp4 -f rtsp -rtsp_transport tcp rtsp://localho
ffmpeg -f dshow -i video="HP HD Camera" -f rtsp -rtsp_transport tcp rtsp://localhost:8080/channel1
```
While the RTSP stream is active, run the client to read it and send the output stream:
```bash
python3 client.py --grpc_address localhost:9000 --input_stream 'rtsp://localhost:8080/channel1' --output_stream 'rtsp://localhost:8080/channel2'
```
6 changes: 3 additions & 3 deletions docs/binary_input.md
@@ -12,7 +12,7 @@ ovms_docs_binary_input_kfs
ovms_docs_demo_tensorflow_conversion
```

For images, to reduce data size and lower bandwidth usage, you can send them binary-encoded instead of in an array-like format. How you can do it depends on the kind of servable.

**Single Models and DAG Pipelines**:

@@ -23,8 +23,8 @@ automatically from JPEG/PNG to OpenVINO friendly format using built-in [OpenCV](
- [TensorFlow Serving API](./binary_input_tfs.md)
- [KServe API](./binary_input_kfs.md)

It's worth noting that with the KServe API, you can also send raw data, with or without image encoding, via the REST API. This makes the KServe REST API a more performant choice compared to the JSON format in the TFS API. The guide linked above explains how to work both with regular data in binary format and with JPEG/PNG encoded images.
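As a rough illustration of the binary route, here is a hedged sketch that sends a JPEG file in the TFS-style REST row format using base64 encoding; the address, model name (`resnet`), and the single-input assumption are placeholders — see the guides linked above for the exact request shapes of each API.

```python
import base64
import json

import requests  # third-party HTTP client

# Read the encoded image and wrap it in the TFS row format with a "b64" field
# instead of a (much larger) numeric array.
with open("image.jpg", "rb") as f:
    encoded = base64.b64encode(f.read()).decode("utf-8")

payload = {"instances": [{"b64": encoded}]}
response = requests.post(
    "http://localhost:8000/v1/models/resnet:predict",  # placeholder REST address and model name
    data=json.dumps(payload),
)
print(response.json())
```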

**MediaPipe Graphs**:

When serving a MediaPipe graph, it is possible to configure it to accept binary encoded images. You can either create your own calculator that implements image decoding and use it in the graph, or use `PythonExecutorCalculator` and implement decoding in the Python [execute function](./python_support/reference.md#ovmspythonmodel-class).
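A hedged sketch of what such decoding in a Python node could look like; the `OvmsPythonModel`/`Tensor` usage follows the pattern from the python_support reference, but treat the exact signatures, the input layout, and the output name as assumptions to verify against that document.

```python
import cv2
import numpy as np
from pyovms import Tensor  # available inside the OVMS Python calculator environment


class OvmsPythonModel:
    def execute(self, inputs):
        # Assumption: a single input tensor carries the raw JPEG/PNG bytes.
        encoded = np.frombuffer(bytes(inputs[0]), dtype=np.uint8)
        image = cv2.imdecode(encoded, cv2.IMREAD_COLOR)  # decoded BGR array
        # ... preprocess or pass the decoded image downstream ...
        return [Tensor("image", image.tobytes())]  # output name is a placeholder
```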
2 changes: 1 addition & 1 deletion docs/mediapipe.md
@@ -200,7 +200,7 @@ Currently the graph tracing on the model server side is not supported. If you wo

### Benchmarking
Once you have implemented and deployed the graph, you have several options to test its performance.
To validate the throughput for unary requests, you can use the [benchmark client](../demos/benchmark/python/README.md#mediapipe-benchmarking).

For streaming gRPC connections, the [rtps_client](../demos/mediapipe/holistic_tracking#rtsp-client) is available.
It can generate load on the gRPC stream and the MediaPipe graph based on content from an RTSP video stream, an MP4 file, or a local camera.
110 changes: 55 additions & 55 deletions docs/model_server_rest_api_tfs.md
@@ -55,10 +55,10 @@ $ curl http://localhost:8001/v1/models/person-detection/versions/1
{
  'model_version_status':[
    {
      'version': '1',
      'state': 'AVAILABLE',
      'status': {
        'error_code': 'OK',
        'error_message': ''
      }
    }
@@ -172,7 +172,7 @@ POST http://${REST_URL}:${REST_PORT}/v1/models/${MODEL_NAME}/versions/${MODEL_VE
"instances": <value>|<(nested)list>|<list-of-objects>
"inputs": <value>|<(nested)list>|<object>
}
```

Read [How to specify input tensors in row format](https://www.tensorflow.org/tfx/serving/api_rest#specifying_input_tensors_in_row_format) and [How to specify input tensors in column format](https://www.tensorflow.org/tfx/serving/api_rest#specifying_input_tensors_in_column_format) for more details.
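For a concrete sense of the row format, a minimal hedged `curl` sketch against a hypothetical model named `my_model` with a single numeric input (add the `versions/...` segment from the URL pattern above to pin a version):

```bash
curl -X POST http://${REST_URL}:${REST_PORT}/v1/models/my_model:predict \
  -H "Content-Type: application/json" \
  -d '{"instances": [[1.0, 2.0, 5.0]]}'
```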

@@ -220,7 +220,7 @@ Read more about [Predict API usage](https://github.com/openvinotoolkit/model_ser
Sends requests via the RESTful API to trigger config reloading and gets the statuses of models and [DAGs](./dag_scheduler.md) as a response. This endpoint can be used with automatic config reload disabled, to ensure configuration changes are applied at a specific time and to get confirmation of the reload operation status. Typically, this option is used when OVMS is started with the parameter `--file_system_poll_wait_seconds 0`.
The reload operation does not pass a new configuration to the OVMS server. The configuration file changes need to be applied by the OVMS administrator. The REST API call just initiates applying the configuration file which is already present.
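For context, a hedged example of starting the server with automatic polling disabled, so that configuration changes take effect only through this endpoint; the mount paths and ports are placeholders:

```bash
docker run -d --rm -v ${PWD}/config.json:/config.json -p 8000:8000 \
  openvino/model_server:latest \
  --config_path /config.json --rest_port 8000 --file_system_poll_wait_seconds 0
```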

**URL**
```
POST http://${REST_URL}:${REST_PORT}/v1/config/reload
```
@@ -243,37 +243,37 @@ curl --request POST http://${REST_URL}:${REST_PORT}/v1/config/reload

**Response**

In case of config reload success, the response contains JSON with aggregation of getModelStatus responses for all models and DAGs after reload is finished, along with operation status:
```JSON
{
  "<model name>":
  {
    "model_version_status": [
      {
        "version": <model version>|<string>,
        "state": <model state>|<string>,
        "status":
        {
          "error_code": <error code>|<string>,
          "error_message": <error message>|<string>
        }
      },
      ...
    ]
  },
  ...
}
```

In case of any failure during execution:

```JSON
{
  "error": <error message>|<string>
}
```
When an operation succeeds, the HTTP response status code is:
- `201` when the config (config file or model version) was reloaded
- `200` when the reload was not required, was already applied, or OVMS was started in single model mode

When an operation fails, another status code is returned.
@@ -313,54 +313,54 @@ Possible messages returned on error:
}
```

Even if the reload of one model failed, others may be working properly. To check the state of loaded models, use the [Config Status API](#config-status). To detect the exact cause of the errors described above, analyzing the server logs may be necessary.

## Config Status API <a name="config-status"></a>
**Description**

Sends requests via the RESTful API to get a response that contains an aggregation of getModelStatus responses for all models and [DAGs](./dag_scheduler.md).

**URL**
```
GET http://${REST_URL}:${REST_PORT}/v1/config
```
**Request**
To trigger this API, an HTTP GET request should be sent to the given URL. Example `curl` command:

```
curl --request GET http://${REST_URL}:${REST_PORT}/v1/config
```

**Response**
In case of success, the response contains JSON with aggregation of getModelStatus responses for all models and DAGs, along with operation status:

```JSON
{
  "<model name>":
  {
    "model_version_status": [
      {
        "version": <model version>|<string>,
        "state": <model state>|<string>,
        "status":
        {
          "error_code": <error code>|<string>,
          "error_message": <error message>|<string>
        }
      },
      ...
    ]
  },
  ...
}
```

In case of any failure during execution:

```JSON
{
  "error": <error message>|<string>
}
```
When the operation succeeds, the HTTP response status code is 200; otherwise, another code is returned.
Possible messages returned on error:
19 changes: 10 additions & 9 deletions docs/ovms_quickstart.md
@@ -1,6 +1,6 @@
# Quickstart Guide {#ovms_docs_quick_start_guide}

OpenVINO Model Server can perform inference using pre-trained models in either [OpenVINO IR](https://docs.openvino.ai/2024/documentation/openvino-ir-format/operation-sets.html), [ONNX](https://onnx.ai/), [PaddlePaddle](https://github.com/PaddlePaddle/Paddle) or [TensorFlow](https://www.tensorflow.org/) format. You can get them by:

- downloading models from [Open Model Zoo](https://storage.openvinotoolkit.org/repositories/open_model_zoo/)
@@ -24,12 +24,12 @@ To quickly start using OpenVINO™ Model Server follow these steps:

### Step 1: Prepare Docker

[Install Docker Engine](https://docs.docker.com/engine/install/), including its [post-installation steps](https://docs.docker.com/engine/install/linux-postinstall/), on your development system.
To verify installation, test it using the following command. If it displays a test image and a message, it is ready.

``` bash
$ docker run hello-world
```

### Step 2: Download the Model Server
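The download command itself is cut off in this excerpt; a typical pull of the public image (the tag is an assumption) looks like:

```bash
docker pull openvino/model_server:latest
```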

@@ -49,7 +49,7 @@ wget https://storage.googleapis.com/tfhub-modules/tensorflow/faster_rcnn/resnet5
tar xzf 1.tar.gz -C model/1
```

OpenVINO Model Server expects a particular folder structure for models - in this case the `model` directory has the following content:
```bash
model
└── 1
@@ -59,11 +59,11 @@ model
└── variables.index
```

Sub-folder `1` indicates the version of the model. If you want to upgrade the model, other versions can be added in separate subfolders (2,3...).
For more information about the directory structure and how to deploy multiple models at a time, check out the following sections:
- [Preparing models](models_repository.md)
- [Serving models](starting_server.md)
- [Serving multiple model versions](model_version_policy.md)

### Step 4: Start the Model Server Container
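The full command for this step is cut off in this excerpt; a representative sketch, assuming the `model` directory prepared in Step 3, a model name of `faster_rcnn`, and gRPC port 9000 (adjust to your setup):

```bash
docker run -d -u $(id -u) --rm \
  -v ${PWD}/model:/models/faster_rcnn/ \
  -p 9000:9000 openvino/model_server:latest \
  --model_name faster_rcnn --model_path /models/faster_rcnn \
  --port 9000
```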

@@ -107,12 +107,13 @@ python3 object_detection.py --image coco_bike.jpg --output output.jpg --service_

### Step 8: Review the Results

In the current folder, you can find files containing inference results.
In our case, it will be a modified input image with bounding boxes indicating detected objects and their labels.

![Inference results](quickstart_result.jpeg)

> **Note**: Similar steps can be performed with other model formats. Check the [ONNX use case example](../demos/using_onnx_model/python/README.md),
[TensorFlow classification model demo](../demos/image_classification_using_tf_model/python/README.md)
or [PaddlePaddle model demo](../demos/segmentation_using_paddlepaddle_model/python/README.md).

Congratulations, you have completed the QuickStart guide. Try other Model Server [demos](../demos/README.md) or explore more [features](features.md) to create your application.
