From bcf17c385a38b0dcaf9bc0eb283ce71d969f0f65 Mon Sep 17 00:00:00 2001 From: Alex Cureton-Griffiths Date: Mon, 10 Oct 2022 12:22:30 +0200 Subject: [PATCH] docs(flow): clean up (#5255) --- docs/fundamentals/flow/add-executors.md | 120 +++++++++--------- docs/fundamentals/flow/create-flow.md | 43 +++---- docs/fundamentals/flow/health-check.md | 77 ++++++----- docs/fundamentals/flow/index.md | 25 ++-- docs/fundamentals/flow/monitoring-flow.md | 90 +++++++------ docs/fundamentals/flow/topologies.md | 108 ++++++++-------- .../fundamentals/flow/when-things-go-wrong.md | 83 ++++++------ docs/fundamentals/flow/yaml-spec.md | 18 +-- 8 files changed, 271 insertions(+), 293 deletions(-) diff --git a/docs/fundamentals/flow/add-executors.md b/docs/fundamentals/flow/add-executors.md index 1663e16cfb9d3..4010a558dd37d 100644 --- a/docs/fundamentals/flow/add-executors.md +++ b/docs/fundamentals/flow/add-executors.md @@ -1,13 +1,13 @@ (flow-add-executors)= # Add Executors -A {class}`~jina.Flow` orchestrates its {class}`~jina.Executor`s as a graph and will send requests to all Executors in the order specified by {meth}`~jina.Flow.add` or listed in {ref}`a YAML file`. +A {class}`~jina.Flow` orchestrates its {class}`~jina.Executor`s as a graph and sends requests to all Executors in the order specified by {meth}`~jina.Flow.add` or listed in {ref}`a YAML file`. -When you start a Flow, the Executor will always be running in a **separate process**. Multiple Executors will be running in **different processes**. Multiprocessing is the lowest level of separation when you run a Flow locally. When running a Flow on Kubernetes, Docker Swarm, {ref}`jcloud`, different Executors are running in different containers, pods or instances. +When you start a Flow, Executors always run in **separate processes**. Multiple Executors run in **different processes**. Multiprocessing is the lowest level of separation when you run a Flow locally. When running a Flow on Kubernetes, Docker Swarm, {ref}`jcloud`, different Executors run in different containers, pods or instances. ## Add Executors -Executors can be added into a Flow via {meth}`~jina.Flow.add`. +Executors can be added into a Flow with {meth}`~jina.Flow.add`. ```python from jina import Flow @@ -15,14 +15,13 @@ from jina import Flow f = Flow().add() ``` -This will add a "no-op" Executor called {class}`~jina.Executor.BaseExecutor` to the Flow. +This adds an "empty" Executor called {class}`~jina.Executor.BaseExecutor` to the Flow. This Executor (without any parameters) performs no actions. ```{figure} no-op-flow.svg :scale: 70% ``` - -To better identify and executor, you can change its name by passing the `name` parameter: +To more easily identify an Executor, you can change its name by passing the `name` parameter: ```python from jina import Flow @@ -35,7 +34,7 @@ f = Flow().add(name='myVeryFirstExecutor').add(name='secondIsBest') :scale: 70% ``` -The above Flow can be also defined via YAML: +You can also define the above Flow in YAML: ```yaml jtype: Flow @@ -44,7 +43,7 @@ executors: - name: secondIsBest ``` -Save it as `flow.yml` and run it via: +Save it as `flow.yml` and run it: ```bash jina flow --uses flow.yml @@ -53,26 +52,25 @@ jina flow --uses flow.yml More Flow YAML specifications can be found in {ref}`Flow YAML Specification`. -## Define Executor types via `uses` +## Define Executor types with `uses` -The type of {class}`~jina.Executor` is defined by the `uses` keyword. `uses` accepts a wide range of Executor. 
Please also beware that some usages are not support on JCloud because of security reasons and their nature of facilitating local debugging.
+An {class}`~jina.Executor`'s type is defined by the `uses` keyword. Note that some usages are not supported on JCloud, both for security reasons and because they exist to facilitate local debugging.

| Local Dev | JCloud | `.add(uses=...)` | Description |
|-----------|--------|-----------------------------------------------|-----------------------------------------------------------------------------------------------------------|
-| ✅ | ❌ | `ExecutorClass` | use `ExecutorClass` from the inline context. |
-| ✅ | ❌ | `'my.py_modules.ExecutorClass'` | use `ExecutorClass` from `my.py_modules`. |
-| ✅ | ✅ | `'executor-config.yml'` | use an Executor from a YAML file defined by {ref}`Executor YAML interface `. |
-| ✅ | ❌ | `'jinahub://TransformerTorchEncoder/'` | use an Executor as Python source from Jina Hub. |
-| ✅ | ✅ | `'jinahub+docker://TransformerTorchEncoder'` | use an Executor as a Docker container from Jina Hub. |
-| ✅ | ✅ | `'jinahub+sandbox://TransformerTorchEncoder'` | use a {ref}`Sandbox Executor ` hosted on Jina Hub. The Executor is running remotely on Jina Hub. |
-| ✅ | ❌ | `'docker://sentence-encoder'` | use a pre-built Executor as a Docker container. |
+| ✅ | ❌ | `ExecutorClass` | Use `ExecutorClass` from the inline context. |
+| ✅ | ❌ | `'my.py_modules.ExecutorClass'` | Use `ExecutorClass` from `my.py_modules`. |
+| ✅ | ✅ | `'executor-config.yml'` | Use an Executor from a YAML file defined by {ref}`Executor YAML interface `. |
+| ✅ | ❌ | `'jinahub://TransformerTorchEncoder/'` | Use an Executor as Python source from Jina Hub. |
+| ✅ | ✅ | `'jinahub+docker://TransformerTorchEncoder'` | Use an Executor as a Docker container from Jina Hub. |
+| ✅ | ✅ | `'jinahub+sandbox://TransformerTorchEncoder'` | Use a {ref}`Sandbox Executor ` hosted on Jina Hub. The Executor runs remotely on Jina Hub. |
+| ✅ | ❌ | `'docker://sentence-encoder'` | Use a pre-built Executor as a Docker container. |

````{admonition} Hint: Load multiple Executors from the same directory
:class: hint

-If you want to load multiple Executor YAMLs from the same directory, you don't need to specify the parent directory for
-each Executor.
+You don't need to specify the parent directory for each Executor.

Instead, you can configure a common search path for all Executors:

```
.
├── app
│   └── main.py
└── executor
    ├── config1.yml
    └── config2.yml
```

```python
f = Flow(extra_search_paths=['../executor']).add(uses='config1.yml').add(uses='config2.yml')
```

````

````{admonition} How-To chapter
:class: seealso
For a more detailed look at this feature, see our {ref}`how-to on external Executors `.
-The how-to also covers how to launch an Executor that can then be used as an External Executor in a Flow.
+It also covers how to launch an Executor that can then be used as an external Executor in a Flow.
````

-Usually a Flow starts and stops all of its Executors.
-External Executors are not started and stopped by the current Flow object but by others, which means that they can reside on any machine.
+Usually a Flow starts and stops all of its own Executors.
+However, external Executors are started and stopped by *other* Flows, meaning they can reside on any machine.

-This is useful to share expensive Executors between Flows. Often these Executors are stateless, GPU based Encoders.
+This is useful for sharing expensive Executors (like stateless, GPU-based encoders) between Flows.

Both {ref}`served and shared Executors ` can be used as external Executors.
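For instance, a minimal sketch of serving an Executor in its own Flow so that other Flows can reference it as an external Executor (the Executor class and port here are illustrative):

```python
from jina import Executor, Flow, requests


class SharedEncoder(Executor):  # an illustrative, expensive Executor worth sharing
    @requests
    def encode(self, docs, **kwargs):
        ...


# serve the Executor on a fixed port; other Flows can now reference it
# with .add(host=..., port=12345, external=True), as shown below
f = Flow(port=12345).add(uses=SharedEncoder)
with f:
    f.block()
```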
-When you add such Executor to a Flow, you have to provide a `host` and `port`, and enable the `external` flag: +When you add an external Executor to a Flow, you have to provide a `host` and `port`, and enable the `external` flag: ```python from jina import Flow @@ -115,10 +113,9 @@ from jina import Flow Flow().add(host='123.45.67.89', port=12345, external=True) ``` -This is adding an external Executor to the Flow. -The Flow will not start or stop this Executor and assumes that it is externally managed and available at `123.45.67.89:12345`. +The Flow doesn't start or stop this Executor and assumes that it is externally managed and available at `123.45.67.89:12345`. -You can also use external Executors with `tls` enabled. +You can also use external Executors with `tls`: ```python from jina import Flow @@ -127,7 +124,7 @@ Flow().add(host='123.45.67.89', port=443, external=True, tls=True) ``` ```{hint} -Using `tls` to connect to the External Executor is especially needed if you want to use an external Executor deployed with JCloud. See the JCloud {ref}`documentation ` +Using `tls` to connect to the External Executor is especially needed to use an external Executor deployed with JCloud. See the JCloud {ref}`documentation ` for further details ``` @@ -135,14 +132,14 @@ for further details (floating-executors)= ## Floating Executors -Some Executors in your Flow may be used for asynchronous background tasks that can take some time and that do not generate a needed output. For instance, +Some Executors in your Flow can be used for asynchronous background tasks that take time and don't generate a required output. For instance, logging specific information in external services, storing partial results, etc. You can unblock your Flow from such tasks by using *floating Executors*. Normally, all Executors form a pipeline that handles and transforms a given request until it is finally returned to the Client. -However, floating Executors do not feed their outputs back to the pipeline. Therefore, this output will not form the response for the Client, and the response can be returned without waiting for the floating Executor to complete his task. +However, floating Executors do not feed their outputs back into the pipeline. Therefore, the Executor's output does not affect the response for the Client, and the response can be returned without waiting for the floating Executor to complete its task. Those Executors are marked with the `floating` keyword when added to a `Flow`: @@ -192,21 +189,20 @@ with f: Received ['Hello World', 'Hello World'] ``` -In this example you can see how the response is returned without waiting for the `floating` Executor to complete. However, the Flow is not closed until -the request has been handled also by it. - +In this example the response is returned without waiting for the floating Executor to complete. However, the Flow is not closed until +the floating Executor has handled the request. -You can plot the Flow and observe how the Executor is floating disconnected from the **Gateway**. +You can plot the Flow and see the Executor is floating disconnected from the **Gateway**. ```{figure} flow_floating.svg :width: 70% ``` -A floating Executor can never come before a non-floating Executor in the {ref}`topology ` of your Flow. +A floating Executor can *never* come before a non-floating Executor in your Flow's {ref}`topology `. 
This leads to the following behaviors: -- **Implicit reordering**: When adding a non-floating Executor after a floating Executor without specifying its `needs` parameter, the non-floating Executor is chained after the previous non-floating one. +- **Implicit reordering**: When you add a non-floating Executor after a floating Executor without specifying its `needs` parameter, the non-floating Executor is chained after the previous non-floating one. ```python from jina import Flow @@ -219,7 +215,7 @@ f.plot() ``` -- **Chaining floating Executors**: If you want to chain more than one floating Executor, you need to add all of them with the `floating` flag, and explicitly specify the `needs` argument. +- **Chaining floating Executors**: To chain more than one floating Executor, you need to add all of them with the `floating` flag, and explicitly specify the `needs` argument. ```python from jina import Flow @@ -233,7 +229,7 @@ f.plot() ``` -- **Overriding of `floating` flag**: If you try to add a floating Executor as part of `needs` parameter of a non-floating Executor, then the floating Executor is not considered floating anymore. +- **Overriding the `floating` flag**: If you add a floating Executor as part of `needs` parameter of a non-floating Executor, then the floating Executor is no longer considered floating. ```python from jina import Flow @@ -248,10 +244,10 @@ f.plot() ``` -## Config Executors -You can set and override {class}`~jina.Executor` configs when adding them into a {class}`~jina.Flow`. +## Configure Executors +You can set and override {class}`~jina.Executor` configuration when adding them to a {class}`~jina.Flow`. -This example shows how to start a Flow with an Executor via the Python API: +This example shows how to start a Flow with an Executor using the Python API: ```python from jina import Flow @@ -271,16 +267,16 @@ with Flow().add( ``` - `uses_with` is a key-value map that defines the {ref}`arguments of the Executor'` `__init__` method. -- `uses_requests` is a key-value map that defines the {ref}`mapping from endpoint to class method`. Useful if one needs to overwrite the default endpoint-to-method mapping defined in the Executor python implementation. -- `workspace` is a string value that defines the {ref}`workspace `. -- `py_modules` is a list of strings that defines the Python dependencies of the executor; -- `uses_metas` is a key-value map that defines some {ref}`internal attributes` of the Executor. It contains the following fields: - - `name` is a string that defines the name of the executor; - - `description` is a string that defines the description of this executor. It will be used in automatic docs UI; +- `uses_requests` is a key-value map that defines the {ref}`mapping from endpoint to class method`. This is useful to overwrite the default endpoint-to-method mapping defined in the Executor python implementation. +- `workspace` is a string that defines the {ref}`workspace `. +- `py_modules` is a list of strings that defines the Executor's Python dependencies; +- `uses_metas` is a key-value map that defines some of the Executor's {ref}`internal attributes`. It contains the following fields: + - `name` is a string that defines the name of the Executor; + - `description` is a string that defines the description of this Executor. It is used in the automatic docs UI; ### Set `with` via `uses_with` -To set/override the `with` configs of an executor, use `uses_with`. 
The `with` configuration refers to user-defined +To set/override an Executor's `with` configuration, use `uses_with`. The `with` configuration refers to user-defined constructor kwargs. ```python @@ -319,9 +315,13 @@ param3: 30 ``` ### Set `requests` via `uses_requests` -You can set/override the `requests` configuration of an executor and bind methods to endpoints that you provide. -In the following codes, we replace the endpoint `/foo` binded to the `foo()` function with both `/non_foo` and `/alias_foo`. -And add a new endpoint `/bar` for binding `bar()`. Note the `all_req()` function is binded to **all** the endpoints except those explicitly bound to other functions, i.e. `/non_foo`, `/alias_foo` and `/bar`. +You can set/override an Executor's `requests` configuration and bind methods to custom endpoints. +In the following code: + +- We replace the endpoint `/foo` bound to the `foo()` function with both `/non_foo` and `/alias_foo`. +- We add a new endpoint `/bar` for binding `bar()`. + +Note the `all_req()` function is bound to **all** endpoints except those explicitly bound to other functions, i.e. `/non_foo`, `/alias_foo` and `/bar`. ```python from jina import Executor, requests, Flow @@ -371,7 +371,7 @@ foo foo() ### Set `metas` via `uses_metas` -To set/override the `metas` configuration of an executor, use `uses_metas`: +To set/override an Executor's `metas` configuration, use `uses_metas`: ```python from jina import Executor, requests, Flow @@ -402,13 +402,13 @@ different_name ``` -## Unify NDArray types in output +## Unify output ndarray types -Different {class}`~jina.Executor`s in a {class}`~jina.Flow` may depend on slightly different `types` for array-like data such as `doc.tensor` and `doc.embedding`, -for example because they were written using different machine learning frameworks. -As the builder of a Flow you don't always have control over this, for example when using Executors from the Jina Hub. +Different {class}`~jina.Executor`s in a {class}`~jina.Flow` may depend on different `types` for array-like data such as `doc.tensor` and `doc.embedding`, +often because they were written with different machine learning frameworks. +As the builder of a Flow you don't always have control over this, for example when using Executors from Jina Hub. -In order to facilitate the integration between different Executors, the Flow allows you to convert `tensor` and `embedding` +To ease the integration of different Executors, a Flow allows you to convert `tensor` and `embedding` by using the `f.add(..., output_array_type=..)`: ```python @@ -418,13 +418,13 @@ f = Flow().add(uses=MyExecutor, output_array_type='numpy').add(uses=NeedsNumpyEx ``` This converts the `.tensor` and `.embedding` fields of all output Documents of `MyExecutor` to `numpy.ndarray`, making the data -usable by `NeedsNumpyExecutor`. This works regardless of whether MyExecutor populates these fields with arrays/tensors from +usable by `NeedsNumpyExecutor`. This works whether `MyExecutor` populates these fields with arrays/tensors from PyTorch, TensorFlow, or any other popular ML framework. ````{admonition} Output types :class: note -`output_array_type=` supports more types than `'numpy'`. For a full specification, and further details, take a look at the -documentation about [protobuf serialization](https://docarray.jina.ai/fundamentals/document/serialization/#from-to-protobuf). +`output_array_type=` supports more types than `'numpy'`. 
For the full specification and further details, check the +[protobuf serialization docs](https://docarray.jina.ai/fundamentals/document/serialization/#from-to-protobuf). ```` diff --git a/docs/fundamentals/flow/create-flow.md b/docs/fundamentals/flow/create-flow.md index 48797f17c5bdc..c929138b179dd 100644 --- a/docs/fundamentals/flow/create-flow.md +++ b/docs/fundamentals/flow/create-flow.md @@ -1,8 +1,8 @@ (flow)= -# Basic +# Basics -{class}`~jina.Flow` defines how your Executors are connected together and how your data *flows* through them. +A {class}`~jina.Flow` defines how your Executors are connected together and how your data *flows* through them. ## Create @@ -32,14 +32,14 @@ An empty Flow contains only {ref}`the Gateway`. :scale: 70% ``` -For production, it is recommended to define the Flows with YAML. This is because YAML files are independent of Python logic code and easy to maintain. +For production, you should define your Flows with YAML. This is because YAML files are independent of the Python logic code and easier to maintain. ### Conversion between Python and YAML -Python Flow definition can be easily converted to/from YAML definition. +A Python Flow definition can be easily converted to/from a YAML definition. -To load a Flow from a YAML file, use the {meth}`~jina.Flow.load_config`: +To load a Flow from a YAML file, use {meth}`~jina.Flow.load_config`: ```python from jina import Flow @@ -61,12 +61,12 @@ f.save_config('flow.yml') When a {class}`~jina.Flow` starts, all its {ref}`added Executors ` will start as well, making it possible to {ref}`reach the service through its API `. -There are three ways to start a Flow. Depending on the use case, you can start a Flow either in Python, or from a YAML file, or from the terminal. +There are three ways to start a Flow: In Python, from a YAML file, or from the terminal. - Generally in Python: use Flow as a context manager in Python. -- As an entrypoint from terminal: use Jina CLI and a Flow YAML. +- As an entrypoint from terminal: use `Jina CLI ` and a Flow YAML file. - As an entrypoint from Python code: use Flow as a context manager inside `if __name__ == '__main__'` -- No context manager: manually call {meth}`~jina.Flow.start` and {meth}`~jina.Flow.close`. +- No context manager: manually call {meth}`~jina.Flow.start` and {meth}`~jina.Flow.close`. ````{tab} General in Python @@ -119,14 +119,14 @@ A successful start of a Flow looks like this: :scale: 70% ``` -Your addresses and entrypoints can be found in the output. When enabling more features such as monitoring, HTTP gateway, TLS encryption, this display will also expand to contain more information. +Your addresses and entrypoints can be found in the output. When you enable more features such as monitoring, HTTP gateway, TLS encryption, this display expands to contain more information. ### Set multiprocessing `spawn` -Some cornet cases require to force `spawn` start method for multiprocessing, e.g. if you encounter "Cannot re-initialize CUDA in forked subprocess". +Some corner cases require forcing a `spawn` start method for multiprocessing, for example if you encounter "Cannot re-initialize CUDA in forked subprocess". -You may try `JINA_MP_START_METHOD=spawn` before starting the Python script to enable this. +You can use `JINA_MP_START_METHOD=spawn` before starting the Python script to enable this. 
```bash JINA_MP_START_METHOD=spawn python app.py @@ -139,8 +139,7 @@ There's no need to set this for Windows, as it only supports spawn method for mu ## Serve forever In most scenarios, a Flow should remain reachable for prolonged periods of time. -This can be achieved by `jina flow --uses flow.yml` from terminal. - +This can be achieved by `jina flow --uses flow.yml` from the terminal. Or if you are serving a Flow from Python: @@ -153,7 +152,7 @@ with f: f.block() ``` -The `.block()` method blocks the execution of the current thread or process, which enables external clients to access the Flow. +The `.block()` method blocks the execution of the current thread or process, enabling external clients to access the Flow. In this case, the Flow can be stopped by interrupting the thread or process. @@ -186,7 +185,7 @@ e.set() # set event and stop (unblock) the Flow ## Visualize -A {class}`~jina.Flow` has a built-in `.plot()` function which can be used to visualize a `Flow`: +A {class}`~jina.Flow` has a built-in `.plot()` function which can be used to visualize the `Flow`: ```python from jina import Flow @@ -210,13 +209,13 @@ f.plot('flow-2.svg') :width: 70% ``` -One can also do it in the terminal via: +You can also do it in the terminal: ```bash jina export flowchart flow.yml flow.svg ``` -One can also visualize a remote Flow by passing the URL to `jina export flowchart`. +You can also visualize a remote Flow by passing the URL to `jina export flowchart`. ## Export @@ -230,7 +229,7 @@ f = Flow().add() f.to_docker_compose_yaml() ``` -One can also do it in the terminal via: +You can also do it in the terminal: ```shell jina export docker-compose flow.yml docker-compose.yml @@ -250,16 +249,16 @@ f = Flow().add() f.to_kubernetes_yaml('flow_k8s_configuration') ``` -One can also do it in the terminal via: +You can also do it in the terminal: ```shell jina export kubernetes flow.yml ./my-k8s ``` -This will generate the necessary Kubernetes configuration files for all the {class}`~jina.Executor`s of the Flow. +This generates the Kubernetes configuration files for all the {class}`~jina.Executor`s in the Flow. The generated folder can be used directly with `kubectl` to deploy the Flow to an existing Kubernetes cluster. -For an advance utilisation of Kubernetes with jina please refer to this {ref}`How to ` +For advanced utilisation of Kubernetes with Jina please refer to {ref}`How to ` ```{tip} @@ -270,7 +269,7 @@ If you do not wish to rebuild the image, set the environment variable `JINA_HUB_ ```{admonition} See also :class: seealso -For more in-depth guides on Flow deployment, take a look at our how-tos for {ref}`Docker compose ` and +For more in-depth guides on Flow deployment, check our how-tos for {ref}`Docker compose ` and {ref}`Kubernetes `. ``` diff --git a/docs/fundamentals/flow/health-check.md b/docs/fundamentals/flow/health-check.md index d24b051a83d70..f9facd9ec2ebb 100644 --- a/docs/fundamentals/flow/health-check.md +++ b/docs/fundamentals/flow/health-check.md @@ -1,23 +1,23 @@ # Readiness & health check A Jina {class}`~jina.Flow` consists of {ref}`a Gateway and Executors`, -each of which have to be healthy before the Flow is ready to receive requests. +all of which have to be healthy before the Flow is ready to receive requests. A Flow is marked as "ready", when all its Executors and its Gateway are fully loaded and ready. 
Each Executor provides a health check in the form of a [standardized gRPC endpoint](https://github.com/grpc/grpc/blob/master/doc/health-checking.md) that exposes this information to the outside world. -This means that health checks can automatically be performed by Jina itself as well as external tools like Docker Compose, Kubernetes service meshes, or load balancers. +This means health checks can be automatically performed by Jina itself, as well as external tools like Docker Compose, Kubernetes service meshes, or load balancers. -## Readiness of a Flow +## Flow Readiness -In most cases, it is most useful to check if an entire Flow is ready to accept requests. +In most cases, it is useful to check if an entire Flow is ready to accept requests. To enable this readiness check, the Jina Gateway can aggregate health check information from all services and provides a readiness check endpoint for the complete Flow. -{class}`~jina.Client` offer a convenient API to query these readiness endpoints. You can call {meth}`~jina.clients.mixin.HealthCheckMixin.is_flow_ready` or {meth}`~jina.Flow.is_flow_ready`, it will return `True` if the Flow is ready, and `False` when it is not. +{class}`~jina.Client` offers an API to query these readiness endpoints. You can call {meth}`~jina.clients.mixin.HealthCheckMixin.is_flow_ready` or {meth}`~jina.Flow.is_flow_ready`. It returns `True` if the Flow is ready, and `False` if it is not. ````{tab} via Flow ```python @@ -115,7 +115,7 @@ WARNI… JINA@92986 message lost 100% (3/3) ### Flow status using third-party clients -You can check the status of a Flow using any gRPC/HTTP/Websocket client, not just Jina's Client implementation. +You can check the status of a Flow using any gRPC/HTTP/WebSockets client, not just Jina's Client implementation. To see how this works, first instantiate the Flow with its corresponding protocol and block it for serving: @@ -149,7 +149,7 @@ DEBUG Flow@19059 2 Deployments (i.e. 2 Pods) are running in this Flow #### Using gRPC -When using grpc, you can use [grpcurl](https://github.com/fullstorydev/grpcurl) to hit the Gateway's gRPC service that is responsible for reporting the Flow status. +When using grpc, use [grpcurl](https://github.com/fullstorydev/grpcurl) to access the Gateway's gRPC service that is responsible for reporting the Flow status. ```shell docker pull fullstorydev/grpcurl:latest @@ -166,7 +166,7 @@ You can simulate an Executor going offline by killing its process. kill -9 $EXECUTOR_PID # in this case we can see in the logs that it is 19059 ``` -Then by doing the same check, you will see that it returns an error: +Then by doing the same check, you can see that it returns an error: ```shell docker run --network='host' fullstorydev/grpcurl -plaintext 127.0.0.1:12345 jina.JinaGatewayDryRunRPC/dry_run @@ -209,40 +209,39 @@ docker run --network='host' fullstorydev/grpcurl -plaintext 127.0.0.1:12345 jina ```` -#### Using HTTP or Websocket +#### Using HTTP or WebSockets -When using HTTP or Websocket as the Gateway protocol, you can use curl to target the `/dry_run` endpoint and get the status of the Flow. +When using HTTP or WebSockets as the Gateway protocol, use curl to target the `/dry_run` endpoint and get the status of the Flow. ```shell curl http://localhost:12345/dry_run ``` -The error-free output below signifies a correctly running Flow: +Error-free output signifies a correctly running Flow: ```json {"code":0,"description":"","exception":null} ``` -You can simulate an Executor going offline by killing its process. 
+You can simulate an Executor going offline by killing its process: ```shell script kill -9 $EXECUTOR_PID # in this case we can see in the logs that it is 19059 ``` -Then by doing the same check, you will see that the call returns an error: +Then by doing the same check, you can see that the call returns an error: ```json {"code":1,"description":"failed to connect to all addresses |Gateway: Communication error with deployment executor0 at address(es) {'0.0.0.0:12346'}. Head or worker(s) may be down.","exception":{"name":"InternalNetworkError","args":["failed to connect to all addresses |Gateway: Communication error with deployment executor0 at address(es) {'0.0.0.0:12346'}. Head or worker(s) may be down."],"stacks":["Traceback (most recent call last):\n"," File \"/home/joan/jina/jina/jina/serve/networking.py\", line 726, in task_wrapper\n timeout=timeout,\n"," File \"/home/joan/jina/jina/jina/serve/networking.py\", line 241, in send_requests\n await call_result,\n"," File \"/home/joan/.local/lib/python3.7/site-packages/grpc/aio/_call.py\", line 291, in __await__\n self._cython_call._status)\n","grpc.aio._call.AioRpcError: \n","\nDuring handling of the above exception, another exception occurred:\n\n","Traceback (most recent call last):\n"," File \"/home/joan/jina/jina/jina/serve/runtimes/gateway/http/app.py\", line 142, in _flow_health\n data_type=DataInputType.DOCUMENT,\n"," File \"/home/joan/jina/jina/jina/serve/runtimes/gateway/http/app.py\", line 399, in _get_singleton_result\n async for k in streamer.stream(request_iterator=request_iterator):\n"," File \"/home/joan/jina/jina/jina/serve/stream/__init__.py\", line 78, in stream\n async for response in async_iter:\n"," File \"/home/joan/jina/jina/jina/serve/stream/__init__.py\", line 154, in _stream_requests\n response = self._result_handler(future.result())\n"," File \"/home/joan/jina/jina/jina/serve/runtimes/gateway/request_handling.py\", line 148, in _process_results_at_end_gateway\n partial_responses = await asyncio.gather(*tasks)\n"," File \"/home/joan/jina/jina/jina/serve/runtimes/gateway/graph/topology_graph.py\", line 128, in _wait_previous_and_send\n self._handle_internalnetworkerror(err)\n"," File \"/home/joan/jina/jina/jina/serve/runtimes/gateway/graph/topology_graph.py\", line 70, in _handle_internalnetworkerror\n raise err\n"," File \"/home/joan/jina/jina/jina/serve/runtimes/gateway/graph/topology_graph.py\", line 125, in _wait_previous_and_send\n timeout=self._timeout_send,\n"," File \"/home/joan/jina/jina/jina/serve/networking.py\", line 734, in task_wrapper\n num_retries=num_retries,\n"," File \"/home/joan/jina/jina/jina/serve/networking.py\", line 697, in _handle_aiorpcerror\n details=e.details(),\n","jina.excepts.InternalNetworkError: failed to connect to all addresses |Gateway: Communication error with deployment executor0 at address(es) {'0.0.0.0:12346'}. Head or worker(s) may be down.\n"],"executor":""}} ``` (health-check-microservices)= -## Health check of an Executor +## Executor health check -In addition to a performing a readiness check for the entire Flow, it is also possible to check every individual Executor in said Flow, -by utilizing a [standardized gRPC health check endpoint](https://github.com/grpc/grpc/blob/master/doc/health-checking.md). +You can check every individual Executor in a Flow, by using a [standard gRPC health check endpoint](https://github.com/grpc/grpc/blob/master/doc/health-checking.md). 
In most cases this is not necessary, since such checks are performed by Jina, a Kubernetes service mesh or a load balancer under the hood.
-Nevertheless, it is possible to perform these checks as a user.
+Nevertheless, you can perform these checks yourself.

-When performing these checks, you can expect on of the following `ServingStatus` responses:
+When performing these checks, you can expect one of the following `ServingStatus` responses:
- **`UNKNOWN` (0)**: The health of the Executor could not be determined
- **`SERVING` (1)**: The Executor is healthy and ready to receive requests
- **`NOT_SERVING` (2)**: The Executor is *not* healthy and *not* ready to receive requests
- **`SERVICE_UNKNOWN` (3)**: The health of the Executor could not be determined while performing streaming

```python
from jina import Flow

f = Flow(protocol='grpc', port=12345).add(port=12346)

with f:
    f.block()
```

-On another terminal, you can use [grpcurl](https://github.com/fullstorydev/grpcurl) to send RPC requests to your services.
+In another terminal, you can use [grpcurl](https://github.com/fullstorydev/grpcurl) to send gRPC requests to your services.

```shell
docker pull fullstorydev/grpcurl:latest
docker run --network='host' fullstorydev/grpcurl -plaintext 127.0.0.1:12346 grpc.health.v1.Health/Check
```

```json
{
  "status": "SERVING"
}
```

(health-check-gateway)=
-## Health check of the Gateway
+## Gateway health check

Just like each individual Executor, the Gateway also exposes a health check endpoint.

-In contrast to Executors however, a Gateway can use gRPC, HTTP, or Websocket, and the health check endpoint changes accordingly.
+In contrast to Executors however, a Gateway can use gRPC, HTTP, or WebSockets, and the health check endpoint changes accordingly.

#### Gateway health check with gRPC

-When using gRPC as the protocol to communicate with the Gateway, the Gateway uses the exact same mechanism as Executors to expose its health status: It exposes the [ standard gRPC health check](https://github.com/grpc/grpc/blob/master/doc/health-checking.md) to the outside world.
+When using gRPC as the protocol to communicate with the Gateway, the Gateway uses the exact same mechanism as Executors to expose its health status: It exposes the [standard gRPC health check](https://github.com/grpc/grpc/blob/master/doc/health-checking.md) to the outside world.

-With the same Flow as described before, you can use the same way to check the Gateway status:
+With the same Flow as before, you can check the Gateway status in the same way:

```bash
docker run --network='host' fullstorydev/grpcurl -plaintext 127.0.0.1:12345 grpc.health.v1.Health/Check
```

```json
{
  "status": "SERVING"
}
```

#### Gateway health check with HTTP or WebSockets

````{admonition} Caution
:class: caution
-For Gateways running with HTTP or Websocket, the gRPC health check response codes outlined {ref}`above ` do not apply.
+For Gateways running with HTTP or WebSockets, the gRPC health check response codes outlined {ref}`above ` do not apply.
Instead, an error free response signifies healthiness.
````

-When using HTTP or Websocket as the protocol for the Gateway, it exposes the endpoint `'/'` that one can query to check the status.
+When using HTTP or WebSockets as the protocol for the Gateway, you can query the endpoint `'/'` to check the status.
-First, crate a Flow with HTTP or Websocket protocol:
+First, create a Flow with HTTP or WebSockets protocol:

```python
from jina import Flow

f = Flow(protocol='http', port=12345).add()
with f:
    f.block()
```
-Then, you can query the "empty" endpoint:
+Then query the "empty" endpoint:
```bash
curl http://localhost:12345
```

-And you will get a valid empty response indicating the Gateway's ability to serve.
+You get a valid empty response indicating the Gateway's ability to serve:
```json
{}
```

-## Use jina ping to do health checks
+## Use jina ping for health checks

-Once a Flow is running, you can use `jina ping` CLI {ref}`CLI <../api/jina_cli>` to run readiness check of the complete Flow or of individual Executors or Gateway.
+Once a Flow is running, you can use the `jina ping` {ref}`CLI <../api/jina_cli>` to run a readiness check of the complete Flow, or of an individual Executor or the Gateway.

-Let's start a Flow in the terminal by executing the following python code:
+Start a Flow in Python:

```python
from jina import Flow

with Flow(protocol='grpc', port=12345).add(port=12346) as f:
    f.block()
```

-We can check the readiness of the Flow:
+Check the readiness of the Flow:

```bash
jina ping flow grpc://localhost:12345
```

-Also we can check the readiness of an Executor:
+You can also check the readiness of an Executor:

```bash
jina ping executor localhost:12346
```

-or the readiness of the Gateway service:
+...or the readiness of the Gateway service:

```bash
jina ping gateway grpc://localhost:12345
```

-When these commands succeed, you will see something like:
+When these commands succeed, you should see something like:

```text
INFO   JINA@28600 readiness check succeeded 1 times!!!
```

-```admonition Use it in Kubernetes
+```admonition Use in Kubernetes
:class: note
-This CLI exits with code 1 when the readiness check is not successful, which makes it a good choice to be used as readinessProbe for Executor and Gateway when
+The CLI exits with code 1 when the readiness check is not successful, which makes it a good choice as a readinessProbe for the Executor and Gateway when
deployed in Kubernetes.
```

diff --git a/docs/fundamentals/flow/index.md b/docs/fundamentals/flow/index.md
index e91f32bf4fe67..6d44dc5783627 100644
--- a/docs/fundamentals/flow/index.md
+++ b/docs/fundamentals/flow/index.md
@@ -1,39 +1,38 @@
(flow-cookbook)=
# Flow

-A {class}`~jina.Flow` orchestrates Executors into a processing pipeline to build a multi-modal/cross-modal application.
-Documents "flow" through the created pipeline and are processed by Executors.
+A {class}`~jina.Flow` orchestrates {class}`~jina.Executor`s into a processing pipeline to build a multi-modal/cross-modal application.
+Documents "flow" through the pipeline and are processed by Executors.

You can think of Flow as an interface to configure and launch your {ref}`microservice architecture `, while the heavy lifting is done by the {ref}`services ` themselves.
In particular, each Flow also launches a *Gateway* service, which can expose all other services through an API that you define.
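As a quick sketch of how a Flow ties these pieces together (the Executor is purely illustrative; the methods used here are described in the table below):

```python
from jina import DocumentArray, Executor, Flow, requests


class MyExecutor(Executor):  # an illustrative Executor
    @requests
    def process(self, docs: DocumentArray, **kwargs):
        ...


f = Flow().add(uses=MyExecutor)  # .add() appends an Executor to the pipeline

with f:  # the context manager starts the Flow and closes it on exit
    f.post('/')  # .post() sends a request to the Flow API
```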
- The most important methods of the `Flow` object are the following: | Method | Description | |--------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| {meth}`~jina.Flow.add` | Add an Executor to the Flow | +| {meth}`~jina.Flow.add` | Adds an Executor to the Flow | | {meth}`~jina.Flow.start()` | Starts the Flow. This will start all its Executors and check if they are ready to be used. | | {meth}`~jina.Flow.close()` | Stops and closes the Flow. This will stop and shutdown all its Executors. | -| `with` context manager | Use the Flow as a context manager. It will automatically start and stop your Flow. | | +| `with` context manager | Uses the Flow as a context manager. It will automatically start and stop your Flow. | | | {meth}`~jina.Flow.plot()` | Visualizes the Flow. Helpful for building complex pipelines. | | {meth}`~jina.clients.mixin.PostMixin.post()` | Sends requests to the Flow API. | | {meth}`~jina.Flow.block()` | Blocks execution until the program is terminated. This is useful to keep the Flow alive so it can be used from other places (clients, etc). | -| {meth}`~jina.Flow.to_docker_compose_yaml()` | Generates a Docker-Compose file listing all its Executors as Services. | -| {meth}`~jina.Flow.to_kubernetes_yaml()` | Generates the Kubernetes configuration files in ``. Based on your local Jina version, Jina Hub may rebuild the Docker image during the YAML generation process. If you do not wish to rebuild the image, set the environment variable `JINA_HUB_NO_IMAGE_REBUILD`. | -| {meth}`~jina.clients.mixin.HealthCheckMixin.is_flow_ready()` | Check if the Flow is ready to process requests. Returns a boolean indicating the readiness | +| {meth}`~jina.Flow.to_docker_compose_yaml()` | Generates a Docker-Compose file listing all Executors as services. | +| {meth}`~jina.Flow.to_kubernetes_yaml()` | Generates Kubernetes configuration files in ``. Based on your local Jina version, Jina Hub may rebuild the Docker image during the YAML generation process. If you do not wish to rebuild the image, set the environment variable `JINA_HUB_NO_IMAGE_REBUILD`. | +| {meth}`~jina.clients.mixin.HealthCheckMixin.is_flow_ready()` | Check if the Flow is ready to process requests. Returns a boolean indicating the readiness. | ## Why should you use a Flow? -Once you have learned DocumentArray and Executor, you are able to split your multi-modal/cross-modal application into different independent modules and services. -But you need to chain them together in order to bring real value and to build and serve an application. That's exactly what Flows enable you to do. +Once you've learned DocumentArray and Executor, you can split your multi-modal/cross-modal application into different independent modules and services. +But you need to chain them together to bring real value and build and serve an application. Flows enable you to do exactly this. -- Flows connect microservices (Executors) to build a service with proper client/server style interface over HTTP, gRPC, or Websocket +- Flows connect microservices (Executors) to build a service with proper client/server style interface over HTTP, gRPC, or WebSockets. 
-- Flows let you scale these Executors independently to adjust to your requirements +- Flows let you scale these Executors independently to match your requirements. -- Flows allow you to easily use other cloud-native orchestrators, such as Kubernetes, to manage your service +- Flows let you easily use other cloud-native orchestrators, such as Kubernetes, to manage your service. ## Minimum working example diff --git a/docs/fundamentals/flow/monitoring-flow.md b/docs/fundamentals/flow/monitoring-flow.md index a6c29bad5622d..66da6b575f502 100644 --- a/docs/fundamentals/flow/monitoring-flow.md +++ b/docs/fundamentals/flow/monitoring-flow.md @@ -1,8 +1,8 @@ (monitoring-flow)= # Monitor -A Jina {class}`~jina.Flow` exposes several core metrics that allow you to have a deeper look -at what is happening inside it. Metrics allow you to, for example, monitor the overall performance +A Jina {class}`~jina.Flow` exposes several core metrics that let you have a deeper look +at what is happening inside. Metrics allow you to, for example, monitor the overall performance of your Flow, detect bottlenecks, or alert your team when some component of your Flow is down. Jina Flows expose metrics in the [Prometheus format](https://prometheus.io/docs/instrumenting/exposition_formats/). This is a plain text format that is understandable by both humans and machines. These metrics are intended to be scraped by @@ -21,11 +21,11 @@ your monitoring stack. A {class}`~jina.Flow` is composed of several Pods, namely the {class}`~jina.serve.runtimes.gateway.GatewayRuntime`, the {class}`~jina.Executor`s, and potentially a {class}`~jina.serve.runtimes.head.HeadRuntime` (see the {ref}`architecture overview ` for more details). Each of these Pods is its own microservice. These services expose their own metrics using the [Prometheus client](https://prometheus.io/docs/instrumenting/clientlibs/). This means that they are as many metrics endpoints as there are Pods in your Flow. -Let's give an example to illustrate it : +Let's see an example: ````{tab} via Python API -This example shows how to start a Flow with monitoring enabled via the Python API: +Start a Flow with monitoring using the Python API: ```python from jina import Flow @@ -38,7 +38,7 @@ with Flow(monitoring=True, port_monitoring=9090).add( ```` ````{tab} via YAML -This example shows how to start a Flow with monitoring enabled via yaml: +Start a Flow with monitoring using YAML: In a `flow.yaml` file ```yaml @@ -56,42 +56,42 @@ jina flow --uses flow.yaml ``` ```` -This Flow will create two Pods, one for the Gateway, and one for the SimpleIndexer Executor, therefore it will create two +This Flow creates two Pods: one for the Gateway, and one for the SimpleIndexer Executor. Therefore it creates two metrics endpoints: -* `http://localhost:9090` for the gateway +* `http://localhost:9090` for the Gateway * `http://localhost:9091` for the SimpleIndexer -````{admonition} Change the default monitoring port +````{admonition} Changing the default monitoring port :class: caution -When Jina is used locally, all of the `port_monitoring` will be random by default (within the range [49152, 65535]). However we -strongly encourage you to precise these ports for the Gateway and for all of the Executors. Otherwise it will change at -restart and you will have to change your Prometheus configuration file. +When Jina is used locally, all of the `port_monitoring` is random by default (within the range [49152, 65535]). 
We +strongly encourage you to explicitly set these ports for the Gateway and for all Executors. Failing to do so means ports will change on +restart and you would need to change your Prometheus configuration file accordingly. ```` -Because each Pod in a Flow exposes its own metrics, the monitoring feature can be used independently on each Pod. -This means that you are not forced to always monitor every Pod of your Flow. For example, you could be only interested in -metrics coming from the Gateway, and therefore you only activate the monitoring on it. On the other hand, you might be only -interested in monitoring a single Executor. Note that by default the monitoring is disabled everywhere. +Because each Pod in a Flow exposes its own metrics, monitoring can be used independently on each Pod. +This means that you are not forced to always monitor every Pod of your Flow. For example, you may only be interested in +metrics coming from the Gateway, and therefore you only activate that monitoring. On the other hand, you may only +be interested in monitoring a single Executor. Note that monitoring is disabled everywhere by default. -To enable the monitoring you need to pass `monitoring = True` when creating the Flow. +Pass `monitoring = True` when you create the Flow to enable monitoring. ```python Flow(monitoring=True).add(...) ``` ````{admonition} Enabling Flow :class: hint -Passing `monitoring = True` when creating the Flow will enable the monitoring on **all the Pods** of your Flow. +Passing `monitoring = True` when you create the Flow enables monitoring of **all the Pods** of your Flow. ```` -If you want to enable the monitoring only on the Gateway, you need to first enable the feature for the entire Flow, and then disable it for the Executor which you are not interested in. +To enable monitoring only on the Gateway, first enable the feature for the entire Flow, then disable it for the Executors which you do not want monitored. ```python Flow(monitoring=True).add(monitoring=False, ...).add(monitoring=False, ...) ``` -On the other hand, If you want to only enable the monitoring on a given Executor you should do: +To enable the monitoring on just a given Executor: ```python Flow().add(...).add(uses=MyExecutor, monitoring=True) ``` @@ -99,13 +99,11 @@ Flow().add(...).add(uses=MyExecutor, monitoring=True) ### Enable monitoring with replicas and shards ```{tip} -This section is only relevant if you deploy your Flow natively. When deploying your Flow with Kubernetes or Docker Compose -all of the `port_monitoring` will be set to default : `9090`. +This is only relevant if you deploy your Flow natively. When deploying your Flow with Kubernetes or Docker Compose +all `port_monitoring` is set to a default of `9090`. ``` -To enable monitoring with replicas and shards when deploying natively, you need to pass a list of `port_monitoring` separated by a comma to your Flow. 
- -Example: +To monitor replicas and shards when deploying natively, pass a comma-separated string to `port_monitoring`: ````{tab} via Python API @@ -120,7 +118,7 @@ with Flow(monitoring=True).add( ```` ````{tab} via YAML -This example shows how to start a Flow with monitoring enabled via yaml: +To start a Flow with monitoring via YAML: In a `flow.yaml` file ```yaml @@ -139,16 +137,16 @@ jina flow --uses flow.yaml ```` ```{tip} Monitoring with shards -When using shards, an extra head will be created and you will need to pass a list of N+1 ports to `port_monitoring`, N beeing the number of shards you desire +When using shards, an extra head is created and you need to pass a list of N+1 ports to `port_monitoring`, N being the number of shards you desire. ``` -If you precise fewer `port_monitoring` than you have replicas of your Executor (or even not passing any at all), the unknown ports -will be assigned randomly. It is a better practice to precise a port for every replica, otherwise you will have to change +If you specify fewer `port_monitoring` values than you have Executor replicas (or even not passing any at all), unknown ports +are assigned randomly. It is better practice to specify a port for every replica, otherwise you will have to change your Prometheus configuration each time you restart your application. ## Available metrics -A {class}`~jina.Flow` supports different metrics out of the box, in addition to allowing the user to define their own custom metrics. +A {class}`~jina.Flow` supports different metrics out of the box, also letting you define your own custom metrics. Because not all Pods have the same role, they expose different kinds of metrics: @@ -158,15 +156,15 @@ Because not all Pods have the same role, they expose different kinds of metrics: | Metrics name | Metrics type | Description | |-------------------------------------|------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------| -| `jina_receiving_request_seconds` | [Summary](https://prometheus.io/docs/concepts/metric_types/#summary) | Measures the time elapsed between receiving a request from the client and sending back the response. | -| `jina_sending_request_seconds` | [Summary](https://prometheus.io/docs/concepts/metric_types/#summary) | Measures the time elapsed between sending a downstream request to an Executor/Head and receiving the response back. 
|
-| `jina_number_of_pending_requests` | [Gauge](https://prometheus.io/docs/concepts/metric_types/#gauge) | Counts the number of pending requests |
-| `jina_successful_requests_total` | [Counter](https://prometheus.io/docs/concepts/metric_types/#counter) | Counts the number of successful requests returned by the gateway |
-| `jina_failed_requests_total` | [Counter](https://prometheus.io/docs/concepts/metric_types/#counter) | Counts the number of failed requests returned by the gateway |
-| `jina_sent_request_bytes` | [Summary](https://prometheus.io/docs/concepts/metric_types/#summary) | Measures the size in bytes of the request sent by the Gateway to the Executor or the Head |
-| `jina_received_response_bytes` | [Summary](https://prometheus.io/docs/concepts/metric_types/#summary) | Measures the size in bytes of the request returned by the Executor |
-| `jina_received_request_bytes` | [Summary](https://prometheus.io/docs/concepts/metric_types/#summary) | Measures the size of the request in bytes received at the Gateway level |
-| `jina_sent_response_bytes` | [Summary](https://prometheus.io/docs/concepts/metric_types/#summary) | Measures the size in bytes of the response returned from the Gateway to the Client |
+| `jina_receiving_request_seconds` | [Summary](https://prometheus.io/docs/concepts/metric_types/#summary) | Measures time elapsed between receiving a request from the client and sending back the response. |
+| `jina_sending_request_seconds` | [Summary](https://prometheus.io/docs/concepts/metric_types/#summary) | Measures time elapsed between sending a downstream request to an Executor/Head and receiving the response back. |
+| `jina_number_of_pending_requests` | [Gauge](https://prometheus.io/docs/concepts/metric_types/#gauge) | Counts the number of pending requests. |
+| `jina_successful_requests_total` | [Counter](https://prometheus.io/docs/concepts/metric_types/#counter) | Counts the number of successful requests returned by the Gateway. |
+| `jina_failed_requests_total` | [Counter](https://prometheus.io/docs/concepts/metric_types/#counter) | Counts the number of failed requests returned by the Gateway. |
+| `jina_sent_request_bytes` | [Summary](https://prometheus.io/docs/concepts/metric_types/#summary) | Measures the size in bytes of the request sent by the Gateway to the Executor or the Head. |
+| `jina_received_response_bytes` | [Summary](https://prometheus.io/docs/concepts/metric_types/#summary) | Measures the size in bytes of the response returned by the Executor. |
+| `jina_received_request_bytes` | [Summary](https://prometheus.io/docs/concepts/metric_types/#summary) | Measures the size of the request in bytes received at the Gateway level. |
+| `jina_sent_response_bytes` | [Summary](https://prometheus.io/docs/concepts/metric_types/#summary) | Measures the size in bytes of the response returned from the Gateway to the Client. |

```{seealso}
You can find more information on the different types of metrics in Prometheus [here](https://prometheus.io/docs/concepts/metric_types/#metric-types)
```

### Head Pods

-| Metrics name | Metrics type | Description |
+| Metric name | Metric type | Description |
|-----------------------------------------|-------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------|
-| `jina_receiving_request_seconds` | [Summary](https://prometheus.io/docs/concepts/metric_types/#summary) | Measures the time elapsed between receiving a request from the gateway and sending back the response. |
+| `jina_receiving_request_seconds` | [Summary](https://prometheus.io/docs/concepts/metric_types/#summary) | Measures the time elapsed between receiving a request from the Gateway and sending back the response. |
| `jina_sending_request_seconds` | [Summary](https://prometheus.io/docs/concepts/metric_types/#summary) | Measures the time elapsed between sending a downstream request to an Executor and receiving the response back. |
-| `jina_sending_request_bytes` | [Summary](https://prometheus.io/docs/concepts/metric_types/#summary) | Measures the size of the downstream requests send to an Executor in bytes |
-| `jina_failed_requests_total` | [Counter](https://prometheus.io/docs/concepts/metric_types/#counter) | Counts the number of failed requests returned by the gateway |
-| `jina_sent_request_bytes` | [Summary](https://prometheus.io/docs/concepts/metric_types/#summary) | Measures the size in bytes of the request sent by the Head to the Executor |
-| `jina_received_response_bytes` | [Summary](https://prometheus.io/docs/concepts/metric_types/#summary) | Measures the size in bytes of the response returned by the Executor |
+| `jina_sending_request_bytes` | [Summary](https://prometheus.io/docs/concepts/metric_types/#summary) | Measures the size of the downstream requests sent to an Executor in bytes. |
+| `jina_failed_requests_total` | [Counter](https://prometheus.io/docs/concepts/metric_types/#counter) | Counts the number of failed requests returned by the Head. |
+| `jina_sent_request_bytes` | [Summary](https://prometheus.io/docs/concepts/metric_types/#summary) | Measures the size in bytes of the request sent by the Head to the Executor. |
+| `jina_received_response_bytes` | [Summary](https://prometheus.io/docs/concepts/metric_types/#summary) | Measures the size in bytes of the response returned by the Executor. |

### Executor Pods

-| Metrics name | Metrics type | Description |
+| Metric name | Metric type | Description |
|----------------------------------|----------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------|
-| `jina_receiving_request_seconds` | [Summary](https://prometheus.io/docs/concepts/metric_types/#summary) | Measures the time elapsed between receiving a request from the gateway (or the head) and sending back the response. |
+| `jina_receiving_request_seconds` | [Summary](https://prometheus.io/docs/concepts/metric_types/#summary) | Measures the time elapsed between receiving a request from the Gateway (or the Head) and sending back the response. 
|
| `jina_process_request_seconds` | [Summary](https://prometheus.io/docs/concepts/metric_types/#summary) | Measures the time spent calling the requested method |
| `jina_document_processed_total` | [Counter](https://prometheus.io/docs/concepts/metric_types/#counter) | Counts the number of Documents processed by an Executor |
| `jina_successful_requests_total` | [Counter](https://prometheus.io/docs/concepts/metric_types/#counter) | Total count of successful requests returned by the Executor across all endpoints |
diff --git a/docs/fundamentals/flow/topologies.md b/docs/fundamentals/flow/topologies.md
index feb7e1eadd63f..3eaa6d7cda626 100644
--- a/docs/fundamentals/flow/topologies.md
+++ b/docs/fundamentals/flow/topologies.md
@@ -1,9 +1,9 @@
(flow-complex-topologies)=
# Topology

-{class}`~jina.Flow`s are not restricted to sequential execution. Internally they are modelled as graphs and as such can represent any complex, non-cyclic topology.
+{class}`~jina.Flow`s are not restricted to sequential execution. Internally they are modeled as graphs, so they can represent any complex, non-cyclic topology.
A typical use case for such a Flow is a topology with a common pre-processing part, but different indexers separating embeddings and data.
-To define a custom Flow topology you can use the `needs` keyword when adding an {class}`~jina.Executor`. By default, a Flow assumes that every Executor needs the previously added Executor.
+To define a custom topology you can use the `needs` keyword when adding an {class}`~jina.Executor`. By default, a Flow assumes that every Executor needs the previously added Executor.

```python
from jina import Executor, Flow, requests, Document, DocumentArray


class FooExecutor(Executor):
    @requests
    async def foo(self, docs: DocumentArray, **kwargs):
        docs.append(Document(text=f'foo was here and got {len(docs)} document'))


class BarExecutor(Executor):
    @requests
    async def bar(self, docs: DocumentArray, **kwargs):
        docs.append(Document(text=f'bar was here and got {len(docs)} document'))


class BazExecutor(Executor):
    @requests
    async def baz(self, docs: DocumentArray, **kwargs):
        docs.append(Document(text=f'baz was here and got {len(docs)} document'))


f = (
    Flow()
    .add(uses=FooExecutor, name='fooExecutor')
    .add(uses=BarExecutor, name='barExecutor', needs='fooExecutor')
    .add(uses=BazExecutor, name='bazExecutor', needs='fooExecutor')
    .add(needs=['barExecutor', 'bazExecutor'])
)

-with f:  # Using it as a Context Manager will start the Flow
+with f:  # Using it as a Context Manager starts the Flow
    response = f.post(
        on='/search'
    )  # This sends a request to the /search endpoint of the Flow
    print(response.texts)
```

```{figure} needs-flow.svg
:width: 70%
:align: center
-Complex Flow where one Executor requires two Executors to process Documents before
+Complex Flow where one Executor requires two Executors to process Documents beforehand
```

-This will get you the following output:
+This gives the output:

```text
['foo was here and got 0 document', 'bar was here and got 1 document', 'baz was here and got 1 document']
```

-So both `BarExecutor` and `BazExecutor` only received a single `Document` from `FooExecutor` as they are run in parallel. The last Executor `executor3` will receive both DocumentArrays and merges them automatically.
-The automated merging can be disabled by setting `disable_reduce=True`. This can be useful when you need to provide your custom merge logic in a separate Executor. In this case the last `.add()` call would like `.add(needs=['barExecutor', 'bazExecutor'], uses=CustomMergeExecutor, disable_reduce=True)`. This feature requires Jina >= 3.0.2.
+Both `BarExecutor` and `BazExecutor` only received a single `Document` from `FooExecutor` because they are run in parallel. The last Executor `executor3` receives both DocumentArrays and merges them automatically.
+This automated merging can be disabled with `disable_reduce=True`. This is useful for providing custom merge logic in a separate Executor. In this case the last `.add()` call would look like `.add(needs=['barExecutor', 'bazExecutor'], uses=CustomMergeExecutor, disable_reduce=True)`. 
This feature requires Jina >= 3.0.2.
 
(replicate-executors)=
## Replicate Executors
 
-Replication can be used to create multiple copies of the same {class}`~jina.Executor`s. Each request in the {class}`~jina.Flow` is then passed to only one replica (instance) of your Executor. This can be useful for a couple of challenges like performance and availability:
-* If you have slow Executors (like some Encoders) you may want to scale up the number of instances of this particular Executor so that you can process multiple requests in parallel
-* Executors might need to be taken offline from time to time (updates, failures, etc.), but you may want your Flow to be able to process requests without downtimes. In this case Replicas can be used as well so that any Replica of an Executor can be taken offline as long as there is still one running Replica online. Using this technique it is possible to create a High availability setup for your Flow.
+Replication creates multiple copies of the same {class}`~jina.Executor`. Each request in the {class}`~jina.Flow` is then passed to only one replica (instance) of that Executor. This is useful for performance and availability:
+* If you have slow Executors (like some encoders) you can scale up the number of instances to process multiple requests in parallel.
+* Executors might need to be taken offline occasionally (for updates, failures, etc.), but you may want your Flow to be able to process requests without downtime. Using replicas, any single replica of an Executor can be taken offline as long as there is still at least one running online. This ensures high availability for your Flow.
 
```python
from jina import Flow
 
@@ -73,28 +73,26 @@ f = Flow().add(name='slow_encoder', replicas=3).add(name='fast_indexer')
```
 
```{figure} replicas-flow.svg
:width: 70%
:align: center
-Flow with 3 replicas of slow_encoder and 1 replica of fast_indexer
+Flow with three replicas of slow_encoder and one replica of fast_indexer
```
 
-The above Flow will create a topology with three Replicas of Executor `slow_encoder`. The `Flow` will send every
-request to exactly one of the three instances. Then the replica will send its result to `fast_indexer`.
-
+The above Flow creates a topology with three replicas of the Executor `slow_encoder`. The `Flow` sends every
+request to exactly one of the three instances. Then the replica sends its result to `fast_indexer`.
 
## Replicate on multiple GPUs
 
-In certain situations, you may want to replicate your {class}`~jina.Executor`s so that each replica uses a different GPU on your machine.
-To achieve this, you need to tell the {class}`~jina.Flow` to leverage multiple GPUs, by passing `CUDA_VISIBLE_DEVICES=RR` as an environment variable.
-The Flow will then assign each available GPU to replicas in a round-robin fashion.
+To replicate your {class}`~jina.Executor`s so that each replica uses a different GPU on your machine, you can tell the {class}`~jina.Flow` to use multiple GPUs by passing `CUDA_VISIBLE_DEVICES=RR` as an environment variable.
+The Flow then assigns each available GPU to replicas in a round-robin fashion.
 
```{caution}
-Replicate on multiple GPUs by using `CUDA_VISIBLE_DEVICES=RR` should only be used locally.
+You should only replicate on multiple GPUs with `CUDA_VISIBLE_DEVICES=RR` locally.
```
 
```{tip}
-When working in Kubernetes or with Docker Compose you shoud allocate GPU ressources to each replica directly in the configuration files.
+In Kubernetes or with Docker Compose you should allocate GPU resources to each replica directly in the configuration files.
```
 
-For example, if you have 3 GPUs and one of your Executor has 5 replicas then
+For example, if you have three GPUs and one of your Executors has five replicas:
 
````{tab} Python
In a `flow.py` file
@@ -127,7 +125,7 @@ CUDA_VISIBLE_DEVICES=RR jina flow --uses flow.yaml
```
````
 
-The Flow will assign GPU devices in the following round-robin fashion:
+The Flow assigns GPU devices in the following round-robin fashion:
 
| GPU device | Replica ID |
|------------|------------|
| 0 | 0 |
| 1 | 1 |
| 2 | 2 |
| 0 | 3 |
| 1 | 4 |
 
 
-You can also restrict the visible devices in round-robin assignment by `CUDA_VISIBLE_DEVICES=RR0:2`, where `0:2` has the same meaning as Python slice. This will create the following assignment:
+You can restrict the visible devices in round-robin assignment using `CUDA_VISIBLE_DEVICES=RR0:2`, where `0:2` corresponds to a Python slice. This creates the following assignment:
 
| GPU device | Replica ID |
|------------|------------|
| 0 | 0 |
| 1 | 1 |
| 0 | 2 |
| 1 | 3 |
| 0 | 4 |
 
 
-You can also restrict the visible devices in round-robin assignment by assigning a list of devices ids `CUDA_VISIBLE_DEVICES=RR1,3`. This will create the following assignment:
+You can restrict the visible devices in round-robin assignment by assigning a list of device IDs, like `CUDA_VISIBLE_DEVICES=RR1,3`. This creates the following assignment:
 
| GPU device | Replica ID |
|------------|------------|
| 1 | 0 |
| 3 | 1 |
| 1 | 2 |
| 3 | 3 |
| 1 | 4 |
 
 
## Distributed replicas
 
-Replicas of the same Executor can run on different machines.
+You can run replicas of the same Executor on different machines.
 
-To add distributed replicas to a Flow, the Executor replicas must be running on their respective machines already.
+To add distributed replicas to a Flow, the Executor replicas must already be running on their respective machines.
 
````{admonition} External Executors
:class: seealso
-For more information about starting Executors outside of a Flow, see our {ref}`how-to on external Executors `.
+To start Executors outside a Flow, see our {ref}`how-to on external Executors `.
````
 
Then, you can add them by specifying their hosts, ports, and `external=True`:
 
```python
from jina import Flow
 
Flow().add(host='localhost:1234,91.198.174.192:12346', external=True)
```
 
-This will connect to `grpc://localhost:12345` and `grpc://91.198.174.192:12346` as two replicas of the same Executor.
+This connects to `grpc://localhost:1234` and `grpc://91.198.174.192:12346` as two replicas of the same Executor.
 
(partition-data-by-using-shards)=
## Partition data with shards
 
-Sharding can be used to partition data (like an Index) into several parts. This enables the distribution of data across multiple machines.
-This is helpful in two situations:
+Sharding partitions data (like an index) into several parts. This distributes the data across multiple machines.
+This is helpful when:
 
-- When the full data does not fit on one machine
-- When the latency of a single request becomes too large.
+- The full data does not fit on one machine.
+- The latency of a single request becomes too large.
 
-Then splitting the load across two or more machines yields better results.
+In these cases splitting the load across two or more machines yields better results.
 
-For Shards, you can define which shard (instance) will receive the request from its predecessor. This behaviour is called `polling`. `ANY` means only one shard will receive a request and `ALL` means that all Shards will receive a request.
+For shards, you can define which shard (instance) receives the request from its predecessor. This behavior is called `polling`. `ANY` means only one shard receives a request, while `ALL` means that all shards receive a request.
Polling can be configured per endpoint (like `/index`) and {class}`~jina.Executor`.
By default the following `polling` is applied:
- `ANY` for endpoints at `/index`
@@ -202,11 +200,11 @@ By default the following `polling` is applied:
 
When you shard your index, the request handling usually differs between index and search requests:
 
-- Index (and update, delete) will just be handled by a single shard => `polling='any'`
-- Search requests are handled by all Shards => `polling='all'`
+- Index (and update, delete) are handled by a single shard => `polling='any'`
+- Search requests are handled by all shards => `polling='all'`
 
For indexing, you only want a single shard to receive a request, because this is sufficient to add it to the index.
-For searching, you probably need to send the search request to all Shards, because the requested data could be on any shard.
+For searching, you probably need to send the search request to all shards, because the requested data could be on any shard.
 
```python Usage
from jina import Flow
 
flow = Flow().add(name='ExecutorWithShards', shards=3, polling={'/custom': 'ALL', '/search': 'ANY', '*': 'ANY'})
```
 
-The example above will result in a {class}`~jina.Flow` having the Executor `ExecutorWithShards` with the following polling options configured
+The above example results in a {class}`~jina.Flow` having the Executor `ExecutorWithShards` with the following polling options:
 
- `/index` has polling `ANY` (the default value is not changed here)
- `/search` has polling `ANY` as it is explicitly set (usually that should not be necessary)
- `/custom` has polling `ALL`
-- all other endpoints will have polling `ANY` due to the usage of `*` as a wildcard to catch all other cases
+- All other endpoints have polling `ANY` due to using `*` as a wildcard to catch all other cases
 
(flow-filter)=
## Filter by condition
 
-To define a filter condition, you can use [DocArrays rich query language](https://docarray.jina.ai/fundamentals/documentarray/find/#query-by-conditions).
-You can set a filter for each individual {class}`~jina.Executor`, and every Document that does not satisfy the filter condition will be
+To define a filter condition, use [DocArray's rich query language](https://docarray.jina.ai/fundamentals/documentarray/find/#query-by-conditions).
+You can set a filter for each individual {class}`~jina.Executor`, and every Document that does not satisfy the filter condition is
removed before reaching that Executor.
 
-To add a filter condition to an Executor, you pass it to the `when` parameter of {meth}`~jina.Flow.add` method of the Flow.
-This then defines *when* a document will be processed by the Executor:
+To add a filter condition to an Executor, pass it to the `when` parameter of the Flow's {meth}`~jina.Flow.add` method.
+This then defines *when* a document is processed by the Executor: ````{tab} Python @@ -242,7 +240,7 @@ from jina import Flow, DocumentArray, Document f = Flow().add().add(when={'tags__key': {'$eq': 5}}) # Create the empty Flow, add condition -with f: # Using it as a Context Manager will start the Flow +with f: # Using it as a Context Manager starts the Flow ret = f.post( on='/search', inputs=DocumentArray([Document(tags={'key': 5}), Document(tags={'key': 4})]), @@ -250,7 +248,7 @@ with f: # Using it as a Context Manager will start the Flow print( ret[:, 'tags'] -) # only the Document fullfilling the condition is processed and therefore returned. +) # only the Document fulfilling the condition is processed and therefore returned. ``` ```shell @@ -280,7 +278,7 @@ from jina import Flow f = Flow.load_config('flow.yml') # Load the Flow definition from Yaml file -with f: # Using it as a Context Manager will start the Flow +with f: # Using it as a Context Manager starts the Flow ret = f.post( on='/search', inputs=DocumentArray([Document(tags={'key': 5}), Document(tags={'key': 4})]), @@ -288,7 +286,7 @@ with f: # Using it as a Context Manager will start the Flow print( ret[:, 'tags'] -) # only the Document fullfilling the condition is processed and therefore returned. +) # only the Document fulfilling the condition is processed and therefore returned. ``` ```shell @@ -296,12 +294,12 @@ print( ``` ```` -Note that whenever a Document does not satisfy the `when` condition of a filter, the filter removes it *for the entire branch of the Flow*. -This means that every Executor that is located behind a filter is affected by this, not just the specific Executor that defines the condition. -Like with a real-life filter, once something does not pass through it, it will not re-appear behind the filter. +Note that if a Document does not satisfy the `when` condition of a filter, the filter removes the Document *for the entire branch of the Flow*. +This means that every Executor located behind a filter is affected by this, not just the specific Executor that defines the condition. +As with a real-life filter, once something fails to pass through it, it no longer continues down the pipeline. Naturally, parallel branches in a Flow do not affect each other. So if a Document gets filtered out in only one branch, it can -still be used in the other branch, and also after the branches are re-joined together: +still be used in the other branch, and also after the branches are re-joined: ````{tab} Parallel Executors @@ -373,15 +371,15 @@ print(ret[:, 'tags']) # No Document satisfies both sequential filters ```` This feature is useful to prevent some specialized Executors from processing certain Documents. -It can also be used to build *switch-like nodes*, where some Documents pass through one parallel branch of the Flow, -while other Documents pass through a different branch. +It can also be used to build *switch-like nodes*, where some Documents pass through one branch of the Flow, +while other Documents pass through a different parallel branch. -Also note that whenever a Document does not satisfy the condition of an Executor, it will not even be sent to that Executor. -Instead, only a lightweight Request without any payload will be transferred. +Note that whenever a Document does not satisfy the condition of an Executor, it is not even sent to that Executor. +Instead, only a lightweight Request without any payload is transferred. 
This means that you can not only use this feature to build complex logic, but also to minimize your networking overhead.
 
````{admonition} See Also
:class: seealso
 
-For a hands-on example on how to leverage these filter conditions, see {ref}`this how-to `.
+For a hands-on example of leveraging filter conditions, see {ref}`this how-to `.
````
diff --git a/docs/fundamentals/flow/when-things-go-wrong.md b/docs/fundamentals/flow/when-things-go-wrong.md
index de53e6b0b66fd..35c26e947bf33 100644
--- a/docs/fundamentals/flow/when-things-go-wrong.md
+++ b/docs/fundamentals/flow/when-things-go-wrong.md
@@ -1,60 +1,57 @@
(flow-error-handling)=
# Handle exceptions
 
-When building a complex solution, unfortunately things go wrong sometimes.
+When building a complex solution, things sometimes go wrong.
Jina does its best to recover from failures, handle them gracefully, and report useful failure information to the user.
 
-The following outlines a number of (more or less) common failure cases, and explains how Jina responds to each one of them.
+The following outlines (more or less) common failure cases, and explains how Jina responds to each.
 
## Executor errors
 
-In general there are two places where an Executor level error can be introduced.
+In general there are two places where an Executor-level error can be introduced:
 
-If an {class}`~jina.Executor`'s `__init__` method raises and Exception, the {class}`~jina.Flow` cannot start.
-In this case this Exception is gets raised by the Executor runtime, and the Flow throws a `RuntimeFailToStart` Exception.
+- If an {class}`~jina.Executor`'s `__init__` method raises an Exception, the {class}`~jina.Flow` cannot start.
+In this case the Executor runtime raises the Exception, and the Flow throws a `RuntimeFailToStart` Exception.
+- If one of the Executor's `@requests` methods raises an Exception, the error message is added to the response
+and sent back to the client. If the gRPC or WebSockets protocols are used, the networking stream is not interrupted and can accept further requests.
 
-If one of the Executor's `@requests` methods raises and Exception, the offending error message gets added to the response
-and is sent back to the client.
-If the gRPC or WebSocket protocols are used, the networking stream is not interrupted and can accept further requests.
 
-In all cases, the {ref}`Jina Client ` will raise an Exception.
+In both cases, the {ref}`Jina Client ` raises an Exception.
 
## Exit on exceptions
 
-Some exceptions like network errors or request timeouts can be transient and can recover automatically. In some cases there
-can be fatal errors or user defined errors that can put the Executor in an unuseable state. The executor can be restarted to recover from such a
-state. Locally the flow must to be re-run manually to restore the Executor availability.
+Some exceptions like network errors or request timeouts can be transient and can recover automatically. Sometimes
+fatal errors or user-defined errors put the Executor in an unusable state, in which case it can be restarted. Locally the Flow must be re-run manually to restore Executor availability.
 
-On Kubernetes deployments, the process can be automated by terminating the Exeuctor process which will cause the pod to terminate. The availability
- is restored by the autoscaler by creating a new pod to replace the terminated pod. The termination can be enabled for one or more errors by using the `exit_on_exceptions` argument when creating the Executor in a Flow.
Upon matching the caught exception, the Executor will perform a gracefull termination.
+On Kubernetes deployments, this can be automated by terminating the Executor process, causing the Pod to terminate. The autoscaler restores availability
+by creating a new Pod to replace the terminated one. Termination can be enabled for one or more errors by using the `exit_on_exceptions` argument when adding the Executor to a Flow. When it matches the caught exception, the Executor terminates gracefully.
 
-A sample Flow can be `Flow().add(uses=MyExecutor, exit_on_exceptions: ['Exception', 'RuntimeException'])`. The `exit_on_exceptions` argument accepts a list of python or user defined custom Exception or Error class names.
+A sample Flow can be `Flow().add(uses=MyExecutor, exit_on_exceptions=['Exception', 'RuntimeException'])`. The `exit_on_exceptions` argument accepts a list of Python or user-defined Exception or Error class names.
 
## Network errors
 
-When an {ref}`Executor or Head ` can't be reached by the {class}`~jina.Flow`'s gateway, it attempts to re-connect
+When a {class}`~jina.Flow`'s Gateway can't reach an {ref}`Executor or Head `, the Flow attempts to re-connect
to the faulty deployment according to a retry policy.
The same applies to calls to Executors that time out.
-The specifics of this policy depend on the environment the Flow find itself in, and are outlined below.
+The specifics of this policy depend on the Flow's environment, as outlined below.
 
````{admonition} Hint: Prevent Executor timeouts
:class: hint
-If you regularly experience timouts on Executor calls, you may want to consider setting the Flow's `timeout_send` attribute to a larger value.
-You can do this by setting `Flow(timeout_send=time_in_ms)` in Python
+If you regularly experience Executor call timeouts, set the Flow's `timeout_send` attribute to a larger value
+with `Flow(timeout_send=time_in_ms)` in Python
or `timeout_send: time_in_ms` in your Flow YAML with-block.
-Especially neural network forward passes on CPU (and other unusually expensive operations) can lead to timeouts with the default setting.
+Neural network forward passes on CPU (and other unusually expensive operations) commonly lead to timeouts with the default setting.
````
 
````{admonition} Hint: Custom retry policy
:class: hint
-You can override the default retry policy and instead choose a number of retries performed for each Executor.
-To perform `n` retries, set `Flow(retries=n)` in Python, or `retries: n` in the Flow
+You can override the default retry policy and instead choose a number of retries performed for each Executor
+with `Flow(retries=n)` in Python, or `retries: n` in the Flow
YAML `with` block.
````
 
-If, during the complete execution of this policy, no successful call to any Executor replica could be made, the request is aborted
+If, during the complete execution of this policy, no successful call to any Executor replica can be made, the request is aborted
and the failure is {ref}`reported to the client `.
 
### Request retries: Local deployment
 
If a Flow is deployed locally (with or without {ref}`containerized Executors `), the
following policy for failed requests applies on a per-Executor basis:
 
-- If there are multiple replicas of the target Executor, try each replica at least once, or until the request succeeds
-- Irrespective of the number of replicas, try the request at least 3 times, or until it succeeds.
If there are fewer than 3 replicas, try them in a round-robin fashion +- If there are multiple replicas of the target Executor, try each replica at least once, or until the request succeeds. +- Irrespective of the number of replicas, try the request at least three times, or until it succeeds. If there are fewer than three replicas, try them in a round-robin fashion. ### Request retries: Deployment with Kubernetes @@ -73,18 +70,18 @@ If a Flow is {ref}`deployed in Kubernetes ` without a service mesh, :class: seealso The impossibility of retries across different replicas is a limitation of Kubernetes in combination with gRPC. -If you want to learn more about this limitation, see [this](https://kubernetes.io/blog/2018/11/07/grpc-load-balancing-on-kubernetes-without-tears/) Kubernetes Blog post. +If you want to learn more about this limitation, see [this](https://kubernetes.io/blog/2018/11/07/grpc-load-balancing-on-kubernetes-without-tears/) Kubernetes blog post. An easy way to overcome this limitation is to use a service mesh like [Linkerd](https://linkerd.io/). ```` Concretely, this results in the following per-Executor retry policy: -- Try the request 3 times, or until it succeeds, always on the same replica of the Executor +- Try the request three times, or until it succeeds, always on the same replica of the Executor ### Request retries: Deployment with Kubernetes and service mesh -A Kubernetes service mesh can enable load balancing, and thus retries, between replicas of an Executor. +A Kubernetes service mesh can enable load balancing, and thus retries, between an Executor's replicas. ````{admonition} Hint :class: hint @@ -93,7 +90,7 @@ While Jina supports any service mesh, the output of `f.to_kubernetes_yaml()` alr If a service mesh is installed alongside Jina in the Kubernetes cluster, the following retry policy applies for each Executor: -- Try the request at least 3 times, or until it succeeds +- Try the request at least three times, or until it succeeds - Distribute the requests to the replicas according to the service mesh's configuration @@ -113,29 +110,29 @@ YAML `with` block. If the retry policy is exhausted for a given request, the error is reported back to the corresponding client. -The resulting error message will contain the *network address* of the failing Executor. -If multiple replicas are present, all addresses will be reported - unless the Flow is deployed using Kubernetes, in which -case the replicas are managed by k8s and only a single address is available. +The resulting error message contains the *network address* of the failing Executor. +If multiple replicas are present, all addresses are reported - unless the Flow is deployed using Kubernetes, in which +case the replicas are managed by Kubernetes and only a single address is available. 
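+
+On the client side, an exhausted retry policy surfaces as an exception you can catch. A minimal sketch (the port, endpoint, and input here are assumptions, not part of a specific example above):
+
+```python
+from jina import Client, Document
+
+client = Client(port=12345)  # assumed port of the served Flow
+
+try:
+    client.post(on='/search', inputs=Document())
+except ConnectionError as e:
+    # the error message contains the failing Executor's network address
+    print(f'Request failed: {e}')
+```
+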
-Depending on the client-to-gateway protocol, and they type of error, the error message will be returned in one of the following ways:
+Depending on the client-to-gateway protocol, and the type of error, the error message is returned in one of the following ways:
 
**Could not connect to Executor:**
 
-- **gRPC**: A response with the gRPC status code 14 (*UNAVAILABLE*) is issued, and the error message is contained in the `details` field
-- **HTTP**: A response with the HTTP status code 503 (*SERVICE_UNAVAILABLE*) is issued, and the error message is contained in `response['header']['status']['description']`
-- **WebSocket**: The stream closes with close code 1011 (*INTERNAL_ERROR*) and the message is contained in the WS close message
+- **gRPC**: A response with the gRPC status code 14 (*UNAVAILABLE*) is issued, and the error message is contained in the `details` field.
+- **HTTP**: A response with the HTTP status code 503 (*SERVICE_UNAVAILABLE*) is issued, and the error message is contained in `response['header']['status']['description']`.
+- **WebSockets**: The stream closes with close code 1011 (*INTERNAL_ERROR*) and the message is contained in the WebSockets close message.
 
**Call to Executor timed out:**
 
-- **gRPC**: A response with the gRPC status code 4 (*DEADLINE_EXCEEDED*) is issued, and the error message is contained in the `details` field
-- **HTTP**: A response with the HTTP status code 504 (*GATEWAY_TIMEOUT*) is issued, and the error message is contained in `response['header']['status']['description']`
-- **WebSocket**: The stream closes with close code 1011 (*INTERNAL_ERROR*) and the message is contained in the WS close message
+- **gRPC**: A response with the gRPC status code 4 (*DEADLINE_EXCEEDED*) is issued, and the error message is contained in the `details` field.
+- **HTTP**: A response with the HTTP status code 504 (*GATEWAY_TIMEOUT*) is issued, and the error message is contained in `response['header']['status']['description']`.
+- **WebSockets**: The stream closes with close code 1011 (*INTERNAL_ERROR*) and the message is contained in the WebSockets close message.
 
-For any of these scenarios, the {ref}`Jina Client ` will raise a `ConnectionError` containing the error message.
+For any of these scenarios, the {ref}`Jina Client ` raises a `ConnectionError` containing the error message.
 
## Debug via breakpoint
 
-Standard Python breakpoints will not work inside `Executor` methods when called inside a Flow context manager. Nevertheless, `import epdb; epdb.set_trace()` will work just as a native python breakpoint. Note that you need to `pip install epdb` to have access to this type of breakpoints.
+Standard Python breakpoints don't work inside `Executor` methods when called inside a Flow context manager. Nevertheless, `import epdb; epdb.set_trace()` works just like a native Python breakpoint. Note that you need to `pip install epdb` to access this type of breakpoint.
 
````{tab} ✅ Do
 
```python
from jina import Flow, Executor, requests
 
 
class CustomExecutor(Executor):
    @requests
    def foo(self, **kwargs):
        a = 25
 
        import epdb
 
        epdb.set_trace()
 
        print(f'\n\na={a}\n\n')
 
 
def main():
    f = Flow().add(uses=CustomExecutor)
    with f:
        f.post(on='')
 
 
if __name__ == '__main__':
    main()
```
@@ -185,4 +182,4 @@ def main():
 
if __name__ == '__main__':
    main()
```
-````
\ No newline at end of file
+````
diff --git a/docs/fundamentals/flow/yaml-spec.md b/docs/fundamentals/flow/yaml-spec.md
index 6508e60b5a855..97461472c8121 100644
--- a/docs/fundamentals/flow/yaml-spec.md
+++ b/docs/fundamentals/flow/yaml-spec.md
@@ -1,18 +1,11 @@
(flow-yaml-spec)=
# {octicon}`file-code` YAML specification
 
-This page outlines the specification for valid {class}`~jina.Executor` YAML files.
-
-Such YAML configurations can be used to generate a {class}`~jina.Executor` object via {meth}`~jina.jaml.JAMLCompatible.load_config`.
-
To generate a YAML configuration from a {class}`~jina.Flow` Python object, use {meth}`~jina.jaml.JAMLCompatible.save_config`.
 
## YAML completion in IDE
 
-We provide a [JSON Schema](https://json-schema.org/) for your IDE to enable code completion, syntax validation, members listing and displaying help text. Here is a [video tutorial](https://youtu.be/qOD-6mihUzQ) to walk you through the setup.
-
-
-
+We provide a [JSON Schema](https://json-schema.org/) for your IDE to enable code completion, syntax validation, members listing and displaying help text.
 
### PyCharm users
 
@@ -33,11 +26,8 @@ We provide a [JSON Schema](https://json-schema.org/) for your IDE to enable code
 
You can bind the Schema to any file suffix you commonly use for Jina Flow's YAML.
 
-
## Example YAML
 
-The following constitutes an example Flow YAML:
-
```yaml
jtype: Flow
version: '1'
@@ -69,7 +59,7 @@ String indicating the version of the Flow.
 
### `with`
 
-Keyword arguments passed to Flow `__init__()` method. You can set Flow-specific arguments and Gateway-specific arguments here:
+Keyword arguments are passed to a Flow's `__init__()` method. You can set Flow-specific arguments and Gateway-specific arguments here:
 
#### Flow arguments
 
@@ -94,9 +84,7 @@ All keyword arguments passed to the Flow {meth}`~jina.Flow.add` method can be us
 
## Variables
 
-Jina Flow YAMLs support variables and variable substitution according to the [Github Actions syntax](https://docs.github.com/en/actions/learn-github-actions/environment-variables).
-
-This means that the following variable substitutions are supported:
+Jina Flow YAML supports variables and variable substitution according to the [Github Actions syntax](https://docs.github.com/en/actions/learn-github-actions/environment-variables):
 
### Environment variables