Binary file added docs/source/_static/images/api_diagram.png
55 changes: 49 additions & 6 deletions docs/source/components/device.rst
Device
======

Device represents an `OAK camera <https://docs.luxonis.com/projects/hardware/en/latest/>`__. All of our devices contain a powerful vision processing unit
(**VPU**) called `Myriad X <https://www.intel.com/content/www/us/en/products/details/processors/movidius-vpu.html>`__.
The VPU is optimized for running AI inference algorithms and for processing sensory inputs (e.g. calculating stereo disparity from two cameras).

Device API
##########

The :code:`Device` object represents an OAK device. When starting the device, you have to upload a :ref:`Pipeline` to it, which will then be executed on the VPU.
When you create the device in the code, the firmware is uploaded together with the pipeline and other assets (such as NN blobs).

.. code-block:: python

pipeline = depthai.Pipeline()

# Create nodes, configure them and link them together


# Upload the pipeline to the device
with depthai.Device(pipeline) as device:
# Print Myriad X Id (MxID), USB speed, and available cameras on the device
print('MxId:',device.getDeviceInfo().getMxId())
print('USB speed:',device.getUsbSpeed())
print('Connected cameras:',device.getConnectedCameras())

# Input queue, to send messages from the host to the device (you can receive the message on the device with XLinkIn)
input_q = device.getInputQueue("input_name", maxSize=4, blocking=False)
# Output queue, to receive messages from the device on the host (messages are sent from the device with XLinkOut)
output_q = device.getOutputQueue("output_name", maxSize=4, blocking=False)

while True:
# Get a message that came from the queue
output_q.get() # Or output_q.tryGet() for non-blocking

# Send a message to the device

If you want to use multiple devices on a host, check :ref:`Multiple DepthAI per host`.

Device queues
#############

After initializing the device, one has to initialize the input/output queues as well. These queues will be located on the host computer (in RAM).

.. code-block:: python

queue.setMaxSize(10)
queue.setBlocking(True)

Specifying arguments for :code:`getOutputQueue` method
######################################################

When obtaining the output queue (example code below), the :code:`maxSize` and :code:`blocking` arguments should be set depending on how
the messages are intended to be used, where :code:`name` is the name of the output stream.

Since queues are on the host computer, memory (RAM) usually isn't that scarce. But if you are using a small SBC like the RPi Zero, which has only 0.5 GB of RAM,
you might need to limit the maximum queue size as well.

.. code-block:: python

with dai.Device(pipeline) as device:
queueLeft = device.getOutputQueue(name="manip_left", maxSize=8, blocking=False)
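As a rough rule of thumb, the host RAM held by a full queue of uncompressed frames is the frame size multiplied by the queue length. The helper below is our own illustration, not part of the DepthAI API:

```python
def queue_ram_bytes(width: int, height: int, channels: int, max_size: int) -> int:
    """Worst-case host RAM used by a full queue of uncompressed frames."""
    return width * height * channels * max_size

# A queue of eight 1080p BGR frames already holds about 47 MiB:
print(queue_ram_bytes(1920, 1080, 3, 8))  # -> 49766400
```

On a board with 0.5 GB of RAM, a handful of such queues adds up quickly, which is why lowering :code:`maxSize` there can matter.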

If only the latest results are relevant and previous ones do not matter, one can set :code:`maxSize = 1` and :code:`blocking = False`.
That way only the latest message will be kept (:code:`maxSize = 1`) and it might also be overwritten in order to avoid waiting for
the host to process every frame, thus providing only the latest data (:code:`blocking = False`).
However, if there are a lot of dropped/overwritten frames because the host isn't able to process them fast enough
(e.g. a single-threaded environment which does some heavy computing), the :code:`maxSize` could be set to a higher
number, which would increase the queue size and reduce the number of dropped frames.
Specifically, at 30 FPS a new frame is received every ~33 ms, so if your host is able to process a frame within that time, the :code:`maxSize`
could be set to :code:`1`, otherwise to :code:`2` for processing times up to 66 ms, and so on.
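The sizing rule above can be sketched as a small helper. The function name and formula are our own illustration, not part of the DepthAI API:

```python
import math

def suggested_max_size(fps: float, processing_ms: float) -> int:
    """Smallest queue size that absorbs a given per-frame processing time."""
    frame_interval_ms = 1000.0 / fps
    return max(1, math.ceil(processing_ms / frame_interval_ms))

print(suggested_max_size(30, 25))  # -> 1, host keeps up within one frame interval
print(suggested_max_size(30, 60))  # -> 2, needs room for a second frame
```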

If, however, messages should be retrieved at certain intervals rather than continuously, the queue can be sized for that.
An example would be checking the results of a :code:`DetectionNetwork` for the last second based on some other event,
in which case one could set :code:`maxSize = 30` and :code:`blocking = False`
(assuming the :code:`DetectionNetwork` produces messages at ~30 FPS).
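The behaviour of such a non-blocking, fixed-size queue can be modelled on the host with a plain ring buffer. This is a pure-Python sketch for intuition only; DepthAI's real queues live inside the library:

```python
from collections import deque

class NonBlockingQueueModel:
    """Models maxSize with blocking=False: when full, the oldest message is overwritten."""

    def __init__(self, max_size: int):
        self._q = deque(maxlen=max_size)

    def send(self, msg):
        self._q.append(msg)  # never blocks; silently drops the oldest entry when full

    def try_get_all(self):
        """Return and remove all currently buffered messages."""
        msgs = list(self._q)
        self._q.clear()
        return msgs

q = NonBlockingQueueModel(max_size=3)
for seq in range(5):      # producer outpaces the consumer
    q.send(seq)
print(q.try_get_all())    # only the 3 newest messages survive -> [2, 3, 4]
```

With :code:`max_size=30` this is exactly the "last second at 30 FPS" buffer described above.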

The :code:`blocking = True` option is mostly used when the correct order of messages is needed.
Two examples would be:

- matching passthrough frames with their original frames (e.g. full 4K frames and the smaller preview frames that went into the NN),
- encoding (most prominently H.264/H.265, as frame drops can lead to artifacts).
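For the first case, DepthAI messages carry a sequence number (:code:`getSequenceNum()`), so pairing two ordered streams reduces to a simple merge. The sketch below operates on plain dicts to stay self-contained; the function name is our own:

```python
def pair_by_sequence(frames, detections, seq=lambda m: m["seq"]):
    """Pair two streams ordered by sequence number; unmatched entries are skipped."""
    pairs, fi, di = [], iter(frames), iter(detections)
    f, d = next(fi, None), next(di, None)
    while f is not None and d is not None:
        if seq(f) == seq(d):
            pairs.append((f, d))
            f, d = next(fi, None), next(di, None)
        elif seq(f) < seq(d):
            f = next(fi, None)   # frame has no matching detection; advance frames
        else:
            d = next(di, None)   # detection is older than any remaining frame
    return pairs

frames = [{"seq": s} for s in range(1, 6)]
dets = [{"seq": 2}, {"seq": 4}]
print(pair_by_sequence(frames, dets))
```

With :code:`blocking = True` neither stream drops messages, so every frame finds its partner; with non-blocking queues the merge simply skips overwritten entries.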

Blocking behaviour
******************

Expand Down
50 changes: 46 additions & 4 deletions docs/source/components/messages.rst
Messages
========

Messages are sent between linked :ref:`Nodes`. The only way nodes communicate with each other is by sending messages from one to another. In the
table of contents (left side of the page), **all DepthAI messages are listed** under the :code:`Messages` entry. You can click on them to find out more.

.. rubric:: Creating a message in Script node

A DepthAI message can be created on the device, either automatically by a node or manually inside the :ref:`Script` node. In the example below,
the code is taken from the :ref:`Script camera control` example, where a :ref:`CameraControl` message is created inside the Script node every second
and sent to the :ref:`ColorCamera`'s input (:code:`cam.inputControl`).

.. code-block:: python

script = pipeline.create(dai.node.Script)
script.setScript("""
# Create a message
ctrl = CameraControl()
# Configure the message
ctrl.setCaptureStill(True)
# Send the message from the Script node
node.io['out'].send(ctrl)
""")

.. rubric:: Creating a message on a Host

A message can also be created on a host computer and sent to the device via the :ref:`XLinkIn` node. The :ref:`RGB Camera Control`, :ref:`Video & MobilenetSSD`
and :ref:`Stereo Depth from host` code examples demonstrate this functionality. In the example below, we have removed all the code
that isn't relevant, to showcase how a message can be created on the host and sent to the device via XLink.

.. code-block:: python

# Create XLinkIn node and configure it
xin = pipeline.create(dai.node.XLinkIn)
xin.setStreamName("frameIn")
xin.out.link(nn.input) # Connect it to NeuralNetwork's input

with dai.Device(pipeline) as device:
# Create input queue, which allows you to send messages to the device
qIn = device.getInputQueue("frameIn")
# Create ImgFrame message
img = dai.ImgFrame()
img.setData(frame)
img.setWidth(300)
img.setHeight(300)
qIn.send(img) # Send the message to the device

.. rubric:: Creating a message on an external MCU

A message can also be created on an external MCU and sent to the device via the :ref:`SPIIn` node. A demo of this functionality is the
`spi_in_landmark <https://github.com/luxonis/esp32-spi-message-demo/tree/main/spi_in_landmark>`__ example.

.. toctree::
:maxdepth: 0
54 changes: 11 additions & 43 deletions docs/source/components/pipeline.rst
Pipeline
========

Pipeline is a collection of :ref:`nodes <Nodes>` and links between them. This flow provides extensive flexibility for how users can use their
OAK device. When the pipeline object is passed to the :ref:`Device` object, the pipeline gets serialized to JSON and sent to the OAK device via XLink.

Pipeline first steps
####################

To get DepthAI up and running, you have to create a pipeline, populate it with nodes, configure the nodes, and link them together. After that, the pipeline
can be loaded onto the :ref:`Device` and be started.

.. code-block:: python

pipeline = depthai.Pipeline()

# If required, specify OpenVINO version
pipeline.setOpenVINOVersion(depthai.OpenVINO.Version.VERSION_2021_4)

# Create nodes, configure them and link them together

# Upload the pipeline to the device
with depthai.Device(pipeline) as device:

# Set input/output queues to configure device/host communication through the XLink...


Specifying OpenVINO version
###########################

The reason behind this is that OpenVINO doesn't provide the version inside the blob.
# Set the correct version:
pipeline.setOpenVINOVersion(depthai.OpenVINO.Version.VERSION_2021_4)

Using multiple devices
######################

If the user has multiple DepthAI devices, each device can run a different pipeline or the same pipeline
(`demo here <https://github.com/luxonis/depthai-experiments/tree/master/gen2-multiple-devices>`__). To use a different pipeline for each device,
you can create multiple pipelines and pass the desired pipeline to the desired device on initialization.
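A minimal sketch of this pattern, assuming the :code:`dai.Device.getAllAvailableDevices()` and :code:`dai.Device(pipeline, device_info)` APIs; the import is kept inside the function so the sketch stays importable without connected hardware, and the helper name is our own:

```python
def run_on_all_devices(build_pipeline):
    """Open every connected OAK device and give each one its own pipeline."""
    import depthai as dai  # imported lazily; requires connected devices to be useful

    devices = []
    for device_info in dai.Device.getAllAvailableDevices():
        pipeline = build_pipeline()  # build a fresh pipeline per device
        devices.append(dai.Device(pipeline, device_info))
    return devices
```

Passing the same :code:`build_pipeline` callable to every device runs an identical pipeline on each; supplying different callables per device gives each its own flow.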

How to place it
###############
51 changes: 15 additions & 36 deletions docs/source/index.rst
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.

DepthAI API Documentation
=========================

.. image:: https://github.com/luxonis/depthai-python/workflows/Python%20Wheel%20CI/badge.svg?branch=gen2_develop
:target: https://github.com/luxonis/depthai-python/actions?query=workflow%3A%22Python+Wheel+CI%22+branch%3A%22gen2_develop%22

DepthAI API allows users to connect to, configure and communicate with their OAK devices.
We support both :ref:`Python API <Python API Reference>` and :ref:`C++ API <C++ API Reference>`.

.. image:: /_static/images/api_diagram.png


Basic glossary
--------------

- **Host side** is a computer, like a PC or RPi, to which an OAK device is connected.
- **Device side** is the OAK device itself. If something is happening on the device side, it means that it's running on the `Myriad X VPU <https://www.intel.com/content/www/us/en/products/details/processors/movidius-vpu/movidius-myriad-x.html>`__. More :ref:`information here <components_device>`.
- **Pipeline** is a complete workflow on the device side, consisting of :ref:`nodes <Nodes>` and connections between them. More :ref:`information here <components_device>`.
- **Node** is a single functionality of DepthAI. :ref:`Nodes` have inputs and outputs, and have configurable properties (like the resolution on the camera node).
- **Connection** is a link between one node's output and another one's input. In order to define the pipeline dataflow, the connections define where to send :ref:`messages <Messages>` in order to achieve an expected result.
- **XLink** is a middleware capable of exchanging data between the device and the host. The :ref:`XLinkIn` node allows sending data from the host to a device, while :ref:`XLinkOut` does the opposite.
- **Messages** are transferred between nodes, as defined by a connection. More :ref:`information here <components_messages>`.

Getting started
---------------

First, you need to :ref:`install the DepthAI <Installation>` library and its dependencies.

After installation, you can continue with an insightful :ref:`Hello World tutorial <Hello World>`, or with :ref:`code examples <Code Samples>`, where different
node functionalities are presented with code.

.. toctree::
:maxdepth: 0

Home <self>
install.rst

.. toctree::
:maxdepth: 1
Expand Down