diff --git a/docs/source/_static/images/api_diagram.png b/docs/source/_static/images/api_diagram.png
new file mode 100644
index 000000000..d68bcf714
Binary files /dev/null and b/docs/source/_static/images/api_diagram.png differ
diff --git a/docs/source/components/device.rst b/docs/source/components/device.rst
index c7b318d07..0ad428b10 100644
--- a/docs/source/components/device.rst
+++ b/docs/source/components/device.rst
@@ -3,8 +3,15 @@
Device
======
-Device is a DepthAI `module `__. After the :ref:`Pipeline` is defined, it can be uploaded to the device.
-When you create the device in the code, firmware is uploaded together with the pipeline.
+Device represents an `OAK camera `__. On all of our devices there's a powerful vision processing unit
+(**VPU**), called `Myriad X `__.
+The VPU is optimized for performing AI inference algorithms and for processing sensory inputs (eg. calculating stereo disparity from two cameras).
+
+Device API
+##########
+
+The :code:`Device` object represents an OAK device. When starting the device, you have to upload a :ref:`Pipeline` to it, which will get executed on the VPU.
+When you create the device in the code, firmware is uploaded together with the pipeline and other assets (such as NN blobs).
.. code-block:: python
@@ -14,8 +21,10 @@ When you create the device in the code, firmware is uploaded together with the p
# Upload the pipeline to the device
with depthai.Device(pipeline) as device:
- # Start the pipeline that is now on the device
- device.startPipeline()
+ # Print Myriad X Id (MxID), USB speed, and available cameras on the device
+ print('MxId:', device.getDeviceInfo().getMxId())
+ print('USB speed:', device.getUsbSpeed())
+ print('Connected cameras:', device.getConnectedCameras())
# Input queue, to send a message from the host to the device (you can receive the message on the device with XLinkIn)
input_q = device.getInputQueue("input_name", maxSize=4, blocking=False)
@@ -24,7 +33,7 @@ When you create the device in the code, firmware is uploaded together with the p
output_q = device.getOutputQueue("output_name", maxSize=4, blocking=False)
while True:
- # Get the message from the queue
+ # Get a message that came from the queue
output_q.get() # Or output_q.tryGet() for non-blocking
# Send a message to the device
@@ -40,7 +49,7 @@ If you want to use multiple devices on a host, check :ref:`Multiple DepthAI per
Device queues
#############
-After initializing the device, one has to initialize the input/output queues as well.
+After initializing the device, one has to initialize the input/output queues as well. These queues will be located on the host computer (in RAM).
.. code-block:: python
@@ -62,6 +71,40 @@ flags determine the behavior of the queue in this case. You can set these flags
queue.setMaxSize(10)
queue.setBlocking(True)
+Specifying arguments for the :code:`getOutputQueue` method
+##########################################################
+
+When obtaining the output queue (example code below), the :code:`maxSize` and :code:`blocking` arguments should be set depending on how
+the messages are intended to be used. Here, :code:`name` is the name of the output stream.
+
+Since queues are on the host computer, memory (RAM) usually isn't that scarce. But if you are using a small SBC like the RPi Zero, which has only 0.5GB of RAM,
+you might need to limit the maximum queue size as well.
+
+.. code-block:: python
+
+ with dai.Device(pipeline) as device:
+ queueLeft = device.getOutputQueue(name="manip_left", maxSize=8, blocking=False)
+
+If only the latest results are relevant and previous ones do not matter, one can set :code:`maxSize = 1` and :code:`blocking = False`.
+That way only the latest message will be kept (:code:`maxSize = 1`), and it might also be overwritten in order to avoid waiting for
+the host to process every frame, thus providing only the latest data (:code:`blocking = False`).
+However, if there are a lot of dropped/overwritten frames because the host isn't able to process them fast enough
+(eg. a single-threaded environment that does some heavy computing), the :code:`maxSize` could be set to a higher
+number, which would increase the queue size and reduce the number of dropped frames.
+Specifically, at 30 FPS, a new frame is received every ~33ms, so if your host is able to process a frame in that time, the :code:`maxSize`
+could be set to :code:`1`, otherwise to :code:`2` for processing times of up to 66ms, and so on.
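+
+As a minimal sketch of this "latest message only" setup (the stream name :code:`nn` here is illustrative):
+
+.. code-block:: python
+
+    with dai.Device(pipeline) as device:
+        # Keep only the newest message; older ones get overwritten instead of queued
+        qNn = device.getOutputQueue(name="nn", maxSize=1, blocking=False)
+        while True:
+            msg = qNn.tryGet() # Non-blocking; returns None if no new message arrived
+            if msg is not None:
+                pass # Process only the latest message here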
+
+If, however, messages should be retrieved at certain intervals rather than as they arrive, one could configure the queue differently.
+An example would be checking the results of :code:`DetectionNetwork` for the last 1 second based on some other event,
+in which case one could set :code:`maxSize = 30` and :code:`blocking = False`
+(assuming :code:`DetectionNetwork` produces messages at ~30FPS), as sketched below.
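+
+A minimal sketch of such a rolling window, assuming the :code:`DetectionNetwork` output is streamed as :code:`det` at ~30FPS:
+
+.. code-block:: python
+
+    with dai.Device(pipeline) as device:
+        # Queue holds roughly the last second of detection results at ~30FPS
+        qDet = device.getOutputQueue(name="det", maxSize=30, blocking=False)
+        # On some other event, fetch all messages currently in the queue
+        lastSecondOfResults = qDet.tryGetAll()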
+
+The :code:`blocking = True` option is mostly used when the correct order of messages is needed.
+Two examples would be (a sketch of the first case follows this list):
+
+- matching passthrough frames and their original frames (eg. full 4K frames and smaller preview frames that went into NN),
+- encoding (most prominently H264/H265 as frame drops can lead to artifacts).
+
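+A sketch of matching frames by their sequence numbers; the stream names :code:`preview` and :code:`passthrough` are illustrative:
+
+.. code-block:: python
+
+    with dai.Device(pipeline) as device:
+        # Blocking queues preserve the order of messages, so sequence numbers stay in sync
+        qRgb = device.getOutputQueue(name="preview", maxSize=4, blocking=True)
+        qPass = device.getOutputQueue(name="passthrough", maxSize=4, blocking=True)
+        while True:
+            frame = qRgb.get()
+            passthrough = qPass.get()
+            # Both frames originate from the same capture
+            assert frame.getSequenceNum() == passthrough.getSequenceNum()
+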
Blocking behaviour
******************
diff --git a/docs/source/components/messages.rst b/docs/source/components/messages.rst
index b98f405e4..7130d0147 100644
--- a/docs/source/components/messages.rst
+++ b/docs/source/components/messages.rst
@@ -3,12 +3,54 @@
Messages
========
-Messages are sent between linked :ref:`Nodes`. The only way nodes communicate with each other is by sending messages from one to another.
+Messages are sent between linked :ref:`Nodes`. The only way nodes communicate with each other is by sending messages from one to another. In the
+table of contents (left side of the page) **all DepthAI messages are listed** under the :code:`Messages` entry. You can click on them to find out more.
-If we have :code:`Node1` whose output is linked with :code:`Node2`'s input, a **message** is created in the :code:`Node1`,
-sent out of the :code:`Node1`'s output and to the :code:`Node2`'s input.
+.. rubric:: Creating a message in Script node
-On the table of contents (left side of the page) all messages are listed under the :code:`Messages` entry. You can click on them to find out more.
+A DepthAI message can be created on the device, either automatically by a node or manually inside the :ref:`Script` node. In the example below,
+the code is taken from the :ref:`Script camera control` example, where a :ref:`CameraControl` message is created inside the Script node every second
+and sent to the :ref:`ColorCamera`'s input (:code:`cam.inputControl`).
+
+.. code-block:: python
+
+ script = pipeline.create(dai.node.Script)
+ script.setScript("""
+ # Create a message
+ ctrl = CameraControl()
+ # Configure the message
+ ctrl.setCaptureStill(True)
+ # Send the message from the Script node
+ node.io['out'].send(ctrl)
+ """)
+
+.. rubric:: Creating a message on a Host
+
+A message can also be created on the host computer and sent to the device via the :ref:`XLinkIn` node. The :ref:`RGB Camera Control`, :ref:`Video & MobilenetSSD`
+and :ref:`Stereo Depth from host` code examples demonstrate this functionality. In the example below, we have removed all the code
+that isn't relevant, to showcase how a message can be created on the host and sent to the device via XLink.
+
+.. code-block:: python
+
+ # Create XLinkIn node and configure it
+ xin = pipeline.create(dai.node.XLinkIn)
+ xin.setStreamName("frameIn")
+ xin.out.link(nn.input) # Connect it to NeuralNetwork's input
+
+ with dai.Device(pipeline) as device:
+ # Create input queue, which allows you to send messages to the device
+ qIn = device.getInputQueue("frameIn")
+ # Create ImgFrame message
+ img = dai.ImgFrame()
+ img.setData(frame)
+ img.setWidth(300)
+ img.setHeight(300)
+ qIn.send(img) # Send the message to the device
+
+.. rubric:: Creating a message on an external MCU
+
+A message can also be created on an external MCU and sent to the device via the :ref:`SPIIn` node. A demo of such functionality is the
+`spi_in_landmark `__ example.
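+
+On the pipeline side, receiving such messages could look roughly like this (the stream name, bus ID and the :code:`nn` node are assumptions for illustration):
+
+.. code-block:: python
+
+    # Create SPIIn node, which receives messages from the MCU over SPI
+    spiIn = pipeline.create(dai.node.SPIIn)
+    spiIn.setStreamName("fromMcu")
+    spiIn.setBusId(0)
+    spiIn.out.link(nn.input) # eg. feed received messages to another node
+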
.. toctree::
:maxdepth: 0
diff --git a/docs/source/components/pipeline.rst b/docs/source/components/pipeline.rst
index fe0995ad8..5babf9362 100644
--- a/docs/source/components/pipeline.rst
+++ b/docs/source/components/pipeline.rst
@@ -3,36 +3,28 @@
Pipeline
========
-Pipeline is a collection of :ref:`nodes ` and links between them. This flow provides extensive flexibility that users get for their
-DepthAI device.
-
+Pipeline is a collection of :ref:`nodes ` and the links between them. This flow gives users extensive flexibility in how they use their
+OAK device. When the pipeline object is passed to the :ref:`Device` object, it gets serialized to JSON and sent to the OAK device via XLink.
+
Pipeline first steps
####################
-To get DepthAI up and running, one has to define a pipeline, populate it with nodes, configure the nodes and link them together. After that, the pipeline
+To get DepthAI up and running, you have to create a pipeline, populate it with nodes, configure the nodes and link them together. After that, the pipeline
can be loaded onto the :ref:`Device` and be started.
.. code-block:: python
pipeline = depthai.Pipeline()
+ # If required, specify OpenVINO version
+ pipeline.setOpenVINOVersion(depthai.OpenVINO.Version.VERSION_2021_4)
+
# Create nodes, configure them and link them together
# Upload the pipeline to the device
with depthai.Device(pipeline) as device:
- # Start the pipeline that is now on the device
- device.startPipeline()
-
# Set input/output queues to configure device/host communication through the XLink...
-Using multiple devices
-######################
-
-If user has multiple DepthAI devices, each device can run a separate pipeline or the same pipeline
-(`demo here `__). To use different pipeline for each device,
-you can create multiple pipelines and pass the desired pipeline to the desired device on initialization.
-
Specifying OpenVINO version
###########################
@@ -45,36 +37,12 @@ The reason behind this is that OpenVINO doesn't provide version inside the blob.
# Set the correct version:
pipeline.setOpenVINOVersion(depthai.OpenVINO.Version.VERSION_2021_4)
-Specifying arguments for :code:`getOutputQueue` method
-######################################################
-
-When obtaining the output queue (example code below), the :code:`maxSize` and :code:`blocking` arguments should be set depending on how
-the messages are intended to be used, where :code:`name` is the name of the outputting stream.
-
-.. code-block:: python
-
- with dai.Device(pipeline) as device:
- queueLeft = device.getOutputQueue(name="manip_left", maxSize=8, blocking=False)
-
-If only the latest results are relevant and previous do not matter, one can set :code:`maxSize = 1` and :code:`blocking = False`.
-That way only latest message will be kept (:code:`maxSize = 1`) and it might also be overwritten in order to avoid waiting for
-the host to process every frame, thus providing only the latest data (:code:`blocking = False`).
-However, if there are a lot of dropped/overwritten frames, because the host isn't able to process them fast enough
-(eg. one-threaded environment which does some heavy computing), the :code:`maxSize` could be set to a higher
-number, which would increase the queue size and reduce the number of dropped frames.
-Specifically, at 30 FPS, a new frame is recieved every ~33ms, so if your host is able to process a frame in that time, the :code:`maxSize`
-should be set to :code:`1`, otherwise to :code:`2` for processing times up to 66ms and so on.
-
-If, however, there is a need to have some intervals of wait between retrieving messages, one could specify that differently.
-An example would be checking the results of :code:`DetectionNetwork` for the last 1 second based on some other event,
-in which case one could set :code:`maxSize = 30` and :code:`blocking = False`
-(assuming :code:`DetectionNetwork` produces messages at ~30FPS).
-
-The :code:`blocking = True` option is mostly used when correct order of messages is needed.
-Two examples would be:
+Using multiple devices
+######################
-- matching passthrough frames and their original frames (eg. full 4K frames and smaller preview frames that went into NN),
-- encoding (most prominently H264/H265 as frame drops can lead to artifacts).
+If a user has multiple DepthAI devices, each device can run a different pipeline or the same pipeline
+(`demo here `__). To use a different pipeline for each device,
+you can create multiple pipelines and pass the desired pipeline to the desired device on initialization, as sketched below.
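+
+A minimal sketch of connecting to all available devices, here running the same pipeline on each:
+
+.. code-block:: python
+
+    devices = []
+    for deviceInfo in dai.Device.getAllAvailableDevices():
+        # Connect to the device and upload the pipeline to it
+        device = dai.Device(pipeline, deviceInfo)
+        print('Connected to', deviceInfo.getMxId())
+        devices.append(device)
+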
How to place it
###############
diff --git a/docs/source/index.rst b/docs/source/index.rst
index 48b615f33..f45c7bd97 100644
--- a/docs/source/index.rst
+++ b/docs/source/index.rst
@@ -3,52 +3,32 @@
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
-Welcome to DepthAI Gen2 API Documentation
-=========================================
+DepthAI API Documentation
+=========================
.. image:: https://github.com/luxonis/depthai-python/workflows/Python%20Wheel%20CI/badge.svg?branch=gen2_develop
:target: https://github.com/luxonis/depthai-python/actions?query=workflow%3A%22Python+Wheel+CI%22+branch%3A%22gen2_develop%22
-On this page you can find the details regarding the Gen2 DepthAI API that will allow you to interact with the DepthAI device.
-We support both :ref:`Python API ` and :ref:`C++ API `
+The DepthAI API allows users to connect to, configure and communicate with their OAK devices.
+We support both :ref:`Python API ` and :ref:`C++ API `.
-What is Gen2?
--------------
+.. image:: /_static/images/api_diagram.png
-Gen2 is a step forward in DepthAI integration, allowing users to define their own flow of data using pipelines, nodes
-and connections. Gen2 was created based on user's feedback from Gen1 and from raising capabilities of both DepthAI and
-supporting software like OpenVINO.
-
-Basic glossary
---------------
-
-- **Host side** is the device, like PC or RPi, to which the DepthAI is connected to. If something is happening on the host side, it means that this device is involved in it, not DepthAI itself
-
-- **Device side** is the DepthAI itself. If something is happening on the device side, it means that the DepthAI is responsible for it
-
-- **Pipeline** is a complete workflow on the device side, consisting of nodes and connections between them - these cannot exist outside of pipeline.
-
-- **Node** is a single functionality of the DepthAI. It have either inputs or outputs or both, together with properties to be defined (like resolution on the camera node or blob path in neural network node)
-
-- **Connection** is a link between one node's output and another one's input. In order to define the pipeline dataflow, the connections define where to send data in order to achieve an expected result
-
-- **XLink** is a middleware that is capable to exchange data between device and host. XLinkIn node allows to send the data from host to device, XLinkOut does the opposite.
+- **Host side** is a computer, like a PC or RPi, to which an OAK device is connected.
+- **Device side** is the OAK device itself. If something is happening on the device side, it means that it's running on the `Myriad X VPU `__. More :ref:`information here `.
+- **Pipeline** is a complete workflow on the device side, consisting of :ref:`nodes ` and connections between them. More :ref:`information here `.
+- **Node** is a single functionality of DepthAI. :ref:`Nodes` have inputs and/or outputs, along with configurable properties (like the resolution on the camera node).
+- **Connection** is a link between one node's output and another one's input. To define the pipeline dataflow, connections specify where to send `messages ` in order to achieve an expected result.
+- **XLink** is a middleware capable of exchanging data between the device and the host. The :ref:`XLinkIn` node allows sending data from the host to the device, while :ref:`XLinkOut` does the opposite.
+- **Messages** are transferred between nodes, as defined by a connection. More :ref:`information here `.
Getting started
---------------
-To help you get started with Gen2 API, we have prepared multiple examples of it's usage, with more yet to come, together
-with some insightful tutorials.
-
-Before running the example, install the DepthAI Python library using the command below
-
-.. code-block:: python
- :substitutions:
-
- python3 -m pip install -U --force-reinstall depthai
-
+First, you need to :ref:`install the DepthAI ` library and its dependencies.
-Now, pick a tutorial or code sample and start utilizing Gen2 capabilities
+After installation, you can continue with an insightful :ref:`Hello World tutorial `, or with :ref:`code examples `, where different
+node functionalities are presented with code.
.. toctree::
:maxdepth: 0
@@ -57,7 +37,6 @@ Now, pick a tutorial or code sample and start utilizing Gen2 capabilities
Home
install.rst
- tutorials/overview.rst
.. toctree::
:maxdepth: 1
diff --git a/docs/source/install.rst b/docs/source/install.rst
index 8150770bf..94ba13e14 100644
--- a/docs/source/install.rst
+++ b/docs/source/install.rst
@@ -1,9 +1,8 @@
Installation
============
-Please :ref:`install the necessary dependencies ` for your
-platform by referring to the table below. Once installed you can :ref:`install
-the DepthAI library `.
+Please install the necessary dependencies for your platform by :ref:`referring to the table below `.
+Once installed, you can :ref:`install the DepthAI library `.
We are constantly striving to improve how we release our software to keep up
with countless platforms and the numerous ways to package it. If you do not
@@ -14,8 +13,6 @@ or on `Github `__.
Supported Platforms
###################
-We keep up-to-date, pre-compiled, libraries for the following platforms. Note that a new change is that for Ubuntu now also work unchanged for the Jetson/Xavier series:
-
======================== ============================================== ================================================================================
Platform Instructions Support
======================== ============================================== ================================================================================
@@ -26,19 +23,19 @@ Raspberry Pi OS :ref:`Platform dependencies ` `Discord
Jetson Nano/Xavier :ref:`Platform dependencies ` `Discord `__
======================== ============================================== ================================================================================
-And the following platforms are also supported by a combination of the community and Luxonis.
-
-====================== ===================================================== ================================================================================
-Platform Instructions Support
-====================== ===================================================== ================================================================================
-Fedora `Discord `__
-Robot Operating System `Discord `__
-Windows 7 :ref:`WinUSB driver ` `Discord `__
-Docker :ref:`Pull and run official images ` `Discord `__
-Kernel Virtual Machine :ref:`Run on KVM ` `Discord `__
-VMware :ref:`Run on VMware ` `Discord `__
-Virtual Box :ref:`Run on Virtual Box ` `Discord `__
-====================== ===================================================== ================================================================================
+The following platforms are also supported by a combination of the community and Luxonis:
+
+====================== =========================================================================== ================================================================================
+Platform Instructions Support
+====================== =========================================================================== ================================================================================
+Fedora `Discord `__
+Robot Operating System Follow tutorial at `depthai-ros `__ `Discord `__
+Windows 7 :ref:`WinUSB driver ` `Discord `__
+Docker :ref:`Pull and run official images ` `Discord `__
+Kernel Virtual Machine :ref:`Run on KVM ` `Discord `__
+VMware :ref:`Run on VMware ` `Discord `__
+Virtual Box :ref:`Run on Virtual Box ` `Discord `__
+====================== =========================================================================== ================================================================================
macOS
*****
diff --git a/docs/source/tutorials/code_samples.rst b/docs/source/tutorials/code_samples.rst
index b70bd8b84..db4328210 100644
--- a/docs/source/tutorials/code_samples.rst
+++ b/docs/source/tutorials/code_samples.rst
@@ -24,7 +24,8 @@ Code Samples
../samples/VideoEncoder/*
../samples/Yolo/*
-Code samples are used for automated testing. They are also a great starting point for the gen2 API.
+Code samples are used for automated testing. They are also a great starting point for the DepthAI API, as different node functionalities
+are presented with code.
.. rubric:: Bootloader
diff --git a/docs/source/tutorials/overview.rst b/docs/source/tutorials/overview.rst
deleted file mode 100644
index 7b11a6ac0..000000000
--- a/docs/source/tutorials/overview.rst
+++ /dev/null
@@ -1,67 +0,0 @@
-Overview
-========
-
-..
- Section which described mental model, flow / pipeline programming,
- messages that carry data, nodes that compute upon them
-
-.. code-block::
-
- DepthAI device (eg. OAK-D) Host (eg. RaspberryPi)
- ┌───────────────────────────────────────────────┐ ┌─────────────────────────┐
- │ │ │ │
- │ Node Node │ │ # Your python code that │
- │ ┌─────────────┐ ┌──────────┤ │ # runs on the host │
- │ │ │ │ │ │ │
- │ │ │preview input│ │ XLink protocol │ # Get the frame │
- │ │ ColorCamera ├───────────────────┤ XLinkOut ├──────────────────►│ data=q_preview.get() │
- │ │ │ ImgFrame │ │(USB/Ethernet/PCIe)│ frame=data.getCvFrame() │
- │ │ │ Message │ │ │ # Show the frame │
- │ └─────────────┘ └──────────┤ │ cv2.imshow("rgb",frame) │
- │ inputControl ▲ │ │ │
- │ │ │ │ │
- │ │ Node │ │ │
- │ │ ┌─────────┤ │ # Control the camera │
- │ │ │ │ │ cc=dai.CameraControl() │
- │ │ out │ │ XLink protocol │ cc.setManualFocus(100) │
- │ └──────────────────────┤ XLinkIn │◄──────────────────┤ q_cam_control.send(cc) │
- │ CameraControl │ │(USB/Ethernet/PCIe)│ │
- │ Message │ │ │ │
- │ └─────────┤ │ │
- │ │ │ │
- └───────────────────────────────────────────────┘ └─────────────────────────┘
- A simple pipeline visualzied
-
-Device
-######
-
-Device is the `DepthAI module `__ itself. On the device there is a powerful vision processing unit
-(:code:`VPU`) from Intel, called `Myriad X `__ (MX for short).
-The VPU is optimized for performing AI inference algorithms and for processing sensory inputs (eg. calculating stereo disparity from two cameras).
-
-For more details, click :ref:`here `
-
-Pipeline
-########
-
-The upper flowchart is a simple pipeline visualized. So a **pipeline is collection of nodes and links** between them.
-
-For more details, click :ref:`here `
-
-Nodes
-#####
-
-Each node provides a specific functionality on the DepthAI, a set of configurable properties and inputs/outputs. On the flowchart above, we have 3 nodes;
-:code:`ColorCamera`, :code:`XLinkOut` and :code:`XLinkIn`.
-
-For more details, click :ref:`here `
-
-Messages
-########
-
-Messages are sent between linked nodes. On the flowchart above, there are two links - visualized as arrows that are inside the device. There are a few
-different types of messages, on the chart we have :code:`ImgFrame` and :code:`CameraControl`
-
-For more details, click :ref:`here `
-
-.. include:: ../includes/footer-short.rst
\ No newline at end of file
diff --git a/docs/source/tutorials/ram_usage.rst b/docs/source/tutorials/ram_usage.rst
index 445851061..677d38ec8 100644
--- a/docs/source/tutorials/ram_usage.rst
+++ b/docs/source/tutorials/ram_usage.rst
@@ -1,7 +1,7 @@
RAM usage
=========
-All devices have 512 MiB (4 Gbit) on-board RAM, which is used for firmware (about 15MB), assets (a few KB up to 100MB+, eg. NN models), and other
+All OAK devices have 512 MiB (4 Gbit) of on-board RAM, which is used for firmware (about 15MB), assets (a few KB up to 100MB+, eg. NN models), and other
resources, such as message pools where messages are stored.
If you enable :code:`info` :ref:`logging `, you will see how RAM is used:
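+
+A minimal sketch of enabling :code:`info` logging from code (alternatively, the :code:`DEPTHAI_LEVEL` environment variable may be used):
+
+.. code-block:: python
+
+    with dai.Device(pipeline) as device:
+        # Set the logging severity on the device itself
+        device.setLogLevel(dai.LogLevel.INFO)
+        # Also print those log messages to the host's stdout
+        device.setLogOutputLevel(dai.LogLevel.INFO)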