diff --git a/docs/source/_static/images/favicon.png b/docs/source/_static/images/favicon.png
index 2bbaad0ec..decd05d31 100644
Binary files a/docs/source/_static/images/favicon.png and b/docs/source/_static/images/favicon.png differ
diff --git a/docs/source/_static/images/logo.png b/docs/source/_static/images/logo.png
deleted file mode 100644
index 5f294fb91..000000000
Binary files a/docs/source/_static/images/logo.png and /dev/null differ
diff --git a/docs/source/components/messages/image_manip_config.rst b/docs/source/components/messages/image_manip_config.rst
index 85ac597f9..5535536f5 100644
--- a/docs/source/components/messages/image_manip_config.rst
+++ b/docs/source/components/messages/image_manip_config.rst
@@ -2,10 +2,9 @@ ImageManipConfig
 ================
 
-This message can is used for cropping, warping, rotating, resizing, etc. an image in runtime.
-It is sent either from the host to :ref:`ColorCamera` or :ref:`ImageManip`.
+This message is used for cropping, warping, rotating, resizing, etc. of an image at runtime.
+It can be sent from the host or from a :ref:`Script` node to either :ref:`ColorCamera` or :ref:`ImageManip`.
 
-..
-  It is sent either from the host or from the :ref:`Script` node to :ref:`ColorCamera` or :ref:`ImageManip`.
+.. note:: This message reconfigures the node's whole config, so you need to set all settings, not just the one you want to change.
 
 Examples of functionality
 #########################
diff --git a/docs/source/components/nodes/video_encoder.rst b/docs/source/components/nodes/video_encoder.rst
index 060d36c1c..4f899de7a 100644
--- a/docs/source/components/nodes/video_encoder.rst
+++ b/docs/source/components/nodes/video_encoder.rst
@@ -2,7 +2,7 @@ VideoEncoder
 ============
 
 VideoEncoder node is used to encode :ref:`ImgFrame` into either H264, H265, or MJPEG streams. Only NV12 or GRAY8 (which gets converted to NV12) format is
-supported as an input.
+supported as an input. All codecs are lossy (except lossless MJPEG); for more information, please see `encoding quality docs `__.
 
 .. include:: /includes/container-encoding.rst
diff --git a/docs/source/install.rst b/docs/source/install.rst
index 020b76b61..c72fbee2b 100644
--- a/docs/source/install.rst
+++ b/docs/source/install.rst
@@ -31,7 +31,7 @@ Follow the steps below to just install depthai api library dependencies for your
 
 .. code-block:: bash
 
-   sudo wget -qO- https://docs.luxonis.com/install_depthai.sh | bash
+   sudo wget -qO- https://docs.luxonis.com/install_dependencies.sh | bash
 
 Please refer to :ref:`Supported Platforms` if any issues occur.
diff --git a/docs/source/samples/StereoDepth/rgb_depth_aligned.rst b/docs/source/samples/StereoDepth/rgb_depth_aligned.rst
index fbdd16264..4c854cbe6 100644
--- a/docs/source/samples/StereoDepth/rgb_depth_aligned.rst
+++ b/docs/source/samples/StereoDepth/rgb_depth_aligned.rst
@@ -12,6 +12,9 @@ By default, the depth map will get scaled to match the resolution of the camera
 depth is aligned to the 1080P color sensor, StereoDepth will upscale depth to 1080P as well.
 Depth scaling can be avoided by configuring :ref:`StereoDepth`'s ``stereo.setOutputSize(width, height)``.
 
+To align depth with a **higher resolution color stream** (e.g. 12MP), you need to limit the resolution of the depth map. You can
+do that with ``stereo.setOutputSize(w,h)``, as shown in the sketch below. Code `example here `__.
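+
+A minimal sketch of this approach (illustrative, not the linked example itself; exact node setup and output size may differ):
+
+.. code-block:: python
+
+    import depthai as dai
+
+    pipeline = dai.Pipeline()
+
+    # High-resolution (12MP) color stream that depth will be aligned to
+    camRgb = pipeline.create(dai.node.ColorCamera)
+    camRgb.setResolution(dai.ColorCameraProperties.SensorResolution.THE_12_MP)
+
+    monoLeft = pipeline.create(dai.node.MonoCamera)
+    monoLeft.setBoardSocket(dai.CameraBoardSocket.LEFT)
+    monoRight = pipeline.create(dai.node.MonoCamera)
+    monoRight.setBoardSocket(dai.CameraBoardSocket.RIGHT)
+
+    stereo = pipeline.create(dai.node.StereoDepth)
+    monoLeft.out.link(stereo.left)
+    monoRight.out.link(stereo.right)
+    # Align depth to the color camera...
+    stereo.setDepthAlign(dai.CameraBoardSocket.RGB)
+    # ...but cap the depth output size instead of upscaling it to 12MP
+    stereo.setOutputSize(1280, 960)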
+
 Demo
 ####
diff --git a/docs/source/tutorials/low-latency.rst b/docs/source/tutorials/low-latency.rst
index 376b7cd1f..72358b398 100644
--- a/docs/source/tutorials/low-latency.rst
+++ b/docs/source/tutorials/low-latency.rst
@@ -128,14 +128,98 @@ On PoE, the latency can vary quite a bit due to a number of factors:
 
 * 100% OAK Leon CSS (CPU) usage. The Leon CSS core handles the POE communication (`see docs here `__), and if the CPU is 100% used, it will not be able to handle the communication as fast as it should.
 * Another potential way to improve PoE latency would be to fine-tune network settings, like MTU, TCP window size, etc. (see `here `__ for more info)
 
+Bandwidth
+#########
+
+With large, unencoded frames, one can quickly saturate the bandwidth even at 30FPS, especially on PoE devices (1 Gbps link):
+
+.. code-block:: bash
+
+    4K NV12/YUV420 frames: 3840 * 2160 * 1.5 * 30fps * 8bits = 3 gbps
+    1080P NV12/YUV420 frames: 1920 * 1080 * 1.5 * 30fps * 8bits = 747 mbps
+    720P NV12/YUV420 frames: 1280 * 720 * 1.5 * 30fps * 8bits = 331 mbps
+
+    1080P RGB frames: 1920 * 1080 * 3 * 30fps * 8bits = 1.5 gbps
+
+    800P depth frames: 1280 * 800 * 2 * 30fps * 8bits = 492 mbps
+    400P depth frames: 640 * 400 * 2 * 30fps * 8bits = 123 mbps
+
+    800P mono frames: 1280 * 800 * 1 * 30fps * 8bits = 246 mbps
+    400P mono frames: 640 * 400 * 1 * 30fps * 8bits = 62 mbps
+
+The third value in each formula is bytes per pixel: 1.5 for NV12/YUV420, 3 for RGB, 2 for depth, and 1
+for mono (grayscale) frames. For disparity frames it is either 1 (standard) or 2 (subpixel mode).
+
+A few options to reduce bandwidth:
+
+- Encode frames (H.264, H.265, MJPEG) on-device using the :ref:`VideoEncoder node `
+- Reduce FPS/resolution/number of streams
+
 Reducing latency when running NN
 ################################
 
 In the examples above we were only streaming frames, without doing anything else on the OAK camera. This section
 will focus on how to reduce latency when also running NN model on the OAK.
 
-Lowering camera FPS to match NN FPS
------------------------------------
+1. Increasing NN resources
+--------------------------
+
+One option to reduce latency is to increase the NN resources. This can be done by changing the number of allocated NCEs and SHAVEs (see HW accelerator `docs here `__).
+The `Compile Tool `__ can compile a model for more SHAVE cores. To allocate more NCEs, you can use the API below:
+
+.. code-block:: python
+
+    import depthai as dai
+
+    pipeline = dai.Pipeline()
+    # nn = pipeline.createNeuralNetwork()
+    # nn = pipeline.create(dai.node.MobileNetDetectionNetwork)
+    nn = pipeline.create(dai.node.YoloDetectionNetwork)
+    nn.setNumInferenceThreads(1)  # By default, 2 threads are used
+    nn.setNumNCEPerInferenceThread(2)  # By default, 1 NCE is used per thread
+
+Models usually run at **max FPS** when using 2 threads (1 NCE/Thread) and when the model is compiled for ``AVAILABLE_SHAVES / 2``.
+
+Example of FPS & latency comparison for YoloV7-tiny:
+
+.. list-table::
+   :header-rows: 1
+
+   * - NN resources
+     - Camera FPS
+     - Latency
+     - NN FPS
+   * - **6 SHAVEs, 2x Threads (1NCE/Thread)**
+     - 15
+     - 155 ms
+     - 15
+   * - 6 SHAVEs, 2x Threads (1NCE/Thread)
+     - 14
+     - 149 ms
+     - 14
+   * - 6 SHAVEs, 2x Threads (1NCE/Thread)
+     - 13
+     - 146 ms
+     - 13
+   * - 6 SHAVEs, 2x Threads (1NCE/Thread)
+     - 10
+     - 141 ms
+     - 10
+   * - **13 SHAVEs, 1x Thread (2NCE/Thread)**
+     - 30
+     - 145 ms
+     - 11.6
+   * - 13 SHAVEs, 1x Thread (2NCE/Thread)
+     - 12
+     - 128 ms
+     - 12
+   * - 13 SHAVEs, 1x Thread (2NCE/Thread)
+     - 10
+     - 118 ms
+     - 10
+
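+As an illustration of the SHAVE side, a hypothetical sketch (model name and SHAVE count are placeholders, assuming the `blobconverter` PyPI package is installed):
+
+.. code-block:: python
+
+    import blobconverter
+
+    # Request a model zoo blob compiled for 13 SHAVE cores,
+    # matching the 13-SHAVE rows in the table above
+    blob_path = blobconverter.from_zoo(name="mobilenet-ssd", shaves=13)
+    # nn.setBlobPath(blob_path)
+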
+2. Lowering camera FPS to match NN FPS
+--------------------------------------
 
 Lowering FPS to not exceed NN capabilities typically provides the best latency performance, since the NN is able
 to start the inference as soon as a new frame is available.
@@ -153,11 +237,11 @@ This time includes the following:
 
 - And finally, eventual extra latency until it reaches the app
 
 Note: if the FPS is increased slightly more, towards 19..21 FPS, an extra latency of about 10ms appears, that we believe
-is related to firmware. We are activaly looking for improvements for lower latencies.
+is related to firmware. We are actively looking into ways to lower these latencies.
 
-NN input queue size and blocking behaviour
-------------------------------------------
+3. NN input queue size and blocking behavior
+--------------------------------------------
 
 If the app has ``detNetwork.input.setBlocking(False)``, but the queue size doesn't change, the following adjustment may help improve latency performance:
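+
+A minimal sketch of such an adjustment (assuming the ``detNetwork`` node and its input queue from the examples above; the exact snippet in the docs may differ):
+
+.. code-block:: python
+
+    # Keep only the newest frame in the NN input queue, so stale frames
+    # don't pile up and add latency while the NN is busy
+    detNetwork.input.setBlocking(False)
+    detNetwork.input.setQueueSize(1)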