Merged
@@ -12,6 +12,7 @@ You can also calculate spatial coordinates on host side, `demo here <https://git
- :ref:`RGB & MobilenetSSD with spatial data`
- :ref:`Mono & MobilenetSSD with spatial data`
- :ref:`RGB & TinyYolo with spatial data`
- :ref:`Collision avoidance`

Demo
####
43 changes: 43 additions & 0 deletions docs/source/samples/mixed/collision_avoidance.rst
@@ -0,0 +1,43 @@
Collision avoidance
===================

This example demonstrates how to use DepthAI to implement a collision avoidance system with an OAK-D camera. The script measures the distance of objects from the camera in real time and displays warnings based on predefined distance thresholds.

The script uses the stereo camera pair to calculate the distance of objects from the camera. The depth map is then aligned to the center (color) camera so that the distance information can be overlaid on the color frame.

The user-defined constants ``WARNING`` and ``CRITICAL`` define the distance thresholds (in millimeters) for the orange and red alerts, respectively.
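
The mapping from measured distance to alert color can be sketched as a small standalone function (a sketch only; the threshold values mirror the millimeter constants above, and the BGR tuples match the OpenCV colors used in the example):

```python
# Distance thresholds in millimeters (mirroring the constants in the example)
WARNING = 500   # below 50 cm: orange alert
CRITICAL = 300  # below 30 cm: red alert

def alert_color(distance_mm: float):
    """Return a BGR color for the overlay, or None when no alert is needed."""
    if distance_mm <= 0:        # a distance of 0 means an invalid measurement
        return None
    if distance_mm < CRITICAL:
        return (0, 0, 255)      # red
    if distance_mm < WARNING:
        return (0, 140, 255)    # orange
    return None                 # far enough away: draw nothing

print(alert_color(250))   # (0, 0, 255)
print(alert_color(400))   # (0, 140, 255)
print(alert_color(1200))  # None
```

Keeping this logic in one small function makes the thresholds easy to tune for a particular robot or vehicle.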

Similar examples
################

- :ref:`Spatial Location Calculator`
- :ref:`RGB Depth Alignment`

Demo
####

.. image:: ../../_static/images/examples/collision_avoidance.gif
:width: 100%
:alt: Collision Avoidance

Setup
#####

.. include:: /includes/install_from_pypi.rst

.. include:: /includes/install_req.rst

Source code
###########

.. tabs::

.. tab:: Python

Also `available on GitHub <https://github.com/luxonis/depthai-python/blob/main/examples/mixed/collision_avoidance.py>`__

.. literalinclude:: ../../../../examples/mixed/collision_avoidance.py
:language: python
:linenos:

.. include:: /includes/footer-short.rst
1 change: 1 addition & 0 deletions docs/source/tutorials/code_samples.rst
@@ -103,6 +103,7 @@ are presented with code.
- :ref:`RGB Encoding & Mono & MobilenetSSD` - Runs MobileNetSSD on mono frames and displays detections on the frame + encodes RGB to :code:`.h265`
- :ref:`RGB Encoding & Mono with MobilenetSSD & Depth` - A combination of **RGB Encoding** and **Mono & MobilenetSSD & Depth** code samples
- :ref:`Spatial detections on rotated OAK` - Spatial detections on an upside-down OAK camera
- :ref:`Collision avoidance` - Collision avoidance system using depth and RGB

.. rubric:: MobileNet

39 changes: 37 additions & 2 deletions docs/source/tutorials/debugging.rst
@@ -29,7 +29,10 @@ Level Logging
:code:`trace` Trace will print out a :ref:`Message <components_messages>` whenever one is received from the device.
================ =======

Debugging can be enabled either **in code**:
Debugging can be enabled in one of two ways:

In code
*******

.. code-block:: python

@@ -42,7 +45,35 @@ Where :code:`setLogLevel` sets verbosity which filters messages that get sent fr
verbosity which filters messages that get printed on the host (stdout). This difference makes it possible to capture the log messages internally, without
printing them to stdout, and to e.g. display them somewhere else or analyze them.

You can also enable debugging using an **environmental variable DEPTHAI_LEVEL**:

Using an environment variable ``DEPTHAI_LEVEL``
***********************************************

Using an environment variable to set the debugging level, rather than configuring it directly in code, provides additional detailed information.
This includes metrics such as CMX and SHAVE usage, and the time taken by each node in the pipeline to process a single frame.
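
The variable is typically exported in the shell that launches the script (e.g. ``DEPTHAI_LEVEL=trace python3 script.py``). As a sketch, it can also be set from Python itself, as long as it is in the environment before the device is created:

```python
import os

# A minimal sketch: DEPTHAI_LEVEL has to be in the process environment before
# the depthai device is created, so set it at the very top of the script (or
# export it in the shell that launches the script instead)
os.environ["DEPTHAI_LEVEL"] = "trace"

# import depthai as dai                  # device creation below now logs at 'trace'
# with dai.Device(pipeline) as device:
#     ...

print(os.environ["DEPTHAI_LEVEL"])
```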

Example of a log message for :ref:`RGB Preview` in **INFO** mode:

.. code-block:: bash

[184430102189660F00] [2.1] [0.675] [system] [info] SIPP (Signal Image Processing Pipeline) internal buffer size '18432'B, DMA buffer size: '16384'B
[184430102189660F00] [2.1] [0.711] [system] [info] ImageManip internal buffer size '285440'B, shave buffer size '34816'B
[184430102189660F00] [2.1] [0.711] [system] [info] ColorCamera allocated resources: no shaves; cmx slices: [13-15]
ImageManip allocated resources: shaves: [15-15] no cmx slices.


Example of a log message for :ref:`Depth Preview` in **TRACE** mode:

.. code-block:: bash

[19443010513F4D1300] [0.1.2] [2.014] [MonoCamera(0)] [trace] Mono ISP took '0.866377' ms.
[19443010513F4D1300] [0.1.2] [2.016] [MonoCamera(1)] [trace] Mono ISP took '1.272838' ms.
[19443010513F4D1300] [0.1.2] [2.019] [StereoDepth(2)] [trace] Stereo rectification took '2.661958' ms.
[19443010513F4D1300] [0.1.2] [2.027] [StereoDepth(2)] [trace] Stereo took '7.144515' ms.
[19443010513F4D1300] [0.1.2] [2.028] [StereoDepth(2)] [trace] 'Median' pipeline took '0.772257' ms.
[19443010513F4D1300] [0.1.2] [2.028] [StereoDepth(2)] [trace] Stereo post processing (total) took '0.810216' ms.
[2024-05-16 14:27:51.294] [depthai] [trace] Received message from device (disparity) - parsing time: 11µs, data size: 256000
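
The trace output above has a regular shape, so per-node timings can be pulled out of a captured log with a few lines of standard-library Python. This is an illustrative parser based only on the log lines shown here, not an official DepthAI API:

```python
import re

# Matches DepthAI device log lines of the form shown above, e.g.
#   [19443010513F4D1300] [0.1.2] [2.027] [StereoDepth(2)] [trace] Stereo took '7.144515' ms.
LINE_RE = re.compile(
    r"\[(?P<mxid>[0-9A-F]+)\] \[(?P<dev>[\d.]+)\] \[(?P<ts>[\d.]+)\] "
    r"\[(?P<node>[^\]]+)\] \[(?P<level>\w+)\] (?P<msg>.*)"
)
TIME_RE = re.compile(r"'([\d.]+)' ms")

def node_timing(line: str):
    """Return (node, milliseconds) for a trace line, or None if it carries no timing."""
    m = LINE_RE.match(line)
    if not m:
        return None
    t = TIME_RE.search(m.group("msg"))
    if not t:
        return None
    return m.group("node"), float(t.group(1))

print(node_timing(
    "[19443010513F4D1300] [0.1.2] [2.027] [StereoDepth(2)] [trace] Stereo took '7.144515' ms."
))
# ('StereoDepth(2)', 7.144515)
```

Feeding a whole captured log through this function gives a quick per-node timing profile of the pipeline.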


.. tabs::

@@ -107,6 +138,10 @@ Code above will print the following values to the user:
Resource Debugging
==================

.. warning::

    Resource debugging is only available when setting the debug level via the environment variable ``DEPTHAI_LEVEL``. It is **not** available when setting the debug level in code.

By enabling ``info`` log level (or lower), depthai will print usage of `hardware resources <https://docs.luxonis.com/projects/hardware/en/latest/pages/rvc/rvc2.html#hardware-blocks-and-accelerators>`__,
specifically SHAVE core and CMX memory usage:

114 changes: 114 additions & 0 deletions examples/mixed/collision_avoidance.py
@@ -0,0 +1,114 @@
import depthai as dai
import cv2
import numpy as np
import math

# User-defined constants
WARNING = 500 # 50cm, orange
CRITICAL = 300 # 30cm, red

# Create pipeline
pipeline = dai.Pipeline()

# Color camera
camRgb = pipeline.create(dai.node.ColorCamera)
camRgb.setPreviewSize(300, 300)
camRgb.setInterleaved(False)

# Define source - stereo depth cameras
left = pipeline.create(dai.node.MonoCamera)
left.setResolution(dai.MonoCameraProperties.SensorResolution.THE_720_P)
left.setBoardSocket(dai.CameraBoardSocket.LEFT)

right = pipeline.create(dai.node.MonoCamera)
right.setResolution(dai.MonoCameraProperties.SensorResolution.THE_720_P)
right.setBoardSocket(dai.CameraBoardSocket.RIGHT)

# Create stereo depth node
stereo = pipeline.create(dai.node.StereoDepth)
stereo.setConfidenceThreshold(50)
stereo.setLeftRightCheck(True)
stereo.setExtendedDisparity(True)

# Linking
left.out.link(stereo.left)
right.out.link(stereo.right)

# Spatial location calculator configuration
slc = pipeline.create(dai.node.SpatialLocationCalculator)
for x in range(15):
for y in range(9):
config = dai.SpatialLocationCalculatorConfigData()
config.depthThresholds.lowerThreshold = 200
config.depthThresholds.upperThreshold = 10000
config.roi = dai.Rect(dai.Point2f((x+0.5)*0.0625, (y+0.5)*0.1), dai.Point2f((x+1.5)*0.0625, (y+1.5)*0.1))
config.calculationAlgorithm = dai.SpatialLocationCalculatorAlgorithm.MEDIAN
slc.initialConfig.addROI(config)

stereo.depth.link(slc.inputDepth)
stereo.setDepthAlign(dai.CameraBoardSocket.RGB)

# Create output
slcOut = pipeline.create(dai.node.XLinkOut)
slcOut.setStreamName('slc')
slc.out.link(slcOut.input)

colorOut = pipeline.create(dai.node.XLinkOut)
colorOut.setStreamName('color')
camRgb.video.link(colorOut.input)

# Connect to device and start pipeline
with dai.Device(pipeline) as device:
    # Output queues will be used to get the color frames and spatial location data
qColor = device.getOutputQueue(name="color", maxSize=4, blocking=False)
qSlc = device.getOutputQueue(name="slc", maxSize=4, blocking=False)

fontType = cv2.FONT_HERSHEY_TRIPLEX

    while True:
        inColor = qColor.get()  # Blocking call, waits until a new color frame arrives
        inSlc = qSlc.get()  # Blocking call, waits until new spatial location data arrives

        colorFrame = None
        if inColor is not None:
            colorFrame = inColor.getCvFrame()  # Fetch the frame from the color camera


if inSlc is not None and colorFrame is not None:
slc_data = inSlc.getSpatialLocations()
for depthData in slc_data:
roi = depthData.config.roi
roi = roi.denormalize(width=colorFrame.shape[1], height=colorFrame.shape[0])

xmin = int(roi.topLeft().x)
ymin = int(roi.topLeft().y)
xmax = int(roi.bottomRight().x)
ymax = int(roi.bottomRight().y)

coords = depthData.spatialCoordinates
distance = math.sqrt(coords.x ** 2 + coords.y ** 2 + coords.z ** 2)

if distance == 0: # Invalid
continue

# Determine color based on distance
if distance < CRITICAL:
color = (0, 0, 255) # Red
elif distance < WARNING:
color = (0, 140, 255) # Orange
else:
continue # Skip drawing for non-critical/non-warning distances

# Draw rectangle and distance text on the color frame
cv2.rectangle(colorFrame, (xmin, ymin), (xmax, ymax), color, thickness=2)
cv2.putText(colorFrame, "{:.1f}m".format(distance / 1000), (xmin + 10, ymin + 20), fontType, 0.5, color)

# Display the color frame
        cv2.imshow('Color Camera', colorFrame)
if cv2.waitKey(1) == ord('q'):
break