Merged
2 changes: 2 additions & 0 deletions docs/source/components/nodes/image_manip.rst
@@ -87,6 +87,8 @@ Examples of functionality
- :ref:`Mono & MobilenetSSD`
- :ref:`RGB Encoding & Mono & MobilenetSSD`
- :ref:`RGB Camera Control`
- :ref:`ImageManip Tiling` - Using ImageManip for frame tiling
- :ref:`ImageManip Rotate` - Using ImageManip to rotate color/mono frames

Reference
#########
44 changes: 44 additions & 0 deletions docs/source/samples/image_manip_rotate.rst
@@ -0,0 +1,44 @@
ImageManip Rotate
=================

This example shows how to rotate color and mono frames with the help of the :ref:`ImageManip` node.
In this example, frames are rotated by 90°.
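
The rotation is configured through a :code:`dai.RotatedRect`: for a 90° rotation the rect is centered on the frame and its size swaps the input width and height, as in the source code below. A small hypothetical sketch (not part of the DepthAI API) of those parameters:

```python
def rotated_rect_params(width: int, height: int, angle: int = 90):
    """Hypothetical helper mirroring the RotatedRect setup in the example.

    For a 90-degree ImageManip rotation, the rect is centered on the frame
    and its size swaps the input width and height, so the output frame
    dimensions are swapped as well.
    """
    return {
        "center": (width // 2, height // 2),  # rotate around the frame center
        "size": (height, width),              # width/height swapped for 90 degrees
        "angle": angle,
    }
```

For the 640x400 color preview used below, this yields a 400x640 output frame.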

.. note::
    Due to a HW warp constraint, the width of the input image (the one to be rotated) must be **a multiple of 16**.
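
If the source width is not already aligned, the frame has to be resized or padded before rotation. A minimal hypothetical helper (not part of the DepthAI API) to compute the next width that satisfies the constraint:

```python
def align_up(width: int, alignment: int = 16) -> int:
    """Round width up to the nearest multiple of `alignment` (16 for HW warp)."""
    return ((width + alignment - 1) // alignment) * alignment
```

For example, a 640 px wide preview already satisfies the constraint, while a 1000 px wide frame would need to be brought up to 1008 px.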

Demos
#####

.. image:: https://user-images.githubusercontent.com/18037362/128074634-d2baa78e-8f35-40fc-8661-321f3a3c3850.png
    :alt: Rotated mono and color frames

Here I have the DepthAI device positioned vertically on my desk.

Setup
#####

.. include:: /includes/install_from_pypi.rst

Source code
###########

.. tabs::

    .. tab:: Python

        Also `available on GitHub <https://github.com/luxonis/depthai-python/blob/main/examples/image_manip_rotate.py>`__

        .. literalinclude:: ../../../examples/image_manip_rotate.py
            :language: python
            :linenos:

    .. tab:: C++

        Also `available on GitHub <https://github.com/luxonis/depthai-core/blob/main/examples/src/image_manip_rotate.cpp>`__

        .. literalinclude:: ../../../depthai-core/examples/src/image_manip_rotate.cpp
            :language: cpp
            :linenos:

.. include:: /includes/footer-short.rst
41 changes: 41 additions & 0 deletions docs/source/samples/image_manip_tiling.rst
@@ -0,0 +1,41 @@
ImageManip Tiling
=================

Frame tiling can be useful when, for example, a large frame needs to be fed into a :ref:`NeuralNetwork` whose input size is smaller. In that case,
you can tile the large frame into multiple smaller ones and feed each tile to the :ref:`NeuralNetwork`.

In this example, we use two :ref:`ImageManip` nodes to split the original :code:`1000x500` preview frame into two :code:`500x500` frames.
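
The crop rectangles passed to :code:`setCropRect` use normalized coordinates (:code:`xmin, ymin, xmax, ymax` in the 0..1 range), so the two halves are :code:`(0, 0, 0.5, 1)` and :code:`(0.5, 0, 1, 1)`. A hypothetical helper generalizing this to :code:`n` side-by-side tiles:

```python
def horizontal_tiles(n: int):
    """Normalized (xmin, ymin, xmax, ymax) crop rects for n side-by-side tiles.

    Hypothetical helper: each tuple can be unpacked into
    ImageManipConfig.setCropRect, which expects normalized 0..1 coordinates.
    """
    return [(i / n, 0.0, (i + 1) / n, 1.0) for i in range(n)]
```

With :code:`n = 2` this reproduces the two crop rectangles used in the example below.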

Demo
####

.. image:: https://user-images.githubusercontent.com/18037362/128074673-045ed4b6-ac8c-4a76-83bb-0f3dc996f7a5.png
    :alt: Tiling preview into 2 frames/tiles

Setup
#####

.. include:: /includes/install_from_pypi.rst

Source code
###########

.. tabs::

    .. tab:: Python

        Also `available on GitHub <https://github.com/luxonis/depthai-python/blob/main/examples/image_manip_tiling.py>`__

        .. literalinclude:: ../../../examples/image_manip_tiling.py
            :language: python
            :linenos:

    .. tab:: C++

        Also `available on GitHub <https://github.com/luxonis/depthai-core/blob/main/examples/src/image_manip_tiling.cpp>`__

        .. literalinclude:: ../../../depthai-core/examples/src/image_manip_tiling.cpp
            :language: cpp
            :linenos:

.. include:: /includes/footer-short.rst
2 changes: 2 additions & 0 deletions docs/source/tutorials/code_samples.rst
@@ -32,6 +32,8 @@ Code samples are used for automated testing. They are also a great starting point
- :ref:`Edge detector` - Edge detection on input frame
- :ref:`Script camera control` - Controlling the camera with the Script node
- :ref:`Bootloader version` - Retrieves Version of Bootloader on the device
- :ref:`ImageManip Tiling` - Using ImageManip for frame tiling
- :ref:`ImageManip Rotate` - Using ImageManip to rotate color/mono frames

.. rubric:: Complex

5 changes: 5 additions & 0 deletions docs/source/tutorials/simple_samples.rst
@@ -24,6 +24,8 @@ Simple
../samples/edge_detector.rst
../samples/script_camera_control.rst
../samples/bootloader_version.rst
../samples/image_manip_tiling.rst
../samples/image_manip_rotate.rst

These samples are a great starting point for the gen2 API.

@@ -41,4 +43,7 @@ These samples are a great starting point for the gen2 API.
- :ref:`Mono & MobilenetSSD` - Runs MobileNetSSD on mono frames and displays detections on the frame
- :ref:`Video & MobilenetSSD` - Runs MobileNetSSD on the video from the host
- :ref:`Edge detector` - Edge detection on input frame
- :ref:`Script camera control` - Controlling the camera with the Script node
- :ref:`Bootloader Version` - Retrieves Version of Bootloader on the device
- :ref:`ImageManip Tiling` - Using ImageManip for frame tiling
- :ref:`ImageManip Rotate` - Using ImageManip to rotate color/mono frames
58 changes: 58 additions & 0 deletions examples/image_manip_rotate.py
@@ -0,0 +1,58 @@
#!/usr/bin/env python3

import cv2
import depthai as dai

# Create pipeline
pipeline = dai.Pipeline()

# Rotate color frames
camRgb = pipeline.createColorCamera()
camRgb.setPreviewSize(640, 400)
camRgb.setResolution(dai.ColorCameraProperties.SensorResolution.THE_1080_P)
camRgb.setInterleaved(False)

manipRgb = pipeline.createImageManip()
rgbRr = dai.RotatedRect()
rgbRr.center.x, rgbRr.center.y = camRgb.getPreviewWidth() // 2, camRgb.getPreviewHeight() // 2
rgbRr.size.width, rgbRr.size.height = camRgb.getPreviewHeight(), camRgb.getPreviewWidth()
rgbRr.angle = 90
manipRgb.initialConfig.setCropRotatedRect(rgbRr, False)
camRgb.preview.link(manipRgb.inputImage)

manipRgbOut = pipeline.createXLinkOut()
manipRgbOut.setStreamName("manip_rgb")
manipRgb.out.link(manipRgbOut.input)

# Rotate mono frames
monoLeft = pipeline.createMonoCamera()
monoLeft.setResolution(dai.MonoCameraProperties.SensorResolution.THE_400_P)
monoLeft.setBoardSocket(dai.CameraBoardSocket.LEFT)

manipLeft = pipeline.createImageManip()
rr = dai.RotatedRect()
rr.center.x, rr.center.y = monoLeft.getResolutionWidth() // 2, monoLeft.getResolutionHeight() // 2
rr.size.width, rr.size.height = monoLeft.getResolutionHeight(), monoLeft.getResolutionWidth()
rr.angle = 90
manipLeft.initialConfig.setCropRotatedRect(rr, False)
monoLeft.out.link(manipLeft.inputImage)

manipLeftOut = pipeline.createXLinkOut()
manipLeftOut.setStreamName("manip_left")
manipLeft.out.link(manipLeftOut.input)

with dai.Device(pipeline) as device:
    qLeft = device.getOutputQueue(name="manip_left", maxSize=8, blocking=False)
    qRgb = device.getOutputQueue(name="manip_rgb", maxSize=8, blocking=False)

    while True:
        inLeft = qLeft.tryGet()
        if inLeft is not None:
            cv2.imshow('Left rotated', inLeft.getCvFrame())

        inRgb = qRgb.tryGet()
        if inRgb is not None:
            cv2.imshow('Color rotated', inRgb.getCvFrame())

        if cv2.waitKey(1) == ord('q'):
            break
50 changes: 50 additions & 0 deletions examples/image_manip_tiling.py
@@ -0,0 +1,50 @@
#!/usr/bin/env python3

import cv2
import depthai as dai

# Create pipeline
pipeline = dai.Pipeline()

camRgb = pipeline.createColorCamera()
camRgb.setPreviewSize(1000, 500)
camRgb.setInterleaved(False)
# Each output tile is 500x500 (height x height), so this is the tile size in bytes
maxFrameSize = camRgb.getPreviewHeight() * camRgb.getPreviewHeight() * 3

# In this example we use 2 imageManips for splitting the original 1000x500
# preview frame into 2 500x500 frames
manip1 = pipeline.createImageManip()
# setCropRect takes normalized coordinates: xmin, ymin, xmax, ymax in the 0..1 range
manip1.initialConfig.setCropRect(0, 0, 0.5, 1)  # left half of the frame
manip1.setMaxOutputFrameSize(maxFrameSize)
camRgb.preview.link(manip1.inputImage)

manip2 = pipeline.createImageManip()
manip2.initialConfig.setCropRect(0.5, 0, 1, 1)  # right half of the frame
manip2.setMaxOutputFrameSize(maxFrameSize)
camRgb.preview.link(manip2.inputImage)

xout1 = pipeline.createXLinkOut()
xout1.setStreamName('out1')
manip1.out.link(xout1.input)

xout2 = pipeline.createXLinkOut()
xout2.setStreamName('out2')
manip2.out.link(xout2.input)

# Connect to device and start pipeline
with dai.Device(pipeline) as device:
    # Output queues will be used to get the rgb frames from the outputs defined above
    q1 = device.getOutputQueue(name="out1", maxSize=4, blocking=False)
    q2 = device.getOutputQueue(name="out2", maxSize=4, blocking=False)

    while True:
        in1 = q1.tryGet()
        if in1 is not None:
            cv2.imshow("Tile 1", in1.getCvFrame())

        in2 = q2.tryGet()
        if in2 is not None:
            cv2.imshow("Tile 2", in2.getCvFrame())

        if cv2.waitKey(1) == ord('q'):
            break