Merged
5 changes: 3 additions & 2 deletions docs/source/index.rst
@@ -91,15 +91,16 @@ Now, pick a tutorial or code sample and start utilizing Gen2 capabilities
samples/17_video_mobilenet.rst
samples/18_rgb_encoding_mobilenet.rst
samples/21_mobilenet_decoding_on_device.rst
- samples/22_1_tiny_tolo_v3_decoding_on_device.rst
- samples/22_2_tiny_tolo_v4_decoding_on_device.rst
+ samples/22_1_tiny_yolo_v3_decoding_on_device.rst
+ samples/22_2_tiny_yolo_v4_decoding_on_device.rst
samples/23_autoexposure_roi.rst
samples/24_opencv_support.rst
samples/25_system_information.rst
samples/26_1_spatial_mobilenet.rst
samples/26_2_spatial_mobilenet_mono.rst
samples/26_3_spatial_tiny_yolo.rst
samples/27_spatial_location_calculator.rst
+ samples/28_camera_video_example.rst

.. toctree::
:maxdepth: 1
docs/source/samples/22_1_tiny_yolo_v3_decoding_on_device.rst
@@ -1,4 +1,4 @@
- 21 - RGB & TinyYoloV3 decoding on device
+ 22.1 - RGB & TinyYoloV3 decoding on device
==========================================

This example shows how to run TinyYoloV3 on the RGB input frame, and how to display both the RGB
21 changes: 21 additions & 0 deletions docs/source/samples/28_camera_video_example.rst
@@ -0,0 +1,21 @@
28 - Camera video high resolution
=================================

This example shows how to use high-resolution video at low latency. Compared to :ref:`01 - RGB Preview`, this demo outputs NV12 frames, whereas
preview frames are BGR and are not suited for larger resolutions (e.g. 2000x1000). Preview is more suitable for NN input or visualization purposes.

Setup
#####

.. include:: /includes/install_from_pypi.rst

Source code
###########

Also `available on GitHub <https://github.com/luxonis/depthai-python/blob/develop/examples/28_camera_video_example.py>`__

.. literalinclude:: ../../../examples/28_camera_video_example.py
:language: python
:linenos:

.. include:: /includes/footer-short.rst
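The paragraph in the page above contrasts the camera's preview (BGR) and video (NV12) outputs. As a side-by-side illustration, here is a minimal sketch (not part of this PR) that requests both outputs from one ColorCamera node, reusing the same Gen2 calls as the example file below; the 300x300 preview size and queue sizes are arbitrary illustrative choices.

#!/usr/bin/env python3
# Sketch only: show the small BGR preview and the full-resolution NV12 video stream side by side
import cv2
import depthai as dai

pipeline = dai.Pipeline()

colorCam = pipeline.createColorCamera()
colorCam.setPreviewSize(300, 300)  # small BGR frames, e.g. as NN input
colorCam.setResolution(dai.ColorCameraProperties.SensorResolution.THE_1080_P)
colorCam.setVideoSize(1920, 1080)  # large NV12 frames for display or encoding

xoutPreview = pipeline.createXLinkOut()
xoutPreview.setStreamName("preview")
colorCam.preview.link(xoutPreview.input)

xoutVideo = pipeline.createXLinkOut()
xoutVideo.setStreamName("video")
colorCam.video.link(xoutVideo.input)

with dai.Device(pipeline) as device:
    device.startPipeline()
    previewQueue = device.getOutputQueue(name="preview", maxSize=4, blocking=False)
    videoQueue = device.getOutputQueue(name="video", maxSize=1, blocking=False)

    while True:
        # getCvFrame() returns a BGR image for both streams (converting NV12 for the video one)
        cv2.imshow("preview", previewQueue.get().getCvFrame())
        cv2.imshow("video", videoQueue.get().getCvFrame())

        if cv2.waitKey(1) == ord('q'):
            break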
37 changes: 37 additions & 0 deletions examples/28_camera_video_example.py
@@ -0,0 +1,37 @@
#!/usr/bin/env python3

import cv2
import depthai as dai
import numpy as np

# Start defining a pipeline
pipeline = dai.Pipeline()

# Define a source - color camera
colorCam = pipeline.createColorCamera()
colorCam.setBoardSocket(dai.CameraBoardSocket.RGB)
colorCam.setResolution(dai.ColorCameraProperties.SensorResolution.THE_1080_P)
colorCam.setVideoSize(1920, 1080)

# Create output
xoutVideo = pipeline.createXLinkOut()
xoutVideo.setStreamName("video")

colorCam.video.link(xoutVideo.input)

# Pipeline is defined, now we can connect to the device
with dai.Device(pipeline) as device:
    # Start pipeline
    device.startPipeline()
    video = device.getOutputQueue(name="video", maxSize=1, blocking=False)

    while True:
        # Get a video frame from the device
        videoIn = video.get()

        # getCvFrame() converts the NV12-encoded video frame to BGR so OpenCV can display it
        # Visualizing the frame on slower hosts might introduce some overhead
        cv2.imshow("video", videoIn.getCvFrame())

        if cv2.waitKey(1) == ord('q'):
            break
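For reference, getCvFrame() used above already hands OpenCV a BGR image. A rough equivalent of that conversion for NV12 video frames is sketched below; it assumes the standard NV12 memory layout and ImgFrame's getData()/getWidth()/getHeight() accessors, and is not code from this PR.

import cv2
import numpy as np

def nv12_to_bgr(imgFrame):
    # NV12 = a full-resolution Y plane followed by an interleaved, half-resolution UV plane,
    # i.e. height * 3 / 2 rows of `width` bytes in total
    w, h = imgFrame.getWidth(), imgFrame.getHeight()
    yuv = np.array(imgFrame.getData(), dtype=np.uint8).reshape((h * 3 // 2, w))
    return cv2.cvtColor(yuv, cv2.COLOR_YUV2BGR_NV12)

# Usage in the loop above: cv2.imshow("video", nv12_to_bgr(videoIn))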
1 change: 1 addition & 0 deletions examples/CMakeLists.txt
@@ -136,3 +136,4 @@ add_python_example(26_2_spatial_mobilenet_mono 26_2_spatial_mobilenet_mono.py "$
add_python_example(26_3_spatial_tiny_yolo_v3 26_3_spatial_tiny_yolo.py "${tiny_yolo_v3_blob}")
add_python_example(26_3_spatial_tiny_yolo_v4 26_3_spatial_tiny_yolo.py "${tiny_yolo_v4_blob}")
add_python_example(27_spatial_location_calculator 27_spatial_location_calculator.py)
+ add_python_example(28_camera_video_example 28_camera_video_example.py)