Merged
59 commits
87a13df
FW: OV9282 720p/800p max FPS: 60 -> 120
alex-luxonis Oct 25, 2022
dc88bcd
Added device side trace logging
themarpe Nov 8, 2022
9a3671b
Merge branch 'main' into develop
themarpe Nov 13, 2022
2bdf25b
Added EepromError exception class
themarpe Nov 17, 2022
4077fe0
Increased limit on DeviceBootloader monitor timeout
themarpe Nov 17, 2022
454d1e7
Added camera naming capabilities, setting and retrieving. WIP: rest of…
themarpe Nov 29, 2022
3a784d4
FW/CameraControl: fix still-capture for sockets other than RGB/center
alex-luxonis Nov 29, 2022
181c68b
Merge branch 'camera_board_configuration' into develop
themarpe Nov 29, 2022
85c2eb2
Merge branch 'develop' of github.com:luxonis/depthai-python into develop
themarpe Nov 30, 2022
ba9bdfb
[FW] Fixed ImageManip + Subpixel issues and added FFC camera naming
themarpe Dec 1, 2022
fa263a8
Update core
Dec 12, 2022
045f1c9
Merge pull request #717 from luxonis/nn_node_seq_num_fix
moratom Dec 12, 2022
d5b09fa
WIP: Camera node
themarpe Dec 12, 2022
13bd22c
Update FW: Add missing python bindings for boundingBoxMapping
Dec 14, 2022
3d5b514
Updated NETWORK Bootloader with dual protocol capabilities
themarpe Dec 16, 2022
3bd8b58
Added some bindings and modified camera preview example
themarpe Dec 16, 2022
997fd8f
[FW] Fixed a bug in board downselection. OAK-D S2/Pro camera enumerat…
themarpe Dec 17, 2022
f2de1dd
Merge remote-tracking branch 'origin/develop' into camera_node
themarpe Dec 17, 2022
bae849d
Updated camera_preview example
themarpe Dec 17, 2022
3b3766f
[FW] Added support for Mono video/preview in Camera node
themarpe Dec 17, 2022
8f23f96
Added workflow notify for HIL tests
themarpe Dec 18, 2022
e5e841c
Updated dual BL to v0.0.23 temporary build
themarpe Dec 19, 2022
5c80c1d
Merge branch 'develop' into bootloader_dual_protocol
themarpe Dec 19, 2022
f26cb3b
Added OAK-D-LR support. WIP: Orientation capability
themarpe Dec 19, 2022
524201f
Merge branch 'trace_event' into develop
themarpe Dec 19, 2022
a9ece1f
[FW/XLink] Explicitly limited to single connection
themarpe Dec 19, 2022
755e89d
Merge remote-tracking branch 'origin/ov9282_full_res_120fps' into dev…
themarpe Dec 19, 2022
531a384
Add python bindings for frame event
whoactuallycares Dec 19, 2022
d06c706
Merge remote-tracking branch 'origin/frame-event' into develop
themarpe Dec 19, 2022
646d52e
ImageManip added colormap capability. TODO min/max range selection
themarpe Dec 21, 2022
6cfdd56
Add option to override baseline and/or focal length for disparity to …
Dec 22, 2022
fb04df8
[FW] OAK-D-LR - Fixed default image orientation and added depth previ…
themarpe Dec 23, 2022
0804b7c
Modified LR depth example
themarpe Dec 27, 2022
c7e4a29
Fixed image_manip_warp_mesh.py example
themarpe Dec 28, 2022
1c868cf
Updated FW with Camera changes and warp capabilities. Modified camera…
themarpe Dec 29, 2022
33744cb
Updated FW with Camera warp capabilities
themarpe Dec 30, 2022
32533fd
Added span bindings
themarpe Jan 2, 2023
0516f92
Added 'Camera' related bindings
themarpe Jan 2, 2023
71f4bd1
Merge remote-tracking branch 'origin/develop' into HEAD
Jan 3, 2023
b7556a9
Update core
Jan 3, 2023
b9c4c64
Merge pull request #726 from luxonis/stereo_baseline_focal_length_ove…
SzabolcsGergely Jan 3, 2023
9ca3b40
FW - Modified watchdog to do a graceful reset instead
themarpe Jan 5, 2023
0a45ec9
Added additional API to retrieve timestamps at various exposure points
themarpe Jan 8, 2023
3c45f70
Merge branch 'develop' into camera_node
themarpe Jan 9, 2023
5976071
Merge branch 'bootloader_dual_protocol' into develop
themarpe Jan 11, 2023
b05af12
Merge branch 'fw_wd_fix' into develop
themarpe Jan 11, 2023
1d25275
WIP: mockIsp capabilities
themarpe Jan 12, 2023
87ff67c
[FW] Fix for CAM_C not being detected
themarpe Jan 12, 2023
ef31c11
Device - Added non exclusive boot option
themarpe Jan 13, 2023
164e6cc
Slight Colormap API improvements
themarpe Jan 15, 2023
9880147
Added DeviceBase convenience constructors taking name or deviceid as …
themarpe Jan 17, 2023
8b65b53
Merge remote-tracking branch 'origin/develop' into develop
themarpe Jan 17, 2023
f90eed9
Camera - Disabled some of the functionality for now
themarpe Jan 18, 2023
7ff599b
Tweaked getTimestamp & exposure offset API
themarpe Jan 20, 2023
d5f9473
Merge commit 'f90eed9a3af09a0c43c36e0383c095ef4786fe4e' into develop
themarpe Jan 20, 2023
9740db6
FW: IMX296 support, add THE_1440x1080 resolution
alex-luxonis Dec 13, 2022
b04d2df
cam_test.py: add `-tun`/`--camera-tuning` option
alex-luxonis Nov 10, 2022
125b025
FW: IMX296 Camera node, IMX378 1080p limited to 60fps
alex-luxonis Jan 20, 2023
8e5b547
Bump version to 2.20.0.0
themarpe Jan 20, 2023
12 changes: 12 additions & 0 deletions .github/workflows/main.yml
@@ -538,3 +538,15 @@ jobs:
repository: luxonis/robothub-apps
event-type: depthai-python-release
client-payload: '{"ref": "${{ github.ref }}", "sha": "${{ github.sha }}"}'

notify_hil_workflow_linux_x86_64:
needs: [build-linux-x86_64]
runs-on: ubuntu-latest
steps:
- name: Repository Dispatch
uses: peter-evans/repository-dispatch@v2
with:
token: ${{ secrets.HIL_CORE_DISPATCH_TOKEN }}
repository: luxonis/depthai-core-hil-tests
event-type: python-hil-event
client-payload: '{"ref": "${{ github.ref }}", "sha": "${{ github.sha }}"}'
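The new `notify_hil_workflow_linux_x86_64` job only sends a `repository_dispatch` event; the receiving side lives in the `luxonis/depthai-core-hil-tests` repository. A minimal sketch of what a listening workflow there might look like (the job content is assumed; only the `python-hil-event` type comes from this diff):

```yaml
on:
  repository_dispatch:
    types: [python-hil-event]

jobs:
  hil-test:
    runs-on: ubuntu-latest
    steps:
      - name: Show dispatch payload
        run: |
          echo "ref: ${{ github.event.client_payload.ref }}"
          echo "sha: ${{ github.event.client_payload.sha }}"
```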
1 change: 1 addition & 0 deletions .gitignore
@@ -38,6 +38,7 @@ wheelhouse/
.venv
env/
venv/
venv_*/
ENV/
env.bak/
venv.bak/
1 change: 1 addition & 0 deletions CMakeLists.txt
@@ -107,6 +107,7 @@ pybind11_add_module(${TARGET_NAME}
src/pipeline/node/XLinkInBindings.cpp
src/pipeline/node/XLinkOutBindings.cpp
src/pipeline/node/ColorCameraBindings.cpp
src/pipeline/node/CameraBindings.cpp
src/pipeline/node/MonoCameraBindings.cpp
src/pipeline/node/StereoDepthBindings.cpp
src/pipeline/node/NeuralNetworkBindings.cpp
2 changes: 1 addition & 1 deletion depthai-core
Submodule depthai-core updated 38 files
+2 −2 .github/workflows/main.workflow.yml
+5 −1 CMakeLists.txt
+1 −0 README.md
+2 −2 cmake/Depthai/DepthaiBootloaderConfig.cmake
+1 −1 cmake/Depthai/DepthaiDeviceSideConfig.cmake
+2 −2 cmake/Hunter/config.cmake
+7 −7 cmake/depthaiDependencies.cmake
+1 −5 examples/ColorCamera/rgb_preview.cpp
+23 −1 examples/bootloader/flash_bootloader.cpp
+23 −1 examples/bootloader/flash_user_bootloader.cpp
+13 −0 include/depthai/common/CameraExposureOffset.hpp
+43 −0 include/depthai/common/CameraFeatures.hpp
+27 −0 include/depthai/common/CameraImageOrientation.hpp
+27 −0 include/depthai/common/CameraSensorType.hpp
+20 −1 include/depthai/device/DeviceBase.hpp
+11 −0 include/depthai/device/EepromError.hpp
+11 −3 include/depthai/openvino/OpenVINO.hpp
+13 −0 include/depthai/pipeline/datatype/ImageManipConfig.hpp
+16 −3 include/depthai/pipeline/datatype/ImgFrame.hpp
+331 −0 include/depthai/pipeline/node/Camera.hpp
+18 −0 include/depthai/pipeline/node/ColorCamera.hpp
+18 −0 include/depthai/pipeline/node/MonoCamera.hpp
+14 −0 include/depthai/pipeline/node/StereoDepth.hpp
+514 −0 include/depthai/utility/span.hpp
+1 −1 shared/depthai-shared
+52 −18 src/device/DeviceBase.cpp
+3 −1 src/device/DeviceBootloader.cpp
+29 −14 src/openvino/OpenVINO.cpp
+4 −0 src/pipeline/Pipeline.cpp
+45 −0 src/pipeline/datatype/ImageManipConfig.cpp
+27 −0 src/pipeline/datatype/ImgFrame.cpp
+275 −0 src/pipeline/node/Camera.cpp
+28 −0 src/pipeline/node/ColorCamera.cpp
+16 −0 src/pipeline/node/MonoCamera.cpp
+8 −0 src/pipeline/node/StereoDepth.cpp
+18 −2 src/utility/Initialization.cpp
+5 −6 src/utility/Resources.cpp
+64 −2 tests/src/openvino_blob_test.cpp
62 changes: 62 additions & 0 deletions examples/Camera/camera_isp.py
@@ -0,0 +1,62 @@
#!/usr/bin/env python3

import cv2
import depthai as dai
import time

# Connect to device and start pipeline
with dai.Device() as device:
# Device name
print('Device name:', device.getDeviceName())
# Bootloader version
if device.getBootloaderVersion() is not None:
print('Bootloader version:', device.getBootloaderVersion())
# Print out usb speed
print('Usb speed:', device.getUsbSpeed().name)
# Connected cameras
print('Connected cameras:', device.getConnectedCameraFeatures())

# Create pipeline
pipeline = dai.Pipeline()
cams = device.getConnectedCameraFeatures()
streams = []
for cam in cams:
print(str(cam), str(cam.socket), cam.socket)
c = pipeline.create(dai.node.Camera)
x = pipeline.create(dai.node.XLinkOut)
c.isp.link(x.input)
c.setBoardSocket(cam.socket)
stream = str(cam.socket)
if cam.name:
stream = f'{cam.name} ({stream})'
x.setStreamName(stream)
streams.append(stream)

# Start pipeline
device.startPipeline(pipeline)
fpsCounter = {}
lastFpsCount = {}
tfps = time.time()
while not device.isClosed():
queueNames = device.getQueueEvents(streams)
for stream in queueNames:
messages = device.getOutputQueue(stream).tryGetAll()
fpsCounter[stream] = fpsCounter.get(stream, 0.0) + len(messages)
for message in messages:
# Display arrived frames
if type(message) == dai.ImgFrame:
# render fps
fps = lastFpsCount.get(stream, 0)
frame = message.getCvFrame()
cv2.putText(frame, "Fps: {:.2f}".format(fps), (10, 10), cv2.FONT_HERSHEY_TRIPLEX, 0.4, (255,255,255))
cv2.imshow(stream, frame)

if time.time() - tfps >= 1.0:
scale = time.time() - tfps
for stream in fpsCounter.keys():
lastFpsCount[stream] = fpsCounter[stream] / scale
fpsCounter = {}
tfps = time.time()

if cv2.waitKey(1) == ord('q'):
break
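The display loop above estimates FPS with a simple bucket counter: message counts accumulate per stream, and roughly once per second each count is divided by the actual elapsed time. The same pattern, isolated from the device code (the class and stream names here are illustrative, not part of the depthai API):

```python
import time

class FpsCounter:
    """Per-stream FPS estimate, mirroring the counter in camera_isp.py."""
    def __init__(self):
        self.counts = {}        # messages seen since the last flush
        self.last = {}          # last computed FPS per stream
        self.t0 = time.time()   # start of the current measurement window

    def add(self, stream, n=1):
        self.counts[stream] = self.counts.get(stream, 0.0) + n

    def maybe_flush(self, interval=1.0):
        # Divide by the real elapsed time, not the nominal interval,
        # so a late flush still yields an accurate rate.
        elapsed = time.time() - self.t0
        if elapsed >= interval:
            for stream, count in self.counts.items():
                self.last[stream] = count / elapsed
            self.counts = {}
            self.t0 = time.time()

    def fps(self, stream):
        return self.last.get(stream, 0)
```

Each frame event calls `add(stream)`, and `maybe_flush()` runs once per loop iteration; the overlay then reads `fps(stream)`.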
62 changes: 62 additions & 0 deletions examples/Camera/camera_preview.py
@@ -0,0 +1,62 @@
#!/usr/bin/env python3

import cv2
import depthai as dai
import time

# Connect to device and start pipeline
with dai.Device(dai.OpenVINO.DEFAULT_VERSION, dai.UsbSpeed.SUPER_PLUS) as device:
# Device name
print('Device name:', device.getDeviceName())
# Bootloader version
if device.getBootloaderVersion() is not None:
print('Bootloader version:', device.getBootloaderVersion())
# Print out usb speed
print('Usb speed:', device.getUsbSpeed().name)
# Connected cameras
print('Connected cameras:', device.getConnectedCameraFeatures())

# Create pipeline
pipeline = dai.Pipeline()
cams = device.getConnectedCameraFeatures()
streams = []
for cam in cams:
print(str(cam), str(cam.socket), cam.socket)
c = pipeline.create(dai.node.Camera)
x = pipeline.create(dai.node.XLinkOut)
c.preview.link(x.input)
c.setBoardSocket(cam.socket)
stream = str(cam.socket)
if cam.name:
stream = f'{cam.name} ({stream})'
x.setStreamName(stream)
streams.append(stream)

# Start pipeline
device.startPipeline(pipeline)
fpsCounter = {}
lastFpsCount = {}
tfps = time.time()
while not device.isClosed():
queueNames = device.getQueueEvents(streams)
for stream in queueNames:
messages = device.getOutputQueue(stream).tryGetAll()
fpsCounter[stream] = fpsCounter.get(stream, 0.0) + len(messages)
for message in messages:
# Display arrived frames
if type(message) == dai.ImgFrame:
# render fps
fps = lastFpsCount.get(stream, 0)
frame = message.getCvFrame()
cv2.putText(frame, "Fps: {:.2f}".format(fps), (10, 10), cv2.FONT_HERSHEY_TRIPLEX, 0.4, (255,255,255))
cv2.imshow(stream, frame)

if time.time() - tfps >= 1.0:
scale = time.time() - tfps
for stream in fpsCounter.keys():
lastFpsCount[stream] = fpsCounter[stream] / scale
fpsCounter = {}
tfps = time.time()

if cv2.waitKey(1) == ord('q'):
break
2 changes: 1 addition & 1 deletion examples/ColorCamera/rgb_preview.py
@@ -23,7 +23,7 @@
# Connect to device and start pipeline
with dai.Device(pipeline) as device:

print('Connected cameras:', device.getConnectedCameras())
print('Connected cameras:', device.getConnectedCameraFeatures())
# Print out usb speed
print('Usb speed:', device.getUsbSpeed().name)
# Bootloader version
2 changes: 1 addition & 1 deletion examples/ImageManip/image_manip_warp_mesh.py
@@ -12,7 +12,7 @@
maxFrameSize = camRgb.getPreviewWidth() * camRgb.getPreviewHeight() * 3

# Warp preview frame 1
manip1 = pipeline.create(dai.node.Warp)
manip1 = pipeline.create(dai.node.ImageManip)
# Create a custom warp mesh
tl = dai.Point2f(20, 20)
tr = dai.Point2f(460, 20)
62 changes: 62 additions & 0 deletions examples/StereoDepth/depth_colormap.py
@@ -0,0 +1,62 @@
#!/usr/bin/env python3

import cv2
import depthai as dai
import numpy as np

# Closer-in minimum depth, disparity range is doubled (from 95 to 190):
extended_disparity = False
# Better accuracy for longer distance, fractional disparity 32-levels:
subpixel = False
# Better handling for occlusions:
lr_check = True

# Create pipeline
pipeline = dai.Pipeline()

# Define sources and outputs
monoLeft = pipeline.create(dai.node.MonoCamera)
monoRight = pipeline.create(dai.node.MonoCamera)
depth = pipeline.create(dai.node.StereoDepth)
xout = pipeline.create(dai.node.XLinkOut)

xout.setStreamName("disparity")

# Properties
monoLeft.setResolution(dai.MonoCameraProperties.SensorResolution.THE_400_P)
monoLeft.setBoardSocket(dai.CameraBoardSocket.LEFT)
monoRight.setResolution(dai.MonoCameraProperties.SensorResolution.THE_400_P)
monoRight.setBoardSocket(dai.CameraBoardSocket.RIGHT)

# Create a node that will produce the depth map (using disparity output as it's easier to visualize depth this way)
depth.setDefaultProfilePreset(dai.node.StereoDepth.PresetMode.HIGH_DENSITY)
# Options: MEDIAN_OFF, KERNEL_3x3, KERNEL_5x5, KERNEL_7x7 (default)
depth.initialConfig.setMedianFilter(dai.MedianFilter.KERNEL_7x7)
depth.setLeftRightCheck(lr_check)
depth.setExtendedDisparity(extended_disparity)
depth.setSubpixel(subpixel)

# Create a colormap
colormap = pipeline.create(dai.node.ImageManip)
colormap.initialConfig.setColormap(dai.Colormap.STEREO_TURBO, depth.initialConfig.getMaxDisparity())
colormap.initialConfig.setFrameType(dai.ImgFrame.Type.NV12)

# Linking
monoLeft.out.link(depth.left)
monoRight.out.link(depth.right)
depth.disparity.link(colormap.inputImage)
colormap.out.link(xout.input)

# Connect to device and start pipeline
with dai.Device(pipeline) as device:

# Output queue will be used to get the disparity frames from the outputs defined above
q = device.getOutputQueue(name="disparity", maxSize=4, blocking=False)

while True:
inDisparity = q.get() # blocking call, will wait until a new data has arrived
frame = inDisparity.getCvFrame()
cv2.imshow("disparity", frame)

if cv2.waitKey(1) == ord('q'):
break
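`depth.initialConfig.getMaxDisparity()` above supplies the upper bound passed to `setColormap`. As a rough sketch of how that bound scales with the StereoDepth options (assumed arithmetic based on this example's comments — a 95-pixel base range, doubled by extended disparity, multiplied by 32 fractional levels in subpixel mode — not the library's actual implementation):

```python
def max_disparity(extended: bool = False, subpixel: bool = False,
                  base: int = 95, subpixel_levels: int = 32) -> int:
    """Sketch of how the maximum disparity value scales with modes."""
    d = base
    if extended:
        d *= 2                 # search range doubled: 95 -> 190
    if subpixel:
        d *= subpixel_levels   # fractional disparity, 32 levels
    return d
```

This is why the colormap node needs the bound explicitly: enabling either mode changes the raw value range the disparity frames carry.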
72 changes: 72 additions & 0 deletions examples/StereoDepth/depth_preview_lr.py
@@ -0,0 +1,72 @@
#!/usr/bin/env python3

import cv2
import depthai as dai
import numpy as np

# Closer-in minimum depth, disparity range is doubled (from 95 to 190):
extended_disparity = True
# Better accuracy for longer distance, fractional disparity 32-levels:
subpixel = True
# Better handling for occlusions:
lr_check = True

# Create pipeline
pipeline = dai.Pipeline()

# Define sources and outputs
left = pipeline.create(dai.node.ColorCamera)
right = pipeline.create(dai.node.ColorCamera)
depth = pipeline.create(dai.node.StereoDepth)
xout = pipeline.create(dai.node.XLinkOut)
xoutl = pipeline.create(dai.node.XLinkOut)
xoutr = pipeline.create(dai.node.XLinkOut)

xout.setStreamName("disparity")
xoutl.setStreamName("rectifiedLeft")
xoutr.setStreamName("rectifiedRight")

# Properties
left.setResolution(dai.ColorCameraProperties.SensorResolution.THE_1200_P)
left.setBoardSocket(dai.CameraBoardSocket.LEFT)
right.setResolution(dai.ColorCameraProperties.SensorResolution.THE_1200_P)
right.setBoardSocket(dai.CameraBoardSocket.RIGHT)
right.setIspScale(2, 3)
left.setIspScale(2, 3)


# Create a node that will produce the depth map (using disparity output as it's easier to visualize depth this way)
depth.setDefaultProfilePreset(dai.node.StereoDepth.PresetMode.HIGH_DENSITY)
# Options: MEDIAN_OFF, KERNEL_3x3, KERNEL_5x5, KERNEL_7x7 (default)
depth.initialConfig.setMedianFilter(dai.MedianFilter.KERNEL_7x7)
depth.setInputResolution(1280, 800)
depth.setLeftRightCheck(lr_check)
depth.setExtendedDisparity(extended_disparity)
depth.setSubpixel(subpixel)

# Linking
left.isp.link(depth.left)
right.isp.link(depth.right)
depth.disparity.link(xout.input)
depth.rectifiedLeft.link(xoutl.input)
depth.rectifiedRight.link(xoutr.input)

# Connect to device and start pipeline
with dai.Device(pipeline) as device:
while not device.isClosed():
queueNames = device.getQueueEvents()
for q in queueNames:
message = device.getOutputQueue(q).get()
# Display arrived frames
if type(message) == dai.ImgFrame:
frame = message.getCvFrame()
if 'disparity' in q:
maxDisp = depth.initialConfig.getMaxDisparity()
disp = (frame * (255.0 / maxDisp)).astype(np.uint8)
disp = cv2.applyColorMap(disp, cv2.COLORMAP_JET)
cv2.imshow(q, disp)
else:
cv2.imshow(q, frame)
if cv2.waitKey(1) == ord('q'):
break
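The disparity branch above compresses raw values into 0–255 before `cv2.applyColorMap`. That scaling step on its own, with a synthetic frame standing in for device output:

```python
import numpy as np

def normalize_disparity(frame: np.ndarray, max_disp: float) -> np.ndarray:
    """Scale raw disparity into the uint8 range expected by
    cv2.applyColorMap (same arithmetic as the display loop above)."""
    return (frame * (255.0 / max_disp)).astype(np.uint8)

# Synthetic 2x2 "disparity frame"; 190 = max disparity in extended mode
frame = np.array([[0.0, 95.0], [190.0, 47.5]], dtype=np.float32)
out = normalize_disparity(frame, 190.0)
```

Note that `astype(np.uint8)` truncates rather than rounds, which is fine here since the result only feeds a colormap.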
62 changes: 62 additions & 0 deletions examples/VideoEncoder/disparity_colormap_encoding.py
@@ -0,0 +1,62 @@
#!/usr/bin/env python3

import depthai as dai

# Create pipeline
pipeline = dai.Pipeline()

# Create left/right mono cameras for Stereo depth
monoLeft = pipeline.create(dai.node.MonoCamera)
monoLeft.setResolution(dai.MonoCameraProperties.SensorResolution.THE_400_P)
monoLeft.setBoardSocket(dai.CameraBoardSocket.LEFT)

monoRight = pipeline.create(dai.node.MonoCamera)
monoRight.setResolution(dai.MonoCameraProperties.SensorResolution.THE_400_P)
monoRight.setBoardSocket(dai.CameraBoardSocket.RIGHT)

# Create a node that will produce the depth map
depth = pipeline.create(dai.node.StereoDepth)
depth.setDefaultProfilePreset(dai.node.StereoDepth.PresetMode.HIGH_DENSITY)
depth.initialConfig.setMedianFilter(dai.MedianFilter.KERNEL_7x7)
depth.setLeftRightCheck(False)
depth.setExtendedDisparity(False)
# Subpixel disparity is of UINT16 format, which is unsupported by VideoEncoder
depth.setSubpixel(False)
monoLeft.out.link(depth.left)
monoRight.out.link(depth.right)

# Colormap
colormap = pipeline.create(dai.node.ImageManip)
colormap.initialConfig.setColormap(dai.Colormap.TURBO, depth.initialConfig.getMaxDisparity())
colormap.initialConfig.setFrameType(dai.ImgFrame.Type.NV12)

videoEnc = pipeline.create(dai.node.VideoEncoder)
# Depth resolution/FPS will be the same as mono resolution/FPS
videoEnc.setDefaultProfilePreset(monoLeft.getFps(), dai.VideoEncoderProperties.Profile.H264_HIGH)

# Link
depth.disparity.link(colormap.inputImage)
colormap.out.link(videoEnc.input)

xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("enc")
videoEnc.bitstream.link(xout.input)

# Connect to device and start pipeline
with dai.Device(pipeline) as device:

# Output queue will be used to get the encoded data from the output defined above
q = device.getOutputQueue(name="enc")

# The .h264 file is a raw stream file (not playable yet)
with open('disparity.h264', 'wb') as videoFile:
print("Press Ctrl+C to stop encoding...")
try:
while True:
videoFile.write(q.get().getData())
except KeyboardInterrupt:
# Keyboard interrupt (Ctrl + C) detected
pass

print("To view the encoded data, convert the stream file (.h264) into a video file (.mp4) using the command below:")
print("ffmpeg -framerate 30 -i disparity.h264 -c copy video.mp4")
6 changes: 6 additions & 0 deletions examples/device/device_all_boot_bootloader.py
@@ -0,0 +1,6 @@
import depthai as dai

devices = dai.Device.getAllConnectedDevices()

for device in devices:
dai.XLinkConnection.bootBootloader(device)
10 changes: 10 additions & 0 deletions examples/device/device_boot_non_exclusive.py
@@ -0,0 +1,10 @@
import depthai as dai
import time

cfg = dai.Device.Config()
cfg.nonExclusiveMode = True

with dai.Device(cfg) as device:
while not device.isClosed():
print('CPU usage:',device.getLeonCssCpuUsage().average)
time.sleep(1)